diff --git a/.gitignore b/.gitignore index 1e99e3f1ed..64993c70cb 100644 --- a/.gitignore +++ b/.gitignore @@ -27,3 +27,5 @@ downstream/titles/*/*/master.html release-notes/master.html *.docx *.swp +*.ini +.vale.ini diff --git a/README.adoc b/README.adoc index cd6b5be91e..61dbe8435f 100644 --- a/README.adoc +++ b/README.adoc @@ -108,12 +108,12 @@ The first time you contribute: . Clone the forked repository locally. ----- -$ git clone git@github.com:/red-hat-ansible-automation-platform-documentation.git +$ git clone git@github.com:/aap-docs.git ----- If this command fails, be sure that you have link:https://docs.github.com/en/github/authenticating-to-github/adding-a-new-ssh-key-to-your-github-account[set up an SSH key for GitHub]. -. Navigate to the `red-hat-ansible-automation-platform-documentation` directory. +. Navigate to the `aap-docs` directory. . Set the upstream remote repository. @@ -161,13 +161,13 @@ $ git push origin HEAD Typically the previous command gives the URL to open a pull request. If not, you can open one from the link:https://github.com/ansible/aap-docs/pulls[Pull requests] tab of the GitHub UI. -After you submit a pull request, it will be reviewed by members of this project. +After you submit a pull request, it must be reviewed by members of this project. ### Building the guide -You must have `asciidoctor` installed. See the link:https://asciibinder.net/[Asciibinder documentation] for more information on installing Asciibinder. +You must have `asciidoctor` installed. See the link:https://asciidoctor.org/[Asciidoctor documentation] for more information on installing Asciidoctor. -. Navigate to the `red-hat-ansible-automation-platform-documentation` directory. +. Navigate to the directory that contains the document you want to build. . Use the following command to build the guide: ----- diff --git a/downstream/titles/controller/controller-getting-started/aap-common b/downstream/aap-common/aap-common similarity index 100% rename from downstream/titles/controller/controller-getting-started/aap-common rename to downstream/aap-common/aap-common diff --git a/downstream/aap-common/apache-2.0-license.adoc b/downstream/aap-common/apache-2.0-license.adoc new file mode 100644 index 0000000000..e1f0ff2b81 --- /dev/null +++ b/downstream/aap-common/apache-2.0-license.adoc @@ -0,0 +1,59 @@ +[preface] +[id="apache-2.0-license"] + += Open Source license + +.Apache license + +Version 2.0, January 2004 + +http://www.apache.org/licenses/ + +TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + +*1. Definitions.* + +*"License"* shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. + +*"Licensor"* shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. + +*"Legal Entity"* shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, *"control"* means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. + +*"You"* (or *"Your"*) shall mean an individual or Legal Entity exercising permissions granted by this License.
+ +*"Source"* form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. + +*"Object"* form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. + +*"Work"* shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). + +*"Derivative Works"* shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. + +*"Contribution"* shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, *"submitted"* means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as *"Not a Contribution."* + +*"Contributor"* shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. + +*2. Grant of Copyright License.* Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. + +*3. Grant of Patent License.* Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. + +*4. 
Redistribution.* You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: + +.. You must give any other recipients of the Work or Derivative Works a copy of this License; and +.. You must cause any modified files to carry prominent notices stating that You changed the files; and +.. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and +.. If the Work includes a *"NOTICE"* text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. + +You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. + +*5. Submission of Contributions.* Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. + +*6. Trademarks.* This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. + +*7. Disclaimer of Warranty.* Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. + +*8. 
Limitation of Liability.* In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. + +*9. Accepting Warranty or Additional Liability.* While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. + +END OF TERMS AND CONDITIONS \ No newline at end of file diff --git a/downstream/aap-common/external-site-disclaimer.adoc b/downstream/aap-common/external-site-disclaimer.adoc index 020020f7eb..b33617837b 100644 --- a/downstream/aap-common/external-site-disclaimer.adoc +++ b/downstream/aap-common/external-site-disclaimer.adoc @@ -12,9 +12,9 @@ // The following example adds a symlink to snippets from a hub title // $ cd /titles/hub/getting-started // $ ln -s ../../../snippets ./snippets -// +// // Including the file in a document // Add the following in the file where you want the text to be included: // include::snippets/external-site-disclaimer.adoc[] - -*Disclaimer*: Links contained in this note to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content. +//[ddacosta] generalized this to be usable in broader applications. +*Disclaimer*: Links contained in this information to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or its entities, products, or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result from your use of (or reliance on) the external site or content.
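Taken together, the comments in this snippet describe a two-step reuse pattern: link the shared `snippets` directory into a title, then include the file wherever the disclaimer should appear. A consolidated sketch follows; the title path and the assembly file name are hypothetical examples, and the `../` depth of the symlink depends on where the title lives in the repository:

----
# Link the shared snippets directory into a title
# (hypothetical title path; adjust the ../ depth as needed):
$ cd downstream/titles/hub/getting-started
$ ln -s ../../../snippets ./snippets

# Then, in the assembly where the disclaimer should appear
# (file name is a hypothetical example):
include::snippets/external-site-disclaimer.adoc[]
----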
diff --git a/downstream/aap-common/gplv3-license-text.adoc b/downstream/aap-common/gplv3-license-text.adoc new file mode 100644 index 0000000000..f892122a5b --- /dev/null +++ b/downstream/aap-common/gplv3-license-text.adoc @@ -0,0 +1,232 @@ +[id="gplv3-license-text"] + += Open Source license + +.GNU GENERAL PUBLIC LICENSE + +Version 3, 29 June 2007 + +Copyright © 2007 Free Software Foundation, Inc. <https://fsf.org/> + +Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. + +[discrete] +==== Preamble +The GNU General Public License is a free, copyleft license for software and other kinds of works. + +The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. + +When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. + +To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. + +For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. + +Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. + +For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. + +Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. + +Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. + +The precise terms and conditions for copying, distribution and modification follow. + +[discrete] +==== TERMS AND CONDITIONS + +.0. Definitions. +“This License” refers to version 3 of the GNU General Public License. + +“Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. + +“The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”. “Licensees” and “recipients” may be individuals or organizations. + +To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a work “based on” the earlier work. + +A “covered work” means either the unmodified Program or a work based on the Program. + +To “propagate” a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. + +To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. + +An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. + +.1. Source Code. +The “source code” for a work means the preferred form of the work for making modifications to it. “Object code” means any non-source form of a work. + +A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. + +The “System Libraries” of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A “Major Component”, in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. 
+ +The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. + +The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. + +The Corresponding Source for a work in source code form is that same work. + +.2. Basic Permissions. +All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. + +You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. + +Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. + +.3. Protecting Users' Legal Rights From Anti-Circumvention Law. +No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. + +When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. + +.4. Conveying Verbatim Copies. 
+You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. + +You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. + +.5. Conveying Modified Source Versions. +You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: + +* a) The work must carry prominent notices stating that you modified it, and giving a relevant date. +* b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”. +* c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. +* d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. +A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. + +.6. Conveying Non-Source Forms. +You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: + +* a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. 
+* b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. +* c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. +* d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. +* e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. + +A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. + +A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. + +“Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. 
+ +If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). + +The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. + +Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. + +.7. Additional Terms. +“Additional permissions” are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. + +When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
+ +Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: + +* a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or +* b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or +* c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or +* d) Limiting the use for publicity purposes of names of licensors or authors of the material; or +* e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or +* f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. + +All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. + +If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. + +Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. + +.8. Termination. +You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). + +However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. + +Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. + +Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. 
If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. + +.9. Acceptance Not Required for Having Copies. +You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. + +.10. Automatic Licensing of Downstream Recipients. +Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. + +An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. + +You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. + +.11. Patents. +A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's “contributor version”. + +A contributor's “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, “control” includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. + +Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. + +In the following three paragraphs, a “patent license” is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. 
+ +If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. + +If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. + +A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. + +Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. + +.12. No Surrender of Others' Freedom. +If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. + +.13. Use with the GNU Affero General Public License. +Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. 
The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. + +.14. Revised Versions of this License. +The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. + +Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. + +If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. + +Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. + +.15. Disclaimer of Warranty. +THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. + +.16. Limitation of Liability. +IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. + +.17. Interpretation of Sections 15 and 16. +If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. + +END OF TERMS AND CONDITIONS +[discrete] +==== How to Apply These Terms to Your New Programs +If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. + +To do so, attach the following notices to the program. 
It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the “copyright” line and a pointer to where the full notice is found. +---- + + <one line to give the program's name and a brief idea of what it does.> + Copyright (C) <year>  <name of author> + + This program is free software: you can redistribute it and/or modify + it under the terms of the GNU General Public License as published by + the Free Software Foundation, either version 3 of the License, or + (at your option) any later version. + + This program is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + GNU General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program. If not, see <https://www.gnu.org/licenses/>. +---- +Also add information on how to contact you by electronic and paper mail. + +If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: +---- + <program>  Copyright (C) <year>  <name of author> + This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. + This is free software, and you are welcome to redistribute it + under certain conditions; type `show c' for details. +---- +The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an “about box”. + +You should also get your employer (if you work as a programmer) or school, if any, to sign a “copyright disclaimer” for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <https://www.gnu.org/licenses/>. + +The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <https://www.gnu.org/licenses/why-not-lgpl.html>. + diff --git a/downstream/aap-common/open-source-apache.adoc b/downstream/aap-common/open-source-apache.adoc new file mode 100644 index 0000000000..2bb2cfdfd3 --- /dev/null +++ b/downstream/aap-common/open-source-apache.adoc @@ -0,0 +1,2 @@ + +include::{Apache}[leveloffset=+1] \ No newline at end of file diff --git a/downstream/aap-common/open-source-gnu3.adoc b/downstream/aap-common/open-source-gnu3.adoc new file mode 100644 index 0000000000..45bf2dfdde --- /dev/null +++ b/downstream/aap-common/open-source-gnu3.adoc @@ -0,0 +1,2 @@ + +include::{GNU3}[leveloffset=+1] \ No newline at end of file diff --git a/downstream/aap-common/providing-feedback.adoc b/downstream/aap-common/providing-feedback.adoc index ef5a39c0e3..9fb541ab69 100644 --- a/downstream/aap-common/providing-feedback.adoc +++ b/downstream/aap-common/providing-feedback.adoc @@ -3,5 +3,5 @@ [id="providing-feedback"] = Providing feedback on Red Hat documentation -If you have a suggestion to improve this documentation, or find an error, please contact technical support at link:https://access.redhat.com[https://access.redhat.com] to create an issue on the {PlatformNameShort} Jira project using the *docs-product* component. +If you have a suggestion to improve this documentation, or find an error, you can contact technical support at link:https://access.redhat.com[https://access.redhat.com] to open a request.
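The `open-source-apache.adoc` and `open-source-gnu3.adoc` stubs added above pull in the license text through the `{Apache}` and `{GNU3}` attributes rather than hard-coded paths, so each title can point the stub at the shared `aap-common` copies. A minimal sketch of how a title might wire this up follows; the attribute values and the `master.adoc` location are assumptions for illustration, not taken from this diff:

----
// In a title's master.adoc (hypothetical values; each title sets its own paths):
:Apache: ../../../aap-common/apache-2.0-license.adoc
:GNU3: ../../../aap-common/gplv3-license-text.adoc

// The stub then resolves the attribute at build time:
include::{Apache}[leveloffset=+1]
----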
diff --git a/downstream/assemblies/aap-hardening/assembly-aap-compliance.adoc b/downstream/archive/archived-assemblies/aap-hardening/assembly-aap-compliance.adoc similarity index 100% rename from downstream/assemblies/aap-hardening/assembly-aap-compliance.adoc rename to downstream/archive/archived-assemblies/aap-hardening/assembly-aap-compliance.adoc diff --git a/downstream/assemblies/aap-hardening/assembly-aap-security-enabling.adoc b/downstream/archive/archived-assemblies/aap-hardening/assembly-aap-security-enabling.adoc similarity index 100% rename from downstream/assemblies/aap-hardening/assembly-aap-security-enabling.adoc rename to downstream/archive/archived-assemblies/aap-hardening/assembly-aap-security-enabling.adoc diff --git a/downstream/archive/archived-assemblies/aap-hardening/assembly-aap-security-use-cases.adoc b/downstream/archive/archived-assemblies/aap-hardening/assembly-aap-security-use-cases.adoc new file mode 100644 index 0000000000..d1079af972 --- /dev/null +++ b/downstream/archive/archived-assemblies/aap-hardening/assembly-aap-security-use-cases.adoc @@ -0,0 +1,37 @@ +ifdef::context[:parent-context: {context}] + +[id="aap-security-use-cases"] += {PlatformNameShort} security automation use cases + +:context: aap-security-enabling + +[role="_abstract"] + +{PlatformNameShort} provides organizations the opportunity to automate many of the manual tasks required to maintain a strong IT security posture. +Areas where security operations might be automated include security event response and remediation, routine security operations, compliance with security policies and regulations, and security hardening of IT infrastructure. + +include::aap-hardening/con-security-operations-center.adoc[leveloffset=+1] +include::aap-hardening/con-patch-automation-with-aap.adoc[leveloffset=+1] +include::aap-hardening/con-benefits-of-patch-automation.adoc[leveloffset=+2] +include::aap-hardening/con-patching-examples.adoc[leveloffset=+2] +include::aap-hardening/ref-keep-up-to-date.adoc[leveloffset=+3] +include::aap-hardening/ref-install-security-updates.adoc[leveloffset=+3] +include::aap-hardening/ref-specify-package-versions.adoc[leveloffset=+3] +include::aap-hardening/ref-complex-patching-scenarios.adoc[leveloffset=+2] + + + + + + + + +//// +Consider adding a link to future Builder docs here +[role="_additional-resources"] +.Additional resources +* A bulleted list of links to other material closely related to the contents of the concept module. +* Currently, modules cannot include xrefs, so you cannot include links to other content in your collection. If you need to link to another assembly, add the xref to the assembly that includes this module. +* For more details on writing concept modules, see the link:https://github.com/redhat-documentation/modular-docs#modular-documentation-reference-guide[Modular Documentation Reference Guide]. +* Use a consistent system for file names, IDs, and titles. For tips, see _Anchor Names and File Names_ in link:https://github.com/redhat-documentation/modular-docs#modular-documentation-reference-guide[Modular Documentation Reference Guide]. 
+//// \ No newline at end of file diff --git a/downstream/modules/aap-hardening/ref-automation-controller-operational-secrets.adoc b/downstream/archive/archived-assemblies/aap-hardening/ref-automation-controller-operational-secrets.adoc similarity index 100% rename from downstream/modules/aap-hardening/ref-automation-controller-operational-secrets.adoc rename to downstream/archive/archived-assemblies/aap-hardening/ref-automation-controller-operational-secrets.adoc diff --git a/downstream/assemblies/builder/assembly-definition-file-breakdown.adoc b/downstream/archive/archived-assemblies/builder/assembly-definition-file-breakdown.adoc similarity index 100% rename from downstream/assemblies/builder/assembly-definition-file-breakdown.adoc rename to downstream/archive/archived-assemblies/builder/assembly-definition-file-breakdown.adoc diff --git a/downstream/assemblies/central-auth/assembly-assign-hub-admin-permissions.adoc b/downstream/archive/archived-assemblies/central-auth/assembly-assign-hub-admin-permissions.adoc similarity index 100% rename from downstream/assemblies/central-auth/assembly-assign-hub-admin-permissions.adoc rename to downstream/archive/archived-assemblies/central-auth/assembly-assign-hub-admin-permissions.adoc diff --git a/downstream/assemblies/central-auth/assembly-central-auth-add-user-storage.adoc b/downstream/archive/archived-assemblies/central-auth/assembly-central-auth-add-user-storage.adoc similarity index 100% rename from downstream/assemblies/central-auth/assembly-central-auth-add-user-storage.adoc rename to downstream/archive/archived-assemblies/central-auth/assembly-central-auth-add-user-storage.adoc diff --git a/downstream/archive/archived-assemblies/central-auth/assembly-central-auth-automate-keycloak.adoc b/downstream/archive/archived-assemblies/central-auth/assembly-central-auth-automate-keycloak.adoc new file mode 100644 index 0000000000..29bad9a268 --- /dev/null +++ b/downstream/archive/archived-assemblies/central-auth/assembly-central-auth-automate-keycloak.adoc @@ -0,0 +1,8 @@ +[id="assembly-keycloak"] + +== Using Keycloak to automate single sign-on with {HubName} + +Keycloak is an open source tool developed by Red Hat that enables administrators to implement single sign-on (SSO) in software-as-a-service (SaaS) applications. It runs on top of Ansible, using the collection and dependencies from {Galaxy}. After the base requirements are in place, you can install Keycloak and use Ansible playbooks to automate single sign-on.
+ +include::platform/proc-central-auth-install-dependencies.adoc[leveloffset=+1] +include::platform/proc-central-auth-install-keycloak.adoc[leveloffset=+1] diff --git a/downstream/assemblies/central-auth/assembly-central-auth-group-perms.adoc b/downstream/archive/archived-assemblies/central-auth/assembly-central-auth-group-perms.adoc similarity index 100% rename from downstream/assemblies/central-auth/assembly-central-auth-group-perms.adoc rename to downstream/archive/archived-assemblies/central-auth/assembly-central-auth-group-perms.adoc diff --git a/downstream/assemblies/central-auth/assembly-central-auth-hub.adoc b/downstream/archive/archived-assemblies/central-auth/assembly-central-auth-hub.adoc similarity index 100% rename from downstream/assemblies/central-auth/assembly-central-auth-hub.adoc rename to downstream/archive/archived-assemblies/central-auth/assembly-central-auth-hub.adoc diff --git a/downstream/assemblies/central-auth/assembly-central-auth-identity-broker.adoc b/downstream/archive/archived-assemblies/central-auth/assembly-central-auth-identity-broker.adoc similarity index 100% rename from downstream/assemblies/central-auth/assembly-central-auth-identity-broker.adoc rename to downstream/archive/archived-assemblies/central-auth/assembly-central-auth-identity-broker.adoc diff --git a/downstream/assemblies/central-auth/assembly-configuring-central-auth-generic-oidc-settings.adoc b/downstream/archive/archived-assemblies/central-auth/assembly-configuring-central-auth-generic-oidc-settings.adoc similarity index 100% rename from downstream/assemblies/central-auth/assembly-configuring-central-auth-generic-oidc-settings.adoc rename to downstream/archive/archived-assemblies/central-auth/assembly-configuring-central-auth-generic-oidc-settings.adoc diff --git a/downstream/assemblies/central-auth/assembly-install-central-auth-hub.adoc b/downstream/archive/archived-assemblies/central-auth/assembly-install-central-auth-hub.adoc similarity index 100% rename from downstream/assemblies/central-auth/assembly-install-central-auth-hub.adoc rename to downstream/archive/archived-assemblies/central-auth/assembly-install-central-auth-hub.adoc diff --git a/downstream/assemblies/core/assembly-understanding-ansible-concepts.adoc b/downstream/archive/archived-assemblies/core/assembly-understanding-ansible-concepts.adoc similarity index 100% rename from downstream/assemblies/core/assembly-understanding-ansible-concepts.adoc rename to downstream/archive/archived-assemblies/core/assembly-understanding-ansible-concepts.adoc diff --git a/downstream/assemblies/dev-guide/assembly-creating-content.adoc b/downstream/archive/archived-assemblies/dev-guide/assembly-creating-content.adoc similarity index 100% rename from downstream/assemblies/dev-guide/assembly-creating-content.adoc rename to downstream/archive/archived-assemblies/dev-guide/assembly-creating-content.adoc diff --git a/downstream/assemblies/dev-guide/assembly-introduction.adoc b/downstream/archive/archived-assemblies/dev-guide/assembly-introduction.adoc similarity index 100% rename from downstream/assemblies/dev-guide/assembly-introduction.adoc rename to downstream/archive/archived-assemblies/dev-guide/assembly-introduction.adoc diff --git a/downstream/assemblies/dev-guide/assembly-migrate-ansible-versions.adoc b/downstream/archive/archived-assemblies/dev-guide/assembly-migrate-ansible-versions.adoc similarity index 100% rename from downstream/assemblies/dev-guide/assembly-migrate-ansible-versions.adoc rename to 
downstream/archive/archived-assemblies/dev-guide/assembly-migrate-ansible-versions.adoc diff --git a/downstream/assemblies/dev-guide/assembly-migrating-existing-content.adoc b/downstream/archive/archived-assemblies/dev-guide/assembly-migrating-existing-content.adoc similarity index 100% rename from downstream/assemblies/dev-guide/assembly-migrating-existing-content.adoc rename to downstream/archive/archived-assemblies/dev-guide/assembly-migrating-existing-content.adoc diff --git a/downstream/assemblies/dev-guide/assembly-setting-up-dev-environment.adoc b/downstream/archive/archived-assemblies/dev-guide/assembly-setting-up-dev-environment.adoc similarity index 100% rename from downstream/assemblies/dev-guide/assembly-setting-up-dev-environment.adoc rename to downstream/archive/archived-assemblies/dev-guide/assembly-setting-up-dev-environment.adoc diff --git a/downstream/assemblies/dev-guide/assembly-tools-components.adoc b/downstream/archive/archived-assemblies/dev-guide/assembly-tools-components.adoc similarity index 100% rename from downstream/assemblies/dev-guide/assembly-tools-components.adoc rename to downstream/archive/archived-assemblies/dev-guide/assembly-tools-components.adoc diff --git a/downstream/assemblies/dev-guide/assembly-virt-env-to-ee.adoc b/downstream/archive/archived-assemblies/dev-guide/assembly-virt-env-to-ee.adoc similarity index 100% rename from downstream/assemblies/dev-guide/assembly-virt-env-to-ee.adoc rename to downstream/archive/archived-assemblies/dev-guide/assembly-virt-env-to-ee.adoc diff --git a/downstream/archive/archived-assemblies/devtools/assembly-devtools-setup.adoc b/downstream/archive/archived-assemblies/devtools/assembly-devtools-setup.adoc new file mode 100644 index 0000000000..2f3c1e4da9 --- /dev/null +++ b/downstream/archive/archived-assemblies/devtools/assembly-devtools-setup.adoc @@ -0,0 +1,15 @@ +ifdef::context[:parent-context: {context}] +[id="devtools-setup"] + += Configuring {ToolsName} + + +:context: devtools-setup +[role="_abstract"] + +include::devtools/proc-setup-vscode-workspace.adoc[leveloffset=+1] +// include::devtools/proc-create-python-venv.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-rhdh-planning.adoc b/downstream/archive/archived-assemblies/devtools/assembly-rhdh-planning.adoc similarity index 100% rename from downstream/assemblies/devtools/assembly-rhdh-planning.adoc rename to downstream/archive/archived-assemblies/devtools/assembly-rhdh-planning.adoc diff --git a/downstream/archive/archived-assemblies/devtools/assembly-rhdh-uninstall.adoc b/downstream/archive/archived-assemblies/devtools/assembly-rhdh-uninstall.adoc new file mode 100644 index 0000000000..99262a4e00 --- /dev/null +++ b/downstream/archive/archived-assemblies/devtools/assembly-rhdh-uninstall.adoc @@ -0,0 +1,14 @@ +ifdef::context[:parent-context-of-assembly-rhdh-uninstall: {context}] +[id="rhdh-uninstall_{context}"] + += Uninstalling the Ansible plug-ins + +:context: rhdh-uninstall + +include::devtools/proc-rhdh-uninstall-ocp-helm.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-uninstall-ocp-operator.adoc[leveloffset=+1] + +ifdef::parent-context-of-assembly-rhdh-uninstall[:context: {parent-context-of-assembly-rhdh-uninstall}] +ifndef::parent-context-of-assembly-rhdh-uninstall[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-testing-playbooks.adoc 
b/downstream/archive/archived-assemblies/devtools/assembly-testing-playbooks.adoc similarity index 100% rename from downstream/assemblies/devtools/assembly-testing-playbooks.adoc rename to downstream/archive/archived-assemblies/devtools/assembly-testing-playbooks.adoc diff --git a/downstream/assemblies/eda/assembly-about-event-driven-ansible-automation.adoc b/downstream/archive/archived-assemblies/eda/assembly-about-event-driven-ansible-automation.adoc similarity index 100% rename from downstream/assemblies/eda/assembly-about-event-driven-ansible-automation.adoc rename to downstream/archive/archived-assemblies/eda/assembly-about-event-driven-ansible-automation.adoc diff --git a/downstream/assemblies/eda/assembly-ansible-rulebooks.adoc b/downstream/archive/archived-assemblies/eda/assembly-ansible-rulebooks.adoc similarity index 100% rename from downstream/assemblies/eda/assembly-ansible-rulebooks.adoc rename to downstream/archive/archived-assemblies/eda/assembly-ansible-rulebooks.adoc diff --git a/downstream/assemblies/eda/assembly-eda-set-up-token.adoc b/downstream/archive/archived-assemblies/eda/assembly-eda-set-up-token.adoc similarity index 100% rename from downstream/assemblies/eda/assembly-eda-set-up-token.adoc rename to downstream/archive/archived-assemblies/eda/assembly-eda-set-up-token.adoc diff --git a/downstream/assemblies/eda/assembly-installation-eda-controller.adoc b/downstream/archive/archived-assemblies/eda/assembly-installation-eda-controller.adoc similarity index 100% rename from downstream/assemblies/eda/assembly-installation-eda-controller.adoc rename to downstream/archive/archived-assemblies/eda/assembly-installation-eda-controller.adoc diff --git a/downstream/assemblies/eda/assembly-using-eda-controller.adoc b/downstream/archive/archived-assemblies/eda/assembly-using-eda-controller.adoc similarity index 100% rename from downstream/assemblies/eda/assembly-using-eda-controller.adoc rename to downstream/archive/archived-assemblies/eda/assembly-using-eda-controller.adoc diff --git a/downstream/assemblies/hub/assembly-basic-remote-management.adoc b/downstream/archive/archived-assemblies/hub/assembly-basic-remote-management.adoc similarity index 100% rename from downstream/assemblies/hub/assembly-basic-remote-management.adoc rename to downstream/archive/archived-assemblies/hub/assembly-basic-remote-management.adoc diff --git a/downstream/assemblies/hub/assembly-basic-repo-management.adoc b/downstream/archive/archived-assemblies/hub/assembly-basic-repo-management.adoc similarity index 100% rename from downstream/assemblies/hub/assembly-basic-repo-management.adoc rename to downstream/archive/archived-assemblies/hub/assembly-basic-repo-management.adoc diff --git a/downstream/assemblies/hub/assembly-configure-hub-primary.adoc b/downstream/archive/archived-assemblies/hub/assembly-configure-hub-primary.adoc similarity index 100% rename from downstream/assemblies/hub/assembly-configure-hub-primary.adoc rename to downstream/archive/archived-assemblies/hub/assembly-configure-hub-primary.adoc diff --git a/downstream/assemblies/hub/assembly-faq.adoc b/downstream/archive/archived-assemblies/hub/assembly-faq.adoc similarity index 100% rename from downstream/assemblies/hub/assembly-faq.adoc rename to downstream/archive/archived-assemblies/hub/assembly-faq.adoc diff --git a/downstream/assemblies/hub/assembly-hub-connect-sso.adoc b/downstream/archive/archived-assemblies/hub/assembly-hub-connect-sso.adoc similarity index 100% rename from 
downstream/assemblies/hub/assembly-hub-connect-sso.adoc rename to downstream/archive/archived-assemblies/hub/assembly-hub-connect-sso.adoc diff --git a/downstream/assemblies/hub/assembly-hub-create-api-token.adoc b/downstream/archive/archived-assemblies/hub/assembly-hub-create-api-token.adoc similarity index 67% rename from downstream/assemblies/hub/assembly-hub-create-api-token.adoc rename to downstream/archive/archived-assemblies/hub/assembly-hub-create-api-token.adoc index e54c0c2a07..0f4433fcce 100644 --- a/downstream/assemblies/hub/assembly-hub-create-api-token.adoc +++ b/downstream/archive/archived-assemblies/hub/assembly-hub-create-api-token.adoc @@ -10,9 +10,11 @@ Before you can interact with {HubName} by uploading or downloading collections, Your method for creating the API token differs according to the type of {HubName} that you are using: -* {HubNameStart} uses Offline token management. See xref:proc-create-api-token[Creating the API token in {HubName}]. +* {HubNameStart} uses offline token management. See xref:proc-create-api-token_api-token[Creating the offline token in {HubName}]. -* {PrivateHubNameStart} uses API token management. See xref:proc-create-api-token-pah[Creating the API token in {PrivateHubName}]. +* {PrivateHubNameStart} uses API token management. See xref:proc-create-api-token-pah_api-token[Creating the API token in {PrivateHubName}]. + +* If you are using Keycloak to authenticate your {PrivateHubName}, follow the procedure for xref:proc-create-api-token_api-token[Creating the offline token in {HubName}]. include::hub/proc-create-api-token.adoc[leveloffset=+1] diff --git a/downstream/assemblies/hub/assembly-hub-install-setup-script.adoc b/downstream/archive/archived-assemblies/hub/assembly-hub-install-setup-script.adoc similarity index 100% rename from downstream/assemblies/hub/assembly-hub-install-setup-script.adoc rename to downstream/archive/archived-assemblies/hub/assembly-hub-install-setup-script.adoc diff --git a/downstream/assemblies/hub/assembly-hub-overview.adoc b/downstream/archive/archived-assemblies/hub/assembly-hub-overview.adoc similarity index 100% rename from downstream/assemblies/hub/assembly-hub-overview.adoc rename to downstream/archive/archived-assemblies/hub/assembly-hub-overview.adoc diff --git a/downstream/assemblies/hub/assembly-private-automation-hub.adoc b/downstream/archive/archived-assemblies/hub/assembly-private-automation-hub.adoc similarity index 100% rename from downstream/assemblies/hub/assembly-private-automation-hub.adoc rename to downstream/archive/archived-assemblies/hub/assembly-private-automation-hub.adoc diff --git a/downstream/assemblies/hub/assembly-uploading-content-hub.adoc b/downstream/archive/archived-assemblies/hub/assembly-uploading-content-hub.adoc similarity index 100% rename from downstream/assemblies/hub/assembly-uploading-content-hub.adoc rename to downstream/archive/archived-assemblies/hub/assembly-uploading-content-hub.adoc diff --git a/downstream/assemblies/hub/assembly-user-access.adoc b/downstream/archive/archived-assemblies/hub/assembly-user-access.adoc similarity index 100% rename from downstream/assemblies/hub/assembly-user-access.adoc rename to downstream/archive/archived-assemblies/hub/assembly-user-access.adoc diff --git a/downstream/assemblies/hub/assembly-view-only-access.adoc b/downstream/archive/archived-assemblies/hub/assembly-view-only-access.adoc similarity index 100% rename from downstream/assemblies/hub/assembly-view-only-access.adoc rename to 
downstream/archive/archived-assemblies/hub/assembly-view-only-access.adoc diff --git a/downstream/assemblies/hub/assembly_ppah-installation.adoc b/downstream/archive/archived-assemblies/hub/assembly_ppah-installation.adoc similarity index 100% rename from downstream/assemblies/hub/assembly_ppah-installation.adoc rename to downstream/archive/archived-assemblies/hub/assembly_ppah-installation.adoc diff --git a/downstream/assemblies/navigator/assembly-executing-content.adoc b/downstream/archive/archived-assemblies/navigator/assembly-executing-content.adoc similarity index 100% rename from downstream/assemblies/navigator/assembly-executing-content.adoc rename to downstream/archive/archived-assemblies/navigator/assembly-executing-content.adoc diff --git a/downstream/assemblies/navigator/assembly-installing-on-rhel.adoc b/downstream/archive/archived-assemblies/navigator/assembly-installing-on-rhel.adoc similarity index 100% rename from downstream/assemblies/navigator/assembly-installing-on-rhel.adoc rename to downstream/archive/archived-assemblies/navigator/assembly-installing-on-rhel.adoc diff --git a/downstream/archive/archived-assemblies/platform/assembly-aap-architecture.adoc b/downstream/archive/archived-assemblies/platform/assembly-aap-architecture.adoc new file mode 100644 index 0000000000..107da4f7ff --- /dev/null +++ b/downstream/archive/archived-assemblies/platform/assembly-aap-architecture.adoc @@ -0,0 +1,13 @@ +// This assembly is part of the AAP Planning Guide +[id='aap_architecture'] += {PlatformName} Architecture + +Deploy all components of {PlatformNameShort} so that all features and capabilities are available for use without the need to take further action. + +Red Hat tests the installation of {PlatformNameShort} {PlatformVers} based on a defined set of infrastructure topologies or reference architectures. Enterprise organizations can use one of the {EnterpriseTopologyPlural} for production deployments to ensure the highest level of uptime, performance, and continued scalability. Organizations or deployments that are resource constrained can use a {GrowthTopology}. + +The following section provides a comprehensive architectural example of an {PlatformNameShort} deployment. 
+ +include::platform/con-aap-example-architecture.adoc[leveloffset=+1] +include::platform/ref-example-CONT-architecture.adoc[leveloffset=+1] +include::platform/ref-example-OCP-architecture.adoc[leveloffset=+1] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-ag-controller-LDAP-auth.adoc b/downstream/archive/archived-assemblies/platform/assembly-ag-controller-LDAP-auth.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-ag-controller-LDAP-auth.adoc rename to downstream/archive/archived-assemblies/platform/assembly-ag-controller-LDAP-auth.adoc diff --git a/downstream/assemblies/platform/assembly-ag-controller-session-limits.adoc b/downstream/archive/archived-assemblies/platform/assembly-ag-controller-session-limits.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-ag-controller-session-limits.adoc rename to downstream/archive/archived-assemblies/platform/assembly-ag-controller-session-limits.adoc diff --git a/downstream/assemblies/platform/assembly-ag-controller-set-up-enterprise-authentication.adoc b/downstream/archive/archived-assemblies/platform/assembly-ag-controller-set-up-enterprise-authentication.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-ag-controller-set-up-enterprise-authentication.adoc rename to downstream/archive/archived-assemblies/platform/assembly-ag-controller-set-up-enterprise-authentication.adoc diff --git a/downstream/assemblies/platform/assembly-attaching-subscriptions.adoc b/downstream/archive/archived-assemblies/platform/assembly-attaching-subscriptions.adoc similarity index 93% rename from downstream/assemblies/platform/assembly-attaching-subscriptions.adoc rename to downstream/archive/archived-assemblies/platform/assembly-attaching-subscriptions.adoc index ac588ac5f1..81b9a25f79 100644 --- a/downstream/assemblies/platform/assembly-attaching-subscriptions.adoc +++ b/downstream/archive/archived-assemblies/platform/assembly-attaching-subscriptions.adoc @@ -1,4 +1,4 @@ - +// emurtoug removed this assembly from the Planning guide to avoid duplication of subscription content added to Access management and authentication [id="proc-attaching-subscriptions_{context}"] diff --git a/downstream/archive/archived-assemblies/platform/assembly-configure-aap-operator.adoc b/downstream/archive/archived-assemblies/platform/assembly-configure-aap-operator.adoc new file mode 100644 index 0000000000..f4f9040494 --- /dev/null +++ b/downstream/archive/archived-assemblies/platform/assembly-configure-aap-operator.adoc @@ -0,0 +1,33 @@ +ifdef::context[:parent-context: {context}] + +[id="configure-aap-operator_{context}"] + +:context: configure-aap-operator + += Configuring the {OperatorPlatformName} on {OCP} + +The {Gateway} for {PlatformNameShort} enables you to manage the following {PlatformNameShort} components to form a single user interface: + +* {ControllerNameStart} +* {HubNameStart} +* {EDAName} +* {LightspeedShortName} (This feature is disabled by default; you must opt in to use it.) + +Before you can deploy the {Gateway}, you must have {OperatorPlatformNameShort} installed in a namespace. +If you have not installed {OperatorPlatformNameShort}, see xref:install-aap-operator_operator-platform-doc[Installing the {OperatorPlatformName} on {OCP}]. + +[NOTE] +==== +{GatewayStart} is only available under {OperatorPlatformNameShort} version 2.5. Every component deployed under {OperatorPlatformNameShort} 2.5 defaults to version 2.5.
+==== + +If you have the {OperatorPlatformNameShort} and some or all of the {PlatformNameShort} components installed, see xref:operator-deploy-central-config_{context}[Deploying the platform gateway with existing {PlatformNameShort} components] for how to proceed. + +include::platform/proc-operator-link-components.adoc[leveloffset=+1] +include::platform/proc-operator-access-aap.adoc[leveloffset=+1] +include::platform/proc-operator-deploy-central-config.adoc[leveloffset=+1] +include::platform/proc-operator-external-db-gateway.adoc[leveloffset=+1] +include::platform/proc-operator-aap-faq.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-configuring-websockets.adoc b/downstream/archive/archived-assemblies/platform/assembly-configuring-websockets.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-configuring-websockets.adoc rename to downstream/archive/archived-assemblies/platform/assembly-configuring-websockets.adoc diff --git a/downstream/assemblies/platform/assembly-content-migration.adoc b/downstream/archive/archived-assemblies/platform/assembly-content-migration.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-content-migration.adoc rename to downstream/archive/archived-assemblies/platform/assembly-content-migration.adoc diff --git a/downstream/assemblies/platform/assembly-controller-dashboard.adoc b/downstream/archive/archived-assemblies/platform/assembly-controller-dashboard.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-controller-dashboard.adoc rename to downstream/archive/archived-assemblies/platform/assembly-controller-dashboard.adoc diff --git a/downstream/assemblies/platform/assembly-controller-isolation-function-variables.adoc b/downstream/archive/archived-assemblies/platform/assembly-controller-isolation-function-variables.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-controller-isolation-function-variables.adoc rename to downstream/archive/archived-assemblies/platform/assembly-controller-isolation-function-variables.adoc diff --git a/downstream/assemblies/platform/assembly-controller-job-templates.adoc b/downstream/archive/archived-assemblies/platform/assembly-controller-job-templates.adoc similarity index 82% rename from downstream/assemblies/platform/assembly-controller-job-templates.adoc rename to downstream/archive/archived-assemblies/platform/assembly-controller-job-templates.adoc index 93c0c91346..f5b30964ac 100644 --- a/downstream/assemblies/platform/assembly-controller-job-templates.adoc +++ b/downstream/archive/archived-assemblies/platform/assembly-controller-job-templates.adoc @@ -4,8 +4,9 @@ A job template combines an Ansible playbook from a project and the settings required to launch it. Job templates are useful to run the same job many times. Job templates also encourage the reuse of Ansible playbook content and collaboration between teams. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#controller-job-templates[Job Templates] in the _{ControllerUG}_. +For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#controller-job-templates[Job Templates] in _{ControllerUG}_.
include::platform/proc-controller-getting-started-with-job-templates.adoc[leveloffset=+1] +include::platform/proc-controller-create-job-template.adoc[leveloffset=+1] include::platform/proc-controller-edit-job-template.adoc[leveloffset=+1] include::platform/proc-controller-run-job-template.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-controller-kerberos-authentication.adoc b/downstream/archive/archived-assemblies/platform/assembly-controller-kerberos-authentication.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-controller-kerberos-authentication.adoc rename to downstream/archive/archived-assemblies/platform/assembly-controller-kerberos-authentication.adoc diff --git a/downstream/assemblies/platform/assembly-controller-licensing.adoc b/downstream/archive/archived-assemblies/platform/assembly-controller-licensing.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-controller-licensing.adoc rename to downstream/archive/archived-assemblies/platform/assembly-controller-licensing.adoc diff --git a/downstream/assemblies/platform/assembly-controller-managing-subscriptions.adoc b/downstream/archive/archived-assemblies/platform/assembly-controller-managing-subscriptions.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-controller-managing-subscriptions.adoc rename to downstream/archive/archived-assemblies/platform/assembly-controller-managing-subscriptions.adoc diff --git a/downstream/assemblies/platform/assembly-controller-security.adoc b/downstream/archive/archived-assemblies/platform/assembly-controller-security.adoc similarity index 99% rename from downstream/assemblies/platform/assembly-controller-security.adoc rename to downstream/archive/archived-assemblies/platform/assembly-controller-security.adoc index 8580a9127d..4794e615ee 100644 --- a/downstream/assemblies/platform/assembly-controller-security.adoc +++ b/downstream/archive/archived-assemblies/platform/assembly-controller-security.adoc @@ -11,8 +11,10 @@ This protection ensures that jobs can only access playbooks, roles, and data fro For credential security, you can upload locked SSH keys and set the unlock password to "ask". You can also have the system prompt you for SSH credentials or sudo passwords rather than having the system store them in the database. 
+//Moved to jobs include::platform/con-controller-playbook-access-info-sharing.adoc[leveloffset=+1] include::platform/ref-controller-isolation-functionality.adoc[leveloffset=+2] + include::platform/con-controller-rbac.adoc[leveloffset=+1] include::platform/con-controller-role-hierarchy.adoc[leveloffset=+2] include::platform/ref-controller-applying-rbac.adoc[leveloffset=+3] @@ -25,6 +27,5 @@ include::platform/ref-controller-rbac-user-view.adoc[leveloffset=+4] include::platform/ref-controller-rbac-roles.adoc[leveloffset=+3] include::platform/ref-controller-rbac-built-in-roles.adoc[leveloffset=+4] include::platform/ref-controller-rbac-personas.adoc[leveloffset=+3] - include::platform/con-controller-function-of-roles.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-controller-set-up-social-authentication.adoc b/downstream/archive/archived-assemblies/platform/assembly-controller-set-up-social-authentication.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-controller-set-up-social-authentication.adoc rename to downstream/archive/archived-assemblies/platform/assembly-controller-set-up-social-authentication.adoc diff --git a/downstream/assemblies/platform/assembly-controller-token-based-authentication.adoc b/downstream/archive/archived-assemblies/platform/assembly-controller-token-based-authentication.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-controller-token-based-authentication.adoc rename to downstream/archive/archived-assemblies/platform/assembly-controller-token-based-authentication.adoc diff --git a/downstream/assemblies/platform/assembly-converting-playbooks-for-aap2.adoc b/downstream/archive/archived-assemblies/platform/assembly-converting-playbooks-for-aap2.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-converting-playbooks-for-aap2.adoc rename to downstream/archive/archived-assemblies/platform/assembly-converting-playbooks-for-aap2.adoc diff --git a/downstream/assemblies/platform/assembly-custom-inventory-scripts.adoc b/downstream/archive/archived-assemblies/platform/assembly-custom-inventory-scripts.adoc similarity index 96% rename from downstream/assemblies/platform/assembly-custom-inventory-scripts.adoc rename to downstream/archive/archived-assemblies/platform/assembly-custom-inventory-scripts.adoc index b6a40e541f..fc4a1778bb 100644 --- a/downstream/assemblies/platform/assembly-custom-inventory-scripts.adoc +++ b/downstream/archive/archived-assemblies/platform/assembly-custom-inventory-scripts.adoc @@ -6,11 +6,11 @@ ==== Inventory scripts have been discontinued. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-inventories#ref-controller-export-old-scripts[Export old inventory scripts] in the _{ControllerUG}_. +For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-inventories#ref-controller-export-old-scripts[Export old inventory scripts] in _{ControllerUG}_. ==== If you use custom inventory scripts, migrate to sourcing these scripts from a project. -For more information, see xref:assembly-inventory-file-importing[Inventory File Importing], and link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-inventories#ref-controller-inventory-sources[Inventory sources] in the _{ControllerUG}_.
+For more information, see xref:assembly-inventory-file-importing[Inventory File Importing], and link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-inventories#ref-controller-inventory-sources[Inventory sources] in _{ControllerUG}_. If you are setting up an inventory file, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/assembly-platform-install-scenario#proc-editing-installer-inventory-file_platform-install-scenario[Editing the Red Hat Ansible Automation Platform installer inventory file] and find examples specific to your setup. diff --git a/downstream/assemblies/platform/assembly-deploy-eda-controller-on-aap-operator.adoc b/downstream/archive/archived-assemblies/platform/assembly-deploy-eda-controller-on-aap-operator.adoc similarity index 89% rename from downstream/assemblies/platform/assembly-deploy-eda-controller-on-aap-operator.adoc rename to downstream/archive/archived-assemblies/platform/assembly-deploy-eda-controller-on-aap-operator.adoc index 78a32c8bfd..add161366b 100644 --- a/downstream/assemblies/platform/assembly-deploy-eda-controller-on-aap-operator.adoc +++ b/downstream/archive/archived-assemblies/platform/assembly-deploy-eda-controller-on-aap-operator.adoc @@ -3,14 +3,14 @@ ifdef::context[:parent: {context}] [id="deploy-eda-controller-on-aap-operator-ocp"] -= Deploying {EDAcontroller} with {OperatorPlatform} on {OCPShort} += Deploying {EDAcontroller} with {OperatorPlatformName} on {OCP} :context: deploying [role="_abstract"] {EDAcontroller} is the interface for event-driven automation and introduces automated resolution of IT requests. This component helps you connect to sources of events and acts on those events using rulebooks. When you deploy {EDAcontroller}, you can automate decision making, use numerous event sources, implement event-driven automation within and across multiple IT use cases, and achieve more efficient service delivery. -Use the following instructions to install {EDAName} with your {OperatorPlatform} on {OCPShort}. +Use the following instructions to install {EDAName} with your {OperatorPlatformNameShort} on {OCPShort}. include::platform/proc-deploy-eda-controller-with-aap-operator-ocp.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-encrypting-plaintext-passwords.adoc b/downstream/archive/archived-assemblies/platform/assembly-encrypting-plaintext-passwords.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-encrypting-plaintext-passwords.adoc rename to downstream/archive/archived-assemblies/platform/assembly-encrypting-plaintext-passwords.adoc diff --git a/downstream/archive/archived-assemblies/platform/assembly-gw-dashboard.adoc b/downstream/archive/archived-assemblies/platform/assembly-gw-dashboard.adoc new file mode 100644 index 0000000000..bc24dd9f72 --- /dev/null +++ b/downstream/archive/archived-assemblies/platform/assembly-gw-dashboard.adoc @@ -0,0 +1,18 @@ +ifdef::context[:parent-context: {context}] + +:_mod-docs-content-type: ASSEMBLY + +[id="gw-dashboard_{context}"] += {PlatformNameShort} dashboard + +:context: gw-dashboard + +[role="_abstract"] +The {PlatformNameShort} dashboard provides automation management and monitoring capabilities, allowing you to administer and configure automation functions, as well as view recent job activity details and performance statistics related to it. 
+ +include::platform/con-gw-dash-features.adoc[leveloffset=+1] + +include::platform/con-gw-dash-components.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-install-upgrade-pah.adoc b/downstream/archive/archived-assemblies/platform/assembly-install-upgrade-pah.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-install-upgrade-pah.adoc rename to downstream/archive/archived-assemblies/platform/assembly-install-upgrade-pah.adoc diff --git a/downstream/assemblies/platform/assembly-installing-high-availability-hub.adoc b/downstream/archive/archived-assemblies/platform/assembly-installing-high-availability-hub.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-installing-high-availability-hub.adoc rename to downstream/archive/archived-assemblies/platform/assembly-installing-high-availability-hub.adoc diff --git a/downstream/assemblies/platform/assembly-migrate-isolated-execution-nodes.adoc b/downstream/archive/archived-assemblies/platform/assembly-migrate-isolated-execution-nodes.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-migrate-isolated-execution-nodes.adoc rename to downstream/archive/archived-assemblies/platform/assembly-migrate-isolated-execution-nodes.adoc diff --git a/downstream/assemblies/platform/assembly-migrate-legacy-venv-to-ee.adoc b/downstream/archive/archived-assemblies/platform/assembly-migrate-legacy-venv-to-ee.adoc similarity index 88% rename from downstream/assemblies/platform/assembly-migrate-legacy-venv-to-ee.adoc rename to downstream/archive/archived-assemblies/platform/assembly-migrate-legacy-venv-to-ee.adoc index d8403c3276..8315b76555 100644 --- a/downstream/assemblies/platform/assembly-migrate-legacy-venv-to-ee.adoc +++ b/downstream/archive/archived-assemblies/platform/assembly-migrate-legacy-venv-to-ee.adoc @@ -1,17 +1,10 @@ - - ifdef::context[:parent-context: {context}] - - [id="upgrading-to-ees"] = Migrating to {ExecEnvName} - :context: upgrading-to-ees - - // [role="_abstract"] include::platform/con-why-ee.adoc[leveloffset=+1] @@ -34,7 +27,7 @@ include::dev-guide/assembly-virt-env-to-ee.adoc[leveloffset=+1] // Remove comments once 2.2 version of the docs are published. //== Additional resources -//* See the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/ansible_navigator_creator_guide/index[Red Hat Ansible Automation Platform Creator Guide] for more information on migrating to {ExecEnvName}. +//* See the _link:{LinkNavigatorGuide}_ guide for more information on migrating to {ExecEnvName}. 
ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-migrate-platform.adoc b/downstream/archive/archived-assemblies/platform/assembly-migrate-platform.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-migrate-platform.adoc rename to downstream/archive/archived-assemblies/platform/assembly-migrate-platform.adoc diff --git a/downstream/assemblies/platform/assembly-multi-credential-assignment.adoc b/downstream/archive/archived-assemblies/platform/assembly-multi-credential-assignment.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-multi-credential-assignment.adoc rename to downstream/archive/archived-assemblies/platform/assembly-multi-credential-assignment.adoc diff --git a/downstream/archive/archived-assemblies/platform/assembly-operator-debugging.adoc b/downstream/archive/archived-assemblies/platform/assembly-operator-debugging.adoc new file mode 100644 index 0000000000..eeb83846f0 --- /dev/null +++ b/downstream/archive/archived-assemblies/platform/assembly-operator-debugging.adoc @@ -0,0 +1,15 @@ +ifdef::context[:parent-context: {context}] + +:context: operator-debugging + + +[id="operator-debugging"] += Debugging the {OperatorPlatform} + +include::platform/con-operator-ansible-verbosity.adoc[leveloffset=1] + + + + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-platform-whats-next.adoc b/downstream/archive/archived-assemblies/platform/assembly-platform-whats-next.adoc similarity index 71% rename from downstream/assemblies/platform/assembly-platform-whats-next.adoc rename to downstream/archive/archived-assemblies/platform/assembly-platform-whats-next.adoc index 99336b92ba..f9bc69b01c 100644 --- a/downstream/assemblies/platform/assembly-platform-whats-next.adoc +++ b/downstream/archive/archived-assemblies/platform/assembly-platform-whats-next.adoc @@ -9,7 +9,7 @@ Whether you are a new {PlatformNameShort} user looking to start automating, or a //isolated node migration //playbooks to download - -include::assembly-migrate-platform.adoc[leveloffset=+1] +//[ddacosta]Migration not part of 2.5EA so removing this section until a migration path is made available. The content will need to be reworked for changes to migration/upgrade. 
+// include::assembly-migrate-platform.adoc[leveloffset=+1] include::platform/proc-update-ee-image-locations.adoc[leveloffset=+1] include::platform/con-why-automation-mesh.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-supported-installation-scenarios.adoc b/downstream/archive/archived-assemblies/platform/assembly-supported-installation-scenarios.adoc similarity index 100% rename from downstream/assemblies/platform/assembly-supported-installation-scenarios.adoc rename to downstream/archive/archived-assemblies/platform/assembly-supported-installation-scenarios.adoc diff --git a/downstream/assemblies/platform/assembly-using-rhsso-operator-with-automation-hub.adoc b/downstream/archive/archived-assemblies/platform/assembly-using-rhsso-operator-with-automation-hub.adoc similarity index 86% rename from downstream/assemblies/platform/assembly-using-rhsso-operator-with-automation-hub.adoc rename to downstream/archive/archived-assemblies/platform/assembly-using-rhsso-operator-with-automation-hub.adoc index 4052f59442..e3cc82310f 100644 --- a/downstream/assemblies/platform/assembly-using-rhsso-operator-with-automation-hub.adoc +++ b/downstream/archive/archived-assemblies/platform/assembly-using-rhsso-operator-with-automation-hub.adoc @@ -9,7 +9,7 @@ ifdef::context[:parent-context: {context}] {PrivateHubNameStart} uses {RHSSO} for authentication. The {OperatorRHSSO} creates and manages resources. -Use this Operator to create custom resources to automate {RHSSO} administration in Openshift. +Use this operator to create custom resources to automate {RHSSO} administration in OpenShift. * When installing {PlatformNameShort} on _Virtual Machines_ (VMs) the installer can automatically install and configure {RHSSO} for use with {PrivateHubName}. @@ -28,7 +28,8 @@ include::platform/proc-create-keycloak-instance.adoc[leveloffset=2] include::platform/proc-create-keycloak-realm.adoc[leveloffset=2] include::platform/proc-create-keycloak-client.adoc[leveloffset=2] include::platform/proc-create-a-user.adoc[leveloffset=2] -include::platform/proc-installing-the-ansible-platform-operator.adoc[leveloffset=2] +//[gmurray] commenting out for now as we're trying to encourage users to install all components via platform gateway. 
+//include::platform/proc-installing-the-ansible-platform-operator.adoc[leveloffset=1] include::platform/proc-creating-a-secret.adoc[leveloffset=2] include::platform/proc-installing-hub-using-operator.adoc[leveloffset=2] include::platform/proc-determine-hub-route.adoc[leveloffset=2] diff --git a/downstream/assemblies/playbooks/assembly-playbook-gs.adoc b/downstream/archive/archived-assemblies/playbooks/assembly-playbook-gs.adoc similarity index 100% rename from downstream/assemblies/playbooks/assembly-playbook-gs.adoc rename to downstream/archive/archived-assemblies/playbooks/assembly-playbook-gs.adoc diff --git a/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-login.adoc b/downstream/archive/archived-assemblies/troubleshooting-aap/assembly-troubleshoot-login.adoc similarity index 100% rename from downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-login.adoc rename to downstream/archive/archived-assemblies/troubleshooting-aap/assembly-troubleshoot-login.adoc diff --git a/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-subscriptions.adoc b/downstream/archive/archived-assemblies/troubleshooting-aap/assembly-troubleshoot-subscriptions.adoc similarity index 100% rename from downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-subscriptions.adoc rename to downstream/archive/archived-assemblies/troubleshooting-aap/assembly-troubleshoot-subscriptions.adoc diff --git a/downstream/archive/archived-images/rpm-a-env-b.png b/downstream/archive/archived-images/rpm-a-env-b.png new file mode 100644 index 0000000000..2e79b7fad6 Binary files /dev/null and b/downstream/archive/archived-images/rpm-a-env-b.png differ diff --git a/downstream/archive/archived-images/rpm-b-env-b.png b/downstream/archive/archived-images/rpm-b-env-b.png new file mode 100644 index 0000000000..24e01a3ce2 Binary files /dev/null and b/downstream/archive/archived-images/rpm-b-env-b.png differ diff --git a/downstream/modules/aap-hardening/con-about-aap-hardening.adoc b/downstream/archive/archived-modules/aap-hardening/con-about-aap-hardening.adoc similarity index 100% rename from downstream/modules/aap-hardening/con-about-aap-hardening.adoc rename to downstream/archive/archived-modules/aap-hardening/con-about-aap-hardening.adoc diff --git a/downstream/archive/archived-modules/aap-hardening/con-benefits-of-patch-automation.adoc b/downstream/archive/archived-modules/aap-hardening/con-benefits-of-patch-automation.adoc new file mode 100644 index 0000000000..ae1e776644 --- /dev/null +++ b/downstream/archive/archived-modules/aap-hardening/con-benefits-of-patch-automation.adoc @@ -0,0 +1,10 @@ +[id="con-benefits-of-patch-automation"] + += Benefits of patch automation + +Automating the patching process provides a number of benefits: + +* Reduces error-prone manual effort. +* Decreases time to deploy patches at scale. +* Ensures consistency of patches across similar systems. Manual patching of similar systems can result in human error (forgetting one or more, patching using different versions) that impacts consistency. +* Enables orchestration of complex patching scenarios where an update might require taking a system snapshot before applying a patch, or might require additional configuration changes when the patch is applied. 
diff --git a/downstream/modules/aap-hardening/con-controller-configuration.adoc b/downstream/archive/archived-modules/aap-hardening/con-controller-configuration.adoc similarity index 100% rename from downstream/modules/aap-hardening/con-controller-configuration.adoc rename to downstream/archive/archived-modules/aap-hardening/con-controller-configuration.adoc diff --git a/downstream/modules/aap-hardening/con-controller-stig-considerations.adoc b/downstream/archive/archived-modules/aap-hardening/con-controller-stig-considerations.adoc similarity index 100% rename from downstream/modules/aap-hardening/con-controller-stig-considerations.adoc rename to downstream/archive/archived-modules/aap-hardening/con-controller-stig-considerations.adoc diff --git a/downstream/archive/archived-modules/aap-hardening/con-patch-automation-with-aap.adoc b/downstream/archive/archived-modules/aap-hardening/con-patch-automation-with-aap.adoc new file mode 100644 index 0000000000..8df0e5f714 --- /dev/null +++ b/downstream/archive/archived-modules/aap-hardening/con-patch-automation-with-aap.adoc @@ -0,0 +1,8 @@ +[id="con-patch-automation-with-aap"] + += Patch automation with {PlatformNameShort} + +Software patching is a fundamental activity of security and IT operations teams everywhere. +Keeping patches up to date is critical to remediating software vulnerabilities and meeting compliance requirements, but patching systems manually at scale can be time-consuming and error-prone. +Organizations should put thought into patch management strategies that meet their security, compliance, and business objectives, to prioritize the types of patches to apply (known exploits, critical or important vulnerabilities, optimizations, routine updates, new features, and so on) against the IT assets available across the enterprise. +Once policies and priorities have been defined and a patching plan is established, the manual tasks involved in patch management can be automated using {PlatformName} to improve patch deployment speed and accuracy, reduce human error, and limit downtime. diff --git a/downstream/archive/archived-modules/aap-hardening/con-patching-examples.adoc b/downstream/archive/archived-modules/aap-hardening/con-patching-examples.adoc new file mode 100644 index 0000000000..0c36ed064c --- /dev/null +++ b/downstream/archive/archived-modules/aap-hardening/con-patching-examples.adoc @@ -0,0 +1,7 @@ +[id="con-patching-examples"] + += Patching examples + +The following playbooks are provided as patching examples, and should be modified to fit the target environment and tested thoroughly before being used in production. +These examples use the `ansible.builtin.dnf` module for managing packages on RHEL and other operating systems that use the `dnf` package manager. +Modules for patching other Linux operating systems, Microsoft Windows, and many network devices are also available. diff --git a/downstream/archive/archived-modules/aap-hardening/con-security-operations-center.adoc b/downstream/archive/archived-modules/aap-hardening/con-security-operations-center.adoc new file mode 100644 index 0000000000..3cd095b9e9 --- /dev/null +++ b/downstream/archive/archived-modules/aap-hardening/con-security-operations-center.adoc @@ -0,0 +1,14 @@ +[id="con-security-operations-center"] + += {PlatformName} as part of a Security Operations Center + +Protecting your organization is a critical task. 
+Automating functions of your _Security Operations Center_ (SOC) can help you streamline security operations, response, and remediation activities at scale to reduce the risk and cost of breaches. +{PlatformName} can connect your security teams, tools, and processes for more successful automation adoption and use. +Learn how automation can help you safeguard your business and respond to growing security threats faster. + +link:https://www.redhat.com/en/resources/security-automation-ebook[Simplify your security operations center] provides an overview of the benefits of automating SOC operations, including such use cases as: + +* Investigation enrichment +* Threat hunting +* Incident response diff --git a/downstream/modules/aap-hardening/proc-configure-external-authentication.adoc b/downstream/archive/archived-modules/aap-hardening/proc-configure-external-authentication.adoc similarity index 59% rename from downstream/modules/aap-hardening/proc-configure-external-authentication.adoc rename to downstream/archive/archived-modules/aap-hardening/proc-configure-external-authentication.adoc index 2ddb9a1f14..2211741afc 100644 --- a/downstream/modules/aap-hardening/proc-configure-external-authentication.adoc +++ b/downstream/archive/archived-modules/aap-hardening/proc-configure-external-authentication.adoc @@ -7,7 +7,14 @@ [role="_abstract"] -As noted in the xref:con-user-authentication-planning_{context}[User authentication planning] section, external authentication is recommended for user access to the {ControllerName}. After you choose the authentication type that best suits your needs, navigate to {MenuAEAdminSettings} and select *Authentication* in the {ControllerName} UI, click on the relevant link for your authentication back-end, and follow the relevant instructions for link:https://docs.ansible.com/automation-controller/latest/html/administration/configure_tower_in_tower.html#authentication[configuring the authentication] connection. +As noted in the xref:con-user-authentication-planning_{context}[User authentication planning] section, external authentication is recommended for user access to the {ControllerName}. +After you choose the authentication type that best suits your needs: + +. Navigate to {MenuAMAuthentication}. +. Click btn:[Create authentication]. +. Select the *Authentication type* you require from the menu. +. Click btn:[Next]. +. On the *Authentication details* page in the {ControllerName} UI, click the relevant link for your authentication back-end, and follow the instructions for link:https://docs.ansible.com/automation-controller/latest/html/administration/configure_tower_in_tower.html#authentication[configuring the authentication] connection.
// [ddacosta] The following will need to be rewritten for the way this is configured in 2.5 When using LDAP for external authentication with the {ControllerName}, navigate to {MenuAEAdminSettings} and select *Authentication* and then select *LDAP settings* on the {ControllerName} and ensure that one of the following is configured: diff --git a/downstream/modules/aap-hardening/proc-controller-stig-considerations.adoc b/downstream/archive/archived-modules/aap-hardening/proc-controller-stig-considerations.adoc similarity index 100% rename from downstream/modules/aap-hardening/proc-controller-stig-considerations.adoc rename to downstream/archive/archived-modules/aap-hardening/proc-controller-stig-considerations.adoc diff --git a/downstream/archive/archived-modules/aap-hardening/ref-aap-authentication.adoc b/downstream/archive/archived-modules/aap-hardening/ref-aap-authentication.adoc new file mode 100644 index 0000000000..5fade77568 --- /dev/null +++ b/downstream/archive/archived-modules/aap-hardening/ref-aap-authentication.adoc @@ -0,0 +1,38 @@ +// Module included in the following assemblies: +// downstream/assemblies/assembly-hardening-aap.adoc + +[id="ref-aap-authentication_{context}"] + += {PlatformNameShort} authentication + +[role="_abstract"] + +{ControllerNameStart} currently supports the following external authentication mechanisms through the {Gateway} UI: + +* Local +* LDAP +* SAML +* TACACS+ +* RADIUS +* Azure Active Directory +* Google OAuth +* Generic OIDC +* Keycloak +* GitHub single sign-on +* GitHub +* GitHub Organization +* GitHub team +* GitHub enterprise +* GitHub enterprise organization +* GitHub enterprise team + +Choose an authentication mechanism that adheres to your organization's authentication policies. +The authentication mechanism used must ensure that the authentication-related traffic between {PlatformNameShort} and the authentication back-end is encrypted when the traffic occurs on a public or insecure network (for example, LDAPS or LDAP over TLS, and HTTPS for OAuth2 and SAML providers). + +For more information on authentication methods, see link:{URLCentralAuth}/gw-configure-authentication#gw-config-authentication-type[Configuring an authentication type]. + +In the {Gateway}, any “system administrator” account can edit, change, and update any inventory or automation definition. +Restrict these account privileges to the minimum set of users possible for low-level {PlatformNameShort} configuration and disaster recovery.
+ + + diff --git a/downstream/modules/aap-hardening/ref-automation-controller-authentication.adoc b/downstream/archive/archived-modules/aap-hardening/ref-automation-controller-authentication.adoc similarity index 53% rename from downstream/modules/aap-hardening/ref-automation-controller-authentication.adoc rename to downstream/archive/archived-modules/aap-hardening/ref-automation-controller-authentication.adoc index be7e3266fa..84ef394919 100644 --- a/downstream/modules/aap-hardening/ref-automation-controller-authentication.adoc +++ b/downstream/archive/archived-modules/aap-hardening/ref-automation-controller-authentication.adoc @@ -9,7 +9,7 @@ {ControllerNameStart} currently supports the following external authentication mechanisms: -* Azure Activity Directory +* {MSEntraID}, formerly known as {Azure} Active Directory * GitHub single sign-on * Google OAuth2 single sign-in * LDAP @@ -18,9 +18,6 @@ * TACACS+ * Generic OIDC -Choose an authentication mechanism that adheres to your organization's authentication policies, and refer to the link:https://docs.ansible.com/automation-controller/latest/html/administration/configure_tower_in_tower.html#authentication[Controller Configuration - Authentication] documentation to understand the prerequisites for the relevant authentication mechanism. The authentication mechanism used must ensure that the authentication-related traffic between {PlatformNameShort} and the authentication back-end is encrypted when the traffic occurs on a public or non-secure network (for example, LDAPS or LDAP over TLS, HTTPS for OAuth2 and SAML providers, etc.). +Choose an authentication mechanism that adheres to your organization's authentication policies, and see the link:https://docs.ansible.com/automation-controller/latest/html/administration/configure_tower_in_tower.html#authentication[Controller Configuration - Authentication] documentation to understand the prerequisites for the relevant authentication mechanism. The authentication mechanism used must ensure that the authentication-related traffic between {PlatformNameShort} and the authentication back-end is encrypted when the traffic occurs on a public or insecure network (for example, LDAPS or LDAP over TLS, HTTPS for OAuth2 and SAML providers, etc.). In {ControllerName}, any “system administrator” account can edit, change, and update any inventory or automation definition. Restrict these account privileges to the minimum set of users possible for low-level {ControllerName} configuration and disaster recovery. - - - diff --git a/downstream/archive/archived-modules/aap-hardening/ref-complex-patching-scenarios.adoc b/downstream/archive/archived-modules/aap-hardening/ref-complex-patching-scenarios.adoc new file mode 100644 index 0000000000..9c12df076f --- /dev/null +++ b/downstream/archive/archived-modules/aap-hardening/ref-complex-patching-scenarios.adoc @@ -0,0 +1,21 @@ +[id="ref-complex-patching-scenarios"] + += Complex patching scenarios + +In {PlatformNameShort}, multiple automation jobs can be chained together into workflows, which can be used to coordinate multiple steps in a complex patching scenario. + +The following example complex patching scenario demonstrates taking virtual machine snapshots, patching the virtual machines, and creating tickets when an error is encountered in the workflow. + +. Run a project sync to ensure the latest playbooks are available. In parallel, run an inventory sync to make sure the latest list of target hosts is available. +. Take a snapshot of each target host. +.. 
If the snapshot task fails, submit a ticket with the relevant information. +. Patch each of the target hosts. +.. If the patching task fails, restore the snapshot and submit a ticket with the relevant information. +. Delete each snapshot where the patching task was successful. + +The following workflow visualization shows how the components of the example complex patching scenario are executed: + +image:workflow.png[Workflow representation] + +.Additional resources +For more information on workflows, see link:{URLControllerUserGuide}/controller-workflows[Workflows in automation controller]. \ No newline at end of file diff --git a/downstream/archive/archived-modules/aap-hardening/ref-install-security-updates.adoc b/downstream/archive/archived-modules/aap-hardening/ref-install-security-updates.adoc new file mode 100644 index 0000000000..87f2e3a92c --- /dev/null +++ b/downstream/archive/archived-modules/aap-hardening/ref-install-security-updates.adoc @@ -0,0 +1,18 @@ +[id="ref-install-security-updates"] + += Installing security updates only + +For organizations with a policy requiring that all RPMs with security errata be kept up to date, the following playbook might be used in a regularly scheduled job template. + +---- +- name: Install all security-related RPM updates + hosts: target_hosts + become: true + + tasks: + - name: Install latest RPMs with security errata + ansible.builtin.dnf: + name: '*' + security: true + state: latest +---- \ No newline at end of file diff --git a/downstream/archive/archived-modules/aap-hardening/ref-keep-up-to-date.adoc b/downstream/archive/archived-modules/aap-hardening/ref-keep-up-to-date.adoc new file mode 100644 index 0000000000..d3a56bde26 --- /dev/null +++ b/downstream/archive/archived-modules/aap-hardening/ref-keep-up-to-date.adoc @@ -0,0 +1,18 @@ +[id="ref-keep-up-to-date"] + += Keeping everything up to date + +For some {RHEL} servers, such as a lab or other non-production systems, you might want to install all available patches on a regular cadence. +The following example playbook might be used in a job template that is scheduled to run weekly to update the system with all of the latest RPMs. + +---- +- name: Install all available RPM updates + hosts: target_hosts + become: true + + tasks: + - name: Install latest RPMs + ansible.builtin.dnf: + name: '*' + state: latest +---- \ No newline at end of file diff --git a/downstream/modules/aap-hardening/ref-private-automation-hub-authentication.adoc b/downstream/archive/archived-modules/aap-hardening/ref-private-automation-hub-authentication.adoc similarity index 100% rename from downstream/modules/aap-hardening/ref-private-automation-hub-authentication.adoc rename to downstream/archive/archived-modules/aap-hardening/ref-private-automation-hub-authentication.adoc diff --git a/downstream/archive/archived-modules/aap-hardening/ref-specify-package-versions.adoc b/downstream/archive/archived-modules/aap-hardening/ref-specify-package-versions.adoc new file mode 100644 index 0000000000..416d387aaf --- /dev/null +++ b/downstream/archive/archived-modules/aap-hardening/ref-specify-package-versions.adoc @@ -0,0 +1,42 @@ +[id="ref-specify-package-versions"] + += Specifying package versions + +For production systems, a well-established configuration management practice is to deploy only known, tested combinations of software to ensure that systems are configured correctly and perform as expected.
+This includes deploying only known versions of operating system software and patches to ensure that system updates do not introduce problems with production applications. + +[NOTE] +==== +The following example playbook installs a specific version of the `httpd` RPM and its dependencies when the target host uses the RHEL 9 operating system. +This playbook does not take action if the specified versions are already in place or if a different version of RHEL is installed. +==== +---- +- name: Install specific RPM versions + hosts: target_hosts + gather_facts: true + become: true + + vars: + httpd_packages_rhel9: + - httpd-2.4.53-11.el9_2.5 + - httpd-core-2.4.53-11.el9_2.5 + - httpd-filesystem-2.4.53-11.el9_2.5 + - httpd-tools-2.4.53-11.el9_2.5 + - mod_http2-1.15.19-4.el9_2.4 + - mod_lua-2.4.53-11.el9_2.5 + + tasks: + - name: Install httpd and dependencies + ansible.builtin.dnf: + name: '{{ httpd_packages_rhel9 }}' + state: present + allow_downgrade: true + when: + - ansible_distribution == "RedHat" + - ansible_distribution_major_version == "9" +---- + +[NOTE] +==== +When `allow_downgrade: true` is set, any defined package that is already installed at a newer version is downgraded to the specified version instead. +==== \ No newline at end of file diff --git a/downstream/archive/archived-modules/aap-migration/proc-managed-target-import.adoc b/downstream/archive/archived-modules/aap-migration/proc-managed-target-import.adoc new file mode 100644 index 0000000000..ea2f58ac20 --- /dev/null +++ b/downstream/archive/archived-modules/aap-migration/proc-managed-target-import.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: PROCEDURE + +[id="managed-target-import"] += Importing the migration content to the target environment + +To import the migration content into your target environment, perform the following steps. + +.Procedure + +. Coordinate with your Managed {PlatformNameShort} provider for database imports. +. Import database dumps to the Managed {PlatformNameShort} environment. +. Apply exported secrets where permitted in your Managed {PlatformNameShort} environment. +. Transfer applicable custom configurations. diff --git a/downstream/archive/archived-modules/aap-migration/proc-managed-target-prep.adoc b/downstream/archive/archived-modules/aap-migration/proc-managed-target-prep.adoc new file mode 100644 index 0000000000..284224a738 --- /dev/null +++ b/downstream/archive/archived-modules/aap-migration/proc-managed-target-prep.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: PROCEDURE + +[id="managed-target-prep"] += Preparing and assessing the target environment + +To prepare and assess your target environment, perform the following steps. + +.Procedure + +. Confirm that the Managed {PlatformNameShort} environment is provisioned correctly. +. Verify that the PostgreSQL database is on version 15 (a verification sketch follows this procedure). +. Document baseline configuration of the Managed {PlatformNameShort} environment. +. Identify service limitations and differences from the source environment. +. Ensure enough resources are allocated in the managed environment.
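The database version check in the procedure above can be automated. The following playbook is a minimal sketch, not part of the documented migration steps: it assumes the `community.postgresql` collection is installed and that a hypothetical `database` inventory group points at the PostgreSQL host.

----
# Illustrative sketch only: verify that the target database is PostgreSQL 15.
- name: Verify the PostgreSQL server version
  hosts: database          # hypothetical inventory group
  become: true
  become_user: postgres

  tasks:
    - name: Gather PostgreSQL server facts
      community.postgresql.postgresql_info:
        filter: version
      register: pg_info

    - name: Fail if the server is not on PostgreSQL 15
      ansible.builtin.assert:
        that:
          - pg_info.version.major == 15
        fail_msg: "Expected PostgreSQL 15, found {{ pg_info.version.raw }}"
----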
diff --git a/downstream/archive/archived-modules/aap-migration/proc-managed-validation.adoc b/downstream/archive/archived-modules/aap-migration/proc-managed-validation.adoc new file mode 100644 index 0000000000..1ce328dc89 --- /dev/null +++ b/downstream/archive/archived-modules/aap-migration/proc-managed-validation.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: PROCEDURE + +[id="managed-validation"] += Validating the target environment + +To validate your migrated environment, perform the following steps. + +.Procedure +. Verify all migrated components are functional. +. Test workflows and automation processes. +. Validate user access and permissions. +. Confirm content synchronization and availability. +. Test integration with Managed {PlatformNameShort}-specific features. +. Establish monitoring and support procedures. diff --git a/downstream/modules/builder/con-about-builder.adoc b/downstream/archive/archived-modules/builder/con-about-builder.adoc similarity index 100% rename from downstream/modules/builder/con-about-builder.adoc rename to downstream/archive/archived-modules/builder/con-about-builder.adoc diff --git a/downstream/modules/builder/con-ansible-config-file-path.adoc b/downstream/archive/archived-modules/builder/con-ansible-config-file-path.adoc similarity index 100% rename from downstream/modules/builder/con-ansible-config-file-path.adoc rename to downstream/archive/archived-modules/builder/con-ansible-config-file-path.adoc diff --git a/downstream/modules/central-auth/con-central-auth-reqs.adoc b/downstream/archive/archived-modules/central-auth/con-central-auth-reqs.adoc similarity index 96% rename from downstream/modules/central-auth/con-central-auth-reqs.adoc rename to downstream/archive/archived-modules/central-auth/con-central-auth-reqs.adoc index d801d535a0..6c4a67a304 100644 --- a/downstream/modules/central-auth/con-central-auth-reqs.adoc +++ b/downstream/archive/archived-modules/central-auth/con-central-auth-reqs.adoc @@ -4,7 +4,7 @@ There are several minimum requirements to install and run {AAPCentralAuth}: -* Any operating system that runs Java +* A supported RHEL8 based server that runs Java * Java 8 JDK * zip or gzip and tar * At least 512mb of RAM diff --git a/downstream/modules/central-auth/proc-aap-configure-centralauth.adoc b/downstream/archive/archived-modules/central-auth/proc-aap-configure-centralauth.adoc similarity index 100% rename from downstream/modules/central-auth/proc-aap-configure-centralauth.adoc rename to downstream/archive/archived-modules/central-auth/proc-aap-configure-centralauth.adoc diff --git a/downstream/modules/central-auth/proc-configure-central-auth-generic-oidc-settings.adoc b/downstream/archive/archived-modules/central-auth/proc-configure-central-auth-generic-oidc-settings.adoc similarity index 100% rename from downstream/modules/central-auth/proc-configure-central-auth-generic-oidc-settings.adoc rename to downstream/archive/archived-modules/central-auth/proc-configure-central-auth-generic-oidc-settings.adoc diff --git a/downstream/modules/central-auth/proc-login-centralauth.adoc b/downstream/archive/archived-modules/central-auth/proc-login-centralauth.adoc similarity index 100% rename from downstream/modules/central-auth/proc-login-centralauth.adoc rename to downstream/archive/archived-modules/central-auth/proc-login-centralauth.adoc diff --git a/downstream/modules/central-auth/proc-running-aap-install.adoc b/downstream/archive/archived-modules/central-auth/proc-running-aap-install.adoc similarity index 100% rename from 
downstream/modules/central-auth/proc-running-aap-install.adoc
rename to downstream/archive/archived-modules/central-auth/proc-running-aap-install.adoc
diff --git a/downstream/modules/core/con-execution-environments.adoc b/downstream/archive/archived-modules/core
similarity index 100%
rename from downstream/modules/core/con-execution-environments.adoc
rename to downstream/archive/archived-modules/core
diff --git a/downstream/modules/dev-guide/proc-select-custom-venv-export.adoc b/downstream/archive/archived-modules/dev-guide
similarity index 87%
rename from downstream/modules/dev-guide/proc-select-custom-venv-export.adoc
rename to downstream/archive/archived-modules/dev-guide
index 2574e1f25a..cf20aac006 100644
--- a/downstream/modules/dev-guide/proc-select-custom-venv-export.adoc
+++ b/downstream/archive/archived-modules/dev-guide
@@ -3,7 +3,7 @@
 = Selecting the custom virtual environment to export
 
 [role="_abstract"]
-Select the custom virtual environment you wish to export using `awx-manage export_custom_venv` command.
+Select the custom virtual environment you want to export by using the `awx-manage export_custom_venv` command.
 
 .Procedure
diff --git a/downstream/modules/devtools/proc-create-molecule-scenario.adoc b/downstream/archive/archived-modules/devtools/proc-create-molecule-scenario.adoc
similarity index 98%
rename from downstream/modules/devtools/proc-create-molecule-scenario.adoc
rename to downstream/archive/archived-modules/devtools/proc-create-molecule-scenario.adoc
index ca32ca2e62..30243c5b2b 100644
--- a/downstream/modules/devtools/proc-create-molecule-scenario.adoc
+++ b/downstream/archive/archived-modules/devtools/proc-create-molecule-scenario.adoc
@@ -1,4 +1,4 @@
-[id="create-molcule-scenario"]
+[id="create-molcule-scenario_{context}"]
 
 = Creating a molecule scenario
diff --git a/downstream/modules/devtools/proc-create-python-venv.adoc b/downstream/archive/archived-modules/devtools/proc-create-python-venv.adoc
similarity index 97%
rename from downstream/modules/devtools/proc-create-python-venv.adoc
rename to downstream/archive/archived-modules/devtools/proc-create-python-venv.adoc
index 7198090501..1b5a1759c6 100644
--- a/downstream/modules/devtools/proc-create-python-venv.adoc
+++ b/downstream/archive/archived-modules/devtools/proc-create-python-venv.adoc
@@ -1,4 +1,4 @@
-[id="create-python-venv"]
+[id="create-python-venv_{context}"]
 
 = Creating a Python virtual environment
diff --git a/downstream/archive/archived-modules/devtools/proc-devtools-molecule-test-roles-collection.adoc b/downstream/archive/archived-modules/devtools/proc-devtools-molecule-test-roles-collection.adoc
new file mode 100644
index 0000000000..be8c54d156
--- /dev/null
+++ b/downstream/archive/archived-modules/devtools/proc-devtools-molecule-test-roles-collection.adoc
@@ -0,0 +1,41 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="devtools-molecule-test-roles-collection_{context}"]
+= Using Molecule to test your roles
+
+It is useful to run your automation content in a test environment before using it to automate production infrastructure.
+Testing ensures the automation works as designed.
+
+Molecule is a complete testing framework designed to help you automate the testing of Ansible roles in different environments,
+ensuring that the roles behave as expected across various platforms and configurations.
+
+A Molecule scenario is a set of configurations and tests for roles within a collection.
+You can have as many scenarios as you like, and Molecule runs them one after the other.
+
+* `molecule.yml` is the central configuration entry point for Molecule per scenario.
+With this file, you can configure each tool that Molecule employs when testing your role.
+* `create.yml` is a playbook file used for creating the instances and storing data in `instance-config`.
+* `converge.yml` is the playbook file that contains the call for your role.
+Molecule invokes this playbook with `ansible-playbook` and runs it against an instance created by the driver.
+* `destroy.yml` contains the Ansible code for destroying the instances and removing them from `instance-config`.
+
+. Navigate to the `extensions/` directory in your collection and initialize a new default molecule scenario:
++
+----
+molecule init scenario
+----
++
+The generated `converge.yml` playbook contains the call for your role, for example:
++
+----
+---
+- name: Include a role from a collection
+  hosts: localhost
+  gather_facts: false
+  tasks:
+    - name: Testing role
+      ansible.builtin.include_role:
+        name: foo.bar.my_role
+        tasks_from: main.yml
+----
+
+// https://www.ansible.com/blog/developing-and-testing-ansible-roles-with-molecule-and-podman-part-1/
+// https://www.ansible.com/blog/developing-and-testing-ansible-roles-with-molecule-and-podman-part-2/
+
diff --git a/downstream/archive/archived-modules/devtools/proc-devtools-run-roles-collection.adoc b/downstream/archive/archived-modules/devtools/proc-devtools-run-roles-collection.adoc
new file mode 100644
index 0000000000..b72467f11d
--- /dev/null
+++ b/downstream/archive/archived-modules/devtools/proc-devtools-run-roles-collection.adoc
@@ -0,0 +1,28 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="devtools-run-roles-collection_{context}"]
+= Running and testing your collection
+
+When you are developing your roles, you can use `ansible-lint` in the Ansible
+extension to display potential rule violations in the terminal in {VSCode}.
+
+When you package your collection and install it into your playbook projects,
+the code extension autocomplete feature is available for your collection.
+// This helps you write functional playbooks.
+
+If you have your Ansible collection PATH set, you can use Ansible Navigator
+to browse your collection and its contents.
+
+Use `ansible-navigator` to run your playbooks; its interactive output is useful for troubleshooting
+because you can explore the output at various depth levels, as shown in the following example.
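+
+For example, the following commands browse your installed collections and run a playbook in interactive mode. The playbook name `site.yml` is a placeholder:
+
+----
+# Browse the collections that ansible-navigator can find
+ansible-navigator collections
+
+# Run a playbook in interactive mode and drill down into plays, tasks, and hosts
+ansible-navigator run site.yml
+----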
+ +== Using your collection in playbooks + +// +Collection path + +// == Navigator + +Autocompletion in playbooks - connect scaffolding of both projects + diff --git a/downstream/modules/devtools/proc-directory-setup.adoc b/downstream/archive/archived-modules/devtools/proc-directory-setup.adoc similarity index 93% rename from downstream/modules/devtools/proc-directory-setup.adoc rename to downstream/archive/archived-modules/devtools/proc-directory-setup.adoc index 6bf73ecd20..3deaa52afc 100644 --- a/downstream/modules/devtools/proc-directory-setup.adoc +++ b/downstream/archive/archived-modules/devtools/proc-directory-setup.adoc @@ -1,4 +1,4 @@ -[id="directory-setup"] +[id="directory-setup_{context}"] = Setting up a directory for your playbooks diff --git a/downstream/archive/archived-modules/devtools/proc-rhdh-add-additional-scm.adoc b/downstream/archive/archived-modules/devtools/proc-rhdh-add-additional-scm.adoc new file mode 100644 index 0000000000..485255619c --- /dev/null +++ b/downstream/archive/archived-modules/devtools/proc-rhdh-add-additional-scm.adoc @@ -0,0 +1,49 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-add-additional-scm_{context}"] += Adding additional Source Control Management options + +The standard Ansible plug-ins templates are preconfigured to support GitHub Cloud. +Follow the procedure below to add support for additional Source Control Management (SCM) solutions. + +.Procedure + +. Create a fork of the link:https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml[Ansible plug-ins software templates repository]. +. In your repository, update the `enum` and `enumNames` keys with the SCM values. +. Update the software template `action` to match your SCM type. +. Register the forked repository with your customized templates in {RHDH}. + +For example, if you wanted to add GitLab as an SCM, your software template file would look similar to the following: + +---- +... +spec: + ... + parameters: + ... + properties: + sourceControl: + title: Select source control option + type: string + description: Select the source control option for your Ansible project. + default: gitlab.com + enum: + - gitlab.com + enumNames: + - 'GitLab' +... + +---- + +Under the `steps` section, use the appropriate action for your SCM: + +---- + steps: + ... + - id: publish + name: Publish + action: publish:gitlab + ... + +---- + diff --git a/downstream/archive/archived-modules/devtools/proc-rhdh-backup-operator-configmap.adoc b/downstream/archive/archived-modules/devtools/proc-rhdh-backup-operator-configmap.adoc new file mode 100644 index 0000000000..748ee90ee3 --- /dev/null +++ b/downstream/archive/archived-modules/devtools/proc-rhdh-backup-operator-configmap.adoc @@ -0,0 +1,26 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-backup-operator-configmap_{context}"] += Backing up your {RHDHShort} Operator ConfigMap + +Before you install {AAPRHDH}, create a local copy of the ConfigMap for the {RHDHShort} Operator. +You can use a section of the ConfigMap when you are populating a custom ConfigMap. + +.Procedure + +// Is export KUBECONFIG=/home/secrets/rosa/kubeconfig needed? + +. Find the namespace for your {RHDHShort} Operator. ++ +When you installed the {RHDHShort} Operator, a namespace was created for it. +Select *Topology* and look for the {RHDHShort} Operator in the *Project* dropdown list. +The default namespace is `rhdh-operator`. +. Run the following command to make a copy of the ConfigMap for your {RHDHShort} Operator, `backstage-default-config`. 
++
+Replace `<namespace>` with your {RHDHShort} Operator namespace, and `<filename>` with
+the filename you want to use for your copy of the {RHDHShort} Operator ConfigMap.
++
+----
+$ oc get configmap backstage-default-config -n <namespace> -o yaml > <filename>
+----
+
diff --git a/downstream/archive/archived-modules/devtools/proc-rhdh-create-custom-configmap-operator-install.adoc b/downstream/archive/archived-modules/devtools/proc-rhdh-create-custom-configmap-operator-install.adoc
new file mode 100644
index 0000000000..589cae3465
--- /dev/null
+++ b/downstream/archive/archived-modules/devtools/proc-rhdh-create-custom-configmap-operator-install.adoc
@@ -0,0 +1,72 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-create-operator-custom-configmap-operator-install_{context}"]
+= Creating a custom Operator ConfigMap
+
+Create a custom ConfigMap, for instance `rhdh-custom-config`, for your project.
+For more details about creating a custom ConfigMap, see
+link:{BaseURL}/red_hat_developer_hub/{RHDHVers}/html-single/administration_guide_for_red_hat_developer_hub/index#proc-add-custom-app-config-file-ocp-operator_admin-rhdh[Adding a custom application configuration file to OpenShift Container Platform using the Operator]
+in the _Administration guide for Red Hat Developer Hub_.
+
+Populate the ConfigMap with YAML from the backup that you made of the {RHDHShort} Operator ConfigMap.
+// This enables the dynamic plug-ins specific to the backstage showcase.
+
+.Prerequisites
+
+* You have saved a backup copy of the ConfigMap for the {RHDHShort} Operator.
+
+.Procedure
+
+. In the OpenShift web console, navigate to the project you created.
+. Click *ConfigMaps* in the navigation pane.
+. Click *Create ConfigMap*.
+. Replace the default YAML code in the new ConfigMap with the following code:
++
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: rhdh-custom-config
+data:
+  deployment.yaml: |-
+    # Replace with RHDH Operator ConfigMap deployment.yaml block here
+----
+. Copy the `deployment.yaml:` section from your local copy of the {RHDHShort} Operator ConfigMap.
+. Paste the `deployment.yaml:` section into the `rhdh-custom-config` ConfigMap, replacing the `deployment.yaml:` line.
+. Add a sidecar container (`ansible-devtools-server`) to the list of containers under `resources` in the `deployment.spec.template.spec.[containers]` block of the ConfigMap:
++
+----
+    spec:
+      replicas: 1
+      selector:
+        matchLabels:
+          rhdh.redhat.com/app:
+      template:
+        metadata:
+          labels:
+            rhdh.redhat.com/app:
+        spec:
+          ...
+          containers:
+            - name: backstage-backend
+              ...
+            - resources: {} # Add sidecar container for Ansible plug-ins
+              terminationMessagePath: /dev/termination-log
+              name: ansible-devtools-server
+              command:
+                - adt
+                - server
+              ports:
+                - containerPort: 8000
+                  protocol: TCP
+              imagePullPolicy: IfNotPresent
+              terminationMessagePolicy: File
+              image: 'ghcr.io/ansible/community-ansible-dev-tools:latest'
+
+----
+. Click btn:[Create] to create the ConfigMap.
+
+.Verification
+
+To view your new ConfigMap, click *ConfigMaps* in the navigation pane.
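+
+Optionally, you can confirm from the command line that the new ConfigMap exists. This is a minimal sketch that uses the names from this example; replace `<namespace>` with your project namespace:
+
+----
+# List the custom ConfigMap in the project namespace
+$ oc get configmap rhdh-custom-config -n <namespace>
+----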
+
diff --git a/downstream/archive/archived-modules/devtools/proc-rhdh-operator-add-custom-configmap-cr.adoc b/downstream/archive/archived-modules/devtools/proc-rhdh-operator-add-custom-configmap-cr.adoc
new file mode 100644
index 0000000000..ca402f8f80
--- /dev/null
+++ b/downstream/archive/archived-modules/devtools/proc-rhdh-operator-add-custom-configmap-cr.adoc
@@ -0,0 +1,34 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-operator-add-custom-configmap-cr_{context}"]
+= Adding the rhdh-custom-config file to the {RHDHShort} Operator Custom Resource
+
+Update the {RHDHShort} Operator Custom Resource to add the `rhdh-custom-config` file.
+
+. In the OpenShift console, select the *Topology* view.
+. Click *More actions {MoreActionsIcon}* on the {RHDHShort} Operator Custom Resource and select *Edit backstage* to edit the Custom Resource.
+. Add a `rawRuntimeConfig:` block for your custom ConfigMap to the `spec:` block.
+It must have the same indentation level as the `spec.application:` block.
++
+----
+spec:
+  application:
+  ...
+  database:
+  ...
+  rawRuntimeConfig:
+    backstageConfig: rhdh-custom-config
+
+----
+. Click btn:[Save].
++
+The {RHDHShort} Operator redeploys the pods to reflect the updated Custom Resource.
+
+
+// .Verification
+
+// We should be able to see existing config maps that handle the app-config for rhdh instance and a different configMap that would serve the dynamic plugins that are being installed.
+
+// Considering the custom ConfigMaps are named -
+// - app-config-rhdh - Holds baseUrl, template config, plugin-specific config, and RBAC configuration
+// - rhaap-dynamic-plugins-config - contains dynamic plugins to be installed
+
diff --git a/downstream/archive/archived-modules/devtools/proc-rhdh-operator-install-add-plugins-app-config.adoc b/downstream/archive/archived-modules/devtools/proc-rhdh-operator-install-add-plugins-app-config.adoc
new file mode 100644
index 0000000000..d5d8a85495
--- /dev/null
+++ b/downstream/archive/archived-modules/devtools/proc-rhdh-operator-install-add-plugins-app-config.adoc
@@ -0,0 +1,109 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-operator-install-add-plugins-app-config_{context}"]
+= Adding the {AAPRHDHShort} configuration to app-config-rhdh
+
+Add the Ansible plug-in-specific configuration to the `app-config-rhdh` ConfigMap,
+and add a route for the templates for the automation content scaffolder plug-in.
+
+
+.Procedure
+
+. In the OpenShift web console, select *ConfigMaps*.
+. Select the `app-config-rhdh` ConfigMap.
+. Select the *YAML* tab to edit the ConfigMap.
+. In the `data.app-config-custom.yaml.catalog` block, add a `locations:` block for the GitHub repository for the templates that are used to scaffold collections and playbooks. The `locations:` block must have the same indentation as the `rules:` block that precedes it.
++
+----
+    locations:
+      - type: url
+        target: https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml
+        rules:
+          - allow: [Template]
+----
+. In the `data.app-config-custom.yaml` block, add an `ansible:` block with the same indentation as `catalog`, to point to your Dev Spaces instance and your {PlatformNameShort} instance.
+Replace each `baseUrl` value with the URL for your own instance.
++
+----
+    ansible:
+      devSpaces:
+        baseUrl: 'https://devspaces.apps.example-cluster.com/'
+      creatorService:
+        baseUrl: '127.0.0.1'
+        port: '8000'
+      rhaap:
+        baseUrl: 'https://controller.acme.demoredhat.com'
+        token: ...
+        checkSSL: false
+----
++
+After you have added both blocks, the ConfigMap resembles the following:
++
+----
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: app-config-rhdh
+data:
+  app-config-custom.yaml: |
+    app:
+      baseUrl: https://
+    backend:
+      baseUrl: https://
+      cors:
+        origin: https://
+    catalog:
+      rules:
+        - allow: [Component, System, Group, Resource, Location, Template, API, User]
+      locations:
+        - type: url # Add RHDH templates URL
+          target: https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml
+          rules:
+            - allow: [Template]
+    ansible: # Add Dev Spaces and AAP URLs
+      devSpaces:
+        baseUrl: 'https://devspaces.apps.example-cluster.com/'
+      creatorService:
+        baseUrl: '127.0.0.1'
+        port: '8000'
+      rhaap:
+        baseUrl: 'https://controller.acme.demoredhat.com'
+        token: ...
+        checkSSL: false
+    auth:
+      environment: development
+      providers:
+        guest:
+          dangerouslyAllowOutsideDevelopment: true
+        github:
+          development:
+            clientId: '...'
+            clientSecret: '...'
+    integrations:
+      github:
+        - host: github.com
+          token: ...
+    enabled:
+      github: true
+    signInPage: github
+    permission:
+      enabled: true
+      rbac:
+        admin:
+          users:
+            - name: ...
+          superUsers:
+            - name: ...
+----
+. Click btn:[Save].
++
+Your {RHDHShort} instance reloads.
+
+.Verification
+
+. Select the *Topology* view in the OpenShift web console to monitor the rolling update.
+. When the update is complete, click the *Open URL* icon on the deployment pod to open your {RHDH} instance in a browser window.
+. Select *Create* in the navigation pane.
+The *Create Ansible Collection Project* and *Create Ansible Playbook Project* templates are displayed.
+//When you perform a rolling update in OpenShift, the new pods are baked with the updated image or configuration before being deployed.
+
diff --git a/downstream/archive/archived-modules/devtools/proc-rhdh-uninstall-ocp-operator-sidecar.adoc b/downstream/archive/archived-modules/devtools/proc-rhdh-uninstall-ocp-operator-sidecar.adoc
new file mode 100644
index 0000000000..1269e25027
--- /dev/null
+++ b/downstream/archive/archived-modules/devtools/proc-rhdh-uninstall-ocp-operator-sidecar.adoc
@@ -0,0 +1,22 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-uninstall-ocp-operator-sidecar_{context}"]
+= Removing the Sidecar container from the {RHDHShort} Custom Resource ConfigMap
+
+// Remove sidecar from rhdh-custom-config
+
+// Do this if you need config apart from the Sidecar container to your `rhdh-custom-config` Custom Resource ConfigMap.
+
+If you added extra configuration to the ConfigMap where you added the sidecar container (`rhdh-custom-config` in our example), then you cannot remove the reference to the ConfigMap from the {RHDHShort} Custom Resource.
+
+Instead, you must remove only the YAML code relating to the sidecar from the ConfigMap, as shown in the example that follows.
+
+//created a custom resource ConfigMap
+//as described in the xref:rhdh-create-operator-custom-configmap-operator-install_rhdh-install-ocp-operator[Creating a custom Operator ConfigMap]
+//and you added only the {ToolsName} sidecar container to it, then you can remove the reference to the ConfigMap from the {RHDHShort} Custom Resource.
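+
+For reference, the sidecar block to remove is the container entry that was added to the `deployment.spec.template.spec.[containers]` block when you created the custom ConfigMap. It resembles the following:
+
+----
+            - resources: {} # Sidecar container for Ansible plug-ins
+              terminationMessagePath: /dev/termination-log
+              name: ansible-devtools-server
+              command:
+                - adt
+                - server
+              ports:
+                - containerPort: 8000
+                  protocol: TCP
+              imagePullPolicy: IfNotPresent
+              terminationMessagePolicy: File
+              image: 'ghcr.io/ansible/community-ansible-dev-tools:latest'
+----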
+
diff --git a/downstream/archive/archived-modules/devtools/proc-rhdh-uninstall-ocp-operator.adoc b/downstream/archive/archived-modules/devtools/proc-rhdh-uninstall-ocp-operator.adoc
new file mode 100644
index 0000000000..63221b218b
--- /dev/null
+++ b/downstream/archive/archived-modules/devtools/proc-rhdh-uninstall-ocp-operator.adoc
@@ -0,0 +1,80 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-uninstall-ocp-operator_{context}"]
+= Uninstalling an Operator installation
+
+To delete the dynamic plug-ins from your installation, you must edit the ConfigMaps
+that reference Ansible.
+
+The deployment reloads automatically when the ConfigMaps are updated.
+You do not need to reload the deployment manually.
+
+.Procedure
+
+. Open the custom ConfigMap where you referenced the dynamic plug-ins, `rhaap-dynamic-plugins-config`.
+.. Locate the dynamic plug-ins in the `plugins:` block.
++
+*** To disable the dynamic plug-ins, update the `disabled` attribute to `true` for the three dynamic plug-ins.
+*** To delete the dynamic plug-ins, delete the lines that reference the dynamic plug-ins from the `plugins:` block:
++
+----
+  - disabled: false
+    package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz'
+    integrity:
+    ...
+  - disabled: false
+    package: >-
+      http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz
+    integrity:
+    ...
+  - disabled: false
+    package: >-
+      http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz
+    integrity:
+----
+.. Click btn:[Save].
+. To completely remove all the Ansible plug-ins, remove the entire list entries that contain the following packages:
++
+----
+http://plugin-registry:8080/ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz
+http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz
+http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz
+----
+. Open the custom {RHDH} ConfigMap, `app-config-rhdh`.
+.. Remove the `locations:` block to delete the templates from the {RHDHShort} instance.
+.. Remove the `ansible:` block to delete the Ansible-specific configuration.
++
+----
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: app-config-rhdh
+data:
+  app-config-custom.yaml: |
+    app:
+      baseUrl: https://
+    backend:
+      baseUrl: https://
+      cors:
+        origin: https://
+    catalog:
+      rules:
+        - allow: [Component, System, Group, Resource, Location, Template, API, User]
+      locations:
+        - type: url
+          target: https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml
+          rules:
+            - allow: [Template]
+    ansible:
+      devSpaces:
+        baseUrl: 'https://devspaces.apps.ansible-rhdh-dev.testing.ansible.com/'
+      creatorService:
+        baseUrl: '127.0.0.1'
+        port: '8000'
+      rhaap:
+        baseUrl: 'https://controller.acme.demoredhat.com'
+        token: ...
+        checkSSL: false
+----
+. Click btn:[Save].
+
diff --git a/downstream/archive/archived-modules/devtools/proc-rhdh-uninstall-ocp-remove-sidecar-cr.adoc b/downstream/archive/archived-modules/devtools/proc-rhdh-uninstall-ocp-remove-sidecar-cr.adoc
new file mode 100644
index 0000000000..fc1abdf258
--- /dev/null
+++ b/downstream/archive/archived-modules/devtools/proc-rhdh-uninstall-ocp-remove-sidecar-cr.adoc
@@ -0,0 +1,38 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-uninstall-ocp-operator-remove-sidecar-cr_{context}"]
+= Removing the custom resource ConfigMap from the {RHDHShort} Operator Custom Resource
+
+// If a custom resource is created to load the sidecar container,
+//we need to be specific about what we remove while we are willing to remove just the sidecar container,
+// because the customResource acts as the source of truth for the entire RHDH deployment then.
+//
+If you created a custom resource ConfigMap
+as described in the xref:rhdh-create-operator-custom-configmap-operator-install_rhdh-install-ocp-operator[Creating a custom Operator ConfigMap]
+and you added only the {ToolsName} sidecar container to it, then you can remove the reference to the ConfigMap from the {RHDHShort} Custom Resource.
+
+[NOTE]
+====
+Ensure that you do not have any additional need for the custom ConfigMap before you remove it from the Custom Resource.
+====
+
+.Procedure
+
+. In the OpenShift console, select the *Topology* view.
+. Click *More actions {MoreActionsIcon}* on the {RHDHShort} Operator Custom Resource and select *Edit backstage* to edit the Custom Resource.
+. Remove the ConfigMap reference from the {RHDHShort} Operator Custom Resource.
+For this example, the ConfigMap name is `rhdh-custom-config`.
++
+----
+...
+spec:
+  application:
+  ...
+  database:
+  ...
+  rawRuntimeConfig: # Remove the backstageConfig YAML key below
+    backstageConfig: rhdh-custom-config
+
+----
+. Click btn:[Save].
+
diff --git a/downstream/modules/devtools/proc-setup-vscode-workspace.adoc b/downstream/archive/archived-modules/devtools/proc-setup-vscode-workspace.adoc
similarity index 93%
rename from downstream/modules/devtools/proc-setup-vscode-workspace.adoc
rename to downstream/archive/archived-modules/devtools/proc-setup-vscode-workspace.adoc
index ddd87b26be..9f238252ed 100644
--- a/downstream/modules/devtools/proc-setup-vscode-workspace.adoc
+++ b/downstream/archive/archived-modules/devtools/proc-setup-vscode-workspace.adoc
@@ -1,4 +1,4 @@
-[id="setup-vscode-workspace"]
+[id="setup-vscode-workspace_{context}"]
 
 = Setting up a {VSCode} workspace
diff --git a/downstream/modules/eda/con-about-event-driven-ansible.adoc b/downstream/archive/archived-modules/eda/con-about-event-driven-ansible.adoc
similarity index 100%
rename from downstream/modules/eda/con-about-event-driven-ansible.adoc
rename to downstream/archive/archived-modules/eda/con-about-event-driven-ansible.adoc
diff --git a/downstream/archive/archived-modules/eda/con-credential-types-list-view.adoc b/downstream/archive/archived-modules/eda/con-credential-types-list-view.adoc
new file mode 100644
index 0000000000..9d1c192825
--- /dev/null
+++ b/downstream/archive/archived-modules/eda/con-credential-types-list-view.adoc
@@ -0,0 +1,16 @@
+[id="eda-credentials-list-view"]
+
+= Credentials list view
+
+When you log in to the {PlatformNameShort} and select {MenuADCredentials}, the Credentials page has a pre-loaded *Decision Environment Container Registry* credential. When you create your own credentials, they are added to this list view.
+
+From the menu bar, you can search for credentials in the *Name* search field.
+
+You also have the following options in the menu bar:
+
+* Choose how fields are shown in the list view by clicking the btn:[Manage columns] icon. You can arrange your fields in four ways:
+** *Column* - Shows the column in the table.
+** *Description* - Shows the column when the item is expanded as a full width description.
+** *Expanded* - Shows the column when the item is expanded as a detail.
+** *Hidden* - Hides the column.
+* Choose between a btn:[List view] or a btn:[Card view] by clicking the icons.
diff --git a/downstream/modules/eda/con-decision-environments.adoc b/downstream/archive/archived-modules/eda/con-decision-environments.adoc
similarity index 100%
rename from downstream/modules/eda/con-decision-environments.adoc
rename to downstream/archive/archived-modules/eda/con-decision-environments.adoc
diff --git a/downstream/modules/eda/con-rulebook-actions.adoc b/downstream/archive/archived-modules/eda/con-rulebook-actions.adoc
similarity index 100%
rename from downstream/modules/eda/con-rulebook-actions.adoc
rename to downstream/archive/archived-modules/eda/con-rulebook-actions.adoc
diff --git a/downstream/modules/eda/proc-eda-set-up-token.adoc b/downstream/archive/archived-modules/eda/proc-eda-set-up-token.adoc
similarity index 100%
rename from downstream/modules/eda/proc-eda-set-up-token.adoc
rename to downstream/archive/archived-modules/eda/proc-eda-set-up-token.adoc
diff --git a/downstream/archive/archived-modules/eda/ref-deploy-eda-controller-with-aap-operator-on-ocp.adoc b/downstream/archive/archived-modules/eda/ref-deploy-eda-controller-with-aap-operator-on-ocp.adoc
new file mode 100644
index 0000000000..7f0e0fb22d
--- /dev/null
+++ b/downstream/archive/archived-modules/eda/ref-deploy-eda-controller-with-aap-operator-on-ocp.adoc
@@ -0,0 +1,7 @@
+[id="deploying-eda-controller-with-aap-operator-on-ocp"]
+
+= Deploying {EDAcontroller} with {OperatorPlatformNameShort} on {OCPShort}
+
+{EDAName} is not limited to {PlatformNameShort} on VMs. You can also access this feature on {OperatorPlatformNameShort} on {OCPShort}. To deploy {EDAName} with {OperatorPlatformNameShort}, follow the instructions in link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/index#deploy-eda-controller-on-aap-operator-ocp[Deploying Event-Driven Ansible controller with Ansible Automation Platform Operator on OpenShift Container Platform].
+
+After successful deployment, you can connect to event sources and resolve issues more efficiently.
diff --git a/downstream/modules/eda/ref-eda-controller-tasks.adoc b/downstream/archive/archived-modules/eda/ref-eda-controller-tasks.adoc similarity index 100% rename from downstream/modules/eda/ref-eda-controller-tasks.adoc rename to downstream/archive/archived-modules/eda/ref-eda-controller-tasks.adoc diff --git a/downstream/modules/eda/ref-installing-eda-controller-on-aap.adoc b/downstream/archive/archived-modules/eda/ref-installing-eda-controller-on-aap.adoc similarity index 87% rename from downstream/modules/eda/ref-installing-eda-controller-on-aap.adoc rename to downstream/archive/archived-modules/eda/ref-installing-eda-controller-on-aap.adoc index 810e919be3..70b50a169f 100644 --- a/downstream/modules/eda/ref-installing-eda-controller-on-aap.adoc +++ b/downstream/archive/archived-modules/eda/ref-installing-eda-controller-on-aap.adoc @@ -10,5 +10,5 @@ To prepare for installation of {EDAcontroller}, review the following information When you are ready to install the {EDAcontroller}, see the procedures in the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/index[Red Hat Ansible Automation Platform Installation Guide] beginning with link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/assembly-platform-install-scenario[Chapter 3. Installing {PlatformName}]. -Lastly, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/appendix-inventory-files-vars#event-driven-ansible-controller_platform-install-scenario[Appendix A. 5. Event-Driven Ansible controller variables] in the {PlatformName} Installation Guide to view predefined variables for {EDAcontroller}. +Lastly, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/appendix-inventory-files-vars#event-driven-ansible-variables_platform-install-scenario[{EDAcontroller} variables] in the {PlatformName} Installation Guide to view predefined variables for {EDAcontroller}. 
diff --git a/downstream/modules/hub/con-enable-view-only.adoc b/downstream/archive/archived-modules/hub/con-enable-view-only.adoc similarity index 100% rename from downstream/modules/hub/con-enable-view-only.adoc rename to downstream/archive/archived-modules/hub/con-enable-view-only.adoc diff --git a/downstream/archive/archived-modules/con-namespaces.adoc b/downstream/archive/archived-modules/hub/con-namespaces.adoc similarity index 100% rename from downstream/archive/archived-modules/con-namespaces.adoc rename to downstream/archive/archived-modules/hub/con-namespaces.adoc diff --git a/downstream/modules/hub/con-remote-repos.adoc b/downstream/archive/archived-modules/hub/con-remote-repos.adoc similarity index 100% rename from downstream/modules/hub/con-remote-repos.adoc rename to downstream/archive/archived-modules/hub/con-remote-repos.adoc diff --git a/downstream/modules/hub/con-rh-certified-synclist.adoc b/downstream/archive/archived-modules/hub/con-rh-certified-synclist.adoc similarity index 52% rename from downstream/modules/hub/con-rh-certified-synclist.adoc rename to downstream/archive/archived-modules/hub/con-rh-certified-synclist.adoc index efa55f8cce..d1556e6665 100644 --- a/downstream/modules/hub/con-rh-certified-synclist.adoc +++ b/downstream/archive/archived-modules/hub/con-rh-certified-synclist.adoc @@ -2,10 +2,11 @@ = Explanation of Red Hat {CertifiedName} synclists -A synclist is a curated group of Red Hat Certified Collections that is assembled by your organization administrator. +A synclist is a curated group of Red Hat Certified Collections assembled by your organization administrator. It synchronizes with your local {HubNameMain}. Use synclists to manage only the content that you want and exclude unnecessary collections. Design and manage your synclist from the content available as part of Red Hat content on {Console} -Each synclist has its own unique repository URL that you can use to designate as a remote source for content in {HubName}. -You securely access each synclist by using an API token. \ No newline at end of file +Each synclist has its own unique repository URL that you can designate as a remote source for content in {HubName}. +You can securely access each synclist by using an API token. 
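+
+For example, you can configure a synclist repository as a Galaxy server in `ansible.cfg`. This is a minimal sketch; the server name, synclist URL path, and token value are placeholders that depend on your organization:
+
+----
+[galaxy]
+server_list = my_synclist
+
+[galaxy_server.my_synclist]
+url=https://console.redhat.com/api/automation-hub/content/<synclist-id>/
+token=<api-token>
+----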
+ diff --git a/downstream/modules/hub/con-user-access.adoc b/downstream/archive/archived-modules/hub/con-user-access.adoc similarity index 100% rename from downstream/modules/hub/con-user-access.adoc rename to downstream/archive/archived-modules/hub/con-user-access.adoc diff --git a/downstream/modules/hub/proc-add-user-to-group.adoc b/downstream/archive/archived-modules/hub/proc-add-user-to-group.adoc similarity index 100% rename from downstream/modules/hub/proc-add-user-to-group.adoc rename to downstream/archive/archived-modules/hub/proc-add-user-to-group.adoc diff --git a/downstream/modules/hub/proc-assigning-permissions.adoc b/downstream/archive/archived-modules/hub/proc-assigning-permissions.adoc similarity index 100% rename from downstream/modules/hub/proc-assigning-permissions.adoc rename to downstream/archive/archived-modules/hub/proc-assigning-permissions.adoc diff --git a/downstream/modules/hub/proc-assigning-roles.adoc b/downstream/archive/archived-modules/hub/proc-assigning-roles.adoc similarity index 100% rename from downstream/modules/hub/proc-assigning-roles.adoc rename to downstream/archive/archived-modules/hub/proc-assigning-roles.adoc diff --git a/downstream/modules/hub/proc-configure-automation-hub-server-cli.adoc b/downstream/archive/archived-modules/hub/proc-configure-automation-hub-server-cli.adoc similarity index 100% rename from downstream/modules/hub/proc-configure-automation-hub-server-cli.adoc rename to downstream/archive/archived-modules/hub/proc-configure-automation-hub-server-cli.adoc diff --git a/downstream/modules/hub/proc-configure-automation-hub-server-gui.adoc b/downstream/archive/archived-modules/hub/proc-configure-automation-hub-server-gui.adoc similarity index 98% rename from downstream/modules/hub/proc-configure-automation-hub-server-gui.adoc rename to downstream/archive/archived-modules/hub/proc-configure-automation-hub-server-gui.adoc index 954c85cdb8..2ce8d51859 100644 --- a/downstream/modules/hub/proc-configure-automation-hub-server-gui.adoc +++ b/downstream/archive/archived-modules/hub/proc-configure-automation-hub-server-gui.adoc @@ -16,7 +16,7 @@ Creating a new token revokes any previous tokens generated for {HubName}. Update . Navigate to your {ControllerName}. . Create a new credential. -.. Navigate to {MenuAMCredentials}. +.. Navigate to {MenuAECredentials}. .. Click btn:[Add]. .. Enter the name for your new credential in the *Name* field. .. Optional: Enter a description and enter or select the name of the organization with which the credential is associated. 
diff --git a/downstream/modules/hub/proc-create-super-users.adoc b/downstream/archive/archived-modules/hub/proc-create-super-users.adoc similarity index 100% rename from downstream/modules/hub/proc-create-super-users.adoc rename to downstream/archive/archived-modules/hub/proc-create-super-users.adoc diff --git a/downstream/modules/hub/proc-create-users.adoc b/downstream/archive/archived-modules/hub/proc-create-users.adoc similarity index 100% rename from downstream/modules/hub/proc-create-users.adoc rename to downstream/archive/archived-modules/hub/proc-create-users.adoc diff --git a/downstream/modules/hub/proc-delete-user.adoc b/downstream/archive/archived-modules/hub/proc-delete-user.adoc similarity index 100% rename from downstream/modules/hub/proc-delete-user.adoc rename to downstream/archive/archived-modules/hub/proc-delete-user.adoc diff --git a/downstream/modules/hub/proc-enabling-firewall-services.adoc b/downstream/archive/archived-modules/hub/proc-enabling-firewall-services.adoc similarity index 100% rename from downstream/modules/hub/proc-enabling-firewall-services.adoc rename to downstream/archive/archived-modules/hub/proc-enabling-firewall-services.adoc diff --git a/downstream/modules/hub/proc-obtaining-org-collection-url.adoc b/downstream/archive/archived-modules/hub/proc-obtaining-org-collection-url.adoc similarity index 95% rename from downstream/modules/hub/proc-obtaining-org-collection-url.adoc rename to downstream/archive/archived-modules/hub/proc-obtaining-org-collection-url.adoc index 7708a81ece..24b36ef28d 100644 --- a/downstream/modules/hub/proc-obtaining-org-collection-url.adoc +++ b/downstream/archive/archived-modules/hub/proc-obtaining-org-collection-url.adoc @@ -1,4 +1,4 @@ -[id="proc-create-api-token"] +[id="retrieve-api-token_{context}"] = Retrieving the API token for your Red Hat Certified Collection You can synchronize {CertifiedName} curated by your organization from `{Console}` to your {PrivateHubName}. 
diff --git a/downstream/modules/hub/proc-run-setup-script-hub.adoc b/downstream/archive/archived-modules/hub/proc-run-setup-script-hub.adoc similarity index 100% rename from downstream/modules/hub/proc-run-setup-script-hub.adoc rename to downstream/archive/archived-modules/hub/proc-run-setup-script-hub.adoc diff --git a/downstream/modules/hub/proc-shared-filesystem.adoc b/downstream/archive/archived-modules/hub/proc-shared-filesystem.adoc similarity index 100% rename from downstream/modules/hub/proc-shared-filesystem.adoc rename to downstream/archive/archived-modules/hub/proc-shared-filesystem.adoc diff --git a/downstream/modules/hub/proc-upload-collection.adoc b/downstream/archive/archived-modules/hub/proc-upload-collection.adoc similarity index 100% rename from downstream/modules/hub/proc-upload-collection.adoc rename to downstream/archive/archived-modules/hub/proc-upload-collection.adoc diff --git a/downstream/modules/hub/proc-verify-sso-install.adoc b/downstream/archive/archived-modules/hub/proc-verify-sso-install.adoc similarity index 100% rename from downstream/modules/hub/proc-verify-sso-install.adoc rename to downstream/archive/archived-modules/hub/proc-verify-sso-install.adoc diff --git a/downstream/modules/hub/proc_ppah-download-aap.adoc b/downstream/archive/archived-modules/hub/proc_ppah-download-aap.adoc similarity index 100% rename from downstream/modules/hub/proc_ppah-download-aap.adoc rename to downstream/archive/archived-modules/hub/proc_ppah-download-aap.adoc diff --git a/downstream/modules/hub/proc_ppah-install-ansible-core.adoc b/downstream/archive/archived-modules/hub/proc_ppah-install-ansible-core.adoc similarity index 100% rename from downstream/modules/hub/proc_ppah-install-ansible-core.adoc rename to downstream/archive/archived-modules/hub/proc_ppah-install-ansible-core.adoc diff --git a/downstream/modules/hub/proc_ppah-install-dev.adoc b/downstream/archive/archived-modules/hub/proc_ppah-install-dev.adoc similarity index 100% rename from downstream/modules/hub/proc_ppah-install-dev.adoc rename to downstream/archive/archived-modules/hub/proc_ppah-install-dev.adoc diff --git a/downstream/modules/hub/proc_ppah-install-ha.adoc b/downstream/archive/archived-modules/hub/proc_ppah-install-ha.adoc similarity index 100% rename from downstream/modules/hub/proc_ppah-install-ha.adoc rename to downstream/archive/archived-modules/hub/proc_ppah-install-ha.adoc diff --git a/downstream/modules/hub/proc_ppah-online-offline-install.adoc b/downstream/archive/archived-modules/hub/proc_ppah-online-offline-install.adoc similarity index 100% rename from downstream/modules/hub/proc_ppah-online-offline-install.adoc rename to downstream/archive/archived-modules/hub/proc_ppah-online-offline-install.adoc diff --git a/downstream/modules/hub/proc_ppah-prepare-rhel-host.adoc b/downstream/archive/archived-modules/hub/proc_ppah-prepare-rhel-host.adoc similarity index 100% rename from downstream/modules/hub/proc_ppah-prepare-rhel-host.adoc rename to downstream/archive/archived-modules/hub/proc_ppah-prepare-rhel-host.adoc diff --git a/downstream/modules/hub/ref-permissions.adoc b/downstream/archive/archived-modules/hub/ref-permissions.adoc similarity index 100% rename from downstream/modules/hub/ref-permissions.adoc rename to downstream/archive/archived-modules/hub/ref-permissions.adoc diff --git a/downstream/modules/hub/ref-variables-connect-hub-sso.adoc b/downstream/archive/archived-modules/hub/ref-variables-connect-hub-sso.adoc similarity index 94% rename from 
downstream/modules/hub/ref-variables-connect-hub-sso.adoc rename to downstream/archive/archived-modules/hub/ref-variables-connect-hub-sso.adoc index 437cb97c95..bcb4544d28 100644 --- a/downstream/modules/hub/ref-variables-connect-hub-sso.adoc +++ b/downstream/archive/archived-modules/hub/ref-variables-connect-hub-sso.adoc @@ -5,7 +5,7 @@ If you are installing {HubName} and {RHSSO} together for the first time or you h If you are installing {HubName} and you intend to connect it to an existing externally managed {RHSSO} instance, configure the variables for externally managed {RHSSO}. -For more information about these inventory variables, refer to link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/appendix-inventory-files-vars#ref-hub-variables[{HubNameMain} variables] in the _{PlatformName} Installation Guide_. +For more information about these inventory variables, refer to link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/appendix-inventory-files-vars#hub-variables[{HubNameMain} variables] in the _{PlatformName} Installation Guide_. The following variables can be configured for both {PlatformNameShort} managed and external {RHSSO}: diff --git a/downstream/modules/hub/ref_ppah-access.adoc b/downstream/archive/archived-modules/hub/ref_ppah-access.adoc similarity index 100% rename from downstream/modules/hub/ref_ppah-access.adoc rename to downstream/archive/archived-modules/hub/ref_ppah-access.adoc diff --git a/downstream/modules/hub/ref_ppah-config.adoc b/downstream/archive/archived-modules/hub/ref_ppah-config.adoc similarity index 100% rename from downstream/modules/hub/ref_ppah-config.adoc rename to downstream/archive/archived-modules/hub/ref_ppah-config.adoc diff --git a/downstream/modules/navigator/proc-installing-navigator-rhel-rpm.adoc b/downstream/archive/archived-modules/navigator/proc-installing-navigator-rhel-rpm.adoc similarity index 86% rename from downstream/modules/navigator/proc-installing-navigator-rhel-rpm.adoc rename to downstream/archive/archived-modules/navigator/proc-installing-navigator-rhel-rpm.adoc index 884dcd464d..e98e697042 100644 --- a/downstream/modules/navigator/proc-installing-navigator-rhel-rpm.adoc +++ b/downstream/archive/archived-modules/navigator/proc-installing-navigator-rhel-rpm.adoc @@ -11,6 +11,7 @@ You can install {Navigator} on Red Hat Enterprise Linux (RHEL) from an RPM. .Prerequisites +* You have installed Python 3.10 or later. * You have installed RHEL 8.6 or later. * You registered your system with Red Hat Subscription Manager. 
@@ -34,13 +35,13 @@ $ subscription-manager attach --pool= v.{PlatformVers} for RHEL 8 for x86_64 + ---- -$ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-navigator +$ sudo dnf install --enablerepo=ansible-automation-platform-2.5-for-rhel-8-x86_64-rpms ansible-navigator ---- + v.{PlatformVers} for RHEL 9 for x86-64 + ---- -$ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-navigator +$ sudo dnf install --enablerepo=ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms ansible-navigator ---- .Verification diff --git a/downstream/modules/platform/con-SM-standalone-contr-ext-database.adoc b/downstream/archive/archived-modules/platform/con-SM-standalone-contr-ext-database.adoc similarity index 100% rename from downstream/modules/platform/con-SM-standalone-contr-ext-database.adoc rename to downstream/archive/archived-modules/platform/con-SM-standalone-contr-ext-database.adoc diff --git a/downstream/modules/platform/con-SM-standalone-contr-non-inst-database.adoc b/downstream/archive/archived-modules/platform/con-SM-standalone-contr-non-inst-database.adoc similarity index 100% rename from downstream/modules/platform/con-SM-standalone-contr-non-inst-database.adoc rename to downstream/archive/archived-modules/platform/con-SM-standalone-contr-non-inst-database.adoc diff --git a/downstream/modules/platform/con-SM-standalone-hub-ext-database.adoc b/downstream/archive/archived-modules/platform/con-SM-standalone-hub-ext-database.adoc similarity index 100% rename from downstream/modules/platform/con-SM-standalone-hub-ext-database.adoc rename to downstream/archive/archived-modules/platform/con-SM-standalone-hub-ext-database.adoc diff --git a/downstream/modules/platform/con-SM-standalone-hub-non-inst-database.adoc b/downstream/archive/archived-modules/platform/con-SM-standalone-hub-non-inst-database.adoc similarity index 100% rename from downstream/modules/platform/con-SM-standalone-hub-non-inst-database.adoc rename to downstream/archive/archived-modules/platform/con-SM-standalone-hub-non-inst-database.adoc diff --git a/downstream/modules/platform/con-aap-example-architecture.adoc b/downstream/archive/archived-modules/platform/con-aap-example-architecture.adoc similarity index 89% rename from downstream/modules/platform/con-aap-example-architecture.adoc rename to downstream/archive/archived-modules/platform/con-aap-example-architecture.adoc index 2495006ab3..54a52102b7 100644 --- a/downstream/modules/platform/con-aap-example-architecture.adoc +++ b/downstream/archive/archived-modules/platform/con-aap-example-architecture.adoc @@ -20,5 +20,4 @@ The architecture for this example consists of the following: * Two execution nodes per {ControllerName} cluster .Example {PlatformNameShort} {PlatformVers} architecture -// dcd - Image in progress with graphics team and will be added once complete. 
-image::aap_ref_arch_2.4.png[Reference architecture for an example setup of a standard {PlatformNameShort} deployment] +image::rpm-b-env-a.png[Reference architecture for an example setup of a standard {PlatformNameShort} deployment] diff --git a/downstream/modules/platform/con-aap-upgrade-prereq.adoc b/downstream/archive/archived-modules/platform/con-aap-upgrade-prereq.adoc similarity index 100% rename from downstream/modules/platform/con-aap-upgrade-prereq.adoc rename to downstream/archive/archived-modules/platform/con-aap-upgrade-prereq.adoc diff --git a/downstream/modules/platform/con-aap-upgrades-legacy.adoc b/downstream/archive/archived-modules/platform/con-aap-upgrades-legacy.adoc similarity index 100% rename from downstream/modules/platform/con-aap-upgrades-legacy.adoc rename to downstream/archive/archived-modules/platform/con-aap-upgrades-legacy.adoc diff --git a/downstream/modules/platform/con-cluster-platform-ext-database.adoc b/downstream/archive/archived-modules/platform/con-cluster-platform-ext-database.adoc similarity index 100% rename from downstream/modules/platform/con-cluster-platform-ext-database.adoc rename to downstream/archive/archived-modules/platform/con-cluster-platform-ext-database.adoc diff --git a/downstream/modules/platform/con-controller-access.adoc b/downstream/archive/archived-modules/platform/con-controller-access.adoc similarity index 100% rename from downstream/modules/platform/con-controller-access.adoc rename to downstream/archive/archived-modules/platform/con-controller-access.adoc diff --git a/downstream/modules/platform/con-controller-create-users.adoc b/downstream/archive/archived-modules/platform/con-controller-create-users.adoc similarity index 97% rename from downstream/modules/platform/con-controller-create-users.adoc rename to downstream/archive/archived-modules/platform/con-controller-create-users.adoc index 695031caa6..ab57210c2d 100644 --- a/downstream/modules/platform/con-controller-create-users.adoc +++ b/downstream/archive/archived-modules/platform/con-controller-create-users.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-create-users"] Users associated with an organization are shown in the *Access* tab of the organization. diff --git a/downstream/modules/platform/con-controller-custom-dynamic-inv-scripts.adoc b/downstream/archive/archived-modules/platform/con-controller-custom-dynamic-inv-scripts.adoc similarity index 84% rename from downstream/modules/platform/con-controller-custom-dynamic-inv-scripts.adoc rename to downstream/archive/archived-modules/platform/con-controller-custom-dynamic-inv-scripts.adoc index 14db49d847..8d9a2d09d4 100644 --- a/downstream/modules/platform/con-controller-custom-dynamic-inv-scripts.adoc +++ b/downstream/archive/archived-modules/platform/con-controller-custom-dynamic-inv-scripts.adoc @@ -19,4 +19,5 @@ The credential type must specify all the necessary types of inputs. Then, when you create a credential of this type, the secrets are stored in an encrypted form. If you apply that credential to the inventory source, the script has access to those inputs. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#assembly-controller-custom-credentials[Custom Credential Types] in the {ControllerUG}. 
+TBD +//For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#assembly-controller-custom-credentials[Custom Credential Types] in _{ControllerUG}_. diff --git a/downstream/modules/platform/con-controller-custom-logos.adoc b/downstream/archive/archived-modules/platform/con-controller-custom-logos.adoc similarity index 87% rename from downstream/modules/platform/con-controller-custom-logos.adoc rename to downstream/archive/archived-modules/platform/con-controller-custom-logos.adoc index 095dfbe813..45ca86300f 100644 --- a/downstream/modules/platform/con-controller-custom-logos.adoc +++ b/downstream/archive/archived-modules/platform/con-controller-custom-logos.adoc @@ -3,7 +3,7 @@ = Custom logos and images {ControllerNameStart} supports the use of a custom logo. -You can add a custom logo by uploading an image and supplying a custom login message from the *Platform gateway settings* page. From the navigation panel, select {MenuSetGateway}. +You can add a custom logo by uploading an image and supplying a custom login message from the *{GatewayStart} settings* page. From the navigation panel, select {MenuSetGateway}. //image::ag-configure-aap-ui.png[Custom logo] For the best results, use a `.png` file with a transparent background. diff --git a/downstream/modules/platform/con-controller-enable-logging-SAML.adoc b/downstream/archive/archived-modules/platform/con-controller-enable-logging-SAML.adoc similarity index 100% rename from downstream/modules/platform/con-controller-enable-logging-SAML.adoc rename to downstream/archive/archived-modules/platform/con-controller-enable-logging-SAML.adoc diff --git a/downstream/modules/platform/con-controller-fact-scan-job-templates.adoc b/downstream/archive/archived-modules/platform/con-controller-fact-scan-job-templates.adoc similarity index 100% rename from downstream/modules/platform/con-controller-fact-scan-job-templates.adoc rename to downstream/archive/archived-modules/platform/con-controller-fact-scan-job-templates.adoc diff --git a/downstream/modules/platform/con-controller-function-of-roles.adoc b/downstream/archive/archived-modules/platform/con-controller-function-of-roles.adoc similarity index 100% rename from downstream/modules/platform/con-controller-function-of-roles.adoc rename to downstream/archive/archived-modules/platform/con-controller-function-of-roles.adoc diff --git a/downstream/modules/platform/con-controller-groups-hosts.adoc b/downstream/archive/archived-modules/platform/con-controller-groups-hosts.adoc similarity index 100% rename from downstream/modules/platform/con-controller-groups-hosts.adoc rename to downstream/archive/archived-modules/platform/con-controller-groups-hosts.adoc diff --git a/downstream/modules/platform/con-controller-host-metric-utilities.adoc b/downstream/archive/archived-modules/platform/con-controller-host-metric-utilities.adoc similarity index 92% rename from downstream/modules/platform/con-controller-host-metric-utilities.adoc rename to downstream/archive/archived-modules/platform/con-controller-host-metric-utilities.adoc index 8581def9e4..12e50b5a62 100644 --- a/downstream/modules/platform/con-controller-host-metric-utilities.adoc +++ b/downstream/archive/archived-modules/platform/con-controller-host-metric-utilities.adoc @@ -6,5 +6,5 @@ You can also soft delete hosts in bulk through the API. 
ifdef::controller-GS,controller-AG[] -For more information, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#controller-host-metric-utilities[Host metrics utilities] section of the _{ControllerUG}_. +For more information, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#controller-host-metric-utilities[Host metrics utilities] section of _{ControllerUG}_. endif::controller-GS,controller-AG[] diff --git a/downstream/modules/platform/con-controller-multi-cred-background.adoc b/downstream/archive/archived-modules/platform/con-controller-multi-cred-background.adoc similarity index 100% rename from downstream/modules/platform/con-controller-multi-cred-background.adoc rename to downstream/archive/archived-modules/platform/con-controller-multi-cred-background.adoc diff --git a/downstream/modules/platform/con-controller-rbac-permissions.adoc b/downstream/archive/archived-modules/platform/con-controller-rbac-permissions.adoc similarity index 100% rename from downstream/modules/platform/con-controller-rbac-permissions.adoc rename to downstream/archive/archived-modules/platform/con-controller-rbac-permissions.adoc diff --git a/downstream/modules/platform/con-controller-rbac.adoc b/downstream/archive/archived-modules/platform/con-controller-rbac.adoc similarity index 100% rename from downstream/modules/platform/con-controller-rbac.adoc rename to downstream/archive/archived-modules/platform/con-controller-rbac.adoc diff --git a/downstream/modules/platform/con-controller-resources.adoc b/downstream/archive/archived-modules/platform/con-controller-resources.adoc similarity index 84% rename from downstream/modules/platform/con-controller-resources.adoc rename to downstream/archive/archived-modules/platform/con-controller-resources.adoc index 9c99e79d93..dfdcd76284 100644 --- a/downstream/modules/platform/con-controller-resources.adoc +++ b/downstream/archive/archived-modules/platform/con-controller-resources.adoc @@ -5,7 +5,7 @@ The *Resources* menu provides access to the following components of {ControllerName}: * Templates -* xref:controller-credentials[Credentials] +* TBD[Credentials] * xref:controller-projects[Projects] * xref:controller-inventories[Inventories] * Hosts \ No newline at end of file diff --git a/downstream/modules/platform/con-controller-role-hierarchy.adoc b/downstream/archive/archived-modules/platform/con-controller-role-hierarchy.adoc similarity index 100% rename from downstream/modules/platform/con-controller-role-hierarchy.adoc rename to downstream/archive/archived-modules/platform/con-controller-role-hierarchy.adoc diff --git a/downstream/modules/platform/con-controller-views.adoc b/downstream/archive/archived-modules/platform/con-controller-views.adoc similarity index 82% rename from downstream/modules/platform/con-controller-views.adoc rename to downstream/archive/archived-modules/platform/con-controller-views.adoc index c13cdb4fc7..8c605b7abd 100644 --- a/downstream/modules/platform/con-controller-views.adoc +++ b/downstream/archive/archived-modules/platform/con-controller-views.adoc @@ -1,16 +1,15 @@ [id="con-controller-views"] = Views +//No longer required in 2.5 version The {ControllerName} UI provides several options for viewing information. * xref:proc-controller-viewing-dashboard[Dashboard view] -* xref:proc-controller-jobs-view[Jobs view] -//The following aren't included in the Views menu for the tech preview. 
+* xref:proc-controller-jobs-view[Jobs view]
* xref:proc-controller-schedules-view[Schedules view]
* xref:proc-controller-activity-stream[Activity Stream]
* xref:proc-controller-workflow-approvals[Workflow Approvals]
-//Host Metrics is included in the Analytics menu
* xref:proc-controller-host-metrics[Host Metrics]

include::proc-controller-viewing-dashboard.adoc[leveloffset=+1]
diff --git a/downstream/archive/archived-modules/platform/con-eda-2-5-with-controller-2-4.adoc b/downstream/archive/archived-modules/platform/con-eda-2-5-with-controller-2-4.adoc
new file mode 100644
index 0000000000..f70df85dcb
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/con-eda-2-5-with-controller-2-4.adoc
@@ -0,0 +1,61 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2024-07-05
+
+:_mod-docs-content-type: CONCEPT
+
+[id="eda-2-5-with-controller-2-4_{context}"]
+= {EDAName} 2.5 with controller 2.4
+
+Use the following example to populate the inventory file to deploy a new single instance of {EDAName} 2.5 with controller 2.4. For {EDAName}, the requirement for a connection to controller is that `automation_controller_main_url` points to the 2.4 controller URL.
+
+
+----
+[automationedacontroller]
+eda.example.org
+
+[automationgateway]
+eda.example.org
+
+[database]
+data.example.com
+
+[all:vars]
+
+automationedacontroller_admin_password=''
+
+automationedacontroller_pg_host='data.example.com'
+automationedacontroller_pg_port=5432
+
+automationedacontroller_pg_database='automationedacontroller'
+automationedacontroller_pg_username='automationedacontroller'
+automationedacontroller_pg_password=''
+automationedacontroller_pg_sslmode='prefer'
+
+
+automation_controller_main_url='automationcontroller.example.org'
+#automationedacontroller_controller_verify_ssl=true <1>
+
+registry_url='registry.redhat.io'
+registry_username=''
+registry_password=''
+
+automationgateway_admin_password=''
+
+automationgateway_pg_host='data.example.com'
+automationgateway_pg_port=5432
+
+automationgateway_pg_database='automationgateway'
+automationgateway_pg_username='automationgateway'
+automationgateway_pg_password=''
+automationgateway_pg_sslmode='prefer'
+----
+
+<1> This variable sets whether TLS verification is performed. It is set to `true` by default; if verification is not needed, set it to `false`.
+
+[NOTE]
+====
+* Keep `controller` out of the inventory file. Ensure that `[automationcontroller]` is an empty group.
+* Only add an {EDAName} 2.5 server. Do not add an {EDAName} 2.4 server since there is no upgrade option available.
+====
+
+
diff --git a/downstream/archive/archived-modules/platform/con-edge-manager-access-devices.adoc b/downstream/archive/archived-modules/platform/con-edge-manager-access-devices.adoc
new file mode 100644
index 0000000000..89e02c144b
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/con-edge-manager-access-devices.adoc
@@ -0,0 +1,7 @@
+[id="edge-manager-access-devices"]
+
+= Accessing devices remotely
+
+For troubleshooting an edge device, a user can be authorized to remotely connect to that device's console through the agent.
+This does not require an SSH connection and works even if that device is on a private network (behind a NAT), has a dynamic IP address, or has its SSH service disabled.
+
diff --git a/downstream/archive/archived-modules/platform/con-edge-manager-core-capabilities.adoc b/downstream/archive/archived-modules/platform/con-edge-manager-core-capabilities.adoc
new file mode 100644
index 0000000000..6161a8c253
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/con-edge-manager-core-capabilities.adoc
@@ -0,0 +1,17 @@
+[id="edge-manager-core-capabilities"]
+
+= Core capabilities
+
+The {RedHatEdge} offers the following automated edge device management capabilities:
+
+* Define and keep consistent configurations across your edge device fleet
+* Manage operating system updates, host configurations, and application deployments
+* Check device health and deployment status through centralized reporting
+* Support modern container workloads using Podman, Docker, or Kubernetes
+
+== Tool integration
+
+* Kubernetes-style declarative APIs that complement your existing Ansible workflows
+* Support for both container and VM workloads using Podman
+* Compatible with image-based Linux operating systems running bootc or ostree
+* Web-based interface for device monitoring and management
diff --git a/downstream/archive/archived-modules/platform/con-edge-manager-rbac-auth.adoc b/downstream/archive/archived-modules/platform/con-edge-manager-rbac-auth.adoc
new file mode 100644
index 0000000000..5ac2b906ce
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/con-edge-manager-rbac-auth.adoc
@@ -0,0 +1,18 @@
+[id="edge-manager-rbac-auth"]
+
+= {RedHatEdge} authorization
+
+The {RedHatEdge} uses Kubernetes role-based access control (RBAC) to control authorization for {RedHatEdge} API endpoints.
+
+You can set up Kubernetes RBAC authorization by using the following roles:
+
+* `Role` and `RoleBinding` for namespace-wide authorization
+* `ClusterRole` and `ClusterRoleBinding` for cluster-wide authorization
+
+You can use the `Role` or `ClusterRole` API objects to define the allowed API resources and verbs for a particular role.
+
+The `RoleBinding` or `ClusterRoleBinding` API objects grant permissions that are defined in a role to one or more users.
+
+.Additional resources
+
+For more information, see link:https://kubernetes.io/docs/reference/access-authn-authz/rbac/[Using RBAC Authorization] in the Kubernetes documentation.
diff --git a/downstream/archive/archived-modules/platform/con-gw-cache-queue.adoc b/downstream/archive/archived-modules/platform/con-gw-cache-queue.adoc
new file mode 100644
index 0000000000..a805d50054
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/con-gw-cache-queue.adoc
@@ -0,0 +1,25 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="gw-cache-queue_{context}"]
+
+= Caching and queueing system
+
+In {PlatformNameShort} {PlatformVers}, link:https://redis.io/[Redis (REmote DIctionary Server)] is used as the caching and queueing system. Redis is an open source, in-memory, NoSQL key/value store that is used primarily as an application cache, quick-response database, and lightweight message broker.
+
+Centralized Redis is provided for the {Gateway} and {EDAName} and shared between those components. {ControllerNameStart} and {HubName} have their own instances of Redis.
+
+This cache and queue system stores data in memory, rather than on a disk or solid-state drive (SSD), which helps deliver speed, reliability, and performance.
In {PlatformNameShort}, the system caches the following types of data for the various services in {PlatformNameShort}:
+
+.Data types cached by Centralized Redis
+[options="header"]
+|====
+| {ControllerNameStart} | {EDAName} server | {HubNameStart} | {GatewayStart}
+| N/A {ControllerName} does not use shared Redis in {PlatformNameShort} {PlatformVers} | Event queues | N/A {HubName} does not use shared Redis in {PlatformNameShort} {PlatformVers} | Settings, Session Information, JSON Web Tokens
+|====
+
+This data can contain sensitive personally identifiable information (PII). Your data is protected by secure communication with the cache and queue system, using both Transport Layer Security (TLS) encryption and authentication.
+
+[NOTE]
+====
+The data in Redis from both the {Gateway} and {EDAName} is partitioned; therefore, neither service can access the other's data.
+====
\ No newline at end of file
diff --git a/downstream/archive/archived-modules/platform/con-gw-dash-components.adoc b/downstream/archive/archived-modules/platform/con-gw-dash-components.adoc
new file mode 100644
index 0000000000..ce6aeb5b5a
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/con-gw-dash-components.adoc
@@ -0,0 +1,15 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="gw-dash-components"]
+
+= {PlatformNameShort} dashboard components
+
+Quick starts:: You can learn about Ansible automation functions with guided tutorials called quick starts. In the dashboard, you can access quick starts by selecting a quick start card. From the panel displayed, click btn:[Start] and complete the onscreen instructions. You can also filter quick starts by keyword and status.
+Resource status:: Indicates the status of your hosts, projects, and inventories. The status indicator links to your configured hosts, projects, and inventories where you can search, filter, add, and modify these resources.
+Job Activity:: You can view a summary of your current job status, filter the job status within a period of time or by job type, or click *Go to Jobs* to view a complete list of jobs that are currently available.
+Jobs:: You can view recent jobs that have run, or click *View all Jobs* to view a complete list of jobs that are currently available, or create a new job.
+Projects:: You can view recently updated projects or click *View all Projects* to view a complete list of the projects that are currently available, or create a new project.
+Inventories:: You can view recently updated inventories or click *View all Inventories* to view a complete list of available inventories, or create a new inventory.
+Rulebook Activations:: You can view the list of recent rulebook activations and their status, display the complete list of rulebook activations that are currently available, or create a new rulebook activation.
+Rule Audit:: You can view recently fired rule audits, view rule audit records, and view rule audit data based on corresponding rulebook activation runs.
+Decision Environments:: You can view recently updated decision environments, or click *View all Decision Environments* to view a complete list of available decision environments, or create a new decision environment.
diff --git a/downstream/archive/archived-modules/platform/con-gw-dash-features.adoc b/downstream/archive/archived-modules/platform/con-gw-dash-features.adoc
new file mode 100644
index 0000000000..565f17337a
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/con-gw-dash-features.adoc
@@ -0,0 +1,9 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-gw-dash-features"]
+
+= {PlatformNameShort} dashboard features
+
+The {PlatformNameShort} dashboard provides the following features:
+
+Manage view:: You can enable, disable, and sort dashboard components so only the features you need are visible on the dashboard.
diff --git a/downstream/archive/archived-modules/platform/con-gw-managing-access.adoc b/downstream/archive/archived-modules/platform/con-gw-managing-access.adoc
new file mode 100644
index 0000000000..91c2473867
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/con-gw-managing-access.adoc
@@ -0,0 +1,11 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="gw-managing-access"]
+
+= Managing access with role-based access control
+
+Role-based access control (RBAC) restricts user access based on their role within an organization. The roles in RBAC refer to the levels of access that users have to the network.
+
+You can control what users can do with the components of {PlatformNameShort} at a broad or granular level depending on your RBAC policy. You can designate whether the user is a system administrator or a normal user, and align roles and access permissions with their positions within the organization.
+
+Roles can be defined with multiple permissions that can then be assigned to resources, teams, and users. The permissions that make up a role dictate what the assigned role allows. Permissions are allocated with only the access needed for a user to perform the tasks appropriate for their role.
diff --git a/downstream/archive/archived-modules/platform/con-gw-roles.adoc b/downstream/archive/archived-modules/platform/con-gw-roles.adoc
new file mode 100644
index 0000000000..83ec9df0c2
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/con-gw-roles.adoc
@@ -0,0 +1,7 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-gw-roles"]
+
+= Roles
+
+Roles are units of organization in {PlatformName}. When you assign a role to a team or user, you are granting access to use, read, or write credentials. Because of the file structure associated with a role, roles become redistributable units that enable you to share behavior among resources, or with other users. All access that is granted to use, read, or write credentials is handled through roles, and roles are defined for a resource.
diff --git a/downstream/modules/platform/con-independence-of-resource-roles.adoc b/downstream/archive/archived-modules/platform/con-independence-of-resource-roles.adoc
similarity index 100%
rename from downstream/modules/platform/con-independence-of-resource-roles.adoc
rename to downstream/archive/archived-modules/platform/con-independence-of-resource-roles.adoc
diff --git a/downstream/archive/archived-modules/platform/con-inventory-introduction.adoc b/downstream/archive/archived-modules/platform/con-inventory-introduction.adoc
index a81da53040..15758ae7f4 100644
--- a/downstream/archive/archived-modules/platform/con-inventory-introduction.adoc
+++ b/downstream/archive/archived-modules/platform/con-inventory-introduction.adoc
@@ -63,4 +63,7 @@ The first part of the inventory file specifies the hosts or groups that Ansible
[NOTE]
====
-The inventory file variables `registry_username` and `registry_password` are only required if a non-bundle installer is used.
\ No newline at end of file
+You must have valid subscriptions attached before installing {PlatformNameShort}. If you have a new Red Hat Account, you do not have to perform any additional tasks. By default, Simple Content Access, which hosts access to all your subscriptions and content, is enabled. You can enable Simple Content Access by visiting the Red Hat Console.
+
+The only scenario where you still need to attach a subscription manually is for an air-gapped system that uses Red Hat Satellite for subscription validation. This exception is no longer supported after Red Hat Satellite 6.6.
+====
diff --git a/downstream/archive/archived-modules/platform/con-operator-ansible-verbosity.adoc b/downstream/archive/archived-modules/platform/con-operator-ansible-verbosity.adoc
new file mode 100644
index 0000000000..98c555ae09
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/con-operator-ansible-verbosity.adoc
@@ -0,0 +1,44 @@
+[id="con-operator-ansible-verbosity_{context}"]
+
+= Ansible verbosity
+
+Setting the verbosity of the `ansible-runner` command controls the output detail of `ansible-playbook`. The verbosity ranges from 0 (minimal output) to 7 (maximum debugging).
+
+{OperatorPlatform} users and admins can set the Ansible verbosity by setting the `ansible.sdk.operatorframework.io/verbosity` annotation on the Custom Resource.
+
+.Example
+For a database operator with `MongoDB` and `PostgreSQL` in the `db.example.com` group, you can set a higher verbosity for `MongoDB` for debugging. The operator container's spec in the `config/manager/manager.yaml` would look like this:
+
+----
+- name: manager
+  image: "quay.io/example/database-operator:v1.0.0"
+  imagePullPolicy: "Always"
+  args:
+    # This value applies to all GVKs specified in watches.yaml
+    # that are not overridden by environment variables.
+    - "--ansible-verbosity"
+    - "1"
+  env:
+    # Override the verbosity for the MongoDB kind
+    - name: ANSIBLE_VERBOSITY_MONGODB_DB_EXAMPLE_COM
+      value: "4"
+----
+
+After the {OperatorPlatform} is deployed, the only way to change the verbosity is through the `ansible.sdk.operatorframework.io/verbosity` annotation.
Continuing with the above example, the Custom Resource might look like the following:
+
+----
+apiVersion: automationcontroller.ansible.com/v1beta1
+kind: AutomationController
+metadata:
+  annotations:
+    "ansible.sdk.operatorframework.io/verbosity": "5"
+  creationTimestamp: '2024-10-02T12:24:35Z'
+  generation: 3
+  labels:
+    app.kubernetes.io/component: automationcontroller
+    app.kubernetes.io/managed-by: automationcontroller-operator
+    app.kubernetes.io/operator-version: '2.5'
+
+spec:
+
+----
\ No newline at end of file
diff --git a/downstream/modules/platform/con-persisting-data-from-auto-runs.adoc b/downstream/archive/archived-modules/platform/con-persisting-data-from-auto-runs.adoc
similarity index 100%
rename from downstream/modules/platform/con-persisting-data-from-auto-runs.adoc
rename to downstream/archive/archived-modules/platform/con-persisting-data-from-auto-runs.adoc
diff --git a/downstream/modules/platform/con-platform-ext-database.adoc b/downstream/archive/archived-modules/platform/con-platform-ext-database.adoc
similarity index 100%
rename from downstream/modules/platform/con-platform-ext-database.adoc
rename to downstream/archive/archived-modules/platform/con-platform-ext-database.adoc
diff --git a/downstream/modules/platform/con-platform-non-inst-database.adoc b/downstream/archive/archived-modules/platform/con-platform-non-inst-database.adoc
similarity index 100%
rename from downstream/modules/platform/con-platform-non-inst-database.adoc
rename to downstream/archive/archived-modules/platform/con-platform-non-inst-database.adoc
diff --git a/downstream/modules/platform/con-single-eda-controller-with-internal-database.adoc b/downstream/archive/archived-modules/platform/con-single-eda-controller-with-internal-database.adoc
similarity index 100%
rename from downstream/modules/platform/con-single-eda-controller-with-internal-database.adoc
rename to downstream/archive/archived-modules/platform/con-single-eda-controller-with-internal-database.adoc
diff --git a/downstream/archive/archived-modules/platform/con-why-automation-mesh.adoc b/downstream/archive/archived-modules/platform/con-why-automation-mesh.adoc
new file mode 100644
index 0000000000..d17cf7ff60
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/con-why-automation-mesh.adoc
@@ -0,0 +1,21 @@
+[id="con-why-automation-mesh"]
+
+= Benefits of {AutomationMesh}
+
+The {AutomationMesh} component of {PlatformName} simplifies the process of distributing automation across multi-site deployments. For enterprises with multiple isolated IT environments, {AutomationMesh} provides a consistent and reliable way to deploy and scale up automation across your execution nodes using a peer-to-peer mesh communication network.
+
+//[ddacosta] There is no upgrade/migration path for 2.5EA so removing this until upgrade/migration is possible.
+//When upgrading from version 1.x to the latest version of {PlatformNameShort}, you must migrate the data from your legacy isolated nodes into execution nodes necessary for {AutomationMesh}. You can implement {AutomationMesh} by planning out a network of hybrid and control nodes, then editing the inventory file found in the {PlatformNameShort} installer to assign mesh-related values to each of your execution nodes.
+
+
+[role="_additional-resources"]
+.Additional resources
+
+//[ddacosta] There is no upgrade/migration path for 2.5EA so removing this until upgrade/migration is possible.
+//* For instructions on how to migrate from isolated nodes to execution nodes, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_upgrade_and_migration_guide/index[Red Hat Ansible Automation Platform Upgrade and Migration Guide].
+
+* For information about automation mesh and the various ways to design your automation mesh for your environment:
+
+** For a VM-based installation, see the link:{LinkAutomationMesh}.
+
+** For an operator-based installation, see the link:{LinkOperatorMesh}.
\ No newline at end of file
diff --git a/downstream/modules/platform/con-why-migrate-ansible-29.adoc b/downstream/archive/archived-modules/platform/con-why-migrate-ansible-29.adoc
similarity index 100%
rename from downstream/modules/platform/con-why-migrate-ansible-29.adoc
rename to downstream/archive/archived-modules/platform/con-why-migrate-ansible-29.adoc
diff --git a/downstream/modules/platform/con-why-migrate-ansible-core-213.adoc b/downstream/archive/archived-modules/platform/con-why-migrate-ansible-core-213.adoc
similarity index 100%
rename from downstream/modules/platform/con-why-migrate-ansible-core-213.adoc
rename to downstream/archive/archived-modules/platform/con-why-migrate-ansible-core-213.adoc
diff --git a/downstream/modules/platform/con-why-migrate-venvs-ee.adoc b/downstream/archive/archived-modules/platform/con-why-migrate-venvs-ee.adoc
similarity index 100%
rename from downstream/modules/platform/con-why-migrate-venvs-ee.adoc
rename to downstream/archive/archived-modules/platform/con-why-migrate-venvs-ee.adoc
diff --git a/downstream/modules/platform/proc-aap-controller-backup.adoc b/downstream/archive/archived-modules/platform/proc-aap-controller-backup.adoc
similarity index 62%
rename from downstream/modules/platform/proc-aap-controller-backup.adoc
rename to downstream/archive/archived-modules/platform/proc-aap-controller-backup.adoc
index fc3d72f5e8..65db5c92fd 100644
--- a/downstream/modules/platform/proc-aap-controller-backup.adoc
+++ b/downstream/archive/archived-modules/platform/proc-aap-controller-backup.adoc
@@ -7,22 +7,21 @@ Use this procedure to back up a deployment of the controller, including jobs, in
.Prerequisites
-* You must be authenticated with an Openshift cluster.
-* The {OperatorPlatform} has been installed to the cluster.
-* The {ControllerName} is deployed to using the {OperatorPlatform}.
+* You must be authenticated with an OpenShift cluster.
+* You have installed {OperatorPlatformNameShort} on the cluster.
+* You have deployed {ControllerName} using the {OperatorPlatformNameShort}.
.Procedure
-. Log in to *{OCP}*.
+. Log in to {OCP}.
. Navigate to menu:Operators[Installed Operators].
-. Select the {OperatorPlatform} installed on your project namespace.
+. Select your {OperatorPlatformNameShort} deployment.
. Select the *Automation Controller Backup* tab.
. Click btn:[Create AutomationControllerBackup].
. Enter a *Name* for the backup.
-. Enter the *Deployment name* of the deployed {PlatformNameShort} instance being backed up.
-For example, if your {ControllerName} must be backed up and the deployment name is `aap-controller`, enter 'aap-controller' in the *Deployment name* field.
+. In the *Deployment name* field, enter the name of the AutomationController custom resource object of the deployed {PlatformNameShort} instance being backed up. This name was created when you link:{URLOperatorInstallation}/aap-migration#aap-create_controller[created your AutomationController object].
. If you want to use a custom, pre-created PVC:
-.. Optionally enter the name of the *Backup persistant volume claim*.
-.. Optionally enter the *Backup PVC storage requirements*, and *Backup PVC storage class*.
+.. Optional: Enter the name of the *Backup persistent volume claim*.
+.. Optional: Enter the *Backup PVC storage requirements*, and *Backup PVC storage class*.
+
[NOTE]
====
@@ -43,14 +42,14 @@ $ df -h | grep "/var/lib/pgsql/data"
A backup tarball of the specified deployment is created and available for data recovery or deployment rollback. Future backups are stored in separate tar files on the same PVC.
.Verification
-. Log in to Red Hat *{OCP}*
+. Log in to {OCP}.
. Navigate to menu:Operators[Installed Operators].
-. Select the {OperatorPlatform} installed on your project namespace.
+. Select your {OperatorPlatformNameShort}.
. Select the *AutomationControllerBackup* tab.
. Select the backup resource you want to verify.
. Scroll to *Conditions* and check that the *Successful* status is `True`.
+
[NOTE]
====
-If *Successful* is `False`, the backup has failed. Check the {ControllerName} operator logs for the error to fix the issue.
+If the status is `Failure`, the backup has failed. Check the {ControllerName} operator logs for the error to fix the issue.
====
diff --git a/downstream/modules/platform/proc-aap-controller-restore.adoc b/downstream/archive/archived-modules/platform/proc-aap-controller-restore.adoc
similarity index 70%
rename from downstream/modules/platform/proc-aap-controller-restore.adoc
rename to downstream/archive/archived-modules/platform/proc-aap-controller-restore.adoc
index 00e428b1ef..a6a537f594 100644
--- a/downstream/modules/platform/proc-aap-controller-restore.adoc
+++ b/downstream/archive/archived-modules/platform/proc-aap-controller-restore.adoc
@@ -7,19 +7,21 @@ Use this procedure to restore a previous controller deployment from an Automatio
[NOTE]
====
-The name specified for the new AutomationController custom resource must not match an existing deployment or the recovery process will fail. If the name specified does match an existing deployment, see xref:aap-troubleshoot-backup-recover[Troubleshooting] for steps to resolve the issue.
+The name specified for the new AutomationController custom resource must not match an existing deployment.
+
+If the backup custom resource being restored is a backup of a currently running AutomationController custom resource, the recovery process will fail. See xref:aap-troubleshoot-backup-recover[Troubleshooting] for steps to resolve the issue.
====
.Prerequisites
-* You must be authenticated with an Openshift cluster.
-* The {ControllerName} has been deployed to the cluster.
+* You must be authenticated with an OpenShift cluster.
+* You have deployed {ControllerName} on the cluster.
* An AutomationControllerBackup is available on a PVC in your cluster.
.Procedure
-. Log in to *{OCP}*.
+. Log in to {OCP}.
. Navigate to menu:Operators[Installed Operators].
-. Select the {OperatorPlatform} installed on your project namespace.
+. Select your {OperatorPlatformNameShort} deployment.
. Select the *Automation Controller Restore* tab.
. Click btn:[Create AutomationControllerRestore].
. Enter a *Name* for the recovery deployment.
@@ -27,7 +29,7 @@ The name specified for the new AutomationController custom resource must not mat
+
[NOTE]
====
-This should be different from the original deployment name.
+This must be different from the original deployment name.
====
+
. Select the *Backup source to restore from*.
*Backup CR* is recommended.
@@ -38,9 +40,9 @@ A new deployment is created and your backup is restored to it. This can take app
.Verification
-. Log in to Red Hat *{OCP}*
+. Log in to Red Hat {OCP}.
. Navigate to menu:Operators[Installed Operators].
-. Select the {OperatorPlatform} installed on your project namespace.
+. Select your {OperatorPlatformNameShort} deployment.
. Select the *AutomationControllerRestore* tab.
. Select the restore resource you want to verify.
. Scroll to *Conditions* and check that the *Successful* status is `True`.
diff --git a/downstream/archive/archived-modules/platform/proc-aap-controller-yaml-backup.adoc b/downstream/archive/archived-modules/platform/proc-aap-controller-yaml-backup.adoc
new file mode 100644
index 0000000000..9a51ae5a99
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-aap-controller-yaml-backup.adoc
@@ -0,0 +1,37 @@
+[id="aap-controller-yaml-backup"]
+
+= Using YAML to back up the {ControllerNameStart} deployment
+
+See the following procedure for how to back up a deployment of the {ControllerName} using YAML.
+
+.Prerequisites
+
+* You must be authenticated with an OpenShift cluster.
+* You have installed {OperatorPlatformNameShort} on the cluster.
+* You have deployed {ControllerName} using the {OperatorPlatformNameShort}.
+
+.Procedure
+
+. Create a file named "backup-automation-controller.yml" with the following contents:
++
+----
+---
+apiVersion: automationcontroller.ansible.com/v1beta1
+kind: AutomationControllerBackup
+metadata:
+  name: automationcontrollerbackup-2024-07-15
+  namespace: my-namespace
+spec:
+  deployment_name: controller
+----
++
+
+[NOTE]
+====
+The `deployment_name` above is the name of the {ControllerName} deployment you intend to back up from.
+The `namespace` above is the one containing the {ControllerName} deployment you intend to back up.
+====
+
+. Use the `oc apply` command to create the backup object in your cluster:
+
+`$ oc apply -f backup-automation-controller.yml`
diff --git a/downstream/archive/archived-modules/platform/proc-aap-controller-yaml-restore.adoc b/downstream/archive/archived-modules/platform/proc-aap-controller-yaml-restore.adoc
new file mode 100644
index 0000000000..bdc5dcb316
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-aap-controller-yaml-restore.adoc
@@ -0,0 +1,58 @@
+[id="aap-controller-yaml-restore"]
+
+= Using YAML to recover the {ControllerNameStart} deployment
+See the following procedure for how to restore a deployment of the {ControllerName} using YAML.
+
+.Prerequisite
+The external database must be a PostgreSQL database that is the version supported by the current release of {PlatformNameShort}.
+
+[NOTE]
+====
+{PlatformNameShort} {PlatformVers} supports {PostgresVers}.
+====
+
+.Procedure
+
+The external postgres instance credentials and connection information must be stored in a secret, which is then set on the {ControllerName} spec.
+
+. Create an `external-postgres-configuration-secret` YAML file, following the template below:
++
+----
+apiVersion: v1
+kind: Secret
+metadata:
+  name: external-restore-postgres-configuration
+  namespace: <1>
+stringData:
+  host: "" <2>
+  port: "" <3>
+  database: ""
+  username: ""
+  password: "" <4>
+  sslmode: "prefer" <5>
+  type: "unmanaged"
+type: Opaque
+----
+<1> Namespace to create the secret in. This should be the same namespace you want to deploy to.
+<2> The resolvable hostname for your database node.
+<3> External port defaults to `5432`.
+<4> Value for variable `password` should not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup, or restoration.
+<5> The variable `sslmode` is valid for `external` databases only. The allowed values are: `*prefer*`, `*disable*`, `*allow*`, `*require*`, `*verify-ca*`, and `*verify-full*`.
+. Apply `external-postgres-configuration-secret.yml` to your cluster using the `oc create` command.
++
+----
+$ oc create -f external-postgres-configuration-secret.yml
+----
+. When creating your `AutomationControllerRestore` custom resource object, specify the secret on your spec, following the example below:
++
+----
+kind: AutomationControllerRestore
+apiVersion: automationcontroller.ansible.com/v1beta1
+metadata:
+  namespace: my-namespace
+  name: automationcontrollerrestore-2024-07-15
+spec:
+  deployment_name: restored-controller
+  backup_name: automationcontrollerbackup-2024-07-15
+  postgres_configuration_secret: 'external-restore-postgres-configuration'
+----
\ No newline at end of file
diff --git a/downstream/archive/archived-modules/platform/proc-aap-create_controller.adoc b/downstream/archive/archived-modules/platform/proc-aap-create_controller.adoc
new file mode 100644
index 0000000000..ed886963ad
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-aap-create_controller.adoc
@@ -0,0 +1,19 @@
+[id="aap-create_controller"]
+
+= Creating an AutomationController object
+
+[role=_abstract]
+
+Use the following steps to create an *AutomationController* custom resource object.
+
+.Procedure
+. Log in to *{OCP}*.
+. Navigate to menu:Operators[Installed Operators].
+. Select the {OperatorPlatformNameShort} installed on your project namespace.
+. Select the *Automation Controller* tab.
+. Click btn:[Create AutomationController]. You can create the object through the *Form view* or *YAML view*. The following inputs are available through the *Form view*.
+.. Enter a name for the new deployment.
+.. In *Advanced configurations*:
+... From the *Secret Key* list, select your xref:create-secret-key-secret_aap-migration[secret key secret].
+... From the *Old Database Configuration Secret* list, select the xref:create-postresql-secret_aap-migration[old postgres configuration secret].
+.. Click btn:[Create].
diff --git a/downstream/archive/archived-modules/platform/proc-aap-create_eda.adoc b/downstream/archive/archived-modules/platform/proc-aap-create_eda.adoc
new file mode 100644
index 0000000000..03b6bb5b20
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-aap-create_eda.adoc
@@ -0,0 +1,19 @@
+[id="aap-create_eda"]
+
+= Creating an EDA object
+
+[role=_abstract]
+
+Use the following steps to create an *EDA* custom resource object.
+
+.Procedure
+. Log in to *{OCP}*.
+. Navigate to menu:Operators[Installed Operators].
+. Select the {OperatorPlatformNameShort} installed on your project namespace.
+. Select the *EDA* tab.
+. Click btn:[Create EDA]. You can create the object through the *Form view* or *YAML view*. The following inputs are available through the *Form view*.
+.. Enter a name for the new deployment.
+.. In *Advanced configurations*:
+... From the *Admin Password Secret* list, select your xref:create-secret-key-secret_aap-migration[secret key secret].
+... From the *Database Configuration Secret* list, select the xref:create-postresql-secret_aap-migration[postgres configuration secret].
+.. Click btn:[Create].
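+
+If you prefer the *YAML view*, the following is a minimal sketch of what the equivalent *EDA* object might look like. The `eda.ansible.com/v1alpha1` API version and the spec field names here are assumptions modeled on the AutomationController examples in this document, not values confirmed by this guide; check the CRD in your cluster before relying on them.
++
+----
+apiVersion: eda.ansible.com/v1alpha1  # assumed API group/version
+kind: EDA
+metadata:
+  name: my-eda             # name for the new deployment
+  namespace: my-namespace  # namespace where the operator is installed
+spec:
+  admin_password_secret: my-admin-password-secret      # assumed field; secret selected in the form view
+  database:
+    database_secret: my-postgres-configuration-secret  # assumed field; postgres configuration secret
+----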
diff --git a/downstream/archive/archived-modules/platform/proc-aap-create_hub.adoc b/downstream/archive/archived-modules/platform/proc-aap-create_hub.adoc
new file mode 100644
index 0000000000..cb2593b298
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-aap-create_hub.adoc
@@ -0,0 +1,20 @@
+[id="aap-create_hub"]
+
+= Creating an AutomationHub object
+
+[role=_abstract]
+
+Use the following steps to create an *AutomationHub* custom resource object.
+
+.Procedure
+. Log in to *{OCP}*.
+. Navigate to menu:Operators[Installed Operators].
+. Select the {OperatorPlatformNameShort} installed on your project namespace.
+. Select the *Automation Hub* tab.
+. Click btn:[Create AutomationHub]. You can create the object through the *Form view* or *YAML view*.
+The following inputs are available through the *Form view*.
+.. Enter a name for the new deployment.
+.. In *Advanced configurations*:
+... From the *Admin Password Secret* list, select your xref:create-secret-key-secret_aap-migration[secret key secret].
+... From the *Database Configuration Secret* list, select the xref:create-postresql-secret_aap-migration[postgres configuration secret].
+.. Click btn:[Create].
diff --git a/downstream/modules/platform/proc-aap-hub-backup.adoc b/downstream/archive/archived-modules/platform/proc-aap-hub-backup.adoc
similarity index 69%
rename from downstream/modules/platform/proc-aap-hub-backup.adoc
rename to downstream/archive/archived-modules/platform/proc-aap-hub-backup.adoc
index 9f2b8a11a8..f849ff8ba2 100644
--- a/downstream/modules/platform/proc-aap-hub-backup.adoc
+++ b/downstream/archive/archived-modules/platform/proc-aap-hub-backup.adoc
@@ -7,14 +7,14 @@ Use this procedure to back up a deployment of the hub, including all hosted Ansi
.Prerequisites
-* You must be authenticated with an Openshift cluster.
-* The {OperatorPlatform} has been installed to the cluster.
-* The {HubName} is deployed to using the {OperatorPlatform}.
+* You must be authenticated with an OpenShift cluster.
+* You have installed {OperatorPlatformNameShort} on the cluster.
+* You have deployed {HubName} using the {OperatorPlatformNameShort}.
.Procedure
-. Log in to *{OCP}*.
+. Log in to {OCP}.
. Navigate to menu:Operators[Installed Operators].
-. Select the {OperatorPlatform} installed on your project namespace.
+. Select your {OperatorPlatformNameShort} deployment.
. Select the *Automation Hub Backup* tab.
. Click btn:[Create AutomationHubBackup].
. Enter a *Name* for the backup.
@@ -24,4 +24,4 @@ For example, if your {HubName} must be backed up and the deployment name is `aap
.. Optionally, enter the name of the *Backup persistent volume claim*, *Backup persistent volume claim namespace*, *Backup PVC storage requirements*, and *Backup PVC storage class*.
. Click btn:[Create].
+
-A backup of the specified deployment is created and available for data recovery or deployment rollback.
+This creates a backup of the specified deployment, which is available for data recovery or deployment rollback.
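+
+As with the controller backup, you can create the same object from YAML instead of the form. The following is a minimal sketch that mirrors the AutomationControllerBackup example shown earlier in this document; the `automationhub.ansible.com/v1beta1` API version is an assumption, not a value confirmed by this guide.
+
+----
+---
+apiVersion: automationhub.ansible.com/v1beta1  # assumed API group/version
+kind: AutomationHubBackup
+metadata:
+  name: automationhubbackup-2024-07-15
+  namespace: my-namespace   # namespace containing the automation hub deployment
+spec:
+  deployment_name: hub      # name of the AutomationHub custom resource to back up
+----
+
+You can then apply it with `oc apply -f <file>.yml`, as in the controller backup procedure.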
diff --git a/downstream/modules/platform/proc-aap-hub-restore.adoc b/downstream/archive/archived-modules/platform/proc-aap-hub-restore.adoc similarity index 76% rename from downstream/modules/platform/proc-aap-hub-restore.adoc rename to downstream/archive/archived-modules/platform/proc-aap-hub-restore.adoc index 7ae89f9475..7313dd809c 100644 --- a/downstream/modules/platform/proc-aap-hub-restore.adoc +++ b/downstream/archive/archived-modules/platform/proc-aap-hub-restore.adoc @@ -12,14 +12,14 @@ The name specified for the new AutomationHub custom resource must not match an e .Prerequisites -* You must be authenticated with an Openshift cluster. -* The {HubName} has been deployed to the cluster. +* You must be authenticated with an OpenShift cluster. +* You have deployed {HubName} on the cluster. * An AutomationHubBackup is available on a PVC in your cluster. .Procedure -. Log in to *{OCP}*. +. Log in to {OCP}. . Navigate to menu:Operators[Installed Operators]. -. Select the {OperatorPlatform} installed on your project namespace. +. Select your {OperatorPlatformNameShort} deployment. . Select the *Automation Hub Restore* tab. . Click btn:[Create AutomationHubRestore]. . Enter a *Name* for the recovery deployment. @@ -27,4 +27,4 @@ The name specified for the new AutomationHub custom resource must not match an e . Enter the *Backup Name* of the AutomationHubBackup object. . Click btn:[Create]. + -A new deployment is created and your backup is restored to it. +This creates a new deployment and restores your backup to it. diff --git a/downstream/modules/platform/proc-adding-a-subscription-manifest-to-aap-without-an-internet-connection.adoc b/downstream/archive/archived-modules/platform/proc-adding-a-subscription-manifest-to-aap-without-an-internet-connection.adoc similarity index 100% rename from downstream/modules/platform/proc-adding-a-subscription-manifest-to-aap-without-an-internet-connection.adoc rename to downstream/archive/archived-modules/platform/proc-adding-a-subscription-manifest-to-aap-without-an-internet-connection.adoc diff --git a/downstream/modules/platform/proc-approving-the-imported-collection.adoc b/downstream/archive/archived-modules/platform/proc-approving-the-imported-collection.adoc similarity index 100% rename from downstream/modules/platform/proc-approving-the-imported-collection.adoc rename to downstream/archive/archived-modules/platform/proc-approving-the-imported-collection.adoc diff --git a/downstream/modules/platform/proc-building-the-custom-execution-environment.adoc b/downstream/archive/archived-modules/platform/proc-building-the-custom-execution-environment.adoc similarity index 100% rename from downstream/modules/platform/proc-building-the-custom-execution-environment.adoc rename to downstream/archive/archived-modules/platform/proc-building-the-custom-execution-environment.adoc diff --git a/downstream/archive/archived-modules/platform/proc-central-auth-install-dependencies.adoc b/downstream/archive/archived-modules/platform/proc-central-auth-install-dependencies.adoc new file mode 100644 index 0000000000..bfb4390962 --- /dev/null +++ b/downstream/archive/archived-modules/platform/proc-central-auth-install-dependencies.adoc @@ -0,0 +1,46 @@ +[id="proc-central-auth-dependencies"] + += Install the collections and dependencies for {Galaxy} + +.Prerequisites + +. A supported version of either the link:https://access.redhat.com/product-life-cycles[{RHEL}] or the link:https://docs.fedoraproject.org/en-US/releases/[Fedora] operating system + +. 
A supported version of https://access.redhat.com/support/policy/updates/ansible-automation-platform[{PlatformNameShort}]
+
+.Procedure
+
+. Install the collection for {Galaxy}:
++
+[listing]
+$ ansible-galaxy collection install middleware_automation.keycloak
+
+[NOTE]
+====
+This collection has two dependencies that you might also want to install depending on your network configuration.
+
+`middleware_automation.redhat_csp_download` enables {PlatformNameShort} to connect to the Customer Portal to download {RHSSO} technology.
+
+`middleware_automation.wildfly` enables Keycloak to run on top of the Wildfly application server including Red Hat JBoss, the version of Wildfly officially supported by Red Hat.
+====
+
+. Depending on your configuration, you might also want to add some Python dependencies containing libraries that {PlatformNameShort} requires:
++
+[listing]
+# pip3 install xml jmespath
+
+.Verification
+
+. Run the command to display the collection list for {Galaxy}:
++
+[listing]
+# ansible-galaxy collection list
+
+[listing]
+# ansible-galaxy collection list
+# /root/.ansible/collections/ansible_collections
+Collection Version
+----------------------------------------- -------
+middleware_automation.keycloak 1.0.1
+middleware_automation.redhat_csp_download 1.2.1
+middleware_automation.wildfly 1.0.2
diff --git a/downstream/archive/archived-modules/platform/proc-central-auth-install-keycloak.adoc b/downstream/archive/archived-modules/platform/proc-central-auth-install-keycloak.adoc
new file mode 100644
index 0000000000..f418ca9647
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-central-auth-install-keycloak.adoc
@@ -0,0 +1,191 @@
+[id="proc-central-auth-install-keycloak"]
+
+= Install Keycloak with Ansible
+
+When you have installed the collection and dependencies necessary for your network, the installation of Keycloak performs the following actions on the system running it:
+
+* Creates user and group accounts
+
+* Downloads the installation archive from the Keycloak website
+
+* Unarchives the content you downloaded while ensuring that the files are associated with the correct users and privileges
+
+* Verifies that your version of Java Virtual Machine is up-to-date
+
+* Integrates Keycloak into the host service management system (for example, the systemd daemon)
+
+[NOTE]
+====
+Both user and group account names are `keycloak` by default.
+====
+
+. Create the following playbook:
++
+----
+- name: Playbook for Keycloak Hosts
+  hosts: all
+  vars:
+    keycloak_admin_password: "remembertochangeme"
+    keycloak_realm: TestRealm
+  collections:
+    - middleware_automation.keycloak
+  roles:
+    - keycloak
+----
+
+The configuration adds the {PlatformNameShort} collection for Keycloak to the collections list that the playbook uses, along with the associated `middleware_automation.keycloak.keycloak` role.
+
+[NOTE]
+====
+The password set in the `keycloak_admin_password` variable must be a minimum of 12 characters.
+====
+
+[IMPORTANT]
+====
+The playbook uses a variable for the Keycloak server administrator. Because this variable is a password, it must be secured using Ansible Vault or another secret management system.
+====
+
+. Run the following playbook:
++
+[listing]
+# ansible-playbook -i inventory playbook.yml
+
+.Example 1. Configuring single sign-on for use with Ansible in the same playbook
+[example]
+----
+---
+- name: Playbook for Keycloak Hosts
+  hosts: all
+  vars:
+    keycloak_admin_password: "remembertochangeme"
+    keycloak_realm: TestRealm
+  collections:
+    - middleware_automation.keycloak
+  roles:
+    - keycloak
+  tasks:
+    - name: Keycloak Realm Role
+      ansible.builtin.include_role:
+        name: keycloak_realm
+      vars:
+        keycloak_client_default_roles:
+          - TestRoleAdmin
+          - TestRoleUser
+        keycloak_client_users:
+          - username: TestUser
+            password: password
+            client_roles:
+              - client: TestClient
+                role: TestRoleUser
+                realm: "{{ keycloak_realm }}"
+          - username: TestAdmin
+            password: password
+            client_roles:
+              - client: TestClient
+                role: TestRoleUser
+                realm: "{{ keycloak_realm }}"
+              - client: TestClient
+                role: TestRoleAdmin
+                realm: "{{ keycloak_realm }}"
+        keycloak_realm: TestRealm
+        keycloak_clients:
+          - name: TestClient
+            roles: "{{ keycloak_client_default_roles }}"
+            realm: "{{ keycloak_realm }}"
+            public_client: "{{ keycloak_client_public }}"
+            web_origins: "{{ keycloak_client_web_origins }}"
+            users: "{{ keycloak_client_users }}"
+            client_id: TestClient
+----
+
+[NOTE]
+====
+The playbook in Example 1 is the only playbook being used in this procedure. The following code blocks contain text from the same playbook. As a result, you are only modifying variables from one example. The examples following this note are truncated for findability and ease of understanding.
+
+In Example 1, there are also no sources for authentication being used. This makes it easier to test your configuration before you deploy it. However, Red Hat recommends the use of LDAP as the authentication application.
+====
+
+When you have configured single sign-on, several variables within the provided code are left blank and must be defined before the playbook can work automatically.
+
+. Define the realm, so the other variables within the realm execute properly.
++
+[listing]
+…
+  - client: TestClient
+    role: TestRoleAdmin
+    realm: "{{ keycloak_realm }}"
+keycloak_realm: TestRealm
+keycloak_clients:
+…
+
+With the realm defined, you can define the roles and permissions for the users:
++
+[listing]
+…
+keycloak_client_default_roles:
+  - TestRoleAdmin
+  - TestRoleUser
+keycloak_client_users:
+  - username: TestUser
+    password: password
+    client_roles:
+      - client: TestClient
+        role: TestRoleUser
+        realm: "{{ keycloak_realm }}"
+…
+
+In this example, the first part of the code defines two default roles: `TestRoleAdmin` and `TestRoleUser`. Before defining an administrator role, define the typical user role with a password, their client, their role, and the realm.
+
+. Continue defining other roles your organization needs:
++
+[listing]
+
+  - username: TestAdmin
+    password: password
+    client_roles:
+      - client: TestClient
+        role: TestRoleUser
+        realm: "{{ keycloak_realm }}"
+      - client: TestClient
+        role: TestRoleAdmin
+        realm: "{{ keycloak_realm }}"
+…
+
+With the typical user role defined, you can now supply the information appropriate for an administrator. `TestRoleUser` is only given permission to exist within the defined realm and to do what its role allows. However, `TestRoleAdmin` has the same permissions as `TestRoleUser`, in addition to connecting to the SSO server and configuring or updating the realm.
+. Define the client for roles to apply to the users who use them:
++
+[listing]
+…
+keycloak_clients:
+  - name: TestClient
+    roles: "{{ keycloak_client_default_roles }}"
+    realm: "{{ keycloak_realm }}"
+    public_client: "{{ keycloak_client_public }}"
+    web_origins: "{{ keycloak_client_web_origins }}"
+    users: "{{ keycloak_client_users }}"
+…
+
+At this point, you have defined all the variables your playbook requires to run smoothly.
+
+Optionally, add a check to your playbook that uses Keycloak's administration credentials to get an SSO token.
+
+[listing]
+- name: Verify token API call
+  ansible.builtin.uri:
+    url: "http://localhost:{{ keycloak_port }}/auth/realms/master/protocol/openid-connect/token"
+    method: POST
+    body: "client_id=admin-cli&username=admin&password={{ keycloak_admin_password }}&grant_type=password"
+    validate_certs: no
+  register: keycloak_auth_response
+  until: keycloak_auth_response.status == 200
+  retries: 2
+  delay: 2
+
+[NOTE]
+====
+`keycloak_port` is `8080` by default.
+====
+
+.Verification
+
+If you get an authentication response of `200`, the playbook successfully connects to the server and authenticates as expected. Any other response is a failure.
+
diff --git a/downstream/modules/platform/proc-configure-ldap-hub-ocp.adoc b/downstream/archive/archived-modules/platform/proc-configure-ldap-hub-ocp.adoc
similarity index 99%
rename from downstream/modules/platform/proc-configure-ldap-hub-ocp.adoc
rename to downstream/archive/archived-modules/platform/proc-configure-ldap-hub-ocp.adoc
index 4d44ab753d..80aa683bf5 100644
--- a/downstream/modules/platform/proc-configure-ldap-hub-ocp.adoc
+++ b/downstream/archive/archived-modules/platform/proc-configure-ldap-hub-ocp.adoc
@@ -34,10 +34,7 @@ spec:
----
[NOTE]
-
====
-
Do not leave any fields empty. For fields with no variable, enter ```` to indicate a default value.
-
====
diff --git a/downstream/modules/platform/proc-configure-upgraded-aap.adoc b/downstream/archive/archived-modules/platform/proc-configure-upgraded-aap.adoc
similarity index 94%
rename from downstream/modules/platform/proc-configure-upgraded-aap.adoc
rename to downstream/archive/archived-modules/platform/proc-configure-upgraded-aap.adoc
index f3d37901d5..ed709e87d7 100644
--- a/downstream/modules/platform/proc-configure-upgraded-aap.adoc
+++ b/downstream/archive/archived-modules/platform/proc-configure-upgraded-aap.adoc
@@ -6,7 +6,7 @@
After upgrading your {PlatformName} instance, associate your original instances to its corresponding instance groups by configuring settings in the {ControllerName} UI:
-. Log into the new Controller instance.
+. Log in to the new Controller instance.
. Content from old instance, such as credentials, jobs, inventories should now be visible on your Controller instance.
. Navigate to {MenuInfrastructureInstanceGroups}.
. Associate execution nodes by clicking on an instance group, then click the *Instances* tab.
diff --git a/downstream/archive/archived-modules/platform/proc-cont-aap-hub-storage.adoc b/downstream/archive/archived-modules/platform/proc-cont-aap-hub-storage.adoc
new file mode 100644
index 0000000000..7bcbe71ab5
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-cont-aap-hub-storage.adoc
@@ -0,0 +1,69 @@
+//Michelle: Module archived as it has been replaced with modular content
+[id="cont-aap-hub-storage"]
+
+= Configuring storage for {HubName}
+
+Configure storage backends for {HubName} including Amazon S3, Azure Blob Storage, and Network File System (NFS) storage.
+
+== Configuring Amazon S3 storage for {HubName}
+
+Amazon S3 storage is a type of object storage that is supported in containerized installations. When using an AWS S3 storage backend, set `hub_storage_backend` to `s3`. The AWS S3 bucket needs to exist before running the installation program.
+
+The variables you can use to configure this storage backend type in your inventory file are:
+
+* `hub_s3_access_key`
+* `hub_s3_secret_key`
+* `hub_s3_bucket_name`
+* `hub_s3_extra_settings`
+
+Extra parameters can be passed through an Ansible `hub_s3_extra_settings` dictionary.
+
+For example, you can set the following parameters:
+
+----
+hub_s3_extra_settings:
+  AWS_S3_MAX_MEMORY_SIZE: 4096
+  AWS_S3_REGION_NAME: eu-central-1
+  AWS_S3_USE_SSL: True
+----
+
+For more information about the list of parameters, see link:https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#settings[django-storages documentation - Amazon S3].
+
+== Configuring Azure Blob Storage for {HubName}
+
+Azure Blob storage is a type of object storage that is supported in containerized installations.
+When using an Azure blob storage backend, set `hub_storage_backend` to `azure`. The Azure container needs to exist before running the installation program.
+
+The variables you can use to configure this storage backend type in your inventory file are:
+
+* `hub_azure_account_key`
+* `hub_azure_account_name`
+* `hub_azure_container`
+* `hub_azure_extra_settings`
+
+Extra parameters can be passed through an Ansible `hub_azure_extra_settings` dictionary.
+
+For example, you can set the following parameters:
+
+----
+hub_azure_extra_settings:
+  AZURE_LOCATION: foo
+  AZURE_SSL: True
+  AZURE_URL_EXPIRATION_SECS: 60
+----
+
+For more information about the list of parameters, see link:https://django-storages.readthedocs.io/en/latest/backends/azure.html#settings[django-storages documentation - Azure Storage].
+
+== Configuring Network File System (NFS) storage for {HubName}
+
+NFS is a type of shared storage that is supported in containerized installations. Shared storage is required when installing more than one instance of {HubName} with a `file` storage backend. When installing a single instance of the {HubName}, shared storage is optional.
+
+* To configure shared storage for {HubName}, set the following variable in the inventory file, ensuring your NFS share has read, write, and execute permissions:
+
+----
+hub_shared_data_path=
+----
+
+* The value must match the format `host:dir`, for example `nfs-server.example.com:/exports/hub`.
+
+* To change the mount options for your NFS share, use the `hub_shared_data_mount_opts` variable. This variable is optional and the default value is `rw,sync,hard`.
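+
+The following is a minimal inventory sketch that combines the NFS variables described above. The server name and export path are illustrative placeholders, and `file` is the storage backend value that this shared storage applies to:
+
+----
+[all:vars]
+# Shared storage is required with the file backend when running more than one hub instance
+hub_storage_backend=file
+# host:dir format; ensure the share has read, write, and execute permissions
+hub_shared_data_path=nfs-server.example.com:/exports/hub
+# Optional; defaults to rw,sync,hard
+hub_shared_data_mount_opts=rw,sync,hard
+----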
+ For each change, the Activity Stream shows the time of the event, the user that initiated the event, and the action. The information displayed varies depending on the type of event. + Click the btn:[Examine] image:examine.png[View Event Details,15,15] icon to display the event log for the change. image:activity-stream-event-log.png[event log] diff --git a/downstream/modules/platform/proc-controller-add-groups-hosts.adoc b/downstream/archive/archived-modules/platform/proc-controller-add-groups-hosts.adoc similarity index 97% rename from downstream/modules/platform/proc-controller-add-groups-hosts.adoc rename to downstream/archive/archived-modules/platform/proc-controller-add-groups-hosts.adoc index e15636dbe3..50ab869589 100644 --- a/downstream/modules/platform/proc-controller-add-groups-hosts.adoc +++ b/downstream/archive/archived-modules/platform/proc-controller-add-groups-hosts.adoc @@ -4,7 +4,7 @@ Groups are only applicable to standard inventories and are not configurable directly through a Smart Inventory. You can associate an existing group through hosts that are used with standard inventories. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#proc-controller-add-groups[Adding groups to inventories] in the _{ControllerUG}_. +For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#proc-controller-add-groups[Adding groups to inventories] in _{ControllerUG}_. .Procedure //[ddacosta] Need to verify this is the correct flow. The original content identified menus that don't exist. diff --git a/downstream/modules/platform/proc-controller-adding-an-instance.adoc b/downstream/archive/archived-modules/platform/proc-controller-adding-an-instance.adoc similarity index 100% rename from downstream/modules/platform/proc-controller-adding-an-instance.adoc rename to downstream/archive/archived-modules/platform/proc-controller-adding-an-instance.adoc diff --git a/downstream/modules/platform/proc-controller-adding-subscription-manually.adoc b/downstream/archive/archived-modules/platform/proc-controller-adding-subscription-manually.adoc similarity index 91% rename from downstream/modules/platform/proc-controller-adding-subscription-manually.adoc rename to downstream/archive/archived-modules/platform/proc-controller-adding-subscription-manually.adoc index 3b0cdb2532..ea2b2c7d64 100644 --- a/downstream/modules/platform/proc-controller-adding-subscription-manually.adoc +++ b/downstream/archive/archived-modules/platform/proc-controller-adding-subscription-manually.adoc @@ -2,7 +2,7 @@ = Add a subscription manually -If you are unable to apply or update the subscription information by using the {ControllerName} user interface, you can upload the subscriptions manifest manually in an Ansible playbook. +If you are unable to apply or update the subscription information by using the {ControllerName} user interface, you can upload the subscriptions manifest manually in an Ansible Playbook. 
Use the license module in the `ansible.controller` collection:
diff --git a/downstream/modules/platform/proc-controller-api-endpoint-functions.adoc b/downstream/archive/archived-modules/platform/proc-controller-api-endpoint-functions.adoc
similarity index 100%
rename from downstream/modules/platform/proc-controller-api-endpoint-functions.adoc
rename to downstream/archive/archived-modules/platform/proc-controller-api-endpoint-functions.adoc
diff --git a/downstream/modules/platform/proc-controller-attaching-subscriptions.adoc b/downstream/archive/archived-modules/platform/proc-controller-attaching-subscriptions.adoc
similarity index 100%
rename from downstream/modules/platform/proc-controller-attaching-subscriptions.adoc
rename to downstream/archive/archived-modules/platform/proc-controller-attaching-subscriptions.adoc
diff --git a/downstream/modules/platform/proc-controller-authentication.adoc b/downstream/archive/archived-modules/platform/proc-controller-authentication.adoc
similarity index 100%
rename from downstream/modules/platform/proc-controller-authentication.adoc
rename to downstream/archive/archived-modules/platform/proc-controller-authentication.adoc
diff --git a/downstream/modules/platform/proc-controller-change-timeout-auth.adoc b/downstream/archive/archived-modules/platform/proc-controller-change-timeout-auth.adoc
similarity index 100%
rename from downstream/modules/platform/proc-controller-change-timeout-auth.adoc
rename to downstream/archive/archived-modules/platform/proc-controller-change-timeout-auth.adoc
diff --git a/downstream/modules/platform/proc-controller-configure-usability-analytics.adoc b/downstream/archive/archived-modules/platform/proc-controller-configure-usability-analytics.adoc
similarity index 100%
rename from downstream/modules/platform/proc-controller-configure-usability-analytics.adoc
rename to downstream/archive/archived-modules/platform/proc-controller-configure-usability-analytics.adoc
diff --git a/downstream/modules/platform/proc-controller-configure-user-interface.adoc b/downstream/archive/archived-modules/platform/proc-controller-configure-user-interface.adoc
similarity index 100%
rename from downstream/modules/platform/proc-controller-configure-user-interface.adoc
rename to downstream/archive/archived-modules/platform/proc-controller-configure-user-interface.adoc
diff --git a/downstream/modules/platform/proc-controller-control-data-collection.adoc b/downstream/archive/archived-modules/platform/proc-controller-control-data-collection.adoc
similarity index 100%
rename from downstream/modules/platform/proc-controller-control-data-collection.adoc
rename to downstream/archive/archived-modules/platform/proc-controller-control-data-collection.adoc
diff --git a/downstream/archive/archived-modules/platform/proc-controller-create-inventory.adoc b/downstream/archive/archived-modules/platform/proc-controller-create-inventory.adoc
new file mode 100644
index 0000000000..98eb4ce497
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-controller-create-inventory.adoc
@@ -0,0 +1,76 @@
+[id="controller-creating-inventory"]
+
+= Browsing and creating inventories
+
+The Inventories window displays a list of the inventories that are currently available.
+You can sort the inventory list by name, or search by type, organization, description, owners and modifiers of the inventory, or additional criteria.
+
+.Procedure
+. From the navigation panel, select {MenuInfrastructureInventories}.
+. Click btn:[Create inventory], and select the type of inventory to create.
+. Enter the appropriate details into the following fields:
+
+* *Name*: Enter a name appropriate for this inventory.
+* Optional: *Description*: Enter an arbitrary description as appropriate.
+* *Organization*: Required. Choose among the available organizations.
+* Only applicable to Smart Inventories: *Smart Host Filter*: Populate the hosts for this inventory by using a search filter.
++
+_Example_
++
+`name__icontains=RedHat`
++
+These options are based on the organization you chose.
++
+Filters are similar to tags in that tags are used to filter certain hosts that contain those names.
+Therefore, to populate the *Smart Host Filter* field, specify a tag that contains the hosts you want, not the hosts themselves.
++
+Filters are case-sensitive.
+* *Instance Groups*: Select the instance group or groups for this inventory to run on.
++
+You can select multiple instance groups and sort them in the order in which you want them to run.
++
+//image:select-instance-groups-modal.png[image]
+
+* Optional: *Labels*: Supply labels that describe this inventory, so they can be used to group and filter inventories and jobs.
+* Only applicable to constructed inventories: *Input inventories*: Specify the source inventories to include in this constructed inventory.
+//Click the image:search.png[Search,15,15] icon to select from available inventories.
+Empty groups from input inventories are copied into the constructed inventory.
+* Optional (only applicable to constructed inventories): *Cached timeout (seconds)*: Set the length of time before the cache plugin data times out.
+* Only applicable to constructed inventories: *Verbosity*: Control the level of output that Ansible produces as the playbook runs against the inventory sources associated with constructed inventories.
++
+Select the verbosity from:
+
+* *Normal*
+* *Verbose*
+* *More verbose*
+* *Debug*
+* *Connection Debug*
+* *WinRM Debug*
+
+** *Verbose* logging includes the output of all commands.
+** *More verbose* provides more detail than *Verbose*.
+** *Debug* logging is exceedingly verbose and includes information about SSH operations that can be useful in certain support instances. Most users do not need to see debug mode output.
+//Not sure of this
+** *Connection Debug* enables you to run `ssh` in verbose mode, providing debugging information about the SSH connection progress.
+//Not sure of this.
+** *WinRM Debug* provides verbosity specific to Windows Remote Management (WinRM).
++
+Click the image:arrow.png[Expand,15,15] icon for information on *How to use the constructed inventory plugin*.
+* Only applicable to constructed inventories: *Limit*: Restricts the number of returned hosts for the inventory source associated with the constructed inventory.
+You can paste a group name into the limit field to only include hosts in that group.
+For more information, see the *Source vars* setting.
+* Only applicable to standard inventories: *Options*: Check the *Prevent Instance Group Fallback* option so that only the instance groups listed in the *Instance Groups* field can run the job.
+If unchecked, all available instances in the execution pool are used based on the hierarchy.
+
+* *Variables* (*Source vars* for constructed inventories):
+
+** *Variables*: Variable definitions and values to apply to all hosts in this inventory.
+Enter variables by using either JSON or YAML syntax.
+Use the radio button to toggle between the two.
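++
+For example, host variables entered in YAML syntax might look like the following sketch; the variable names and values are illustrative only:
++
+[source,yaml]
+----
+ntp_server: ntp.example.com
+backup_enabled: true
+max_connections: 25
+----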
+** *Source vars* for constructed inventories creates groups, specifically under the `groups` key of the data.
+It accepts Jinja2 template syntax, renders it for every host, makes a `true` or `false` evaluation, and includes the host in the group (from the key of the entry) if the result is `true`.
+This is particularly useful because you can paste that group name into the limit field to only include hosts in that group.
+//See Example 1 in xref:ref-controller-smart-host-filter[Smart host filters].
+. Click btn:[Create inventory].
+
+After saving the new inventory, you can proceed with configuring permissions, groups, hosts, and sources, and with viewing completed jobs, if applicable to the type of inventory.
diff --git a/downstream/modules/platform/proc-controller-define-filter-with-facts.adoc b/downstream/archive/archived-modules/platform/proc-controller-define-filter-with-facts.adoc
similarity index 89%
rename from downstream/modules/platform/proc-controller-define-filter-with-facts.adoc
rename to downstream/archive/archived-modules/platform/proc-controller-define-filter-with-facts.adoc
index bfe7fb9671..5ee2c74ca4 100644
--- a/downstream/modules/platform/proc-controller-define-filter-with-facts.adoc
+++ b/downstream/archive/archived-modules/platform/proc-controller-define-filter-with-facts.adoc
@@ -6,8 +6,8 @@ Use the following procedure to use `ansible_facts` to define the host filter whe
.Procedure
. From the navigation panel, select {MenuInfrastructureInventories}.
-. Select *Add Smart Inventory* from *Add* list.
-. In the *Create new smart inventory* page, click the image:search.png[Search,15,15] icon in the *Smart host filter* field.
+. Select *Create Smart Inventory* from the *Create inventory* list.
+//. In the *Create Smart Inventory* page, click the image:search.png[Search,15,15] icon in the *Smart host filter* field.
This opens a window to filter hosts for this inventory.
+
image:define_host_filter.png[Define host filter]
diff --git a/downstream/modules/platform/proc-controller-edit-an-organization.adoc b/downstream/archive/archived-modules/platform/proc-controller-edit-an-organization.adoc
similarity index 100%
rename from downstream/modules/platform/proc-controller-edit-an-organization.adoc
rename to downstream/archive/archived-modules/platform/proc-controller-edit-an-organization.adoc
diff --git a/downstream/modules/platform/proc-controller-edit-credential.adoc b/downstream/archive/archived-modules/platform/proc-controller-edit-credential.adoc
similarity index 86%
rename from downstream/modules/platform/proc-controller-edit-credential.adoc
rename to downstream/archive/archived-modules/platform/proc-controller-edit-credential.adoc
index 78e6695908..9309dd2d4d 100644
--- a/downstream/modules/platform/proc-controller-edit-credential.adoc
+++ b/downstream/archive/archived-modules/platform/proc-controller-edit-credential.adoc
@@ -8,5 +8,5 @@ As part of the initial setup, you can leave the default *Demo Credential* as it
. Edit the credential by using one of these methods:
** Go to the credential Details page and click btn:[Edit].
-** From the navigation panel, select {MenuAMCredentials}. Click btn:[Edit] next to the credential name and edit the appropriate details.
+** From the navigation panel, select {MenuAECredentials}. Click btn:[Edit] next to the credential name and edit the appropriate details.
. Save your changes.
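You can also manage credentials as code with the `ansible.controller` collection. The following is a minimal, hypothetical sketch rather than part of the documented procedure; the credential name, organization, and input values are illustrative, and the module options should be verified against the collection documentation for your version:

[source,yaml]
----
- name: Ensure a machine credential exists with updated details
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create or update the credential
      ansible.controller.credential:
        name: Demo Credential        # hypothetical credential name
        organization: Default        # hypothetical organization
        credential_type: Machine
        inputs:
          username: cloud-user       # illustrative input value
        state: present
----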
diff --git a/downstream/modules/platform/proc-controller-enable-logging-LDAP.adoc b/downstream/archive/archived-modules/platform/proc-controller-enable-logging-LDAP.adoc
similarity index 100%
rename from downstream/modules/platform/proc-controller-enable-logging-LDAP.adoc
rename to downstream/archive/archived-modules/platform/proc-controller-enable-logging-LDAP.adoc
diff --git a/downstream/modules/platform/proc-controller-host-metrics.adoc b/downstream/archive/archived-modules/platform/proc-controller-host-metrics.adoc
similarity index 100%
rename from downstream/modules/platform/proc-controller-host-metrics.adoc
rename to downstream/archive/archived-modules/platform/proc-controller-host-metrics.adoc
diff --git a/downstream/modules/platform/proc-controller-import-CA-cert-LDAP.adoc b/downstream/archive/archived-modules/platform/proc-controller-import-CA-cert-LDAP.adoc
similarity index 100%
rename from downstream/modules/platform/proc-controller-import-CA-cert-LDAP.adoc
rename to downstream/archive/archived-modules/platform/proc-controller-import-CA-cert-LDAP.adoc
diff --git a/downstream/modules/platform/proc-controller-importing-subscriptions.adoc b/downstream/archive/archived-modules/platform/proc-controller-importing-subscriptions.adoc
similarity index 71%
rename from downstream/modules/platform/proc-controller-importing-subscriptions.adoc
rename to downstream/archive/archived-modules/platform/proc-controller-importing-subscriptions.adoc
index fbf13c0e34..5f4fc8c333 100644
--- a/downstream/modules/platform/proc-controller-importing-subscriptions.adoc
+++ b/downstream/archive/archived-modules/platform/proc-controller-importing-subscriptions.adoc
@@ -3,6 +3,18 @@
= Importing a subscription
After you have obtained an authorized {PlatformNameShort} subscription, you must import it into the {ControllerName} system before you can use {ControllerName}.
+
+[NOTE]
+====
+You are opted in to {Analytics} by default when you activate {ControllerName} at first login. This helps Red Hat improve the product and deliver a better user experience. You can opt out by doing the following:
+
+. From the navigation panel, select menu:Settings[] and select the *Miscellaneous System settings* option.
+. Click btn:[Edit].
+. Toggle the *Gather data for Automation Analytics* switch to the off position.
+. Click btn:[Save].
+
+For opt-in of {Analytics} to be effective, your instance of {ControllerName} must be running on {RHEL}.
+For more information, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#ref-controller-automation-analytics[{Analytics}] section.
+====
+
.Prerequisites
* You have obtained a subscriptions manifest.
@@ -34,25 +46,26 @@ After you enter your credentials, click btn:[Get Subscriptions].
Then, it prompts you to select the subscription that you want to run and applies that metadata to {ControllerName}.
You can log in over time and retrieve new subscriptions if you have renewed.
+
-. Click btn:[Next] to proceed to the *Tracking and Insights* page.
-+
-Tracking and insights collect data to help Red Hat improve the product and deliver a better user experience.
-For more information about data collection, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#controller-usability-analytics-data-collection[Usability Analytics and Data Collection] of the _{ControllerAG}_.
-+ -This option is checked by default, but you can opt out of any of the following: +. Click btn:[Next] to proceed to the End User Agreement. +//[ddacosta - removed analytics selection for AAP-30863 and AAP-29909] to proceed to the *Tracking and Insights* page. +//+ +//Tracking and insights collect data to help Red Hat improve the product and deliver a better user experience. +//For more information about data collection, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#controller-usability-analytics-data-collection[Usability Analytics and Data Collection] of the _{ControllerAG}_. +//+ +//This option is checked by default, but you can opt out of any of the following: //* *User analytics*. Collects data from the controller UI. -* *Insights Analytics*. Provides a high level analysis of your automation with {ControllerName}. -It helps you to identify trends and anomalous use of the controller. -For opt-in of {Analytics} to be effective, your instance of {ControllerName} must be running on {RHEL}. -For more information, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#ref-controller-automation-analytics[{Analytics}] section. -+ -[NOTE] -==== -You can change your analytics data collection preferences at any time. -==== -+ -. After you have specified your tracking and Insights preferences, click btn:[Next] to proceed to the End User Agreement. +//* *Insights Analytics*. Provides a high level analysis of your automation with {ControllerName}. +//It helps you to identify trends and anomalous use of the controller. +//For opt-in of {Analytics} to be effective, your instance of {ControllerName} must be running on {RHEL}. +//For more information, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/configuring_automation_execution/index#ref-controller-automation-analytics[{Analytics}] section of _{ControllerAG}_. +//+ +//[NOTE] +//==== +//You can change your analytics data collection preferences at any time. +//==== +//+ +//. After you have specified your tracking and Insights preferences, click btn:[Next] . Review and check the *I agree to the End User License Agreement* checkbox and click btn:[Submit]. + After your subscription is accepted, {ControllerName} displays the subscription details and opens the Dashboard. 
diff --git a/downstream/modules/platform/proc-controller-jobs-view.adoc b/downstream/archive/archived-modules/platform/proc-controller-jobs-view.adoc similarity index 100% rename from downstream/modules/platform/proc-controller-jobs-view.adoc rename to downstream/archive/archived-modules/platform/proc-controller-jobs-view.adoc diff --git a/downstream/modules/platform/proc-controller-manage-instances.adoc b/downstream/archive/archived-modules/platform/proc-controller-manage-instances.adoc similarity index 100% rename from downstream/modules/platform/proc-controller-manage-instances.adoc rename to downstream/archive/archived-modules/platform/proc-controller-manage-instances.adoc diff --git a/downstream/modules/platform/proc-controller-multi-vault-credentials.adoc b/downstream/archive/archived-modules/platform/proc-controller-multi-vault-credentials.adoc similarity index 100% rename from downstream/modules/platform/proc-controller-multi-vault-credentials.adoc rename to downstream/archive/archived-modules/platform/proc-controller-multi-vault-credentials.adoc diff --git a/downstream/modules/platform/proc-controller-obtaining-subscriptions-manifest.adoc b/downstream/archive/archived-modules/platform/proc-controller-obtaining-subscriptions-manifest.adoc similarity index 95% rename from downstream/modules/platform/proc-controller-obtaining-subscriptions-manifest.adoc rename to downstream/archive/archived-modules/platform/proc-controller-obtaining-subscriptions-manifest.adoc index cd6a46e11d..5202cd73c6 100644 --- a/downstream/modules/platform/proc-controller-obtaining-subscriptions-manifest.adoc +++ b/downstream/archive/archived-modules/platform/proc-controller-obtaining-subscriptions-manifest.adoc @@ -5,7 +5,7 @@ To upload a subscriptions manifest, first set up your subscription allocations: .Procedure -. Navigate to https://access.redhat.com/management/subscription_allocations. +. Go to https://access.redhat.com/management/subscription_allocations. The *Subscriptions Allocations* page has no subscriptions until you create one. //+ //image::subscription-allocations-empty.png[Subscriptions allocation] @@ -60,9 +60,5 @@ A folder pre-pended with `manifest_` is downloaded to your local drive. Multiple subscriptions with the same SKU are aggregated. . When you have a subscription manifest, go to the Subscription screen. . Click btn:[Browse] to upload the entire manifest file. -. Navigate to the location where the file is saved. +. Go to the location where the file is saved. Do not open it or upload individual parts of it. - - - - diff --git a/downstream/modules/platform/proc-controller-obtaining-subscriptions.adoc b/downstream/archive/archived-modules/platform/proc-controller-obtaining-subscriptions.adoc similarity index 97% rename from downstream/modules/platform/proc-controller-obtaining-subscriptions.adoc rename to downstream/archive/archived-modules/platform/proc-controller-obtaining-subscriptions.adoc index e7d3ac4f82..de661f0fc1 100644 --- a/downstream/modules/platform/proc-controller-obtaining-subscriptions.adoc +++ b/downstream/archive/archived-modules/platform/proc-controller-obtaining-subscriptions.adoc @@ -20,7 +20,7 @@ endif::controller-GS,controller-AG[] ** Enter your username and password on the license page. ** Obtain a subscriptions manifest from the link:https://access.redhat.com/management/subscription_allocations[Subscription Allocations] page on the Red Hat Customer Portal. 
ifdef::controller-GS,controller-AG[] -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#proc-controller-obtaining-subscriptions-manifest[Obtaining a subscriptions manifest] in the _{ControllerUG}_. +For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#proc-controller-obtaining-subscriptions-manifest[Obtaining a subscriptions manifest] in _{ControllerUG}_. endif::controller-GS,controller-AG[] ifdef::controller-UG[] For more information, see xref:proc-controller-obtaining-subscriptions-manifest[Obtaining a subscriptions manifest]. diff --git a/downstream/modules/platform/proc-controller-prevent-LDAP-attributes.adoc b/downstream/archive/archived-modules/platform/proc-controller-prevent-LDAP-attributes.adoc similarity index 100% rename from downstream/modules/platform/proc-controller-prevent-LDAP-attributes.adoc rename to downstream/archive/archived-modules/platform/proc-controller-prevent-LDAP-attributes.adoc diff --git a/downstream/modules/platform/proc-controller-run-job-template.adoc b/downstream/archive/archived-modules/platform/proc-controller-run-job-template.adoc similarity index 94% rename from downstream/modules/platform/proc-controller-run-job-template.adoc rename to downstream/archive/archived-modules/platform/proc-controller-run-job-template.adoc index e91e5173d8..3c9b992825 100644 --- a/downstream/modules/platform/proc-controller-run-job-template.adoc +++ b/downstream/archive/archived-modules/platform/proc-controller-run-job-template.adoc @@ -2,8 +2,8 @@ = Running a job template -A benefit of {ControllerName} is the push-button deployment of Ansible playbooks. -You can configure a template to store all the parameters that you would normally pass to the Ansible playbook on the command line. +A benefit of {ControllerName} is the push-button deployment of Ansible Playbooks. +You can configure a template to store all the parameters that you would normally pass to the Ansible Playbook on the command line. In addition to the playbooks, the template passes the inventory, credentials, extra variables, and all options and settings that you can specify on the command line. .Procedure @@ -14,7 +14,7 @@ image::controller-gs-job-templates-launch.png[Launch template] The initial job start generates a status page, which updates automatically by using {ControllerName}'s Live Event feature, until the job is complete. -For more information about the job results, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#controller-jobs[Jobs in automation controller] in the _{ControllerUG}_. +For more information about the job results, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#controller-jobs[Jobs in automation controller] in _{ControllerUG}_. 
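To illustrate what a template encapsulates, the following minimal sketch defines a job template as code by using the `ansible.controller.job_template` module. This is an illustration rather than the documented procedure; the organization, project, inventory, playbook, and credential names are hypothetical:

[source,yaml]
----
- name: Define a job template that stores command-line style parameters
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Create or update the job template
      ansible.controller.job_template:
        name: Demo Job Template          # hypothetical template name
        organization: Default            # hypothetical organization
        project: Demo Project            # hypothetical project
        inventory: Demo Inventory        # hypothetical inventory
        playbook: hello_world.yml        # hypothetical playbook in the project
        credentials:
          - Demo Credential              # hypothetical credential
        extra_vars:
          target_env: dev                # illustrative extra variable
        state: present
----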
.Additional resources diff --git a/downstream/modules/platform/proc-controller-schedules-view.adoc b/downstream/archive/archived-modules/platform/proc-controller-schedules-view.adoc similarity index 100% rename from downstream/modules/platform/proc-controller-schedules-view.adoc rename to downstream/archive/archived-modules/platform/proc-controller-schedules-view.adoc diff --git a/downstream/modules/platform/proc-controller-team-add-permissions.adoc b/downstream/archive/archived-modules/platform/proc-controller-team-add-permissions.adoc similarity index 100% rename from downstream/modules/platform/proc-controller-team-add-permissions.adoc rename to downstream/archive/archived-modules/platform/proc-controller-team-add-permissions.adoc diff --git a/downstream/modules/platform/proc-controller-token-scope-mask-rbac.adoc b/downstream/archive/archived-modules/platform/proc-controller-token-scope-mask-rbac.adoc similarity index 100% rename from downstream/modules/platform/proc-controller-token-scope-mask-rbac.adoc rename to downstream/archive/archived-modules/platform/proc-controller-token-scope-mask-rbac.adoc diff --git a/downstream/modules/platform/proc-controller-user-tokens.adoc b/downstream/archive/archived-modules/platform/proc-controller-user-tokens.adoc similarity index 100% rename from downstream/modules/platform/proc-controller-user-tokens.adoc rename to downstream/archive/archived-modules/platform/proc-controller-user-tokens.adoc diff --git a/downstream/modules/platform/proc-controller-viewing-dashboard.adoc b/downstream/archive/archived-modules/platform/proc-controller-viewing-dashboard.adoc similarity index 100% rename from downstream/modules/platform/proc-controller-viewing-dashboard.adoc rename to downstream/archive/archived-modules/platform/proc-controller-viewing-dashboard.adoc diff --git a/downstream/modules/platform/proc-controller-work-with-session-limits.adoc b/downstream/archive/archived-modules/platform/proc-controller-work-with-session-limits.adoc similarity index 100% rename from downstream/modules/platform/proc-controller-work-with-session-limits.adoc rename to downstream/archive/archived-modules/platform/proc-controller-work-with-session-limits.adoc diff --git a/downstream/modules/platform/proc-controller-workflow-approvals.adoc b/downstream/archive/archived-modules/platform/proc-controller-workflow-approvals.adoc similarity index 100% rename from downstream/modules/platform/proc-controller-workflow-approvals.adoc rename to downstream/archive/archived-modules/platform/proc-controller-workflow-approvals.adoc diff --git a/downstream/modules/platform/proc-create-a-user.adoc b/downstream/archive/archived-modules/platform/proc-create-a-user.adoc similarity index 98% rename from downstream/modules/platform/proc-create-a-user.adoc rename to downstream/archive/archived-modules/platform/proc-create-a-user.adoc index 09d983dc54..570f26b1a7 100644 --- a/downstream/modules/platform/proc-create-a-user.adoc +++ b/downstream/archive/archived-modules/platform/proc-create-a-user.adoc @@ -6,6 +6,7 @@ This procedure creates a Keycloak user, with the `hubadmin` role, that can log i .Procedure +. Log in to {OCP}. . Navigate to menu:Operator[Installed Operators]. . Select the {OperatorRHSSO} project. . Select the *Keycloak Realm* tab and click btn:[Create Keycloak User]. 
diff --git a/downstream/modules/platform/proc-create-keycloak-client.adoc b/downstream/archive/archived-modules/platform/proc-create-keycloak-client.adoc
similarity index 98%
rename from downstream/modules/platform/proc-create-keycloak-client.adoc
rename to downstream/archive/archived-modules/platform/proc-create-keycloak-client.adoc
index ad9f0f5eb6..45392299a7 100644
--- a/downstream/modules/platform/proc-create-keycloak-client.adoc
+++ b/downstream/archive/archived-modules/platform/proc-create-keycloak-client.adoc
@@ -8,6 +8,7 @@ When Single Sign-On validates or issues the `OAuth` token, the client provides t
.Procedure
+. Log in to {OCP}.
. Navigate to menu:Operator[Installed Operators].
. Select the {OperatorRHSSO} project.
. Select the *Keycloak Client* tab and click btn:[Create Keycloak Client].
@@ -141,6 +142,6 @@ spec:
. Click btn:[Create] and wait for the process to complete.
-When {HubName} is deployed, you must update the client with the “Valid Redirect URIs” and “Web Origins” as described in xref:proc-update-rhsso-client_{context}[Updating the {RHSSO} client]
+After you deploy {HubName}, you must update the client with the “Valid Redirect URIs” and “Web Origins” as described in xref:proc-update-rhsso-client_{context}[Updating the {RHSSO} client].
Additionally, the client comes pre-configured with token mappers; however, if your authentication provider does not provide group data to Red Hat SSO, then the group mapping must be updated to reflect how that information is passed. This is commonly done by user attribute.
diff --git a/downstream/modules/platform/proc-create-keycloak-instance.adoc b/downstream/archive/archived-modules/platform/proc-create-keycloak-instance.adoc
similarity index 85%
rename from downstream/modules/platform/proc-create-keycloak-instance.adoc
rename to downstream/archive/archived-modules/platform/proc-create-keycloak-instance.adoc
index 5f2be0cacf..0175d1d634 100644
--- a/downstream/modules/platform/proc-create-keycloak-instance.adoc
+++ b/downstream/archive/archived-modules/platform/proc-create-keycloak-instance.adoc
@@ -2,14 +2,15 @@
= Creating a Keycloak instance
-When the {OperatorRHSSO} is installed you can create a Keycloak instance for use with {PlatformNameShort}.
+After you install the {OperatorRHSSO}, you can create a Keycloak instance for use with {PlatformNameShort}.
From here, you can provide an external PostgreSQL database, or one is created for you.
.Procedure
+. Log in to {OCP}.
. Navigate to menu:Operator[Installed Operators].
-. Select the `rh-sso` project.
+. Select the *RH-SSO* project.
. Select the *{OperatorRHSSO}*.
. On the {OperatorRHSSO} details page select btn:[Keycloak].
. Click btn:[Create instance].
@@ -33,7 +34,5 @@ spec:
----
+
. Click btn:[Create].
-
. When deployment is complete, you can use this credential to log in to the administrative console.
-
. You can find the credentials for the administrator in the `credential-<custom_resource_name>` (for example, `credential-keycloak`) secret in the namespace.
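For reference, the generated secret typically resembles the following sketch; the name, namespace, and base64 values are illustrative, and the exact keys depend on your operator version:

[source,yaml]
----
apiVersion: v1
kind: Secret
metadata:
  name: credential-keycloak   # credential-<custom_resource_name>
  namespace: rh-sso           # hypothetical namespace
type: Opaque
data:
  ADMIN_USERNAME: YWRtaW4=    # base64 for "admin"
  ADMIN_PASSWORD: c2VjcmV0    # base64 for "secret"
----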
diff --git a/downstream/modules/platform/proc-create-keycloak-realm.adoc b/downstream/archive/archived-modules/platform/proc-create-keycloak-realm.adoc
similarity index 98%
rename from downstream/modules/platform/proc-create-keycloak-realm.adoc
rename to downstream/archive/archived-modules/platform/proc-create-keycloak-realm.adoc
index 062cb16717..df6dce9a87 100644
--- a/downstream/modules/platform/proc-create-keycloak-realm.adoc
+++ b/downstream/archive/archived-modules/platform/proc-create-keycloak-realm.adoc
@@ -8,6 +8,7 @@ Realms are isolated from one another and can only manage and authenticate the us
.Procedure
+. Log in to {OCP}.
. Navigate to menu:Operator[Installed Operators].
. Select the *{OperatorRHSSO}* project.
. Select the *Keycloak Realm* tab and click btn:[Create Keycloak Realm].
diff --git a/downstream/modules/platform/proc-creating-a-secret.adoc b/downstream/archive/archived-modules/platform/proc-creating-a-secret.adoc
similarity index 89%
rename from downstream/modules/platform/proc-creating-a-secret.adoc
rename to downstream/archive/archived-modules/platform/proc-creating-a-secret.adoc
index f5a60b5173..efefaa9855 100644
--- a/downstream/modules/platform/proc-creating-a-secret.adoc
+++ b/downstream/archive/archived-modules/platform/proc-creating-a-secret.adoc
@@ -33,6 +33,6 @@ stringData:
+
<1> This name is used in the next step when creating the {HubName} instance.
<2> If the secret was changed when creating the Keycloak client for {HubName}, be sure to change this value to match.
-<3> Enter the value of the `public_key` copied in xref:proc-installing-the-ansible-platform-operator_{context}[Installing the {PlatformNameShort} Operator].
+<3> Enter the value of the `public_key` for your {OperatorPlatformNameShort} deployment.
. Click btn:[Create] and wait for the process to complete.
diff --git a/downstream/modules/platform/proc-creating-collection-namespace.adoc b/downstream/archive/archived-modules/platform/proc-creating-collection-namespace.adoc
similarity index 93%
rename from downstream/modules/platform/proc-creating-collection-namespace.adoc
rename to downstream/archive/archived-modules/platform/proc-creating-collection-namespace.adoc
index 78a0a369e8..4084ac7c12 100644
--- a/downstream/modules/platform/proc-creating-collection-namespace.adoc
+++ b/downstream/archive/archived-modules/platform/proc-creating-collection-namespace.adoc
@@ -34,7 +34,7 @@ Once the namespace has been created, you can import the collection by using the
. Click btn:[Upload].
-This opens the 'My Imports' page. You can see the status of the import and various details of the files and modules that have been imported.
+This opens the 'My Imports' page. You can see the status of the import and various details of the files and modules that have been imported.
== Importing the collection tarball by using the CLI
diff --git a/downstream/modules/platform/proc-creating-controller-form-view.adoc b/downstream/archive/archived-modules/platform/proc-creating-controller-form-view.adoc
similarity index 100%
rename from downstream/modules/platform/proc-creating-controller-form-view.adoc
rename to downstream/archive/archived-modules/platform/proc-creating-controller-form-view.adoc
diff --git a/downstream/archive/archived-modules/platform/proc-custom-logos-images.adoc b/downstream/archive/archived-modules/platform/proc-custom-logos-images.adoc
new file mode 100644
index 0000000000..fbdf33212f
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-custom-logos-images.adoc
@@ -0,0 +1,26 @@
+[id="proc-custom-logos-images"]
+
+//[ddacosta]Obsolete, this information is provided in the proc-settings-platform-gateway.adoc module now.
+
+= Setting a custom logo
+
+{ControllerNameStart} and {PlatformName} support the use of a custom logo.
+
+You can add a custom logo by uploading an image and supplying a custom login message from the *{GatewayStart} settings* page.
+
+.Procedure
+
+. From the navigation panel, select {MenuSetGateway}.
+. In the *Custom login info* field, provide specific information (such as a legal notice or a disclaimer).
+. In the *Custom logo* field, provide an image file for setting up a custom logo (must be a data URL with a base64-encoded GIF, PNG, or JPEG image).
+
+.Example
+
+You upload a specific logo and add the following text:
+
+image::ag-configure-tower-ui-logo-filled.png[Adding a custom logo]
+
+The {PlatformNameShort} login dialog resembles the following:
+
+image::ag-configure-aap-ui-angry-spud-login.png[Login page with custom logo]
+
diff --git a/downstream/modules/platform/proc-deploy-eda-controller-with-aap-operator-ocp.adoc b/downstream/archive/archived-modules/platform/proc-deploy-eda-controller-with-aap-operator-ocp.adoc
similarity index 82%
rename from downstream/modules/platform/proc-deploy-eda-controller-with-aap-operator-ocp.adoc
rename to downstream/archive/archived-modules/platform/proc-deploy-eda-controller-with-aap-operator-ocp.adoc
index 1a75943d32..99e066b825 100644
--- a/downstream/modules/platform/proc-deploy-eda-controller-with-aap-operator-ocp.adoc
+++ b/downstream/archive/archived-modules/platform/proc-deploy-eda-controller-with-aap-operator-ocp.adoc
@@ -5,7 +5,7 @@
.Prerequisites
-* You have installed {OperatorPlatform} on {OCPShort}.
+* You have installed {OperatorPlatformNameShort} on {OCPShort}.
* You have installed and configured {ControllerName}.
.Procedure
@@ -14,9 +14,22 @@
. Locate and select your installation of {PlatformNameShort}.
-. Under *Provided APIs*, locate the {EDAName} modal and click *Create instance*.
+. Under the *Details* tab, locate the *EDA* modal and click *Create instance*.
+
+. Click btn:[Form view], and in the *Name* field, enter the name you want for your new {EDAcontroller} deployment.
+
-This takes you to the Form View to customize your installation.
+[IMPORTANT]
+====
+If you have installed other {PlatformNameShort} components in your current {OCPShort} namespace, ensure that you provide a unique name for your {EDAcontroller} when you create your {EDAName} custom resource. Otherwise, naming conflicts can occur and impact {EDAcontroller} deployment.
+====
+. Specify your controller URL in the *Automation Server URL* field.
++
+If you deployed {ControllerName} in OpenShift as well, you can find the URL in the navigation panel under menu:Networking[Routes].
++
+[NOTE]
+====
+This is the only required customization, but you can customize other options using the UI form or directly in the YAML configuration tab, if desired.
+====
+
[IMPORTANT]
====
@@ -35,34 +48,16 @@ extra_settings:
    value: '12'
----
+
-. Click btn:[Reload] and btn:[Save]. Return to the *Form* view.
-
-. In the *Name* field, enter the name you want for your new {EDAcontroller} deployment.
-+
-[IMPORTANT]
-====
-If you have other {PlatformNameShort} components installed in your current {OCPShort} namespace, ensure that you provide a unique name for your {EDAcontroller} when you create your {EDAName} custom resource. Otherwise, naming conflicts can occur and impact {EDAcontroller} deployment.
-====
-+
-. Specify your controller URL.
-+
-If you deployed {ControllerName} in Openshift as well, you can find the URL in the navigation panel under menu:Networking[Routes].
-+
-[NOTE]
-====
-This is the only required customization, but you can customize other options using the UI form or directly in the YAML configuration tab, if desired.
-====
-
. Click btn:[Create]. This deploys {EDAcontroller} in the namespace you specified.
+
-After a couple minutes when the installation is marked as *Successful*, you can find the URL for the {EDAName} UI on the *Routes* page in the Openshift UI.
+After a couple of minutes, when the installation is marked as *Successful*, you can find the URL for the {EDAName} UI on the *Routes* page in the OpenShift UI.
. From the navigation panel, select menu:Networking[Routes] to find the new Route URL that has been created for you.
+
Routes are listed according to the name of your custom resource.
-. Click the new URL to navigate to {EDAName} in the browser.
+. Click the new URL under the *Location* column to navigate to {EDAName} in the browser.
. From the navigation panel, select menu:Workloads[Secrets] and locate the Admin Password k8s secret that was created for you, unless you specified a custom one.
+
diff --git a/downstream/modules/platform/proc-determine-hub-route.adoc b/downstream/archive/archived-modules/platform/proc-determine-hub-route.adoc
similarity index 95%
rename from downstream/modules/platform/proc-determine-hub-route.adoc
rename to downstream/archive/archived-modules/platform/proc-determine-hub-route.adoc
index f1ad4e5b19..9d1257d4ec 100644
--- a/downstream/modules/platform/proc-determine-hub-route.adoc
+++ b/downstream/archive/archived-modules/platform/proc-determine-hub-route.adoc
@@ -6,6 +6,7 @@ Use the following procedure to determine the hub route.
.Procedure
+. Log in to {OCP}.
. Navigate to menu:Networking[Routes].
. Select the project you used for the install.
. Copy the location of the `private-ah-web-svc` service.
diff --git a/downstream/archive/archived-modules/platform/proc-edge-manager-access-devices-cli.adoc b/downstream/archive/archived-modules/platform/proc-edge-manager-access-devices-cli.adoc
new file mode 100644
index 0000000000..7f84e7cdc2
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-edge-manager-access-devices-cli.adoc
@@ -0,0 +1,17 @@
+[id="edge-manager-access-devices-cli"]
+
+= Accessing devices on the CLI
+
+Access and manage devices directly through the CLI, enabling you to perform tasks remotely and efficiently.
+
+.Procedure
+
+* To connect, use the `flightctl console` command, specifying the device's name. The agent establishes the console connection the next time it calls home (pull mode) or instantaneously (push mode):
+
+[source,console]
+----
+flightctl console <device_name>
+----
+
+* To disconnect, enter `exit` on the console.
+To force-disconnect, press `<Ctrl>+b` three times.
diff --git a/downstream/archive/archived-modules/platform/proc-edge-manager-bootc.adoc b/downstream/archive/archived-modules/platform/proc-edge-manager-bootc.adoc
new file mode 100644
index 0000000000..a775578e21
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-edge-manager-bootc.adoc
@@ -0,0 +1,13 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-bootc"]
+
+= Deploying the {RedHatEdge} using the bootc image appliance
+
+For environments where you prefer a pre-configured appliance, you can use the bootc image appliance.
+The deployment of this image provides you with a ready-to-use system that includes {RedHatEdge} pre-installed.
+
+.Procedure
+
+. Provision the bootc image by using your preferred method, such as a virtualization platform or cloud service.
+. After provisioning, follow link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_image_mode_for_rhel_to_build_deploy_and_manage_operating_systems/managing-rhel-bootc-images[Managing RHEL bootc images] to ensure {RedHatEdge} is running.
diff --git a/downstream/archive/archived-modules/platform/proc-edge-manager-manage-apps-ui.adoc b/downstream/archive/archived-modules/platform/proc-edge-manager-manage-apps-ui.adoc
new file mode 100644
index 0000000000..ecf42d6d11
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-edge-manager-manage-apps-ui.adoc
@@ -0,0 +1,5 @@
+[id="edge-manager-manage-apps-ui"]
+
+= Managing applications on the web UI
+
+
diff --git a/downstream/archive/archived-modules/platform/proc-edge-manager-monitor-device-resources-web-ui.adoc b/downstream/archive/archived-modules/platform/proc-edge-manager-monitor-device-resources-web-ui.adoc
new file mode 100644
index 0000000000..483b66f5f5
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-edge-manager-monitor-device-resources-web-ui.adoc
@@ -0,0 +1,5 @@
+[id="edge-manager-monitor-device-resources-web-ui"]
+
+= Monitoring device resources on the web UI
+
+
diff --git a/downstream/archive/archived-modules/platform/proc-gs-auto-dev-run-template.adoc b/downstream/archive/archived-modules/platform/proc-gs-auto-dev-run-template.adoc
new file mode 100644
index 0000000000..a249c15553
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-gs-auto-dev-run-template.adoc
@@ -0,0 +1,9 @@
+[id="proc-gs-auto-dev-run-template"]
+
+= Running a job template
+
+One benefit of {ControllerName} is the push-button deployment of Ansible playbooks.
+You can configure a template to store all the parameters that you would normally pass to the Ansible playbook on the command line.
+In addition to the playbooks, the template passes the inventory, credentials, extra variables, and all options and settings that you can specify on the command line.
+
+//ADD CONTENT
\ No newline at end of file
diff --git a/downstream/archive/archived-modules/platform/proc-gs-browse-content.adoc b/downstream/archive/archived-modules/platform/proc-gs-browse-content.adoc
new file mode 100644
index 0000000000..e55a2c080b
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-gs-browse-content.adoc
@@ -0,0 +1,21 @@
+[id="con-gs-browse-content_{context}"]
+
+= Browse content
+
+{CertifiedName} are included in your subscription to {PlatformName}.
+Using {HubNameMain}, you can access and curate a unique set of collections from all forms of Ansible content.
+
+Red Hat Ansible content contains two types of content:
+
+* {CertifiedName}
+* {Valid} collections
+
+{Valid} collections are available in your {PrivateHubName} through the Platform Installer.
+When you download {PlatformNameShort} with the bundled installer, validated content is pre-populated into the {PrivateHubName} by default,
+but only if you enable the {PrivateHubName} as part of the inventory.
+
+If you are not using the bundle installer, you can use a Red Hat-supplied Ansible Playbook to install validated content.
+For further information, see Ansible validated content.
+
+You can update validated collections manually by downloading their updated packages in {HubName}.
+
diff --git a/downstream/archive/archived-modules/platform/proc-gs-creating-a-role.adoc b/downstream/archive/archived-modules/platform/proc-gs-creating-a-role.adoc
new file mode 100644
index 0000000000..027b222ac4
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-gs-creating-a-role.adoc
@@ -0,0 +1,52 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2024-09-19
+:_mod-docs-content-type: PROCEDURE
+
+[id="gs-creating-a-role_{context}"]
+= Creating a role
+
+.Procedure
+
+. In your terminal, navigate to the roles directory inside a collection.
+. Create a role called `my_role` inside the collection:
++
+[source,bash]
+----
+$ ansible-galaxy role init my_role
+----
++
+The collection now includes a role named `my_role` inside the `roles` directory, as you can see in this example:
++
+[source,bash]
+----
+~/.ansible/collections/ansible_collections/<namespace>/<collection_name>/
+    ...
+    └── roles/
+        └── my_role/
+            ├── .travis.yml
+            ├── README.md
+            ├── defaults/
+            │   └── main.yml
+            ├── files/
+            ├── handlers/
+            │   └── main.yml
+            ├── meta/
+            │   └── main.yml
+            ├── tasks/
+            │   └── main.yml
+            ├── templates/
+            ├── tests/
+            │   ├── inventory
+            │   └── test.yml
+            └── vars/
+                └── main.yml
+----
++
+. A custom role skeleton directory can be supplied by using the `--role-skeleton` argument. This allows organizations to create standardized templates for new roles to suit their needs.
++
+[source,bash]
+----
+$ ansible-galaxy role init my_role --role-skeleton ~/role_skeleton
+----
++
+This creates a role named `my_role` by copying the contents of `~/role_skeleton` into `my_role`. The contents of `role_skeleton` can be any files or folders that are valid inside a role directory.
diff --git a/downstream/archive/archived-modules/platform/proc-gw-configure-auth-details.adoc b/downstream/archive/archived-modules/platform/proc-gw-configure-auth-details.adoc
new file mode 100644
index 0000000000..7f9b71649a
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-gw-configure-auth-details.adoc
@@ -0,0 +1,42 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-configure-auth-details"]
+
+= Configuring authentication details
+
+Different authenticator plugins require different types of information.
See the respective sections in xref:gw-config-authentication-type[Configuring an authentication type] for the required details.
+
+For all authentication types, you can enter a *Name*, *Additional Authenticator Fields*, and *Create Objects*.
+
+.Procedure
+
+. Enter a unique *Name* for the authenticator. The name is required, must be unique across all authenticators, and must not be longer than 512 characters. This becomes the unique identifier generated for the authenticator.
++
+[NOTE]
+====
+Changing the name does not update the unique identifier of the authenticator. For example, if you create an authenticator with the name “My Authenticator” and later change it to “My LDAP Authenticator”, you will not be able to create another authenticator with the name “My Authenticator” because the unique identifier is still in use.
+====
++
+. Use the *Additional Authenticator Fields* to send arbitrary data back to the libraries behind the authenticators. This is an advanced feature and any values provided in this field are not validated.
++
+[NOTE]
+====
+Values defined in this field override the dedicated fields provided in the UI. For example, if you enter a URL in a dedicated field on this page and then add a URL entry into the Additional Authentication Fields, the URL defined in Additional Authentication Fields overrides the definition in the dedicated field.
+====
++
+. Enable or disable *Enabled* to specify whether the authenticator is enabled or disabled. If enabled, users can log in from the authenticator. If disabled, users are not allowed to log in from the authenticator.
+. Enable or disable *Create Object* to specify whether the authenticator should create teams and organizations in the system when a user logs in.
++
+Enabled:: Teams and organizations defined in the authenticator maps are created and the users added to them.
+Disabled:: Organizations and teams defined in the authenticator maps are not created automatically in the system. However, if they already exist (that is, created by a superuser), users who trigger the maps are granted access to them.
++
+. Enable or disable *Remove Users*. If enabled, any access previously granted to a user is removed when they authenticate from this source. If disabled, permissions are only added or removed from the user based on the results of this authenticator's authenticator mappings.
++
+For example, assume a user has been granted the `is_superuser` permission in the system, and that user logs in through an authenticator whose maps do not formulate an opinion as to whether the user should be a superuser.
+If *Remove Users* is enabled, the `is_superuser` permission is removed from the user; because the authenticator maps offer no opinion as to whether it should be present, after login the user does not have the `is_superuser` permission.
++
+If *Remove Users* is disabled, the `is_superuser` permission _is not_ removed from the user; because the authenticator maps offer no opinion as to whether it should be present, after login the user _still has_ the `is_superuser` permission.
++
+. Click btn:[Create Authentication Method].
+. Select the *Mapping* tab.
+. Click btn:[Create mapping] and proceed to xref:gw-define-rules-triggers[Define authentication mapping rules and triggers].
\ No newline at end of file
diff --git a/downstream/archive/archived-modules/platform/proc-gw-review-auth-settings.adoc b/downstream/archive/archived-modules/platform/proc-gw-review-auth-settings.adoc
new file mode 100644
index 0000000000..b6118d2430
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-gw-review-auth-settings.adoc
@@ -0,0 +1,14 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-review-auth-settings"]
+
+= Reviewing the authentication settings
+
+After you have defined the authentication details, configured the authentication maps, and specified the mapping order precedence, you can review, verify, or modify the settings before creating the authenticator.
+
+.Procedure
+
+. Review and verify the authentication settings.
+. Click btn:[Finish] to create the authenticator.
++
+A notification is displayed if there are any issues with the authenticator or the map. If you encounter issues, click btn:[Back] or select a wizard section from the wizard menu to go back and add missing data or correct inaccurate data.
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-importing-collections-into-private-automation-hub.adoc b/downstream/archive/archived-modules/platform/proc-importing-collections-into-private-automation-hub.adoc
similarity index 100%
rename from downstream/modules/platform/proc-importing-collections-into-private-automation-hub.adoc
rename to downstream/archive/archived-modules/platform/proc-importing-collections-into-private-automation-hub.adoc
diff --git a/downstream/modules/platform/proc-install-ansible-colls.adoc b/downstream/archive/archived-modules/platform/proc-install-ansible-colls.adoc
similarity index 100%
rename from downstream/modules/platform/proc-install-ansible-colls.adoc
rename to downstream/archive/archived-modules/platform/proc-install-ansible-colls.adoc
diff --git a/downstream/archive/archived-modules/platform/proc-installing-ansible-core.adoc b/downstream/archive/archived-modules/platform/proc-installing-ansible-core.adoc
new file mode 100644
index 0000000000..e970b6d2d4
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-installing-ansible-core.adoc
@@ -0,0 +1,24 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="installing-ansible-core_{context}"]
+
+= Installing ansible-core on the RHEL host
+
+.Procedure
+. From your RHEL host, install `ansible-core`:
++
+----
+sudo dnf install -y ansible-core
+----
++
+. Optionally, you can install additional utilities that can be useful for troubleshooting purposes, for example `wget`, `git`, `rsync`, and `vim`:
++
+----
+sudo dnf install -y wget git rsync vim
+----
++
+. Set a hostname that is a fully qualified domain name (FQDN):
++
+----
+sudo hostnamectl set-hostname <your_hostname>
+----
diff --git a/downstream/modules/platform/proc-installing-hub-using-operator.adoc b/downstream/archive/archived-modules/platform/proc-installing-hub-using-operator.adoc
similarity index 87%
rename from downstream/modules/platform/proc-installing-hub-using-operator.adoc
rename to downstream/archive/archived-modules/platform/proc-installing-hub-using-operator.adoc
index 12453aee41..5de9ad444d 100644
--- a/downstream/modules/platform/proc-installing-hub-using-operator.adoc
+++ b/downstream/archive/archived-modules/platform/proc-installing-hub-using-operator.adoc
@@ -1,14 +1,16 @@
[id="proc-installing-hub-using-operator_{context}"]
-= Installing {HubName} using the Operator
+= Installing {HubName} using the {OperatorPlatformNameShort}
-Use the following procedure to install {HubName} using the operator.
+Use the following procedure to install {HubName} using the {OperatorPlatformNameShort}.
.Procedure
+. Log in to {OCP}.
. Navigate to menu:Operator[Installed Operators].
-. Select the {PlatformNameShort}.
-. Select the {HubNameStart} tab and click btn:[Create {HubNameStart}].
+. Select your {OperatorPlatformNameShort} deployment.
+. Select the {HubNameStart} tab.
+. Click btn:[Create {HubNameStart}].
. Select btn:[YAML view]. The YAML should be similar to:
+
diff --git a/downstream/modules/platform/proc-installing-the-ansible-builder-rpm.adoc b/downstream/archive/archived-modules/platform/proc-installing-the-ansible-builder-rpm.adoc
similarity index 70%
rename from downstream/modules/platform/proc-installing-the-ansible-builder-rpm.adoc
rename to downstream/archive/archived-modules/platform/proc-installing-the-ansible-builder-rpm.adoc
index eae85c1fa0..12260dbb6f 100644
--- a/downstream/modules/platform/proc-installing-the-ansible-builder-rpm.adoc
+++ b/downstream/archive/archived-modules/platform/proc-installing-the-ansible-builder-rpm.adoc
@@ -8,7 +8,7 @@
[role="_abstract"]
-On the RHEL system where custom {ExecEnvShort}s will be built, you will install the {Builder} RPM using a Satellite server that already exists in the environment. This method is preferred because the {ExecEnvShort} images can use any RHEL content from the pre-existing Satellite if required.
+On the RHEL system where custom {ExecEnvShort}s will be built, you will install the {Builder} RPM by using a Satellite Server that already exists in the environment. This method is preferred because the {ExecEnvShort} images can use any RHEL content from the pre-existing Satellite if required.
.Procedure
@@ -17,11 +17,11 @@ On the RHEL system where custom {ExecEnvShort}s will be built, you will install
.. Subscribe the RHEL system to a Satellite on the disconnected network.
-.. Attach the {PlatformNameShort} subscription and enable the AAP repository. The repository name will either be `ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms` or `ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms` depending on the version of RHEL used on the underlying system.
+.. Attach the {PlatformNameShort} subscription and enable the {PlatformNameShort} repository. The repository name is either `ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms` or `ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms` depending on the version of RHEL used on the underlying system.
-.. Install the {Builder} RPM. The version of the {Builder} RPM must be 3.0.0 or later in order for the examples below to work properly.
+.. Install the {Builder} RPM.
The version of the {Builder} RPM must be 3.0.0 or later in order for the examples below to work properly. -. Install the {Builder} RPM from the {PlatformNameShort} setup bundle. Use this method if a Satellite server is not available on your disconnected network. +. Install the {Builder} RPM from the {PlatformNameShort} setup bundle. Use this method if a Satellite Server is not available on your disconnected network. .. Unarchive the {PlatformNameShort} setup bundle. diff --git a/downstream/modules/platform/proc-installing-the-ansible-platform-operator.adoc b/downstream/archive/archived-modules/platform/proc-installing-the-ansible-platform-operator.adoc similarity index 78% rename from downstream/modules/platform/proc-installing-the-ansible-platform-operator.adoc rename to downstream/archive/archived-modules/platform/proc-installing-the-ansible-platform-operator.adoc index 72d014dadb..7538bb67d0 100644 --- a/downstream/modules/platform/proc-installing-the-ansible-platform-operator.adoc +++ b/downstream/archive/archived-modules/platform/proc-installing-the-ansible-platform-operator.adoc @@ -4,8 +4,10 @@ .Procedure -. Navigate to menu:Operator[Operator Hub] and search for the {PlatformNameShort} Operator. -. Select the {PlatformNameShort} Operator project. +. Log in to {OCP}. +. Navigate to menu:Operator[Operator Hub]. +. Search for the {OperatorPlatformNameShort}. +. Select the {OperatorPlatformNameShort} project. . Click on the Operator tile. . Click btn:[Install]. . Select a Project to install the Operator into. diff --git a/downstream/modules/platform/proc-migrate-playbooks-roles.adoc b/downstream/archive/archived-modules/platform/proc-migrate-playbooks-roles.adoc similarity index 100% rename from downstream/modules/platform/proc-migrate-playbooks-roles.adoc rename to downstream/archive/archived-modules/platform/proc-migrate-playbooks-roles.adoc diff --git a/downstream/modules/platform/proc-new-aap-instance-upgrade.adoc b/downstream/archive/archived-modules/platform/proc-new-aap-instance-upgrade.adoc similarity index 100% rename from downstream/modules/platform/proc-new-aap-instance-upgrade.adoc rename to downstream/archive/archived-modules/platform/proc-new-aap-instance-upgrade.adoc diff --git a/downstream/archive/archived-modules/platform/proc-operator-upgrade-external-db-gateway.adoc b/downstream/archive/archived-modules/platform/proc-operator-upgrade-external-db-gateway.adoc new file mode 100644 index 0000000000..850395642c --- /dev/null +++ b/downstream/archive/archived-modules/platform/proc-operator-upgrade-external-db-gateway.adoc @@ -0,0 +1,95 @@ +[id="proc-operator-upgrade-external-db-gateway"] + += Upgrading an external database for {Gateway} on {OperatorPlatformName} + +[role="_abstract"] + +To upgrade from {PlatformNameShort} 2.4 to 2.5 with an external database, you must scale down your Operator deployment, upgrade your PostgreSQL, then scale your deployment back up. + +.Prerequisites + +* A 2.4 {ControllerName} and {HubName} deployment with an external PostgreSQL 13 database +* A newly provisioned {PostgresVers} database for the new {Gateway} component + +.Procedure + +. Create a secret `postgres-config-gateway` with database version 15 credentials for the {Gateway} component. 
+For example:
++
+----
+apiVersion: v1
+kind: Secret
+metadata:
+  name: postgres-config-gateway
+  namespace: aap
+stringData:
+  host: "<external_database_hostname>"
+  port: "<port>" # default is 5432
+  database: "<database_name>" # for example "gateway"
+  username: "<username>" # for example "gateway"
+  password: "<password>"
+  sslmode: "prefer"
+  type: "unmanaged"
+type: Opaque
+----
++
+. Add your newly created secret to your {PlatformNameShort} instance:
++
+----
+spec:
+  postgres_configuration_secret: postgres-config-gateway
+----
++
+. Scale down your deployments in their respective namespaces using:
++
+`oc scale deployment <deployment_name> --replicas=0 -n <namespace>`
++
+.. {ControllerNameStart}:
+... `automation-controller-operator-controller-manager`
+... `<instance_name>-controller-task`
+... `<instance_name>-controller-web`
+.. {HubNameStart}:
+... `automation-hub-operator-controller-manager`
+... `<instance_name>-hub-api`
+... `<instance_name>-hub-content`
+... `<instance_name>-hub-redis`
+... `<instance_name>-hub-worker`
+.. The remaining operators:
+... `ansible-lightspeed-operator-controller-manager`
+... `eda-server-operator-controller-manager`
+... `resource-operator-controller-manager`
+. Upgrade your PostgreSQL 13 to {PostgresVers}.
+. Scale your deployments back up using:
++
+`oc scale deployment <deployment_name> --replicas=1 -n <namespace>`
++
+. Log in to {OCP}.
+. Navigate to menu:Operators[Installed Operators].
+. Click the {MoreActionsIcon} icon next to your deployment and then click btn:[Edit Subscription].
+. From the *Details* tab, select btn:[Update Channel].
+. Select *stable-2.5* as the channel and click btn:[Save].
+. Deploy {PlatformNameShort} {PlatformVers} using the following custom resource (CR):
++
+----
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: aap
+spec:
+
+  database:
+    database_secret: postgres-config-gateway
+
+  controller:
+    name: existing-controller
+
+  eda:
+    disabled: true
+
+  hub:
+    name: existing-hub
+----
+
+.Verification
+
+To verify that your upgrade was successful, go to your users, collections, job history, or similar and confirm that they are on the new {PlatformVers} instance and in the new {PostgresVers} databases.
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-restart-controller-services.adoc b/downstream/archive/archived-modules/platform/proc-restart-controller-services.adoc
similarity index 100%
rename from downstream/modules/platform/proc-restart-controller-services.adoc
rename to downstream/archive/archived-modules/platform/proc-restart-controller-services.adoc
diff --git a/downstream/modules/platform/proc-restore-aap-backup.adoc b/downstream/archive/archived-modules/platform/proc-restore-aap-backup.adoc
similarity index 100%
rename from downstream/modules/platform/proc-restore-aap-backup.adoc
rename to downstream/archive/archived-modules/platform/proc-restore-aap-backup.adoc
diff --git a/downstream/modules/platform/proc-set-custom-pod-timeout.adoc b/downstream/archive/archived-modules/platform/proc-set-custom-pod-timeout.adoc
similarity index 100%
rename from downstream/modules/platform/proc-set-custom-pod-timeout.adoc
rename to downstream/archive/archived-modules/platform/proc-set-custom-pod-timeout.adoc
diff --git a/downstream/archive/archived-modules/platform/proc-setup-postgresql-ext-database-containerized.adoc b/downstream/archive/archived-modules/platform/proc-setup-postgresql-ext-database-containerized.adoc
new file mode 100644
index 0000000000..16a643936d
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-setup-postgresql-ext-database-containerized.adoc
@@ -0,0 +1,133 @@
+//Michelle: Module archived as it has been replaced with modular content
+[id="proc-setup-postgresql-ext-database-containerized"]
+
+= Setting up a customer-provided (external) database
+
+[IMPORTANT]
+====
+* When using an external database with {PlatformNameShort}, you must create and maintain that database. Ensure that you clear your external database when uninstalling {PlatformNameShort}.
+
+* {PlatformName} {PlatformVers} uses {PostgresVers} and requires the customer-provided (external) database to have ICU support.
+
+* During configuration of an external database, you must check the external database coverage. For more information, see link:https://access.redhat.com/articles/4010491[{PlatformName} Database Scope of Coverage].
+====
+
+There are two possible scenarios for setting up an external database:
+
+. An external database with PostgreSQL admin credentials
+. An external database without PostgreSQL admin credentials
+
+== Setting up an external database with PostgreSQL admin credentials
+
+If you have PostgreSQL admin credentials, you can supply them in the inventory file and the installation program creates the PostgreSQL users and databases for each component for you. The PostgreSQL admin account must have `SUPERUSER` privileges.
+
+To configure the PostgreSQL admin credentials, add the following variables to the inventory file under the `[all:vars]` group:
+
+----
+postgresql_admin_username=<username>
+postgresql_admin_password=<password>
+----
+
+== Setting up an external database without PostgreSQL admin credentials
+
+If you do not have PostgreSQL admin credentials, then PostgreSQL users and databases need to be created for each component ({Gateway}, {ControllerName}, {HubName}, and {EDAName}) before running the installation program.
+
+.Procedure
+
+. Connect to a PostgreSQL-compliant database server with a user that has `SUPERUSER` privileges.
++
+----
+# psql -h <hostname> -U <username> -p <port>
+----
++
+For example:
++
+----
+# psql -h db.example.com -U superuser -p 5432
+----
++
+. Create the user with a password and ensure the `CREATEDB` role is assigned to the user.
For more information, see link:https://www.postgresql.org/docs/13/user-manag.html[Database Roles].
++
+----
+CREATE USER <username> WITH PASSWORD '<password>' CREATEDB;
+----
++
+For example:
++
+----
+CREATE USER hub_user WITH PASSWORD '<password>' CREATEDB;
+----
++
+. Create the database and add the user you created as the owner.
++
+----
+CREATE DATABASE <database_name> OWNER <username>;
+----
++
+For example:
++
+----
+CREATE DATABASE hub_database OWNER hub_user;
+----
++
+. When you have created the PostgreSQL users and databases for each component, you can supply them in the inventory file under the `[all:vars]` group.
++
+[source,yaml,subs="+attributes"]
+----
+# {GatewayStart}
+gateway_pg_host=aap.example.org
+gateway_pg_database=<database_name>
+gateway_pg_username=<username>
+gateway_pg_password=<password>
+
+# {ControllerNameStart}
+controller_pg_host=aap.example.org
+controller_pg_database=<database_name>
+controller_pg_username=<username>
+controller_pg_password=<password>
+
+# {HubNameStart}
+hub_pg_host=aap.example.org
+hub_pg_database=<database_name>
+hub_pg_username=<username>
+hub_pg_password=<password>
+
+# {EDAName}
+eda_pg_host=aap.example.org
+eda_pg_database=<database_name>
+eda_pg_username=<username>
+eda_pg_password=<password>
+----
+
+include::proc-enable-hstore-extension.adoc[leveloffset=+1]
+
+== Optional: enabling mutual TLS (mTLS) authentication
+
+mTLS authentication is disabled by default. To configure each component's database with mTLS authentication, add the following variables to your inventory file under the `[all:vars]` group and ensure each component has a different TLS certificate and key:
+
+[source,yaml,subs="+attributes"]
+----
+# {GatewayStart}
+gateway_pg_cert_auth=true
+gateway_pg_tls_cert=/path/to/gateway.cert
+gateway_pg_tls_key=/path/to/gateway.key
+gateway_pg_sslmode=verify-full
+
+# {ControllerNameStart}
+controller_pg_cert_auth=true
+controller_pg_tls_cert=/path/to/awx.cert
+controller_pg_tls_key=/path/to/awx.key
+controller_pg_sslmode=verify-full
+
+# {HubNameStart}
+hub_pg_cert_auth=true
+hub_pg_tls_cert=/path/to/pulp.cert
+hub_pg_tls_key=/path/to/pulp.key
+hub_pg_sslmode=verify-full
+
+# {EDAName}
+eda_pg_cert_auth=true
+eda_pg_tls_cert=/path/to/eda.cert
+eda_pg_tls_key=/path/to/eda.key
+eda_pg_sslmode=verify-full
+----
diff --git a/downstream/modules/platform/proc-update-rhsso-client.adoc b/downstream/archive/archived-modules/platform/proc-update-rhsso-client.adoc
similarity index 92%
rename from downstream/modules/platform/proc-update-rhsso-client.adoc
rename to downstream/archive/archived-modules/platform/proc-update-rhsso-client.adoc
index 7625965f9b..22dbe193c4 100644
--- a/downstream/modules/platform/proc-update-rhsso-client.adoc
+++ b/downstream/archive/archived-modules/platform/proc-update-rhsso-client.adoc
@@ -2,12 +2,13 @@
 
 = Updating the {RHSSO} client
 
-When {HubName} is installed and you know the URL of the instance, you must update the {RHSSO} to set the Valid Redirect URIs and Web Origins settings.
+After you install {HubName} and you know the URL of the instance, you must update the {RHSSO} to set the Valid Redirect URIs and Web Origins settings.
 
 .Procedure
+. Log in to {OCP}.
 . Navigate to menu:Operator[Installed Operators].
-. Select the RH-SSO project.
+. Select the *RH-SSO* project.
 . Click btn:[Red Hat Single Sign-On Operator].
 . Select btn:[Keycloak Client].
 . Click on the automation-hub-client-secret client. 
diff --git a/downstream/modules/platform/proc-upgrade-installer.adoc b/downstream/archive/archived-modules/platform/proc-upgrade-installer.adoc
similarity index 100%
rename from downstream/modules/platform/proc-upgrade-installer.adoc
rename to downstream/archive/archived-modules/platform/proc-upgrade-installer.adoc
diff --git a/downstream/modules/platform/proc-upgrading-between-minor-aap-releases.adoc b/downstream/archive/archived-modules/platform/proc-upgrading-between-minor-aap-releases.adoc
similarity index 93%
rename from downstream/modules/platform/proc-upgrading-between-minor-aap-releases.adoc
rename to downstream/archive/archived-modules/platform/proc-upgrading-between-minor-aap-releases.adoc
index 4abc957029..e25d701841 100644
--- a/downstream/modules/platform/proc-upgrading-between-minor-aap-releases.adoc
+++ b/downstream/archive/archived-modules/platform/proc-upgrading-between-minor-aap-releases.adoc
@@ -9,7 +9,7 @@
 
 [role="_abstract"]
 
-To upgrade between minor releases of {PlatformNameShort} 2, use this general workflow.
+To upgrade between minor releases of {PlatformNameShort} 2 on your {VMBase}, use this general workflow.
 
 .Procedure
diff --git a/downstream/archive/archived-modules/platform/proc-using-postinstall.adoc b/downstream/archive/archived-modules/platform/proc-using-postinstall.adoc
new file mode 100644
index 0000000000..245094015a
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/proc-using-postinstall.adoc
@@ -0,0 +1,71 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="using-postinstall_{context}"]
+
+= Using the postinstall feature of containerized {PlatformNameShort}
+
+[role="_abstract"]
+
+You can use the optional postinstall feature of containerized {PlatformNameShort} to define and load the configuration during the initial installation. This uses a configuration-as-code approach, where you define the configuration to be loaded as YAML files.
+
+.Prerequisites
+* An {PlatformNameShort} license for this feature, stored on the local file system so that it can be loaded automatically from the inventory file.
+
+
+.Procedure
+. The postinstall feature is disabled by default. To enable it, add the following variable to your inventory file:
++
+----
+controller_postinstall=true
+----
++
+. To load your {ControllerName} license as part of the postinstall process, set the following variables in your inventory file:
++
+----
+controller_license_file=<path_to_license_file>
+controller_postinstall_dir=<path_to_postinstall_directory>
+----
++
+. You can pull your configuration-as-code from a Git-based repository. To do this, set the following variables to dictate where you pull the content from and where to store it for upload to the {PlatformNameShort} controller:
++
+----
+controller_postinstall_repo_url=<repository_url>
+controller_postinstall_dir=<path_to_postinstall_directory>
+controller_postinstall_repo_ref=main
+----
++
+. The `controller_postinstall_repo_url` variable defines the postinstall repository URL, which must include authentication information.
+
++
+----
+http(s)://<host>/<repo>.git (public repository without HTTP(S) authentication)
+http(s)://<username>:<password>@<host>:<repo>.git (private repository with HTTP(S) authentication)
+git@<host>:<repo>.git (public or private repository with SSH authentication)
+----
++
+
+[NOTE]
+====
+When using SSH-based authentication, the installer does not configure anything for you, so you must configure everything on the installer node.
+====
+
+Definition files that are used by {Builder} to create {ExecEnvNameSing} images use the link:https://console.redhat.com/ansible/automation-hub/namespaces/infra/[infra certified collections]. 
The link:https://console.redhat.com/ansible/automation-hub/repo/validated/infra/controller_configuration/[controller_configuration] collection is preinstalled as part of the installation and uses the installation controller credentials you supply in the inventory file for access to the {PlatformNameShort} controller. You must provide the YAML configuration files.
+
+You can set up {PlatformNameShort} configuration attributes such as credentials, LDAP settings, users and teams, organizations, projects, inventories and hosts, and job and workflow templates.
+
+The following example shows a sample `your-config.yml` file defining and loading controller job templates. The example demonstrates a simple change to the example provided with an {PlatformNameShort} installation.
+
+----
+/full_path_to_your_configuration_as_code/
+├── controller
+    └── job_templates.yml
+----
+
+----
+controller_templates:
+  - name: Demo Job Template
+    execution_environment: Default execution environment
+    instance_groups:
+      - default
+    inventory: Demo Inventory
+----
diff --git a/downstream/modules/platform/proc-verify-controller-installation.adoc b/downstream/archive/archived-modules/platform/proc-verify-controller-installation.adoc
similarity index 87%
rename from downstream/modules/platform/proc-verify-controller-installation.adoc
rename to downstream/archive/archived-modules/platform/proc-verify-controller-installation.adoc
index 544b32ce13..bf56169133 100644
--- a/downstream/modules/platform/proc-verify-controller-installation.adoc
+++ b/downstream/archive/archived-modules/platform/proc-verify-controller-installation.adoc
@@ -10,6 +10,7 @@ Verify that you installed {ControllerName} successfully by logging in with the a
 .Procedure
 . Go to the IP address specified for the {ControllerName} node in the `inventory` file.
+. Enter your Red Hat Satellite credentials. If this is your first time logging in after installation, upload your `manifest` file.
 . Log in with the user ID `admin` and the password credentials you set in the `inventory` file.
 
 [NOTE]
diff --git a/downstream/modules/platform/proc-verify-eda-controller-installation.adoc b/downstream/archive/archived-modules/platform/proc-verify-eda-controller-installation.adoc
similarity index 85%
rename from downstream/modules/platform/proc-verify-eda-controller-installation.adoc
rename to downstream/archive/archived-modules/platform/proc-verify-eda-controller-installation.adoc
index 482bcd97fa..1b796cf5cc 100644
--- a/downstream/modules/platform/proc-verify-eda-controller-installation.adoc
+++ b/downstream/archive/archived-modules/platform/proc-verify-eda-controller-installation.adoc
@@ -10,6 +10,8 @@ Verify that you installed {EDAcontroller} successfully by logging in with the ad
 
 . Navigate to the IP address specified for the {EDAcontroller} node in the `inventory` file.
 
+. Enter your Red Hat Satellite credentials. If this is your first time logging in after installation, upload your `manifest` file.
+
 . Log in with the user ID `admin` and the password credentials you set in the `inventory` file. 
[IMPORTANT]
diff --git a/downstream/modules/platform/proc-verify-hub-installation.adoc b/downstream/archive/archived-modules/platform/proc-verify-hub-installation.adoc
similarity index 85%
rename from downstream/modules/platform/proc-verify-hub-installation.adoc
rename to downstream/archive/archived-modules/platform/proc-verify-hub-installation.adoc
index 8adb60989c..2cdb3fbf0c 100644
--- a/downstream/modules/platform/proc-verify-hub-installation.adoc
+++ b/downstream/archive/archived-modules/platform/proc-verify-hub-installation.adoc
@@ -7,6 +7,7 @@ Verify that you installed your {HubName} successfully by logging in with the adm
 
 .Procedure
 . Navigate to the IP address specified for the {HubName} node in the `inventory` file.
+. Enter your Red Hat Satellite credentials. If this is your first time logging in after installation, upload your `manifest` file.
 . Log in with the user ID `admin` and the password credentials you set in the `inventory` file.
 
diff --git a/downstream/modules/platform/ref-aap-containerized-dns-config.adoc b/downstream/archive/archived-modules/platform/ref-aap-containerized-dns-config.adoc
similarity index 100%
rename from downstream/modules/platform/ref-aap-containerized-dns-config.adoc
rename to downstream/archive/archived-modules/platform/ref-aap-containerized-dns-config.adoc
diff --git a/downstream/archive/archived-modules/platform/ref-accessing-control-auto-hub-eda-control.adoc b/downstream/archive/archived-modules/platform/ref-accessing-control-auto-hub-eda-control.adoc
new file mode 100644
index 0000000000..1d49f70481
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/ref-accessing-control-auto-hub-eda-control.adoc
@@ -0,0 +1,66 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="accessing-ansible-automation-platform_{context}"]
+
+= Accessing {PlatformNameShort}
+
+[role="_abstract"]
+
+
+After the installation completes, the default protocol and ports used for {PlatformNameShort} are 80 (HTTP) and 443 (HTTPS).
+
+You can customize the ports with the following variables:
+
+----
+envoy_http_port=80
+envoy_https_port=443
+----
+
+If you want to disable HTTPS, set `envoy_disable_https` to `true`:
+
+----
+envoy_disable_https=true
+----
+
+.Accessing the platform UI
+
+The platform UI is available by default at:
+
+----
+https://<gateway-host>:443
+----
+
+Log in as the admin user with the password you created for `gateway_admin_password`.
+
+// Michelle: Removing additional component UI references as platform gateway UI will be used going forward - AAP-18760
+// .Accessing {ControllerName} UI
+
+// The {ControllerName} UI is available by default at:
+
+// ----
+// https://:8443
+// ----
+
+// Log in as the admin user with the password you created for *controller_admin_password*.
+
+// If you supplied the license manifest as part of the installation, the {PlatformNameShort} dashboard is displayed. If you did not supply a license file, the *Subscription* screen is displayed where you must supply your license details. This is documented here: link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_operations_guide/assembly-aap-activate[Chapter 1. Activating {PlatformName}].
+
+// .Accessing {HubName} UI
+
+// The {HubName} UI is available by default at:
+
+// ----
+// https://:8444
+// ----
+
+// Log in as the admin user with the password you created for *hub_admin_password*. 
+
+
+// .Accessing {EDAName} UI
+
+// The {EDAName} UI is available by default at:
+// ----
+// https://:8445
+// ----
+
+// Log in as the admin user with the password you created for *eda_admin_password*.
diff --git a/downstream/archive/archived-modules/platform/ref-configuring-metrics-utility.adoc b/downstream/archive/archived-modules/platform/ref-configuring-metrics-utility.adoc
new file mode 100644
index 0000000000..d87300b5e5
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/ref-configuring-metrics-utility.adoc
@@ -0,0 +1,133 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2024-07-15
+
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-configuring-metrics-utility_{context}"]
+= Configuring metrics-utility
+
+== On {RHEL}
+
+.Prerequisites
+
+* An active {PlatformName} subscription
+* An existing installation of {PlatformName} on {RHEL}
+
+`metrics-utility` is included with {PlatformName}, so you do not need a separate installation. The following commands gather the relevant data and generate a link:https://connect.redhat.com/en/programs/certified-cloud-service-provider[CCSP] report containing your usage metrics.
+You can configure these commands as cron jobs to ensure they run at the beginning of every month. See link:https://www.redhat.com/en/blog/linux-cron-command[How to schedule jobs using the Linux 'cron' utility] for more information about scheduling jobs by using cron syntax.
+
+.Procedure
+. In the `cron` file, set the correct variables to ensure `metrics-utility` gathers the relevant data. To open the cron file for editing, run:
++
+`crontab -e`
+
+. Specify the following variables to indicate where the report is deposited in your file system:
++
+----
+export METRICS_UTILITY_SHIP_TARGET=directory
+export METRICS_UTILITY_SHIP_PATH=/awx_devel/awx-dev/metrics-utility/shipped_data/billing
+----
+. Set these variables to generate a report:
++
+----
+export METRICS_UTILITY_REPORT_TYPE=CCSP
+export METRICS_UTILITY_PRICE_PER_NODE=11.55 # in USD
+export METRICS_UTILITY_REPORT_SKU=MCT3752MO
+export METRICS_UTILITY_REPORT_SKU_DESCRIPTION="EX: Red Hat Ansible Automation Platform, Full Support (1 Managed Node, Dedicated, Monthly)"
+export METRICS_UTILITY_REPORT_H1_HEADING="CCSP Reporting : ANSIBLE Consumption"
+export METRICS_UTILITY_REPORT_COMPANY_NAME="Company Name"
+export METRICS_UTILITY_REPORT_EMAIL="email@email.com"
+export METRICS_UTILITY_REPORT_RHN_LOGIN="test_login"
+export METRICS_UTILITY_REPORT_COMPANY_BUSINESS_LEADER="BUSINESS LEADER"
+export METRICS_UTILITY_REPORT_COMPANY_PROCUREMENT_LEADER="PROCUREMENT LEADER"
+----
+
+. Run the following command to gather and store the data in the provided `SHIP_PATH` directory:
++
+`metrics-utility gather_automation_controller_billing_data --ship --until=10m`
+
+. To configure the run schedule, add the following parameters to the end of the file and specify how often you want `metrics-utility` to gather information and build a report using link:https://www.redhat.com/en/blog/linux-cron-command[cron syntax].
+In the following example, the `gather` command is configured to run every hour at 00 minutes. The `build_report` command is configured to run every second day of each month at 4:00 AM.
++
+----
+0 */1 * * * metrics-utility gather_automation_controller_billing_data --ship --until=10m
+0 4 2 * * metrics-utility build_report
+----
+
+. Save and close the file.
+. To verify that you saved your changes, run:
++
+`crontab -l`
+
+. You can also check the logs to ensure that data is being collected. Run:
++
+`cat /var/log/cron`
+
+. 
The following is an example of the output. Note that the time and date might vary depending on how you configure the run schedule:
++
+----
+May 8 09:45:03 ip-10-0-6-24 CROND[51623]: (root) CMDOUT (No billing data for month: 2024-04)
+May 8 09:45:03 ip-10-0-6-24 CROND[51623]: (root) CMDEND (metrics-utility build_report)
+May 8 09:45:19 ip-10-0-6-24 crontab[51619]: (root) END EDIT (root)
+May 8 09:45:34 ip-10-0-6-24 crontab[51659]: (root) BEGIN EDIT (root)
+May 8 09:46:01 ip-10-0-6-24 CROND[51688]: (root) CMD (metrics-utility gather_automation_controller_billing_data --ship --until=10m)
+May 8 09:46:03 ip-10-0-6-24 CROND[51669]: (root) CMDOUT (/tmp/9e3f86ee-c92e-4b05-8217-72c496e6ffd9-2024-05-08-093402+0000-2024-05-08-093602+0000-0.tar.gz)
+May 8 09:46:03 ip-10-0-6-24 CROND[51669]: (root) CMDEND (metrics-utility gather_automation_controller_billing_data --ship --until=10m)
+May 8 09:46:26 ip-10-0-6-24 crontab[51659]: (root) END EDIT (root)
+----
+
+. Run the following command to build a report for the previous month:
++
+`metrics-utility build_report`
+
+. The system saves the generated report as `CCSP-<year>-<month>.xlsx` in the ship path that you specified in step 2.
+
+
+== On {OCPShort} from the {PlatformNameShort} operator
+
+`metrics-utility` is included in the {OCPShort} image beginning with version 4.12.
+If your system does not have `metrics-utility` installed, update your OpenShift image to the latest version.
+Follow these steps to configure the run schedule for `metrics-utility` on {OCPShort} using the {PlatformName} Operator.
+
+.Prerequisites
+
+* A running OpenShift cluster
+* An operator-based installation of {PlatformNameShort} on {OCPShort}.
++
+[NOTE]
+====
+`metrics-utility` runs as indicated by the parameters you set in the configuration file.
+The utility cannot be run manually on {OCPShort}.
+====
+
+== Creating a ConfigMap in the OpenShift UI YAML view
+
+.Procedure
+. From the navigation panel, select menu:ConfigMaps[].
+. Click btn:[Create ConfigMap].
+. On the next screen, select the *YAML view* tab.
+. In the YAML field, enter the following parameters with the appropriate variables set:
++
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: automationcontroller-metrics-utility-configmap
+data:
+  METRICS_UTILITY_SHIP_TARGET: directory
+  METRICS_UTILITY_SHIP_PATH: /metrics-utility
+  METRICS_UTILITY_REPORT_TYPE: CCSP
+  METRICS_UTILITY_PRICE_PER_NODE: '11' # in USD
+  METRICS_UTILITY_REPORT_SKU: MCT3752MO
+  METRICS_UTILITY_REPORT_SKU_DESCRIPTION: "EX: Red Hat Ansible Automation Platform, Full Support (1 Managed Node, Dedicated, Monthly)"
+  METRICS_UTILITY_REPORT_H1_HEADING: "CCSP Reporting : ANSIBLE Consumption"
+  METRICS_UTILITY_REPORT_COMPANY_NAME: "Company Name"
+  METRICS_UTILITY_REPORT_EMAIL: "email@email.com"
+  METRICS_UTILITY_REPORT_RHN_LOGIN: "test_login"
+  METRICS_UTILITY_REPORT_COMPANY_BUSINESS_LEADER: "BUSINESS LEADER"
+  METRICS_UTILITY_REPORT_COMPANY_PROCUREMENT_LEADER: "PROCUREMENT LEADER"
+----
+
+. Click btn:[Create].
+
+. To verify that the ConfigMap was created and `metrics-utility` is installed, select menu:ConfigMaps[] from the navigation panel and look for your ConfigMap in the list.
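+
+If you prefer to work from the command line, the following is an equivalent sketch that uses the `oc` CLI instead of the web console. It assumes the ConfigMap YAML above is saved locally as `metrics-utility-configmap.yml` and that {PlatformNameShort} runs in a namespace named `aap`:
++
+----
+# Create or update the ConfigMap from the saved YAML file
+oc apply -f metrics-utility-configmap.yml -n aap
+
+# Confirm that the ConfigMap exists and inspect its data keys
+oc get configmap automationcontroller-metrics-utility-configmap -n aap -o yaml
+----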
\ No newline at end of file diff --git a/downstream/modules/platform/ref-connect-hub-to-rhsso.adoc b/downstream/archive/archived-modules/platform/ref-connect-hub-to-rhsso.adoc similarity index 100% rename from downstream/modules/platform/ref-connect-hub-to-rhsso.adoc rename to downstream/archive/archived-modules/platform/ref-connect-hub-to-rhsso.adoc diff --git a/downstream/modules/platform/ref-controller-AD-and-kerberos-credentials.adoc b/downstream/archive/archived-modules/platform/ref-controller-AD-and-kerberos-credentials.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-AD-and-kerberos-credentials.adoc rename to downstream/archive/archived-modules/platform/ref-controller-AD-and-kerberos-credentials.adoc diff --git a/downstream/modules/platform/ref-controller-LDAP-organization-team-mapping.adoc b/downstream/archive/archived-modules/platform/ref-controller-LDAP-organization-team-mapping.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-LDAP-organization-team-mapping.adoc rename to downstream/archive/archived-modules/platform/ref-controller-LDAP-organization-team-mapping.adoc diff --git a/downstream/modules/platform/ref-controller-access-rules-for-apps.adoc b/downstream/archive/archived-modules/platform/ref-controller-access-rules-for-apps.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-access-rules-for-apps.adoc rename to downstream/archive/archived-modules/platform/ref-controller-access-rules-for-apps.adoc diff --git a/downstream/modules/platform/ref-controller-access-rules-for-tokens.adoc b/downstream/archive/archived-modules/platform/ref-controller-access-rules-for-tokens.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-access-rules-for-tokens.adoc rename to downstream/archive/archived-modules/platform/ref-controller-access-rules-for-tokens.adoc diff --git a/downstream/modules/platform/ref-controller-application-functions.adoc b/downstream/archive/archived-modules/platform/ref-controller-application-functions.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-application-functions.adoc rename to downstream/archive/archived-modules/platform/ref-controller-application-functions.adoc diff --git a/downstream/modules/platform/ref-controller-applying-rbac.adoc b/downstream/archive/archived-modules/platform/ref-controller-applying-rbac.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-applying-rbac.adoc rename to downstream/archive/archived-modules/platform/ref-controller-applying-rbac.adoc diff --git a/downstream/modules/platform/ref-controller-apps-add-tokens.adoc b/downstream/archive/archived-modules/platform/ref-controller-apps-add-tokens.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-apps-add-tokens.adoc rename to downstream/archive/archived-modules/platform/ref-controller-apps-add-tokens.adoc diff --git a/downstream/modules/platform/ref-controller-audit-functionality.adoc b/downstream/archive/archived-modules/platform/ref-controller-audit-functionality.adoc similarity index 88% rename from downstream/modules/platform/ref-controller-audit-functionality.adoc rename to downstream/archive/archived-modules/platform/ref-controller-audit-functionality.adoc index e15a680232..58c37ec208 100644 --- a/downstream/modules/platform/ref-controller-audit-functionality.adoc +++ b/downstream/archive/archived-modules/platform/ref-controller-audit-functionality.adoc @@ -3,9 +3,9 
@@ = Audit and logging functionality For any administrative access, it is important to audit and watch for actions. -For the system overall, you can do this through the built in audit support and the built-in logging support. +For the system overall, you can do this through the built-in audit support and the built-in logging support. -For {ControllerName}, you can do this through the built-in Activity Stream support that logs all changes within {ControllerName}, as well as through the automation logs. +For {ControllerName}, you can do this through the built-in Activity Stream support that logs all changes within {ControllerName}, and through the automation logs. Best practices dictate collecting logging and auditing centrally rather than reviewing it on the local system. You must configure {ControllerName} to use standard IDs or logging and auditing (Splunk) in your environment. diff --git a/downstream/modules/platform/ref-controller-auth-code-grant-type.adoc b/downstream/archive/archived-modules/platform/ref-controller-auth-code-grant-type.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-auth-code-grant-type.adoc rename to downstream/archive/archived-modules/platform/ref-controller-auth-code-grant-type.adoc diff --git a/downstream/modules/platform/ref-controller-configs.adoc b/downstream/archive/archived-modules/platform/ref-controller-configs.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-configs.adoc rename to downstream/archive/archived-modules/platform/ref-controller-configs.adoc diff --git a/downstream/archive/archived-modules/platform/ref-controller-connect-to-host.adoc b/downstream/archive/archived-modules/platform/ref-controller-connect-to-host.adoc new file mode 100644 index 0000000000..1609336719 --- /dev/null +++ b/downstream/archive/archived-modules/platform/ref-controller-connect-to-host.adoc @@ -0,0 +1,10 @@ +[id="controller-connect-to-host"] + += Unable to connect to your host + +//If you are unable to run the `helloworld.yml` example playbook from the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/getting_started_with_automation_controller/index#controller-projects[Managing projects] section of the _{ControllerGS}_ guide or other playbooks due to host connection errors, try the following: + +//* Can you `ssh` to your host? +//Ansible depends on SSH access to the servers you are managing. +//* Are your `hostnames` and IPs correctly added in your inventory file? +//Check for typos. diff --git a/downstream/modules/platform/ref-controller-credentials-getting-started.adoc b/downstream/archive/archived-modules/platform/ref-controller-credentials-getting-started.adoc similarity index 93% rename from downstream/modules/platform/ref-controller-credentials-getting-started.adoc rename to downstream/archive/archived-modules/platform/ref-controller-credentials-getting-started.adoc index 67003e71c6..72edf44288 100644 --- a/downstream/modules/platform/ref-controller-credentials-getting-started.adoc +++ b/downstream/archive/archived-modules/platform/ref-controller-credentials-getting-started.adoc @@ -2,7 +2,7 @@ = Getting started with credentials //[ddacosta] This should really be rewritten as a procedure because it includes steps. -From the navigation panel, select {MenuAMCredentials} to access the *Credentials* page. +From the navigation panel, select {MenuAECredentials} to access the *Credentials* page. 
image:credentials-demo-edit-details.png[Credentials]
@@ -38,7 +38,7 @@ A credential with roles associated retains them if the credential is reassigned
 Click btn:[Add] to assign the *Demo Credential* to additional users.
 If no users exist, add them by selecting {MenuControllerUsers} from the navigation panel.
 
-For more information, see xref:assembly-controller-users[Users].
+For more information, see [Users].
 
 Select the *Job Templates* tab to display the job templates associated with this credential, and which jobs have run recently using this credential.
diff --git a/downstream/modules/platform/ref-controller-dynamic-inventory.adoc b/downstream/archive/archived-modules/platform/ref-controller-dynamic-inventory.adoc
similarity index 100%
rename from downstream/modules/platform/ref-controller-dynamic-inventory.adoc
rename to downstream/archive/archived-modules/platform/ref-controller-dynamic-inventory.adoc
diff --git a/downstream/modules/platform/ref-controller-expire-sessions.adoc b/downstream/archive/archived-modules/platform/ref-controller-expire-sessions.adoc
similarity index 100%
rename from downstream/modules/platform/ref-controller-expire-sessions.adoc
rename to downstream/archive/archived-modules/platform/ref-controller-expire-sessions.adoc
diff --git a/downstream/archive/archived-modules/platform/ref-controller-host-details.adoc b/downstream/archive/archived-modules/platform/ref-controller-host-details.adoc
new file mode 100644
index 0000000000..d9ee352097
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/ref-controller-host-details.adoc
@@ -0,0 +1,61 @@
+[id="controller-host-details"]
+
+= Hosts
+
+//Does this need to be a procedure or can it be left a ref.
+
+A host is a system managed by {PlatformNameShort}, which can be a physical, virtual, or cloud-based server, or another device.
+Typically a host is an operating system instance.
+Hosts are grouped in inventories and are sometimes referred to as “nodes”.
+
+Ansible works against multiple managed nodes or “hosts” in your infrastructure at the same time, using a list or group of lists known as an inventory.
+
+After your inventory is defined, you use patterns to select the hosts or groups you want Ansible to run against.
+
+== Viewing the host details
+
+To view the host details for a job run:
+
+.Procedure
+
+From the navigation panel, select {MenuInfrastructureHosts}.
+The *Hosts* page displays the following information about the host affected by the selected event and its associated play and task:
+
+* The *Host*.
+* The *Description*.
+* The *Inventory* associated with that host.
+
+Selecting a particular host displays the *Details* page for that host.
+
+== Creating a host
+
+To create a new host:
+
+. From the navigation panel, select {MenuInfrastructureHosts}.
+. Click btn:[Create host].
+. On the *Create Host* page, enter the following information:
++
+* *Name*: Enter a name for the host.
+* (Optional) *Description*: Enter a description for the host.
+* *Inventory*: Select the inventory to contain that host from the list.
+* *Variables*: Enter the inventory file variables associated with the host.
+
+. Click btn:[Create host] to save your changes.
+
+
+.Procedure
+
+From the navigation panel, select {MenuInfrastructureHosts}.
+The *Hosts* page displays the following information about the host affected by the selected event and its associated play and task:
+
+* The type of run in the *Play* field.
+* The type of *Task*.
+* If applicable, the Ansible Module task, and any arguments for that module. 
+ +image::ug-job-details-hostevent.png[Host details] + +To view the results in JSON format, click the *JSON* tab. +To view the output of the task, click *Standard Out*. +To view errors from the output, click *Standard Error*. diff --git a/downstream/modules/platform/ref-controller-license-support.adoc b/downstream/archive/archived-modules/platform/ref-controller-license-support.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-license-support.adoc rename to downstream/archive/archived-modules/platform/ref-controller-license-support.adoc diff --git a/downstream/modules/platform/ref-controller-manage-oauth2-apps-tokens.adoc b/downstream/archive/archived-modules/platform/ref-controller-manage-oauth2-apps-tokens.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-manage-oauth2-apps-tokens.adoc rename to downstream/archive/archived-modules/platform/ref-controller-manage-oauth2-apps-tokens.adoc diff --git a/downstream/modules/platform/ref-controller-multi-cred-changes.adoc b/downstream/archive/archived-modules/platform/ref-controller-multi-cred-changes.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-multi-cred-changes.adoc rename to downstream/archive/archived-modules/platform/ref-controller-multi-cred-changes.adoc diff --git a/downstream/modules/platform/ref-controller-multi-cred-launch-considerations.adoc b/downstream/archive/archived-modules/platform/ref-controller-multi-cred-launch-considerations.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-multi-cred-launch-considerations.adoc rename to downstream/archive/archived-modules/platform/ref-controller-multi-cred-launch-considerations.adoc diff --git a/downstream/modules/platform/ref-controller-password-grant-type.adoc b/downstream/archive/archived-modules/platform/ref-controller-password-grant-type.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-password-grant-type.adoc rename to downstream/archive/archived-modules/platform/ref-controller-password-grant-type.adoc diff --git a/downstream/modules/platform/ref-controller-prompted-vault-credentials.adoc b/downstream/archive/archived-modules/platform/ref-controller-prompted-vault-credentials.adoc similarity index 81% rename from downstream/modules/platform/ref-controller-prompted-vault-credentials.adoc rename to downstream/archive/archived-modules/platform/ref-controller-prompted-vault-credentials.adoc index b98baea532..35e4e5e14e 100644 --- a/downstream/modules/platform/ref-controller-prompted-vault-credentials.adoc +++ b/downstream/archive/archived-modules/platform/ref-controller-prompted-vault-credentials.adoc @@ -32,4 +32,4 @@ POST /api/v2/job_templates/N/launch/ Instead of uploading sensitive credential information into {ControllerName}, you can link credential fields to external systems and use them to run your playbooks. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#assembly-controller-secret-management[Secret Management System] in the {ControllerUG}. +For more information, see xref:assembly-controller-secret-management[Secret Management System]. 
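+
+For reference, the following is a minimal sketch of launching a job template with a prompted vault password through the API. The hostname, template ID (`42`), token placeholder, and password key are illustrative assumptions rather than values from this module:
++
+----
+curl -X POST https://controller.example.com/api/v2/job_templates/42/launch/ \
+  -H "Authorization: Bearer <oauth2-token>" \
+  -H "Content-Type: application/json" \
+  -d '{"credential_passwords": {"vault_password": "<vault-password>"}}'
+----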
diff --git a/downstream/modules/platform/ref-controller-rbac-built-in-roles.adoc b/downstream/archive/archived-modules/platform/ref-controller-rbac-built-in-roles.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-rbac-built-in-roles.adoc rename to downstream/archive/archived-modules/platform/ref-controller-rbac-built-in-roles.adoc diff --git a/downstream/modules/platform/ref-controller-rbac-create-inventory.adoc b/downstream/archive/archived-modules/platform/ref-controller-rbac-create-inventory.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-rbac-create-inventory.adoc rename to downstream/archive/archived-modules/platform/ref-controller-rbac-create-inventory.adoc diff --git a/downstream/modules/platform/ref-controller-rbac-edit-job-template.adoc b/downstream/archive/archived-modules/platform/ref-controller-rbac-edit-job-template.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-rbac-edit-job-template.adoc rename to downstream/archive/archived-modules/platform/ref-controller-rbac-edit-job-template.adoc diff --git a/downstream/modules/platform/ref-controller-rbac-edit-orgs.adoc b/downstream/archive/archived-modules/platform/ref-controller-rbac-edit-orgs.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-rbac-edit-orgs.adoc rename to downstream/archive/archived-modules/platform/ref-controller-rbac-edit-orgs.adoc diff --git a/downstream/modules/platform/ref-controller-rbac-edit-projects.adoc b/downstream/archive/archived-modules/platform/ref-controller-rbac-edit-projects.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-rbac-edit-projects.adoc rename to downstream/archive/archived-modules/platform/ref-controller-rbac-edit-projects.adoc diff --git a/downstream/modules/platform/ref-controller-rbac-edit-user.adoc b/downstream/archive/archived-modules/platform/ref-controller-rbac-edit-user.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-rbac-edit-user.adoc rename to downstream/archive/archived-modules/platform/ref-controller-rbac-edit-user.adoc diff --git a/downstream/modules/platform/ref-controller-rbac-personas.adoc b/downstream/archive/archived-modules/platform/ref-controller-rbac-personas.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-rbac-personas.adoc rename to downstream/archive/archived-modules/platform/ref-controller-rbac-personas.adoc diff --git a/downstream/modules/platform/ref-controller-rbac-roles.adoc b/downstream/archive/archived-modules/platform/ref-controller-rbac-roles.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-rbac-roles.adoc rename to downstream/archive/archived-modules/platform/ref-controller-rbac-roles.adoc diff --git a/downstream/modules/platform/ref-controller-rbac-user-view.adoc b/downstream/archive/archived-modules/platform/ref-controller-rbac-user-view.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-rbac-user-view.adoc rename to downstream/archive/archived-modules/platform/ref-controller-rbac-user-view.adoc diff --git a/downstream/modules/platform/ref-controller-referrals.adoc b/downstream/archive/archived-modules/platform/ref-controller-referrals.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-referrals.adoc rename to downstream/archive/archived-modules/platform/ref-controller-referrals.adoc diff --git 
a/downstream/archive/archived-modules/platform/ref-controller-run-a-playbook.adoc b/downstream/archive/archived-modules/platform/ref-controller-run-a-playbook.adoc
new file mode 100644
index 0000000000..1c9c0e653e
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/ref-controller-run-a-playbook.adoc
@@ -0,0 +1,12 @@
+[id="controller-run-a-playbook"]
+
+//= Unable to run a playbook
+
+//If you are unable to run the `helloworld.yml` example playbook from the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/getting_started_with_automation_controller/index#controller-projects[Managing projects] section of the _{ControllerGS}_ guide due to playbook errors, try the following:
+
+//* Ensure that you are authenticating with the user currently running the commands.
+//If not, check how the username has been set up or pass the `--user=username` or `-u username` options to specify a user.
+//* Is your YAML file correctly indented?
+//You might need to line up your whitespace correctly.
+//Indentation level is significant in YAML.
+//You can use `yamllint` to check your playbook.
diff --git a/downstream/modules/platform/ref-controller-set-up-kerberos-packages.adoc b/downstream/archive/archived-modules/platform/ref-controller-set-up-kerberos-packages.adoc
similarity index 100%
rename from downstream/modules/platform/ref-controller-set-up-kerberos-packages.adoc
rename to downstream/archive/archived-modules/platform/ref-controller-set-up-kerberos-packages.adoc
diff --git a/downstream/modules/platform/ref-controller-team-access.adoc b/downstream/archive/archived-modules/platform/ref-controller-team-access.adoc
similarity index 100%
rename from downstream/modules/platform/ref-controller-team-access.adoc
rename to downstream/archive/archived-modules/platform/ref-controller-team-access.adoc
diff --git a/downstream/modules/platform/ref-controller-team-roles.adoc b/downstream/archive/archived-modules/platform/ref-controller-team-roles.adoc
similarity index 100%
rename from downstream/modules/platform/ref-controller-team-roles.adoc
rename to downstream/archive/archived-modules/platform/ref-controller-team-roles.adoc
diff --git a/downstream/modules/platform/ref-controller-use-oauth2-token-system.adoc b/downstream/archive/archived-modules/platform/ref-controller-use-oauth2-token-system.adoc
similarity index 100%
rename from downstream/modules/platform/ref-controller-use-oauth2-token-system.adoc
rename to downstream/archive/archived-modules/platform/ref-controller-use-oauth2-token-system.adoc
diff --git a/downstream/modules/platform/ref-controller-user-organizations.adoc b/downstream/archive/archived-modules/platform/ref-controller-user-organizations.adoc
similarity index 100%
rename from downstream/modules/platform/ref-controller-user-organizations.adoc
rename to downstream/archive/archived-modules/platform/ref-controller-user-organizations.adoc
diff --git a/downstream/archive/archived-modules/platform/ref-controller-user-teams.adoc b/downstream/archive/archived-modules/platform/ref-controller-user-teams.adoc
new file mode 100644
index 0000000000..7b430c1665
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/ref-controller-user-teams.adoc
@@ -0,0 +1,18 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-controller-user-teams"]
+
+= Adding a team for a user
+
+You can add a team for a user from the Users list view.
+
+.Procedure
+. From the navigation panel, select {MenuAMUsers}.
+. Select the user to whom you want to add team membership.
+. 
Select the *Teams* tab to display the list of teams of which that user is a member. +. Click btn:[Add Team(s)]. +. Select the check box for the team to which you want to add the user. +. You can search this list by the team *Name* or *Organization*. ++ +Until a team has been created and a user has been assigned to that team, the assigned team details for that user remain empty. + diff --git a/downstream/modules/platform/ref-controller-work-with-kerberos-tickets.adoc b/downstream/archive/archived-modules/platform/ref-controller-work-with-kerberos-tickets.adoc similarity index 100% rename from downstream/modules/platform/ref-controller-work-with-kerberos-tickets.adoc rename to downstream/archive/archived-modules/platform/ref-controller-work-with-kerberos-tickets.adoc diff --git a/downstream/modules/platform/ref-converting-playbook-examples.adoc b/downstream/archive/archived-modules/platform/ref-converting-playbook-examples.adoc similarity index 100% rename from downstream/modules/platform/ref-converting-playbook-examples.adoc rename to downstream/archive/archived-modules/platform/ref-converting-playbook-examples.adoc diff --git a/downstream/archive/archived-modules/platform/ref-edge-manager-auth-resources.adoc b/downstream/archive/archived-modules/platform/ref-edge-manager-auth-resources.adoc new file mode 100644 index 0000000000..b91528c56c --- /dev/null +++ b/downstream/archive/archived-modules/platform/ref-edge-manager-auth-resources.adoc @@ -0,0 +1,61 @@ +[id="edge-manager-auth-resources"] + += {RedHatEdge} authorization resources + +The following table contains the routes, names, resource names, and verbs for the {RedHatEdge} API endpoints: + +|==== +|Route| Name| Resource| Verb +|`DELETE /api/v1/certificatesigningrequests`|`DeleteCertificateSigningRequests`|`certificatesigningrequests`|`deletecollection` +|`GET /api/v1/certificatesigningrequests`|`ListCertificateSigningRequests`|`certificatesigningrequests`|`list` +|`POST /api/v1/certificatesigningrequests`|`CreateCertificateSigningRequest`|`certificatesigningrequests`|`create` +|`DELETE /api/v1/certificatesigningrequests/{name}`|`DeleteCertificateSigningRequest`|`certificatesigningrequests`|`delete` +|`GET /api/v1/certificatesigningrequests/{name}`|`ReadCertificateSigningRequest`|`certificatesigningrequests`|`get` +|`PATCH /api/v1/certificatesigningrequests/{name}`|`PatchCertificateSigningRequest`|`certificatesigningrequests`|`patch` +|`PUT /api/v1/certificatesigningrequests/{name}`|`ReplaceCertificateSigningRequest`|`certificatesigningrequests`|`update` +|`DELETE /api/v1/certificatesigningrequests/{name}/approval`|`DenyCertificateSigningRequest`|`certificatesigningrequests/approval`|`delete` +|`POST /api/v1/devices`|`CreateDevice`|`devices`|`create` +|`GET /api/v1/devices`|`ListDevices`|`devices`|`list` +|`DELETE /api/v1/devices`|`DeleteDevices`|`devices`|`deletecollection` +|`GET /api/v1/devices/{name}`|`ReadDevice`|`devices`|`get` +|`PUT /api/v1/devices/{name}`|`ReplaceDevice`|`devices`|`update` +|`DELETE /api/v1/devices/{name}`|`DeleteDevice`|`devices`|`delete` +|`GET /api/v1/devices/{name}/status`|`ReadDeviceStatus`|`devices/status`|`get` +|`PUT /api/v1/devices/{name}/status`|`ReplaceDeviceStatus`|`devices/status`|`update` +|`GET /api/v1/devices/{name}/rendered`|`GetRenderedDevice`|`devices/rendered`|`get` +|`PUT /api/v1/devices/{name}/decommission`|`DecommissionDevice`|`devices/decommission`|`update` +|`GET /ws/v1/devices/{name}/console`|`DeviceConsole`|`devices/console`|`get` +|`POST 
/api/v1/enrollmentrequests`|`CreateEnrollmentRequest`|`enrollmentrequests`|`create` +|`GET /api/v1/enrollmentrequests`|`ListEnrollmentRequests`|`enrollmentrequests`|`list` +|`DELETE /api/v1/enrollmentrequests`|`DeleteEnrollmentRequests`|`enrollmentrequests`|`deletecollection` +|`GET /api/v1/enrollmentrequests/{name}`|`ReadEnrollmentRequest`|`enrollmentrequests`|`get` +|`PUT /api/v1/enrollmentrequests/{name}`|`ReplaceEnrollmentRequest`|`enrollmentrequests`|`update` +|`PATCH /api/v1/enrollmentrequests/{name}`|`PatchEnrollmentRequest`|`enrollmentrequests`|`patch` +|`DELETE /api/v1/enrollmentrequests/{name}`|`DeleteEnrollmentRequest`|`enrollmentrequests`|`delete` +|`GET /api/v1/enrollmentrequests/{name}/status`|`ReadEnrollmentRequestStatus`|`enrollmentrequests/status`|`get` +|`POST /api/v1/enrollmentrequests/{name}/approval`|`ApproveEnrollmentRequest`|`enrollmentrequests/approval`|`post` +|`PUT /api/v1/enrollmentrequests/{name}/status`|`ReplaceEnrollmentRequestStatus`|`enrollmentrequests/status`|`update` +|`POST /api/v1/fleets`|`CreateFleet`|`fleets`|`create` +|`GET /api/v1/fleets`|`ListFleets`|`fleets`|`list` +|`DELETE /api/v1/fleets`|`DeleteFleets`|`fleets`|`deletecollection` +|`GET /api/v1/fleets/{name}`|`ReadFleet`|`fleets`|`get` +|`PUT /api/v1/fleets/{name}`|`ReplaceFleet`|`fleets`|`update` +|`DELETE /api/v1/fleets/{name}`|`DeleteFleet`|`fleets`|`delete` +|`GET /api/v1/fleets/{name}/status`|`ReadFleetStatus`|`fleets/status`|`get` +|`PUT /api/v1/fleets/{name}/status`|`ReplaceFleetStatus`|`fleets/status`|`update` +|`POST /api/v1/repositories`|`CreateRepository`|`repositories`|`create` +|`GET /api/v1/repositories`|`ListRepositories`|`repositories`|`list` +|`DELETE /api/v1/repositories`|`DeleteRepositories`|`repositories`|`deletecollection` +|`PUT /api/v1/repositories/{name}`|`ReplaceRepository`|`repositories`|`update` +|`DELETE /api/v1/repositories/{name}`|`DeleteRepository`|`repositories`|`delete` +|`POST /api/v1/resourcesyncs`|`CreateResourceSync`|`resourcesyncs`|`create` +|`GET /api/v1/resourcesyncs`|`ListResourceSync`|`resourcesyncs`|`list` +|`DELETE /api/v1/resourcesyncs`|`DeleteResourceSyncs`|`resourcesyncs`|`deletecollection` +|`GET /api/v1/resourcesyncs/{name}`|`ReadResourceSync`|`resourcesyncs`|`get` +|`PUT /api/v1/resourcesyncs/{name}`|`ReplaceResourceSync`|`resourcesyncs`|`update` +|`DELETE /api/v1/resourcesyncs/{name}`|`DeleteResourceSync`|`resourcesyncs`|`delete` +|`GET /api/v1/fleets/{fleet}/templateVersions`|`ListTemplateVersions`|`fleets/templateversions`|`list` +|`DELETE /api/v1/fleets/{fleet}/templateVersions`|`DeleteTemplateVersions`|`fleets/templateversions`|`deletecollection` +|`GET /api/v1/fleets/{fleet}/templateVersions/{name}`|`ReadTemplateVersion`|`fleets/templateversions`|`get` +|`DELETE /api/v1/fleets/{fleet}/templateVersions/{name}`|`DeleteTemplateVersion`|`fleets/templateversions`|`delete` +|==== diff --git a/downstream/archive/archived-modules/platform/ref-edge-manager-rbac-roles.adoc b/downstream/archive/archived-modules/platform/ref-edge-manager-rbac-roles.adoc new file mode 100644 index 0000000000..7ba65b44ab --- /dev/null +++ b/downstream/archive/archived-modules/platform/ref-edge-manager-rbac-roles.adoc @@ -0,0 +1,17 @@ +[id="edge-manager-rbac-roles"] + += {RedHatEdge} RBAC roles + +The {RedHatEdge} has the following default roles and their permissions: + +|==== +|Roles|Permissions |Resources +|`flightctl-admin` |All |All +|`flightctl-viewer` | `get`, `list` |`devices`, `fleets`, `resourcesyncs` +.3+|`flightctl-operator` | `get`, `list`, `create`, `delete`, 
`update`, `patch`|`devices`, `fleets`, `resourcesyncs`
+|`get` |`devices/console`
+|`get`, `list`|`repositories`, `fleets`, `templateversions`
+.3+|`flightctl-installer` |`get`, `list` |`enrollmentrequests`
+|`post` |`enrollmentrequests/approval`
+|`get`, `list`, `create` | `certificatesigningrequests`
+|====
diff --git a/downstream/modules/platform/ref-enabling-automation-hub-collection-and-container-signing.adoc b/downstream/archive/archived-modules/platform/ref-enabling-automation-hub-collection-and-container-signing.adoc
similarity index 51%
rename from downstream/modules/platform/ref-enabling-automation-hub-collection-and-container-signing.adoc
rename to downstream/archive/archived-modules/platform/ref-enabling-automation-hub-collection-and-container-signing.adoc
index b0a9437429..acfedc35e9 100644
--- a/downstream/modules/platform/ref-enabling-automation-hub-collection-and-container-signing.adoc
+++ b/downstream/archive/archived-modules/platform/ref-enabling-automation-hub-collection-and-container-signing.adoc
@@ -7,18 +7,18 @@
 
 = Enabling {HubNameStart} collection and container signing
 
 [role="_abstract"]
-{HubNameStart} allows you to sign Ansible collections and container images. This feature is not enabled by default, and you must provide the GPG key.
+With {HubName} you can sign Ansible collections and container images. This feature is not enabled by default, and you must provide the GPG key.
 
 ----
 hub_collection_signing=true
-hub_collection_signing_key=/full/path/to/collections/gpg/key
+hub_collection_signing_key=<path_to_collections_gpg_key>
 hub_container_signing=true
-hub_container_signing_key=/full/path/to/containers/gpg/key
+hub_container_signing_key=<path_to_containers_gpg_key>
 ----
 
 When the GPG key is protected by a passphrase, you must provide the passphrase.
 
 ----
-hub_collection_signing_pass=
-hub_container_signing_pass=
+hub_collection_signing_pass=
+hub_container_signing_pass=
 ----
diff --git a/downstream/archive/archived-modules/platform/ref-example-CONT-architecture.adoc b/downstream/archive/archived-modules/platform/ref-example-CONT-architecture.adoc
new file mode 100644
index 0000000000..bebd5223ad
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/ref-example-CONT-architecture.adoc
@@ -0,0 +1,8 @@
+// This module is included in assembly-aap-architecture.adoc
+[id='example_CONT_architecture_{context}']
+= Example containerized deployment architecture
+
+The following reference architecture provides an example setup of an enterprise deployment of containerized {PlatformNameShort}.
+
+.Example enterprise containerized deployment architecture
+image::cont-b-env-a.png[Reference architecture for an example setup of an enterprise containerized {PlatformNameShort} deployment]
\ No newline at end of file
diff --git a/downstream/archive/archived-modules/platform/ref-example-OCP-architecture.adoc b/downstream/archive/archived-modules/platform/ref-example-OCP-architecture.adoc
new file mode 100644
index 0000000000..e8b8da90d7
--- /dev/null
+++ b/downstream/archive/archived-modules/platform/ref-example-OCP-architecture.adoc
@@ -0,0 +1,8 @@
+// This module is included in assembly-aap-architecture.adoc
+[id='example_OCP_architecture_{context}']
+= Example Operator-based deployment architecture
+
+The following reference architecture provides an example setup of an enterprise deployment of {PlatformNameShort} on {OCPShort}. 
+ +.Example enterprise Operator-based deployment architecture +image::ocp-b-env-a.png[Reference architecture for an example setup of an enterprise Operator-based {PlatformNameShort} deployment] \ No newline at end of file diff --git a/downstream/modules/platform/ref-example-platform-ext-database-customer-provided.adoc b/downstream/archive/archived-modules/platform/ref-example-platform-ext-database-customer-provided.adoc similarity index 100% rename from downstream/modules/platform/ref-example-platform-ext-database-customer-provided.adoc rename to downstream/archive/archived-modules/platform/ref-example-platform-ext-database-customer-provided.adoc diff --git a/downstream/modules/platform/ref-example-platform-ext-database-inventory.adoc b/downstream/archive/archived-modules/platform/ref-example-platform-ext-database-inventory.adoc similarity index 100% rename from downstream/modules/platform/ref-example-platform-ext-database-inventory.adoc rename to downstream/archive/archived-modules/platform/ref-example-platform-ext-database-inventory.adoc diff --git a/downstream/modules/platform/ref-hub-configs.adoc b/downstream/archive/archived-modules/platform/ref-hub-configs.adoc similarity index 100% rename from downstream/modules/platform/ref-hub-configs.adoc rename to downstream/archive/archived-modules/platform/ref-hub-configs.adoc diff --git a/downstream/modules/platform/ref-instances-prerequisites.adoc b/downstream/archive/archived-modules/platform/ref-instances-prerequisites.adoc similarity index 100% rename from downstream/modules/platform/ref-instances-prerequisites.adoc rename to downstream/archive/archived-modules/platform/ref-instances-prerequisites.adoc diff --git a/downstream/modules/platform/ref-ldap-config-on-pah.adoc b/downstream/archive/archived-modules/platform/ref-ldap-config-on-pah.adoc similarity index 100% rename from downstream/modules/platform/ref-ldap-config-on-pah.adoc rename to downstream/archive/archived-modules/platform/ref-ldap-config-on-pah.adoc diff --git a/downstream/modules/platform/ref-ldap-referrals.adoc b/downstream/archive/archived-modules/platform/ref-ldap-referrals.adoc similarity index 100% rename from downstream/modules/platform/ref-ldap-referrals.adoc rename to downstream/archive/archived-modules/platform/ref-ldap-referrals.adoc diff --git a/downstream/modules/platform/ref-necessary-permissions-job-templates.adoc b/downstream/archive/archived-modules/platform/ref-necessary-permissions-job-templates.adoc similarity index 100% rename from downstream/modules/platform/ref-necessary-permissions-job-templates.adoc rename to downstream/archive/archived-modules/platform/ref-necessary-permissions-job-templates.adoc diff --git a/downstream/modules/platform/ref-platform-non-inst-database-inventory.adoc b/downstream/archive/archived-modules/platform/ref-platform-non-inst-database-inventory.adoc similarity index 100% rename from downstream/modules/platform/ref-platform-non-inst-database-inventory.adoc rename to downstream/archive/archived-modules/platform/ref-platform-non-inst-database-inventory.adoc diff --git a/downstream/modules/platform/ref-single-controller-ext-customer-managed-db.adoc b/downstream/archive/archived-modules/platform/ref-single-controller-ext-customer-managed-db.adoc similarity index 100% rename from downstream/modules/platform/ref-single-controller-ext-customer-managed-db.adoc rename to downstream/archive/archived-modules/platform/ref-single-controller-ext-customer-managed-db.adoc diff --git 
a/downstream/modules/platform/ref-single-controller-ext-installer-managed-db.adoc b/downstream/archive/archived-modules/platform/ref-single-controller-ext-installer-managed-db.adoc similarity index 100% rename from downstream/modules/platform/ref-single-controller-ext-installer-managed-db.adoc rename to downstream/archive/archived-modules/platform/ref-single-controller-ext-installer-managed-db.adoc diff --git a/downstream/modules/platform/ref-single-controller-hub-eda-with-managed-db.adoc b/downstream/archive/archived-modules/platform/ref-single-controller-hub-eda-with-managed-db.adoc similarity index 94% rename from downstream/modules/platform/ref-single-controller-hub-eda-with-managed-db.adoc rename to downstream/archive/archived-modules/platform/ref-single-controller-hub-eda-with-managed-db.adoc index e3e1e43f87..6605d6ddf6 100644 --- a/downstream/modules/platform/ref-single-controller-hub-eda-with-managed-db.adoc +++ b/downstream/archive/archived-modules/platform/ref-single-controller-hub-eda-with-managed-db.adoc @@ -15,6 +15,7 @@ Use this example to populate the inventory file to deploy single instances of {C ==== +[literal, subs="+attributes"] ----- [automationcontroller] controller.example.com @@ -64,12 +65,6 @@ automationedacontroller_pg_database='automationedacontroller' automationedacontroller_pg_username='automationedacontroller' automationedacontroller_pg_password='' -# Keystore file to install in SSO node -# sso_custom_keystore_file='/path/to/sso.jks' - -# This install will deploy SSO with sso_use_https=True -# Keystore password is required for https enabled SSO -sso_keystore_password='' # This install will deploy a TLS enabled Automation Hub. # If for some reason this is not the behavior wanted one can diff --git a/downstream/modules/platform/ref-single-eda-controller-with-internal-db.adoc b/downstream/archive/archived-modules/platform/ref-single-eda-controller-with-internal-db.adoc similarity index 100% rename from downstream/modules/platform/ref-single-eda-controller-with-internal-db.adoc rename to downstream/archive/archived-modules/platform/ref-single-eda-controller-with-internal-db.adoc diff --git a/downstream/modules/platform/ref-standalone-controller-hub-ext-database-inventory.adoc b/downstream/archive/archived-modules/platform/ref-standalone-controller-hub-ext-database-inventory.adoc similarity index 100% rename from downstream/modules/platform/ref-standalone-controller-hub-ext-database-inventory.adoc rename to downstream/archive/archived-modules/platform/ref-standalone-controller-hub-ext-database-inventory.adoc diff --git a/downstream/modules/platform/ref-standalone-hub-ext-database-customer-provided.adoc b/downstream/archive/archived-modules/platform/ref-standalone-hub-ext-database-customer-provided.adoc similarity index 100% rename from downstream/modules/platform/ref-standalone-hub-ext-database-customer-provided.adoc rename to downstream/archive/archived-modules/platform/ref-standalone-hub-ext-database-customer-provided.adoc diff --git a/downstream/modules/platform/ref-standalone-hub-inventory.adoc b/downstream/archive/archived-modules/platform/ref-standalone-hub-inventory.adoc similarity index 100% rename from downstream/modules/platform/ref-standalone-hub-inventory.adoc rename to downstream/archive/archived-modules/platform/ref-standalone-hub-inventory.adoc diff --git a/downstream/archive/archived-modules/platform/ref-using-custom-tls-certificates.adoc b/downstream/archive/archived-modules/platform/ref-using-custom-tls-certificates.adoc new file mode 100644 
index 0000000000..f8d6776578 --- /dev/null +++ b/downstream/archive/archived-modules/platform/ref-using-custom-tls-certificates.adoc @@ -0,0 +1,84 @@ +//Michelle - Archiving this module as it's now broken down into other modularized files +:_newdoc-version: 2.15.1 +:_template-generated: 2024-01-12 + +:_mod-docs-content-type: REFERENCE + +[id="using-custom-tls-certificates_{context}"] += Using custom TLS certificates + +By default, the installation program creates a self-signed Certificate Authority (CA) and uses it to generate self-signed TLS certificates for all {PlatformNameShort} services. + +To use your own TLS certificates and keys to replace some or all of the self-signed certificates generated during installation, you can set specific variables in your inventory file. + +*Option 1: Use a custom CA to generate all TLS certificates* + +To use a custom Certificate Authority (CA) to generate TLS certificates for all {PlatformNameShort} services, set the following variables in your inventory file: + +---- +ca_tls_cert= +ca_tls_key= +---- + +Use this method when you want {PlatformNameShort} to generate all of the certificates, but you want them signed by a custom CA rather than the default self-signed certificates. + +*Option 2: Provide custom TLS certificates for each service* + +To manually provide TLS certificates for each individual service (for example, {ControllerName}, {HubName}, or {EDAName}), set the following variables in your inventory file: + +[source,yaml,subs="+attributes"] +---- +# {GatewayStart} +gateway_tls_cert= +gateway_tls_key= +gateway_pg_tls_cert= +gateway_pg_tls_key= +gateway_redis_tls_cert= +gateway_redis_tls_key= + +# {ControllerNameStart} +controller_tls_cert= +controller_tls_key= +controller_pg_tls_cert= +controller_pg_tls_key= + +# {HubNameStart} +hub_tls_cert= +hub_tls_key= +hub_pg_tls_cert= +hub_pg_tls_key= + +# {EDAName} +eda_tls_cert= +eda_tls_key= +eda_pg_tls_cert= +eda_pg_tls_key= +eda_redis_tls_cert= +eda_redis_tls_key= + +# PostgreSQL +postgresql_tls_cert= +postgresql_tls_key= + +# Receptor +receptor_tls_cert= +receptor_tls_key= +---- + +Use this method if your organization manages TLS certificates outside of {PlatformNameShort} and requires manual provisioning. + +*Providing a custom CA certificate* + +If any of the TLS certificates you manually provide are signed by a custom CA, you must specify the path to the CA certificate file by using the following variable in your inventory file: + +---- +custom_ca_cert= +---- + +This is necessary for {PlatformNameShort} to trust the connections secured by these certificates, such as when configuring LDAPS (LDAP with TLS enabled) authentication. + +If you have more than one CA certificate (for example, a root CA and one or more intermediate certificates), combine them into a single file. + +Ensure that the certificates are concatenated in the correct order, starting with the root CA followed by any intermediate CAs. + +Provide the absolute path to this combined certificate file by using the `custom_ca_cert` variable. 
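+ +For example, a minimal sketch of this step, using illustrative file names and paths only (substitute your own): + +---- +cat root-ca.crt intermediate-ca.crt > /etc/pki/tls/certs/combined-ca.crt +---- + +Then, in the inventory file: + +---- +custom_ca_cert=/etc/pki/tls/certs/combined-ca.crt +---- 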
diff --git a/downstream/modules/platform/ref-work-with-job-templates.adoc b/downstream/archive/archived-modules/platform/ref-work-with-job-templates.adoc similarity index 100% rename from downstream/modules/platform/ref-work-with-job-templates.adoc rename to downstream/archive/archived-modules/platform/ref-work-with-job-templates.adoc diff --git a/downstream/modules/platform/ref-work-with-notifications.adoc b/downstream/archive/archived-modules/platform/ref-work-with-notifications.adoc similarity index 100% rename from downstream/modules/platform/ref-work-with-notifications.adoc rename to downstream/archive/archived-modules/platform/ref-work-with-notifications.adoc diff --git a/downstream/modules/platform/ref-work-with-schedules.adoc b/downstream/archive/archived-modules/platform/ref-work-with-schedules.adoc similarity index 100% rename from downstream/modules/platform/ref-work-with-schedules.adoc rename to downstream/archive/archived-modules/platform/ref-work-with-schedules.adoc diff --git a/downstream/archive/archived-modules/topologies/ref-rpm-a-env-b.adoc b/downstream/archive/archived-modules/topologies/ref-rpm-a-env-b.adoc new file mode 100644 index 0000000000..d6e3486ee4 --- /dev/null +++ b/downstream/archive/archived-modules/topologies/ref-rpm-a-env-b.adoc @@ -0,0 +1,70 @@ +:_mod-docs-content-type: REFERENCE +[id="rpm-a-env-b"] += RPM mixed {GrowthTopology} + +include::snippets/growth-topologies.adoc[] +include::snippets/mixed-topologies.adoc[] + +== Infrastructure topology +The following diagram outlines the infrastructure topology that Red{nbsp}Hat has tested with this deployment model, which customers can use when self-managing {PlatformNameShort}: + +.Infrastructure topology diagram +image::rpm-a-env-b.png[RPM mixed {GrowthTopology} diagram] + +[NOTE] +==== +Here, {ControllerName} and {HubName} are at 2.4.x, while the {EDAName} and {Gateway} components are at {PlatformVers}. +==== + +Each VM has been tested with the following component requirements: + +include::snippets/rpm-tested-vm-config.adoc[] + +.Infrastructure topology +[options="header"] +|==== +| VM count | Purpose | {PlatformNameShort} version | Example VM group names +| 1 | {GatewayStart} with colocated Redis | 2.5 | `automationgateway` +| 1 | {ControllerNameStart} | 2.4 | `automationcontroller` +| 1 | {PrivateHubNameStart} | 2.4 | `automationhub` +| 1 | {EDAName} | 2.5 | `automationedacontroller` +| 1 | {AutomationMeshStart} execution node | 2.4 | `execution_nodes` +| 1 | {PlatformNameShort} managed database | 2.4 | `database` +|==== + +== Tested system configurations + +Red{nbsp}Hat has tested the following configurations to install and run {PlatformName}: + +include::snippets/rpm-env-b-tested-system-config.adoc[] + +== Network ports + +{PlatformName} uses several ports to communicate with its services. These ports must be open and available for incoming connections to the {PlatformName} server for it to work. Ensure that these ports are available and are not blocked by the server firewall. 
+ +.Network ports and protocols +[options="header"] +|==== +| Port number | Protocol | Service | Source | Destination +| 80/443 | TCP | HTTP/HTTPS | {EDAName} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {EDAName} | {ControllerNameStart} +| 80/443 | TCP | HTTP/HTTPS | {ControllerNameStart} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {ControllerNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {EDAName} +| 5432 | TCP | PostgreSQL | {EDAName} | Database +| 5432 | TCP | PostgreSQL | {GatewayStart} | Database +| 5432 | TCP | PostgreSQL | {HubNameStart} | Database +| 5432 | TCP | PostgreSQL | {ControllerNameStart} | Database +| 6379 | TCP | Redis | {EDAName} | Redis node +| 6379 | TCP | Redis | {GatewayStart} | Redis node +| 8443 | TCP | HTTPS | {GatewayStart} | {GatewayStart} +| 27199 | TCP | Receptor | {ControllerNameStart} | Execution node +//| 50051 | TCP | gRPC | {GatewayStart} | {GatewayStart} +|==== + +== Example inventory file + +Use the example inventory file to perform an installation for this topology: + +include::snippets/inventory-rpm-a-env-b.adoc[] diff --git a/downstream/archive/archived-modules/topologies/ref-rpm-b-env-b.adoc b/downstream/archive/archived-modules/topologies/ref-rpm-b-env-b.adoc new file mode 100644 index 0000000000..571036f8d7 --- /dev/null +++ b/downstream/archive/archived-modules/topologies/ref-rpm-b-env-b.adoc @@ -0,0 +1,79 @@ +:_mod-docs-content-type: REFERENCE +[id="rpm-b-env-b"] += RPM mixed {EnterpriseTopology} + +include::snippets/enterprise-topologies.adoc[] +include::snippets/mixed-topologies.adoc[] + +== Infrastructure topology +The following diagram outlines the infrastructure topology that Red{nbsp}Hat has tested with this deployment model, which customers can use when self-managing {PlatformNameShort}: + +.Infrastructure topology diagram +image::rpm-b-env-b.png[RPM mixed {EnterpriseTopology} diagram] + +[NOTE] +==== +Here, {ControllerName} and {HubName} are at 2.4.x, while the {EDAName} and {Gateway} components are at {PlatformVers}. +==== + +Each VM has been tested with the following component requirements: + +include::snippets/rpm-tested-vm-config.adoc[] + +.Infrastructure topology +[options="header"] +|==== +| VM count | Purpose | {PlatformNameShort} version | Example VM group names +| 3 | {GatewayStart} with colocated Redis | 2.5 | `automationgateway` +| 2 | {ControllerNameStart} | 2.4 | `automationcontroller` +| 2 | {PrivateHubNameStart} | 2.4 | `automationhub` +| 3 | {EDAName} with colocated Redis | 2.5 | `automationedacontroller` +| 1 | {AutomationMeshStart} hop node | 2.4 | `execution_nodes` +| 2 | {AutomationMeshStart} execution node | 2.4 | `execution_nodes` +| 1 | Externally managed database service | N/A | N/A +| 1 | HAProxy load balancer in front of {Gateway} (externally managed) | N/A | N/A +|==== + +[NOTE] +==== +Six VMs are required for a Redis high availability (HA) compatible deployment. Redis can be colocated on each {PlatformNameShort} {PlatformVers} component VM except for {ControllerName}, execution nodes, or the PostgreSQL database. +==== + +== Tested system configurations + +Red{nbsp}Hat has tested the following configurations to install and run {PlatformName}: + +include::snippets/rpm-env-b-tested-system-config.adoc[] + +== Network ports + +{PlatformName} uses several ports to communicate with its services. These ports must be open and available for incoming connections to the {PlatformName} server for it to work. 
Ensure that these ports are available and are not blocked by the server firewall. + +.Network ports and protocols +[options="header"] +|==== +| Port number | Protocol | Service | Source | Destination +| 80/443 | TCP | HTTP/HTTPS | {EDAName} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {EDAName} | {ControllerNameStart} +| 80/443 | TCP | HTTP/HTTPS | {ControllerNameStart} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | HAProxy load balancer | {GatewayStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {ControllerNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {EDAName} +| 5432 | TCP | PostgreSQL | {EDAName} | External database +| 5432 | TCP | PostgreSQL | {GatewayStart} | External database +| 5432 | TCP | PostgreSQL | {HubNameStart} | External database +| 5432 | TCP | PostgreSQL | {ControllerNameStart} | External database +| 6379 | TCP | Redis | {EDAName} | Redis node +| 6379 | TCP | Redis | {GatewayStart} | Redis node +| 8443 | TCP | HTTPS | {GatewayStart} | {GatewayStart} +| 16379 | TCP | Redis | Redis node | Redis node +| 27199 | TCP | Receptor | {ControllerNameStart} | Hop node and execution node +| 27199 | TCP | Receptor | Hop node | Execution node +//| 50051 | TCP | gRPC | {GatewayStart} | {GatewayStart} +|==== + +== Example inventory file +Use the example inventory file to perform an installation for this topology: + +include::snippets/inventory-rpm-b-env-b.adoc[] diff --git a/downstream/modules/troubleshooting-aap/proc-troubleshoot-invalid-credentials.adoc b/downstream/archive/archived-modules/troubleshooting-aap/proc-troubleshoot-invalid-credentials.adoc similarity index 80% rename from downstream/modules/troubleshooting-aap/proc-troubleshoot-invalid-credentials.adoc rename to downstream/archive/archived-modules/troubleshooting-aap/proc-troubleshoot-invalid-credentials.adoc index ee0bec48f5..163d1cf22b 100644 --- a/downstream/modules/troubleshooting-aap/proc-troubleshoot-invalid-credentials.adoc +++ b/downstream/archive/archived-modules/troubleshooting-aap/proc-troubleshoot-invalid-credentials.adoc @@ -27,5 +27,6 @@ The default value is `-1`, which disables the maximum sessions allowed. This mea * For more information about installing and using the controller node CLI, see link:https://docs.ansible.com/automation-controller/latest/html/controllercli/index.html[AWX Command Line Interface] and link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#assembly-controller-awx-manage-utility[AWX manage utility]. -* For more information about session limits, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-session-limits[Session Limits] in the Automation Controller Administration Guide. +// Michelle - commenting out for now as this content doesn't appear to exist anymore in a published doc +// * For more information about session limits, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-session-limits[Session Limits] in the Automation Controller Administration Guide. 
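+ +As an illustrative sketch only (assuming shell access to a controller node; the user name is hypothetical), you can expire a user's existing sessions with the controller CLI: + +---- +awx-manage expire_sessions --user example_user +---- 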
diff --git a/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-localhost.adoc b/downstream/archive/archived-modules/troubleshooting-aap/proc-troubleshoot-job-localhost.adoc similarity index 100% rename from downstream/modules/troubleshooting-aap/proc-troubleshoot-job-localhost.adoc rename to downstream/archive/archived-modules/troubleshooting-aap/proc-troubleshoot-job-localhost.adoc diff --git a/downstream/archive/archived-snippets/inventory-rpm-a-env-b.adoc b/downstream/archive/archived-snippets/inventory-rpm-a-env-b.adoc new file mode 100644 index 0000000000..3c64c276c3 --- /dev/null +++ b/downstream/archive/archived-snippets/inventory-rpm-a-env-b.adoc @@ -0,0 +1,51 @@ +//Inventory file for RPM A ENV B topology + +[source,yaml,subs="+attributes"] +---- +# This is the {PlatformNameShort} installer inventory file intended for the mixed RPM growth deployment topology. +# Consult the {PlatformNameShort} product documentation about this topology's tested hardware configuration. +# {URLTopologies}/rpm-topologies +# +# Consult the docs if you are unsure what to add +# For all optional variables consult the Red Hat documentation: +# {URLInstallationGuide} + +# This section is for your {Gateway} hosts +# ----------------------------------------------------- +[automationgateway] +gateway.example.org + +# This section is for your {EDAcontroller} hosts +# ----------------------------------------------------- +[automationedacontroller] +eda.example.org + +# This section is for the {PlatformNameShort} database +# ----------------------------------------------------- +[database] +db.example.org + +[all:vars] + +# Common variables +# {URLInstallationGuide}/appendix-inventory-files-vars#general-variables +# ----------------------------------------------------- +registry_username= +registry_password= + +redis_mode=standalone + +# {GatewayStart} +# {URLInstallationGuide}/appendix-inventory-files-vars#platform-gateway-variables +# ----------------------------------------------------- +automationgateway_admin_password= +automationgateway_pg_host=db.example.org +automationgateway_pg_password= + +# {EDAcontroller} +# {URLInstallationGuide}/appendix-inventory-files-vars#event-driven-ansible-variables +# ----------------------------------------------------- +automationedacontroller_admin_password= +automationedacontroller_pg_host=db.example.org +automationedacontroller_pg_password= +---- \ No newline at end of file diff --git a/downstream/archive/archived-snippets/inventory-rpm-b-env-b.adoc b/downstream/archive/archived-snippets/inventory-rpm-b-env-b.adoc new file mode 100644 index 0000000000..6de796a93f --- /dev/null +++ b/downstream/archive/archived-snippets/inventory-rpm-b-env-b.adoc @@ -0,0 +1,56 @@ +//Inventory file for RPM B ENV B topology + +[source,yaml,subs="+attributes"] +---- +# This is the {PlatformNameShort} mixed enterprise installer inventory file +# Consult the docs if you are unsure what to add +# For all optional variables consult the Red Hat documentation: +# {URLInstallationGuide} + +# This section is for your {Gateway} hosts +# ----------------------------------------------------- +[automationgateway] +gateway1.example.org +gateway2.example.org +gateway3.example.org + +# This section is for your {EDAcontroller} hosts +# ----------------------------------------------------- +[automationedacontroller] +eda1.example.org +eda2.example.org +eda3.example.org + +[redis] +gateway1.example.org +gateway2.example.org +gateway3.example.org +eda1.example.org +eda2.example.org +eda3.example.org 
+ +[all:vars] +# Common variables +# {URLInstallationGuide}/appendix-inventory-files-vars#general-variables +# ----------------------------------------------------- +registry_username= +registry_password= + +# {GatewayStart} +# {URLInstallationGuide}/appendix-inventory-files-vars#platform-gateway-variables +# ----------------------------------------------------- +automationgateway_admin_password= +automationgateway_pg_host= +automationgateway_pg_database= +automationgateway_pg_username= +automationgateway_pg_password= + +# {EDAcontroller} +# {URLInstallationGuide}/appendix-inventory-files-vars#event-driven-ansible-variables +# ----------------------------------------------------- +automationedacontroller_admin_password= +automationedacontroller_pg_host= +automationedacontroller_pg_database= +automationedacontroller_pg_username= +automationedacontroller_pg_password= +---- \ No newline at end of file diff --git a/downstream/archive/archived-snippets/known-issue-container-content-syncing.adoc b/downstream/archive/archived-snippets/known-issue-container-content-syncing.adoc new file mode 100644 index 0000000000..0a9449e86c --- /dev/null +++ b/downstream/archive/archived-snippets/known-issue-container-content-syncing.adoc @@ -0,0 +1 @@ +* When installing the growth topology for the {PlatformNameShort} 2.5 containerized setup bundle, you must disable content syncing, which is enabled by default. To disable this feature, set the `hub_seed_collections` variable in the inventory file to `false`. See link:{URLTopologies}/container-topologies#cont-a-env-a[Container growth topology] for a sample inventory file and see link:{URLContainerizedInstall}/appendix-inventory-files-vars#ref-hub-variables[{HubNameStart} variables] for more information about this inventory file variable. \ No newline at end of file diff --git a/downstream/archive/archived-snippets/mixed-topologies.adoc b/downstream/archive/archived-snippets/mixed-topologies.adoc new file mode 100644 index 0000000000..ecac0f8f1d --- /dev/null +++ b/downstream/archive/archived-snippets/mixed-topologies.adoc @@ -0,0 +1,2 @@ +// Snippet that describes the mixed topology. Can be combined with growth-topologies.adoc or enterprise-topologies.adoc. +The mixed topology includes different versions of {PlatformNameShort} components and is intended for configuring a new installation of {EDAName} 1.1 with {ControllerName} 4.4 or 4.5. \ No newline at end of file diff --git a/downstream/archive/archived-snippets/rpm-env-b-tested-system-config.adoc b/downstream/archive/archived-snippets/rpm-env-b-tested-system-config.adoc new file mode 100644 index 0000000000..f071cc55a1 --- /dev/null +++ b/downstream/archive/archived-snippets/rpm-env-b-tested-system-config.adoc @@ -0,0 +1,15 @@ +//Tested system configuration snippet for RPM ENV B (mixed) topologies +.Tested system configurations +[options="header"] +|==== +| Type | Description +| Subscription | Valid {PlatformName} subscription +| Operating system +a| +* {RHEL} 8.8 or later minor versions of {RHEL} 8. +* {RHEL} 9.2 or later minor versions of {RHEL} 9. 
+| CPU architecture | x86_64, AArch64 +| Ansible-core | Ansible-core version {CoreUseVers} or later +| Browser | A currently supported version of Mozilla Firefox or Google Chrome +| Database | {PostgresVers} +|==== diff --git a/downstream/titles/analytics/automation-savings-planner/docinfo.xml b/downstream/archive/archived-titles/analytics/automation-savings-planner/docinfo.xml similarity index 81% rename from downstream/titles/analytics/automation-savings-planner/docinfo.xml rename to downstream/archive/archived-titles/analytics/automation-savings-planner/docinfo.xml index ae7eb382a8..5ef801d52b 100644 --- a/downstream/titles/analytics/automation-savings-planner/docinfo.xml +++ b/downstream/archive/archived-titles/analytics/automation-savings-planner/docinfo.xml @@ -3,7 +3,7 @@ 2.5 Create an automation savings plan for your organization -This guide shows how to plan your organization's automation initiatives, and get an accurate estimation of your time and monetary savings when switching to automation. +This guide shows how to plan your organization's automation initiatives, and get an accurate estimate of your time and monetary savings when switching to automation. Red Hat Customer Content Services diff --git a/downstream/titles/analytics/automation-savings-planner/master.adoc b/downstream/archive/archived-titles/analytics/automation-savings-planner/master.adoc similarity index 80% rename from downstream/titles/analytics/automation-savings-planner/master.adoc rename to downstream/archive/archived-titles/analytics/automation-savings-planner/master.adoc index f912396af9..c70185089a 100644 --- a/downstream/titles/analytics/automation-savings-planner/master.adoc +++ b/downstream/archive/archived-titles/analytics/automation-savings-planner/master.adoc @@ -1,3 +1,5 @@ +// This title has been archived due to consolidation of separate AA docs. See AAP-26519 + :imagesdir: images :experimental: :toclevels: 4 diff --git a/downstream/titles/analytics/automation-savings/docinfo.xml b/downstream/archive/archived-titles/analytics/automation-savings/docinfo.xml similarity index 100% rename from downstream/titles/analytics/automation-savings/docinfo.xml rename to downstream/archive/archived-titles/analytics/automation-savings/docinfo.xml diff --git a/downstream/titles/analytics/automation-savings/master.adoc b/downstream/archive/archived-titles/analytics/automation-savings/master.adoc similarity index 79% rename from downstream/titles/analytics/automation-savings/master.adoc rename to downstream/archive/archived-titles/analytics/automation-savings/master.adoc index 43e0419f26..bf5872c058 100644 --- a/downstream/titles/analytics/automation-savings/master.adoc +++ b/downstream/archive/archived-titles/analytics/automation-savings/master.adoc @@ -1,4 +1,4 @@ -// This assembly is included in the following assemblies: +// This title has been archived due to consolidation of separate AA docs. 
See AAP-26519 // :imagesdir: images diff --git a/downstream/titles/analytics/job-explorer/docinfo.xml b/downstream/archive/archived-titles/analytics/job-explorer/docinfo.xml similarity index 51% rename from downstream/titles/analytics/job-explorer/docinfo.xml rename to downstream/archive/archived-titles/analytics/job-explorer/docinfo.xml index e616023e8e..555f3a6428 100644 --- a/downstream/titles/analytics/job-explorer/docinfo.xml +++ b/downstream/archive/archived-titles/analytics/job-explorer/docinfo.xml @@ -1,9 +1,9 @@ -Evaluating your automation controller job runs using the job explorer +Using automation analytics Red Hat Ansible Automation Platform 2.5 Review jobs and templates in greater detail by applying filters and sorting by attributes -This guide shows how to use the Job Explorer to drill deeper into the information from your automation initiatives in automation controller. Explore the details behind visualizations to evaluate contextual data and link out to jobs and templates on your automation controller clusters. +This guide shows how to use the job explorer to drill deeper into the information from automation controller initiatives. Explore the details behind visualizations to evaluate contextual data and link out to jobs and templates. Red Hat Customer Content Services diff --git a/downstream/titles/analytics/job-explorer/master.adoc b/downstream/archive/archived-titles/analytics/job-explorer/master.adoc similarity index 68% rename from downstream/titles/analytics/job-explorer/master.adoc rename to downstream/archive/archived-titles/analytics/job-explorer/master.adoc index cf7b70776d..fbdf6aca6a 100644 --- a/downstream/titles/analytics/job-explorer/master.adoc +++ b/downstream/archive/archived-titles/analytics/job-explorer/master.adoc @@ -1,3 +1,5 @@ +// This title has been archived due to consolidation of separate AA docs. See AAP-26519 + :imagesdir: images :experimental: :toclevels: 4 @@ -7,7 +9,7 @@ include::attributes/attributes.adoc[] :analytics_automation_savings: [[analytics_automation_savings]] -= Evaluating your automation controller job runs using the job explorer += Using automation analytics include::{Boilerplate}[] diff --git a/downstream/titles/analytics/reports/docinfo.xml b/downstream/archive/archived-titles/analytics/reports/docinfo.xml similarity index 72% rename from downstream/titles/analytics/reports/docinfo.xml rename to downstream/archive/archived-titles/analytics/reports/docinfo.xml index f68807378a..4acd356c46 100644 --- a/downstream/titles/analytics/reports/docinfo.xml +++ b/downstream/archive/archived-titles/analytics/reports/docinfo.xml @@ -3,7 +3,7 @@ 2.5 Monitor your automation environment with the reports feature -This guide shows how to use the reports feature within Automation Analytics to generate an overview report to monitor your automation environment. +This guide shows how to use the reports feature on Insights for Red Hat Ansible Automation Platform to generate an overview report to monitor your automation environment. 
diff --git a/downstream/titles/analytics/reports/master.adoc b/downstream/archive/archived-titles/analytics/reports/master.adoc similarity index 76% rename from downstream/titles/analytics/reports/master.adoc rename to downstream/archive/archived-titles/analytics/reports/master.adoc index 99cb395934..344fbef221 100644 --- a/downstream/titles/analytics/reports/master.adoc +++ b/downstream/archive/archived-titles/analytics/reports/master.adoc @@ -1,3 +1,5 @@ +// This title has been archived due to consolidation of separate AA docs. See AAP-26519 + :imagesdir: images :experimental: :toclevels: 4 diff --git a/downstream/titles/controller/controller-getting-started/docinfo.xml b/downstream/archive/archived-titles/controller/controller-getting-started/docinfo.xml similarity index 68% rename from downstream/titles/controller/controller-getting-started/docinfo.xml rename to downstream/archive/archived-titles/controller/controller-getting-started/docinfo.xml index 10e8f263e4..6cf093a22f 100644 --- a/downstream/titles/controller/controller-getting-started/docinfo.xml +++ b/downstream/archive/archived-titles/controller/controller-getting-started/docinfo.xml @@ -3,8 +3,7 @@ 2.5 Getting started guide for automation controller - Learn how to set up a controller application, which you can then use to launch more sophisticated playbooks. - The setup process should take less than thirty minutes. + This guide shows how to set up a controller application, which you can then use to launch more sophisticated playbooks. The setup process typically takes less than thirty minutes. Red Hat Customer Content Services diff --git a/downstream/titles/controller/controller-getting-started/master.adoc b/downstream/archive/archived-titles/controller/controller-getting-started/master.adoc similarity index 100% rename from downstream/titles/controller/controller-getting-started/master.adoc rename to downstream/archive/archived-titles/controller/controller-getting-started/master.adoc diff --git a/downstream/titles/dev-guide/docinfo.xml b/downstream/archive/archived-titles/dev-guide/docinfo.xml similarity index 68% rename from downstream/titles/dev-guide/docinfo.xml rename to downstream/archive/archived-titles/dev-guide/docinfo.xml index d001122158..bddd36676d 100644 --- a/downstream/titles/dev-guide/docinfo.xml +++ b/downstream/archive/archived-titles/dev-guide/docinfo.xml @@ -1,9 +1,9 @@ Red Hat Ansible Automation Platform creator guide Red Hat Ansible Automation Platform 2.5 -Learn to create automation content with Ansible +Create automation content with Ansible -This guide helps developers learn how to use Ansible to create content for automation. +This guide helps developers learn how to use Ansible to create content for automation. 
Red Hat Customer Content Services diff --git a/downstream/titles/dev-guide/master.adoc b/downstream/archive/archived-titles/dev-guide/master.adoc similarity index 100% rename from downstream/titles/dev-guide/master.adoc rename to downstream/archive/archived-titles/dev-guide/master.adoc diff --git a/downstream/titles/eda/eda-getting-started-guide/docinfo.xml b/downstream/archive/archived-titles/eda/eda-getting-started-guide/docinfo.xml similarity index 85% rename from downstream/titles/eda/eda-getting-started-guide/docinfo.xml rename to downstream/archive/archived-titles/eda/eda-getting-started-guide/docinfo.xml index de5a62f77c..ddcac75870 100644 --- a/downstream/titles/eda/eda-getting-started-guide/docinfo.xml +++ b/downstream/archive/archived-titles/eda/eda-getting-started-guide/docinfo.xml @@ -1,7 +1,7 @@ -Getting started with Event-Driven Ansible guide +Getting started with Event-Driven Ansible Red Hat Ansible Automation Platform 2.5 -Learn about the benefits and how to get started using Event-Driven Ansible. +Learn about the benefits and how to get started using Event-Driven Ansible Event-Driven Ansible is a new way to enhance and expand automation by improving IT speed and agility while enabling consistency and resilience. This feature is designed for simplicity and flexibility. diff --git a/downstream/titles/eda/eda-getting-started-guide/master.adoc b/downstream/archive/archived-titles/eda/eda-getting-started-guide/master.adoc similarity index 93% rename from downstream/titles/eda/eda-getting-started-guide/master.adoc rename to downstream/archive/archived-titles/eda/eda-getting-started-guide/master.adoc index d70e26df28..6f96c67f34 100644 --- a/downstream/titles/eda/eda-getting-started-guide/master.adoc +++ b/downstream/archive/archived-titles/eda/eda-getting-started-guide/master.adoc @@ -9,7 +9,7 @@ include::attributes/attributes.adoc[] // Book Title -= Getting started with Event-Driven Ansible guide += Getting started with Event-Driven Ansible Thank you for your interest in {EDAname}. {EDAname} is a new way to enhance and expand automation. It helps teams automate decision-making and improve IT speed and agility. 
diff --git a/downstream/titles/hub/getting-started/docinfo.xml b/downstream/archive/archived-titles/hub/getting-started/docinfo.xml similarity index 100% rename from downstream/titles/hub/getting-started/docinfo.xml rename to downstream/archive/archived-titles/hub/getting-started/docinfo.xml diff --git a/downstream/titles/hub/getting-started/master.adoc b/downstream/archive/archived-titles/hub/getting-started/master.adoc similarity index 100% rename from downstream/titles/hub/getting-started/master.adoc rename to downstream/archive/archived-titles/hub/getting-started/master.adoc diff --git a/downstream/titles/release-notes/topics/installer-24-11.adoc b/downstream/archive/archived-titles/release-notes/async/installer-24-11.adoc similarity index 100% rename from downstream/titles/release-notes/topics/installer-24-11.adoc rename to downstream/archive/archived-titles/release-notes/async/installer-24-11.adoc diff --git a/downstream/titles/release-notes/topics/installer-24-12.adoc b/downstream/archive/archived-titles/release-notes/async/installer-24-12.adoc similarity index 100% rename from downstream/titles/release-notes/topics/installer-24-12.adoc rename to downstream/archive/archived-titles/release-notes/async/installer-24-12.adoc diff --git a/downstream/titles/release-notes/topics/installer-24-13.adoc b/downstream/archive/archived-titles/release-notes/async/installer-24-13.adoc similarity index 100% rename from downstream/titles/release-notes/topics/installer-24-13.adoc rename to downstream/archive/archived-titles/release-notes/async/installer-24-13.adoc diff --git a/downstream/titles/release-notes/topics/installer-24-14.adoc b/downstream/archive/archived-titles/release-notes/async/installer-24-14.adoc similarity index 100% rename from downstream/titles/release-notes/topics/installer-24-14.adoc rename to downstream/archive/archived-titles/release-notes/async/installer-24-14.adoc diff --git a/downstream/titles/release-notes/topics/installer-24-21.adoc b/downstream/archive/archived-titles/release-notes/async/installer-24-21.adoc similarity index 100% rename from downstream/titles/release-notes/topics/installer-24-21.adoc rename to downstream/archive/archived-titles/release-notes/async/installer-24-21.adoc diff --git a/downstream/titles/release-notes/topics/installer-24-22.adoc b/downstream/archive/archived-titles/release-notes/async/installer-24-22.adoc similarity index 100% rename from downstream/titles/release-notes/topics/installer-24-22.adoc rename to downstream/archive/archived-titles/release-notes/async/installer-24-22.adoc diff --git a/downstream/titles/release-notes/topics/installer-24-23.adoc b/downstream/archive/archived-titles/release-notes/async/installer-24-23.adoc similarity index 100% rename from downstream/titles/release-notes/topics/installer-24-23.adoc rename to downstream/archive/archived-titles/release-notes/async/installer-24-23.adoc diff --git a/downstream/titles/release-notes/topics/installer-24-24.adoc b/downstream/archive/archived-titles/release-notes/async/installer-24-24.adoc similarity index 100% rename from downstream/titles/release-notes/topics/installer-24-24.adoc rename to downstream/archive/archived-titles/release-notes/async/installer-24-24.adoc diff --git a/downstream/titles/release-notes/topics/installer-24-6.adoc b/downstream/archive/archived-titles/release-notes/async/installer-24-6.adoc similarity index 100% rename from downstream/titles/release-notes/topics/installer-24-6.adoc rename to 
downstream/archive/archived-titles/release-notes/async/installer-24-6.adoc diff --git a/downstream/titles/release-notes/topics/installer-24-61.adoc b/downstream/archive/archived-titles/release-notes/async/installer-24-61.adoc similarity index 100% rename from downstream/titles/release-notes/topics/installer-24-61.adoc rename to downstream/archive/archived-titles/release-notes/async/installer-24-61.adoc diff --git a/downstream/titles/release-notes/topics/installer-24-62.adoc b/downstream/archive/archived-titles/release-notes/async/installer-24-62.adoc similarity index 100% rename from downstream/titles/release-notes/topics/installer-24-62.adoc rename to downstream/archive/archived-titles/release-notes/async/installer-24-62.adoc diff --git a/downstream/titles/release-notes/topics/installer-24-7.adoc b/downstream/archive/archived-titles/release-notes/async/installer-24-7.adoc similarity index 100% rename from downstream/titles/release-notes/topics/installer-24-7.adoc rename to downstream/archive/archived-titles/release-notes/async/installer-24-7.adoc diff --git a/downstream/titles/release-notes/topics/rpm-24-2.adoc b/downstream/archive/archived-titles/release-notes/async/rpm-24-2.adoc similarity index 100% rename from downstream/titles/release-notes/topics/rpm-24-2.adoc rename to downstream/archive/archived-titles/release-notes/async/rpm-24-2.adoc diff --git a/downstream/titles/release-notes/topics/rpm-24-3.adoc b/downstream/archive/archived-titles/release-notes/async/rpm-24-3.adoc similarity index 100% rename from downstream/titles/release-notes/topics/rpm-24-3.adoc rename to downstream/archive/archived-titles/release-notes/async/rpm-24-3.adoc diff --git a/downstream/titles/release-notes/topics/rpm-24-4.adoc b/downstream/archive/archived-titles/release-notes/async/rpm-24-4.adoc similarity index 100% rename from downstream/titles/release-notes/topics/rpm-24-4.adoc rename to downstream/archive/archived-titles/release-notes/async/rpm-24-4.adoc diff --git a/downstream/titles/release-notes/topics/rpm-24-5.adoc b/downstream/archive/archived-titles/release-notes/async/rpm-24-5.adoc similarity index 100% rename from downstream/titles/release-notes/topics/rpm-24-5.adoc rename to downstream/archive/archived-titles/release-notes/async/rpm-24-5.adoc diff --git a/downstream/titles/release-notes/topics/rpm-24-6.adoc b/downstream/archive/archived-titles/release-notes/async/rpm-24-6.adoc similarity index 100% rename from downstream/titles/release-notes/topics/rpm-24-6.adoc rename to downstream/archive/archived-titles/release-notes/async/rpm-24-6.adoc diff --git a/downstream/titles/release-notes/topics/rpm-24-7.adoc b/downstream/archive/archived-titles/release-notes/async/rpm-24-7.adoc similarity index 100% rename from downstream/titles/release-notes/topics/rpm-24-7.adoc rename to downstream/archive/archived-titles/release-notes/async/rpm-24-7.adoc diff --git a/downstream/assemblies/aap-hardening/assembly-hardening-aap.adoc b/downstream/assemblies/aap-hardening/assembly-hardening-aap.adoc index a2aa736963..3b0233bc56 100644 --- a/downstream/assemblies/aap-hardening/assembly-hardening-aap.adoc +++ b/downstream/assemblies/aap-hardening/assembly-hardening-aap.adoc @@ -7,44 +7,102 @@ ifdef::context[:parent-context: {context}] [role="_abstract"] -This guide takes a practical approach to hardening the {PlatformNameShort} security posture, starting with the planning and architecture phase of deployment and then covering specific guidance for the installation phase. 
As this guide specifically covers {PlatformNameShort} running on Red Hat Enterprise Linux, hardening guidance for Red Hat Enterprise Linux will be covered where it affects the automation platform components. +This guide takes a practical approach to hardening the {PlatformNameShort} security posture, starting with the planning and architecture phase of deployment and then covering specific guidance for the installation phase. +As this guide specifically covers {PlatformNameShort} running on {RHEL}, hardening guidance for {RHEL} is covered where it affects the automation platform components. include::aap-hardening/con-planning-considerations.adoc[leveloffset=+1] + include::aap-hardening/ref-architecture.adoc[leveloffset=+2] + include::aap-hardening/con-network-firewall-services-planning.adoc[leveloffset=+2] + include::aap-hardening/con-dns-ntp-service-planning.adoc[leveloffset=+2] + include::aap-hardening/ref-dns.adoc[leveloffset=+3] + include::aap-hardening/ref-dns-load-balancing.adoc[leveloffset=+3] + include::aap-hardening/ref-ntp.adoc[leveloffset=+3] -include::aap-hardening/con-user-authentication-planning.adoc[leveloffset=+2] -include::aap-hardening/ref-automation-controller-authentication.adoc[leveloffset=+3] -include::aap-hardening/ref-private-automation-hub-authentication.adoc[leveloffset=+3] + +//include::aap-hardening/ref-aap-authentication.adoc[leveloffset=+3] +//include::aap-hardening/ref-private-automation-hub-authentication.adoc[leveloffset=+3] + include::aap-hardening/con-credential-management-planning.adoc[leveloffset=+2] -include::aap-hardening/ref-automation-controller-operational-secrets.adoc[leveloffset=+3] + +include::aap-hardening/ref-aap-operational-secrets.adoc[leveloffset=+3] + include::aap-hardening/con-automation-use-secrets.adoc[leveloffset=+3] + +include::aap-hardening/con-protect-sensitive-data-no-log.adoc[leveloffset=+3] + +include::aap-hardening/con-user-authentication-planning.adoc[leveloffset=+2] + +include::aap-hardening/ref-infrastructure-server-account-planning.adoc[leveloffset=+3] + +include::aap-hardening/ref-aap-account-planning.adoc[leveloffset=+3] + include::aap-hardening/con-logging-log-capture.adoc[leveloffset=+2] + include::aap-hardening/ref-auditing-incident-detection.adoc[leveloffset=+2] + include::aap-hardening/con-rhel-host-planning.adoc[leveloffset=+2] + include::aap-hardening/con-aap-additional-software.adoc[leveloffset=+3] + include::aap-hardening/con-installation.adoc[leveloffset=+1] + include::aap-hardening/con-install-secure-host.adoc[leveloffset=+2] + include::aap-hardening/ref-security-variables-install-inventory.adoc[leveloffset=+2] + include::aap-hardening/proc-install-user-pki.adoc[leveloffset=+2] + include::aap-hardening/ref-sensitive-variables-install-inventory.adoc[leveloffset=+2] -include::aap-hardening/con-controller-stig-considerations.adoc[leveloffset=+2] + +//include::aap-hardening/con-controller-stig-considerations.adoc[leveloffset=+2] + +include::aap-hardening/con-compliance-profile-considerations.adoc[leveloffset=+2] + include::aap-hardening/proc-fapolicyd.adoc[leveloffset=+3] + include::aap-hardening/proc-file-systems-mounted-noexec.adoc[leveloffset=+3] + include::aap-hardening/proc-namespaces.adoc[leveloffset=+3] + +include::aap-hardening/ref-interactive-session-timeout.adoc[leveloffset=+3] + include::aap-hardening/ref-sudo-nopasswd.adoc[leveloffset=+3] + include::aap-hardening/ref-initial-configuration.adoc[leveloffset=+1] + include::aap-hardening/ref-infrastructure-as-code.adoc[leveloffset=+2] 
-include::aap-hardening/con-controller-configuration.adoc[leveloffset=+2] -include::aap-hardening/proc-configure-centralized-logging.adoc[leveloffset=+3] -include::aap-hardening/proc-configure-external-authentication.adoc[leveloffset=+3] -include::aap-hardening/con-external-credential-vault.adoc[leveloffset=+3] + +//include::aap-hardening/con-controller-configuration.adoc[leveloffset=+2] + +include::aap-hardening/ref-configure-centralized-logging.adoc[leveloffset=+2] + +include::../platform/proc-controller-set-up-logging.adoc[leveloffset=+3] + +include::aap-hardening/proc-configure-ldap-logging.adoc[leveloffset=+3] + +include::aap-hardening/proc-implement-security-control.adoc[leveloffset=+3] + +include::aap-hardening/proc-implement-security-controller.adoc[leveloffset=+3] + +include::aap-hardening/proc-implement-security-for-admin.adoc[leveloffset=+2] + +//include::aap-hardening/proc-configure-external-authentication.adoc[leveloffset=+3] +include::aap-hardening/con-external-credential-vault.adoc[leveloffset=+2] + include::aap-hardening/con-day-two-operations.adoc[leveloffset=+1] + include::aap-hardening/con-rbac.adoc[leveloffset=+2] + include::aap-hardening/ref-updates-upgrades.adoc[leveloffset=+2] -include::aap-hardening/proc-controller-stig-considerations.adoc[leveloffset=+3] + +//include::aap-hardening/proc-controller-stig-considerations.adoc[leveloffset=+3] + include::aap-hardening/proc-disaster-recovery-operations.adoc[leveloffset=+3] + + diff --git a/downstream/assemblies/aap-hardening/assembly-intro-to-aap-hardening.adoc b/downstream/assemblies/aap-hardening/assembly-intro-to-aap-hardening.adoc index fb06823779..c31f09d143 100644 --- a/downstream/assemblies/aap-hardening/assembly-intro-to-aap-hardening.adoc +++ b/downstream/assemblies/aap-hardening/assembly-intro-to-aap-hardening.adoc @@ -10,15 +10,31 @@ ifdef::context[:parent-context: {context}] This document provides guidance for improving the security posture (referred to as “hardening” throughout this guide) of your {PlatformName} deployment on {RHEL}. -Other deployment targets, such as OpenShift, are not currently within the scope of this guide. {PlatformNameShort} managed services available through cloud service provider marketplaces are also not within the scope of this guide. +The following are not currently within the scope of this guide: -This guide takes a practical approach to hardening the {PlatformNameShort} security posture, starting with the planning and architecture phase of deployment and then covering specific guidance for installation, initial configuration, and day two operations. As this guide specifically covers {PlatformNameShort} running on {RHEL}, hardening guidance for {RHEL} will be covered where it affects the automation platform components. Additional considerations with regards to the Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs) are provided for those organizations that integrate the DISA STIG as a part of their overall security strategy. +* Other deployment targets for {PlatformNameShort}, such as OpenShift. +* {PlatformNameShort} managed services available through cloud service provider marketplaces. +//* Additional considerations with regards to the _Defense Information Systems Agency_ (DISA) _Security Technical Implementation Guides_ (STIGs) [NOTE] ==== -These recommendations do not guarantee security or compliance of your deployment of {PlatformNameShort}. 
You must assess security from the unique requirements of your organization to address specific threats and risks and balance these against implementation factors. +Hardening and compliance for {PlatformNameShort} 2.4 includes additional considerations with regard to the specific _Defense Information Systems Agency_ (DISA) _Security Technical Implementation Guides_ (STIGs) for {ControllerName}, but this guidance does not apply to {PlatformNameShort} {PlatformVers}. +==== + +This guide takes a practical approach to hardening the {PlatformNameShort} security posture, starting with the planning and architecture phase of deployment and then covering specific guidance for installation, initial configuration, and day 2 operations. +As this guide specifically covers {PlatformNameShort} running on {RHEL}, hardening guidance for {RHEL} is covered where it affects the automation platform components. +Additional considerations with regard to the DISA STIGs for {RHEL} are provided for those organizations that integrate the DISA STIGs as a part of their overall security strategy. + +[NOTE] +==== +These recommendations do not guarantee security or compliance of your deployment of {PlatformNameShort}. +You must assess security from the unique requirements of your organization to address specific threats and risks and balance these against implementation factors. +==== + include::aap-hardening/con-hardening-guide-audience.adoc[leveloffset=+1] + include::aap-hardening/con-product-overview.adoc[leveloffset=+1] + +include::aap-hardening/con-deployment-methods.adoc[leveloffset=+2] + include::aap-hardening/con-platform-components.adoc[leveloffset=+2] \ No newline at end of file diff --git a/downstream/assemblies/aap-migration/aap-migration b/downstream/assemblies/aap-migration/aap-migration new file mode 120000 index 0000000000..7daff758c9 --- /dev/null +++ b/downstream/assemblies/aap-migration/aap-migration @@ -0,0 +1 @@ +../../modules/aap-migration \ No newline at end of file diff --git a/downstream/assemblies/aap-migration/assembly-migration-artifact.adoc b/downstream/assemblies/aap-migration/assembly-migration-artifact.adoc new file mode 100644 index 0000000000..35befe8600 --- /dev/null +++ b/downstream/assemblies/aap-migration/assembly-migration-artifact.adoc @@ -0,0 +1,16 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="migration-artifact"] += Migration artifact structure and verification + +The migration artifact is a critical component for successfully transferring your {PlatformNameShort} deployment. It packages all necessary data and configurations from your source environment. + +This section details the structure of the migration artifact and includes a migration checklist for artifact verification. + +include::aap-migration/con-artifact-structure.adoc[leveloffset=+1] + +include::aap-migration/con-manifest-file.adoc[leveloffset=+1] + +include::aap-migration/con-secrets-file.adoc[leveloffset=+1] + +include::aap-migration/ref-migration-artifact-checklist.adoc[leveloffset=+1] diff --git a/downstream/assemblies/aap-migration/assembly-migration-prerequisites.adoc b/downstream/assemblies/aap-migration/assembly-migration-prerequisites.adoc new file mode 100644 index 0000000000..da1e32a941 --- /dev/null +++ b/downstream/assemblies/aap-migration/assembly-migration-prerequisites.adoc @@ -0,0 +1,16 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="migration-prerequisites"] += Migration prerequisites + +This section describes the prerequisites for migrating your {PlatformNameShort} deployment. 
For your specific migration path, ensure that you meet all necessary conditions before proceeding. + +include::aap-migration/con-rpm-to-containerized-prerequisites.adoc[leveloffset=+1] + +include::aap-migration/con-rpm-to-ocp-prerequisites.adoc[leveloffset=+1] + +include::aap-migration/con-rpm-to-managed-prerequisites.adoc[leveloffset=+1] + +include::aap-migration/con-containerized-to-ocp-prerequisites.adoc[leveloffset=+1] + +include::aap-migration/con-containerized-to-managed-prerequisites.adoc[leveloffset=+1] diff --git a/downstream/assemblies/aap-migration/assembly-source-containerized.adoc b/downstream/assemblies/aap-migration/assembly-source-containerized.adoc new file mode 100644 index 0000000000..ea79fe0f99 --- /dev/null +++ b/downstream/assemblies/aap-migration/assembly-source-containerized.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="source-containerized"] += Container-based {PlatformNameShort} + +Prepare and export data from your container-based {PlatformNameShort} deployment. + +include::aap-migration/proc-containerized-source-environment-preparation-assessment.adoc[leveloffset=+1] + +include::aap-migration/proc-containerized-source-environment-export.adoc[leveloffset=+1] + +== Creating and verifying the migration artifact + +To create and verify the migration artifact, follow the instructions in link:{URLMigration}/migration-artifact[Migration artifact structure and verification]. diff --git a/downstream/assemblies/aap-migration/assembly-source-rpm.adoc b/downstream/assemblies/aap-migration/assembly-source-rpm.adoc new file mode 100644 index 0000000000..9b661f385f --- /dev/null +++ b/downstream/assemblies/aap-migration/assembly-source-rpm.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="source-rpm"] += RPM-based {PlatformNameShort} + +Prepare and export data from your RPM-based {PlatformNameShort} deployment. + +include::aap-migration/proc-rpm-environment-source-prep.adoc[leveloffset=+1] + +include::aap-migration/proc-rpm-source-environment-export.adoc[leveloffset=+1] + +== Creating and verifying the migration artifact + +To create and verify the migration artifact, follow the instructions in link:{URLMigration}/migration-artifact[Migration artifact structure and verification]. diff --git a/downstream/assemblies/aap-migration/assembly-target-containerized.adoc b/downstream/assemblies/aap-migration/assembly-target-containerized.adoc new file mode 100644 index 0000000000..bed0505c07 --- /dev/null +++ b/downstream/assemblies/aap-migration/assembly-target-containerized.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="target-containerized"] += Container-based {PlatformNameShort} + +Prepare and assess your target container-based {PlatformNameShort} environment, and import and reconcile your migrated content. 
+ +include::aap-migration/proc-containerized-target-prep.adoc[leveloffset=+1] + +include::aap-migration/proc-containerized-target-import.adoc[leveloffset=+1] + +include::aap-migration/proc-containerized-post-import.adoc[leveloffset=+1] + +include::aap-migration/proc-containerized-validation.adoc[leveloffset=+1] diff --git a/downstream/assemblies/aap-migration/assembly-target-managed-aap.adoc b/downstream/assemblies/aap-migration/assembly-target-managed-aap.adoc new file mode 100644 index 0000000000..4c075684c6 --- /dev/null +++ b/downstream/assemblies/aap-migration/assembly-target-managed-aap.adoc @@ -0,0 +1,10 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="target-managed-aap"] += Managed {PlatformNameShort} + +Prepare and migrate your source environment to a Managed {PlatformNameShort} deployment, and reconcile the target environment post-migration. + +include::aap-migration/proc-managed-target-migration.adoc[leveloffset=+1] + +include::aap-migration/proc-managed-post-import.adoc[leveloffset=+1] diff --git a/downstream/assemblies/aap-migration/assembly-target-ocp.adoc b/downstream/assemblies/aap-migration/assembly-target-ocp.adoc new file mode 100644 index 0000000000..85d07a74e9 --- /dev/null +++ b/downstream/assemblies/aap-migration/assembly-target-ocp.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="target-ocp"] += {OCPShort} + +Prepare and assess your target {OCPShort} environment, and import and reconcile your migrated content. + +include::aap-migration/proc-ocp-target-prep.adoc[leveloffset=+1] + +include::aap-migration/proc-ocp-target-import.adoc[leveloffset=+1] + +include::aap-migration/proc-ocp-post-import.adoc[leveloffset=+1] + +include::aap-migration/proc-ocp-validation.adoc[leveloffset=+1] diff --git a/downstream/assemblies/analytics/assembly-automation-savings-planner.adoc b/downstream/assemblies/analytics/assembly-automation-savings-planner.adoc index 2909a13153..62866570b8 100644 --- a/downstream/assemblies/analytics/assembly-automation-savings-planner.adoc +++ b/downstream/assemblies/analytics/assembly-automation-savings-planner.adoc @@ -6,7 +6,7 @@ ifdef::context[:parent-context: {context}] = About the {planner} -An automation savings plan gives you the ability to plan, track, and analyze the potential efficiency and cost savings of your automation initiatives. Use {InsightsName} to create an automation savings plan by defining a list of tasks needed to complete an automation job. You can then link your automation savings plans to an Ansible job template in order to accurately measure the time and cost savings upon completion of an automation job. +An automation savings plan gives you the ability to plan, track, and analyze the potential efficiency and cost savings of your automation initiatives. Use automation analytics to create an automation savings plan by defining a list of tasks needed to complete an automation job. You can then link your automation savings plans to an Ansible job template in order to accurately measure the time and cost savings upon completion of an automation job. To create an automation savings plan, you can utilize the {planner} to prioritize the various automation jobs throughout your organization and understand the potential time and cost savings for your automation initiatives. 
diff --git a/downstream/assemblies/analytics/assembly-data-dictionary.adoc b/downstream/assemblies/analytics/assembly-data-dictionary.adoc new file mode 100644 index 0000000000..19cb87b24d --- /dev/null +++ b/downstream/assemblies/analytics/assembly-data-dictionary.adoc @@ -0,0 +1,14 @@ +ifdef::context[:parent-context: {context}] + +[id="assembly-using-data-dictionary"] + +:context: assembly-using-job-explorer-ctxt + += {Analytics} Data Dictionary + +{Analytics} data is sent to the Red Hat Hybrid Cloud Console (HCC) to provide detailed analytics on your automation. + +The link:https://access.redhat.com/articles/7124201[Ansible Automation Platform - Data Dictionary] knowledge base article outlines the data dictionary for the information collected by {Analytics} from the {PlatformName} {ControllerName}, also known as Automation Execution. + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/builder/assembly-common-ee-scenarios.adoc b/downstream/assemblies/builder/assembly-common-ee-scenarios.adoc index 3a4e67e918..91026da751 100644 --- a/downstream/assemblies/builder/assembly-common-ee-scenarios.adoc +++ b/downstream/assemblies/builder/assembly-common-ee-scenarios.adoc @@ -5,9 +5,6 @@ Use the following example definition files to address common configuration scenarios. include::builder/ref-scenario-update-hub-ca-cert.adoc[leveloffset=+1] + include::builder/ref-scenario-using-authentication-ee.adoc[leveloffset=+1] -[role="_additional-resources"] -== Additional resources -* For information regarding the different parts of an {ExecEnvNameSing} definition file, see xref:assembly-definition-file-breakdown[Breakdown of definition file content]. -* For additional example definition files for common scenarios, see link:https://ansible.readthedocs.io/projects/builder/en/latest/scenario_guides/scenario_copy/[Common scenarios section] of the _Ansible Builder Documentation_ diff --git a/downstream/assemblies/builder/assembly-intro-to-builder.adoc b/downstream/assemblies/builder/assembly-intro-to-builder.adoc index fd9cc7a4f2..1e765d4d73 100644 --- a/downstream/assemblies/builder/assembly-intro-to-builder.adoc +++ b/downstream/assemblies/builder/assembly-intro-to-builder.adoc @@ -3,9 +3,19 @@ = Introduction to {ExecEnvNameStart} Using Ansible content that depends on non-default dependencies can be complicated because the packages must be installed on each node, interact with other software installed on the host system, and be kept in sync. +You must use the same environment during development, testing, and production. +Red Hat provides {ExecEnvShort}s for this purpose. {ExecEnvNameStart} help simplify this process and can easily be created with {Builder}. +== Builder + +Ansible provides an link:https://github.com/redhat-cop/ee_utilities[Execution Environment Utilities Collection], `infra.ee_utilities`. +This is a collection of roles for creating and managing images, or migrating from the older Tower virtualenvs to {ExecEnvShort}s. +Using this collection, you can automate the preparation and maintenance of Ansible {ExecEnvShort}s.
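+ +For example, a minimal sketch of installing the collection (shown for illustration; adjust for your environment): + +---- +ansible-galaxy collection install infra.ee_utilities +---- 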
+ include::builder/con-about-ee.adoc[leveloffset=+1] + include::builder/con-why-ee.adoc[leveloffset=+2] diff --git a/downstream/assemblies/builder/assembly-open-source-license.adoc b/downstream/assemblies/builder/assembly-open-source-license.adoc new file mode 100644 index 0000000000..98bc726217 --- /dev/null +++ b/downstream/assemblies/builder/assembly-open-source-license.adoc @@ -0,0 +1,5 @@ +[id="assembly-open-source-license"] + += Open source license + +include::../aap-common/apache-2.0-license.adoc[leveloffset=+1] \ No newline at end of file diff --git a/downstream/assemblies/builder/assembly-populate-container-registry.adoc b/downstream/assemblies/builder/assembly-populate-container-registry.adoc new file mode 100644 index 0000000000..3017b0e13c --- /dev/null +++ b/downstream/assemblies/builder/assembly-populate-container-registry.adoc @@ -0,0 +1,25 @@ +[id="assembly-populate-container-registry"] + += Populating your {PrivateHubName} container registry + +By default, {PrivateHubName} does not include {ExecEnvName}. +To populate your container registry, you must push an {ExecEnvShort} to it. + +include::platform/proc-uploading-the-custom-execution-environment-to-the-private-hub.adoc[leveloffset=+1] + +Use the following workflow to populate your private automation hub remote registry: + +. link:{URLHubManagingContent}/managing-containers-hub#obtain-images[Pull {ExecEnvShort}s for use in {HubName}] +. link:{URLHubManagingContent}/managing-containers-hub#tag-pulled-images[Tag {ExecEnvShort} for use in {HubName}] +. link:{URLHubManagingContent}/managing-containers-hub#push-containers[Push an {ExecEnvShort} to {PrivateHubName}] +. link:{URLHubManagingContent}/managing-containers-hub#setting-up-container-repository[Set up your container repository] +. link:{URLHubManagingContent}/managing-containers-hub#proc-doing-one-procedure_assembly-keyword[Add a README to your container repository] +. link:{URLHubManagingContent}/managing-containers-hub#providing-access-to-containers[Provide access to your {ExecEnvName}s] +. link:{URLHubManagingContent}/managing-containers-hub#proc-tag-image[Tag container images] +. link:{URLHubManagingContent}/managing-containers-hub#proc-create-credential[Create a credential] +. link:{URLHubManagingContent}/managing-containers-hub#pulling-images-container-repository[Pulling images from a container repository] +. link:{URLHubManagingContent}/managing-containers-hub#pulling-image[Pull an image] +. 
link:{URLHubManagingContent}/managing-containers-hub#proc-sync-image-adoc_pulling-images-container-repository[Sync images from a container repository] + +.Additional resources +For more information about registries, see link:https://access.redhat.com/articles/RegistryAuthentication[Red Hat Container Registry Authentication]. diff --git a/downstream/assemblies/builder/assembly-publishing-exec-env.adoc b/downstream/assemblies/builder/assembly-publishing-exec-env.adoc index 4282f39acf..ec0849054b 100644 --- a/downstream/assemblies/builder/assembly-publishing-exec-env.adoc +++ b/downstream/assemblies/builder/assembly-publishing-exec-env.adoc @@ -5,11 +5,11 @@ include::builder/proc-customize-ee-image.adoc[leveloffset=+1] [role="_additional-resources"] -== Additional resources (or Next steps) +.Additional resources For more details on customizing {ExecEnvShort}s based on common scenarios, see the following topics in the _Ansible Builder Documentation_: -* link:https://ansible.readthedocs.io/projects/builder/en/latest/scenario_guides/scenario_copy/[Copying arbitratory files to an execution enviornment] +* link:https://ansible.readthedocs.io/projects/builder/en/latest/scenario_guides/scenario_copy/[Copying arbitrary files to an execution environment] * link:https://ansible.readthedocs.io/projects/builder/en/latest/scenario_guides/scenario_using_env/[Building execution environments with environment variables] * link:https://ansible.readthedocs.io/projects/builder/en/latest/scenario_guides/scenario_custom/[Building execution environments with environment variables and `ansible.cfg`] diff --git a/downstream/assemblies/builder/assembly-using-builder.adoc b/downstream/assemblies/builder/assembly-using-builder.adoc index 35d69d227a..8aad6a4187 100644 --- a/downstream/assemblies/builder/assembly-using-builder.adoc +++ b/downstream/assemblies/builder/assembly-using-builder.adoc @@ -4,13 +4,53 @@ {Builder} is a command line tool that automates the process of building {ExecEnvName} by using metadata defined in various Ansible Collections or created by the user. +[NOTE] +==== +You must build an {ExecEnvShort} with {Builder} before you can use it in {ControllerName}. +After building it, push the image to a container registry (such as Quay), and then, when you create an {ExecEnvShort} in the {ControllerName} UI, point to that registry so that you can use the image in {PlatformNameShort}, for example, in a job template.
+==== + include::builder/con-why-builder.adoc[leveloffset=+1] + include::builder/proc-installing-builder.adoc[leveloffset=+1] + +include::platform/con-building-an-execution-environment-in-a-disconnected-environment.adoc[leveloffset=+1] + include::builder/con-building-definition-file.adoc[leveloffset=+1] +include::platform/proc-creating-the-custom-execution-environment-definition.adoc[leveloffset=+1] + include::builder/proc-executing-build.adoc[leveloffset=+1] -include::assembly-definition-file-breakdown.adoc[leveloffset=+1] +include::builder/con-definition-file-breakdown.adoc[leveloffset=+1] + +include::builder/ref-build-args-base-image.adoc[leveloffset=+2] + +//include::builder/con-ansible-config-file-path.adoc[leveloffset=+1] +include::builder/con-definition-dependencies.adoc[leveloffset=+2] + +include::builder/con-galaxy-dependencies.adoc[leveloffset=+3] + +include::builder/con-python-dependencies.adoc[leveloffset=+3] + +include::builder/con-system-dependencies.adoc[leveloffset=+3] + +include::builder/ref-definition-file-images.adoc[leveloffset=+3] + +include::builder/con-additional-build-files.adoc[leveloffset=+1] + +include::builder/con-additional-custom-build-steps.adoc[leveloffset=+1] + +include::builder/con-build-an-ee-with-env-variables.adoc[leveloffset=+2] + +include::builder/con-build-ee-with-env-vars-for-galaxy.adoc[leveloffset=+2] + +include::platform/proc-update-ee-image-locations.adoc[leveloffset=+1] + include::builder/con-optional-build-command-arguments.adoc[leveloffset=+1] + include::builder/con-container_file.adoc[leveloffset=+1] + include::builder/proc-creating-containerfile-no-image.adoc[leveloffset=+1] + +include::builder/ref-example-yaml-image-files.adoc[leveloffset=+1] diff --git a/downstream/assemblies/builder/platform b/downstream/assemblies/builder/platform new file mode 120000 index 0000000000..06203029c9 --- /dev/null +++ b/downstream/assemblies/builder/platform @@ -0,0 +1 @@ +../../modules/platform \ No newline at end of file diff --git a/downstream/assemblies/devtools/assembly-creating-playbook-project.adoc b/downstream/assemblies/devtools/assembly-creating-playbook-project.adoc index 146963409e..275349d7f3 100644 --- a/downstream/assemblies/devtools/assembly-creating-playbook-project.adoc +++ b/downstream/assemblies/devtools/assembly-creating-playbook-project.adoc @@ -1,4 +1,5 @@ ifdef::context[:parent-context: {context}] +:_mod-docs-content-type: ASSEMBLY [id="creating-playbook-project"] = Creating a playbook project diff --git a/downstream/assemblies/devtools/assembly-developer-workflow.adoc b/downstream/assemblies/devtools/assembly-developer-workflow.adoc index 4226593aee..099b14ee65 100644 --- a/downstream/assemblies/devtools/assembly-developer-workflow.adoc +++ b/downstream/assemblies/devtools/assembly-developer-workflow.adoc @@ -1,4 +1,5 @@ ifdef::context[:parent-context: {context}] +:_mod-docs-content-type: ASSEMBLY [id="developer-workflow"] = Workflow for developing automation content diff --git a/downstream/assemblies/devtools/assembly-devtools-create-roles-collection.adoc b/downstream/assemblies/devtools/assembly-devtools-create-roles-collection.adoc new file mode 100644 index 0000000000..0bdd57505d --- /dev/null +++ b/downstream/assemblies/devtools/assembly-devtools-create-roles-collection.adoc @@ -0,0 +1,55 @@ +ifdef::context[:parent-context-of-devtools-create-roles-collection: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="devtools-create-roles-collection"] +endif::[] +ifdef::context[] 
+[id="devtools-create-roles-collection_{context}"] +endif::[] + += Creating a collection for distributing roles + +:context: devtools-create-roles-collection + +An Ansible role is a self-contained unit of Ansible automation content that groups related +tasks and associated variables, files, handlers, and other assets in a defined directory structure. + +You can run Ansible roles in one or more plays, and reuse them across playbooks. +Invoking roles instead of tasks simplifies playbooks. +You can migrate existing standalone roles into collections, +and push them to private automation hub to share them with other users in your organization. +Distributing roles in this way is a typical way to use collections. + +With Ansible collections, you can store and distribute multiple roles in a single unit of reusable automation. +Inside a collection, you can share custom plug-ins across all roles in the collection instead of duplicating them in each role. + +You must move roles into collections if you want to use them in {PlatformNameShort}. + +You can add existing standalone roles to a collection, or add new roles to it. +Push the collection to source control and configure credentials for the repository in {PlatformNameShort}. + +include::devtools/con-devtools-plan-roles-collection.adoc[leveloffset=+1] + +include::devtools/con-devtools-roles-collection-prerequisites.adoc[leveloffset=+1] + +include::devtools/proc-devtools-scaffold-roles-collection.adoc[leveloffset=+1] + +include::devtools/proc-devtools-migrate-existing-roles-collection.adoc[leveloffset=+1] + +include::devtools/proc-devtools-create-new-role-in-collection.adoc[leveloffset=+1] + +include::devtools/proc-devtools-docs-roles-collection.adoc[leveloffset=+1] + +// include::devtools/proc-devtools-run-roles-collection.adoc[leveloffset=+1] + +// include::devtools/proc-devtools-molecule-test-roles-collection.adoc[leveloffset=+1] + +include::devtools/proc-devtools-publish-roles-collection-pah.adoc[leveloffset=+1] + +include::devtools/proc-devtools-use-roles-collections-aap.adoc[leveloffset=+2] + +ifdef::parent-context-of-devtools-create-roles-collection[:context: {parent-context-of-devtools-create-roles-collection}] +ifndef::parent-context-of-devtools-create-roles-collection[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-devtools-develop-collections.adoc b/downstream/assemblies/devtools/assembly-devtools-develop-collections.adoc new file mode 100644 index 0000000000..4f9e0b1c29 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-devtools-develop-collections.adoc @@ -0,0 +1,28 @@ +ifdef::context[:parent-context-of-devtools-develop-collections: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="devtools-develop-collections"] +endif::[] +ifdef::context[] +[id="devtools-develop-collections_{context}"] +endif::[] += Developing collections + +:context: devtools-develop-collections + +Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. +Red Hat provides Ansible Content Collections on Ansible automation hub that contain both {CertifiedCon} and {Valid}. + +If you have installed private automation hub, you can create collections for your organization and push them +to {PrivateHubName} so that you can use them in job templates in {PlatformNameShort}. +You can use collections to package and distribute plug-ins. These plug-ins are written in Python. 
+ +You can also create collections to package and distribute Ansible roles, which are expressed in YAML. +You can include playbooks and custom plug-ins that are required for these roles in the collection. +Typically, collections of roles are distributed for use within your organization. + +ifdef::parent-context-of-devtools-develop-collections[:context: {parent-context-of-devtools-develop-collections}] +ifndef::parent-context-of-devtools-develop-collections[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-devtools-install.adoc b/downstream/assemblies/devtools/assembly-devtools-install.adoc index 4f6fcd5e83..d86f55696e 100644 --- a/downstream/assemblies/devtools/assembly-devtools-install.adoc +++ b/downstream/assemblies/devtools/assembly-devtools-install.adoc @@ -1,13 +1,44 @@ -ifdef::context[:parent-context: {context}] +ifdef::context[:parent-context-of-assembly-devtools-install: {context}] +:_mod-docs-content-type: ASSEMBLY [id="installing-devtools"] = Installing {ToolsName} :context: installing-devtools [role="_abstract"] +Red Hat provides two options for installing {ToolsName}. +// Both options require {VSCode} (Visual Studio Code) with the Ansible extension added. -include::devtools/proc-devtools-install.adoc[leveloffset=+1] +* Installation in a RHEL container running inside {VSCode}. +You can install this option on macOS, Windows, and Linux systems. +* Installation on your local RHEL 8 or RHEL 9 system using an RPM (Red Hat Package Manager) package. ++ +[NOTE] +==== +RPM installation is not supported on RHEL 10. +==== -ifdef::parent-context[:context: {parent-context}] -ifndef::parent-context[:!context:] + +include::devtools/con-devtools-requirements.adoc[leveloffset=+1] + +include::devtools/proc-devtools-install-podman-desktop-wsl.adoc[leveloffset=+2] + +include::devtools/proc-devtools-setup-registry-redhat-io.adoc[leveloffset=+2] + +include::devtools/proc-devtools-install-vsc.adoc[leveloffset=+2] + +include::devtools/proc-devtools-install-vscode-extension.adoc[leveloffset=+2] + +include::devtools/proc-devtools-extension-settings.adoc[leveloffset=+2] + +include::devtools/proc-devtools-extension-set-language.adoc[leveloffset=+2] + +include::devtools/proc-devtools-ms-dev-containers-ext.adoc[leveloffset=+2] + +include::devtools/proc-devtools-install-container.adoc[leveloffset=+1] + +include::devtools/proc-devtools-install-rpm.adoc[leveloffset=+1] + +ifdef::parent-context-of-assembly-devtools-install[:context: {parent-context-of-assembly-devtools-install}] +ifndef::parent-context-of-assembly-devtools-install[:!context:] diff --git a/downstream/assemblies/devtools/assembly-devtools-intro.adoc b/downstream/assemblies/devtools/assembly-devtools-intro.adoc index 917ef299d5..0629791b93 100644 --- a/downstream/assemblies/devtools/assembly-devtools-intro.adoc +++ b/downstream/assemblies/devtools/assembly-devtools-intro.adoc @@ -1,4 +1,5 @@ ifdef::context[:parent-context: {context}] +:_mod-docs-content-type: ASSEMBLY [id="devtools-intro"] = {ToolsName} @@ -14,6 +15,9 @@ you can use these tools from the {VSCode} user interface. Use {ToolsName} during local development of playbooks, local testing, and in a CI pipeline (linting and testing). +This document describes how to use {ToolsName} to create a playbook project that contains playbooks and roles that you can reuse within the project. +It also describes how to test the playbooks and deploy the project on your {PlatformNameShort} instance so that you can use the playbooks in automation jobs.
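+
+As an illustration of the CI use case mentioned above, the following is a minimal sketch of a lint job, assuming a GitHub Actions pipeline; the trigger, runner, and playbook path are hypothetical.
+
+[source,yaml]
+----
+# Hypothetical CI job that lints a playbook project with ansible-lint.
+# The trigger, runner, and playbook path are illustrative only.
+name: lint
+on: [pull_request]
+jobs:
+  ansible-lint:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+      - name: Install and run ansible-lint
+        run: |
+          pip install ansible-lint
+          ansible-lint playbooks/
+----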
+ include::devtools/ref-devtools-components.adoc[leveloffset=+1] ifdef::parent-context[:context: {parent-context}] diff --git a/downstream/assemblies/devtools/assembly-devtools-setup.adoc b/downstream/assemblies/devtools/assembly-devtools-setup.adoc deleted file mode 100644 index b5d054f0a8..0000000000 --- a/downstream/assemblies/devtools/assembly-devtools-setup.adoc +++ /dev/null @@ -1,19 +0,0 @@ -ifdef::context[:parent-context: {context}] -[id="devtools-setup"] - -= Configuring {ToolsName} - - -:context: devtools-setup -[role="_abstract"] - -include::devtools/proc-installing-vscode.adoc[leveloffset=+1] -// include::devtools/proc-directory-setup.adoc[leveloffset=+1] -include::devtools/proc-setup-vscode-workspace.adoc[leveloffset=+1] -include::devtools/proc-install-vscode-extension.adoc[leveloffset=+1] -include::devtools/proc-configure-extension-settings.adoc[leveloffset=+1] -include::devtools/proc-create-python-venv.adoc[leveloffset=+1] - -ifdef::parent-context[:context: {parent-context}] -ifndef::parent-context[:!context:] - diff --git a/downstream/assemblies/devtools/assembly-publishing-playbook-collection-aap.adoc b/downstream/assemblies/devtools/assembly-publishing-playbook-collection-aap.adoc new file mode 100644 index 0000000000..a2b7da26ff --- /dev/null +++ b/downstream/assemblies/devtools/assembly-publishing-playbook-collection-aap.adoc @@ -0,0 +1,17 @@ +ifdef::context[:parent-context: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="publishing-playbook-collection-aap"] + += Publishing and running your playbooks in {PlatformNameShort} + +:context: publishing-playbook-collection-aap-intro +[role="_abstract"] +The following procedures describe how to deploy your new playbooks in your instance of {PlatformNameShort} so that you can use them to run automation jobs. + +include::devtools/proc-devtools-save-scm.adoc[leveloffset=+1] + +include::devtools/proc-devtools-create-aap-job.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-rhdh-configure.adoc b/downstream/assemblies/devtools/assembly-rhdh-configure.adoc deleted file mode 100644 index 55205cbd5b..0000000000 --- a/downstream/assemblies/devtools/assembly-rhdh-configure.adoc +++ /dev/null @@ -1,15 +0,0 @@ -ifdef::context[:parent-context: {context}] -[id="rhdh-configure_{context}"] - -= Configuring {AAPRHDH} - -:context: rhdh-configure -[role="_abstract"] - -{AAPRHDH} Configuration - -//include::devtools/ref-devtools-components.adoc[leveloffset=+1] - -ifdef::parent-context[:context: {parent-context}] -ifndef::parent-context[:!context:] - diff --git a/downstream/assemblies/devtools/assembly-rhdh-example.adoc b/downstream/assemblies/devtools/assembly-rhdh-example.adoc new file mode 100644 index 0000000000..306a7781ca --- /dev/null +++ b/downstream/assemblies/devtools/assembly-rhdh-example.adoc @@ -0,0 +1,27 @@ +ifdef::context[:parent-context: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="rhdh-example_{context}"] + += Example: Automate Red Hat Enterprise Linux firewall configuration + +:context: rhdh-example +[role="_abstract"] +This example demonstrates how the Ansible plug-ins can help Ansible users of all skill levels create quality Ansible content. + +As an infrastructure engineer new to Ansible, you have been tasked with creating a playbook to configure a {RHEL} (RHEL) host firewall. + +The following procedures show you how to use the Ansible plug-ins and Dev Spaces to develop a playbook.
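+
+The following playbook is an illustrative sketch of the kind of playbook this example develops, not the exact playbook the procedures produce; the host group and the service are hypothetical.
+
+[source,yaml]
+----
+---
+# Illustrative sketch only: open the HTTPS service on RHEL hosts.
+# The rhel_hosts group name is hypothetical.
+- name: Configure the RHEL host firewall
+  hosts: rhel_hosts
+  become: true
+  tasks:
+    - name: Permit HTTPS traffic permanently and immediately
+      ansible.posix.firewalld:
+        service: https
+        permanent: true
+        immediate: true
+        state: enabled
+----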
+ +include::devtools/proc-rhdh-firewall-example-learn.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-firewall-example-discover.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-firewall-example-create-playbook.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-firewall-example-new-playbook.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-firewall-example-edit.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-rhdh-feedback.adoc b/downstream/assemblies/devtools/assembly-rhdh-feedback.adoc new file mode 100644 index 0000000000..aed3d80679 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-rhdh-feedback.adoc @@ -0,0 +1,26 @@ +ifdef::context[:parent-context: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="rhdh-feedback_{context}"] + += Providing feedback in the Ansible plug-ins + +:context: rhdh-feedback +[role="_abstract"] +The Ansible plug-ins provide a feedback form where you can suggest new features and content, and provide general feedback. + +. Click the Ansible `A` icon in the {RHDH} navigation panel. +. Click the *Feedback* icon to display the feedback form. ++ +image::rhdh-feedback-form.png[Ansible plug-in feedback form] +. Enter the feedback you want to provide. +. Tick the *I understand that feedback is shared with Red Hat* checkbox. +. Click *Submit*. + +[NOTE] +==== +To ensure that Red Hat receives your feedback, exclude your {RHDH} URL from any browser ad blockers or privacy tools. +==== + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-rhdh-install-ocp-helm.adoc b/downstream/assemblies/devtools/assembly-rhdh-install-ocp-helm.adoc new file mode 100644 index 0000000000..62808c59ff --- /dev/null +++ b/downstream/assemblies/devtools/assembly-rhdh-install-ocp-helm.adoc @@ -0,0 +1,50 @@ +ifdef::context[:parent-context-of-rhdh-install-ocp-helm: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="rhdh-install-ocp-helm_{context}"] + += Installing the Ansible plug-ins with a Helm chart on {OCPShort} + +:context: rhdh-install-ocp +[role="_abstract"] +The following procedures describe how to install Ansible plug-ins in {RHDH} instances on {OCP} using a Helm chart. + +The workflow is as follows: + +. Download the Ansible plug-in files. +. Create a plug-in registry in your OpenShift cluster to host the Ansible plug-ins. +. Add the plug-ins to the Helm chart. +. Create a custom ConfigMap. +. Add your custom ConfigMap to your Helm chart. +. Edit your custom ConfigMap and Helm chart according to the required and optional configuration procedures. ++ +[NOTE] +==== +You can save changes to your Helm chart and ConfigMap after each update to your configuration. +You do not have to make all the changes to these files in a single session.
+==== + +include::devtools/con-rhdh-install-ocp-prereqs.adoc[leveloffset=+1] + +include::devtools/con-rhdh-recommended-preconfig.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-download-plugins.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-create-plugin-registry.adoc[leveloffset=+1] + + +// Required config +include::assembly-rhdh-ocp-required-installation.adoc[leveloffset=+1] + +// +// Optional config +include::assembly-rhdh-ocp-configure-optional.adoc[leveloffset=+1] + +// +// Full example configuration +include::assembly-rhdh-ocp-full-examples.adoc[leveloffset=+1] + +// + +ifdef::parent-context-of-rhdh-install-ocp-helm[:context: {parent-context-of-rhdh-install-ocp-helm}] +ifndef::parent-context-of-rhdh-install-ocp-helm[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-rhdh-install-ocp-operator.adoc b/downstream/assemblies/devtools/assembly-rhdh-install-ocp-operator.adoc new file mode 100644 index 0000000000..16c63ea868 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-rhdh-install-ocp-operator.adoc @@ -0,0 +1,50 @@ +ifdef::context[:parent-context-of-rhdh-install-ocp-operator: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="rhdh-install-ocp-operator_{context}"] + += Installing the {AAPRHDHShort} with the Operator on {OCPShort} + +:context: rhdh-install-ocp-operator +[role="_abstract"] +The following procedures describe how to install {AAPRHDHShort} in {RHDH} instances on {OCP} using the Operator. + +include::devtools/con-rhdh-install-ocp-prereqs.adoc[leveloffset=+1] + +include::devtools/con-rhdh-recommended-preconfig.adoc[leveloffset=+1] + +// Add sidecar container to the base configmap of for the rhdh deployment +include::devtools/proc-rhdh-operator-add-sidecar-container.adoc[leveloffset=+1] + +// Plug-in registry +include::devtools/proc-rhdh-download-plugins.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-create-plugin-registry.adoc[leveloffset=+1] + +// Install the dynamic plug-ins +include::devtools/proc-rhdh-install-dynamic-plugins-operator.adoc[leveloffset=+1] + +// +// Add dynamic plug-ins to rhaap-dynamic-plugins-config +// Replace the following to reuse Helm config: +// include::devtools/proc-rhdh-operator-install-add-plugins-app-config.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-add-custom-configmap.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-configure-devtools-server.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-configure-aap-details.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-add-plugin-software-templates.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-configure-rbac.adoc[leveloffset=+1] + +include::assembly-rhdh-ocp-configure-optional.adoc[leveloffset=+1] + +// Full example configuration +include::devtools/ref-rhdh-full-aap-configmap-example.adoc[leveloffset=+1] + +// + +ifdef::parent-context-of-rhdh-install-ocp-operator[:context: {parent-context-of-rhdh-install-ocp-operator}] +ifndef::parent-context-of-rhdh-install-ocp-operator[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-rhdh-install.adoc b/downstream/assemblies/devtools/assembly-rhdh-install.adoc deleted file mode 100644 index c4d32d9280..0000000000 --- a/downstream/assemblies/devtools/assembly-rhdh-install.adoc +++ /dev/null @@ -1,16 +0,0 @@ -ifdef::context[:parent-context: {context}] -[id="rhdh-install_{context}"] - -= Installing {AAPRHDH} - -:context: rhdh-install -[role="_abstract"] - -{AAPRHDH} (`ansible-dev-tools`) is a suite of tools provided with {PlatformNameShort} to help automation creators to -create, test, and 
deploy playbook projects, execution environments, and collections. - -//include::devtools/ref-devtools-components.adoc[leveloffset=+1] - -ifdef::parent-context[:context: {parent-context}] -ifndef::parent-context[:!context:] - diff --git a/downstream/assemblies/devtools/assembly-rhdh-intro.adoc b/downstream/assemblies/devtools/assembly-rhdh-intro.adoc index 59ac7f4829..0deb956d52 100644 --- a/downstream/assemblies/devtools/assembly-rhdh-intro.adoc +++ b/downstream/assemblies/devtools/assembly-rhdh-intro.adoc @@ -1,4 +1,5 @@ ifdef::context[:parent-context: {context}] +:_mod-docs-content-type: ASSEMBLY [id="rhdh-intro_{context}"] = {AAPRHDH} @@ -6,9 +7,13 @@ ifdef::context[:parent-context: {context}] :context: rhdh-intro [role="_abstract"] -{AAPRHDH} introduction placeholder +include::devtools/ref-rhdh-about-rhdh.adoc[leveloffset=+1] -//include::devtools/ref-devtools-components.adoc[leveloffset=+1] +include::devtools/ref-rhdh-about-plugins.adoc[leveloffset=+1] + +include::devtools/ref-rhdh-architecture.adoc[leveloffset=+1] + +// include::devtools/ref-devtools-components.adoc[leveloffset=+1] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/devtools/assembly-rhdh-ocp-configure-optional.adoc b/downstream/assemblies/devtools/assembly-rhdh-ocp-configure-optional.adoc new file mode 100644 index 0000000000..734ab4042d --- /dev/null +++ b/downstream/assemblies/devtools/assembly-rhdh-ocp-configure-optional.adoc @@ -0,0 +1,22 @@ +ifdef::context[:parent-context-of-rhdh-ocp-configure-optional: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="rhdh-ocp-configure-optional_{context}"] + += Optional configuration for Ansible plug-ins + +:context: rhdh-ocp-configure-optional_{parent-context-of-rhdh-ocp-configure-optional} +[role="_abstract"] + +include::devtools/proc-rhdh-enable-rhdh-authentication.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-configure-optional-integrations.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-configure-devspaces.adoc[leveloffset=+2] + +include::devtools/proc-rhdh-configure-pah-url.adoc[leveloffset=+2] + +ifdef::parent-context-of-rhdh-ocp-configure-optional[:context: {parent-context-of-rhdh-ocp-configure-optional}] +ifndef::parent-context-of-rhdh-ocp-configure-optional[:!context:] + +ifdef::context[:parent-context-of-assembly: {context}] + diff --git a/downstream/assemblies/devtools/assembly-rhdh-ocp-full-examples.adoc b/downstream/assemblies/devtools/assembly-rhdh-ocp-full-examples.adoc new file mode 100644 index 0000000000..559b28d65d --- /dev/null +++ b/downstream/assemblies/devtools/assembly-rhdh-ocp-full-examples.adoc @@ -0,0 +1,17 @@ +ifdef::context[:parent-context-of-rhdh-ocp-full-examples: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="rhdh-ocp-full-examples_{context}"] + += Full examples + +:context: rhdh-ocp-full-examples +[role="_abstract"] + + +include::devtools/ref-rhdh-full-aap-configmap-example.adoc[leveloffset=+1] + +include::devtools/ref-rhdh-full-helm-chart-ansible-plugins.adoc[leveloffset=+1] + +ifdef::parent-context-of-rhdh-ocp-full-examples[:context: {parent-context-of-rhdh-ocp-full-examples}] +ifndef::parent-context-of-rhdh-ocp-full-examples[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-rhdh-ocp-required-installation.adoc b/downstream/assemblies/devtools/assembly-rhdh-ocp-required-installation.adoc new file mode 100644 index 0000000000..dc1963bdf5 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-rhdh-ocp-required-installation.adoc @@ -0,0 
+1,32 @@ +ifdef::context[:parent-context-of-rhdh-ocp-required-installation: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="rhdh-ocp-required-installation_{context}"] + += Required configuration + +:context: rhdh-ocp-required-installation +[role="_abstract"] + +include::devtools/proc-rhdh-add-plugin-config.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-devtools-sidecar.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-add-pull-secret-helm.adoc[leveloffset=+2] + +include::devtools/proc-rhdh-add-devtools-container.adoc[leveloffset=+2] + +include::devtools/proc-rhdh-add-custom-configmap.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-configure-devtools-server.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-configure-aap-details.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-configure-showcase-location.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-add-plugin-software-templates.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-configure-rbac.adoc[leveloffset=+1] + +ifdef::parent-context-of-rhdh-ocp-required-installation[:context: {parent-context-of-rhdh-ocp-required-installation}] +ifndef::parent-context-of-rhdh-ocp-required-installation[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-rhdh-subscription-warnings.adoc b/downstream/assemblies/devtools/assembly-rhdh-subscription-warnings.adoc new file mode 100644 index 0000000000..4c12714f67 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-rhdh-subscription-warnings.adoc @@ -0,0 +1,30 @@ +ifdef::context[:parent-context-of-rhdh-subscription-warnings: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="rhdh-subscription-warnings_{context}"] + += Ansible plug-ins subscription warning messages + +:context: rhdh-subscription-warnings + +[role="_abstract"] +The Ansible plug-ins display a subscription warning banner in the user interface in the following scenarios: + +* xref:rhdh-warning-unable-connect-aap_rhdh-subscription-warnings[Unable to connect to Ansible Automation Platform] +* xref:rhdh-warning-unable-authenticate-aap_rhdh-subscription-warnings[Unable to authenticate to Ansible Automation Platform] +* xref:rhdh-warning-invalid-aap-config_rhdh-subscription-warnings[Invalid Ansible Automation Platform configuration] +* xref:rhdh-warning-aap-ooc_rhdh-subscription-warnings[Ansible Automation Platform subscription is out of compliance] +* xref:rhdh-warning-invalid-aap-subscription_rhdh-subscription-warnings[Invalid Ansible Automation Platform subscription] + +include::devtools/proc-rhdh-warning-unable-connect-aap.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-warning-unable-authenticate-aap.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-warning-invalid-aap-config.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-warning-aap-ooc.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-warning-invalid-aap-subscription.adoc[leveloffset=+1] + +ifdef::parent-context-of-rhdh-subscription-warnings[:context: {parent-context-of-rhdh-subscription-warnings}] +ifndef::parent-context-of-rhdh-subscription-warnings[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-rhdh-telemetry-capturing.adoc b/downstream/assemblies/devtools/assembly-rhdh-telemetry-capturing.adoc new file mode 100644 index 0000000000..9ed676f75f --- /dev/null +++ b/downstream/assemblies/devtools/assembly-rhdh-telemetry-capturing.adoc @@ -0,0 +1,25 @@ +ifdef::context[:parent-context-of-rhdh-telemetry-capturing: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="rhdh-configure-telemetry_{context}"] + += {RHDH} data 
telemetry capturing + +{RHDH} (RHDH) sends telemetry data to Red Hat using the `backstage-plugin-analytics-provider-segment` plug-in, which is enabled by default. +This includes telemetry data from the Ansible plug-ins. + +Red Hat collects and analyzes the following data to improve your experience with {RHDH}: + +* Events of page visits and clicks on links or buttons. +* System-related information, for example, locale, timezone, user agent including browser and OS details. +* Page-related information, for example, title, category, extension name, URL, path, referrer, and search parameters. +* Anonymized IP addresses, recorded as 0.0.0.0. +* Anonymized username hashes, which are unique identifiers used solely to identify the number of unique users of the RHDH application. +* Feedback and sentiment provided in the Ansible plug-ins feedback form. + +With {RHDH}, you can disable or customize the telemetry data collection feature. +For more information, see the +link:{BaseURL}/red_hat_developer_hub/{RHDHVers}/html/telemetry_data_collection_and_analysis/index[_Telemetry data collection and analysis_] +guide in the {RHDH} documentation. + +ifdef::parent-context-of-rhdh-telemetry-capturing[:context: {parent-context-of-rhdh-telemetry-capturing}] +ifndef::parent-context-of-rhdh-telemetry-capturing[:!context:] diff --git a/downstream/assemblies/devtools/assembly-rhdh-uninstall-ocp-helm.adoc b/downstream/assemblies/devtools/assembly-rhdh-uninstall-ocp-helm.adoc new file mode 100644 index 0000000000..8e9f4c959b --- /dev/null +++ b/downstream/assemblies/devtools/assembly-rhdh-uninstall-ocp-helm.adoc @@ -0,0 +1,16 @@ +ifdef::context[:parent-context-of-assembly-rhdh-uninstall-ocp-helm: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="rhdh-uninstall-ocp-helm_{context}"] + += Uninstalling the Ansible plug-ins from a Helm installation on {OCPShort} + +:context: rhdh-uninstall-ocp-helm + +[role="_abstract"] +To uninstall the Ansible plug-ins, you must remove any software templates that use the `ansible:content:create` action from {RHDH}, and remove the plug-in configuration from the Helm chart in OpenShift. + +include::devtools/proc-rhdh-uninstall-ocp-helm.adoc[leveloffset=+1] + +ifdef::parent-context-of-assembly-rhdh-uninstall-ocp-helm[:context: {parent-context-of-assembly-rhdh-uninstall-ocp-helm}] +ifndef::parent-context-of-assembly-rhdh-uninstall-ocp-helm[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-rhdh-uninstall-ocp-operator.adoc b/downstream/assemblies/devtools/assembly-rhdh-uninstall-ocp-operator.adoc new file mode 100644 index 0000000000..dbef88a563 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-rhdh-uninstall-ocp-operator.adoc @@ -0,0 +1,26 @@ +ifdef::context[:parent-context-of-assembly-rhdh-uninstall-ocp-operator: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="rhdh-uninstall-ocp-operator_{context}"] + += Uninstalling an Operator installation on {OCPShort} + +:context: rhdh-uninstall-ocp-operator + +To delete the dynamic plug-ins from your installation, you must edit the ConfigMaps +that reference Ansible. + +The deployment reloads automatically when the ConfigMaps are updated. +You do not need to reload the deployment manually.
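+
+The following excerpt is a sketch of the kind of edit involved, assuming the dynamic plug-ins ConfigMap layout used by {RHDH}; the package path is hypothetical, so match the entries against the ConfigMaps in your own deployment.
+
+[source,yaml]
+----
+# Sketch of a dynamic plug-ins ConfigMap entry. Removing the entry, or
+# setting disabled: true, removes the plug-in from the deployment.
+# The package path below is hypothetical.
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: rhaap-dynamic-plugins-config
+data:
+  dynamic-plugins.yaml: |
+    plugins:
+      - package: ./local-plugins/ansible-backstage-plugin.tgz  # hypothetical
+        disabled: true
+----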
+ +// rhaap-dynamic-plugins-config configMap +include::devtools/proc-rhdh-uninstall-ocp-operator-plugins-cm.adoc[leveloffset=+1] + +// app-config-rhdh ConfigMap +include::devtools/proc-rhdh-uninstall-ocp-operator-rhdh-cm.adoc[leveloffset=+1] + +// Remove Custom resource ConfigMap from the {RHDHShort} Operator Custom Resource +include::devtools/proc-rhdh-uninstall-ocp-operator-sidecar-container.adoc[leveloffset=+1] + +ifdef::parent-context-of-assembly-rhdh-uninstall-ocp-operator[:context: {parent-context-of-assembly-rhdh-uninstall-ocp-operator}] +ifndef::parent-context-of-assembly-rhdh-uninstall-ocp-operator[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-rhdh-upgrade-ocp-helm.adoc b/downstream/assemblies/devtools/assembly-rhdh-upgrade-ocp-helm.adoc new file mode 100644 index 0000000000..b8cbf17419 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-rhdh-upgrade-ocp-helm.adoc @@ -0,0 +1,20 @@ +ifdef::context[:parent-context-of-assembly-rhdh-upgrade-ocp-helm: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="rhdh-upgrade-ocp-helm_{context}"] + += Upgrading the Ansible plug-ins on a Helm installation on {OCPShort} + +:context: rhdh-upgrade-ocp-helm + +[role="_abstract"] +To upgrade the Ansible plug-ins, you must update the `plugin-registry` application with the latest Ansible plug-in files. + +include::devtools/proc-rhdh-download-plugins.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-update-plugin-registry.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-update-plugins-helm-version-numbers.adoc[leveloffset=+1] + +ifdef::parent-context-of-assembly-rhdh-upgrade-ocp-helm[:context: {parent-context-of-assembly-rhdh-upgrade-ocp-helm}] +ifndef::parent-context-of-assembly-rhdh-upgrade-ocp-helm[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-rhdh-upgrade-ocp-operator.adoc b/downstream/assemblies/devtools/assembly-rhdh-upgrade-ocp-operator.adoc new file mode 100644 index 0000000000..9ef1c4c28d --- /dev/null +++ b/downstream/assemblies/devtools/assembly-rhdh-upgrade-ocp-operator.adoc @@ -0,0 +1,20 @@ +ifdef::context[:parent-context-of-assembly-rhdh-upgrade-ocp-operator: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="rhdh-upgrade-ocp-operator_{context}"] + += Upgrading the Ansible plug-ins on an Operator installation on {OCPShort} + +:context: rhdh-upgrade-ocp-operator + +[role="_abstract"] +To upgrade the Ansible plug-ins, you must update the `plugin-registry` application with the latest Ansible plug-in files.
+ +include::devtools/proc-rhdh-download-plugins.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-update-plugin-registry.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-update-plugins-operator-version-numbers.adoc[leveloffset=+1] + +ifdef::parent-context-of-assembly-rhdh-upgrade-ocp-operator[:context: {parent-context-of-assembly-rhdh-upgrade-ocp-operator}] +ifndef::parent-context-of-assembly-rhdh-upgrade-ocp-operator[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-rhdh-upgrading-uninstalling.adoc b/downstream/assemblies/devtools/assembly-rhdh-upgrading-uninstalling.adoc deleted file mode 100644 index 8161fab3e2..0000000000 --- a/downstream/assemblies/devtools/assembly-rhdh-upgrading-uninstalling.adoc +++ /dev/null @@ -1,15 +0,0 @@ -ifdef::context[:parent-context: {context}] -[id="rhdh-upgrade_{context}"] - -= Upgrading and uninstalling {AAPRHDH} - -:context: rhdh-upgrade -[role="_abstract"] - -{AAPRHDH} Upgrading and uninstalling placeholder - -//include::devtools/ref-devtools-components.adoc[leveloffset=+1] - -ifdef::parent-context[:context: {parent-context}] -ifndef::parent-context[:!context:] - diff --git a/downstream/assemblies/devtools/assembly-rhdh-using.adoc b/downstream/assemblies/devtools/assembly-rhdh-using.adoc index 97f4039a2e..d148199a32 100644 --- a/downstream/assemblies/devtools/assembly-rhdh-using.adoc +++ b/downstream/assemblies/devtools/assembly-rhdh-using.adoc @@ -1,14 +1,41 @@ ifdef::context[:parent-context: {context}] +:_mod-docs-content-type: ASSEMBLY [id="rhdh-using_{context}"] -= Using {AAPRHDH} += Using the Ansible plug-ins :context: rhdh-using [role="_abstract"] +You can use {AAPRHDH} (RHDH) to learn about Ansible, create automation projects, and access opinionated workflows and tools to develop and test your automation code. +From the {RHDH} UI, you can navigate to your {PlatformNameShort} instance, where you can configure and run automation jobs. -Using the plug-in - placeholder +This document describes how to use the {AAPRHDH}. +It presents a worked example of developing a playbook project for automating updates to your firewall configuration on RHEL systems. -//include::devtools/ref-devtools-components.adoc[leveloffset=+1] +== Optional requirement + +The {AAPRHDH} link to Learning Paths on the Red{nbsp}Hat developer portal, +link:https://developers.redhat.com/learn[developers.redhat.com/learn]. + +To access the Learning Paths, you must have a Red{nbsp}Hat account and you must be able to log in to link:https://developers.redhat.com[developers.redhat.com]. 
+ +include::devtools/ref-rhdh-dashboard.adoc[leveloffset=+1] + +include::devtools/ref-rhdh-learning.adoc[leveloffset=+1] + +include::devtools/ref-rhdh-discover-collections.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-create.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-view.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-develop-projects.adoc[leveloffset=+1] + +include::devtools/proc-rhdh-develop-projects-devspaces.adoc[leveloffset=+2] + +include::devtools/proc-rhdh-execute-automation-devspaces.adoc[leveloffset=+2] + +include::devtools/proc-rhdh-set-up-controller-project.adoc[leveloffset=+1] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/devtools/assembly-self-service-about.adoc b/downstream/assemblies/devtools/assembly-self-service-about.adoc new file mode 100644 index 0000000000..7e388f7a88 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-about.adoc @@ -0,0 +1,32 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-05 + +ifdef::context[:parent-context-of-self-service-about: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="self-service-about"] +endif::[] +ifdef::context[] +[id="self-service-about_{context}"] +endif::[] + += About {SelfService} + +:context: self-service-about + +{SelfService} connects with {PlatformName} using an OAuth application for authentication. +For the {SelfServiceShort} release, the following restrictions apply: + +* You can only use one {PlatformNameShort} instance. +* You can only use one {PlatformNameShort} organization. + +== Supported platforms + +{SelfServiceShortStart} supports installation using a Helm chart on {OCPShort}, and supports {PlatformNameShort} version 2.5. + +ifdef::parent-context-of-self-service-about[:context: {parent-context-of-self-service-about}] +ifndef::parent-context-of-self-service-about[:!context:]
+ + diff --git a/downstream/assemblies/devtools/assembly-self-service-accessing-deployment.adoc b/downstream/assemblies/devtools/assembly-self-service-accessing-deployment.adoc new file mode 100644 index 0000000000..bc92df25fe --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-accessing-deployment.adoc @@ -0,0 +1,27 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-05 + +ifdef::context[:parent-context-of-self-service-accessing-deployment: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="self-service-accessing-deployment"] +endif::[] +ifdef::context[] +[id="self-service-accessing-deployment_{context}"] +endif::[] + += Accessing the {SelfServiceShort} deployment + +:context: self-service-accessing-deployment + +include::devtools/proc-self-service-add-deployment-url-oauth-app.adoc[leveloffset=+1] + +include::devtools/proc-self-service-sign-in.adoc[leveloffset=+1] + +include::devtools/proc-self-service-sync-frequency.adoc[leveloffset=+1] + +ifdef::parent-context-of-self-service-accessing-deployment[:context: {parent-context-of-self-service-accessing-deployment}] +ifndef::parent-context-of-self-service-accessing-deployment[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-create-ocp-registry.adoc b/downstream/assemblies/devtools/assembly-self-service-create-ocp-registry.adoc new file mode 100644 index 0000000000..2f2339cf09 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-create-ocp-registry.adoc @@ -0,0 +1,25 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-05 + +ifdef::context[:parent-context-of-self-service-create-ocp-registry: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="self-service-create-ocp-registry"] +endif::[] +ifdef::context[] +[id="self-service-create-ocp-registry_{context}"] +endif::[] + += Creating a plug-in registry in OpenShift + +:context: self-service-create-ocp-registry + +include::devtools/proc-self-service-download-tar.adoc[leveloffset=+1] + +include::devtools/proc-self-service-setup-registry-image.adoc[leveloffset=+1] + +ifdef::parent-context-of-self-service-create-ocp-registry[:context: {parent-context-of-self-service-create-ocp-registry}] +ifndef::parent-context-of-self-service-create-ocp-registry[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-create-ocp-secrets.adoc b/downstream/assemblies/devtools/assembly-self-service-create-ocp-secrets.adoc new file mode 100644 index 0000000000..07aa913d58 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-create-ocp-secrets.adoc @@ -0,0 +1,29 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-05 + +ifdef::context[:parent-context-of-self-service-create-ocp-secrets: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="self-service-create-ocp-secrets"] +endif::[] +ifdef::context[] +[id="self-service-create-ocp-secrets_{context}"] +endif::[] + += Creating secrets in OpenShift for your environment variables + +:context: self-service-create-ocp-secrets + +Before installing the chart, you must create a set of secrets in your OpenShift project. +The {SelfServiceShort} Helm chart fetches environment variables from OpenShift secrets. 
+ +include::devtools/proc-self-service-create-ocp-auth-secrets.adoc[leveloffset=+1] + +include::devtools/proc-self-service-create-scm-secrets.adoc[leveloffset=+1] + + +ifdef::parent-context-of-self-service-create-ocp-secrets[:context: {parent-context-of-self-service-create-ocp-secrets}] +ifndef::parent-context-of-self-service-create-ocp-secrets[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-deregister-templates.adoc b/downstream/assemblies/devtools/assembly-self-service-deregister-templates.adoc new file mode 100644 index 0000000000..4789f2a2bc --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-deregister-templates.adoc @@ -0,0 +1,15 @@ +ifdef::context[:parent-context: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="self-service-deregister-templates_{context}"] + += Deregistering templates + +:context: self-service-deregister-templates + +include::devtools/proc-self-service-deregister-dynamic-templates.adoc[leveloffset=+1] + +include::devtools/proc-self-service-deregister-preinstalled-templates.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-feedback.adoc b/downstream/assemblies/devtools/assembly-self-service-feedback.adoc new file mode 100644 index 0000000000..3054c93428 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-feedback.adoc @@ -0,0 +1,25 @@ +ifdef::context[:parent-context: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="self-service-feedback_{context}"] + += Providing feedback in {SelfServiceShort} + +:context: self-service-feedback +[role="_abstract"] +{SelfServiceShortStart} provides a feedback form where you can suggest new features and content, and provide general feedback. + +. Click *Feedback* in the {SelfServiceShort} console to display the feedback form. ++ +image::rhdh-feedback-form.png[{SelfServiceShortStart} feedback form] +. Enter the feedback you want to provide. +. Tick the *I understand that feedback is shared with Red Hat* checkbox. +. Click *Submit*. + +[NOTE] +==== +To ensure that Red Hat receives your feedback, exclude your {SelfServiceShort} URL from any browser ad blockers or privacy tools.
+==== + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-generate-scm-tokens.adoc b/downstream/assemblies/devtools/assembly-self-service-generate-scm-tokens.adoc new file mode 100644 index 0000000000..26eb4a77df --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-generate-scm-tokens.adoc @@ -0,0 +1,25 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-05 + +ifdef::context[:parent-context-of-self-service-generate-scm-tokens: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="self-service-generate-scm-tokens"] +endif::[] +ifdef::context[] +[id="self-service-generate-scm-tokens_{context}"] +endif::[] + += Generating GitHub and GitLab personal access tokens + +:context: self-service-generate-scm-tokens + +include::devtools/proc-self-service-create-gh-pat.adoc[leveloffset=+1] + +include::devtools/proc-self-service-create-gl-pat.adoc[leveloffset=+1] + +ifdef::parent-context-of-self-service-generate-scm-tokens[:context: {parent-context-of-self-service-generate-scm-tokens}] +ifndef::parent-context-of-self-service-generate-scm-tokens[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-helm-install.adoc b/downstream/assemblies/devtools/assembly-self-service-helm-install.adoc new file mode 100644 index 0000000000..757e3b57a6 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-helm-install.adoc @@ -0,0 +1,27 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-05 + +ifdef::context[:parent-context-of-self-service-helm-install: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="self-service-helm-install"] +endif::[] +ifdef::context[] +[id="self-service-helm-install_{context}"] +endif::[] + += Installing the {SelfServiceShort} Helm chart + +:context: self-service-helm-install + +include::devtools/proc-self-service-install-helm-from-catalog.adoc[leveloffset=+1] + +include::devtools/proc-self-service-install-verify.adoc[leveloffset=+1] + +// include::devtools/zzz[leveloffset=+1] + +ifdef::parent-context-of-self-service-helm-install[:context: {parent-context-of-self-service-helm-install}] +ifndef::parent-context-of-self-service-helm-install[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-installation-overview.adoc b/downstream/assemblies/devtools/assembly-self-service-installation-overview.adoc new file mode 100644 index 0000000000..6f3502dbfe --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-installation-overview.adoc @@ -0,0 +1,30 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-05 + +ifdef::context[:parent-context-of-self-service-installation-overview: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="self-service-installation-overview"] +endif::[] +ifdef::context[] +[id="self-service-installation-overview_{context}"] +endif::[] + += Installation overview + +:context: self-service-installation-overview + +You can deploy {SelfServiceShort} from a Helm chart on {OCPShort}. + +Helm is a tool that simplifies deployment of applications on {OCP} clusters. +Helm uses a packaging format called Helm charts. +A Helm chart is a package of files that define how an application is deployed and managed on OpenShift. +The Helm chart for {SelfServiceShort} is available in the OpenShift Helm catalog.
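+
+To illustrate the packaging format, the following is a minimal, generic `Chart.yaml`, the metadata file at the root of any Helm chart; it is not the {SelfServiceShort} chart itself, which you install from the OpenShift Helm catalog.
+
+[source,yaml]
+----
+# Generic Chart.yaml: the metadata file every Helm chart carries.
+# The name and versions are illustrative only.
+apiVersion: v2
+name: example-app
+description: A chart that packages the manifests for an application
+type: application
+version: 0.1.0
+appVersion: "1.0.0"
+----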
+ +include::devtools/con-installation-prereqs.adoc[leveloffset=+1] + +ifdef::parent-context-of-self-service-installation-overview[:context: {parent-context-of-self-service-installation-overview}] +ifndef::parent-context-of-self-service-installation-overview[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-ocp-project.adoc b/downstream/assemblies/devtools/assembly-self-service-ocp-project.adoc new file mode 100644 index 0000000000..52fbbedb7f --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-ocp-project.adoc @@ -0,0 +1,32 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-05 + +ifdef::context[:parent-context-of-self-service-new-ocp-project: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="self-service-ocp-project"] +endif::[] +ifdef::context[] +[id="self-service-new-ocp-project_{context}"] +endif::[] + += Setting up a project for {SelfServiceShort} in {OCPShort} + +:context: self-service-new-ocp-project + + +You must set up a project in {OCPShort} for {SelfServiceShort}. +You can create the project from a terminal using the `oc` command. +Alternatively, you can create the project in the {OCPShort} console. + +For more about {OCPShort} projects, see the _link:{BaseURL}/openshift_container_platform/{OCPLatest}/html/building_applications/projects#working-with-projects[Building applications]_ guide in the {OCPShort} documentation. + +include::devtools/proc-self-service-ocp-project-setup.adoc[leveloffset=+1] + +include::devtools/proc-self-service-ocp-project-setup-ui.adoc[leveloffset=+1] + +ifdef::parent-context-of-self-service-new-ocp-project[:context: {parent-context-of-self-service-new-ocp-project}] +ifndef::parent-context-of-self-service-new-ocp-project[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-preinstall-config.adoc b/downstream/assemblies/devtools/assembly-self-service-preinstall-config.adoc new file mode 100644 index 0000000000..a559740998 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-preinstall-config.adoc @@ -0,0 +1,35 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-05 + +ifdef::context[:parent-context-of-self-service-preinstall-config: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="self-service-preinstall-config"] +endif::[] +ifdef::context[] +[id="self-service-preinstall-config_{context}"] +endif::[] + += Pre-installation configuration + +:context: self-service-preinstall-config + +include::devtools/proc-self-service-create-oauth-app.adoc[leveloffset=+1] + +include::devtools/proc-self-service-generate-oauth-token.adoc[leveloffset=+1] + +include::assembly-self-service-generate-scm-tokens.adoc[leveloffset=+1] + +include::assembly-self-service-ocp-project.adoc[leveloffset=+1] + +include::assembly-self-service-create-ocp-registry.adoc[leveloffset=+1] + +include::assembly-self-service-create-ocp-secrets.adoc[leveloffset=+1] + +// include::devtools/zzz[leveloffset=+1] + +ifdef::parent-context-of-self-service-preinstall-config[:context: {parent-context-of-self-service-preinstall-config}] +ifndef::parent-context-of-self-service-preinstall-config[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-rbac.adoc b/downstream/assemblies/devtools/assembly-self-service-rbac.adoc new file mode 100644 index 0000000000..c6ff65c443 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-rbac.adoc @@ -0,0 +1,15 @@ +ifdef::context[:parent-context: {context}] 
+:_mod-docs-content-type: ASSEMBLY +[id="self-service-rbac_{context}"] + += Working with RBAC + +:context: self-service-rbac + +include::devtools/proc-self-service-set-up-rbac.adoc[leveloffset=+1] + +include::devtools/proc-self-service-verify-rbac.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-scm-credentials-private-repos.adoc b/downstream/assemblies/devtools/assembly-self-service-scm-credentials-private-repos.adoc new file mode 100644 index 0000000000..ddf6643efa --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-scm-credentials-private-repos.adoc @@ -0,0 +1,35 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-05 + +ifdef::context[:parent-context-of-self-service-using-scm-credentials-private-repos: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="self-service-using-scm-credentials-private-repos"] +endif::[] +ifdef::context[] +[id="self-service-using-scm-credentials-private-repos_{context}"] +endif::[] + += Configuring source control credentials for private repositories + +:context: self-service-using-scm-credentials-private-repos + +To work with private repositories, you must add your GitHub or GitLab personal access token to {PlatformNameShort} as a source control credential. + +[NOTE] +==== +Ensure that the {PlatformNameShort} users and teams assigned to the {PlatformNameShort} objects, +such as source control credentials, +are part of the {PlatformNameShort} organization that is configured to sync with {SelfServiceShort}. +See _link:{LinkSelfServiceInstall}_ for more information. +==== + +include::devtools/proc-self-service-add-scm-credentials-aap.adoc[leveloffset=+1] + +include::devtools/proc-self-service-share-credentials-aap.adoc[leveloffset=+1] + +ifdef::parent-context-of-self-service-using-scm-credentials-private-repos[:context: {parent-context-of-self-service-using-scm-credentials-private-repos}] +ifndef::parent-context-of-self-service-using-scm-credentials-private-repos[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-telemetry-capture.adoc b/downstream/assemblies/devtools/assembly-self-service-telemetry-capture.adoc new file mode 100644 index 0000000000..24451c290f --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-telemetry-capture.adoc @@ -0,0 +1,27 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-05 + +ifdef::context[:parent-context-of-self-service-telemetry: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="self-service-telemetry"] +endif::[] +ifdef::context[] +[id="self-service-telemetry_{context}"] +endif::[] + += Telemetry capturing + +:context: self-service-telemetry + +The telemetry data collection feature collects and analyzes data to improve your experience with {SelfService}. This feature is enabled by default.
+
+include::devtools/con-self-service-telemetry-data.adoc[leveloffset=+1]
+
+include::devtools/proc-self-service-telemetry-disable.adoc[leveloffset=+1]
+
+ifdef::parent-context-of-self-service-telemetry[:context: {parent-context-of-self-service-telemetry}]
+ifndef::parent-context-of-self-service-telemetry[:!context:]
+
diff --git a/downstream/assemblies/devtools/assembly-self-service-using-overview.adoc b/downstream/assemblies/devtools/assembly-self-service-using-overview.adoc
new file mode 100644
index 0000000000..e23c6ee453
--- /dev/null
+++ b/downstream/assemblies/devtools/assembly-self-service-using-overview.adoc
@@ -0,0 +1,37 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+
+ifdef::context[:parent-context-of-self-service-using-overview-adoc: {context}]
+
+:_mod-docs-content-type: ASSEMBLY
+
+ifndef::context[]
+[id="self-service-using-overview-adoc"]
+endif::[]
+ifdef::context[]
+[id="self-service-using-overview-adoc_{context}"]
+endif::[]
+
+= Overview
+
+:context: self-service-using-overview
+
+To populate {SelfServiceShort} with templates,
+you must create repositories in GitHub or GitLab for collections that define the templates.
+
+Currently, Red Hat provides validated content with examples to automate jobs for RHEL, Network, Windows, and Cloud using {SelfServiceShort}.
+
+[NOTE]
+====
+The validated content examples provided have their own support statement and lifecycle definitions.
+See the link:https://access.redhat.com/support/policy/updates/ansible-automation-platform#validated[Red Hat Ansible Automation Platform Life Cycle].
+====
+
+These collections are hosted in {HubName}.
+Your administrator must add the collections to {PrivateHubName} so that you can download them and create the repositories.
+
+You can then import templates into your {SelfServiceShort}, and launch them to run automation jobs.
+
+ifdef::parent-context-of-self-service-using-overview-adoc[:context: {parent-context-of-self-service-using-overview-adoc}]
+ifndef::parent-context-of-self-service-using-overview-adoc[:!context:]
+
diff --git a/downstream/assemblies/devtools/assembly-self-service-using-prereqs.adoc b/downstream/assemblies/devtools/assembly-self-service-using-prereqs.adoc
new file mode 100644
index 0000000000..73308e1053
--- /dev/null
+++ b/downstream/assemblies/devtools/assembly-self-service-using-prereqs.adoc
@@ -0,0 +1,25 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+
+ifdef::context[:parent-context-of-self-service-using-prereqs-adoc: {context}]
+
+:_mod-docs-content-type: ASSEMBLY
+
+ifndef::context[]
+[id="self-service-using-prereqs-adoc"]
+endif::[]
+ifdef::context[]
+[id="self-service-using-prereqs-adoc_{context}"]
+endif::[]
+
+= Prerequisites
+
+:context: self-service-using-prereqs
+
+* Your {PlatformNameShort} administrator has populated {PrivateHubName} with collections.
+* You have permissions to access collections in {PrivateHubName}.
+* You have installed and configured {SelfServiceShort}.
+ +ifdef::parent-context-of-self-service-using-prereqs-adoc[:context: {parent-context-of-self-service-using-prereqs-adoc}] +ifndef::parent-context-of-self-service-using-prereqs-adoc[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-using-repo-setup.adoc b/downstream/assemblies/devtools/assembly-self-service-using-repo-setup.adoc new file mode 100644 index 0000000000..f85340dd28 --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-using-repo-setup.adoc @@ -0,0 +1,27 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-05 + +ifdef::context[:parent-context-of-self-service-using-repo-setup: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="self-service-using-repo-setup"] +endif::[] +ifdef::context[] +[id="self-service-using-repo-setup_{context}"] +endif::[] + += Setting up repositories for collections + +:context: self-service-using-repo-setup + +include::devtools/proc-self-service-export-collection-pah.adoc[leveloffset=+1] + +include::devtools/proc-self-service-create-collection-repo.adoc[leveloffset=+1] + +include::devtools/proc-self-service-create-pattern-loader-repo.adoc[leveloffset=+1] + +ifdef::parent-context-of-self-service-using-repo-setup[:context: {parent-context-of-self-service-using-repo-setup}] +ifndef::parent-context-of-self-service-using-repo-setup[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-view-deployment.adoc b/downstream/assemblies/devtools/assembly-self-service-view-deployment.adoc new file mode 100644 index 0000000000..489b0419ae --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-view-deployment.adoc @@ -0,0 +1,29 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-05 + +ifdef::context[:parent-context-of-self-service-view-deployment: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="self-service-view-deployment"] +endif::[] +ifdef::context[] +[id="self-service-view-deployment_{context}"] +endif::[] + += Inspecting the deployment on OpenShift + +:context: self-service-view-deployment + +You can inspect the deployment logs and ConfigMap on the OpenShift UI to verify that the deployment conforms with the settings in your Helm chart. 
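+If you prefer the command line, you can retrieve the same information with the `oc` CLI. This is a minimal sketch; the project, deployment, and ConfigMap names are placeholders that depend on your installation.
+
+----
+# View the logs of the self-service deployment
+oc logs deployment/<self-service-deployment> -n <your-project>
+
+# Display the ConfigMap rendered from your Helm chart values
+oc get configmap <self-service-configmap> -n <your-project> -o yaml
+----
+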
+ +include::devtools/proc-self-service-view-deployment-logs.adoc[leveloffset=+1] + +include::devtools/proc-self-service-view-configmap.adoc[leveloffset=+1] + +// include::devtools/zzz[leveloffset=+1] + +ifdef::parent-context-of-self-service-view-deployment[:context: {parent-context-of-self-service-view-deployment}] +ifndef::parent-context-of-self-service-view-deployment[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-self-service-working-templates.adoc b/downstream/assemblies/devtools/assembly-self-service-working-templates.adoc new file mode 100644 index 0000000000..f53ec823da --- /dev/null +++ b/downstream/assemblies/devtools/assembly-self-service-working-templates.adoc @@ -0,0 +1,25 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2025-05-05 + +ifdef::context[:parent-context-of-self-service-working-templates: {context}] + +:_mod-docs-content-type: ASSEMBLY + +ifndef::context[] +[id="self-service-working-templates"] +endif::[] +ifdef::context[] +[id="self-service-working-templates_{context}"] +endif::[] + += Working with templates + +:context: self-service-working-templates + +include::devtools/proc-self-service-add-template.adoc[leveloffset=+1] + +include::devtools/proc-self-service-launch-template.adoc[leveloffset=+1] + +ifdef::parent-context-of-self-service-working-templates[:context: {parent-context-of-self-service-working-templates}] +ifndef::parent-context-of-self-service-working-templates[:!context:] + diff --git a/downstream/assemblies/devtools/assembly-writing-running-playbook.adoc b/downstream/assemblies/devtools/assembly-writing-running-playbook.adoc index 6a4faa71aa..914ad53a7e 100644 --- a/downstream/assemblies/devtools/assembly-writing-running-playbook.adoc +++ b/downstream/assemblies/devtools/assembly-writing-running-playbook.adoc @@ -1,15 +1,31 @@ -ifdef::context[:parent-context: {context}] -[id="writing-running-playbook"] +ifdef::context[:parent-context_of_writing-running-playbook: {context}] +:_mod-docs-content-type: ASSEMBLY +[id="writing-running-playbook_{context}"] = Writing and running a playbook with {ToolsName} :context: writing-running-playbook [role="_abstract"] -include::devtools/proc-writing-playbook.adoc[leveloffset=+1] +include::devtools/proc-devtools-set-up-ansible-config.adoc[leveloffset=+1] + +include::devtools/proc-devtools-writing-first-playbook.adoc[leveloffset=+1] + +include::devtools/proc-devtools-inspect-playbook.adoc[leveloffset=+1] + include::devtools/proc-debugging-playbook.adoc[leveloffset=+1] -include::devtools/proc-running-playbook.adoc[leveloffset=+1] -ifdef::parent-context[:context: {parent-context}] -ifndef::parent-context[:!context:] +include::devtools/proc-devtools-run-playbook-extension.adoc[leveloffset=+1] + +include::devtools/proc-devtools-extension-run-ansible-playbook.adoc[leveloffset=+2] + +include::devtools/proc-devtools-extension-run-ansible-navigator.adoc[leveloffset=+2] + +include::devtools/proc-devtools-working-with-ee.adoc[leveloffset=+2] + +include::devtools/proc-devtools-testing-playbook.adoc[leveloffset=+1] + + +ifdef::parent-context_of_writing-running-playbook[:context: {parent-context_of_writing-running-playbook}] +ifndef::parent-context_of_writing-running-playbook[:!context:] diff --git a/downstream/assemblies/eda/assembly-eda-credential-types.adoc b/downstream/assemblies/eda/assembly-eda-credential-types.adoc new file mode 100644 index 0000000000..497ef67e28 --- /dev/null +++ b/downstream/assemblies/eda/assembly-eda-credential-types.adoc @@ -0,0 +1,21 @@ +:_mod-docs-content-type: 
+[id="eda-credential-types"] + += Credential types + +{EDAcontroller} comes with several built-in credental types that you can use for syncing projects, running rulebook activations, executing job templates through {MenuTopAE} ({ControllerName}), fetching images from container registries, and processing data through event streams. + +These built-in credential types are not editable. So if you want credential types that support authentication with other systems, you can create your own credential types that can be used in your source plugins. Each credential type contains an input configuration and an injector configuration that can be passed to an Ansible rulebook to configure your sources. + +For more information, see xref:eda-custom-credential-types[Custom credential types]. +//[J. Self] Will add the cross-reference/link later. + + +include::eda/con-custom-credential-types.adoc[leveloffset=+1] + +include::eda/con-credential-types-input-config.adoc[leveloffset=+2] + +include::eda/con-credential-types-injector-config.adoc[leveloffset=+2] + +include::eda/proc-eda-set-up-credential-types.adoc[leveloffset=+1] + diff --git a/downstream/assemblies/eda/assembly-eda-credentials.adoc b/downstream/assemblies/eda/assembly-eda-credentials.adoc index 244c5f8baa..7d469c6b8c 100644 --- a/downstream/assemblies/eda/assembly-eda-credentials.adoc +++ b/downstream/assemblies/eda/assembly-eda-credentials.adoc @@ -1,10 +1,25 @@ +:_mod-docs-content-type: ASSEMBLY [id="eda-credentials"] -= Setting up credentials for {EDAcontroller} += Credentials -Credentials are used by {EDAName} for authentication when launching rulebooks. +You can use credentials to store secrets that can be used for authentication purposes with resources, such as decision environments, rulebook activations and projects for {EDAcontroller}, and projects for {ControllerName}. + +Credentials authenticate users when launching jobs against machines and importing project content from a version control system. + +You can grant users and teams the ability to use these credentials without exposing the credential to the user. If a user moves to a different team or leaves the organization, you do not have to rekey all of your systems just because that credential was previously available. + +[NOTE] +==== +In the context of {ControllerName} and {EDAcontroller}, you can use both `extra_vars` and credentials to store a variety of information. However, credentials are the preferred method of storing sensitive information such as passwords or API keys because they offer better security and centralized management, whereas `extra_vars` are more suitable for passing dynamic, non-sensitive data. +==== -include::eda/proc-eda-set-up-credential.adoc[leveloffset=+1] include::eda/con-credentials-list-view.adoc[leveloffset=+1] + +include::eda/proc-eda-set-up-credential.adoc[leveloffset=+1] + include::eda/proc-eda-edit-credential.adoc[leveloffset=+1] + +include::eda/proc-eda-duplicate-credential.adoc[leveloffset=+1] + include::eda/proc-eda-delete-credential.adoc[leveloffset=+1] diff --git a/downstream/assemblies/eda/assembly-eda-decision-environments.adoc b/downstream/assemblies/eda/assembly-eda-decision-environments.adoc index ad3652facd..f7677b4c24 100644 --- a/downstream/assemblies/eda/assembly-eda-decision-environments.adoc +++ b/downstream/assemblies/eda/assembly-eda-decision-environments.adoc @@ -2,11 +2,12 @@ = Decision environments -Decision environments are a container image to run Ansible rulebooks. 
-They create a common language for communicating automation dependencies, and provide a standard way to build and distribute the automation environment.
-The default decision environment is found in the link:https://quay.io/repository/ansible/ansible-rulebook[Ansible-Rulebook].
+Decision environments are container images that run Ansible rulebooks.
+They create a common language for communicating automation dependencies, and give a standard way to build and distribute the automation environment.
+You can find the default decision environment in the link:https://quay.io/repository/ansible/ansible-rulebook[Ansible-Rulebook].
-To create your own decision environment refer to xref:eda-build-a-custom-decision-environment[Building a custom decision environment for Event-Driven Ansible within Ansible Automation Platform].
+To create your own decision environment, see xref:eda-controller-install-builder[Installing ansible-builder] and xref:eda-build-a-custom-decision-environment[Building a custom decision environment for Event-Driven Ansible within Ansible Automation Platform].
+include::eda/ref-eda-controller-install-builder.adoc[leveloffset=+1]
+include::eda/proc-eda-build-a-custom-decision-environment.adoc[leveloffset=+1]
include::eda/proc-eda-set-up-new-decision-environment.adoc[leveloffset=+1]
-include::eda/proc-eda-build-a-custom-decision-environment.adoc[leveloffset=+1]
\ No newline at end of file
diff --git a/downstream/assemblies/eda/assembly-eda-event-filter-plugins.adoc b/downstream/assemblies/eda/assembly-eda-event-filter-plugins.adoc
new file mode 100644
index 0000000000..9d5852e57c
--- /dev/null
+++ b/downstream/assemblies/eda/assembly-eda-event-filter-plugins.adoc
@@ -0,0 +1,49 @@
+[id="eda-event-filter-plugins"]
+
+= Event filter plugins
+
+Events sometimes have extra data that is unnecessary and might overwhelm the rule engine.
+Use event filters to remove that extra data so you can focus on what matters to your rules.
+Event filters might also change the format of the data so that the rule conditions can better match the data.
+
+Events are defined as Python code and distributed as collections.
+The default link:https://github.com/ansible/event-driven-ansible/tree/main/extensions/eda/plugins/event_filter[eda collection] has the following filters:
+
+[cols="30%,30%",options="header"]
+|====
+| Name | Description
+| json_filter | This filter includes and excludes keys from the event object
+| dashes_to_underscores | This filter changes the dashes in all keys in the payload to underscores
+| ansible.eda.insert_hosts_to_meta | This filter is used to add host information into the event so that ansible-rulebook can locate it and use it
+| ansible.eda.normalize_keys | This filter is used if you want to change non-alphanumeric keys to underscores
+|====
+
+You can chain event filters one after the other, and the updated data is sent from one filter to the next.
+Event filters are defined in the rulebook after a source is defined.
+When the rulebook starts the source plugin, it associates the correct filters and transforms the data before putting it into the queue.
+
+.Example
+
+----
+sources:
+  - name: azure_service_bus
+    ansible.eda.azure_service_bus:
+      conn_str: "{{connection_str}}"
+      queue_name: "{{queue_name}}"
+    filters:
+      - json_filter:
+          include_keys: ['clone_url']
+          exclude_keys: ['*_url', '_links', 'base', 'sender', 'owner', 'user']
+      - dashes_to_underscores:
+----
+
+In this example, the data is first passed through the `json_filter` and then through the `dashes_to_underscores` filter.
+In the event payload, keys can only contain letters, numbers, and underscores.
+The period (.) is used to access nested keys.
+
+Because every event should record the origin of the event, the filter `eda.builtin.insert_meta_info` is added automatically by ansible-rulebook to add the `source name`, `type`, and `received_at`.
+The `received_at` field stores a date and time in UTC ISO8601 format and includes the microseconds.
+The `uuid` field stores the unique ID for the event.
+The `meta` key is used to store metadata about the event, and it is needed to correctly report on the events in the aap-server.
+
+include::eda/con-eda-author-event-filters.adoc[leveloffset=+1]
diff --git a/downstream/assemblies/eda/assembly-eda-logging-strategy.adoc b/downstream/assemblies/eda/assembly-eda-logging-strategy.adoc
new file mode 100644
index 0000000000..26e7bffc77
--- /dev/null
+++ b/downstream/assemblies/eda/assembly-eda-logging-strategy.adoc
@@ -0,0 +1,11 @@
+[id="eda-logging-strategy"]
+
+= {EDAName} logging strategy
+
+{EDAName} offers an audit logging solution for its resources.
+Each supported create, read, update, and delete (CRUD) operation is logged against rulebook activations, event streams, decision environments, and projects.
+Some of these resources support further operations, such as sync, enable, disable, restart, start, and stop; logging is supported for these operations as well.
+These logs are retained only for the lifecycle of their associated container.
+See the following sample logs for each supported logging operation.
+
+include::eda/ref-eda-logging-samples.adoc[leveloffset=+1]
diff --git a/downstream/assemblies/eda/assembly-eda-performance-tuning.adoc b/downstream/assemblies/eda/assembly-eda-performance-tuning.adoc
new file mode 100644
index 0000000000..25d843e7d7
--- /dev/null
+++ b/downstream/assemblies/eda/assembly-eda-performance-tuning.adoc
@@ -0,0 +1,15 @@
+[id="eda-performance-tuning"]
+
+= Performance tuning for {EDAcontroller}
+
+{EDAName} is a highly scalable, flexible automation capability.
+{EDAcontroller} provides the interface through which {EDAName} automation performs.
+Tune your {EDAcontroller} to optimize performance and scalability through:
+
+* Characterizing your workload
+* System-level monitoring
+* Performance troubleshooting
+
+include::eda/con-characterizing-your-workload.adoc[leveloffset=+1]
+include::eda/con-system-level-monitoring.adoc[leveloffset=+1]
+include::eda/ref-performance-troubleshooting.adoc[leveloffset=+1]
diff --git a/downstream/assemblies/eda/assembly-eda-projects.adoc b/downstream/assemblies/eda/assembly-eda-projects.adoc
index 2bdd34c4e4..a258124017 100644
--- a/downstream/assemblies/eda/assembly-eda-projects.adoc
+++ b/downstream/assemblies/eda/assembly-eda-projects.adoc
@@ -6,6 +6,11 @@ Projects are a logical collection of rulebooks.
They must be a git repository and only http protocol is supported.
The rulebooks of a project must be located in the path defined for {EDAName} content in Ansible collections: `/extensions/eda/rulebooks` at the root of the project.
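+For example, a minimal project repository that {EDAName} can sync might be laid out as follows. This is a sketch only; the repository and rulebook names are placeholders.
+
+----
+project-repo/
+└── extensions/
+    └── eda/
+        └── rulebooks/
+            ├── hello-events.yml
+            └── scale-out.yml
+----
+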
+[IMPORTANT]
+====
+To meet high availability demands, {EDAcontroller} shares centralized link:https://redis.io/[Redis (REmote DIctionary Server)] with the {PlatformNameShort} UI. When Redis is unavailable, you cannot create or sync projects.
+====
+
include::eda/proc-eda-set-up-new-project.adoc[leveloffset=+1]
include::eda/con-eda-projects-list-view.adoc[leveloffset=+1]
include::eda/proc-eda-editing-a-project.adoc[leveloffset=+1]
diff --git a/downstream/assemblies/eda/assembly-eda-rulebook-activations.adoc b/downstream/assemblies/eda/assembly-eda-rulebook-activations.adoc
index 9c4e992169..706241d65b 100644
--- a/downstream/assemblies/eda/assembly-eda-rulebook-activations.adoc
+++ b/downstream/assemblies/eda/assembly-eda-rulebook-activations.adoc
@@ -4,13 +4,53 @@
[role="_abstract"]
-A rulebook activation is a process running in the background defined by a decision environment executing a specific rulebook.
+A rulebook is a set of conditional rules that {EDAName} uses to perform IT actions in an event-driven automation model.
+Rulebooks are the means by which users tell {EDAName} which sources to check for events and, when the conditions of a rule are met, what actions to take.
+
+A rulebook specifies actions to be performed when a rule is triggered.
+A rule is triggered when events match its conditions.
+The following actions are currently supported:
+
+* `run_playbook` (only supported with ansible-rulebook CLI)
+* `run_module`
+* `run_job_template`
+* `run_workflow_template`
+* `set_fact`
+* `post_event`
+* `retract_fact`
+* `print_event`
+* `shutdown`
+* `debug`
+* `none`
+
+To view further details, see link:https://ansible.readthedocs.io/projects/rulebook/en/stable/actions.html[Actions].
+
+A rulebook activation is a process running in the background defined by a decision environment executing a specific rulebook. You can set up your rulebook activation by following xref:eda-set-up-rulebook-activation[Setting up a rulebook activation].
+
+[WARNING]
+====
+Red Hat does not recommend using a non-supported source plugin with a single PostgreSQL database.
+This can pose a potential risk to your use of {PlatformNameShort}.
+====
+
+[IMPORTANT]
+====
+To meet high availability demands, {EDAcontroller} shares centralized link:https://redis.io/[Redis (REmote DIctionary Server)] with the {PlatformNameShort} UI. When Redis is unavailable, the following functions are not available:
+
+* Creating an activation, if `is_enabled` is True
+* Deleting an activation
+* Enabling an activation, if not already enabled
+* Disabling an activation, if not already disabled
+* Restarting an activation
+====

include::eda/proc-eda-set-up-rulebook-activation.adoc[leveloffset=+1]
include::eda/con-eda-rulebook-activation-list-view.adoc[leveloffset=+1]
include::eda/proc-eda-view-activation-output.adoc[leveloffset=+2]
include::eda/proc-eda-enable-rulebook-activations.adoc[leveloffset=+1]
include::eda/proc-eda-restart-rulebook-activations.adoc[leveloffset=+1]
+include::eda/proc-eda-edit-rulebook-activation.adoc[leveloffset=+1]
+include::eda/proc-eda-copy-rulebook-activation.adoc[leveloffset=+1]
include::eda/proc-eda-delete-rulebook-activations.adoc[leveloffset=+1]
include::eda/proc-eda-activate-webhook.adoc[leveloffset=+1]
-include::eda/proc-eda-test-with-K8s.adoc[leveloffset=+1]
\ No newline at end of file
+include::eda/proc-eda-test-with-K8s.adoc[leveloffset=+1]
diff --git a/downstream/assemblies/eda/assembly-eda-rulebook-troubleshooting.adoc b/downstream/assemblies/eda/assembly-eda-rulebook-troubleshooting.adoc
new file mode 100644
index 0000000000..3a3c387bdf
--- /dev/null
+++ b/downstream/assemblies/eda/assembly-eda-rulebook-troubleshooting.adoc
@@ -0,0 +1,23 @@
+[id="eda-rulebook-troubleshooting"]
+
+= Rulebook activations troubleshooting
+
+[role="_abstract"]
+
+Occasionally, rulebook activations might fail for a variety of reasons, many of which you can resolve. In many cases, log filtering provides information that can help you determine the cause of the activation failure.
+
+For improved log filtering, there are two different tracking IDs available for troubleshooting after an action is performed (for example, when you initiate a rulebook activation). Both tracking IDs are universally unique identifiers (UUIDs):
+
+* *Log tracking ID* `[tid]` - Created for each activation and persists across all activation instances. It allows users to track the complete history of an activation and its lifecycle. You can retrieve the Log tracking ID from the activation instance logs on the *History* tab.
+
+* *X-request-ID* `[rid]` - A standard HTTP header that is returned to the user as part of the HTTP response. If you want to fetch this ID, you must inspect the HTTP response headers, as shown in the sketch at the end of this introduction. This ID results from actions such as triggering a restart of an activation. It allows tracking of a specific API request from {Gateway} to {EDAcontroller}.
+
+You can use both tracking IDs to locate specific log entries in your backend logs (for example, API or worker logs).
+
+Review the list of possible issues that can cause activation failures and suggestions on how you can resolve them.
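+For example, you can fetch the `X-request-ID` header with `curl` when you trigger an activation restart through the API. This is a sketch only: the gateway hostname, activation ID, and token are placeholders, and the exact endpoint path can vary by environment.
+
+----
+# Trigger a restart and print the response headers;
+# the X-request-ID header carries the [rid] tracking ID
+curl -si -X POST -H "Authorization: Bearer <token>" \
+  https://<gateway-host>/api/eda/v1/activations/<id>/restart/ | grep -i x-request-id
+----
+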
+
+include::eda/proc-eda-activation-stuck-pending.adoc[leveloffset=+1]
+include::eda/proc-eda-activation-keeps-restarting.adoc[leveloffset=+1]
+include::eda/proc-eda-event-streams-not-sending-events.adoc[leveloffset=+1]
+include::eda/proc-eda-cannot-connect-to-controller.adoc[leveloffset=+1]
+
diff --git a/downstream/assemblies/eda/assembly-eda-set-up-rhaap-credential.adoc b/downstream/assemblies/eda/assembly-eda-set-up-rhaap-credential.adoc
new file mode 100644
index 0000000000..756b6c18af
--- /dev/null
+++ b/downstream/assemblies/eda/assembly-eda-set-up-rhaap-credential.adoc
@@ -0,0 +1,15 @@
+[id="eda-set-up-rhaap-credential-type"]
+
+= {PlatformName} credential
+
+When {EDAcontroller} is deployed on {PlatformNameShort} {PlatformVers}, you can create a {PlatformName} credential to connect to {ControllerName} by using an {ControllerName} URL, username, and password. After you create it, you can attach the {PlatformName} credential to a rulebook and use it to run rulebook activations. These credentials provide a simple way to configure communication between {ControllerName} and {EDAcontroller}, enabling your rulebook activations to launch job templates.
+
+[NOTE]
+====
+If you deployed {EDAcontroller} with {PlatformNameShort} 2.4, you probably used controller tokens to connect {ControllerName} and {EDAcontroller}. These controller tokens have been deprecated in {PlatformNameShort} {PlatformVers}. To delete deprecated controller tokens and the rulebook activations associated with them, complete the following procedures starting with xref:replacing-controller-tokens[Replacing controller tokens in {PlatformNameShort} {PlatformVers}] before proceeding with xref:eda-set-up-rhaap-credential[Setting up a {PlatformName} credential].
+====
+
+include::eda/con-replacing-controller-tokens.adoc[leveloffset=+1]
+include::eda/proc-eda-delete-rulebook-activations-with-cont-tokens.adoc[leveloffset=+2]
+include::eda/proc-eda-delete-controller-token.adoc[leveloffset=+2]
+include::eda/proc-eda-set-up-rhaap-credential.adoc[leveloffset=+1]
\ No newline at end of file
diff --git a/downstream/assemblies/eda/assembly-eda-user-guide-overview.adoc b/downstream/assemblies/eda/assembly-eda-user-guide-overview.adoc
index eb204aac93..efac82038d 100644
--- a/downstream/assemblies/eda/assembly-eda-user-guide-overview.adoc
+++ b/downstream/assemblies/eda/assembly-eda-user-guide-overview.adoc
@@ -1,3 +1,4 @@
+:_mod-docs-content-type: ASSEMBLY
[id="eda-user-guide-overview"]
= {EDAcontroller} overview
@@ -7,16 +8,32 @@ These tools monitor IT solutions and identify events and automatically implement
The following procedures form the user configuration:
-* xref:eda-set-up-credential[Setting up credentials]
-* xref:eda-set-up-new-project[Setting up a new project]
-* xref:eda-set-up-new-decision-environment[Setting up a new decision environment]
-* xref:eda-set-up-token[Setting up a token to authenticate to {PlatformNameShort} Controller]
-* xref:eda-set-up-rulebook-activation[Setting up a rulebook activation]
+* xref:eda-credentials[Credentials]
+* xref:eda-credential-types[Credential types]
+* xref:eda-projects[Projects]
+* xref:eda-decision-environments[Decision environments]
+* xref:eda-set-up-rhaap-credential-type[Red Hat Ansible Automation Platform credential]
+* xref:eda-rulebook-activations[Rulebook activations]
+* xref:eda-rulebook-troubleshooting[Rulebook activations troubleshooting]
+* xref:eda-rule-audit[Rule audit]
+* xref:simplified-event-routing[Simplified event routing]
+* xref:eda-performance-tuning[Performance tuning for {EDAcontroller}]
+* xref:eda-event-filter-plugins[Event filter plugins]
+* xref:eda-logging-strategy[Event-Driven Ansible logging strategy]
+
[NOTE]
+====
+* API documentation for {EDAcontroller} is available at \https:///api/eda/v1/docs
+* To meet high availability demands, {EDAcontroller} shares centralized link:https://redis.io/[Redis (REmote DIctionary Server)] with the {PlatformNameShort} UI. When Redis is unavailable, you cannot create or sync projects, or enable rulebook activations.
====
-API documentation for {EDAcontroller} is available at \https:///api/eda/v1/docs
+[role="_additional-resources"]
+.Additional resources
+* For information on how to set user permissions for {EDAcontroller}, see the following in the link:{URLCentralAuth}/index[Access management and authentication guide]:
-====
+. link:{URLCentralAuth}/gw-managing-access#ref-controller-user-roles[Adding roles for a user]
+. link:{URLCentralAuth}/assembly-gw-roles[Roles]
+
+* If you plan to use {EDAName} 2.5 with a 2.4 {PlatformNameShort}, see link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/using_event-driven_ansible_2.5_with_ansible_automation_platform_2.4/index[Using Event-Driven Ansible 2.5 with Ansible Automation Platform 2.4].
diff --git a/downstream/assemblies/eda/assembly-simplified-event-routing.adoc b/downstream/assemblies/eda/assembly-simplified-event-routing.adoc
new file mode 100644
index 0000000000..f3282408f1
--- /dev/null
+++ b/downstream/assemblies/eda/assembly-simplified-event-routing.adoc
@@ -0,0 +1,23 @@
+
+[id="simplified-event-routing"]
+
+= Simplified event routing
+
+Simplified event routing enables {EDAcontroller} to capture and analyze data from various remote systems using event streams. With event streams, you can send events from a remote system like GitHub or GitLab into {EDAcontroller}. You can attach one or more event streams to an activation by swapping out sources in a rulebook.
+
+Event streams are an easy way to connect your sources to your rulebooks. This capability lets you create a single endpoint to receive alerts from an event source and then use the events in multiple rulebooks.
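+For example, a rulebook source such as the following webhook listener is a candidate for replacement by an event stream when you create an activation. This is a sketch only; the source name and port are placeholders.
+
+----
+sources:
+  - name: incoming_alerts
+    ansible.eda.webhook:
+      host: 0.0.0.0
+      port: 5000
+----
+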
+ +include::eda/con-event-streams.adoc[leveloffset=+1] +include::eda/proc-eda-create-event-stream-credential.adoc[leveloffset=+1] +include::eda/proc-eda-create-event-stream.adoc[leveloffset=+1] +include::eda/proc-eda-config-remote-sys-to-events.adoc[leveloffset=+1] +include::eda/proc-eda-verify-event-streams-work.adoc[leveloffset=+1] +include::eda/proc-eda-replace-sources-with-event-streams.adoc[leveloffset=+1] +include::eda/proc-eda-resend-webhook-data-event-streams.adoc[leveloffset=+1] +include::eda/proc-eda-check-rule-audit-event-stream.adoc[leveloffset=+1] + + + + + + diff --git a/downstream/assemblies/hub/assembly-collection-import-export.adoc b/downstream/assemblies/hub/assembly-collection-import-export.adoc index 6c108682e8..653e01343e 100644 --- a/downstream/assemblies/hub/assembly-collection-import-export.adoc +++ b/downstream/assemblies/hub/assembly-collection-import-export.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] [id="export-import-collections"] diff --git a/downstream/assemblies/hub/assembly-collections-and-content-signing-in-pah.adoc b/downstream/assemblies/hub/assembly-collections-and-content-signing-in-pah.adoc index 8b39a0f944..45ee12b969 100644 --- a/downstream/assemblies/hub/assembly-collections-and-content-signing-in-pah.adoc +++ b/downstream/assemblies/hub/assembly-collections-and-content-signing-in-pah.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: ASSEMBLY [id="assembly-collections-and-content-signing-in-pah"] = Collections and content signing in {PrivateHubName} diff --git a/downstream/assemblies/hub/assembly-container-user-access.adoc b/downstream/assemblies/hub/assembly-container-user-access.adoc index fe4cb7fe88..55affbdeb1 100644 --- a/downstream/assemblies/hub/assembly-container-user-access.adoc +++ b/downstream/assemblies/hub/assembly-container-user-access.adoc @@ -1,4 +1,4 @@ - +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] @@ -9,17 +9,17 @@ ifdef::context[:parent-context: {context}] :context: configuring-user-access-containers [role="_abstract"] -To determine who can access and manage images in your {PlatformNameShort}, you must configure user access for container repositories in your {PrivateHubName}. +To determine who can access and manage {ExecEnvShort}s in your {PlatformNameShort}, you must configure user access for container repositories in your {PrivateHubName}. include::hub/ref-container-permissions.adoc[leveloffset=+1] include::hub/proc-create-groups.adoc[leveloffset=+1] -include::hub/proc-assigning-permissions.adoc[leveloffset=+1] +// [hherbly]: proc-assigning-permissions seems to repeat proc-create-groups.adoc include::hub/proc-assigning-permissions.adoc[leveloffset=+1] -.Additional resources +// .Additional resources -* See <> to learn more about specific permissions. +// [hherbly] LINK SHOULD BE REPLACED when we find a better one * See <> to learn more about specific permissions. 
-include::hub/proc-add-user-to-group.adoc[leveloffset=+1]
+// [hherbly]: this module also seems redundant include::hub/proc-add-user-to-group.adoc[leveloffset=+1]

ifdef::parent-context[:context: {parent-context}]
diff --git a/downstream/assemblies/hub/assembly-delete-container.adoc b/downstream/assemblies/hub/assembly-delete-container.adoc
index 125111bd39..96f411ad59 100644
--- a/downstream/assemblies/hub/assembly-delete-container.adoc
+++ b/downstream/assemblies/hub/assembly-delete-container.adoc
@@ -1,3 +1,4 @@
+:_mod-docs-content-type: ASSEMBLY
ifdef::context[:parent-context: {context}]
[id="delete-container"]
@@ -6,21 +7,21 @@ ifdef::context[:parent-context: {context}]
:context: delete-container
[role="_abstract"]
-Delete a container repository from your {PrivateHubName} to manage your disk space.
-You can delete repositories from the {PlatformName} interface in the *Container Repository* list view.
+Delete a remote repository from your {PlatformNameShort} to manage your disk space.
+You can delete repositories from the {PlatformName} interface in the *Execution Environment* list view.
.Prerequisites
* You have permissions to manage repositories.
.Procedure
-. Navigate to {HubName}.
+. Log in to {PlatformNameShort}.
. From the navigation panel, select {MenuACExecEnvironments}.
-. On the container repository that you want to delete, click the btn:[More Actions] icon *{MoreActionsIcon}*, and click btn:[Delete].
-. When the confirmation message is displayed, click the checkbox and click btn:[Delete].
+. On the container repository that you want to delete, click the btn:[More Actions] icon *{MoreActionsIcon}*, and click btn:[Delete {ExecEnvShort}].
+. When the confirmation message is displayed, click the checkbox and click btn:[Delete {ExecEnvShort}].
.Verification
-* Return to the *Execution Environments* list view.
-If the container repository has been successfully deleted, the container repository is no longer on the list.
+* Return to the *{ExecEnvName}* list view.
+If the {ExecEnvName} has been successfully deleted, it is no longer in the list.
ifdef::parent-context[:context: {parent-context}]
diff --git a/downstream/assemblies/hub/assembly-managing-cert-valid-content.adoc b/downstream/assemblies/hub/assembly-managing-cert-valid-content.adoc
index 83abc0cc67..c54e51b159 100644
--- a/downstream/assemblies/hub/assembly-managing-cert-valid-content.adoc
+++ b/downstream/assemblies/hub/assembly-managing-cert-valid-content.adoc
@@ -1,64 +1,74 @@
+:_mod-docs-content-type: ASSEMBLY
ifdef::context[:parent-context: {context}]
[id="managing-cert-valid-content"]
= Red Hat Certified, validated, and Ansible Galaxy content in automation hub
-:context: managing-cert-validated-content
+:context: cloud-sync
[role="_abstract"]
-{CertifiedName} are included in your subscription to {PlatformName}. Red Hat Ansible content includes two types of content: {CertifiedName} and {Valid}.
-Using {HubNameMain}, you can access and curate a unique set of collections from all forms of Ansible content.
+{CertifiedName} are included in your subscription to {PlatformName}. Using {HubNameMain}, you can access and curate a unique set of collections from all forms of Ansible content.
Red Hat Ansible content contains two types of content:
* {CertifiedName}
* {Valid} collections
-Ansible validated collections are available in your {PrivateHubName} through the Platform Installer.
-When you download {PlatformName} with the bundled installer, validated content is pre-populated into the {PrivateHubName} by default, but only if you enable the {PrivateHubName} as part of the inventory.
+You can use both {CertifiedName} and {Valid} collections to build your automation library. For more information on the differences between {CertifiedName} and {Valid} collections, see the Knowledgebase article link:https://access.redhat.com/support/articles/ansible-automation-platform-certified-content[{CertifiedName} and {Valid}], or xref:assembly-validated-content[{Valid}] in this guide.
-If you are not using the bundle installer, you can use a Red Hat supplied Ansible playbook to install validated content.
-For further information, see xref:assembly-validated-content[{Valid}].
+// hherbly--removed, see aap-20548
+// Ansible validated collections are available in your {PrivateHubName} through the platform installer.
+// When you download {PlatformName} with the bundled installer, validated content is pre-populated into the {PrivateHubName} by default, but only if you enable the {PrivateHubName} as part of the inventory.
+
+// If you are not using the bundle installer, you can use a Red Hat supplied Ansible playbook to install validated content.
+
+// For further information, see xref:assembly-validated-content[{Valid}].
You can update these collections manually by downloading their packages.
-[discrete]
-== Why certify Ansible collections?
+//hherbly: removing as this is specific to partners, not a general user audience. see aap-20548
+
+// [discrete]
+// == Why certify Ansible collections?
+
+// The Ansible certification program represents a shared statement of support for {CertifiedCon} between Red Hat and the ecosystem partner.
+// An end customer experiencing trouble with Ansible and certified partner content can, for example, open a support ticket describing a request for information, or a problem with Red Hat, and expect the ticket to be resolved by Red Hat and the ecosystem partner.
+
+// Red Hat offers go-to-market benefits for Certified Partners to grow market awareness, generate demand, and sell collaboratively.
+
+// Red Hat {CertifiedName} are distributed through {HubNameMain} (subscription required), a centralized repository for jointly supported Ansible Content.
+// As a certified partner, publishing collections to {HubNameMain} gives end customers the power to manage how trusted automation content is used in their production environment with a well-known support life cycle.
+
+// For more information about getting started with certifying a solution, see link:https://connect.redhat.com/en/partner-with-us/red-hat-ansible-automation-certification[Red Hat Partner Resources].
-The Ansible certification program enables a shared statement of support for {CertifiedCon} between Red Hat and the ecosystem partner.
-An end customer, experiencing trouble with Ansible and certified partner content, can open a support ticket, for example, a request for information, or a problem with Red Hat, and expect the ticket to be resolved by Red Hat and the ecosystem partner.
+// [discrete]
+// == How do I get a collection certified?
-Red Hat offers go-to-market benefits for Certified Partners to grow market awareness, generate demand, and sell collaboratively.
+// For instructions on certifying your collection, see the Ansible certification policy guide on link:http://www.ansible.com/partners[Red Hat Partner Connect].
-Red Hat {CertifiedName} are distributed through {HubNameMain} (subscription required), a centralized repository for jointly supported Ansible Content. -As a certified partner, publishing collections to {HubNameMain} provides end customers the power to manage how trusted automation content is used in their production environment with a well-known support life cycle. +// [discrete] +// == How does the joint support agreement on Certified Collections work? -For more information about getting started with certifying a solution, see link:https://connect.redhat.com/en/partner-with-us/red-hat-ansible-automation-certification[Red Hat Partner Connect]. +// If a customer raises an issue with the Red Hat support team about a certified collection, Red Hat support assesses the issue and checks whether the problem is with Ansible or Ansible usage. +// They also check whether the issue is with a certified collection. +// If there is a problem with the certified collection, support teams transfer the issue to the vendor owner of the certified collection through an agreed-upon tool such as TSANet. -[discrete] -== How do I get a collection certified? +// [discrete] +// == Can I create and certify a collection containing only Ansible Roles? -For instructions on certifying your collection, see the Ansible certification policy guide on link:http://www.ansible.com/partners[Red Hat Partner Connect]. +// You can create and certify collections that contain only roles. +// Current testing requirements are focused on collections containing modules, and additional resources are currently in progress for testing collections containing only roles. +// Contact ansiblepartners@redhat.com for more information. -[discrete] -== How does the joint support agreement on Certified Collections work? +You can use {HubNameMain} to distribute the relevant {CertifiedColl}s to your users by creating a requirements file. -If a customer raises an issue with the Red Hat support team about a certified collection, Red Hat support assesses the issue and checks whether the problem exists within Ansible or Ansible usage. -They also check whether the issue is with a certified collection. -If there is a problem with the certified collection, support teams transfer the issue to the vendor owner of the certified collection through an agreed upon tool such as TSANet. +Before you can use a requirements file to install content, you must: -[discrete] -== Can I create and certify a collection containing only Ansible Roles? +. xref:token-management-hub_cloud-sync[Obtain an automation hub API token] +. xref:proc-set-rhcertified-remote_cloud-sync[Use the API token to configure a remote repository in your local hub] +. Then, xref:create-requirements-file_cloud-sync[Create a requirements file]. -You can create and certify collections that contain only roles. -Current testing requirements are focused on collections containing modules, and additional resources are currently in progress for testing collections only containing roles. -Contact ansiblepartners@redhat.com for more information. 
-include::assembly-synclists.adoc[leveloffset=+1] -include::assembly-syncing-to-cloud-repo.adoc[leveloffset=+1] -include::assembly-collections-and-content-signing-in-pah.adoc[leveloffset=+1] -//include::assembly-faq.adoc[leveloffset=+1] -include::assembly-validated-content.adoc[leveloffset=+1] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/hub/assembly-managing-collections-hub.adoc b/downstream/assemblies/hub/assembly-managing-collections-hub.adoc index 75b4cc4271..ed2de81087 100644 --- a/downstream/assemblies/hub/assembly-managing-collections-hub.adoc +++ b/downstream/assemblies/hub/assembly-managing-collections-hub.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] [id="managing-collections-hub"] @@ -6,21 +7,15 @@ ifdef::context[:parent-context: {context}] :context: managing-collections-hub [role="_abstract"] -As a content creator, you can use namespaces in {HubName} to curate and manage collections for the following purposes: +As a content creator, you can use namespaces in {HubName} to curate and manage collections. For example, you can: -* Create groups with permissions to curate namespaces and upload collections to {PrivateHubName} +* Create teams with permissions to curate namespaces and upload collections to {PrivateHubName} * Add information and resources to the namespace to help end users of the collection in their automation tasks * Upload collections to the namespace * Review the namespace import logs to determine the success or failure of uploading the collection and its current approval status -For information on creating content, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_creator_guide/index[{PlatformName} Creator Guide]. +For information on creating content, see link:{LinkDevelopAutomationContent}. -include::assembly-working-with-namespaces.adoc[leveloffset=+1] -include::assembly-managing-private-collections.adoc[leveloffset=+1] -include::assembly-repo-management.adoc[leveloffset=+1] -include::assembly-remote-management.adoc[leveloffset=+2] -include::assembly-repo-sync.adoc[leveloffset=+2] -include::assembly-collection-import-export.adoc[leveloffset=+2] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/hub/assembly-managing-container-registry.adoc b/downstream/assemblies/hub/assembly-managing-container-registry.adoc index 4f8eb57a16..bab8984693 100644 --- a/downstream/assemblies/hub/assembly-managing-container-registry.adoc +++ b/downstream/assemblies/hub/assembly-managing-container-registry.adoc @@ -1,15 +1,14 @@ - +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] - [id="managing-container-registry"] -= Manage your {PrivateHubName} container registry += Manage your {PrivateHubName} remote registry :context: managing-container-registry [role="_abstract"] -Manage container image repositories in your {PlatformNameShort} infrastructure by using the {HubName} container registry. +Manage container image repositories in your {PlatformNameShort} infrastructure by using the {HubName} remote registry. 
You can perform the following tasks with {HubNameStart}: * Control who can access individual container repositories @@ -17,8 +16,6 @@ You can perform the following tasks with {HubNameStart}: * View activity and image layers * Provide additional information related to each container repository - - //// The following include statements pull in the module files that comprise the assembly. Include any combination of concept, procedure, or reference modules required to cover the user story. You can also include other assemblies. //// diff --git a/downstream/assemblies/hub/assembly-managing-containers-hub.adoc b/downstream/assemblies/hub/assembly-managing-containers-hub.adoc index 97a22e8482..14d5bdb8dc 100644 --- a/downstream/assemblies/hub/assembly-managing-containers-hub.adoc +++ b/downstream/assemblies/hub/assembly-managing-containers-hub.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] [id="managing-containers-hub"] @@ -6,15 +7,8 @@ ifdef::context[:parent-context: {context}] :context: managing-containers [role="_abstract"] -Learn the administrator workflows and processes for configuring {PrivateHubName} container registry and repositories. +Learn the administrator workflows and processes for configuring the {PrivateHubName} remote registry and repositories. -include::assembly-managing-container-registry.adoc[leveloffset=+1] -include::assembly-container-user-access.adoc[leveloffset=+1] -include::assembly-populate-container-registry.adoc[leveloffset=+1] -include::assembly-setup-container-repository.adoc[leveloffset=+1] -include::assembly-pull-image.adoc[leveloffset=+1] -include::assembly-working-with-signed-containers.adoc[leveloffset=+1] -include::assembly-delete-container.adoc[leveloffset=+1] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/hub/assembly-managing-private-collections.adoc b/downstream/assemblies/hub/assembly-managing-private-collections.adoc index d27fe5926c..8e8ec7ce69 100644 --- a/downstream/assemblies/hub/assembly-managing-private-collections.adoc +++ b/downstream/assemblies/hub/assembly-managing-private-collections.adoc @@ -1,10 +1,11 @@ +:_mod-docs-content-type: ASSEMBLY [id="assembly-managing-private-collections"] = Managing the publication process of internal collections in Automation Hub Use {HubName} to manage and publish content collections developed within your organization. You can upload and group collections in namespaces. They need administrative approval to appear in the *Published* content repository. After you publish a collection, your users can access and download it for use. -You can reject submitted collections that do not meet organizational certification criteria. +You can also reject submitted collections that do not meet organizational certification criteria. include::hub/con-approval.adoc[leveloffset=+1] diff --git a/downstream/assemblies/hub/assembly-populate-container-registry.adoc b/downstream/assemblies/hub/assembly-populate-container-registry.adoc index 31d00049c2..81ee1e856c 100644 --- a/downstream/assemblies/hub/assembly-populate-container-registry.adoc +++ b/downstream/assemblies/hub/assembly-populate-container-registry.adoc @@ -1,4 +1,4 @@ - +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] @@ -10,27 +10,43 @@ ifdef::context[:parent-context: {context}] [role="_abstract"] -By default, {PrivateHubName} does not include container images. 
-To populate your container registry, you must push a container image to it.
+By default, {PrivateHubName} does not include {ExecEnvName}.
+To populate your container registry, you must push an {ExecEnvShort} to it.
-You must follow a specific workflow to populate your {PrivateHubName} container registry:
+You must follow a specific workflow to populate your {PrivateHubName} remote registry:
-* Pull images from the Red Hat Ecosystem Catalog (registry.redhat.io)
+* Pull {ExecEnvName} from the Red Hat Ecosystem Catalog (registry.redhat.io)
* Tag them
-* Push them to your {PrivateHubName} container registry
+* Push them to your {PrivateHubName} remote registry
[IMPORTANT]
====
-Image manifests and filesystem blobs were both originally served directly from `registry.redhat.io` and `registry.access.redhat.com`.
-As of 1 May 2023, filesystem blobs are served from `quay.io` instead.
+As of *April 1, 2025*, `quay.io` is adding three additional endpoints. As a result, customers must adjust the allowlists and blocklists within their firewall systems to include the following endpoints:
+
+* `cdn04.quay.io`
+* `cdn05.quay.io`
+* `cdn06.quay.io`
+
+To avoid problems pulling container images, customers must allow outbound TCP connections (ports 80 and 443) to the following hostnames:
+
+* `cdn.quay.io`
+* `cdn01.quay.io`
+* `cdn02.quay.io`
+* `cdn03.quay.io`
+* `cdn04.quay.io`
+* `cdn05.quay.io`
+* `cdn06.quay.io`
+
+This change should be made to any firewall configuration that specifically enables outbound connections to `registry.redhat.io` or `registry.access.redhat.com`.
+
+Use the hostnames instead of IP addresses when configuring firewall rules.
-* Ensure that the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/ref-network-ports-protocols_planning[Network ports and protocols] listed in _Table 5.10. Execution Environments (EE)_ are available to avoid problems pulling container images.
+
+After making this change, you can continue to pull images from `registry.redhat.io` or `registry.access.redhat.com`. You do not require a `quay.io` login, or need to interact with the `quay.io` registry directly in any way to continue pulling Red Hat container images.
+
+For more information, see link:https://access.redhat.com/articles/7084334[Firewall changes for container image pulls 2024/2025].
-Make this change to any firewall configuration that specifically enables outbound connections to `registry.redhat.io` or `registry.access.redhat.com`.
-Use the hostnames instead of IP addresses when configuring firewall rules.
+Ensure that the link:{URLPlanningGuide}/ref-network-ports-protocols_planning[Network ports and protocols] listed in _Table 6.4. Execution Environments (EE)_ are available to avoid problems pulling container images.
-After making this change you can continue to pull images from `registry.redhat.io` and `registry.access.redhat.com`. You do not require a `quay.io` login, or need to interact with the `quay.io` registry directly in any way to continue pulling Red Hat container images.
==== include::hub/proc-obtain-images.adoc[leveloffset=+1] diff --git a/downstream/assemblies/hub/assembly-pull-image.adoc b/downstream/assemblies/hub/assembly-pull-image.adoc index dad521a92f..62b843286b 100644 --- a/downstream/assemblies/hub/assembly-pull-image.adoc +++ b/downstream/assemblies/hub/assembly-pull-image.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] [id="pulling-images-container-repository"] @@ -6,9 +7,9 @@ ifdef::context[:parent-context: {context}] :context: pulling-images-container-repository [role="_abstract"] -Pull images from the {HubName} container registry to make a copy to your local machine. -{HubNameStart} provides the `podman pull` command for each `latest` image in the container repository. -You can copy and paste this command into your terminal, or use `podman pull` to copy an image based on an image tag. +Pull {ExecEnvName} from the {HubName} remote registry to make a copy to your local machine. +{HubNameStart} provides the `podman pull` command for each `latest` {ExecEnvName} in the container repository. +You can copy and paste this command into your terminal, or use `podman pull` to copy an {ExecEnvName} based on an {ExecEnvName} tag. include::hub/proc-pull-image.adoc[leveloffset=+1] include::hub/proc-sync-image.adoc[leveloffset=+1] diff --git a/downstream/assemblies/hub/assembly-remote-management.adoc b/downstream/assemblies/hub/assembly-remote-management.adoc index 289cf753e1..780c8594e3 100644 --- a/downstream/assemblies/hub/assembly-remote-management.adoc +++ b/downstream/assemblies/hub/assembly-remote-management.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] [id="remote-management"] diff --git a/downstream/assemblies/hub/assembly-repo-management.adoc b/downstream/assemblies/hub/assembly-repo-management.adoc index 220acc8950..0a66d6d24c 100644 --- a/downstream/assemblies/hub/assembly-repo-management.adoc +++ b/downstream/assemblies/hub/assembly-repo-management.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] [id="repo-management"] @@ -6,7 +7,7 @@ ifdef::context[:parent-context: {context}] :context: repo-management [role="_abstract"] -As an {HubName} administrator, you can create, edit, delete, and move automation content collections between repositories. +As a platform administrator, you can create, edit, delete, and move automation content collections between repositories. == Types of repositories in automation hub @@ -14,9 +15,9 @@ In {HubName} you can publish collections to two types of repositories, depending Staging repositories:: Any user with permission to upload to a namespace can publish collections into these repositories. Collections in these repositories are not available in the search page. Instead, they are displayed on the approval dashboard for an administrator to verify. Staging repositories are marked with the `pipeline=staging` label. -Custom repositories:: Any user with write permissions on the repository can publish collections to these repositories. Custom repositories can be public where all users can see them, or private where only users with view permissions can see them. These repositories are not displayed on the approval dashboard. If the repository owner enables search, the collection can appear in search results. +Custom repositories:: Any user with write permissions on the repository can publish collections to these repositories. 
Custom repositories can be public where all users can see them, or private where only users with view permissions can see them. These repositories are not displayed on the approval dashboard. If the repository owner enables search, the collection can appear in search results. -By default, {HubName} ships with one staging repository that is automatically used when a repository is not specified for uploading collections. Users can create new staging repositories during xref:proc-create-repository[repository creation]. +By default, {HubName} includes one staging repository that is automatically used when a repository is not specified for uploading collections. Users can create new staging repositories during xref:proc-create-repository[repository creation]. include::hub/con-approval-pipeline.adoc[leveloffset=+1] include::hub/con-repo-rbac.adoc[leveloffset=+1] diff --git a/downstream/assemblies/hub/assembly-repo-sync.adoc b/downstream/assemblies/hub/assembly-repo-sync.adoc index 80031c5646..19dee229a7 100644 --- a/downstream/assemblies/hub/assembly-repo-sync.adoc +++ b/downstream/assemblies/hub/assembly-repo-sync.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] [id="repository-sync"] diff --git a/downstream/assemblies/hub/assembly-setup-container-repository.adoc b/downstream/assemblies/hub/assembly-setup-container-repository.adoc index 09af70eba6..e624764a0c 100644 --- a/downstream/assemblies/hub/assembly-setup-container-repository.adoc +++ b/downstream/assemblies/hub/assembly-setup-container-repository.adoc @@ -1,4 +1,4 @@ - +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] @@ -7,17 +7,15 @@ ifdef::context[:parent-context: {context}] [id="setting-up-container-repository"] = Setting up your container repository - :context: assembly-keyword - [role="_abstract"] -When you set up your container repository, you must add a description, include a README, add groups that can access the repository, and tag images. +When you set up your container repository, you must add a description, include a README, add teams that can access the repository, and tag {ExecEnvName}. -== Prerequisites to setting up your container registry +== Prerequisites to setting up your remote registry -* You are logged in to a {PrivateHubName}. +* You are logged in to {PlatformNameShort}. * You have permissions to change the repository. diff --git a/downstream/assemblies/hub/assembly-syncing-to-cloud-repo.adoc b/downstream/assemblies/hub/assembly-syncing-to-cloud-repo.adoc index 7b3ffb4ba2..f7a8c393e0 100644 --- a/downstream/assemblies/hub/assembly-syncing-to-cloud-repo.adoc +++ b/downstream/assemblies/hub/assembly-syncing-to-cloud-repo.adoc @@ -1,26 +1,23 @@ +:_mod-docs-content-type: ASSEMBLY [id="assembly-creating-tokens-in-automation-hub"] = Configuring {HubNameMain} remote repositories to synchronize content -Use remote configurations to configure your {PrivateHubName} to synchronize with {CertifiedName} hosted on `{Console}` or with your collections in {Galaxy}. - -[IMPORTANT] -==== -As of the 2.4 release you can still synchronize content, but synclists are deprecated, and will be removed in a future version. +:context: cloud-sync -To synchronize content, you can now upload a manually-created requirements file from the rh-certified remote. +Use remote configurations to configure your {PrivateHubName} to synchronize with {CertifiedName} hosted on `{Console}` or with your collections in {Galaxy}. 
-Remotes are configurations that allow you to synchronize content to your custom repositories from an external collection source. -==== +Each remote configuration located in {MenuACAdminRemotes} provides information for both the *community* and *rh-certified* repository about when the repository was *last updated*. +You can add new content to {HubNameMain} at any time using the *Edit* and *Sync* features included on the {MenuACAdminRepositories} page. [discrete] -== What’s the difference between {Galaxy} and {HubNameMain}? +== What's the difference between {Galaxy} and {HubNameMain}? Collections published to {Galaxy} are the latest content published by the Ansible community and have no joint support claims associated with them. -{Galaxy} is the recommended frontend directory for the Ansible community accessing content. +{Galaxy} is the recommended frontend directory for the Ansible community to access content. -Collections published to {HubNameMain} are targeted for joint customers of Red Hat and selected partners. +Collections published to {HubNameMain} are targeted to joint customers of Red Hat and selected partners. Customers need an Ansible subscription to access and download collections on {HubNameMain}. -A certified collection means that Red Hat and partners have a strategic relationship in place and are ready to support joint customers, and may have had additional testing and validation done against them. +A certified collection means that Red Hat and partners have a strategic relationship in place and are ready to support joint customers, and that the collections may have had additional testing and validation done against them. [discrete] == How do I request a namespace on {Galaxy}? @@ -37,16 +34,26 @@ After users are added as administrators of the namespace, you can use the self-s [discrete] == Are there any restrictions for {Galaxy} namespace naming? -Collection namespaces must follow python module name convention. +Collection namespaces must follow the Python module naming convention. This means collections should have short, all lowercase names. You can use underscores in the collection name if it improves readability. -include::hub/con-remote-repos.adoc[leveloffset=+1] +// [hherbly: there's only a couple of sentences in this concept module, and they make more sense at the beginning of this assembly.
Moving this content to line 15] include::hub/con-remote-repos.adoc[leveloffset=+1] + +// [hherbly: replacing this with the 4 modules below from the Getting started with hub guide include::hub/proc-obtaining-org-collection-url.adoc[leveloffset=+1] -include::hub/proc-obtaining-org-collection-url.adoc[leveloffset=+1] +include::hub/con-token-management-hub.adoc[leveloffset=+1] + +include::hub/proc-create-api-token.adoc[leveloffset=+1] + +include::hub/proc-create-api-token-pah.adoc[leveloffset=+1] + +include::hub/proc-offline-token-active.adoc[leveloffset=+1] include::hub/proc-set-rhcertified-remote.adoc[leveloffset=+1] include::hub/proc-set-community-remote.adoc[leveloffset=+1] include::hub/proc-configure-proxy-remote.adoc[leveloffset=+1] + +include::hub/proc-create-requirements-file.adoc[leveloffset=+1] \ No newline at end of file diff --git a/downstream/assemblies/hub/assembly-synclists.adoc b/downstream/assemblies/hub/assembly-synclists.adoc index 6b2a651c7e..7be2b2403d 100644 --- a/downstream/assemblies/hub/assembly-synclists.adoc +++ b/downstream/assemblies/hub/assembly-synclists.adoc @@ -1,16 +1,9 @@ +:_mod-docs-content-type: ASSEMBLY [id="assembly-synclists"] = Synchronizing Ansible Content Collections in {HubName} -[IMPORTANT] -==== -As of the 2.4 release you can still synchronize content, but synclists are deprecated, and will be removed in a future version. - -To synchronize content, you can now upload a manually-created requirements file from the rh-certified remote. - +To synchronize content, create and upload a requirements file to the appropriate remote. Remotes are configurations that enable you to synchronize content to your custom repositories from an external collection source. -==== - -You can use {HubNameMain} to distribute the relevant {CertifiedColl}s to your users by creating synclists or a requirements file. For more information about using requirements files, see link:https://docs.ansible.com/ansible/latest/collections_guide/collections_installing.html#install-multiple-collections-with-a-requirements-file[Install multiple collections with a requirements file] in the _Using Ansible collections_ guide. -include::hub/con-rh-certified-synclist.adoc[leveloffset=+1] +// include::hub/con-rh-certified-synclist.adoc[leveloffset=+1] include::hub/proc-create-synclist.adoc[leveloffset=+1] diff --git a/downstream/assemblies/hub/assembly-validated-content.adoc b/downstream/assemblies/hub/assembly-validated-content.adoc index e19d2615eb..8904fb1b2a 100644 --- a/downstream/assemblies/hub/assembly-validated-content.adoc +++ b/downstream/assemblies/hub/assembly-validated-content.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: ASSEMBLY [id="assembly-validated-content"] = {Valid} @@ -7,58 +8,63 @@ == Configuring validated collections with the installer -When you download and run the bundle installer, certified and validated collections are automatically uploaded. +When you download and run the RPM bundle installer, certified and validated collections are automatically uploaded. Certified collections are uploaded into the `rh-certified` repository. Validated collections are uploaded into the `validated` repository. -You can change to default configuration by using two variables: +You can change the default configuration by using two variables: * `automationhub_seed_collections` is a boolean that defines whether or not preloading is enabled. -* `automationhub_collection_seed_repository`. A variable that enables you to specify the type of content to upload when it is set to `true`. 
+* `automationhub_collection_seed_repository` is a variable that enables you to specify the type of content to upload when it is set to `true`. Possible values are `certified` or `validated`. -If missing both content sets will be uploaded. +If this variable is missing, both content sets will be uploaded. -== Installing validated content using the tarball - -If you are not using the bundle installer, you can use a standalone tarball, `ansible-validated-content-bundle-1.tar.gz`. -You can also use this standalone tarball later to update validated contents in any environment, when a newer tarball becomes available, without having to re-run the bundle installer. +[NOTE] +==== +Changing the default configuration may require further platform configuration changes for other content you may use. +==== -.Prerequisites -You require the following variables to run the playbook. +// == Installing validated content using the tarball -[cols="50%,50%",options="header"] -|==== -| Name | Description -| *`automationhub_admin_password`* | Your administration password. -| *`automationhub_api_token`* | The API token generated for your {HubName}. -| *`automationhub_main_url`* | For example, `\https://automationhub.example.com` -| *`automationhub_require_content_approval`* | Boolean (`true` or `false`) +// If you are not using the bundle installer, you can use a standalone .tar file, `ansible-validated-content-bundle-1.tar.gz`. +// You can also use this standalone .tar file later to update validated contents in any environment, when a newer .tar file becomes available, without having to re-run the bundle installer. -This must match the value used during {HubName} deployment. +// .Prerequisites +// Use the following required variables to run the playbook. -This variable is set to `true` by the installer. -|==== +// [cols="50%,50%",options="header"] +// |==== +// | Name | Description +// | *`automationhub_admin_password`* | Your administration password. +// | *`automationhub_api_token`* | The API token generated for your {HubName}. +// | *`automationhub_main_url`* | For example, `\https://automationhub.example.com` +// | *`automationhub_require_content_approval`* | Boolean (`true` or `false`) +// +// This must match the value used during {HubName} deployment. +// +// This variable is set to `true` by the installer. +// |==== -.Procedure -. To obtain the tarball, navigate to the link:{PlatformDownloadUrl}[{PlatformName} download] page and select *Ansible Validated Content*. -. Upload the content and define the variables (this example uses `automationhub_api_token`): -+ -[options="nowrap" subs="+quotes,attributes"] ----- -ansible-playbook collection_seed.yml --e automationhub_api_token= --e automationhub_main_url=https://automationhub.example.com --e automationhub_require_content_approval=true ----- -+ -[NOTE] -==== -Use either `automationhub_admin_password` or `automationhub_api_token`, not both. -==== +// .Procedure +// . To obtain the .tar file, navigate to the link:{PlatformDownloadUrl}[{PlatformName} download] page and select // *Ansible Validated Content*. +// . 
Upload the content and define the variables (this example uses `automationhub_api_token`): +// + +// [options="nowrap" subs="+quotes,attributes"] +// ---- +// ansible-playbook collection_seed.yml +// -e automationhub_api_token= +// -e automationhub_main_url=https://automationhub.example.com +// -e automationhub_require_content_approval=true +// ---- +// + +// [NOTE] +// ==== +// Use either `automationhub_admin_password` or `automationhub_api_token`, not both. +// ==== -When complete, the collections are visible in the validated collection section of {PrivateHubName}. -Users can now view and download collections from your {PrivateHubName}. +// When complete, the collections are visible in the validated collection section of {PrivateHubName}. +// Users can now view and download collections from your {PrivateHubName}. -[role="_additional-resources"] -.Additional Resources -For more information on running ansible playbooks, see link:https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html[ansible-playbook]. +// [role="_additional-resources"] +// .Additional Resources +// For more information on running ansible playbooks, see link:https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html[ansible-playbook]. diff --git a/downstream/assemblies/hub/assembly-working-with-namespaces.adoc b/downstream/assemblies/hub/assembly-working-with-namespaces.adoc index 42e77c6a17..5cd04b867f 100644 --- a/downstream/assemblies/hub/assembly-working-with-namespaces.adoc +++ b/downstream/assemblies/hub/assembly-working-with-namespaces.adoc @@ -1,12 +1,13 @@ +:_mod-docs-content-type: ASSEMBLY [id="assembly-working-with-namespaces"] = Using namespaces to manage collections in {HubName} -Namespaces are unique locations in {HubName} to which you can upload and publish content collections. Access to namespaces in {HubName} is governed by groups with permission to manage the content and related information that appears there. +Namespaces are unique locations in {HubName} to which you can upload and publish content collections. Access to namespaces in {HubName} is governed by teams with permission to manage the content and related information that appears there. You can use namespaces in {HubName} to organize collections developed within your organization for internal distribution and use. -If you are working with namespaces, you must have a group that has permissions to create, edit and upload collections to namespaces. Collections uploaded to a namespace require administrative approval before you can publish them and make them available for use. +If you are working with namespaces, you must have a team that has permissions to create, edit and upload collections to namespaces. Collections uploaded to a namespace require administrative approval before you can publish them and make them available for use. include::hub/proc-create-content-developers.adoc[leveloffset=+1] @@ -14,10 +15,12 @@ include::hub/proc-create-namespace.adoc[leveloffset=+1] include::hub/proc-edit-namespace.adoc[leveloffset=+1] -When you create a namespace, groups with permissions to upload to it can start adding their collections for approval. Collections in the namespace appear in the *Published* repository after approval. +When you create a namespace, teams with permissions to upload to it can start adding their collections for approval. Collections in the namespace appear in the *Published* repository after approval. 
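+// Editorial addition (not upstream text): a hedged command-line example of
+// uploading a collection; the tarball name and the `--server` value are
+// placeholders, and the server must already be defined in your Ansible
+// configuration.
+For example, after building a collection, a user with upload permissions on the namespace can publish it with the `ansible-galaxy` CLI:
+[source,bash]
+----
+$ ansible-galaxy collection publish my_namespace-my_collection-1.0.0.tar.gz --server my_hub
+----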
include::hub/proc-uploading-collections.adoc[leveloffset=+1] include::hub/proc-review-collection-imports.adoc[leveloffset=+1] +include::hub/proc-delete-collection.adoc[leveloffset=+1] + include::hub/proc-delete-namespace.adoc[leveloffset=+1] diff --git a/downstream/assemblies/hub/assembly-working-with-signed-containers.adoc b/downstream/assemblies/hub/assembly-working-with-signed-containers.adoc index b622914bff..2c912142e9 100644 --- a/downstream/assemblies/hub/assembly-working-with-signed-containers.adoc +++ b/downstream/assemblies/hub/assembly-working-with-signed-containers.adoc @@ -1,4 +1,4 @@ - +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] @@ -9,7 +9,7 @@ ifdef::context[:parent-context: {context}] :context: working-with-signed-containers -{ExecEnvNameStart} are container images used by Ansible {ControllerName} to run jobs. +{ExecEnvNameStart} are container images used by {PlatformNameShort} to run jobs. You can download this content to {PrivateHubName}, and publish it within your organization. include::hub/proc-deploying-your-system-for-container-signing.adoc[leveloffset=+1] diff --git a/downstream/assemblies/navigator/assembly-intro-navigator.adoc b/downstream/assemblies/navigator/assembly-intro-navigator.adoc index 3f2e2c0a1f..e45cd587db 100644 --- a/downstream/assemblies/navigator/assembly-intro-navigator.adoc +++ b/downstream/assemblies/navigator/assembly-intro-navigator.adoc @@ -17,6 +17,13 @@ As a content creator, you can use {Navigator} to develop Ansible playbooks, coll {NavigatorStart} also produces an artifact file you can use to help you develop your playbooks and troubleshoot problem areas. +[NOTE] +==== +{NavigatorStart} is a component of {ToolsName}. +To use {Navigator}, you must install {ToolsName}. +==== + + include::navigator/con-about-ansible-navigator.adoc[leveloffset=+1] include::navigator/con-navigator-modes.adoc[leveloffset=+1] include::navigator/ref-navigator-command-summary.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-HA-redis.adoc b/downstream/assemblies/platform/assembly-HA-redis.adoc new file mode 100644 index 0000000000..5691865fdd --- /dev/null +++ b/downstream/assemblies/platform/assembly-HA-redis.adoc @@ -0,0 +1,29 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="HA-redis_{context}"] + += Caching and queueing system + +In {PlatformNameShort} {PlatformVers}, link:https://redis.io/[Redis (REmote DIctionary Server)] is used as the caching and queueing system. Redis is an open source, in-memory, NoSQL key/value store that is used primarily as an application cache, quick-response database and lightweight message broker. + +Centralized Redis is provided for the {Gateway} and {EDAName} and shared between those components. {ControllerNameStart} and {HubName} have their own instances of Redis. + +This cache and queue system stores data in memory, rather than on a disk or solid-state drive (SSD), which helps deliver speed, reliability, and performance. 
In {PlatformNameShort}, the system caches the following types of data for the various services in {PlatformNameShort}: + +.Data types cached by Centralized Redis +[options="header"] +|==== +| {ControllerNameStart} | {EDAName} server | {HubNameStart} | {GatewayStart} +| N/A {ControllerName} does not use shared Redis in {PlatformNameShort} {PlatformVers} | Event queues | N/A {HubName} does not use shared Redis in {PlatformNameShort} {PlatformVers} | Settings, Session Information, JSON Web Tokens +|==== + +This data can contain sensitive Personally Identifiable Information (PII). Your data is protected through secure communication with the cache and queue system through both Transport Layer Security (TLS) encryption and authentication. + +[NOTE] +==== +The data in Redis from both the {Gateway} and {EDAName} is partitioned; therefore, neither service can access the other's data. +==== + +include::platform/con-gw-centralized-redis.adoc[leveloffset=+1] +include::platform/con-gw-clustered-redis.adoc[leveloffset=+1] +include::platform/con-gw-single-node-redis.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-UG-overview.adoc b/downstream/assemblies/platform/assembly-UG-overview.adoc index 971deee736..ab93d23bf3 100644 --- a/downstream/assemblies/platform/assembly-UG-overview.adoc +++ b/downstream/assemblies/platform/assembly-UG-overview.adoc @@ -1,19 +1,61 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] [id="assembly-UG-overview"] = {ControllerNameStart} overview :context: overview-controller With {PlatformNameShort}, users across an organization can share, vet, and manage automation content by means of a simple, powerful, and agentless technical implementation. IT managers can provide guidelines on how automation is applied to individual teams. Automation developers can write tasks that use existing knowledge, without the operational overhead of conforming to complex tools and frameworks. It is a more secure and stable foundation for deploying end-to-end automation solutions, from hybrid cloud to the edge. -{PlatformNameShort} includes {ControllerName}, which enables users to define, operate, scale, and delegate automation across their enterprise.
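+// Editorial addition (not upstream text): the overview modules included below
+// cover the {ControllerName} REST API; this quick illustration uses the
+// browsable API root, with a placeholder host name.
+For example, the {ControllerName} REST API is browsable under `/api/v2/`, and you can confirm that it is reachable with an unauthenticated ping request:
+[source,bash]
+----
+$ curl -s https://controller.example.org/api/v2/ping/
+----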
- include::platform/con-controller-overview-details.adoc[leveloffset=+1] +include::platform/con-controller-overview-exploration.adoc[leveloffset=+1] + +include::platform/con-controller-overview-automation.adoc[leveloffset=+1] + +include::platform/con-controller-overview-rbac.adoc[leveloffset=+1] + +include::platform/con-controller-overview-cloud-autoscaling.adoc[leveloffset=+1] + +include::platform/con-controller-overview-api.adoc[leveloffset=+1] + +include::platform/con-controller-overview-backup-restore.adoc[leveloffset=+1] + +include::platform/con-controller-overview-galaxy.adoc[leveloffset=+1] + +include::platform/con-controller-overview-openstack.adoc[leveloffset=+1] + +include::platform/con-controller-overview-remote-exec.adoc[leveloffset=+1] + +include::platform/con-controller-overview-tracking.adoc[leveloffset=+1] + +include::platform/con-controller-overview-notifiers.adoc[leveloffset=+1] + +include::platform/con-controller-overview-integrations.adoc[leveloffset=+1] + +include::platform/con-controller-overview-virtual-envs.adoc[leveloffset=+1] + +include::platform/con-controller-overview-auth-enhance.adoc[leveloffset=+1] + +include::platform/con-controller-overview-cluster-manage.adoc[leveloffset=+1] + +include::platform/con-controller-overview-workflow-enhancements.adoc[leveloffset=+1] + +include::platform/con-controller-overview-job-distribution.adoc[leveloffset=+1] + +include::platform/con-controller-fips-support.adoc[leveloffset=+1] + +include::platform/con-controller-overview-host-limits.adoc[leveloffset=+1] + +include::platform/con-controller-overview-inventory-plugins.adoc[leveloffset=+1] + +include::platform/con-controller-overview-secret-management.adoc[leveloffset=+1] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-aap-activate.adoc b/downstream/assemblies/platform/assembly-aap-activate.adoc index f2131b15c5..e5f389e4aa 100644 --- a/downstream/assemblies/platform/assembly-aap-activate.adoc +++ b/downstream/assemblies/platform/assembly-aap-activate.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] [id="assembly-aap-activate"] @@ -6,13 +8,34 @@ ifdef::context[:parent-context: {context}] :context: activate-aap [role="_abstract"] + +If you are an organization administrator, you must link:{BaseURL}/red_hat_hybrid_cloud_console/1-latest/html/creating_and_managing_service_accounts/proc-ciam-svc-acct-overview-creating-service-acct#proc-ciam-svc-acct-create-creating-service-acct[create a service account] and use the client ID and client secret to activate your subscription. + +If you do not have administrative access, you can enter your Red Hat username and password in the Client ID and Client secret fields, respectively, to locate and add your subscription to your {PlatformNameShort} instance. + +[NOTE] +==== +If you enter your client ID and client secret but cannot locate your subscription, you might not have the correct permissions set on your service account. For more information and troubleshooting guidance for service accounts, see link:https://access.redhat.com/articles/7112649[Configure Ansible Automation Platform to authenticate through service account credentials]. +==== + +For Red Hat Satellite, input your Satellite username and Satellite password in the fields below. + {PlatformName} uses available subscriptions or a subscription manifest to authorize the use of {PlatformNameShort}. 
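+// Editorial addition (not upstream text): a hedged pre-check for the service
+// account activation path described above. The token endpoint shown is the
+// standard Red Hat SSO endpoint for service accounts; verify it against the
+// linked service account documentation. The client ID and secret are placeholders.
+For example, you can confirm that a service account's client ID and client secret are valid by requesting a token from Red Hat SSO before entering them in {PlatformNameShort}:
+[source,bash]
+----
+$ curl -s \
+    -d "grant_type=client_credentials" \
+    -d "client_id=<client_id>" \
+    -d "client_secret=<client_secret>" \
+    https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
+----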
To obtain a subscription, you can do either of the following: -. Use your Red Hat customer or Satellite credentials when you launch {PlatformNameShort}. +. Use your Red Hat username and password, service account credentials, or Satellite credentials when you launch {PlatformNameShort}. . Upload a subscription manifest file either using the {PlatformName} interface or manually in an Ansible playbook. +ifndef::operationG[] include::platform/proc-aap-activate-with-credentials.adoc[leveloffset=+1] + include::platform/proc-aap-activate-with-manifest.adoc[leveloffset=+1] +endif::operationG[] + +ifdef::operationG[] +To activate {PlatformNameShort} using credentials, see link:{URLCentralAuth}/assembly-gateway-licensing#proc-aap-activate-with-credentials[Activate with credentials]. + +To activate {PlatformNameShort} with a manifest file, see link:{URLCentralAuth}/assembly-gateway-licensing#proc-aap-activate-with-manifest[Activate with a manifest file]. +endif::operationG[] ifdef::parent-context[:context: {parent-context}] -ifndef::parent-context[:!context:] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-aap-advanced-config.adoc b/downstream/assemblies/platform/assembly-aap-advanced-config.adoc new file mode 100644 index 0000000000..9d3779fe73 --- /dev/null +++ b/downstream/assemblies/platform/assembly-aap-advanced-config.adoc @@ -0,0 +1,22 @@ +ifdef::context[:parent-context: {context}] + +:_mod-docs-content-type: ASSEMBLY + +[id="aap-advanced-config"] + += Advanced configurations + +:context: advanced-config + +As a platform administrator, you can implement advanced configurations to customize {PlatformNameShort}, including database connections, logging, caching, and gRPC server parameters. + +include::platform/con-settings-py.adoc[leveloffset=+1] + +include::platform/con-grpc-settings-py.adoc[leveloffset=+1] + +include::platform/con-loading-impacts-grpc-settings.adoc[leveloffset=+2] + +include::platform/con-loading-order-grpc-settings.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-aap-architecture.adoc b/downstream/assemblies/platform/assembly-aap-architecture.adoc deleted file mode 100644 index eda9268de0..0000000000 --- a/downstream/assemblies/platform/assembly-aap-architecture.adoc +++ /dev/null @@ -1,7 +0,0 @@ -// This assembly is part of the AAP Planning Guide -[id='aap_architecture'] -= {PlatformName} Architecture - -As a modular platform, {PlatformNameShort} provides the flexibility to easily integrate components and customize your deployment to best meet your automation requirements. The following section provides a comprehensive architectural example of an {PlatformNameShort} deployment.
- -include::platform/con-aap-example-architecture.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-aap-backup-recovery.adoc b/downstream/assemblies/platform/assembly-aap-backup-recovery.adoc index da6032d4cb..020a86e512 100644 --- a/downstream/assemblies/platform/assembly-aap-backup-recovery.adoc +++ b/downstream/assemblies/platform/assembly-aap-backup-recovery.adoc @@ -1,7 +1,8 @@ +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] -[id="aap-backup-recover"] +[id="assembly-aap-backup-recovery"] :context: aap-backup diff --git a/downstream/assemblies/platform/assembly-aap-backup.adoc b/downstream/assemblies/platform/assembly-aap-backup.adoc index 36b77815cb..6934f38686 100644 --- a/downstream/assemblies/platform/assembly-aap-backup.adoc +++ b/downstream/assemblies/platform/assembly-aap-backup.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] @@ -9,14 +10,13 @@ ifdef::context[:parent-context: {context}] [role="_abstract"] -Backing up your {PlatformName} deployment involves creating backup resources for your deployed {HubName} and {ControllerName} instances. Use these procedures to create backup resources for your {PlatformName} deployment. +Backing up your {PlatformName} deployment involves creating backup resources for your deployed instances. +Use the following procedures to create backup resources for your {PlatformName} deployment. +We recommend taking backups before upgrading the {OperatorPlatformNameShort}. +Take a backup regularly in case you want to restore the platform to a previous state. -//part of 2.5 release, (AAP-22178) uncomment when publishing [gmurray] -include::platform/proc-aap-platform-gateway-backup.adoc[leveloffset=+1] - -include::platform/proc-aap-controller-backup.adoc[leveloffset=+1] -include::platform/proc-aap-hub-backup.adoc[leveloffset=+1] +include::platform/proc-aap-platform-gateway-backup.adoc[leveloffset=+1] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-aap-containerized-installation.adoc b/downstream/assemblies/platform/assembly-aap-containerized-installation.adoc index bb08181afc..19dc525c7b 100644 --- a/downstream/assemblies/platform/assembly-aap-containerized-installation.adoc +++ b/downstream/assemblies/platform/assembly-aap-containerized-installation.adoc @@ -1,61 +1,81 @@ -ifdef::context[:parent-context-of-aap-containerized-installation: {context}] - :_mod-docs-content-type: ASSEMBLY - -ifndef::context[] [id="aap-containerized-installation"] -endif::[] - -ifdef::context[] -[id="aap-containerized-installation_{context}"] -endif::[] +ifdef::context[:parent-context: {context}] = {PlatformNameShort} containerized installation :context: aap-containerized-installation -[role="_abstract"] -Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. +{PlatformNameShort} is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. + +This guide helps you to understand the installation requirements and processes behind the containerized version of {PlatformNameShort}. -This guide helps you to understand the installation requirements and processes behind our new containerized version of Ansible Automation Platform. 
This initial version is based upon {PlatformNameShort} 2.4 and is being released as a Technical Preview. Please see link:https://access.redhat.com/support/offerings/techpreview[Technology Preview Features Support Scope] to understand what a technical preview entails. +[NOTE] +==== +include::snippets/container-upgrades.adoc[] -.Prerequisites +==== -* A RHEL 9.2 based host. Minimal OS base install is recommended. -* A non-root user for the RHEL host, with sudo or other Ansible supported privilege escalation (sudo recommended). This user is responsible for the installation of containerized {PlatformNameShort}. -* It is recommended setting up an *SSH public key authentication* for the non-root user. For guidelines on setting up an SSH public key authentication for the non-root user, see link:https://access.redhat.com/solutions/4110681[How to configure SSH public key authentication for passwordless login]. -* SSH keys are only required when installing on remote hosts. If doing a self contained local VM based installation, you can use *ansible_connection: local* as per the example which does not require SSH. -* Internet access from the RHEL host if using the default online installation method. +== Tested deployment models -== System Requirements -Your system must meet the following minimum system requirements to install and run Red Hat Containerized Ansible Automation Platform. +Red Hat tests {PlatformNameShort} {PlatformVers} with a defined set of topologies to give you opinionated deployment options. The supported topologies include infrastructure topology diagrams, tested system configurations, example inventory files, and network ports information. -[cols=2] -|====================== -| Memory | 16Gb RAM -| CPU | 4 CPU -| Disk space | 40Gb -| Disk IOPs | 1500 -|====================== +For containerized {PlatformNameShort}, there are two infrastructure topology shapes: +. Growth - (All-in-one) Intended for organizations that are getting started with {PlatformNameShort}. This topology allows for smaller footprint deployments. +. Enterprise - Intended for organizations that require {PlatformNameShort} deployments to have redundancy or higher compute for large volumes of automation. This is a more future-proofed, scaled-out architecture. +For more information about the tested deployment topologies for containerized {PlatformNameShort}, see link:{URLTopologies}/container-topologies[Container topologies] in _{TitleTopologies}_. +include::platform/ref-cont-aap-system-requirements.adoc[leveloffset=+1] include::platform/proc-preparing-the-rhel-host-for-containerized-installation.adoc[leveloffset=+1] -include::platform/proc-installing-ansible-core.adoc[leveloffset=+1] + +include::platform/proc-preparing-the-managed-nodes-for-containerized-installation.adoc[leveloffset=+1] + include::platform/proc-downloading-containerized-aap.adoc[leveloffset=+1] -include::platform/proc-using-postinstall.adoc[leveloffset=+1] + +include::platform/ref-configuring-inventory-file.adoc[leveloffset=+1] + +include::platform/proc-set-registry-username-password.adoc[leveloffset=+2] + +== Advanced configuration options +Advanced configuration options, such as external database setup and the use of custom TLS certificates, are available for more complex deployments of containerized {PlatformNameShort}.
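+// Editorial addition (not upstream text): an illustrative sketch of the kind of
+// inventory change that an external database involves. The variable names below
+// are hypothetical placeholders; use the names documented in the external
+// database section and the inventory file variables appendix.
+For example, pointing the installer at an external database is typically a matter of setting database connection variables in the installation inventory:
+[source,ini]
+----
+[all:vars]
+# Hypothetical variable names; replace with the documented ones.
+postgresql_host=db.example.org
+postgresql_port=5432
+postgresql_password=<password>
+----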
+ +If you are not using these advanced configuration options, go to link:{URLContainerizedInstall}/aap-containerized-installation#installing-containerized-aap[Installing containerized {PlatformNameShort}] to continue with your installation. + +include::platform/proc-add-eda-safe-plugin-var.adoc[leveloffset=+2] + +include::platform/ref-adding-execution-nodes.adoc[leveloffset=+2] + +include::assembly-configure-hub-storage.adoc[leveloffset=+2] + +include::platform/proc-configure-haproxy-load-balancer.adoc[leveloffset=+2] + +include::platform/proc-enabling-automation-hub-collection-and-container-signing.adoc[leveloffset=+2] + +include::assembly-setup-postgresql-ext-database.adoc[leveloffset=+2] + +include::assembly-using-custom-tls-certificates.adoc[leveloffset=+2] + +include::platform/ref-using-custom-receptor-signing-keys.adoc[leveloffset=+2] + include::platform/proc-installing-containerized-aap.adoc[leveloffset=+1] -include::platform/ref-accessing-control-auto-hub-eda-control.adoc[leveloffset=+1] -include::platform/ref-using-custom-tls-certificates.adoc[leveloffset=+1] -include::platform/ref-using-custom-receptor-signing-keys.adoc[leveloffset=+1] -include::platform/ref-enabling-automation-hub-collection-and-container-signing.adoc[leveloffset=+1] -include::platform/ref-adding-execution-nodes.adoc[leveloffset=+1] -include::platform/proc-uninstalling-containerized-aap.adoc[leveloffset=+1] +//Michelle: Postinstall not currently functioning so commented out +//include::platform/proc-using-postinstall.adoc[leveloffset=+1] + +include::platform/proc-update-aap-container.adoc[leveloffset=+1] + +include::platform/proc-backup-aap-container.adoc[leveloffset=+1] + +include::platform/proc-restore-aap-container.adoc[leveloffset=+1] + +include::platform/proc-uninstalling-containerized-aap.adoc[leveloffset=+1] -ifdef::parent-context-of-aap-containerized-installation[:context: {parent-context-of-aap-containerized-installation}] -ifndef::parent-context-of-aap-containerized-installation[:!context:] +include::platform/proc-reinstalling-containerized-aap.adoc[leveloffset=+1] +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-aap-manifest-files.adoc b/downstream/assemblies/platform/assembly-aap-manifest-files.adoc index 543c98cce6..cc46df6c94 100644 --- a/downstream/assemblies/platform/assembly-aap-manifest-files.adoc +++ b/downstream/assemblies/platform/assembly-aap-manifest-files.adoc @@ -1,9 +1,9 @@ +:_mod-docs-content-type: ASSEMBLY +// emurtoug removed this file from the planning guide to avoid duplication of subscription content within Access management and authentication ifdef::context[:parent-context: {context}] - - [id="assembly-aap-obtain-manifest-files"] = Obtaining a manifest file @@ -21,4 +21,4 @@ include::platform/proc-aap-add-merge-subscriptions.adoc[leveloffset=+1] include::platform/proc-aap-generate-manifest-file.adoc[leveloffset=+1] ifdef::parent-context[:context: {parent-context}] -ifndef::parent-context[:!context:] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-aap-migration.adoc b/downstream/assemblies/platform/assembly-aap-migration.adoc index d9d825f85b..aa2f96cf94 100644 --- a/downstream/assemblies/platform/assembly-aap-migration.adoc +++ b/downstream/assemblies/platform/assembly-aap-migration.adoc @@ -1,32 +1,47 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] [id="aap-migration"] -= Migrating
{PlatformName} to {OperatorPlatform} += Migrating {PlatformName} to {OperatorPlatformName} :context: aap-migration [role="_abstract"] -Migrating your {PlatformName} deployment to the {OperatorPlatform} allows you to take advantage of the benefits provided by a Kubernetes native operator, including simplified upgrades and full lifecycle support for your {PlatformName} deployments. - -Use these procedures to migrate any of the following deployments to the {OperatorPlatform}: - -* A VM-based installation of Ansible Tower 3.8.6, {ControllerName}, or {HubName} -* An Openshift instance of Ansible Tower 3.8.6 ({PlatformNameShort} 1.2) - -include::platform/con-aap-migration-considerations.adoc[leveloffset=+1] -include::platform/con-aap-migration-prepare.adoc[leveloffset=+1] -include::platform/proc-aap-migration-backup.adoc[leveloffset=+2] -include::platform/proc-create-secret-key-secret.adoc[leveloffset=+2] -include::platform/proc-create-postresql-secret.adoc[leveloffset=+2] -include::platform/proc-verify-network-connectivity.adoc[leveloffset=+2] -include::platform/proc-aap-migration.adoc[leveloffset=+1] -include::platform/proc-aap-create_controller.adoc[leveloffset=+2] -include::platform/proc-aap-create_hub.adoc[leveloffset=+2] -include::platform/proc-post-migration-cleanup.adoc[leveloffset=+1] - - - +Migrating your {PlatformName} deployment to the {OperatorPlatformNameShort} allows you to take advantage of the benefits provided by a Kubernetes native operator, including simplified upgrades and full lifecycle support for your {PlatformName} deployments. + +You can use the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/ansible_automation_platform_migration[{TitleMigration}] guide for help with migrating. + +[NOTE] +==== +Upgrades of {EDAName} version 2.4 to 2.5 are not supported. Database migrations between {EDAName} 2.4 and {EDAName} 2.5 are not compatible. +==== + +//[gmurray 07/14/25 ]The following modules will need to be deprecated eventually, commenting out for now in case we need to roll back. I also need to confirm which are used in 2.4.
Best thing would be to archive these when we cease supporting 2.4 +// +//include::platform/con-aap-migration-considerations.adoc[leveloffset=+1] +// +//include::platform/con-aap-migration-prepare.adoc[leveloffset=+1] +// +//include::platform/proc-aap-migration-backup.adoc[leveloffset=+2] +// +//include::platform/proc-create-secret-key-secret.adoc[leveloffset=+2] +// +//include::platform/proc-create-postresql-secret.adoc[leveloffset=+2] +// +//include::platform/proc-verify-network-connectivity.adoc[leveloffset=+2] +// +//include::platform/proc-aap-migration.adoc[leveloffset=+1] +// +//include::platform/proc-aap-create-aap-object.adoc[leveloffset=+2] +// +//include::platform/con-post-migration-cleanup.adoc[leveloffset=+1] +// +//include::platform/proc-post-migration-delete-instance.adoc[leveloffset=+2] +// +//include::platform/proc-post-migration-unlink-db.adoc[leveloffset=+2] +// ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-aap-platform-components.adoc b/downstream/assemblies/platform/assembly-aap-platform-components.adoc index addd8902ff..131abe3442 100644 --- a/downstream/assemblies/platform/assembly-aap-platform-components.adoc +++ b/downstream/assemblies/platform/assembly-aap-platform-components.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] [id="ref-aap-components"] @@ -8,7 +10,25 @@ ifdef::context[:parent-context: {context}] [role="_abstract"] -{PlatformNameShort} is a modular platform composed of separate components that can be connected together to meet your deployment needs. {PlatformNameShort} deployments start with {ControllerName} which is the enterprise framework for controlling, securing, and managing Ansible automation with a user interface (UI) and RESTful application programming interface (API). Then, you can add to your deployment any combination of the following automation platform components: +{PlatformNameShort} is composed of services that are connected together to meet your automation needs. These services provide the ability to store, make decisions for, and execute automation. All of these functions are available through a user interface (UI) and RESTful application programming interface (API). Deploy each of the following components so that all features and capabilities are available for use without the need to take further action: + +* {GatewayStart} +* {ControllerNameStart} +* {HubNameStart} +* {PrivateHubNameStart} +* High availability {HubName} +* {EDAcontroller} +* {AutomationMeshStart} +* {ExecEnvNameStart} +* {Galaxy} +* {NavigatorStart} +* PostgreSQL + +include::platform/con-about-platform-gateway.adoc[leveloffset=+1] + +//Readded controller description for AAP-48022 + +include::platform/con-about-controller.adoc[leveloffset=+1] include::platform/con-about-automation-hub.adoc[leveloffset=+1] @@ -16,8 +36,6 @@ include::platform/con-about-pa-hub.adoc[leveloffset=+1] include::platform/con-about-ha-hub.adoc[leveloffset=+1] -//[dcd-moved this description to the intro to platform components since it is the only "required" component. 
include::platform/con-about-automation-controller.adoc[leveloffset=+1] - //include::platform/con-about-services-catalog.adoc[leveloffset=+1] include::platform/con-about-eda-controller.adoc[leveloffset=+1] @@ -30,5 +48,8 @@ include::platform/con-about-galaxy.adoc[leveloffset=+1] include::platform/con-about-navigator.adoc[leveloffset=+1] +//Added for AAP-48022 +include::platform/con-about-postgres.adoc[leveloffset=+1] + ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-aap-post-upgrade.adoc b/downstream/assemblies/platform/assembly-aap-post-upgrade.adoc new file mode 100644 index 0000000000..bb6173027a --- /dev/null +++ b/downstream/assemblies/platform/assembly-aap-post-upgrade.adoc @@ -0,0 +1,48 @@ +:_mod-docs-content-type: ASSEMBLY + +ifdef::context[:parent-context: {context}] + +[id="aap-post-upgrade"] += {PlatformNameShort} post-upgrade steps + +:context: aap-post-upgrade + +[role="_abstract"] + +After a successful upgrade to {PlatformNameShort} 2.5, the next crucial step is migrating your users to the latest version of the platform. + +User data and legacy authentication settings from {ControllerName} and {PrivateHubName} are carried over during the upgrade process and allow seamless initial access to the platform after upgrade. Customers can log in without additional action. + +However, to fully transition authentication to use all of the features and capabilities of the 2.5 {Gateway}, a manual process is required post-upgrade to leverage the new authentication framework. In the context of upgrading to {PlatformNameShort} 2.5, this manual process is referred to as _migration_. + +There are important notes and considerations for each type of user migration, including the following: + +* Admin users +* Normal users +* SSO users +* LDAP users + +Be sure to read through the important notes highlighted for each user type to help make the migration process as smooth as possible. + +include::platform/ref-aap-considerations-for-migrate-admin-users.adoc[leveloffset=+1] + +include::platform/proc-aap-migrate-admin-users.adoc[leveloffset=+1] + +include::platform/ref-aap-considerations-for-migrate-normal-users.adoc[leveloffset=+1] + +include::platform/con-aap-migrate-normal-users.adoc[leveloffset=+1] + +include::platform/proc-account-linking.adoc[leveloffset=+2] + +include::platform/con-aap-migrate-SAML-users.adoc[leveloffset=+1] + +include::platform/con-aap-migrate-LDAP-users.adoc[leveloffset=+1] + +include::platform/con-aap-legacy-user-login.adoc[leveloffset=+2] + +include::platform/proc-aap-migrate-LDAP-users.adoc[leveloffset=+2] + + + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-aap-recovery.adoc b/downstream/assemblies/platform/assembly-aap-recovery.adoc index 6b87e7237e..7eb4af62e0 100644 --- a/downstream/assemblies/platform/assembly-aap-recovery.adoc +++ b/downstream/assemblies/platform/assembly-aap-recovery.adoc @@ -1,19 +1,17 @@ +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] -[id="aap-recovery"] +[id="assembly-aap-recovery"] + = Recovering a {PlatformName} deployment :context: aap-recovery [role="_abstract"] -If you lose information on your system or issues with an upgrade, you can use the backup resources of your deployment instances. Use these procedures to recover your {ControllerName} and {HubName} deployment files. 
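+// Editorial addition (not upstream text): an illustrative sketch of an
+// operator-driven backup request for the backup and recovery assemblies in this
+// patch. The apiVersion, kind, and field names are assumptions; confirm them
+// against the platform gateway backup and restore procedure modules.
+With the {OperatorPlatformNameShort}, a backup is requested through a custom resource along the lines of the following sketch:
+[source,yaml]
+----
+apiVersion: aap.ansible.com/v1alpha1   # assumed API group and version
+kind: AnsibleAutomationPlatformBackup  # assumed kind name
+metadata:
+  name: aap-backup
+spec:
+  deployment_name: my-aap   # assumed field: the deployment to back up
+----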
-//part of 2.5 release, (AAP-22178) uncomment when publishing [gmurray] -//include::platform/proc-aap-platform-gateway-restore.adoc[leveloffset=+1] - -include::platform/proc-aap-controller-restore.adoc[leveloffset=+1] +If you lose information on your system or experience issues with an upgrade, you can use the backup resources of your deployment instances. Use the following procedures to recover your {PlatformNameShort} deployment files. -include::platform/proc-aap-hub-restore.adoc[leveloffset=+1] +include::platform/proc-aap-platform-gateway-restore.adoc[leveloffset=+1] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-aap-troubleshoot-backup-recovery.adoc b/downstream/assemblies/platform/assembly-aap-troubleshoot-backup-recovery.adoc index bbb42b7e67..622088b9fc 100644 --- a/downstream/assemblies/platform/assembly-aap-troubleshoot-backup-recovery.adoc +++ b/downstream/assemblies/platform/assembly-aap-troubleshoot-backup-recovery.adoc @@ -1,7 +1,8 @@ +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] -[id="aap-troubleshoot-backup-recover"] +[id="assembly-aap-troubleshoot-backup-recover"] :context: troubleshooting diff --git a/downstream/assemblies/platform/assembly-aap-upgrades.adoc b/downstream/assemblies/platform/assembly-aap-upgrades.adoc index 9b0678bcdb..8865947dac 100644 --- a/downstream/assemblies/platform/assembly-aap-upgrades.adoc +++ b/downstream/assemblies/platform/assembly-aap-upgrades.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] @@ -8,12 +10,9 @@ ifdef::context[:parent-context: {context}] [role="_abstract"] -Upgrade to {PlatformName} {PlatformVers} by setting up your inventory and running the installation script. -Ansible then upgrades your deployment to {PlatformVers}. -If you plan to upgrade from {PlatformNameShort} 2.0 or earlier, you must migrate Ansible content for compatibility with {PlatformVers}. include::platform/con-aap-upgrades.adoc[leveloffset=+1] -include::platform/con-aap-upgrades-legacy.adoc[leveloffset=+1] +// include::platform/con-aap-upgrades-legacy.adoc[leveloffset=+1] ifdef::parent-context[:context: {parent-context}] diff --git a/downstream/assemblies/platform/assembly-aap-upgrading-platform.adoc b/downstream/assemblies/platform/assembly-aap-upgrading-platform.adoc index ac1f9fe009..486b90b367 100644 --- a/downstream/assemblies/platform/assembly-aap-upgrading-platform.adoc +++ b/downstream/assemblies/platform/assembly-aap-upgrading-platform.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] [id="aap-upgrading-platform"] @@ -7,14 +9,32 @@ ifdef::context[:parent-context: {context}] [role="_abstract"] -To upgrade your {PlatformName}, start by reviewing planning information to ensure a successful upgrade. +To upgrade your {PlatformName}, start by reviewing link:{LinkPlanningGuide} to ensure a successful upgrade. You can then download the desired version of the {PlatformNameShort} installer, configure the inventory file in the installation bundle to reflect your environment, and then run the installer. +== Prerequisites + +* Upgrades to {PlatformNameShort} 2.5 include the link:{URLPlanningGuide}/ref-aap-components#con-about-platform-gateway_planning[{Gateway}]. 
Ensure you review the link:{URLPlanningGuide}/ref-network-ports-protocols_planning[2.5 Network ports and protocols] for architectural changes and link:{LinkTopologies} for information on opinionated deployment models. + +* You have reviewed the link:{URLPlanningGuide}/ha-redis_planning#gw-centralized-redis_planning[centralized Redis] instance offered by {PlatformNameShort} for both standalone and clustered topologies. + +* Prior to upgrading your {PlatformName}, ensure you have reviewed link:{LinkPlanningGuide} for a successful upgrade. + +* Prior to upgrading your {PlatformName}, ensure you have upgraded to {ControllerName} 2.5 or later. + +* When upgrading to {PlatformNameShort} {PlatformVers}, you must use RPM installer version 2.5-11 or later. If you use an older installer, the installation might fail. If you encounter a failed installation using an older version of the installer, rerun the installation with RPM installer version 2.5-11 or later. + include::platform/con-aap-upgrade-planning.adoc[leveloffset=+1] include::platform/proc-choosing-obtaining-installer.adoc[leveloffset=+1] +include::platform/proc-choosing-obtaining-installer-no-internet.adoc[leveloffset=+2] include::platform/proc-editing-inventory-file-for-updates.adoc[leveloffset=+1] +include::platform/con-backup-aap.adoc[leveloffset=+1] include::platform/proc-running-setup-script-for-updates.adoc[leveloffset=+1] +include::platform/proc-upgrade-controller-hub-eda-unified-ui.adoc[leveloffset=+1] // [ddacosta] - Moved to a new post upgrade section of the doc //include::platform/proc-account-linking.adoc[leveloffset=+1] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] + \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-ag-controller-backup-and-restore.adoc b/downstream/assemblies/platform/assembly-ag-controller-backup-and-restore.adoc index c0e8b0ea55..6b332fb986 100644 --- a/downstream/assemblies/platform/assembly-ag-controller-backup-and-restore.adoc +++ b/downstream/assemblies/platform/assembly-ag-controller-backup-and-restore.adoc @@ -1,10 +1,12 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-backup-and-restore"] = Backup and restore You can back up and restore your system using the {PlatformNameShort} setup playbook. -For more information, see the xref:controller-backup-restore-clustered-environments[Backup and restore clustered environments] section. +For more information, see the link:{URLControllerAdminGuide}/index#controller-backup-restore-clustered-environments[Backup and restore clustered environments] section. [NOTE] ==== @@ -13,7 +15,7 @@ However, you must use the most recent minor version of a release to backup or re For example, if the current {PlatformNameShort} version you are on is 2.0.x, use only the latest 2.0 installer. Backup and restore only works on PostgreSQL versions supported by your current platform version. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_installation_guide/index#red_hat_ansible_automation_platform_system_requirements[{PlatformName} system requirements] in the _{PlatformName} Installation Guide_. +For more information, see link:{URLPlanningGuide}/platform-system-requirements[System requirements] in the _{TitlePlanningGuide}_.
==== The {PlatformNameShort} setup playbook is invoked as `setup.sh` from the path where you unpacked the platform installer tarball. diff --git a/downstream/assemblies/platform/assembly-ag-controller-clustering.adoc b/downstream/assemblies/platform/assembly-ag-controller-clustering.adoc index 3dd44525c9..d7853f58a2 100644 --- a/downstream/assemblies/platform/assembly-ag-controller-clustering.adoc +++ b/downstream/assemblies/platform/assembly-ag-controller-clustering.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-clustering"] = Clustering @@ -13,13 +15,13 @@ Load balancing is optional, and it is entirely possible to have ingress on one o Each instance must be able to join the {ControllerName} cluster and expand its ability to run jobs. This is a simple system where jobs can run anywhere rather than be directed on where to run. -Also, you can group clustered instances into different pools or queues, called xref:controller-instance-groups[Instance groups]. +Also, you can group clustered instances into different pools or queues, called link:{URLControllerUserGuide}/controller-instance-groups[Instance groups] as described in _{ControllerUG}_. {PlatformNameShort} supports container-based clusters by using Kubernetes, meaning you can install new {ControllerName} instances on this platform without any variation or diversion in functionality. You can create instance groups to point to a Kubernetes container. -For more information, see the xref:controller-instance-and-container-groups[Container and instance groups] section. +For more information, see the link:{URLControllerUserGuide}/controller-instance-and-container-groups[Instance and container groups] section in _{ControllerUG}_. -.Supported operating systems +*Supported operating systems* The following operating systems are supported for establishing a clustered environment: @@ -31,10 +33,17 @@ Isolated instances are not supported in conjunction with running automation cont ==== include::platform/ref-controller-setup-considerations.adoc[leveloffset=+1] + include::platform/ref-controller-cluster-install.adoc[leveloffset=+1] + include::platform/ref-controller-cluster-instances.adoc[leveloffset=+2] + include::platform/ref-controller-cluster-status-api.adoc[leveloffset=+1] + include::platform/ref-controller-cluster-instance-behavior.adoc[leveloffset=+1] + include::platform/ref-controller-cluster-job-runtime.adoc[leveloffset=+1] + include::platform/con-controller-cluster-job-runs.adoc[leveloffset=+2] + include::platform/proc-controller-cluster-deprovision-instances.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-ag-controller-config.adoc b/downstream/assemblies/platform/assembly-ag-controller-config.adoc index f825566196..53d8685613 100644 --- a/downstream/assemblies/platform/assembly-ag-controller-config.adoc +++ b/downstream/assemblies/platform/assembly-ag-controller-config.adoc @@ -1,25 +1,35 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-config"] = {ControllerNameStart} configuration -You can configure some {ControllerName} options using the *Settings* menu of the User Interface. +You can configure some {ControllerName} options by using the *Settings* menu of the User Interface. -//Each tab contains fields with a *Reset* option, enabling you to revert any value entered back to the default value. -//*Reset All* enables you to revert all the values to their factory default values. +*Save* applies the changes you make, but it does not exit the edit dialog. 
-//*Save* applies the changes you make, but it does not exit the edit dialog. To return to the *Settings* page, from the navigation panel select {MenuAEAdminSettings} or use the breadcrumbs at the top of the current view. //Now a separate option covered by Donna //include::platform/proc-controller-authentication.adoc[leveloffset=+1] -include::platform/proc-controller-configure-jobs.adoc[leveloffset=+1] +//[ddacosta] subscription content moved to access management guide +//include::platform/proc-controller-configure-subscriptions.adoc[leveloffset=+1] + include::platform/proc-controller-configure-system.adoc[leveloffset=+1] + +include::platform/proc-controller-configure-jobs.adoc[leveloffset=+1] + +include::platform/ref-controller-logging-settings.adoc[leveloffset=+1] + //The only directly controller related thing here is the custom logo which is covered separately //include::platform/proc-controller-configure-user-interface.adoc[leveloffset=+1] -//This doesn't exisat in the documented form //include::platform/proc-controller-configure-usability-analytics.adoc[leveloffset=+2] -include::platform/con-controller-custom-logos.adoc[leveloffset=+1] +//include::platform/con-controller-custom-logos.adoc[leveloffset=+1] + +include::platform/proc-controller-configure-analytics.adoc[leveloffset=+1] + include::platform/con-controller-additional-settings.adoc[leveloffset=+1] + //This should be in Hala's documentation //include::platform/proc-controller-obtaining-subscriptions.adoc[leveloffset=+1] //include::platform/con-controller-keep-subscription-in-compliance.adoc[leveloffset=+2] diff --git a/downstream/assemblies/platform/assembly-ag-controller-secret-handling.adoc b/downstream/assemblies/platform/assembly-ag-controller-secret-handling.adoc index 1f023653db..a6f8c17bce 100644 --- a/downstream/assemblies/platform/assembly-ag-controller-secret-handling.adoc +++ b/downstream/assemblies/platform/assembly-ag-controller-secret-handling.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-secret-handling-and-connection-security"] = Secret handling and connection security diff --git a/downstream/assemblies/platform/assembly-ag-controller-security-best-practices.adoc b/downstream/assemblies/platform/assembly-ag-controller-security-best-practices.adoc index c57292da0d..10e8c42614 100644 --- a/downstream/assemblies/platform/assembly-ag-controller-security-best-practices.adoc +++ b/downstream/assemblies/platform/assembly-ag-controller-security-best-practices.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-security-best-practices"] = Security best practices diff --git a/downstream/assemblies/platform/assembly-ag-controller-start-stop-controller.adoc b/downstream/assemblies/platform/assembly-ag-controller-start-stop-controller.adoc index 6d1a93abd3..489852efb8 100644 --- a/downstream/assemblies/platform/assembly-ag-controller-start-stop-controller.adoc +++ b/downstream/assemblies/platform/assembly-ag-controller-start-stop-controller.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-start-stop-controller"] = Start, stop, and restart {ControllerName} diff --git a/downstream/assemblies/platform/assembly-ag-controller-tips-and-tricks.adoc b/downstream/assemblies/platform/assembly-ag-controller-tips-and-tricks.adoc index 0b3a1c1bcd..a5221e9cfd 100644 --- a/downstream/assemblies/platform/assembly-ag-controller-tips-and-tricks.adoc +++ b/downstream/assemblies/platform/assembly-ag-controller-tips-and-tricks.adoc @@ -1,3 +1,5 @@ 
+:_mod-docs-content-type: ASSEMBLY + [id="controller-tips-and-tricks"] = {ControllerNameStart} tips and tricks diff --git a/downstream/assemblies/platform/assembly-ag-controller-troubleshooting.adoc b/downstream/assemblies/platform/assembly-ag-controller-troubleshooting.adoc index 92a1208f6f..cc0cd97d93 100644 --- a/downstream/assemblies/platform/assembly-ag-controller-troubleshooting.adoc +++ b/downstream/assemblies/platform/assembly-ag-controller-troubleshooting.adoc @@ -1,12 +1,14 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-troubleshooting"] = Troubleshooting {ControllerName} Useful troubleshooting information for {ControllerName}. -include::platform/ref-controller-connect-to-host.adoc[leveloffset=+1] +//include::platform/ref-controller-connect-to-host.adoc[leveloffset=+1] include::platform/ref-controller-unable-to-login-http.adoc[leveloffset=+1] -include::platform/ref-controller-run-a-playbook.adoc[leveloffset=+1] +//include::platform/ref-controller-run-a-playbook.adoc[leveloffset=+1] include::platform/ref-controller-unable-to-run-job.adoc[leveloffset=+1] include::platform/ref-controller-playbooks-not-showing.adoc[leveloffset=+1] include::platform/ref-controller-playbook-pending.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-ag-controller-usability-analytics.adoc b/downstream/assemblies/platform/assembly-ag-controller-usability-analytics.adoc index e87b501840..5fd9d5e830 100644 --- a/downstream/assemblies/platform/assembly-ag-controller-usability-analytics.adoc +++ b/downstream/assemblies/platform/assembly-ag-controller-usability-analytics.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-usability-analytics-data-collection"] = Usability Analytics and Data Collection @@ -11,8 +13,9 @@ Only users installing a trial of or a fresh installation of are opted-in for thi //You can opt out or control the way {ControllerName} collects data by setting your participation level in the *User Interface settings* in the {MenuAEAdminSettings} menu. //Should Settings menu be a link? +For information on setting up {Analytics}, see xref:proc-controller-configure-analytics[Configuring {Analytics}]. -include::platform/proc-controller-control-data-collection.adoc[leveloffset=+1] +//include::platform/proc-controller-control-data-collection.adoc[leveloffset=+1] include::platform/ref-controller-automation-analytics.adoc[leveloffset=+1] include::platform/ref-controller-use-by-organization.adoc[leveloffset=+2] include::platform/ref-controller-jobs-run-by-organization.adoc[leveloffset=+2] diff --git a/downstream/assemblies/platform/assembly-ag-instance-and-container-groups.adoc b/downstream/assemblies/platform/assembly-ag-instance-and-container-groups.adoc index 41bff27321..a3a39f836f 100644 --- a/downstream/assemblies/platform/assembly-ag-instance-and-container-groups.adoc +++ b/downstream/assemblies/platform/assembly-ag-instance-and-container-groups.adoc @@ -1,28 +1,48 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-instance-and-container-groups"] = Instance and container groups -{ControllerNameStart} enables you to execute jobs through Ansible playbooks run directly on a member of the cluster or in a namespace of an OpenShift cluster with the necessary service account provisioned. +{ControllerNameStart} enables you to run jobs through Ansible playbooks that run directly on a member of the cluster or in a namespace of an OpenShift cluster with the necessary service account provisioned. This is called a container group.
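For illustration, a container group ultimately provisions a pod from a specification similar to the following minimal sketch. The namespace, service account, and {ExecEnvShort} image here are assumptions for the example, not guaranteed defaults; the container group sections that follow describe the actual procedure.

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  namespace: ansible-automation        # assumed namespace provisioned for job pods
spec:
  serviceAccountName: default          # a service account able to create pods in this namespace
  automountServiceAccountToken: false
  containers:
    - name: worker
      image: quay.io/ansible/awx-ee:latest   # illustrative execution environment image
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
----

Because these pods exist only for the duration of a job, anything the job needs, such as mounted secrets, must be expressed in the pod specification itself.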
-You can execute jobs in a container group only as-needed per playbook. -For more information, see xref:controller-container-groups[Container groups]. +You can run jobs in a container group only as-needed per playbook. +For more information, see link:{URLControllerUserGuide}/controller-instance-and-container-groups#controller-container-groups[Container groups]. -For {ExecEnvShort}s, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#assembly-controller-execution-environments[Execution environments] in the _{ControllerUG}_. +For {ExecEnvShort}s, see link:{URLControllerUserGuide}/assembly-controller-execution-environments[Execution environments]. include::platform/con-controller-instance-groups.adoc[leveloffset=+1] + include::platform/ref-controller-group-policies-automationcontroller.adoc[leveloffset=+2] + include::platform/con-controller-configure-instance-groups.adoc[leveloffset=+2] + include::platform/ref-controller-instance-group-policies.adoc[leveloffset=+2] + include::platform/ref-controller-policy-considerations.adoc[leveloffset=+2] + include::platform/proc-controller-pin-instances.adoc[leveloffset=+2] + include::platform/ref-controller-job-runtime-behavior.adoc[leveloffset=+2] + include::platform/con-controller-control-job-run.adoc[leveloffset=+2] + include::platform/ref-controller-instance-group-capacity.adoc[leveloffset=+2] -include::platform/con-controller-deprovision-instance-groups.adoc[leveloffset=+2] + +include::platform/proc-controller-deprovision-instance-groups.adoc[leveloffset=+2] + include::platform/con-controller-container-groups.adoc[leveloffset=+1] + include::platform/proc-controller-create-container-group.adoc[leveloffset=+2] + +include::platform/proc-controller-create-service-account.adoc[leveloffset=+2] + include::platform/proc-controller-customize-pod-spec.adoc[leveloffset=+2] + include::platform/proc-controller-verify-container-group.adoc[leveloffset=+2] + include::platform/proc-controller-view-container-group-jobs.adoc[leveloffset=+2] + include::platform/ref-controller-kubernetes-API-failure.adoc[leveloffset=+2] + include::platform/ref-controller-container-capacity.adoc[leveloffset=+2] diff --git a/downstream/assemblies/platform/assembly-appendix-inventory-file-vars.adoc b/downstream/assemblies/platform/assembly-appendix-inventory-file-vars.adoc index 70b64619a1..04934674cb 100644 --- a/downstream/assemblies/platform/assembly-appendix-inventory-file-vars.adoc +++ b/downstream/assemblies/platform/assembly-appendix-inventory-file-vars.adoc @@ -1,15 +1,33 @@ +:_mod-docs-content-type: ASSEMBLY + [id="appendix-inventory-files-vars"] = Inventory file variables -The following tables contain information about the pre-defined variables used in Ansible installation inventory files. -Not all of these variables are required. -include::platform/ref-general-inventory-variables.adoc[leveloffset=+1] +The following tables contain information about the variables used in {PlatformNameShort}'s installation `inventory` files. The tables include the variables that you can use for RPM-based installation and {ContainerBase}. 
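As context for the variable tables, inventory variables are set per host group or under `[all:vars]` in the installation `inventory` file. The following minimal sketch shows a few representative, documented variables; the host names and values are placeholders:

[source,ini]
----
[automationcontroller]
controller.example.com

[database]
db.example.com

[all:vars]
admin_password='<password>'
pg_host='db.example.com'
pg_database='awx'
pg_password='<password>'
----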
+ +include::platform/ref-ansible-inventory-variables.adoc[leveloffset=+1] + include::platform/ref-hub-variables.adoc[leveloffset=+1] + +include::platform/ref-controller-variables.adoc[leveloffset=+1] + +include::platform/ref-database-inventory-variables.adoc[leveloffset=+1] + +include::platform/ref-eda-controller-variables.adoc[leveloffset=+1] + +include::platform/ref-general-inventory-variables.adoc[leveloffset=+1] + +include::platform/ref-images-inventory-variables.adoc[leveloffset=+1] + +include::platform/ref-gateway-variables.adoc[leveloffset=+1] + +include::platform/ref-receptor-inventory-variables.adoc[leveloffset=+1] + +include::platform/ref-redis-inventory-variables.adoc[leveloffset=+1] + // SSO variables moved into hub-variables. //include::platform/ref-sso-variables.adoc[leveloffset=+1] -// Catalog removed for 2.4 +// Catalog removed for 2.4 //include::platform/ref-catalog-variables.adoc[leveloffset=+1] -include::platform/ref-controller-variables.adoc[leveloffset=+1] -include::platform/ref-ansible-inventory-variables.adoc[leveloffset=+1] -include::platform/ref-eda-controller-variables.adoc[leveloffset=+1] + diff --git a/downstream/assemblies/platform/assembly-appendix-operator-crs.adoc b/downstream/assemblies/platform/assembly-appendix-operator-crs.adoc new file mode 100644 index 0000000000..93b687ee85 --- /dev/null +++ b/downstream/assemblies/platform/assembly-appendix-operator-crs.adoc @@ -0,0 +1,25 @@ +:_mod-docs-content-type: ASSEMBLY + + +ifdef::context[:parent-context: {context}] + +:context: appendix-operator-crs + +[id="appendix-operator-crs_{context}"] + += Appendix: {PlatformName} custom resources + +[role="_abstract"] + +This appendix provides a reference for the {PlatformNameShort} custom resources for various deployment scenarios. + +[TIP] +==== +You can link in existing components by specifying the component name under the `name` variable. +You can also use `name` to create a custom name for a new component. +==== + +include::platform/ref-operator-crs.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-appendix-troubleshoot-containerized-aap.adoc b/downstream/assemblies/platform/assembly-appendix-troubleshoot-containerized-aap.adoc new file mode 100644 index 0000000000..00013798f5 --- /dev/null +++ b/downstream/assemblies/platform/assembly-appendix-troubleshoot-containerized-aap.adoc @@ -0,0 +1,23 @@ +ifdef::context[:parent-context: {context}] +:_mod-docs-content-type: ASSEMBLY + +[id="troubleshooting-containerized-ansible-automation-platform"] + += Troubleshooting containerized {PlatformNameShort} + +:context: troubleshooting-containerized-aap + +Use this information to troubleshoot your containerized {PlatformNameShort} installation. 
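To make the tip in the custom resources appendix above concrete, the following is a minimal sketch of a custom resource that links in an existing component by `name` and assigns a custom name to the new deployment. The API version, kind, and component names shown are illustrative assumptions; verify them against the reference included in the appendix.

[source,yaml]
----
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: my-aap                    # custom name for the new deployment
spec:
  controller:
    name: existing-controller    # links in an already-deployed controller by name
  eda:
    disabled: true               # skip deploying this component
----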
+ +include::platform/proc-containerized-troubleshoot-gathering-logs.adoc[leveloffset=+1] + +include::platform/ref-containerized-troubleshoot-diagnosing.adoc[leveloffset=+1] + +include::platform/ref-containerized-troubleshoot-install.adoc[leveloffset=+1] + +include::platform/ref-containerized-troubleshoot-config.adoc[leveloffset=+1] + +include::platform/ref-containerized-troubleshoot-ref.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-appendix-troubleshoot-rpm-aap.adoc b/downstream/assemblies/platform/assembly-appendix-troubleshoot-rpm-aap.adoc new file mode 100644 index 0000000000..665d4cdbdb --- /dev/null +++ b/downstream/assemblies/platform/assembly-appendix-troubleshoot-rpm-aap.adoc @@ -0,0 +1,8 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="appendix-troubleshoot-rpm-aap"] += Troubleshooting RPM installation of {PlatformNameShort} + +Use this information to troubleshoot your RPM-based installation of {PlatformNameShort}. + +include::platform/proc-rpm-troubleshoot-generating-logs.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-automation-mesh-operator-aap.adoc b/downstream/assemblies/platform/assembly-automation-mesh-operator-aap.adoc index 189e4da62a..44c9fbeedd 100644 --- a/downstream/assemblies/platform/assembly-automation-mesh-operator-aap.adoc +++ b/downstream/assemblies/platform/assembly-automation-mesh-operator-aap.adoc @@ -1,29 +1,41 @@ +:_mod-docs-content-type: ASSEMBLY + [id="assembly-automation-mesh-operator-aap"] = {AutomationMeshStart} for operator-based {PlatformName} -Scaling your automation mesh is available on OpenShift deployments of {PlatformName} and is possible through adding or removing nodes from your cluster dynamically, using the *Instances* resource of the {ControllerName} UI, without running the installation script. +Scaling your automation mesh is available on OpenShift deployments of {PlatformName}: you can dynamically add or remove nodes from your cluster by using the *Instances* resource of the {PlatformNameShort} UI, without running the installation script. Instances serve as nodes in your mesh topology. {AutomationMeshStart} enables you to extend the footprint of your automation. The location where you launch a job can be different from the location where the ansible-playbook runs. -To manage instances from the {ControllerName} UI, you must have System Administrator or System Auditor permissions. +To manage instances from the {PlatformNameShort} UI, you must have System Administrator or System Auditor permissions. In general, the more processor cores (CPU) and memory (RAM) a node has, the more jobs can be scheduled to run on that node at once. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-jobs#controller-capacity-determination[Automation controller capacity determination and job impact]. +For more information, see link:{URLControllerUserGuide}/controller-jobs#controller-capacity-determination[{ControllerNameStart} capacity determination and job impact].
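Because scheduling depends on the capacity computed for each node, it can be useful to inspect the instances through the API as well as the UI. A minimal sketch, assuming a {PlatformNameShort} 2.5 deployment where the controller API is served under `/api/controller/v2/`; the host and credentials are placeholders:

[source,bash]
----
# List mesh instances, including their computed capacity fields
curl -s -u admin:<password> \
  https://<platform_host>/api/controller/v2/instances/
----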
include::platform/ref-operator-mesh-prerequisites.adoc[leveloffset=+1] + include::platform/proc-set-up-virtual-machines.adoc[leveloffset=+1] + include::platform/proc-define-mesh-node-types.adoc[leveloffset=+1] + include::platform/proc-controller-create-instance-group.adoc[leveloffset=+1] + include::platform/proc-controller-associate-instances-to-instance-group.adoc[leveloffset=+1] + include::platform/proc-run-jobs-on-execution-nodes.adoc[leveloffset=+1] + include::platform/proc-connecting-nodes-through-mesh-ingress.adoc[leveloffset=+1] + include::platform/proc-pulling-the-secret.adoc[leveloffset=+1] + include::platform/ref-removing-instances.adoc[leveloffset=+1] +include::platform/proc-operator-mesh-upgrading-receptors.adoc[leveloffset=+1] + diff --git a/downstream/assemblies/platform/assembly-changing-ssl-certs-keys.adoc b/downstream/assemblies/platform/assembly-changing-ssl-certs-keys.adoc index 29b3712189..a1259f92b2 100644 --- a/downstream/assemblies/platform/assembly-changing-ssl-certs-keys.adoc +++ b/downstream/assemblies/platform/assembly-changing-ssl-certs-keys.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] [id="changing-ssl-certs-keys"] @@ -17,12 +19,14 @@ include::platform/proc-renew-ssl-cert.adoc[leveloffset=+1] == Changing SSL certificates -To change the SSL certificate, you can edit the inventory file and run the installer. -The installer verifies that all {PlatformNameShort} components are working. The installer can take a long time to run. +To change the SSL certificate, you can edit the inventory file and run the installation program. +The installation program verifies that all {PlatformNameShort} components are working. +The installation program can take a long time to run. -Alternatively, you can change the SSL certificates manually. This is quicker, but there is no automatic verification. +Alternatively, you can change the SSL certificates manually. +This is quicker, but there is no automatic verification. -Red Hat recommends that you use the installer to make changes to your {PlatformNameShort} instance. +Red Hat recommends that you use the installation program to make changes to your {PlatformNameShort} instance. 
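As an illustration of the inventory file approach for RPM-based deployments, the certificate and key locations are supplied as inventory variables. The variable names below follow the inventory file variables appendix and the paths are placeholders; verify both against your release before running the installation program.

[source,ini]
----
[all:vars]
# Custom SSL certificate and key for the controller web server
web_server_ssl_cert=/path/to/controller.cert
web_server_ssl_key=/path/to/controller.key

# Optional custom CA certificate for internally signed certificates
custom_ca_cert=/path/to/ca.crt
----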
=== Prerequisites @@ -34,10 +38,9 @@ For further information, see the link:http://nginx.org/en/docs/http/ngx_http_ssl include::platform/proc-change-ssl-installer.adoc[leveloffset=+2] -=== Changing the SSL certificate manually - include::platform/proc-change-ssl-controller.adoc[leveloffset=+3] include::platform/proc-change-ssl-controller-ocp.adoc[leveloffset=+3] +include::platform/proc-change-ssl-hub-ocp.adoc[leveloffset=+3] include::platform/proc-change-ssl-eda-controller.adoc[leveloffset=+3] include::platform/proc-change-ssl-hub.adoc[leveloffset=+3] diff --git a/downstream/assemblies/platform/assembly-choosing-obtaining-installer.adoc b/downstream/assemblies/platform/assembly-choosing-obtaining-installer.adoc index 1966261a37..73e3ecbd6a 100644 --- a/downstream/assemblies/platform/assembly-choosing-obtaining-installer.adoc +++ b/downstream/assemblies/platform/assembly-choosing-obtaining-installer.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + // [id="proc-choosing-obtaining-installer_{context}"] diff --git a/downstream/assemblies/platform/assembly-configure-aap-operator.adoc b/downstream/assemblies/platform/assembly-configure-aap-operator.adoc deleted file mode 100644 index e843928e5a..0000000000 --- a/downstream/assemblies/platform/assembly-configure-aap-operator.adoc +++ /dev/null @@ -1,27 +0,0 @@ -ifdef::context[:parent-context: {context}] - -[id="configure-aap-operator_{context}"] - -:context: configure-aap-operator - -= Configuring the {OperatorPlatform} on {OCP} - -The platform gateway for {PlaformNameShort} enables you to manage the following {PlatformNameShort} components to form a single user interface: - -* {ControllerNameStart} -* {HubNameStart} -* {EDAName} -* {LightspeedShortName} (This feature is disabled by default, you must opt in to use it.) - -Before you can deploy the platform gateway you need to have {OperatorPlatform} installed in a namespace. -If you have not installed {OperatorPlatform} see <>. - -If you have the {OperatorPlatform} and some or all of the {PlatformNameShort} components installed see <> for how to proceed. - -include::platform/proc-operator-link-components.adoc[leveloffset=+1] -include::platform/proc-operator-access-aap.adoc[leveloffset=+1] -include::platform/proc-operator-deploy-central-config.adoc[leveloffset=+1] -include::platform/proc-operator-aap-troubleshooting.adoc[leveloffset=+1] - -ifdef::parent-context[:context: {parent-context}] -ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-configure-controller-OCP.adoc b/downstream/assemblies/platform/assembly-configure-controller-OCP.adoc index 90c1b2e42a..7609217f88 100644 --- a/downstream/assemblies/platform/assembly-configure-controller-OCP.adoc +++ b/downstream/assemblies/platform/assembly-configure-controller-OCP.adoc @@ -1,4 +1,10 @@ -[id="assembly-configure-controller-OCP"] +:_mod-docs-content-type: ASSEMBLY + +ifdef::context[:parent-context: {context}] + +:context: performance-considerations + +[id="assembly-configure-controller-OCP_{Context}"] = Configuring Ansible {ControllerName} on {OCPShort} @@ -6,3 +12,6 @@ During a Kubernetes upgrade, {ControllerName} must be running. 
include::platform/proc-configure-controller-OCP.adoc[leveloffset=+1] +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] + diff --git a/downstream/assemblies/platform/assembly-configure-egress-proxy.adoc b/downstream/assemblies/platform/assembly-configure-egress-proxy.adoc new file mode 100644 index 0000000000..7fe0fbfac1 --- /dev/null +++ b/downstream/assemblies/platform/assembly-configure-egress-proxy.adoc @@ -0,0 +1,26 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="assembly-configure-egress-proxy"] + += Configuring {PlatformNameShort} to use an egress proxy + +You can deploy {PlatformNameShort} so that egress traffic from the platform passes through proxy servers. +An egress proxy enables clients to make indirect requests (through a proxy server) to network services. +The client first connects to the proxy server and requests some resource, for example, email, located on another server. +The proxy server then connects to the specified server and retrieves the resource from it. + + +== Overview +Configure the egress proxy at both the system and component level of {PlatformNameShort}, for all the RPM-based and containerized installation methods. +For containerized installations, configuring the system proxy for Podman on the nodes solves most problems with access through the proxy. +For RPM-based installations, you must configure both the system and the individual components. + +include::platform/ref_proxy-backends.adoc[leveloffset=+2] +include::platform/ref-system-proxy-config.adoc[leveloffset=+1] +include::platform/proc-controller-proxy-settings.adoc[leveloffset=+1] +include::platform/proc-proxy-AWS-inventory-sync.adoc[leveloffset=+1] +include::hub/hub/proc-set-community-remote.adoc[leveloffset=+1] +include::platform/proc-set-EDA-proxy.adoc[leveloffset=+1] +include::platform/ref-automation-mesh-proxy.adoc[leveloffset=+1] + + diff --git a/downstream/assemblies/platform/assembly-configure-hub-storage.adoc b/downstream/assemblies/platform/assembly-configure-hub-storage.adoc new file mode 100644 index 0000000000..1608429921 --- /dev/null +++ b/downstream/assemblies/platform/assembly-configure-hub-storage.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="configuring-storage-for-automation-hub"] +ifdef::context[:parent-context: {context}] + += Configuring storage for {HubName} + +Configure storage backends for {HubName} including Amazon S3, Azure Blob Storage, and Network File System (NFS) storage. + +include::platform/proc-configure-hub-s3-storage.adoc[leveloffset=+1] + +include::platform/proc-configure-hub-azure-storage.adoc[leveloffset=+1] + +include::platform/proc-configure-hub-nfs-storage.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-configuring-proxy-support.adoc b/downstream/assemblies/platform/assembly-configuring-proxy-support.adoc index f4d195f9c8..c89965fed3 100644 --- a/downstream/assemblies/platform/assembly-configuring-proxy-support.adoc +++ b/downstream/assemblies/platform/assembly-configuring-proxy-support.adoc @@ -1,16 +1,17 @@ - +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] - - [id="assembly-configuring-proxy-support"] = Configuring proxy support for {PlatformName} :context: configuring-proxy-support [role="_abstract"] -You can configure {PlatformName} to communicate with traffic using a proxy.
Proxy servers act as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service or available resource from a different server, and the proxy server evaluates the request as a way to simplify and control its complexity. The following sections describe the supported proxy configurations and how to set them up. +You can configure {PlatformName} to communicate with external traffic by using a proxy. +Proxy servers act as intermediaries for requests from clients seeking resources from other servers. +A client connects to the proxy server, requesting some service or available resource from a different server, and the proxy server evaluates the request as a way to simplify and control its complexity. +The following sections describe the supported proxy configurations and how to set them up. include::platform/proc-enable-proxy-support.adoc[leveloffset=+1] @@ -27,7 +28,5 @@ include::platform/con-sticky-sessions.adoc[leveloffset=+1] .Additional resources * Refer to link:https://docs.aws.amazon.com/elasticloadbalancing/latest/application/sticky-sessions.html[Sticky sessions for your Application Load Balancer] for more information about enabling sticky sessions. -include::aap-common/external-site-disclaimer.adoc[] - ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-control-plane-adjustments.adoc b/downstream/assemblies/platform/assembly-control-plane-adjustments.adoc index d50a606f99..9345093feb 100644 --- a/downstream/assemblies/platform/assembly-control-plane-adjustments.adoc +++ b/downstream/assemblies/platform/assembly-control-plane-adjustments.adoc @@ -1,13 +1,20 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] :context: performance-considerations -[id="assembly-control-plane-adjustments"] +[id="assembly-control-plane-adjustments_{context}"] = Control plane adjustments The control plane refers to the {ControllerName} pods which contain the web and task containers that, among other things, provide the user interface and handle the scheduling and launching of jobs. On the {ControllerName} custom resource, the number of _replicas_ determines the number of {ControllerName} pods in the {ControllerName} deployment. include::platform/ref-set-requests-limits-for-containers.adoc[leveloffset=+1] + include::platform/ref-container-resource-requirements.adoc[leveloffset=+1] -include::platform/con-alternative-capacity-limits.adoc[leveloffset=+1] \ No newline at end of file + +include::platform/con-alternative-capacity-limits.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-controller-activity-stream.adoc b/downstream/assemblies/platform/assembly-controller-activity-stream.adoc new file mode 100644 index 0000000000..3d46be6647 --- /dev/null +++ b/downstream/assemblies/platform/assembly-controller-activity-stream.adoc @@ -0,0 +1,23 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="assembly-controller-activity-stream"] + += Activity stream + +* From the navigation panel, select {MenuAEAdminActivityStream}. ++ +image::activity_stream_page.png[Activity stream page] + +An Activity Stream shows all changes for a particular object. +For each change, the Activity Stream shows the time of the event, the user that initiated the event, and the action.
+The information displayed varies depending on the type of event. + +* Click the image:examine.png[Examine,15,15] icon to display the event log for the change. ++ +image:activity_stream_details.png[Activity stream details] + +You can filter the Activity Stream by the initiating user, by system (if it was system initiated), or by any related object, such as a credential, job template, or schedule. +The Activity Stream shows the Activity Stream for the entire instance. +Most pages permit viewing an activity stream filtered for that specific object. + +You can view the activity stream on any page by clicking the btn:[Activity Stream] image:activitystream.png[activitystream,15,15] icon. diff --git a/downstream/assemblies/platform/assembly-controller-api-access-resources.adoc b/downstream/assemblies/platform/assembly-controller-api-access-resources.adoc index fa43a9367c..0813f241ff 100644 --- a/downstream/assemblies/platform/assembly-controller-api-access-resources.adoc +++ b/downstream/assemblies/platform/assembly-controller-api-access-resources.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-api-access-resources"] = Access resources diff --git a/downstream/assemblies/platform/assembly-controller-api-auth-methods.adoc b/downstream/assemblies/platform/assembly-controller-api-auth-methods.adoc index 26fdd2163a..55d8f1c2fc 100644 --- a/downstream/assemblies/platform/assembly-controller-api-auth-methods.adoc +++ b/downstream/assemblies/platform/assembly-controller-api-auth-methods.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-api-auth-methods"] = Authenticating in the API diff --git a/downstream/assemblies/platform/assembly-controller-api-browsing-api.adoc b/downstream/assemblies/platform/assembly-controller-api-browsing-api.adoc index 06be9ad71c..6d9793309c 100644 --- a/downstream/assemblies/platform/assembly-controller-api-browsing-api.adoc +++ b/downstream/assemblies/platform/assembly-controller-api-browsing-api.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-api-browsing-api"] = Browsing with the API diff --git a/downstream/assemblies/platform/assembly-controller-api-conventions.adoc b/downstream/assemblies/platform/assembly-controller-api-conventions.adoc index bab52d9eb6..052308b00e 100644 --- a/downstream/assemblies/platform/assembly-controller-api-conventions.adoc +++ b/downstream/assemblies/platform/assembly-controller-api-conventions.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-api-conventions"] = Use conventions in the API diff --git a/downstream/assemblies/platform/assembly-controller-api-filter.adoc b/downstream/assemblies/platform/assembly-controller-api-filter.adoc index b7ff7fa062..64c7a61b2d 100644 --- a/downstream/assemblies/platform/assembly-controller-api-filter.adoc +++ b/downstream/assemblies/platform/assembly-controller-api-filter.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-api-filter"] = Filtering in the API diff --git a/downstream/assemblies/platform/assembly-controller-api-pagination.adoc b/downstream/assemblies/platform/assembly-controller-api-pagination.adoc index 8f7ee148ed..5f7a57fed8 100644 --- a/downstream/assemblies/platform/assembly-controller-api-pagination.adoc +++ b/downstream/assemblies/platform/assembly-controller-api-pagination.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-api-pagination"] = Using pagination in the API diff --git 
a/downstream/assemblies/platform/assembly-controller-api-readonly-fields.adoc b/downstream/assemblies/platform/assembly-controller-api-readonly-fields.adoc index 4b5e2bb289..8cd8975494 100644 --- a/downstream/assemblies/platform/assembly-controller-api-readonly-fields.adoc +++ b/downstream/assemblies/platform/assembly-controller-api-readonly-fields.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-api-readonly-fields"] = Read-only fields diff --git a/downstream/assemblies/platform/assembly-controller-api-search.adoc b/downstream/assemblies/platform/assembly-controller-api-search.adoc index 3602ed37b7..898669665a 100644 --- a/downstream/assemblies/platform/assembly-controller-api-search.adoc +++ b/downstream/assemblies/platform/assembly-controller-api-search.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-api-search"] = Using the search query string parameter @@ -8,12 +10,12 @@ [literal, options="nowrap" subs="+attributes"] ---- -https:///api/v2/model_verbose_name?search=findme +https://<server_name>/api/v2/model_verbose_name?search=findme ---- * To search across related fields, use the following: [literal, options="nowrap" subs="+attributes"] ---- -https:///api/v2/model_verbose_name?related__search=findme +https://<server_name>/api/v2/model_verbose_name?related__search=findme ---- \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-controller-api-sorting.adoc b/downstream/assemblies/platform/assembly-controller-api-sorting.adoc index 5260b8c5f9..9e6a2ec5d5 100644 --- a/downstream/assemblies/platform/assembly-controller-api-sorting.adoc +++ b/downstream/assemblies/platform/assembly-controller-api-sorting.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-api-sorting"] = Sorting in the API @@ -6,7 +8,7 @@ To give you examples that are easy to follow, we use the following URL throughou [literal, options="nowrap" subs="+attributes"] ---- -https:///api/v2/groups/ +https://<server_name>/api/v2/groups/ ---- include::platform/proc-controller-api-sorting.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-controller-api-tools.adoc b/downstream/assemblies/platform/assembly-controller-api-tools.adoc index 36dc7db970..a5b02ec3e2 100644 --- a/downstream/assemblies/platform/assembly-controller-api-tools.adoc +++ b/downstream/assemblies/platform/assembly-controller-api-tools.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-api-tools"] = Available tools with the API diff --git a/downstream/assemblies/platform/assembly-controller-applications.adoc b/downstream/assemblies/platform/assembly-controller-applications.adoc index b21c272465..1e4b18a773 100644 --- a/downstream/assemblies/platform/assembly-controller-applications.adoc +++ b/downstream/assemblies/platform/assembly-controller-applications.adoc @@ -1,9 +1,23 @@ +:_mod-docs-content-type: ASSEMBLY + [id="assembly-controller-applications"] = Applications Create and configure token-based authentication for external applications such as ServiceNow and Jenkins. -With token-based authentication, external applications can easily integrate with {ControllerName}. +With token-based authentication, external applications can easily integrate with {PlatformNameShort}. + +[IMPORTANT] ==== {ControllerNameStart} OAuth applications on the platform UI are not supported for 2.4 to 2.5 migration. See link:https://access.redhat.com/solutions/7091987[this Knowledgebase article] for more information.
+==== + +As a platform administrator, you can configure a custom external application URL within the platform, providing seamless integration with external services. This functionality is currently available as a Technology Preview. After you configure it, the external application URL appears in the platform UI navigation panel, giving users quick access to the external service from within the platform UI. + +[NOTE] ==== +include::snippets/technology-preview.adoc[] +==== With OAuth 2 you can use tokens to share data with an application without disclosing login information. You can configure these tokens as read-only. @@ -11,9 +25,12 @@ You can create an application that is representative of the external application you are integrating with, then use it to create tokens for the application to use on behalf of its users. Associating these tokens with an application resource enables you to manage all tokens issued for a particular application. -By separating the issue of tokens under *OAuth Applications*, you can revoke all tokens based on the Application without having to revoke all tokens in the system. +By separating the issue of tokens under *OAuth Applications*, you can revoke all tokens based on the application without having to revoke all tokens in the system. include::platform/ref-controller-applications-getting-started.adoc[leveloffset=+1] +include::platform/ref-gw-access-rules-apps-tokens.adoc[leveloffset=+2] +include::platform/ref-gw-application-functions.adoc[leveloffset=+2] +include::platform/ref-gw-request-token-after-expiration.adoc[leveloffset=+3] include::platform/proc-controller-create-application.adoc[leveloffset=+1] //include::platform/ref-controller-apps-add-tokens.adoc[leveloffset=+2] -include::platform/proc-controller-apps-create-tokens.adoc[leveloffset=+2] + diff --git a/downstream/assemblies/platform/assembly-controller-awx-manage-utility.adoc b/downstream/assemblies/platform/assembly-controller-awx-manage-utility.adoc index 593ab7c1b5..245b649c1b 100644 --- a/downstream/assemblies/platform/assembly-controller-awx-manage-utility.adoc +++ b/downstream/assemblies/platform/assembly-controller-awx-manage-utility.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="assembly-controller-awx-manage-utility"] = The _awx-manage_ Utility diff --git a/downstream/assemblies/platform/assembly-controller-best-practices.adoc b/downstream/assemblies/platform/assembly-controller-best-practices.adoc index 18810f439b..b672e69297 100644 --- a/downstream/assemblies/platform/assembly-controller-best-practices.adoc +++ b/downstream/assemblies/platform/assembly-controller-best-practices.adoc @@ -1,61 +1,21 @@ +:_mod-docs-content-type: ASSEMBLY + [id="assembly-controller-best-practices"] = Best practices for {ControllerName} The following describes best practice for the use of {ControllerName}: -== Use Source Control - -{ControllerNameStart} supports playbooks stored directly on the server. Therefore, you must store your playbooks, roles, and any associated details in source control. -This way you have an audit trail describing when and why you changed the rules that are automating your infrastructure. -Additionally, it permits sharing of playbooks with other parts of your infrastructure or team.
- -== Ansible file and directory structure - -If you are creating a common set of roles to use across projects, these should be accessed through source control submodules, or a common location such as `/opt`. -Projects should not expect to import roles or content from other projects. - -For more information, see the link https://docs.ansible.com/ansible/latest/tips_tricks/ansible_tips_tricks.html[General tips] from the Ansible documentation. - -[NOTE] -==== -* Avoid using the playbooks `vars_prompt` feature, as {ControllerName} does not interactively permit `vars_prompt` questions. -If you cannot avoid using `vars_prompt`, see the xref:controller-surveys-in-job-templates[Surveys] functionality. - -* Avoid using the playbooks `pause` feature without a timeout, as {ControllerName} does not permit canceling a pause interactively. -If you cannot avoid using `pause`, you must set a timeout. -==== - -Jobs use the playbook directory as the current working directory, although jobs must be coded to use the `playbook_dir` variable rather -than relying on this. - -== Use Dynamic Inventory Sources - -If you have an external source of truth for your infrastructure, whether it is a cloud provider or a local CMDB, it is best to define an -inventory sync process and use the support for dynamic inventory (including cloud inventory sources). -This ensures your inventory is always up to date. - -[NOTE] -==== -Edits and additions to Inventory host variables persist beyond an inventory synchronization as long as `--overwrite_vars` is *not* set. -==== - -== Variable Management for Inventory - -Keep variable data with the hosts and groups definitions (see the inventory editor), rather than using `group_vars/` and `host_vars/`. -If you use dynamic inventory sources, {ControllerName} can synchronize such variables with the database as long as the *Overwrite Variables* option is not set. +include::platform/ref-controller-source-control.adoc[leveloffset=+1] -== Autoscaling +include::platform/ref-controller-file-directory-structure.adoc[leveloffset=+1] -Use the "callback" feature to permit newly booting instances to request configuration for auto-scaling scenarios or provisioning integration. +include::platform/ref-controller-use-dynamic-inv-sources.adoc[leveloffset=+1] -== Larger Host Counts +include::platform/ref-controller-inv-variable-management.adoc[leveloffset=+1] -Set "forks" on a job template to larger values to increase parallelism of execution runs. -For more information on tuning Ansible, see link:https://www.ansible.com/blog/ansible-performance-tuning[the Ansible blog]. +include::platform/ref-controller-autoscaling.adoc[leveloffset=+1] -== Continuous integration / Continuous Deployment +include::platform/ref-controller-large-host-counts.adoc[leveloffset=+1] -For a Continuous Integration system, such as Jenkins, to spawn a job, it must make a `curl` request to a job template. -The credentials to the job template must not require prompting for any particular passwords. -For configuration and use instructions, see link:https://docs.ansible.com/automation-controller/latest/html/controllercli/usage.html[Installation] in the Ansible documentation. 
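As a hedged sketch of that CI integration pattern, the launch request is a single `POST` to the job template's launch endpoint. The job template ID and token below are placeholders, and behind the {PlatformNameShort} 2.5 gateway the path prefix may be `/api/controller/v2/` rather than `/api/v2/`:

[source,bash]
----
# Launch job template 42 (illustrative ID) using an OAuth bearer token
curl -X POST \
  -H "Authorization: Bearer <oauth_token>" \
  https://<controller_host>/api/v2/job_templates/42/launch/
----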
\ No newline at end of file +include::platform/ref-controller-continuous-integration.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-controller-credentials.adoc b/downstream/assemblies/platform/assembly-controller-credentials.adoc index 514a24f829..cbed9c44d6 100644 --- a/downstream/assemblies/platform/assembly-controller-credentials.adoc +++ b/downstream/assemblies/platform/assembly-controller-credentials.adoc @@ -1,16 +1,6 @@ -[id="controller-credentials"] - -ifdef::controller-GS[] -= Managing credentials - - -Credentials authenticate the controller user to launch Ansible playbooks. The passwords and SSH keys are used to authenticate against inventory hosts. -By using the credentials feature of {ControllerName}, you can require the {ControllerName} user to enter a password or key phrase when a playbook launches. +:_mod-docs-content-type: ASSEMBLY -include::platform/proc-controller-create-credential.adoc[leveloffset=+1] -include::platform/proc-controller-edit-credential.adoc[leveloffset=+1] -endif::controller-GS[] -ifdef::controller-UG[] +[id="controller-credentials"] = Managing user credentials @@ -22,50 +12,99 @@ If a user moves to a different team or leaves the organization, you do not have [NOTE] ==== {ControllerNameStart} encrypts passwords and key information in the database and never makes secret information visible through the API. -For further information, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#doc-wrapper[_{ControllerAG}_]. +For further information, see link:{URLControllerAdminGuide}[_{ControllerAG}_]. ==== -== How credentials work -{ControllerNameStart} uses SSH to connect to remote hosts. -To pass the key from {ControllerName} to SSH, the key must be decrypted before it can be written to a named pipe. -{ControllerNameStart} uses that pipe to send the key to SSH, so that the key is never written to disk. -If passwords are used, {ControllerName} handles them by responding directly to the password prompt and decrypting the password before writing it to the prompt. 
- //Removed as part of editorial review - include::platform/ref-controller-credentials-getting-started.adoc[leveloffset=+1] +include::platform/con-controller-how-credentials-work.adoc[leveloffset=+1] + include::platform/proc-controller-create-credential.adoc[leveloffset=+1] + include::platform/proc-controller-add-users-job-templates.adoc[leveloffset=+1] + include::platform/ref-controller-credential-types.adoc[leveloffset=+1] + include::platform/ref-controller-credential-aws.adoc[leveloffset=+2] + +include::platform/ref-controller-access-ec2-credentials-in-playbook.adoc[leveloffset=+3] + include::platform/ref-controller-credential-galaxy-hub.adoc[leveloffset=+2] +//AWS Secrets Manager Lookup +include::platform/ref-controller-aws-secrets-lookup.adoc[leveloffset=+2] +//Bitbucket +include::platform/ref-controller-credential-bitbucket.adoc[leveloffset=+2] + include::platform/ref-controller-credential-centrify-vault.adoc[leveloffset=+2] + include::platform/ref-controller-credential-container-registry.adoc[leveloffset=+2] + include::platform/ref-controller-credential-cyberark-central.adoc[leveloffset=+2] + include::platform/ref-controller-credential-cyberark-conjur.adoc[leveloffset=+2] + include::platform/ref-controller-credential-gitHub-pat.adoc[leveloffset=+2] + include::platform/ref-controller-credential-gitLab-pat.adoc[leveloffset=+2] + include::platform/ref-controller-credential-GCE.adoc[leveloffset=+2] + +include::platform/con-controller-access-GCE-in-a-playbook.adoc[leveloffset=+3] + include::platform/ref-controller-credential-GPG-public-key.adoc[leveloffset=+2] + include::platform/ref-controller-credential-hashiCorp-secret.adoc[leveloffset=+2] + include::platform/ref-controller-credential-hashiCorp-vault.adoc[leveloffset=+2] + include::platform/ref-controller-credential-insights.adoc[leveloffset=+2] + include::platform/ref-controller-credential-machine.adoc[leveloffset=+2] + +include::platform/con-controller-access-machine-credentials-playbook.adoc[leveloffset=+3] + include::platform/ref-controller-credential-azure-key.adoc[leveloffset=+2] + include::platform/ref-controller-credential-azure-resource.adoc[leveloffset=+2] + +include::platform/ref-controller-access-azure-resources-in-playbook.adoc[leveloffset=+3] + include::platform/ref-controller-credential-network.adoc[leveloffset=+2] + +include::platform/ref-controller-access-network-creds-playbook.adoc[leveloffset=+3] + +include::platform/ref-controller-multiple-connection-protocols.adoc[leveloffset=+3] + include::platform/ref-controller-credential-openShift.adoc[leveloffset=+2] + include::platform/proc-controller-credential-create-openshift-account.adoc[leveloffset=+3] + include::platform/ref-controller-credential-openStack.adoc[leveloffset=+2] + include::platform/ref-controller-credential-aap.adoc[leveloffset=+2] + +include::platform/ref-controller-access-controller-creds-in-playbook.adoc[leveloffset=+3] + include::platform/ref-controller-credential-satellite.adoc[leveloffset=+2] + include::platform/ref-controller-credential-virtualization.adoc[leveloffset=+2] + +include::platform/ref-controller-access-virt-creds-in-playbook.adoc[leveloffset=+3] + include::platform/ref-controller-credential-source-control.adoc[leveloffset=+2] + //The following Terraform module is for 2.5 only: include::platform/ref-controller-credential-terraform.adoc[leveloffset=+2] + include::platform/ref-controller-credential-thycotic-vault.adoc[leveloffset=+2] + include::platform/ref-controller-credential-thycotic-server.adoc[leveloffset=+2] + 
include::platform/ref-controller-credential-vault.adoc[leveloffset=+2] + include::platform/ref-controller-credential-vmware-vcenter.adoc[leveloffset=+2] + +include::platform/ref-controller-access-vmware-creds-in-playbook.adoc[leveloffset=+3] + include::platform/ref-controller-use-credentials-in-playbooks.adoc[leveloffset=+1] -endif::controller-UG[] diff --git a/downstream/assemblies/platform/assembly-controller-custom-credentials.adoc b/downstream/assemblies/platform/assembly-controller-custom-credentials.adoc index 05d011c101..3f595b5965 100644 --- a/downstream/assemblies/platform/assembly-controller-custom-credentials.adoc +++ b/downstream/assemblies/platform/assembly-controller-custom-credentials.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="assembly-controller-custom-credentials"] = Custom credential types diff --git a/downstream/assemblies/platform/assembly-controller-ee-setup-reference.adoc b/downstream/assemblies/platform/assembly-controller-ee-setup-reference.adoc index 9c50b1bf6d..618b738e28 100644 --- a/downstream/assemblies/platform/assembly-controller-ee-setup-reference.adoc +++ b/downstream/assemblies/platform/assembly-controller-ee-setup-reference.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="assembly-controller-ee-setup-reference"] = Execution Environment Setup Reference @@ -5,18 +7,20 @@ This section contains reference information associated with the definition of an {ExecEnvShort}. You define the content of your {ExecEnvShort} in a YAML file. By default, this file is called `execution_environment.yml`. -This file tells Ansible Builder how to create the build instruction file (Containerfile for Podman, Dockerfile for Docker) and build context for your container image. +This file tells {Builder} how to create the build instruction file (Containerfile for Podman, Dockerfile for Docker) and build context for your container image. [NOTE] ==== -The definition schema for Ansible Builder 3.x is documented here. -If you are running an older version of Ansible Builder, you need an older schema version. +The definition schema for {Builder} 3.x is documented here. +If you are running an older version of {Builder}, you need an older schema version. For more information, see older versions of link:https://ansible.readthedocs.io/projects/builder/en/latest/[this] documentation. We recommend using version 3, which offers substantially more configurable options and functionality than previous versions. ==== include::platform/ref-controller-ee-definition.adoc[leveloffset=+1] + include::platform/ref-controller-ee-configuration-options.adoc[leveloffset=+1] + include::platform/ref-controller-awx-default-ee.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-controller-execution-environments.adoc b/downstream/assemblies/platform/assembly-controller-execution-environments.adoc index e480ddf25e..08ac13f32f 100644 --- a/downstream/assemblies/platform/assembly-controller-execution-environments.adoc +++ b/downstream/assemblies/platform/assembly-controller-execution-environments.adoc @@ -1,15 +1,28 @@ +:_mod-docs-content-type: ASSEMBLY + [id="assembly-controller-execution-environments"] = Execution environments -Unlike legacy virtual environments, {ExecEnvShort}s are container images that make it possible to incorporate system-level dependencies and collection-based content. +{ExecEnvShort}s are container images that make it possible to incorporate system-level dependencies and collection-based content. 
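Returning to the {ExecEnvShort} definition file described in the setup reference above, the following is a minimal sketch of a version 3 `execution-environment.yml`. The base image and dependencies are illustrative assumptions only:

[source,yaml]
----
---
version: 3
images:
  base_image:
    # Illustrative base image; use the minimal execution environment image for your release
    name: registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel9:latest
dependencies:
  galaxy:
    collections:
      - ansible.posix        # collection content baked into the image
  python:
    - pytz                   # Python requirement
  system:
    - openssh-clients        # system-level package
----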
Each {ExecEnvShort} enables you to have a customized image to run jobs and has only what is necessary when running the job. +The default view for the *Execution Environments* page is collapsed (*Compact*) with the {ExecEnvShort} name and its image name. Selecting the {ExecEnvShort} name expands the entry for more information. + +For each {ExecEnvShort} listed, you can use the image:leftpencil.png[Edit,15,15] icon to edit the properties for the selected {ExecEnvShort} or the {MoreActionsIcon} to duplicate the {ExecEnvShort}. These are also available from the *Details* page. + include::platform/ref-controller-build-exec-envs.adoc[leveloffset=+1] + include::platform/ref-controller-install-builder.adoc[leveloffset=+2] + include::platform/ref-controller-building-exec-env.adoc[leveloffset=+2] + include::platform/ref-controller-run-the-builder.adoc[leveloffset=+2] + include::platform/con-controller-ee-mount-options.adoc[leveloffset=+2] + include::platform/proc-controller-troubleshoot-ee-mount.adoc[leveloffset=+3] + include::platform/proc-controller-ee-mount-execution-node.adoc[leveloffset=+3] + include::platform/proc-controller-use-an-exec-env.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-controller-glossary.adoc b/downstream/assemblies/platform/assembly-controller-glossary.adoc index 20e99ecef1..42ac109a2c 100644 --- a/downstream/assemblies/platform/assembly-controller-glossary.adoc +++ b/downstream/assemblies/platform/assembly-controller-glossary.adoc @@ -1,58 +1,61 @@ +:_mod-docs-content-type: ASSEMBLY + [id="assembly-controller-glossary"] = Glossary -[discrete] -=== Ad Hoc +Ad hoc:: + _Ad hoc_ refers to using Ansible to perform a quick command, using /usr/bin/ansible, rather than the orchestration language, which is `/usr/bin/ansible-playbook`. An example of an ad hoc command might be rebooting 50 machines in your infrastructure. Anything you can do ad hoc can be accomplished by writing a Playbook. Playbooks can also glue lots of other operations together. -[discrete] -=== Callback Plugin +Automation mesh:: +Describes a network comprising nodes. +Communication between nodes is established at the transport layer by protocols such as TCP, UDP, or Unix sockets. + +See also *Node*. + +Callback Plugin:: + Refers to user-written code that can intercept results from Ansible and act on them. Some examples in the GitHub project perform custom logging, send email, or play sound effects. -[discrete] -=== Control Groups +Control Groups:: + Also known as '_cgroups_', a control group is a feature in the Linux kernel that enables resources to be grouped and allocated to run processes. In addition to assigning resources to processes, cgroups can also report use of resources by all processes running inside of the cgroup. -[discrete] -=== Check Mode +Check Mode:: + Refers to running Ansible with the `--check` option, which does not make any changes on the remote systems, but only outputs the changes that might occur if the command ran without this flag. This is analogous to so-called "dry run" modes in other systems. However, this does not take into account unexpected command failures or cascade effects (which is true of similar modes in other systems). Use Check mode to get an idea of what might happen, but it is not a substitute for a good staging environment. -[discrete] -=== Container Groups +Container Groups:: + Container Groups are a type of Instance Group that specify a configuration for provisioning a pod in a Kubernetes or OpenShift cluster where a job is run.
These pods are provisioned on-demand and exist only for the duration of the playbook run. -[discrete] -=== Credentials +Credentials:: Authentication details that can be used by {ControllerName} to launch jobs against machines, to synchronize with inventory sources, and to import project content from a version control system. -For more information, see xref:controller-credentials[Credentials]. +For more information, see link:{URLControllerUserGuide}/controller-credentials[Credentials]. -[discrete] -=== Credential Plugin +Credential Plugin:: Python code that contains definitions for an external credential type, its metadata fields, and the code needed for interacting with a secret management system. -[discrete] -=== Distributed Job +Distributed Job:: A job that consists of a job template, an inventory, and slice size. When executed, a distributed job slices each inventory into a number of "slice size" chunks, which are then used to run smaller job slices. -[discrete] -=== External Credential Type +External Credential Type:: A managed credential type used for authenticating with a secret management system. -[discrete] -=== Facts +Facts:: Facts are things that are discovered about remote nodes. While they can be used in playbooks and templates just like variables, facts are things that are inferred, rather than set. Facts are automatically discovered when running plays by executing the internal setup module on the remote nodes. @@ -60,100 +63,76 @@ You never have to call the setup module explicitly: it just runs. It can be disabled to save time if it is not required. For the convenience of users who are switching from other configuration management systems, the fact module also pulls in facts from the `ohai` and `facter` tools if they are installed, which are fact libraries from Chef and Puppet, respectively. -[discrete] -=== Forks +Forks:: Ansible and {ControllerName} communicate with remote nodes in parallel. The level of parallelism can be set in several ways during the creation or editing of a Job Template, by passing `--forks`, or by editing the default in a configuration file. The default is a very conservative five forks, though if you have a lot of RAM, you can set this to a higher value, such as 50, for increased parallelism. -[discrete] -=== Group +Group:: A set of hosts in Ansible that can be addressed as a set, of which many can exist within a single Inventory. -[discrete] -=== Group Vars +Group Vars:: The `group_vars/` files are files that are stored in a directory with an inventory file, with an optional filename named after each group. This is a convenient place to put variables that are provided to a given group, especially complex data structures, so that these variables do not have to be embedded in the inventory file or playbook. -[discrete] -=== Handlers +Handlers:: Handlers are like regular tasks in an Ansible playbook (see *Tasks*), but are only run if the Task contains a "notify" directive and also indicates that it changed something. For example, if a configuration file is changed then the task referencing the configuration file templating operation might notify a service restart handler. This means services can be bounced only if they need to be restarted. Handlers can be used for things other than service restarts, but service restarts are the most common use. -[discrete] -=== Host +Host:: A system managed by {ControllerName}, which may include a physical, virtual, or cloud-based server, or other device (typically an operating system instance). Hosts are contained in an Inventory. Sometimes referred to as a "node".
-[discrete] -=== Host Specifier +Host Specifier:: Each Play in Ansible maps a series of tasks (which define the role, purpose, or orders of a system) to a set of systems. This "hosts:" directive in each play is often called the hosts specifier. It can select one system, many systems, one or more groups, or hosts that are in one group and explicitly not in another. -[discrete] -=== Instance Group +Instance Group:: A group that contains instances for use in a clustered environment. An instance group provides the ability to group instances based on policy. -[discrete] -=== Inventory +Inventory:: A collection of hosts against which Jobs can be launched. -[discrete] -=== Inventory Script +Inventory Script:: A program that looks up hosts, group membership for hosts, and variable information from an external resource, whether that be a SQL database, a CMDB solution, or LDAP. This concept was adapted from Puppet (where it is called an "External Nodes Classifier") and works in a similar way. -[discrete] -=== Inventory Source +Inventory Source:: Information about a cloud or other script to be merged into the current inventory group, resulting in the automatic population of Groups, Hosts, and variables about those groups and hosts. -[discrete] -=== Job +Job:: One of many background tasks launched by {ControllerName}, this is usually the instantiation of a Job Template, such as the launch of an Ansible playbook. Other types of jobs include inventory imports, project synchronizations from source control, or administrative cleanup actions. -[discrete] -=== Job Detail +Job Detail:: The history of running a particular job, including its output and success/failure status. -[discrete] -=== Job Slice +Job Slice:: See *Distributed Job*. -[discrete] -=== Job Template -The combination of an Ansible playbook and the set of parameters required to launch it. For more information, see xref:controller-job-templates[Job templates]. +Job Template:: +The combination of an Ansible playbook and the set of parameters required to launch it. For more information, see link:{URLControllerUserGuide}/controller-job-templates[Job templates]. -[discrete] -=== JSON +JSON:: JSON is a text-based format for representing structured data based on JavaScript object syntax. Ansible and {ControllerName} use JSON for return data from remote modules. This enables modules to be written in any language, not just Python. -[discrete] -=== Mesh -Describes a network comprising of nodes. -Communication between nodes is established at the transport layer by protocols such as TCP, UDP or Unix sockets. - -See also, *Node*. - -[discrete] -=== Metadata +Metadata:: Information for locating a secret in the external system once authenticated. The user provides this information when linking an external credential to a target credential field. -[discrete] -=== Node +Node:: A node corresponds to entries in the instance database model, or the `/api/v2/instances/` endpoint, and is a machine participating in the cluster or mesh. The unified jobs API reports `controller_node` and `execution_node` fields. The execution node is where the job runs, and the controller node interfaces between the job and server functions.
-
+
++
[cols="10%,70%",options="header",]
|====
| Node Type | Description
@@ -163,117 +142,93 @@ The execution node is where the job runs, and the controller node interfaces bet
| Execution | Nodes that run jobs delivered from control nodes (jobs submitted from the user's Ansible automation)
|====

-[discrete]
-=== Notification Template
+Notification Template::
An instance of a notification type (such as Email, Slack, or Webhook) with a name, description, and a defined configuration.

-[discrete]
-=== Notification
+Notification::
A Notification, such as Email, Slack, or a Webhook, has a name, description, and configuration defined in a Notification template.
For example, when a job fails, a notification is sent using the configuration defined by the notification template.

-[discrete]
-=== Notify
+Notify::
The act of a task registering a change event and informing a handler task that another action needs to be run at the end of the play.
If a handler is notified by multiple tasks, it is still only run once.
Handlers are run in the order they are listed, not in the order that they are notified.

-[discrete]
-=== Organization
+Organization::
A logical collection of Users, Teams, Projects, and Inventories.
Organization is the highest level in the object hierarchy.

-[discrete]
-=== Organization Administrator
+Organization Administrator::
A user with the rights to modify the Organization's membership and settings, including creating new users and projects within that organization.
An organization administrator can also grant permissions to other users within the organization.

-[discrete]
-=== Permissions
+Permissions::
The set of privileges assigned to Users and Teams that provide the ability to read, modify, and administer Projects, Inventories, and other objects.

-[discrete]
-=== Plays
+Plays::
A play is minimally a mapping between a set of hosts selected by a host specifier (usually chosen by groups, but sometimes by hostname globs) and the tasks which run on those hosts to define the role that those systems perform.

A playbook is a list of plays.
There can be one or many plays in a playbook.

-[discrete]
-=== Playbook
-An Ansible playbook. For more information, see link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_intro.html#[Ansible playbooks].
+Playbook::
+An Ansible playbook. For more information, see link:{URLPlaybooksGettingStarted}/index[Getting started with playbooks].

-[discrete]
-=== Policy
+Policy::
Policies dictate how instance groups behave and how jobs are executed.

-[discrete]
-=== Project
+Project::
A logical collection of Ansible playbooks, represented in {ControllerName}.

-[discrete]
-=== Roles
+Roles::
Roles are units of organization in Ansible and {ControllerName}.
Assigning a role to a group of hosts (or a set of groups, or host patterns, etc.) implies that they implement a specific behavior.
A role can include applying variable values, tasks, and handlers, or a combination of these things.
Because of the file structure associated with a role, roles become redistributable units that enable you to share behavior among playbooks, or with other users.

-[discrete]
-=== Secret Management System
+Secret Management System::
A server or service for securely storing and controlling access to tokens, passwords, certificates, encryption keys, and other sensitive data.

-[discrete]
-=== Schedule
+Schedule::
The calendar of dates and times for which a job should run automatically.

-[discrete]
-=== Sliced Job
+Sliced Job::
See *Distributed Job*.
-[discrete]
-=== Source Credential
+Source Credential::
An external credential that is linked to the field of a target credential.

-[discrete]
-=== Sudo
+Sudo::
Ansible does not require root logins and, since it is daemonless, does not require root level daemons (which can be a security concern in sensitive environments).
Ansible can log in and perform many operations wrapped in a `sudo` command, and can work with both password-less and password-based sudo.
Some operations that do not normally work with `sudo` (such as `scp` file transfer) can be achieved with Ansible's _copy_, _template_, and _fetch_ modules while running in `sudo` mode.

-[discrete]
-=== Superuser
+Superuser::
An administrator of the server who has permission to edit any object in the system, whether or not it is associated with any organization.
Superusers can create organizations and other superusers.

-[discrete]
-=== Survey
+Survey::
Questions asked by a job template at job launch time, configurable on the job template.

-[discrete]
-=== Target Credential
+Target Credential::
A non-external credential with an input field that is linked to an external credential.

-[discrete]
-=== Team
+Team::
A sub-division of an Organization with associated Users, Projects, Credentials, and Permissions.
Teams provide a means to implement role-based access control schemes and delegate responsibilities across Organizations.

-[discrete]
-=== User
+User::
An operator with associated permissions and credentials.

-[discrete]
-=== Webhook
+Webhook::
Webhooks enable communication and information sharing between applications.
They are used to respond to commits pushed to SCMs and launch job templates or workflow templates.

-[discrete]
-=== Workflow Job Template
+Workflow Job Template::
A set consisting of any combination of job templates, project syncs, and inventory syncs, linked together in order to execute them as a single unit.

-[discrete]
-=== YAML
+YAML::
A human-readable language that is often used for writing configuration files.
Ansible and {ControllerName} use YAML to define playbook configuration languages and also variable files.
YAML has a minimum of syntax, is very clean, and is easy for people to skim.
diff --git a/downstream/assemblies/platform/assembly-controller-hosts.adoc b/downstream/assemblies/platform/assembly-controller-hosts.adoc
new file mode 100644
index 0000000000..f02741d510
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-controller-hosts.adoc
@@ -0,0 +1,19 @@
+:_mod-docs-content-type: ASSEMBLY
+
+[id="assembly-controller-hosts"]
+
+= Hosts
+
+A host is a system managed by {PlatformNameShort}, which can be a physical, virtual, or cloud-based server, or other device.
+
+Typically a host is an operating system instance.
+
+Hosts are grouped in inventories and are sometimes referred to as “nodes”.
+
+Ansible works against multiple managed nodes or “hosts” in your infrastructure at the same time, using a list or group of lists known as an inventory.
+
+After your inventory is defined, use patterns to select the hosts or groups you want Ansible to run against.
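+For example, a minimal INI-format inventory sketch, using placeholder group and host names, might look like this:
+
+----
+[webservers]
+web1.example.com
+web2.example.com
+
+[databases]
+db1.example.com
+----
+
+With an inventory like this, the pattern `webservers` selects only the two web hosts, while the pattern `all` selects every host in the inventory.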
+
+include::platform/proc-controller-create-host.adoc[leveloffset=+1]
+
+include::platform/proc-controller-view-host.adoc[leveloffset=+1]
\ No newline at end of file
diff --git a/downstream/assemblies/platform/assembly-controller-improving-performance.adoc b/downstream/assemblies/platform/assembly-controller-improving-performance.adoc
index 8b59771d40..fa6f37f1ab 100644
--- a/downstream/assemblies/platform/assembly-controller-improving-performance.adoc
+++ b/downstream/assemblies/platform/assembly-controller-improving-performance.adoc
@@ -1,36 +1,68 @@
+:_mod-docs-content-type: ASSEMBLY
+
[id="assembly-controller-improving-performance"]

= Performance tuning for {ControllerName}

Tune your {ControllerName} to optimize performance and scalability.
When planning your workload, ensure that you identify your performance and scaling needs, adjust for any limitations, and monitor your deployment.

-{ControllerNameStart} is a distributed system with multiple components that you can tune, including the following:
+{ControllerNameStart} is a distributed system with many components that you can tune, including the following:

* Task system in charge of scheduling jobs
* Control Plane in charge of controlling jobs and processing output
* Execution plane where jobs run
* Web server in charge of serving the API
-* Websocket system that serve and broadcast websocket connections and data
-* Database used by multiple components
+* WebSocket system that serves and broadcasts WebSocket connections and data
+* Database used by many components
+
+include::platform/con-websocket-setup.adoc[leveloffset=+1]
+
+include::platform/proc-configuring-discovery.adoc[leveloffset=+2]

include::platform/ref-controller-capacity-planning.adoc[leveloffset=+1]
+
include::platform/ref-controller-workload-characteristics.adoc[leveloffset=+2]
+
include::platform/ref-controller-node-types.adoc[leveloffset=+2]
+
include::platform/ref-scaling-control-nodes.adoc[leveloffset=+3]
+
include::platform/ref-scaling-execution-nodes.adoc[leveloffset=+3]
+
include::platform/ref-scaling-hop-nodes.adoc[leveloffset=+3]
+
include::platform/ref-ratio-control-execution.adoc[leveloffset=+3]
+
include::platform/ref-controller-capacity-planning-exercise.adoc[leveloffset=+1]
+
include::platform/ref-controller-performance-troubleshooting.adoc[leveloffset=+1]
+
include::platform/con-controller-metrics-monitor-controller.adoc[leveloffset=+1]
+
include::platform/ref-controller-database-settings.adoc[leveloffset=+1]
+
+include::platform/ref-encrypting-plaintext-passwords.adoc[leveloffset=+2]
+
+include::platform/proc-create-password-hashes.adoc[leveloffset=+3]
+
+include::platform/proc-encrypt-postgres-password.adoc[leveloffset=+3]
+
include::platform/con-controller-tuning.adoc[leveloffset=+1]
+
include::platform/proc-controller-managing-live-events.adoc[leveloffset=+2]
+
include::platform/proc-controller-disabling-live-events.adoc[leveloffset=+3]
+
include::platform/ref-controller-settings-to-modify-events.adoc[leveloffset=+3]
+
include::platform/ref-controller-settings-job-events.adoc[leveloffset=+2]
+
include::platform/ref-controller-settings-control-execution-nodes.adoc[leveloffset=+2]
+
include::platform/ref-controller-capacity-instance-container.adoc[leveloffset=+2]
+
include::platform/ref-controller-settings-scheduling-jobs.adoc[leveloffset=+2]
+
include::platform/ref-controller-internal-cluster-routing.adoc[leveloffset=+2]
-include::platform/ref-controller-web-service-tuning.adoc[leveloffset=+2]
\ No newline at end of file
+
+include::platform/ref-controller-web-service-tuning.adoc[leveloffset=+2]
diff --git a/downstream/assemblies/platform/assembly-controller-instances.adoc b/downstream/assemblies/platform/assembly-controller-instances.adoc
index be6a9a22df..a05f849f05 100644
--- a/downstream/assemblies/platform/assembly-controller-instances.adoc
+++ b/downstream/assemblies/platform/assembly-controller-instances.adoc
@@ -1,24 +1,33 @@
+:_mod-docs-content-type: ASSEMBLY
+
[id="assembly-controller-instances"]

= Managing capacity with Instances

-Scaling your {AutomationMesh} is available on OpenShift deployments of {PlatformName} and is possible through adding or removing nodes from your cluster dynamically, using the *Instances* resource of the {ControllerName} UI, without running the installation script.
+Scaling your {AutomationMesh} is available on OpenShift deployments of {PlatformName}: you can dynamically add or remove nodes from your cluster by using the *Instances* resource of the UI, without running the installation script.

Instances serve as nodes in your mesh topology.
{AutomationMeshStart} enables you to extend the footprint of your automation.
The location where you launch a job can be different from the location where the ansible-playbook runs.

-To manage instances from the {ControllerName} UI, you must have System Administrator or System Auditor permissions.
+To manage instances from the UI, you must have System Administrator or System Auditor permissions.

In general, the more processor cores (CPU) and memory (RAM) a node has, the more jobs can be scheduled to run on that node at once.

-For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-jobs#controller-capacity-determination[Automation controller capacity determination and job impact].
+For more information, see link:{URLControllerUserGuide}/index#controller-capacity-determination[Automation controller capacity determination and job impact].

//include::platform/ref-instances-prerequisites.adoc[leveloffset=+1]
+
include::platform/ref-operator-mesh-prerequisites.adoc[leveloffset=+1]
+
include::platform/proc-controller-pulling-secret.adoc[leveloffset=+1]
+
include::platform/proc-set-up-virtual-machines.adoc[leveloffset=+1]
+
//include::platform/proc-controller-manage-instances.adoc[leveloffset=+1]
+
include::platform/proc-define-mesh-node-types.adoc[leveloffset=+1]
+
//include::platform/proc-controller-adding-an-instance.adoc[leveloffset=+1]
+
include::platform/ref-removing-instances.adoc[leveloffset=+1]
diff --git a/downstream/assemblies/platform/assembly-controller-inventories.adoc b/downstream/assemblies/platform/assembly-controller-inventories.adoc
index 82e672448f..1dbad86662 100644
--- a/downstream/assemblies/platform/assembly-controller-inventories.adoc
+++ b/downstream/assemblies/platform/assembly-controller-inventories.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: ASSEMBLY
+
ifdef::context[:parent-context: {context}]

[id="controller-inventories"]
@@ -7,21 +9,21 @@
= Inventories

-ifdef::controller-GS[]
-An inventory is a collection of hosts managed by {ControllerName}.
-Organizations are assigned to inventories, while permissions to launch playbooks against inventories are controlled at the user or team level.
+//ifdef::controller-GS[]
+//An inventory is a collection of hosts managed by {ControllerName}.
+//Organizations are assigned to inventories, while permissions to launch playbooks against inventories are controlled at the user or team level. -For more information, see the following documentation: +//For more information, see the following documentation: -* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#proc-controller-user-permissions[Adding and removing user permissions] -* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#proc-controller-team-add-user[Adding or removing a user] -* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_planning_guide/index#about_the_installer_inventory_file[About the installer inventory file] +//* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#proc-controller-user-permissions[Adding and removing user permissions] +//* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#proc-controller-team-add-user[Adding or removing a user] +//* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_planning_guide/index#about_the_installer_inventory_file[About the installer inventory file] -include::platform/proc-controller-create-inventory.adoc[leveloffset=+1] -include::platform/con-controller-groups-hosts.adoc[leveloffset=+1] -include::platform/proc-controller-add-groups-hosts.adoc[leveloffset=+2] -endif::controller-GS[] -ifdef::controller-UG[] +//include::platform/proc-controller-create-inventory.adoc[leveloffset=+1] +//include::platform/con-controller-groups-hosts.adoc[leveloffset=+1] +//include::platform/proc-controller-add-groups-hosts.adoc[leveloffset=+2] +//endif::controller-GS[] +//ifdef::controller-UG[] {PlatformName} works against a list of managed nodes or hosts in your infrastructure that are logically organized, using an inventory file. You can use the {PlatformName} installer inventory file to specify your installation scenario and describe host deployments to Ansible. @@ -29,13 +31,13 @@ By using an inventory file, Ansible can manage a large number of hosts with a si Inventories also help you use Ansible more efficiently by reducing the number of command line options you have to specify. Inventories are divided into groups and these groups contain the hosts. -Groups may be sourced manually, by entering host names into {ControllerName}, or from one of its supported cloud providers. +Groups can be sourced manually, by entering host names into {ControllerName}, or from one of its supported cloud providers. [NOTE] ==== If you have a custom dynamic inventory script, or a cloud provider that is not yet supported natively in {ControllerName}, you can also import that into {ControllerName}. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#assembly-inventory-file-importing[Inventory file importing] in the _{ControllerAG}_. +For more information, see link:{URLControllerAdminGuide}/index#assembly-inventory-file-importing[Inventory file importing] in {TitleControllerAdminGuide}. ==== From the navigation panel, select {MenuInfrastructureInventories}. 
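+As a sketch of the import path described in the preceding note, you can load a custom inventory from the command line with the `awx-manage` utility; the inventory name and script path here are placeholders:
+
+----
+$ awx-manage inventory_import --inventory-name "My Inventory" \
+    --source /path/to/custom_inventory.py
+----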
@@ -62,57 +64,88 @@ The statuses are: //* *Actions*: The following actions are available for the selected inventory: ** *Edit* image:leftpencil.png[Edit,15,15]: Edit the properties for the selected inventory -** *Copy* image:copy.png[Copy,15,15]: Makes a copy of an existing inventory as a template for creating a new one +** *Duplicate* image:copy.png[Copy,15,15]: Makes a copy of an existing inventory as a template for creating a new one ** *Delete inventory*: Delete the selected inventory Click the Inventory name to display the *Details* page for the selected inventory, which shows the inventory's groups and hosts. //Smart inventories are deprecated. -//include::platform/ref-controller-smart-inventories.adoc[leveloffset=+1] -//include::platform/ref-controller-smart-host-filter.adoc[leveloffset=+2] +include::platform/ref-controller-smart-inventories.adoc[leveloffset=+1] + +include::platform/ref-controller-smart-host-filter.adoc[leveloffset=+2] + //include::platform/proc-controller-define-filter-with-facts.adoc[leveloffset=+2] + include::platform/ref-controller-constructed-inventories.adoc[leveloffset=+1] include::platform/ref-controller-group-name-vars-filtering.adoc[leveloffset=+2] + include::platform/ref-controller-inv-debugging-tips.adoc[leveloffset=+2] + include::platform/ref-controller-inv-nested-groups.adoc[leveloffset=+2] + include::platform/ref-controller-inv-ansible-facts.adoc[leveloffset=+2] + include::platform/ref-controller-filter-environ-variables.adoc[leveloffset=+3] + include::platform/ref-controller-filter-hosts-cpu-type.adoc[leveloffset=+3] include::platform/ref-controller-inventory-plugins.adoc[leveloffset=+1] include::platform/proc-controller-adding-new-inventory.adoc[leveloffset=+1] + include::platform/proc-controller-adding-inv-permissions.adoc[leveloffset=+2] + +include::platform/proc-controller-remove-inv-permissions.adoc[leveloffset=+2] + include::platform/proc-controller-add-groups.adoc[leveloffset=+2] include::platform/proc-controller-add-groups-to-groups.adoc[leveloffset=+3] + include::platform/ref-controller-view-edit-inv-groups.adoc[leveloffset=+3] include::platform/proc-controller-add-hosts.adoc[leveloffset=+2] + include::platform/proc-controller-add-source.adoc[leveloffset=+2] -include::platform/ref-controller-inventory-sources.adoc[leveloffset=+3] -include::platform/proc-controller-sourced-from-project.adoc[leveloffset=+4] -include::platform/proc-controller-amazon-ec2.adoc[leveloffset=+4] -include::platform/proc-controller-inv-source-gce.adoc[leveloffset=+4] -include::platform/proc-controller-azure-resource-manager.adoc[leveloffset=+4] -include::platform/proc-controller-inv-source-vm-vcenter.adoc[leveloffset=+4] -include::platform/proc-controller-inv-source-satellite.adoc[leveloffset=+4] -include::platform/proc-controller-inv-source-insights.adoc[leveloffset=+4] -include::platform/proc-controller-inv-source-openstack.adoc[leveloffset=+4] -include::platform/proc-controller-inv-source-rh-virt.adoc[leveloffset=+4] -include::platform/proc-controller-inv-source-aap.adoc[leveloffset=+4] -//The following Terraform module is for 2.5 only: -include::platform/proc-controller-inv-source-terraform.adoc[leveloffset=+4] +include::platform/proc-controller-config-notifications-source.adoc[leveloffset=+2] + +include::platform/ref-controller-inventory-sources.adoc[leveloffset=+2] + +include::platform/proc-controller-sourced-from-project.adoc[leveloffset=+3] + +include::platform/proc-controller-amazon-ec2.adoc[leveloffset=+3] + 
+include::platform/proc-controller-inv-source-gce.adoc[leveloffset=+3]
+
+include::platform/proc-controller-azure-resource-manager.adoc[leveloffset=+3]
+
+include::platform/proc-controller-inv-source-vm-vcenter.adoc[leveloffset=+3]
+
+include::platform/proc-controller-inv-source-vm-esxi.adoc[leveloffset=+3]
+
+include::platform/proc-controller-inv-source-satellite.adoc[leveloffset=+3]
+
+include::platform/proc-controller-inv-source-insights.adoc[leveloffset=+3]
+
+include::platform/proc-controller-inv-source-openstack.adoc[leveloffset=+3]
+
+include::platform/proc-controller-inv-source-rh-virt.adoc[leveloffset=+3]
+
+include::platform/proc-controller-inv-source-aap.adoc[leveloffset=+3]
+
+include::platform/proc-controller-inv-source-terraform.adoc[leveloffset=+3]
+
+include::platform/proc-controller-inv-source-open-shift-virt.adoc[leveloffset=+3]

include::platform/ref-controller-export-old-scripts.adoc[leveloffset=+3]

include::platform/ref-controller-view-completed-jobs.adoc[leveloffset=+1]
+
include::platform/proc-controller-run-ad-hoc-commands.adoc[leveloffset=+1]
-endif::controller-UG[]
+//endif::controller-UG[]

ifdef::parent-context[:context: {parent-context}]
ifndef::parent-context[:!context:]
diff --git a/downstream/assemblies/platform/assembly-controller-inventory-templates.adoc b/downstream/assemblies/platform/assembly-controller-inventory-templates.adoc
index ed717ed232..3429dc0841 100644
--- a/downstream/assemblies/platform/assembly-controller-inventory-templates.adoc
+++ b/downstream/assemblies/platform/assembly-controller-inventory-templates.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: ASSEMBLY
+
[id="controller-inventory-templates"]

= Supported Inventory plugin templates
@@ -5,20 +7,31 @@
After upgrading to 4.x, existing configurations are migrated to the new format that produces a backwards compatible inventory output.
Use the following templates to aid in migrating your inventories to the new style inventory plugin output.

-* xref:controller-amazon-web-services[Amazon Web Services EC2]
-* xref:controller-google-compute[Google Compute Engine]
-* xref:controller-microsoft-azure[Microsoft Azure Resource Manager]
-* xref:controller-vmware-vcenter[VMware vCenter]
-* xref:controller-rh-satellite[Red Hat Satellite 6]
-* xref:controller-openstack[OpenStack]
-* xref:controller-rh-virtualization[Red Hat Virtualization]
-* xref:controller-aap-template[Red Hat Ansible Automation Platform]
+* link:{URLControllerUserGuide}/controller-inventory-templates#controller-amazon-web-services[Amazon Web Services EC2]
+* link:{URLControllerUserGuide}/controller-inventory-templates#controller-google-compute[Google Compute Engine]
+* link:{URLControllerUserGuide}/controller-inventory-templates#controller-microsoft-azure[Microsoft Azure Resource Manager]
+* link:{URLControllerUserGuide}/controller-inventory-templates#controller-vmware-vcenter[VMware vCenter]
+//This is an xref because for some bizarre reason, if I use a link, all these links go to existing documentation, not within this local build.
[AAP-41239-esxi]
+* xref:ref-controller-vmware-esxi[VMware ESXi]
+* link:{URLControllerUserGuide}/controller-inventory-templates#controller-rh-satellite[Red Hat Satellite 6]
+* link:{URLControllerUserGuide}/controller-inventory-templates#controller-openstack[OpenStack]
+* link:{URLControllerUserGuide}/controller-inventory-templates#controller-rh-virtualization[Red Hat Virtualization]
+* link:{URLControllerUserGuide}/controller-inventory-templates#controller-aap-template[Red Hat Ansible Automation Platform]

include::platform/ref-controller-amazon-web-services.adoc[leveloffset=+1]
+
include::platform/ref-controller-google-compute.adoc[leveloffset=+1]
+
include::platform/ref-controller-microsoft-azure.adoc[leveloffset=+1]
+
include::platform/ref-controller-vmware-vcenter.adoc[leveloffset=+1]
+
+include::platform/ref-controller-vmware-esxi.adoc[leveloffset=+1]
+
include::platform/ref-controller-rh-satellite.adoc[leveloffset=+1]
+
include::platform/ref-controller-openstack.adoc[leveloffset=+1]
+
include::platform/ref-controller-rh-virtualization.adoc[leveloffset=+1]
+
include::platform/ref-controller-aap-template.adoc[leveloffset=+1]
diff --git a/downstream/assemblies/platform/assembly-controller-log-files.adoc b/downstream/assemblies/platform/assembly-controller-log-files.adoc
index 46c2e2d215..abd45e8609 100644
--- a/downstream/assemblies/platform/assembly-controller-log-files.adoc
+++ b/downstream/assemblies/platform/assembly-controller-log-files.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: ASSEMBLY
+
[id="assembly-controller-log-files"]

= {ControllerNameStart} logfiles
diff --git a/downstream/assemblies/platform/assembly-controller-logging-aggregation.adoc b/downstream/assemblies/platform/assembly-controller-logging-aggregation.adoc
index 82fce29567..40b0e37425 100644
--- a/downstream/assemblies/platform/assembly-controller-logging-aggregation.adoc
+++ b/downstream/assemblies/platform/assembly-controller-logging-aggregation.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: ASSEMBLY
+
[id="assembly-controller-logging-aggregation"]

= Logging and Aggregation
@@ -20,8 +22,23 @@ If you already use `rsyslog` for logging system logs on the {ControllerName} ins

Use the `/api/v2/settings/logging/` endpoint to configure how the {ControllerName} `rsyslog` process handles messages that have not yet been sent in the event that your external logger goes offline:

-* `LOG_AGGREGATOR_MAX_DISK_USAGE_GB`: Specifies the amount of data to store (in gigabytes) during an outage of the external log aggregator (defaults to 1).
-Equivalent to the `rsyslogd queue.maxdiskspace` setting.
+* `LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB`: Maximum disk persistence for rsyslogd action queuing in GB.
++
+Specifies the amount of data to store (in gigabytes) during an outage of the external log aggregator (defaults to 1).
++
+Equivalent to the `rsyslogd queue.maxDiskSpace` setting.

+* `LOG_AGGREGATOR_ACTION_QUEUE_SIZE`: Maximum number of messages that can be stored in the log action queue.
++
+Defines how large the rsyslog action queue can grow in number of messages stored.
+This can have an impact on memory use.
+When the queue reaches 75% of this number, the queue starts writing to disk (`queue.highWatermark` in `rsyslog`).
+When it reaches 90%, `NOTICE`, `INFO`, and `DEBUG` messages start to be discarded (`queue.discardMark` with `queue.discardSeverity=5`).
++
+Equivalent to the `rsyslogd queue.size` setting on the action.
+
+It stores files in the directory specified by `LOG_AGGREGATOR_MAX_DISK_USAGE_PATH`.
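+For example, a sketch of adjusting both of these settings with a PATCH request to the logging settings endpoint; substitute your own controller host and credentials:
+
+----
+$ curl -X PATCH https://controller.example.com/api/v2/settings/logging/ \
+    -u admin:password \
+    -H "Content-Type: application/json" \
+    -d '{"LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB": 2, "LOG_AGGREGATOR_ACTION_QUEUE_SIZE": 131072}'
+----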
+
* `LOG_AGGREGATOR_MAX_DISK_USAGE_PATH`: Specifies the location to store logs that should be retried after an outage of the external log aggregator (defaults to `/var/lib/awx`).
Equivalent to the `rsyslogd queue.spoolDirectory` setting.
diff --git a/downstream/assemblies/platform/assembly-controller-login.adoc b/downstream/assemblies/platform/assembly-controller-login.adoc
index d94f57e31d..cf7852f2e8 100644
--- a/downstream/assemblies/platform/assembly-controller-login.adoc
+++ b/downstream/assemblies/platform/assembly-controller-login.adoc
@@ -1,14 +1,18 @@
-[id="controller-login"]
+:_mod-docs-content-type: ASSEMBLY
+[id="assembly-controller-login"]

ifdef::controller-GS[]
-= Logging into the {ControllerName} dashboard after installation
+= Logging in to the {ControllerName} dashboard after installation

After you install {ControllerName}, you must log in to the Dashboard.
endif::controller-GS[]
+
ifdef::controller-UG[]
-= Logging into {ControllerName} after installation
+= Logging in to {PlatformNameShort} after installation

-After you install {ControllerName}, you must log in.
+After you install {PlatformNameShort}, you must log in.
endif::controller-UG[]

include::platform/proc-controller-logging-in.adoc[leveloffset=+1]
+
+include::platform/proc-controller-find-subscription.adoc[leveloffset=+1]
diff --git a/downstream/assemblies/platform/assembly-controller-management-jobs.adoc b/downstream/assemblies/platform/assembly-controller-management-jobs.adoc
index a5dabcf030..3e0ee7377b 100644
--- a/downstream/assemblies/platform/assembly-controller-management-jobs.adoc
+++ b/downstream/assemblies/platform/assembly-controller-management-jobs.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: ASSEMBLY
+
[id="assembly-controller-management-jobs"]

= Management Jobs
@@ -12,7 +14,8 @@ image:management-jobs.png[Management jobs]

The following job types are available for you to schedule and launch:

* *Cleanup Activity Stream*: Remove activity stream history older than a specified number of days
-* *Cleanup Expired OAuth 2 Tokens*: Remove expired OAuth 2 access tokens and refresh tokens
+// [emcwhinn] Removing as part of AAP-37805
+// * *Cleanup Expired OAuth 2 Tokens*: Remove expired OAuth 2 access tokens and refresh tokens
* *Cleanup Expired Sessions*: Remove expired browser sessions from the database
* *Cleanup Job Details*: Remove job history older than a specified number of days
diff --git a/downstream/assemblies/platform/assembly-controller-metrics.adoc b/downstream/assemblies/platform/assembly-controller-metrics.adoc
index be5848883a..41c4f1a228 100644
--- a/downstream/assemblies/platform/assembly-controller-metrics.adoc
+++ b/downstream/assemblies/platform/assembly-controller-metrics.adoc
@@ -1,8 +1,10 @@
+:_mod-docs-content-type: ASSEMBLY
+
[id="assembly-controller-metrics"]

= Metrics

-A metrics endpoint, `/api/v2/metrics/` is available in the API that produces instantaneous metrics about {ControllerName}, which can be
+A metrics endpoint, `/api/controller/v2/metrics/`, is available in the API that produces instantaneous metrics about {ControllerName}, which can be
diff --git a/downstream/assemblies/platform/assembly-controller-organizations.adoc b/downstream/assemblies/platform/assembly-controller-organizations.adoc index 779971e06e..744c4a541e 100644 --- a/downstream/assemblies/platform/assembly-controller-organizations.adoc +++ b/downstream/assemblies/platform/assembly-controller-organizations.adoc @@ -1,37 +1,40 @@ -[id="assembly-controller-organizations"] +ifdef::context[:parent-context: {context}] -ifdef::controller-GS[] -= Managing organizations in {ControllerName} +:_mod-docs-content-type: ASSEMBLY -An organization is a logical collection of users, teams, projects, and inventories. -It is the highest level object in the controller object hierarchy. -After you have created an organization, {ControllerName} displays the organization details. -You can then manage access and execution environments for the organization. +[id="assembly-controller-organizations_{context}"] -image::controller-tower-hierarchy.png[Hierarchy] += Organizations + +:context: access-mgmt-orgs + +An organization is a logical collection of users, teams, and resources. It is the highest level object in the {PlatformNameShort} object hierarchy. After you have created an organization, {PlatformNameShort} displays the organization details. You can then manage access and execution environments for the organization. +{PlatformNameShort} automatically creates a default organization and the system administrator is automatically assigned to this organization. If you have a Self-support level license, you have only the default organization available and must not delete it. + +// [ddacosta] Removed this statement because I think it was relevant when this content was upstream but in the downstream docs, it’s implied that you have a license. +//[NOTE] +//==== +//Only Enterprise or Premium licenses can add new organizations. +//==== +//Enterprise and Premium license users who want to add a new organization should refer to the xref:proc-controller-create-organization[Creating an organization]. include::platform/proc-controller-review-organizations.adoc[leveloffset=+1] -include::platform/proc-controller-edit-an-organization.adoc[leveloffset=+1] -endif::controller-GS[] -ifdef::controller-UG[] -= Organizations -An organization is a logical collection of users, teams, projects, and inventories. -It is the highest level object in the controller object hierarchy. +include::platform/proc-controller-create-organization.adoc[leveloffset=+1] -image::controller-tower-hierarchy.png[Hierarchy] +include::platform/con-controller-access-organizations.adoc[leveloffset=+1] -From the navigation menu, select btn:[Organizations] to display the existing organizations for your installation. +include::platform/proc-controller-add-organization-user.adoc[leveloffset=+2] -image:organizations-home-showing-example-organization.png[Organizations] +include::platform/proc-gw-add-admin-organization.adoc[leveloffset=+2] -Organizations can be searched by *Name* or *Description*. +include::platform/proc-gw-add-team-organization.adoc[leveloffset=+2] -Modify organizations using the image:leftpencil.png[Edit,15,15] icon. -Click btn:[Delete] to remove a selected organization. 
+include::platform/proc-gw-delete-organization.adoc[leveloffset=+2]

-include::platform/proc-controller-create-organization.adoc[leveloffset=+1]
-include::platform/con-controller-access-organizations.adoc[leveloffset=+1]
+include::platform/ref-controller-organization-notifications.adoc[leveloffset=+1]

-endif::controller-UG[]
+include::platform/proc-gw-organizations-exec-env.adoc[leveloffset=+1]

+ifdef::parent-context[:context: {parent-context}]
+ifndef::parent-context[:!context:]
diff --git a/downstream/assemblies/platform/assembly-controller-pac.adoc b/downstream/assemblies/platform/assembly-controller-pac.adoc
new file mode 100644
index 0000000000..08a042c157
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-controller-pac.adoc
@@ -0,0 +1,52 @@
+:_newdoc-version: 2.18.4
+:_template-generated: 2025-05-08
+:_mod-docs-content-type: ASSEMBLY
+
+ifdef::context[:parent-context-of-controller-pac: {context}]
+
+ifndef::context[]
+[id="controller-pac"]
+endif::[]
+ifdef::context[]
+[id="controller-pac_{context}"]
+endif::[]
+= Implementing policy enforcement
+
+:context: controller-pac
+
+Policy enforcement at automation runtime is a feature that uses encoded rules to define, manage, and enforce policies that govern how your users interact with your {PlatformNameShort} instance. Policy enforcement automates policy management, improving security, compliance, and efficiency.
+
+OPA, or link:https://www.openpolicyagent.org/docs/latest/[Open Policy Agent], is a policy engine that offloads policy decisions from your Ansible instance. When it is triggered, the policy enforcement feature connects to OPA to retrieve policies specified in your configuration, and applies policy rules to your automation content. If OPA detects a policy violation, it stops the action and gives your user information about the policy violation.
+
+*Prerequisites*
+
+Before you can implement policy enforcement in your {PlatformNameShort} instance, you must have:
+
+// * An {PlatformNameShort} 2.5 deployment with the `FEATURE_POLICY_AS_CODE_ENABLED` feature flag set to `TRUE`.
+* Access to an OPA server that is reachable from your {PlatformNameShort} deployment.
+* Configured {PlatformNameShort} with the settings required for authenticating to your OPA server.
+* Some familiarity with OPA and the Rego language, which is the language policies are written in.
+
+For policy enforcement to work correctly, you must both configure the OPA server in your policy settings and associate a specific policy with a particular resource, for example, an organization, inventory, or job template.
+
+[NOTE]
+====
+OPA API V1 is the only version currently supported in {PlatformNameShort}.
+==== + +// include::platform/proc-enable-pac.adoc[leveloffset=+1] + +include::platform/proc-configure-pac-settings.adoc[leveloffset=+1] + +include::platform/con-pac-policies-rules.adoc[leveloffset=+1] + +include::platform/proc-configure-pac-enforcement.adoc[leveloffset=+1] + +include::platform/proc-pac-add-policy-to-org.adoc[leveloffset=+2] + +include::platform/proc-pac-add-policy-to-inventory.adoc[leveloffset=+2] + +include::platform/proc-pac-add-policy-to-template.adoc[leveloffset=+2] + +include::platform/ref-pac-inputs-outputs.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-controller-project-signing.adoc b/downstream/assemblies/platform/assembly-controller-project-signing.adoc index 913a7e159f..a1e1135038 100644 --- a/downstream/assemblies/platform/assembly-controller-project-signing.adoc +++ b/downstream/assemblies/platform/assembly-controller-project-signing.adoc @@ -1,3 +1,7 @@ +:_mod-docs-content-type: ASSEMBLY + +ifdef::context[:parent-context: {context}] + [id="assembly-controller-project-signing"] = Project Signing and Verification @@ -6,44 +10,22 @@ Project signing and verification lets you sign files in your project directory, changed in any way, or files have been added or removed from the project unexpectedly. To do this, you require a private key for signing and a matching public key for verifying. -For project maintainers, the supported way to sign content is to use the `ansible-sign` utility, using the _command-line -interface_ (CLI) supplied with it. - -The CLI aims to make it easy to use cryptographic technology such as _GNU Privacy Guard_ (GPG) to validate that files within a project have not been tampered with in any way. -Currently, GPG is the only supported means of signing and validation. - -{ControllerNameStart} is used to verify the signed content. -After a matching public key has been associated with the signed project, {ControllerName} verifies that the files included during signing have not changed, and that files have been added or removed unexpectedly. -If the signature is not valid or a file has changed, the project fails to update, and jobs making use of the project will not launch. Verification status of the project ensures that only secure, untampered content can be run in jobs. - -If the repository has already been configured for signing and verification, the usual workflow for altering the project becomes the following: - -. You have a project repository set up already and want to make a change to a file. -. You make the change, and run the following command: -+ -[literal, options="nowrap" subs="+attributes"] ----- -ansible-sign project gpg-sign /path/to/project ----- -+ -This command updates a checksum manifest and signs it. -. You commit the change, the updated checksum manifest, and the signature to the repository. - -When you synchronize the project, {ControllerName} pulls in the new changes, checks that the public key associated with the project in {ControllerName} matches the private key that the checksum manifest was signed with (this prevents tampering with the checksum manifest itself), then re-calculates the checksums of each file in the manifest to ensure that the checksum matches (and thus that no file has changed). It also ensures that all files are accounted for: - -Files must be included in, or excluded from, the `MANIFEST.in` file. -For more information on this file, see xref:con-controller-signing-your-project[Sign a project] -If files have been added or removed unexpectedly, verification fails. 
-
-image:content-sign-diagram.png[Content signing]
+include::platform/ref-controller-intro-proj-sign.adoc[leveloffset=+1]

include::platform/ref-controller-proj-sign-prerequisites.adoc[leveloffset=+1]
+
include::platform/proc-controller-adding-gpg-key.adoc[leveloffset=+1]
+
include::platform/proc-controller-use-ansible-sign.adoc[leveloffset=+1]
+
include::platform/con-controller-signing-your-project.adoc[leveloffset=+1]
+
include::platform/ref-controller-verify-your-project.adoc[leveloffset=+1]
+
include::platform/ref-controller-automate-signing.adoc[leveloffset=+1]
+
+ifdef::parent-context[:context: {parent-context}]
+ifndef::parent-context[:!context:]
diff --git a/downstream/assemblies/platform/assembly-controller-projects.adoc b/downstream/assemblies/platform/assembly-controller-projects.adoc
index fa8c969e80..8ef83cb6f7 100644
--- a/downstream/assemblies/platform/assembly-controller-projects.adoc
+++ b/downstream/assemblies/platform/assembly-controller-projects.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: ASSEMBLY
+
[id="controller-projects"]
ifdef::controller-GS[]
= Managing projects
@@ -6,7 +8,7 @@ ifdef::controller-UG[]
= Projects
endif::controller-UG[]

-A Project is a logical collection of Ansible playbooks, represented in {ControllerName}.
+A project is a logical collection of Ansible playbooks, represented in {ControllerName}.

You can manage playbooks and playbook directories in different ways:

* By placing them manually under the Project Base Path on your {ControllerName} server.
@@ -27,17 +29,18 @@ It is configured in `/etc/tower/conf.d/custom.py`.
Use caution when editing this file, as incorrect settings can disable your installation.
====

-The Projects page displays the list of the projects that are currently available.
+The *Projects* page displays the list of projects that are currently available.
A *Demo Project* is provided that you can work with initially.

//image:projects-list-all.png[Projects - home]

-The default view is collapsed (*Compact*) with project name and its status, but you can use the image:arrow.png[Arrow,15,15] next to each entry to expand for more information.
+// Not in the latest 2.5 UI [AAP-45087]
+//The default view is collapsed (*Compact*) with project name and its status, but you can use the image:arrow.png[Arrow,15,15] next to each entry to expand for more information.

//image:projects-list-all-expanded.png[Projects - expanded]

-For each project listed, you can get the latest SCM revision image:sync.png[Refresh,15,15], edit image:leftpencil.png[Edit,15,15] the project, or copy image:copy.png[Copy,15,15] the project attributes, using the icons next to each project.
+For each project listed, you can get the latest SCM revision image:sync.png[Refresh,15,15], edit image:leftpencil.png[Edit,15,15] the project, or duplicate image:copy.png[Copy,15,15] the project attributes, using the icons next to each project.

Projects can be updated while a related job is running.

@@ -80,13 +83,31 @@ credential.
//==== include::platform/proc-controller-adding-a-project.adoc[leveloffset=+1] + +include::platform/proc-projects-manage-playbooks-manually.adoc[leveloffset=+2] + +include::platform/ref-projects-manage-playbooks-with-source-control.adoc[leveloffset=+2] + +include::platform/proc-scm-git-subversion.adoc[leveloffset=+3] + +include::platform/proc-scm-insights.adoc[leveloffset=+3] + +include::platform/proc-scm-remote-archive.adoc[leveloffset=+3] + include::platform/proc-controller-updating-a-project.adoc[leveloffset=+1] + include::platform/ref-work-with-permissions.adoc[leveloffset=+1] + //include::platform/ref-work-with-notifications.adoc[leveloffset=+1] + //include::platform/ref-work-with-job-templates.adoc[leveloffset=+1] + //include::platform/ref-work-with-schedules.adoc[leveloffset=+1] + include::platform/ref-projects-galaxy-support.adoc[leveloffset=+1] + include::platform/ref-projects-collections-support.adoc[leveloffset=+1] + endif::controller-UG[] ifdef::controller-GS[] @@ -96,6 +117,8 @@ This Getting Started Guide uses lightweight examples to get you up and running. ==== include::platform/proc-controller-set-up-project.adoc[leveloffset=+1] + include::platform/proc-controller-edit-project.adoc[leveloffset=+1] + include::platform/proc-controller-sync-project.adoc[leveloffset=+1] endif::controller-GS[] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-controller-resource-operator.adoc b/downstream/assemblies/platform/assembly-controller-resource-operator.adoc index e4a4cb8999..0c1bf6a50e 100644 --- a/downstream/assemblies/platform/assembly-controller-resource-operator.adoc +++ b/downstream/assemblies/platform/assembly-controller-resource-operator.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] :context: performance-considerations @@ -6,8 +8,30 @@ ifdef::context[:parent-context: {context}] = {OperatorResource} include::platform/con-resource-operator-overview.adoc[leveloffset=+1] + include::platform/proc-use-controller-resource-operator.adoc[leveloffset=+1] + include::platform/proc-add-controller-access-token.adoc[leveloffset=+1] + include::platform/proc-create-a-connection-secret.adoc[leveloffset=+1] -include::platform/proc-create-an-ansiblejob.adoc[leveloffset=+1] -include::platform/proc-create-a-jobtemplate.adoc[leveloffset=+1] \ No newline at end of file + +include::platform/proc-create-crs-resource-operator.adoc[leveloffset=+1] + +include::platform/proc-create-an-ansiblejob.adoc[leveloffset=+2] + +include::platform/proc-create-a-jobtemplate.adoc[leveloffset=+2] + +include::platform/proc-operator-create-controller-project.adoc[leveloffset=+2] + +include::platform/proc-operator-create-controller-schedule.adoc[leveloffset=+2] + +include::platform/proc-operator-create-controller-workflow.adoc[leveloffset=+2] + +include::platform/proc-operator-create-controller-workflow-template.adoc[leveloffset=+2] + +include::platform/proc-operator-create-controller-inventory.adoc[leveloffset=+2] + +include::platform/proc-operator-create-controller-credential.adoc[leveloffset=+2] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-controller-search.adoc b/downstream/assemblies/platform/assembly-controller-search.adoc index 7279b6accb..14b3753fdc 100644 --- a/downstream/assemblies/platform/assembly-controller-search.adoc +++ b/downstream/assemblies/platform/assembly-controller-search.adoc @@ -1,3 +1,5 @@ 
+:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] [id="assembly-controller-search"] @@ -15,8 +17,12 @@ An expandable list of search conditions is available from the *Name* menu in the //image:search-bar-key.png[key sheet] include::platform/ref-controller-search-tips.adoc[leveloffset=+1] + include::platform/ref-controller-values-for-search-fields.adoc[leveloffset=+2] + include::platform/ref-controller-search-values-related-fields.adoc[leveloffset=+2] + include::platform/ref-controller-other-search-considerations.adoc[leveloffset=+2] + include::platform/ref-controller-search-sort.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-controller-secret-management.adoc b/downstream/assemblies/platform/assembly-controller-secret-management.adoc index e10ff5408c..afbc61d2a4 100644 --- a/downstream/assemblies/platform/assembly-controller-secret-management.adoc +++ b/downstream/assemblies/platform/assembly-controller-secret-management.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="assembly-controller-secret-management"] = Secret management system @@ -18,12 +20,13 @@ With external credentials backed by credential plugins, you can map credential f * {Azure} _Key Management System_ (KMS) * Thycotic DevOps Secrets Vault * Thycotic Secret Server +* GitHub app token lookup These external secret values are fetched before running a playbook that needs them. .Additional resources -For more information about specifying secret management system credentials in the user interface, see xref:controller-credentials[Managing user credentials]. +For more information about specifying secret management system credentials in the user interface, see link:{URLControllerUserGuide}/index#controller-credentials[Managing user credentials]. include::platform/proc-controller-configure-secret-lookups.adoc[leveloffset=+1] include::platform/ref-controller-metadata-credential-input.adoc[leveloffset=+2] @@ -36,6 +39,4 @@ include::platform/ref-hashicorp-signed-ssh.adoc[leveloffset=+2] include::platform/ref-azure-key-vault-lookup.adoc[leveloffset=+2] include::platform/ref-thycotic-devops-vault.adoc[leveloffset=+2] include::platform/ref-thycotic-secret-server.adoc[leveloffset=+2] - - - +include::platform/proc-controller-github-app-token.adoc[leveloffset=+2] diff --git a/downstream/assemblies/platform/assembly-controller-subscription-management.adoc b/downstream/assemblies/platform/assembly-controller-subscription-management.adoc new file mode 100644 index 0000000000..aa7a5882ca --- /dev/null +++ b/downstream/assemblies/platform/assembly-controller-subscription-management.adoc @@ -0,0 +1,36 @@ +:_mod-docs-content-type: ASSEMBLY + +ifdef::context[:parent-context: {context}] + +:context: subscription-management +:_mod-docs-content-type: + +[id="assembly-controller-subscription-management"] + += Subscription management in {PlatformNameShort} and {ControllerName} + +{PlatformNameShort} provides capabilities to monitor usage, activate subscriptions, and maintain compliance with Red Hat subscription requirements. 
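+For example, you can review automation host usage programmatically; the following sketch queries the host metrics API endpoint, assuming it is available in your deployment (host and credentials are placeholders):
+
+----
+$ curl -u admin:password https://controller.example.com/api/v2/host_metrics/
+----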
+
+include::platform/con-host-metrics-subscriptions.adoc[leveloffset=+1]
+
+include::platform/con-host-metrics-dashboard.adoc[leveloffset=+2]
+
+include::platform/con-soft-deletion.adoc[leveloffset=+3]
+
+include::assembly-aap-activate.adoc[leveloffset=+1]
+
+//include::assembly-aap-manifest-files.adoc[leveloffset=+1]
+
+include::platform/con-controller-keep-subscription-in-compliance.adoc[leveloffset=+1]
+
+include::platform/con-view-hosts-in-ui.adoc[leveloffset=+2]
+
+include::platform/con-view-hosts-in-CLI.adoc[leveloffset=+2]
+
+include::platform/proc-controller-awx-manage-utility.adoc[leveloffset=+3]
+
+include::platform/ref-delete-hosts-api-endpoint.adoc[leveloffset=+2]
+
+
+ifdef::parent-context[:context: {parent-context}]
+ifndef::parent-context[:!context:]
diff --git a/downstream/assemblies/platform/assembly-controller-teams.adoc b/downstream/assemblies/platform/assembly-controller-teams.adoc
index 4cd9299a90..2d7657b1e6 100644
--- a/downstream/assemblies/platform/assembly-controller-teams.adoc
+++ b/downstream/assemblies/platform/assembly-controller-teams.adoc
@@ -1,24 +1,32 @@
ifdef::context[:parent-context: {context}]

-[id="assembly-controller-teams"]
+:_mod-docs-content-type: ASSEMBLY

-:context: controller-teams
-= Managing teams
+[id="assembly-controller-teams_{context}"]

-A *Team* is a subdivision of an organization with associated users, projects, credentials, and permissions.
-Teams offer a means to implement role-based access control schemes and delegate responsibilities across organizations.
-For example, you can grant permissions to a whole team rather than to each user on the team.
+= Teams

-From the navigation panel, select {MenuControllerTeams}.
+:context: controller-teams

-image:organizations-teams-list.png[Teams list]
+A team is a subdivision of an organization with associated users and resources. Teams provide a means to implement role-based access control schemes and delegate responsibilities across organizations. For instance, you can grant permissions to a team rather than to each user on the team.

-You can sort and search the team list and searched by *Name* or *Organization*.
+You can create as many teams as needed for your organization. Teams can be assigned to only one organization, while an organization can be made up of multiple teams. Each team can be assigned roles, the same way roles are assigned for users. Teams can also scalably assign ownership for credentials, preventing multiple interface click-throughs to assign the same credentials to the same user.

-Click the Edit image:leftpencil.png[Edit,15,15] icon next to the entry to edit information about the team.
-You can also review *Users* and *Permissions* associated with this team.
+include::platform/proc-gw-team-list-view.adoc[leveloffset=+1] include::platform/proc-controller-creating-a-team.adoc[leveloffset=+1] +include::platform/proc-gw-team-add-user.adoc[leveloffset=+1] + +include::platform/proc-gw-team-remove-user.adoc[leveloffset=+1] + +include::platform/proc-gw-add-admin-team.adoc[leveloffset=+1] + +include::platform/proc-controller-user-permissions.adoc[leveloffset=+1] + +include::platform/proc-gw-remove-roles-team.adoc[leveloffset=+1] + +include::platform/proc-gw-delete-team.adoc[leveloffset=+1] + ifdef::parent-context[:context: {parent-context}] -ifndef::parent-context[:!context:] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-controller-topology-viewer.adoc b/downstream/assemblies/platform/assembly-controller-topology-viewer.adoc index a63eb1b86b..cd0f5620a8 100644 --- a/downstream/assemblies/platform/assembly-controller-topology-viewer.adoc +++ b/downstream/assemblies/platform/assembly-controller-topology-viewer.adoc @@ -1,13 +1,20 @@ +ifdef::context[:parent-context: {context}] + +:_mod-docs-content-type: ASSEMBLY + [id="assembly-controller-topology-viewer"] -= Topology viewer += Topology View -Use the topology viewer to view node type, node health, and specific details about each node if you already have a mesh topology deployed. +Use the *Topology View* to view node type, node health, and specific details about each node if you already have a mesh topology deployed. To access the topology viewer from the {ControllerName} UI, you must have *System Administrator* permissions. -For more information about {AutomationMesh} on a VM-based installation, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_automation_mesh_guide_for_vm-based_installations/index[{PlatformName} {AutomationMesh} guide for VM-based installations]. +For more information about {AutomationMesh} on a VM-based installation, see the link:{URLOperatorMesh}[{AutomationMeshStart} for VM environments]. -For more information about {AutomationMesh} on an operator-based installation, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_automation_mesh_for_operator-based_installations/index[{PlatformName} {AutomationMesh} for operator-based installations]. +For more information about {AutomationMesh} on an operator-based installation, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_mesh_for_managed_cloud_or_operator_environments/index[{AutomationMeshStart} for managed cloud or operator environments]. include::platform/proc-controller-access-topology-viewer.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-controller-user-interface.adoc b/downstream/assemblies/platform/assembly-controller-user-interface.adoc index dd572e02e1..000bf2f42a 100644 --- a/downstream/assemblies/platform/assembly-controller-user-interface.adoc +++ b/downstream/assemblies/platform/assembly-controller-user-interface.adoc @@ -1,36 +1,31 @@ -ifdef::context[:parent-context: {context}] +:_mod-docs-content-type: ASSEMBLY [id="assembly-controller-user-interface"] -:context: controller-UI - = The User Interface -The {ControllerName} _User Interface_ (UI) provides a graphical framework for your IT orchestration requirements. 
-The navigation panel provides quick access to {ControllerName} resources, such as *Projects*, *Inventories*, *Job Templates*, and *Jobs*. +The {MenuTopAE} _User Interface_ (UI) provides a graphical framework for your IT orchestration requirements. -[NOTE] -==== -The {ControllerName} UI is also available as a technical preview and is subject to change in future releases. -To preview the new UI, click the *Enable Preview of New User Interface* toggle to *On* from the *Miscellaneous System* option of the *Settings* menu. +Access your user profile, the *About* page, view related documentation, or log out using the icons in the page header. -//image:configure-tower-system-misc-preview-newui.png[image] +The navigation panel provides quick access to {ControllerName} resources, such as *Jobs*, *Templates*, *Schedules*, *Projects*, *Infrastructure*, and *Administration*. -After saving, logout and log back in to access the new UI from the preview banner. -To return to the current UI, click the link on the top banner where indicated. -//image:ug-dashboard-preview-banner.png[image] -==== +* link:{URLControllerUserGuide}/controller-jobs[Jobs] +* link:{URLControllerUserGuide}/controller-job-templates[Job templates] +* link:{URLControllerUserGuide}/controller-workflow-job-templates[Workflow job templates] +* link:{URLControllerUserGuide}/controller-schedules[Schedules] +* link:{URLControllerUserGuide}/controller-projects[Projects] -Access your user profile, the *About* page, view related documentation, or log out using the icons in the page header. +//You can view the activity stream for that user by clicking the btn:[Activity Stream] image:activitystream.png[activitystream,15,15] icon. -You can view the activity stream for that user by clicking the btn:[Activity Stream] image:activitystream.png[activitystream,15,15] icon. +//include::platform/con-controller-views.adoc[leveloffset=+1] +//include::platform/con-controller-resources.adoc[leveloffset=+1] +//include::platform/con-controller-access.adoc[leveloffset=+1] +include::platform/con-controller-infrastructure.adoc[leveloffset=+1] -include::platform/con-controller-views.adoc[leveloffset=+1] -include::platform/con-controller-resources.adoc[leveloffset=+1] -include::platform/con-controller-access.adoc[leveloffset=+1] include::platform/con-controller-administration.adoc[leveloffset=+1] //For next version/tech preview //include::platform/con-controller-analytics.adoc[leveloffset=+1] not created yet -//Settings not included in tech preview -include::platform/con-controller-settings.adoc[leveloffset=+1] \ No newline at end of file + +include::platform/con-controller-settings.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-controller-users.adoc b/downstream/assemblies/platform/assembly-controller-users.adoc index b0e3a56ebd..b59f86c46f 100644 --- a/downstream/assemblies/platform/assembly-controller-users.adoc +++ b/downstream/assemblies/platform/assembly-controller-users.adoc @@ -1,19 +1,35 @@ -[id="assembly-controller-users"] +:_mod-docs-content-type: ASSEMBLY -ifdef::controller-GS[] -= User roles in {ControllerName} -endif::controller-GS[] -ifdef::controller-UG[] -= Managing Users in {ControllerName} -endif::controller-UG[] +ifdef::context[:parent-context: {context}] + +[id="assembly-controller-users_{context}"] += Users + +:context: access-mgmt-users + +Users associated with an organization are shown in the *Users* tab of the organization. 
+
+You can add users to an organization, whether they are normal users or system administrators, but you must first create them.
+
+[NOTE]
+====
+{PlatformNameShort} automatically creates a default admin user who can log in and set up {PlatformNameShort} for the organization. This user cannot be deleted or modified.
+====
+
+You can sort or search the Users list by *Username*, *First name*, *Last name*, or *Email*. Click the arrows in the header to toggle your sorting preference.
+You can view *User type* and *Email* beside the user name on the Users page.
+
+include::platform/proc-gw-users-list-view.adoc[leveloffset=+1]
 
-include::platform/con-controller-create-users.adoc[leveloffset=+1]
-ifdef::controller-UG[]
 include::platform/proc-controller-creating-a-user.adoc[leveloffset=+1]
+
+include::platform/proc-gw-editing-a-user.adoc[leveloffset=+1]
+
 include::platform/proc-controller-deleting-a-user.adoc[leveloffset=+1]
-include::platform/ref-controller-user-organizations.adoc[leveloffset=+1]
-include::platform/ref-controller-user-teams.adoc[leveloffset=+1]
+
 include::platform/ref-controller-user-roles.adoc[leveloffset=+1]
-include::platform/proc-controller-user-permissions.adoc[leveloffset=+2]
-include::platform/proc-controller-user-tokens.adoc[leveloffset=+1]
-endif::controller-UG[]
+
+include::platform/proc-gw-remove-roles-user.adoc[leveloffset=+1]
+
+ifdef::parent-context[:context: {parent-context}]
+ifndef::parent-context[:!context:]
diff --git a/downstream/assemblies/platform/assembly-controlling-data-collection.adoc b/downstream/assemblies/platform/assembly-controlling-data-collection.adoc
index 338acde18d..c75af43eee 100644
--- a/downstream/assemblies/platform/assembly-controlling-data-collection.adoc
+++ b/downstream/assemblies/platform/assembly-controlling-data-collection.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: ASSEMBLY
+
 ifdef::context[:parent-context: {context}]
 
 [id="assembly-controlling-data-collection"]
diff --git a/downstream/assemblies/platform/assembly-deploying-chatbot-operator.adoc b/downstream/assemblies/platform/assembly-deploying-chatbot-operator.adoc
new file mode 100644
index 0000000000..e457aaa64b
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-deploying-chatbot-operator.adoc
@@ -0,0 +1,31 @@
+:_mod-docs-content-type: ASSEMBLY
+
+
+ifdef::context[:parent-context: {context}]
+
+
+[id="deploying-chatbot-operator"]
+= Deploying the {AAPchatbot} on {OCPShort}
+
+:context: deploying-chatbot-operator
+
+[role="_abstract"]
+
+As a system administrator, you can deploy the {AAPchatbot} on {PlatformNameShort} {PlatformVers} on {OCPShort}.
+
+include::platform/con-about-lightspeed-intelligent-assistant.adoc[leveloffset=+1]
+
+== Deploying the {AAPchatbot}
+
+This section describes the procedures for deploying the {AAPchatbot} on {OCPShort}.
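+
+For illustration, the configuration secret referenced in the following procedures can be created with the `oc` CLI. This is a hedged sketch only: the secret name, namespace, and key names shown here are placeholders, and the exact keys depend on your model provider. See the procedure modules for the authoritative values.
+
+----
+$ oc create secret generic chatbot-configuration-secret \
+  --namespace aap \
+  --from-literal=chatbot_model=<model_name> \
+  --from-literal=chatbot_url=<model_api_endpoint> \
+  --from-literal=chatbot_token=<model_api_token>
+----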
+ +include::platform/proc-install-aap-operator-chatbot.adoc[leveloffset=+2] + +include::platform/proc-create-chatbot-config-secret.adoc[leveloffset=+2] + +include::platform/proc-update-aap-operator-yaml-chatbot.adoc[leveloffset=+2] + +include::platform/con-using-chatbot.adoc[leveloffset=+1] + + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-deprovisioning-mesh.adoc b/downstream/assemblies/platform/assembly-deprovisioning-mesh.adoc index 6581f36051..7f622a7b6f 100644 --- a/downstream/assemblies/platform/assembly-deprovisioning-mesh.adoc +++ b/downstream/assemblies/platform/assembly-deprovisioning-mesh.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] @@ -11,11 +13,16 @@ ifdef::context[:parent-context: {context}] [role="_abstract"] -You can deprovision {AutomationMesh} nodes and instance groups using the {PlatformNameShort} installer. The procedures in this section describe how to deprovision specific nodes or entire groups, with example inventory files for each procedure. +You can deprovision {AutomationMesh} nodes and instance groups using the {PlatformNameShort} installer. +The procedures in this section describe how to deprovision specific nodes or entire groups, with example inventory files for each procedure. include::platform/proc-deprovisioning-mesh-nodes.adoc[leveloffset=+1] +include::platform/proc-deprovision-isolated-nodes.adoc[leveloffset=+1] + include::platform/proc-deprovision-group.adoc[leveloffset=+1] +include::platform/proc-deprovision-isolated-groups.adoc[leveloffset=+1] + ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-disconnected-installation.adoc b/downstream/assemblies/platform/assembly-disconnected-installation.adoc index a823e7d6d1..9f3e882609 100644 --- a/downstream/assemblies/platform/assembly-disconnected-installation.adoc +++ b/downstream/assemblies/platform/assembly-disconnected-installation.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] @@ -13,11 +15,13 @@ If you are not connected to the internet or do not have access to online reposit Before installing {PlatformNameShort} on a disconnected network, you must meet the following prerequisites: -. A created subscription manifest. See link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_operations_guide/assembly-aap-obtain-manifest-files#doc-wrapper[Obtaining a manifest file] for more information. +* A subscription manifest that you can upload to the platform. + +For more information, see link:{URLCentralAuth}/assembly-gateway-licensing#assembly-aap-obtain-manifest-files[Obtaining a manifest file]. -. The {PlatformNameShort} setup bundle at link:{PlatformDownloadUrl}[Customer Portal] is downloaded. +* The {PlatformNameShort} setup bundle at link:{PlatformDownloadUrl}[Customer Portal] is downloaded. -. The link:https://docs.ansible.com/ansible/latest/collections/community/general/nsupdate_module.html[DNS records] for the {ControllerName} and {PrivateHubName} servers are created. +* The link:https://docs.ansible.com/ansible/latest/collections/community/general/nsupdate_module.html[DNS records] for the {ControllerName} and {PrivateHubName} servers are created. 
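+
+For example, if your disconnected network uses a DNS server that accepts dynamic (RFC 2136) updates, a small playbook such as the following sketch can create the required records with the linked `nsupdate` module. The host names and IP addresses are placeholders for your environment, and a production setup typically also supplies TSIG key parameters.
+
+----
+- name: Create DNS records for the disconnected installation
+  hosts: localhost
+  gather_facts: false
+  tasks:
+    - name: Add an A record for the automation controller server
+      community.general.nsupdate:
+        server: 192.0.2.1
+        zone: example.com
+        record: controller
+        type: A
+        value: 192.0.2.10
+
+    - name: Add an A record for the private automation hub server
+      community.general.nsupdate:
+        server: 192.0.2.1
+        zone: example.com
+        record: hub
+        type: A
+        value: 192.0.2.11
+----
+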
include::platform/con-aap-installation-on-disconnected-rhel.adoc[leveloffset=+1] @@ -28,29 +32,39 @@ include::platform/proc-creating-a-new-web-server-to-host-repositories.adoc[level include::platform/proc-accessing-rpm-repositories-for-locally-mounted-dvd.adoc[leveloffset=+1] -include::platform/proc-adding-a-subscription-manifest-to-aap-without-an-internet-connection.adoc[leveloffset=+1] +//include::platform/proc-adding-a-subscription-manifest-to-aap-without-an-internet-connection.adoc[leveloffset=+1] +//removed for 2.5 changes AAP-30807 made by rjgrange include::platform/proc-installing-the-aap-setup-bundle.adoc[leveloffset=+1] include::platform/proc-completing-post-installation-tasks.adoc[leveloffset=+1] -include::platform/proc-importing-collections-into-private-automation-hub.adoc[leveloffset=+1] +//include::platform/proc-importing-collections-into-private-automation-hub.adoc[leveloffset=+1] +//removed for 2.5 changes AAP-30807 made by rjgrange -include::platform/proc-creating-collection-namespace.adoc[leveloffset=+1] +//include::platform/proc-creating-collection-namespace.adoc[leveloffset=+1] +//removed for 2.5 changes AAP-30807 made by rjgrange -include::platform/proc-approving-the-imported-collection.adoc[leveloffset=+1] +//include::platform/proc-approving-the-imported-collection.adoc[leveloffset=+1] +//removed for 2.5 changes AAP-30807 made by rjgrange -include::platform/con-building-an-execution-environment-in-a-disconnected-environment.adoc[leveloffset=+1] +//include::platform/con-building-an-execution-environment-in-a-disconnected-environment.adoc[leveloffset=+1] +//removed for 2.5 changes AAP-30807 made by rjgrange -include::platform/proc-installing-the-ansible-builder-rpm.adoc[leveloffset=+2] +//include::platform/proc-installing-the-ansible-builder-rpm.adoc[leveloffset=+2] +//removed for 2.5 changes AAP-30807 made by rjgrange -include::platform/proc-creating-the-custom-execution-environment-definition.adoc[leveloffset=+2] +//include::platform/proc-creating-the-custom-execution-environment-definition.adoc[leveloffset=+2] +//removed for 2.5 changes AAP-30807 made by rjgrange -include::platform/proc-building-the-custom-execution-environment.adoc[leveloffset=+2] +//include::platform/proc-building-the-custom-execution-environment.adoc[leveloffset=+2] +//removed for 2.5 changes AAP-30807 made by rjgrange -include::platform/proc-uploading-the-custom-execution-environment-to-the-private-hub.adoc[leveloffset=+2] +//include::platform/proc-uploading-the-custom-execution-environment-to-the-private-hub.adoc[leveloffset=+2] +//removed for 2.5 changes AAP-30807 made by rjgrange -include::platform/proc-upgrading-between-minor-aap-releases.adoc[leveloffset=+1] +// Removing references to upgrades for 2.5-ea - AAP-17771 +// include::platform/proc-upgrading-between-minor-aap-releases.adoc[leveloffset=+1] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-edge-manager-architecture.adoc b/downstream/assemblies/platform/assembly-edge-manager-architecture.adoc new file mode 100644 index 0000000000..d1c0c35043 --- /dev/null +++ b/downstream/assemblies/platform/assembly-edge-manager-architecture.adoc @@ -0,0 +1,34 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="assembly-edge-manager-architecture"] + += {RedHatEdge} architecture + +You can manage individual devices or an entire fleet by using the {RedHatEdge}. 
The {RedHatEdge} uses an agent-based architecture that allows for scalable and robust device management, even under limited network conditions.
+
+After you deploy a {RedHatEdge} agent to a device, the agent autonomously manages and monitors the device while periodically communicating with the {RedHatEdge} service to check for new configurations and to report device status.
+
+The {RedHatEdge} supports image-based operating systems.
+You can include the {RedHatEdge} agent and the agent configuration in the image that is distributed to the devices.
+
+Image-based operating systems allow the agent to start a transactional update of the image and to roll back to the earlier version in case of an update error.
+
+The {RedHatEdge} architecture has the following main features:
+
+* Agent
+* Service
+* Image-based operating system
+* API server
+* Database
+* Device
+* Device fleet
+
+Learn more from the following sections:
+
+* xref:edge-manager-agent-service[{RedHatEdge} agent and service]
+* xref:edge-manager-api-server[{RedHatEdge} API server]
+
+include::platform/con-edge-manager-agent-service.adoc[leveloffset=+1]
+include::platform/con-edge-manager-api-server.adoc[leveloffset=+1]
+include::platform/con-edge-manager-device-enroll.adoc[leveloffset=+1]
+include::platform/con-edge-manager-enroll-meth.adoc[leveloffset=+2]
diff --git a/downstream/assemblies/platform/assembly-edge-manager-device-fleets.adoc b/downstream/assemblies/platform/assembly-edge-manager-device-fleets.adoc
new file mode 100644
index 0000000000..dda676309b
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-edge-manager-device-fleets.adoc
@@ -0,0 +1,60 @@
+:_mod-docs-content-type: ASSEMBLY
+
+[id="assembly-edge-manager-device-fleets"]
+
+= Device fleets
+
+The {RedHatEdge} simplifies the management of a large number of devices and workloads through _device fleets_.
+A fleet is a resource that defines a group of devices governed by a common device template and management policies.
+
+When you make a change to the device template, all devices in the fleet receive the changes when the {RedHatEdge} agent detects the new target specification.
+
+Device monitoring in a fleet is also simplified because you can check the status summary of the whole fleet.
+
+Fleet-level management offers the following advantages:
+
+* Scales your operations because you perform operations only once for each fleet instead of once for each device.
+* Minimizes the risk of configuration mistakes and configuration drift.
+* Automatically applies the target configuration when you add devices to the fleet or replace devices in the fleet.
+
+The fleet specification consists of the following features:
+
+Label selector:: Determines which devices are part of the fleet.
+Device template:: Defines the configuration that the {RedHatEdge} enforces on devices in the fleet.
+Policies:: Govern how devices are managed, for example, how changes to the device template are rolled out to the devices.
+
+You can have both individually managed and fleet-managed devices at the same time.
+
+When you select a device into a fleet, the {RedHatEdge} creates the device specification for the new device based on the device template.
+If you update the device template for a fleet or a new device joins the fleet, the {RedHatEdge} enforces the new specification in the fleet.
+
+If a device is not selected into any fleets, the device is considered user-managed or unmanaged.
+For user-managed devices, you must update the device specification either manually or through an external automation.
+
+[IMPORTANT]
+====
+A device cannot be a member of more than one fleet at the same time.
+====
+
+For more information, see xref:edge-manager-labels[Labels and label selectors].
+
+include::platform/ref-edge-manager-device-selection.adoc[leveloffset=+1]
+
+include::platform/ref-edge-manager-device-templates.adoc[leveloffset=+1]
+
+include::platform/proc-edge-manager-add-devices-ui.adoc[leveloffset=+1]
+
+include::platform/proc-edge-manager-add-devices-cli.adoc[leveloffset=+1]
+
+include::platform/con-edge-manager-rollout-device-selection.adoc[leveloffset=+1]
+
+include::platform/con-edge-manager-device-targeting.adoc[leveloffset=+2]
+
+include::platform/con-edge-manager-device-selection-strat.adoc[leveloffset=+2]
+
+include::platform/con-edge-manager-limit-device.adoc[leveloffset=+2]
+
+include::platform/ref-edge-manager-success-threshold.adoc[leveloffset=+2]
+
+include::platform/con-edge-manager-rollout-disruption.adoc[leveloffset=+1]
+
+include::platform/ref-edge-manager-disruption-parameters.adoc[leveloffset=+2]
diff --git a/downstream/assemblies/platform/assembly-edge-manager-images.adoc b/downstream/assemblies/platform/assembly-edge-manager-images.adoc
new file mode 100644
index 0000000000..8cd19f6299
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-edge-manager-images.adoc
@@ -0,0 +1,64 @@
+:_mod-docs-content-type: ASSEMBLY
+
+[id="assembly-edge-manager-images"]
+
+= Operating system images for use with the {RedHatEdge}
+
+Image-based operating systems allow the operating system and its configuration and applications to be versioned, deployed, and updated as a single unit.
+Using an image-based operating system reduces operational risks by doing the following:
+
+* Minimizing potential drift between what is tested and what is deployed to a large number of devices.
+* Minimizing the risk of failed updates that require expensive maintenance or replacement through transactional updates and rollbacks.
+
+The {RedHatEdge} focuses on image-based Linux operating systems that run bootable container images (`bootc`).
+
+For more information, see link:https://bootc-dev.github.io/bootc/[bootc].
+
+[IMPORTANT]
+====
+The `bootc` tool does not update package-based operating systems.
+==== + +include::platform/proc-edge-manager-image-build.adoc[leveloffset=+1] + +include::platform/ref-edge-manager-images-special-considerations.adoc[leveloffset=+1] + +include::platform/con-edge-manager-buildtime-runtime.adoc[leveloffset=+2] + +include::platform/con-edge-manager-usr-dir.adoc[leveloffset=+2] + +include::platform/con-edge-manager-drop-dir.adoc[leveloffset=+2] + +include::platform/con-edge-manager-os-img-script.adoc[leveloffset=+2] + +include::platform/con-edge-manager-build-bootc.adoc[leveloffset=+1] + +include::platform/con-edge-manager-build-prereq.adoc[leveloffset=+2] + +include::platform/proc-edge-manager-install-CLI.adoc[leveloffset=+2] + +include::platform/proc-edge-manager-log-into-CLI.adoc[leveloffset=+2] + +include::platform/proc-edge-manager-request-cert.adoc[leveloffset=+2] + +include::platform/proc-edge-manager-image-pullsecrets.adoc[leveloffset=+2] + +include::platform/proc-edge-manager-build-bootc-image.adoc[leveloffset=+2] + +include::platform/proc-edge-manager-build-sign-image.adoc[leveloffset=+2] + +include::platform/proc-edge-manager-build-disk-image.adoc[leveloffset=+2] + +include::platform/proc-edge-manager-sign-disk-image.adoc[leveloffset=+2] + +include::platform/ref-edge-manager-additional-resources-images.adoc[leveloffset=+2] + +include::platform/ref-edge-manager-platform-requirements.adoc[leveloffset=+2] + +include::platform/proc-edge-manager-virt.adoc[leveloffset=+3] + +include::platform/proc-edge-manager-build-image-bootc.adoc[leveloffset=+3] + +include::platform/proc-edge-manager-build-image-QCoW2.adoc[leveloffset=+3] + +include::platform/proc-edge-manager-vmware.adoc[leveloffset=+3] diff --git a/downstream/assemblies/platform/assembly-edge-manager-install.adoc b/downstream/assemblies/platform/assembly-edge-manager-install.adoc new file mode 100644 index 0000000000..a55f8cb305 --- /dev/null +++ b/downstream/assemblies/platform/assembly-edge-manager-install.adoc @@ -0,0 +1,33 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="assembly-edge-manager-install"] + += Installing the {RedHatEdge} on {PlatformNameShort} + +Install the {RedHatEdge} to manage edge devices and applications at scale. +This guide focuses on a standalone deployment of the {RedHatEdge} on {RHEL} alongside {PlatformNameShort}. 
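+
+For illustration, a standalone deployment is installed from RPM packages on {RHEL} and managed with systemd. The package and service unit names in the following sketch are assumptions based on the upstream Flight Control project; confirm the exact names in the installation procedure that follows.
+
+----
+# Install the standalone service packages (package name is an assumption)
+$ sudo dnf install flightctl-services
+
+# Start the services (unit name is an assumption)
+$ sudo systemctl start flightctl.target
+----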
+
+// For Tech Preview there is only one option, bootc not yet available:
+
+//You can select one of two methods to install the {RedHatEdge}:
+
+//* RPM Installation (on an existing {RHEL} (RHEL) system)
+//* Bootc image appliance (with the {RedHatEdge} pre-installed)
+
+include::platform/proc-edge-manager-install-rpm-package.adoc[leveloffset=+1]
+
+include::platform/con-edge-manager-set-up-oauth.adoc[leveloffset=+1]
+
+include::platform/proc-edge-manager-oauth-auto.adoc[leveloffset=+2]
+
+include::platform/proc-edge-manager-oauth-manually.adoc[leveloffset=+2]
+
+include::platform/proc-edge-manager-integrate-aap.adoc[leveloffset=+2]
+
+include::platform/ref-edge-manager-certificates.adoc[leveloffset=+1]
+
+//include::platform/proc-edge-manager-bootc.adoc[leveloffset=+1]
+
+//include::platform/con-edge-manager-rbac-auth.adoc[leveloffset=+1]
+//include::platform/ref-edge-manager-rbac-roles.adoc[leveloffset=+2]
+//include::platform/ref-edge-manager-auth-resources.adoc[leveloffset=+2]
diff --git a/downstream/assemblies/platform/assembly-edge-manager-intro.adoc b/downstream/assemblies/platform/assembly-edge-manager-intro.adoc
new file mode 100644
index 0000000000..f18d167557
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-edge-manager-intro.adoc
@@ -0,0 +1,22 @@
+:_mod-docs-content-type: ASSEMBLY
+
+[id="assembly-edge-manager-intro"]
+
+= {RedHatEdge} overview
+
+The {RedHatEdge} provides streamlined management of edge devices and applications through a declarative approach.
+When you define the required state of your edge devices, including operating system versions, host configurations, and application deployments, the {RedHatEdge} automatically implements and maintains these configurations across your entire device fleet.
+
+The {RedHatEdge} on {PlatformNameShort} integrates closely with your automation.
+You can focus on orchestrating the environment without worrying about updating the operating system.
+
+See the following topics to learn more about using the {RedHatEdge} on {PlatformNameShort}:
+
+* xref:assembly-edge-manager-architecture[{RedHatEdge} architecture]
+* xref:assembly-edge-manager-install[Installing the {RedHatEdge}]
+* xref:assembly-edge-manager-images[Operating system images for use with the {RedHatEdge}]
+* xref:edge-manager-provisioning-devices[Provision devices]
+* xref:assembly-edge-manager-manage-devices[Manage devices]
+* xref:assembly-edge-manager-device-fleets[Device fleets]
+
+//include::platform/con-edge-manager-core-capabilities.adoc[leveloffset=+1]
diff --git a/downstream/assemblies/platform/assembly-edge-manager-manage-apps.adoc b/downstream/assemblies/platform/assembly-edge-manager-manage-apps.adoc
new file mode 100644
index 0000000000..0ec7f90eae
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-edge-manager-manage-apps.adoc
@@ -0,0 +1,25 @@
+:_mod-docs-content-type: ASSEMBLY
+
+[id="edge-manager-manage-apps"]
+
+= Managing applications on an edge device
+
+You can deploy, update, or remove applications on a device by updating the list of applications in the device specification.
+When the {RedHatEdge} agent checks in and detects the change in the specification, the agent downloads any new or updated application packages and images from an Open Container Initiative (OCI)-compatible registry.
+Then, the agent deploys the packages to the appropriate application runtime or removes them from that runtime.
+
+The {RedHatEdge} supports the `podman-compose` tool as the application runtime and format.
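+
+For illustration, an application package for the `podman-compose` runtime typically contains a Compose file. The following minimal sketch assumes a single service; the image name and port mapping are placeholders for your own application.
+
+----
+services:
+  web:
+    image: quay.io/example/my-edge-app:1.0
+    restart: unless-stopped
+    ports:
+      - "8080:8080"
+----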
+
+.Prerequisites
+
+* You must install the {RedHatEdge} CLI.
+* You must log in to the {RedHatEdge} service.
+* Your device must run an operating system image with the `podman-compose` tool installed.
+
+For more information, see xref:edge-manager-build-bootc[Building a _bootc_ operating system image for use with the {RedHatEdge}].
+
+include::platform/proc-edge-manager-build-app-packages.adoc[leveloffset=+1]
+
+include::platform/ref-edge-manager-specify-apps-inline.adoc[leveloffset=+1]
+
+include::platform/proc-edge-manager-deploy-apps.adoc[leveloffset=+1]
diff --git a/downstream/assemblies/platform/assembly-edge-manager-manage-devices.adoc b/downstream/assemblies/platform/assembly-edge-manager-manage-devices.adoc
new file mode 100644
index 0000000000..0a33f87a10
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-edge-manager-manage-devices.adoc
@@ -0,0 +1,64 @@
+:_mod-docs-content-type: ASSEMBLY
+
+[id="assembly-edge-manager-manage-devices"]
+
+= Manage devices
+
+The {RedHatEdge} manages the device lifecycle, from enrollment to decommissioning.
+The device lifecycle also includes device management, such as organizing, monitoring, and updating your devices with the {RedHatEdge}.
+
+You can manage your devices individually or in a fleet.
+With the {RedHatEdge}, you can manage a whole fleet of devices as a single object instead of managing many devices individually.
+
+You only need to specify the required configuration once, and then the {RedHatEdge} applies the configuration to all devices in the fleet.
+
+Understanding individual device management is the foundation for managing devices in a fleet.
+You might want to manage your devices individually in the following scenarios:
+
+* If a few devices have different configurations.
+* If you use external automation for updating the device.
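+
+For illustration, an individually managed device is represented by a device resource similar to the following sketch. The API version, label names, and image reference shown are illustrative assumptions; see the enrollment and operating system update sections for the authoritative fields.
+
+----
+apiVersion: flightctl.io/v1alpha1
+kind: Device
+metadata:
+  name: factory-device-01
+  labels:
+    site: factory-berlin
+    role: gateway
+spec:
+  os:
+    image: quay.io/example/edge-os:1.0
+----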
+ +The following sections focus on managing individual devices: + +* xref:edge-manager-enroll[Enroll devices] +* xref:edge-manager-view-devices[View devices] +* xref:edge-manager-labels[Labels and label selectors] +* xref:edge-manager-update-labels[Updating labels on the CLI] +* xref:edge-manager-update-os[Update the operating system] +* xref:edge-manager-manage-os-config[Operating system configuration for edge devices] + +include::platform/con-edge-manager-enroll.adoc[leveloffset=+1] +include::platform/proc-edge-manager-enroll-device-cli.adoc[leveloffset=+2] + +include::platform/con-edge-manager-view-devices.adoc[leveloffset=+1] +include::platform/proc-edge-manager-view-device-inventory-ui.adoc[leveloffset=+2] +include::platform/proc-edge-manager-view-device-inventory-cli.adoc[leveloffset=+2] +include::platform/con-edge-manager-labels.adoc[leveloffset=+2] +include::platform/proc-edge-manager-view-device-labels-ui.adoc[leveloffset=+3] +include::platform/proc-edge-manager-view-devices-cli.adoc[leveloffset=+3] +include::platform/proc-edge-manager-update-labels.adoc[leveloffset=+3] +include::platform/ref-edge-manager-field-selectors.adoc[leveloffset=+2] +include::platform/ref-edge-manager-additional-fields.adoc[leveloffset=+3] +include::platform/ref-edge-manager-fields-discovery.adoc[leveloffset=+3] +include::platform/ref-edge-manager-supported-operators.adoc[leveloffset=+3] + +include::platform/con-edge-manager-update-os.adoc[leveloffset=+1] +include::platform/proc-edge-manager-update-os-cli.adoc[leveloffset=+2] + +include::platform/con-edge-manager-manage-os-config.adoc[leveloffset=+1] +include::platform/con-edge-manager-config-providers.adoc[leveloffset=+2] +include::platform/ref-edge-manager-config-git-repo.adoc[leveloffset=+3] +include::platform/ref-edge-manager-K8s-cluster.adoc[leveloffset=+3] +include::platform/ref-edge-manager-config-http.adoc[leveloffset=+3] +include::platform/ref-edge-manager-config-inline.adoc[leveloffset=+3] +include::platform/proc-edge-manager-config-git-cli.adoc[leveloffset=+2] + +include::platform/ref-edge-manager-device-lifecycle.adoc[leveloffset=+1] +include::platform/ref-edge-manager-rule-files.adoc[leveloffset=+2] + +include::platform/ref-edge-manager-monitor-device.adoc[leveloffset=+1] +//include::platform/proc-edge-manager-monitor-device-resources-web-ui.adoc[leveloffset=+2] +include::platform/proc-edge-manager-monitor-device-resources-cli.adoc[leveloffset=+2] + +//include::platform/con-edge-manager-access-devices.adoc[leveloffset=+1] +//include::platform/proc-edge-manager-access-devices-cli.adoc[leveloffset=+2] diff --git a/downstream/assemblies/platform/assembly-edge-manager-provisioning-devices.adoc b/downstream/assemblies/platform/assembly-edge-manager-provisioning-devices.adoc new file mode 100644 index 0000000000..ad03af0558 --- /dev/null +++ b/downstream/assemblies/platform/assembly-edge-manager-provisioning-devices.adoc @@ -0,0 +1,22 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="edge-manager-provisioning-devices"] + += Provision devices + +You can provision devices with the {RedHatEdge} in different environments. +Use the operating system image or disk image that you built for use with the {RedHatEdge}. +Depending on your target environment, provision a physical or virtual device. + +//Is there a certain AAP persona that can do this? 
+//*Required access:* Cluster administrator
+
+See the following sections:
+
+* xref:edge-manager-provisioning-physical[Provision physical devices]
+* xref:edge-manager-provisioning-openshift-virt[Provision devices with {OCPVShort}]
+
+include::platform/con-edge-manager-provisioning-physical.adoc[leveloffset=+1]
+include::platform/con-edge-manager-provisioning-openshift-virt.adoc[leveloffset=+1]
+include::platform/proc-edge-manager-provision-cloudinit-config.adoc[leveloffset=+2]
+include::platform/proc-edge-manager-provision-virt-create.adoc[leveloffset=+2]
diff --git a/downstream/assemblies/platform/assembly-edge-manager-troubleshooting.adoc b/downstream/assemblies/platform/assembly-edge-manager-troubleshooting.adoc
new file mode 100644
index 0000000000..c276c7851e
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-edge-manager-troubleshooting.adoc
@@ -0,0 +1,11 @@
+:_mod-docs-content-type: ASSEMBLY
+
+[id="assembly-edge-manager-troubleshooting"]
+
+= Troubleshooting {RedHatEdge}
+
+When working with devices in {RedHatEdge}, you might see issues related to configuration, connectivity, or deployment.
+Troubleshooting these issues requires understanding how device configurations are applied, how to check logs, and how to verify communication between the device and the service.
+
+include::platform/proc-edge-manager-view-device-config.adoc[leveloffset=+1]
+include::platform/proc-edge-manager-generate-device-log.adoc[leveloffset=+1]
diff --git a/downstream/assemblies/platform/assembly-gateway-licensing.adoc b/downstream/assemblies/platform/assembly-gateway-licensing.adoc
new file mode 100644
index 0000000000..ba5e965498
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-gateway-licensing.adoc
@@ -0,0 +1,29 @@
+:_mod-docs-content-type: ASSEMBLY
+
+ifdef::context[:parent-context: {context}]
+
+[id="assembly-gateway-licensing"]
+= Managing {PlatformNameShort} licensing, updates, and support
+
+:context: licensing-gw
+
+Ansible is an open source software project and is licensed under the GNU General Public License version 3, as described in the link:https://github.com/ansible/ansible/blob/devel/COPYING[Ansible Source Code].
+
+You must have valid subscriptions attached before installing {PlatformNameShort}.
+
+For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#proc-attaching-subscriptions[Attaching Subscriptions].
+
+include::platform/ref-controller-trial-evaluation.adoc[leveloffset=+1]
+
+include::platform/ref-controller-licenses.adoc[leveloffset=+1]
+
+include::platform/ref-controller-node-counting.adoc[leveloffset=+1]
+
+include::platform/ref-controller-subscription-types.adoc[leveloffset=+1]
+
+include::platform/proc-attaching-subscriptions.adoc[leveloffset=+1]
+
+include::assembly-aap-manifest-files.adoc[leveloffset=+1]
+
+include::assembly-aap-activate.adoc[leveloffset=+1]
+
\ No newline at end of file
diff --git a/downstream/assemblies/platform/assembly-gs-auto-dev.adoc b/downstream/assemblies/platform/assembly-gs-auto-dev.adoc
new file mode 100644
index 0000000000..7c55c4f532
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-gs-auto-dev.adoc
@@ -0,0 +1,89 @@
+ifdef::context[:parent-context-of-assembly-gs-auto-dev: {context}]
+
+:_mod-docs-content-type: ASSEMBLY
+
+ifndef::context[]
+[id="assembly-gs-auto-dev"]
+endif::[]
+ifdef::context[]
+[id="assembly-gs-auto-dev_{context}"]
+endif::[]
+
+:context: assembly-gs-auto-dev
+
+= Getting started as an automation developer
+
+As an automation developer, you can use {PlatformNameShort} to implement your organization's automation strategy.
+{PlatformNameShort} can help you write, test, and share automation content, as well as download and use Red Hat certified collections.
+This guide walks you through the basic steps to get set up as an automation developer on {PlatformNameShort}, including how to:
+
+* Set up your development environment
+* Create, publish, and use custom automation content
+* Build and use {ExecEnvName} and decision environments
+* Create and run rulebook activations for {EDAName}
+* Create and use job templates
+
+include::platform/con-gs-setting-up-dev-env.adoc[leveloffset=+1]
+
+include::platform/con-gs-create-automation-content.adoc[leveloffset=+1]
+
+include::platform/con-gs-define-events-rulebooks.adoc[leveloffset=+1]
+
+include::platform/con-gs-ansible-roles.adoc[leveloffset=+1]
+
+include::platform/proc-creating-ansible-role.adoc[leveloffset=+2]
+
+include::platform/con-gs-learn-about-collections.adoc[leveloffset=+1]
+
+include::platform/proc-gs-publish-to-a-collection.adoc[leveloffset=+1]
+
+include::platform/proc-gs-upload-collection.adoc[leveloffset=+2]
+
+include::platform/con-gs-execution-env.adoc[leveloffset=+1]
+
+include::platform/proc-gs-use-base-execution-env.adoc[leveloffset=+2]
+
+include::platform/con-gs-about-builder.adoc[leveloffset=+2]
+
+include::platform/proc-gs-add-ee-to-job-template.adoc[leveloffset=+2]
+
+include::platform/ref-gs-about-container-registries.adoc[leveloffset=+2]
+
+include::platform/con-gs-build-decision-env.adoc[leveloffset=+1]
+
+include::platform/proc-gs-auto-dev-set-up-decision-env.adoc[leveloffset=+2]
+
+include::platform/proc-gs-auto-dev-create-automation-execution-proj.adoc[leveloffset=+1]
+
+include::platform/proc-gs-auto-dev-create-automation-decision-proj.adoc[leveloffset=+1]
+
+include::platform/con-gs-auto-dev-about-inv.adoc[leveloffset=+1]
+// [hherbly] this repeats module above include::platform/proc-controller-create-inventory.adoc[leveloffset=+2]
+
+include::platform/con-gs-auto-dev-job-templates.adoc[leveloffset=+1]
+
+include::platform/proc-controller-getting-started-with-job-templates.adoc[leveloffset=+2]
+
+include::platform/proc-set-domain-of-interest.adoc[leveloffset=+2]
+
+include::platform/proc-gs-auto-dev-create-template.adoc[leveloffset=+2]
+// [hherbly] incomplete module?
include::platform/proc-gs-auto-dev-run-template.adoc[leveloffset=+2]
+
+include::platform/proc-controller-edit-job-template.adoc[leveloffset=+2]
+
+include::platform/con-gs-rulebook-activations.adoc[leveloffset=+1]
+
+include::platform/proc-gs-eda-set-up-rulebook-activation.adoc[leveloffset=+2]
+
+include::eda/con-eda-rulebook-activation-list-view.adoc[leveloffset=+3]
+
+include::eda/proc-eda-enable-rulebook-activations.adoc[leveloffset=+2]
+
+include::eda/proc-eda-restart-rulebook-activations.adoc[leveloffset=+2]
+
+include::eda/proc-eda-delete-rulebook-activations.adoc[leveloffset=+2]
+
+include::eda/proc-eda-activate-webhook.adoc[leveloffset=+2]
+
+ifdef::parent-context-of-assembly-gs-auto-dev[:context: {parent-context-of-assembly-gs-auto-dev}]
+ifndef::parent-context-of-assembly-gs-auto-dev[:!context:]
diff --git a/downstream/assemblies/platform/assembly-gs-auto-op.adoc b/downstream/assemblies/platform/assembly-gs-auto-op.adoc
new file mode 100644
index 0000000000..74901cc012
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-gs-auto-op.adoc
@@ -0,0 +1,72 @@
+ifdef::context[:parent-context-of-assembly-gs-auto-op: {context}]
+
+:_mod-docs-content-type: ASSEMBLY
+
+ifndef::context[]
+[id="assembly-gs-auto-op"]
+endif::[]
+ifdef::context[]
+[id="assembly-gs-auto-op_{context}"]
+endif::[]
+
+:context: assembly-gs-auto-op
+
+= Getting started as an automation operator
+
+As an automation operator, you can use {PlatformNameShort} to organize and manage automation projects with Red Hat certified collections or custom content for your organization.
+
+To get started as an automation operator, see the following sections:
+
+* link:{URLGettingStarted}/assembly-gs-auto-op#con-gs-playbooks[Get started with playbooks]
+* link:{URLGettingStarted}/assembly-gs-auto-op#proc-gs-publish-to-a-collection_assembly-gs-auto-op[Publishing to a collection in a source code manager]
+* link:{URLGettingStarted}/assembly-gs-auto-op#proc-gs-auto-op-projects[Automation execution projects]
+* link:{URLGettingStarted}/assembly-gs-auto-op#con-gs-execution-env_assembly-gs-auto-op[Build and use an {ExecEnvShort}]
+* link:{URLGettingStarted}/assembly-gs-auto-op#con-gs-auto-op-job-templates[Job templates]
+* link:{URLGettingStarted}/assembly-gs-auto-op#con-gs-auto-op-about-inv[About inventories]
+* link:{URLGettingStarted}/assembly-gs-auto-op#con-gs-automation-execution-jobs[Automation execution jobs]
+
+include::platform/con-gs-playbooks.adoc[leveloffset=+1]
+
+include::platform/proc-gs-write-playbook.adoc[leveloffset=+1]
+
+include::platform/con-gs-ansible-roles.adoc[leveloffset=+1]
+
+include::platform/proc-creating-ansible-role.adoc[leveloffset=+2]
+
+include::platform/con-gs-ansible-content.adoc[leveloffset=+1]
+
+include::platform/con-gs-learn-about-collections.adoc[leveloffset=+2]
+
+// [hherbly] removing because it repeats modules above include::platform/proc-gs-browse-content.adoc[leveloffset=+2]
+include::platform/proc-gs-downloading-content.adoc[leveloffset=+2]
+
+include::platform/proc-gs-publish-to-a-collection.adoc[leveloffset=+1]
+
+include::platform/con-gs-manage-collections.adoc[leveloffset=+2]
+
+include::platform/proc-gs-upload-collection.adoc[leveloffset=+2]
+
+include::platform/con-gs-execution-env.adoc[leveloffset=+1]
+
+include::platform/proc-gs-use-base-execution-env.adoc[leveloffset=+2]
+
+include::platform/proc-controller-use-an-exec-env.adoc[leveloffset=+2]
+
+include::platform/proc-gs-auto-op-projects.adoc[leveloffset=+1]
+
+include::platform/con-gs-auto-op-job-templates.adoc[leveloffset=+1]
+
+include::platform/proc-gs-auto-op-launch-template.adoc[leveloffset=+2]
+
+include::platform/con-gs-auto-op-about-inv.adoc[leveloffset=+1]
+
+include::platform/con-gs-auto-op-execute-inv.adoc[leveloffset=+2]
+
+include::platform/con-gs-automation-execution-jobs.adoc[leveloffset=+1]
+
+include::platform/proc-gs-auto-op-review-job-status.adoc[leveloffset=+2]
+
+include::platform/proc-gs-auto-op-review-job-output.adoc[leveloffset=+2]
+
+ifdef::parent-context-of-assembly-gs-auto-op[:context: {parent-context-of-assembly-gs-auto-op}]
+ifndef::parent-context-of-assembly-gs-auto-op[:!context:]
diff --git a/downstream/assemblies/platform/assembly-gs-key-functionality.adoc b/downstream/assemblies/platform/assembly-gs-key-functionality.adoc
new file mode 100644
index 0000000000..b70ccf774d
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-gs-key-functionality.adoc
@@ -0,0 +1,43 @@
+ifdef::context[:parent-context-of-assembly-gs-key-functionality: {context}]
+
+:_mod-docs-content-type: ASSEMBLY
+
+ifndef::context[]
+[id="assembly-gs-key-functionality"]
+endif::[]
+ifdef::context[]
+[id="assembly-gs-key-functionality_{context}"]
+endif::[]
+
+:context: assembly-gs-key-functionality
+
+= Key functionality and concepts
+
+With {PlatformNameShort}, you can create, manage, and scale automation for your organization across users, teams, and regions. The following sections describe the key functionality and concepts of {PlatformNameShort} in more detail.
+
+The release of {PlatformNameShort} {PlatformVers} introduces an updated, unified user interface (UI) that allows you to interact with and manage each part of the platform.
+
+include::snippets/snip-gateway-component-description.adoc[leveloffset=+1]
+
+include::platform/con-gw-activity-stream.adoc[leveloffset=+1]
+
+include::platform/con-gs-automation-execution.adoc[leveloffset=+1]
+
+include::platform/con-gs-automation-content.adoc[leveloffset=+1]
+
+include::platform/con-gs-automation-decisions.adoc[leveloffset=+1]
+
+include::platform/con-gs-automation-mesh.adoc[leveloffset=+1]
+
+include::platform/con-gs-ansible-lightspeed.adoc[leveloffset=+1]
+
+include::platform/con-gs-developer-tools.adoc[leveloffset=+1]
+
+include::platform/ref-gs-install-config.adoc[leveloffset=+1]
+
+include::platform/con-gs-dashboard-components.adoc[leveloffset=+1]
+
+include::platform/con-gs-final-set-up.adoc[leveloffset=+1]
+
+ifdef::parent-context-of-assembly-gs-key-functionality[:context: {parent-context-of-assembly-gs-key-functionality}]
+ifndef::parent-context-of-assembly-gs-key-functionality[:!context:]
diff --git a/downstream/assemblies/platform/assembly-gs-platform-admin.adoc b/downstream/assemblies/platform/assembly-gs-platform-admin.adoc
new file mode 100644
index 0000000000..43b4efc096
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-gs-platform-admin.adoc
@@ -0,0 +1,31 @@
+:_mod-docs-content-type: ASSEMBLY
+
+[id="assembly-gs-platform-admin"]
+
+= Getting started as a platform administrator
+
+As a platform administrator, you can use {PlatformNameShort} to enable your users and teams to develop and run automation.
+
+This guide walks you through the basic steps to get set up as an administrator for {PlatformNameShort}, including configuring and maintaining the platform for users.
+
+To get started as an administrator, see the following:
+
+* xref:proc-gs-logging-in[Logging in for the first time]
+* xref:con-gs-config-authentication[Configure authentication]
+* xref:con-gs-manage-RBAC[Managing user access with role-based access control]
+
+include::platform/proc-gs-logging-in.adoc[leveloffset=+1]
+
+include::platform/proc-adding-a-subscription.adoc[leveloffset=+1]
+
+include::platform/con-gs-config-authentication.adoc[leveloffset=+1]
+
+include::platform/con-gs-manage-RBAC.adoc[leveloffset=+1]
+
+include::platform/proc-controller-create-organization.adoc[leveloffset=+1]
+
+include::platform/proc-controller-creating-a-team.adoc[leveloffset=+1]
+
+include::platform/proc-gs-platform-admin-create-user.adoc[leveloffset=+1]
+
+include::platform/proc-gs-social-auth-github.adoc[leveloffset=+1]
diff --git a/downstream/assemblies/platform/assembly-gw-config-authentication-type.adoc b/downstream/assemblies/platform/assembly-gw-config-authentication-type.adoc
new file mode 100644
index 0000000000..143909d7b3
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-gw-config-authentication-type.adoc
@@ -0,0 +1,60 @@
+ifdef::context[:parent-context: {context}]
+
+:_mod-docs-content-type: ASSEMBLY
+
+[id="gw-config-authentication-type"]
+
+= Configuring an authentication type
+
+{PlatformNameShort} provides multiple authenticator plugins that you can configure to simplify the login experience for your organization. The following authenticator plugins are provided:
+
+* xref:gw-local-authentication[Local]
+* xref:controller-set-up-LDAP[LDAP]
+* xref:controller-set-up-SAML[SAML]
+* xref:controller-set-up-tacacs[TACACS+]
+* xref:controller-set-up-radius[Radius]
+* xref:controller-set-up-azure[Azure]
+* xref:proc-controller-google-oauth2-settings[Google OAuth]
+* xref:controller-set-up-generic-oidc[Generic OIDC]
+* xref:gw-keycloak-authentication[Keycloak]
+* xref:proc-controller-github-settings[GitHub]
+* xref:proc-controller-github-organization-settings[GitHub organization]
+* xref:proc-controller-github-team-settings[GitHub team]
+* xref:proc-controller-github-enterprise-settings[GitHub enterprise]
+* xref:proc-controller-github-enterprise-org-settings[GitHub enterprise organization]
+* xref:proc-controller-github-enterprise-team-settings[GitHub enterprise team]
+
+include::platform/proc-gw-local-authentication.adoc[leveloffset=+1]
+
+include::platform/proc-controller-set-up-LDAP.adoc[leveloffset=+1]
+
+include::platform/proc-controller-set-up-SAML.adoc[leveloffset=+1]
+
+include::platform/proc-controller-configure-transparent-SAML.adoc[leveloffset=+2]
+
+include::platform/proc-controller-set-up-tacacs+.adoc[leveloffset=+1]
+
+include::platform/proc-controller-set-up-azure.adoc[leveloffset=+1]
+
+include::platform/proc-controller-google-oauth2-settings.adoc[leveloffset=+1]
+
+include::platform/proc-controller-set-up-generic-oidc.adoc[leveloffset=+1]
+
+include::platform/proc-gw-config-keycloak-settings.adoc[leveloffset=+1]
+
+include::platform/proc-controller-github-settings.adoc[leveloffset=+1]
+
+include::platform/proc-controller-github-organization-settings.adoc[leveloffset=+1]
+
+include::platform/proc-controller-github-team-settings.adoc[leveloffset=+1]
+
+include::platform/proc-controller-github-enterprise-settings.adoc[leveloffset=+1]
+
+include::platform/proc-controller-github-enterprise-org-settings.adoc[leveloffset=+1]
+
+include::platform/proc-controller-github-enterprise-team-settings.adoc[leveloffset=+1]
+
+include::platform/proc-controller-set-up-radius.adoc[leveloffset=+1]
+
+ifdef::parent-context[:context: {parent-context}]
+ifndef::parent-context[:!context:]
diff --git a/downstream/assemblies/platform/assembly-gw-configure-authentication.adoc b/downstream/assemblies/platform/assembly-gw-configure-authentication.adoc
new file mode 100644
index 0000000000..4118efba9f
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-gw-configure-authentication.adoc
@@ -0,0 +1,33 @@
+ifdef::context[:parent-context: {context}]
+
+:_mod-docs-content-type: ASSEMBLY
+
+[id="gw-configure-authentication"]
+
+= Configuring authentication in {PlatformNameShort}
+
+Using the authentication settings in {PlatformNameShort}, you can set up a simplified login through several authentication methods, such as LDAP and SAML.
+Depending on the authentication method you select, you must enter different information to complete the configuration. Ensure that you include all the information that your configuration requires.
+
+== Prerequisites
+
+* A running installation of {PlatformNameShort} {PlatformVers}
+* A running instance of your authentication source
+* Administrator rights to {PlatformNameShort}
+* Any connection information needed to connect {PlatformNameShort} {PlatformVers} to your source (see individual authentication types for details).
+
+include::platform/con-gw-pluggable-authentication.adoc[leveloffset=+1]
+
+include::platform/con-gw-create-authentication.adoc[leveloffset=+1]
+
+include::platform/proc-gw-select-auth-type.adoc[leveloffset=+2]
+
+include::platform/proc-gw-define-rules-triggers.adoc[leveloffset=+2]
+
+include::platform/proc-gw-adjust-mapping-order.adoc[leveloffset=+2]
+
+include::platform/proc-aap-enable-disable-auth.adoc[leveloffset=+1]
+
+ifdef::parent-context[:context: {parent-context}]
+ifndef::parent-context[:!context:]
+
\ No newline at end of file
diff --git a/downstream/assemblies/platform/assembly-gw-managing-access.adoc b/downstream/assemblies/platform/assembly-gw-managing-access.adoc
new file mode 100644
index 0000000000..c21124dd11
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-gw-managing-access.adoc
@@ -0,0 +1,19 @@
+:_mod-docs-content-type: ASSEMBLY
+
+[id="gw-managing-access"]
+
+= Managing access with role-based access control
+
+:context: gw-manage-rbac
+
+Role-based access control (RBAC) restricts user access based on the role they are assigned within an organization in {PlatformNameShort}. The roles in RBAC refer to the levels of access that users have to {PlatformNameShort} components and resources.
+
+You can control what users can do with the components of {PlatformNameShort} at a broad or granular level depending on your RBAC policy. You can designate whether a user is a system administrator or a normal user, and align roles and access permissions with their positions within the organization.
+
+Roles can be defined with multiple permissions that can then be assigned to resources, teams, and users. The permissions that make up a role dictate what the assigned role allows, and are allocated with only the access needed for a user to perform the tasks appropriate for their role. For example, a role that includes only permissions to view and execute job templates lets the users it is assigned to run automation without modifying it.
+
+include::assembly-controller-organizations.adoc[leveloffset=+1]
+include::assembly-controller-teams.adoc[leveloffset=+1]
+include::assembly-controller-users.adoc[leveloffset=+1]
+include::assembly-gw-resources.adoc[leveloffset=+1]
+
\ No newline at end of file
diff --git a/downstream/assemblies/platform/assembly-gw-managing-authentication.adoc b/downstream/assemblies/platform/assembly-gw-managing-authentication.adoc
new file mode 100644
index 0000000000..d2ee680bea
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-gw-managing-authentication.adoc
@@ -0,0 +1,22 @@
+ifdef::context[:parent-context: {context}]
+
+:_mod-docs-content-type: ASSEMBLY
+
+[id="gw-managing-authentication"]
+
+= Managing authentication in {PlatformNameShort}
+
+After you have configured your authentication settings, you can view a list of authenticators, and you can search, sort, and view the details of each authenticator configured on the system.
+
+include::platform/proc-gw-authentication-list-view.adoc[leveloffset=+1]
+
+include::platform/proc-gw-searching-authenticator.adoc[leveloffset=+1]
+
+include::platform/proc-gw-display-auth-details.adoc[leveloffset=+1]
+
+include::platform/proc-gw-edit-authenticator.adoc[leveloffset=+1]
+
+include::platform/proc-gw-delete-authenticator.adoc[leveloffset=+1]
+
+ifdef::parent-context[:context: {parent-context}]
+ifndef::parent-context[:!context:]
diff --git a/downstream/assemblies/platform/assembly-gw-mapping.adoc b/downstream/assemblies/platform/assembly-gw-mapping.adoc
new file mode 100644
index 0000000000..6f6c139fa5
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-gw-mapping.adoc
@@ -0,0 +1,29 @@
+:_mod-docs-content-type: ASSEMBLY
+
+[id="gw-mapping"]
+
+= Mapping
+
+You can configure authenticator maps to control which users are allowed into the {PlatformNameShort} server, and to place users into {PlatformNameShort} organizations or teams based on their attributes (such as username and email address) or the groups they belong to.
+
+Authenticator maps enable you to add conditions that must be met before a user is given or denied access to a resource type. Each authenticator map is associated with an authenticator and is given an order; the maps are processed in that order when the user logs in. You can think of them as firewall rules or mail filters.
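+
+For example, an authenticator might define the following ordered maps. The map types and trigger conditions shown here are illustrative only; see the sections on map types and triggers that follow for the authoritative options.
+
+. A map of type `is_superuser` that grants superuser access when the user belongs to an identity provider group such as `cn=admins,ou=groups,dc=example,dc=com`.
+. A map of type `team` that adds the user to the team `Operations` when the user's email address ends in `@example.com`.
+
+A user who matches the first map is made a superuser, and the second map is then evaluated to determine team membership.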
Users are granted access through the roles to which they are assigned or through roles inherited through the role hierarchy, for example, through the roles they inherit through team membership. {PlatformNameShort} resources differ depending on the functionality you are configuring. For example, resources can be job templates and projects for automation execution or decision environments and rulebook activations for automation decisions. + +include::platform/proc-gw-team-access-resources.adoc[leveloffset=+1] + +include::platform/proc-gw-user-access-resources.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-gw-roles.adoc b/downstream/assemblies/platform/assembly-gw-roles.adoc new file mode 100644 index 0000000000..50a859b546 --- /dev/null +++ b/downstream/assemblies/platform/assembly-gw-roles.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="assembly-gw-roles"] + += Roles + +Roles are units of organization in the {PlatformName}. When you assign a role to a team or user, you are granting access to use, read, or write credentials. Because of the file structure associated with a role, roles become redistributable units that enable you to share behavior among resources, or with other users. All access that is granted to use, read, or write credentials is handled through roles, and roles are defined for a resource. + +include::platform/proc-gw-roles.adoc[leveloffset=+1] +include::platform/proc-gw-create-roles.adoc[leveloffset=+1] +include::platform/proc-gw-edit-roles.adoc[leveloffset=+1] +include::platform/proc-gw-delete-roles.adoc[leveloffset=+1] + diff --git a/downstream/assemblies/platform/assembly-gw-settings.adoc b/downstream/assemblies/platform/assembly-gw-settings.adoc new file mode 100644 index 0000000000..763df5ef21 --- /dev/null +++ b/downstream/assemblies/platform/assembly-gw-settings.adoc @@ -0,0 +1,27 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="assembly-gw-settings"] + += Configuring {PlatformNameShort} + +You can configure {PlatformNameShort} from the *Settings* menu using the following selections: + +* *Subscriptions* +* *{GatewayStart}* +* *User Preferences* +* *Troubleshooting* + +[NOTE] +==== +The other selections available from the *Settings* menu are specific to automation execution. For more information, refer to the link:{URLControllerAdminGuide}/index#controller-config[{TitleControllerAdminGuide}] guide. 
+====
+
+include::platform/proc-controller-configure-subscriptions.adoc[leveloffset=+1]
+include::platform/proc-settings-platform-gateway.adoc[leveloffset=+1]
+include::platform/proc-settings-gw-security-options.adoc[leveloffset=+2]
+include::platform/proc-settings-gw-session-options.adoc[leveloffset=+2]
+include::platform/proc-settings-gw-password-security.adoc[leveloffset=+2]
+include::platform/proc-settings-gw-custom-login.adoc[leveloffset=+2]
+include::platform/proc-settings-gw-additional-options.adoc[leveloffset=+2]
+include::platform/proc-settings-user-preferences.adoc[leveloffset=+1]
+include::platform/proc-settings-troubleshooting.adoc[leveloffset=+1]
diff --git a/downstream/assemblies/platform/assembly-gw-token-based-authentication.adoc b/downstream/assemblies/platform/assembly-gw-token-based-authentication.adoc
new file mode 100644
index 0000000000..8df2ca1a6d
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-gw-token-based-authentication.adoc
@@ -0,0 +1,48 @@
+ifdef::context[:parent-context: {context}]
+
+:_mod-docs-content-type: ASSEMBLY
+
+[id="gw-token-based-authentication"]
+
+= Configuring access to external applications with token-based authentication
+
+Token-based authentication enables third-party tools and services to authenticate with the platform through integrated OAuth 2 token support. {PlatformNameShort} uses both OAuth tokens and personal access tokens (PATs).
+
+OAuth Tokens:: OAuth Tokens are tied to specific applications and allow applications to access data without disclosing user login information.
+
+Personal Access Tokens:: PATs are personal to a user and not tied to a specific application. They are created directly by a user for their own use.
+
+The default expiration for access tokens has been updated from 1000 years to 1 year. This change ensures frequent token rotation for increased credential security.
+
+[NOTE]
+====
+Access tokens in controller 2.4 and previous versions of the platform gateway were valid for 1000 years. Any existing tokens created before the 2.5.20250604 patch release retain a 1000-year expiration.
+====
+
+You can customize this setting to meet your specific requirements by modifying the expiration time in your `settings.py` file as follows:
+
+----
+OAUTH2_PROVIDER__ACCESS_TOKEN_EXPIRE_SECONDS = 31536000
+----
+
+The value is expressed in seconds: 31536000 seconds equals one year (365 × 24 × 60 × 60).
+
+For more information on the `settings.py` file and how it can be used to configure aspects of the platform, see link:{URLAAPOperationsGuide}/aap-advanced-config#settings-py_advanced-config[`settings.py`] in {TitleAAPOperationsGuide}.
+
+For more information on the OAuth2 specification, see link:https://datatracker.ietf.org/doc/html/rfc6749[The OAuth 2.0 Authorization Framework].
+
+For more information on using the `manage` utility to create tokens, see xref:ref-controller-token-session-management[Token and session management].
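+
+For illustration, a user can create a personal access token from the command line with a call such as the following. This is a hedged sketch: the endpoint shown is the {ControllerName} token API, and the path can differ in your deployment, so verify the token endpoint for your environment before using it.
+
+----
+$ curl -u <username>:<password> \
+  -X POST \
+  -H "Content-Type: application/json" \
+  -d '{"description": "CI token", "scope": "write"}' \
+  https://<platform_host>/api/v2/tokens/
+----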
+
+include::assembly-controller-applications.adoc[leveloffset=+1]
+include::platform/proc-controller-apps-create-tokens.adoc[leveloffset=+1]
+include::platform/ref-controller-app-token-functions.adoc[leveloffset=+2]
+include::platform/ref-controller-refresh-existing-token.adoc[leveloffset=+3]
+include::platform/ref-controller-revoke-access-token.adoc[leveloffset=+3]
+include::platform/ref-controller-token-session-management.adoc[leveloffset=+2]
+include::platform/ref-controller-create-oauth2-token.adoc[leveloffset=+3]
+include::platform/ref-controller-revoke-oauth2-token.adoc[leveloffset=+3]
+include::platform/ref-controller-clear-tokens.adoc[leveloffset=+3]
+//[emcwhinn - Temporarily hiding expire sessions module as it does not yet exist for gateway as per AAP-35735]
+//include::platform/ref-controller-expire-sessions.adoc[leveloffset=+3]
+include::platform/ref-controller-clear-sessions.adoc[leveloffset=+3]
+
+ifdef::parent-context[:context: {parent-context}]
+ifndef::parent-context[:!context:]
\ No newline at end of file
diff --git a/downstream/assemblies/platform/assembly-horizontal-scaling.adoc b/downstream/assemblies/platform/assembly-horizontal-scaling.adoc
new file mode 100644
index 0000000000..08498512fc
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-horizontal-scaling.adoc
@@ -0,0 +1,17 @@
+:_mod-docs-content-type: ASSEMBLY
+
+ifdef::context[:parent-context: {context}]
+
+[id="assembly-horizontal-scaling"]
+= Horizontal Scaling in {PlatformName}
+
+You can set up multi-node deployments for components across {PlatformNameShort}. Whether you require horizontal scaling for {MenuTopAE}, {MenuAD}, or {AutomationMesh}, you can scale your deployments based on your organization's needs.
+
+include::platform/con-hs-eda-controller.adoc[leveloffset=+1]
+
+include::platform/con-hs-eda-sizing-scaling.adoc[leveloffset=+2]
+
+include::platform/proc-hs-eda-setup.adoc[leveloffset=+2]
+
+ifdef::parent-context[:context: {parent-context}]
+ifndef::parent-context[:!context:]
diff --git a/downstream/assemblies/platform/assembly-install-aap-gateway.adoc b/downstream/assemblies/platform/assembly-install-aap-gateway.adoc
new file mode 100644
index 0000000000..aad2e2244d
--- /dev/null
+++ b/downstream/assemblies/platform/assembly-install-aap-gateway.adoc
@@ -0,0 +1,38 @@
+:_mod-docs-content-type: ASSEMBLY
+
+ifdef::context[:parent-context: {context}]
+
+[id="install-aap-gateway_{context}"]
+
+:context: install-aap-gateway
+
+= Installing {PlatformName} gateway on {OCP}
+
+As a namespace administrator, you can use {PlatformNameShort} gateway to manage new {PlatformNameShort} components in your OpenShift environment.
+
+The {PlatformNameShort} gateway uses the {PlatformNameShort} custom resource to manage and integrate the following {PlatformNameShort} components into a unified user interface:
+
+* {ControllerNameStart}
+* {HubNameStart}
+* {EDAName}
+* {LightspeedShortName} (This feature is disabled by default; you must opt in to use it.)
+
+Before you can deploy the {Gateway}, you must have {OperatorPlatformNameShort} installed in a namespace.
+If you have not installed {OperatorPlatformNameShort}, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#install-aap-operator_operator-platform-doc[Installing the {OperatorPlatformName} on {OCP}].
+
+[NOTE]
+====
+{GatewayStart} is only available under {OperatorPlatformNameShort} version 2.5.
Every component deployed under {OperatorPlatformNameShort} 2.5 defaults to version 2.5. +==== + +If you have the {OperatorPlatformNameShort} and some or all of the {PlatformNameShort} components installed, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#operator-deploy-central-config_install-aap-gateway[Deploying the {Gateway} with existing {PlatformNameShort} components] for how to proceed. + +include::platform/proc-operator-link-components.adoc[leveloffset=+1] + +include::platform/proc-operator-deploy-central-config.adoc[leveloffset=+1] + +include::platform/proc-operator-access-aap.adoc[leveloffset=+1] + + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-install-aap-operator.adoc b/downstream/assemblies/platform/assembly-install-aap-operator.adoc index 4f150282cb..d228c32deb 100644 --- a/downstream/assemblies/platform/assembly-install-aap-operator.adoc +++ b/downstream/assemblies/platform/assembly-install-aap-operator.adoc @@ -1,15 +1,41 @@ - +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] +[id="install-aap-operator"] +:context: install-aap-operator -[id="assembly-install-aap-operator"] -= Installing the {PlatformName} operator on {OCP} += Installing the {OperatorPlatformName} on {OCP} [role="_abstract"] -.Prerequisites +[NOTE] +==== +For information about the {OperatorPlatformNameShort} system requirements and infrastructure topology, see +link:{URLTopologies}/ocp-topologies[Operator topologies] in _{TitleTopologies}_. +==== + +When installing your {OperatorPlatformNameShort}, you have a choice of a namespace-scoped operator or a cluster-scoped operator. +This depends on the update channel you choose: stable-2.x or cluster-scoped-2.x. A minimal subscription sketch follows the prerequisites below. + +A namespace-scoped operator is confined to one namespace, offering tighter security. A cluster-scoped operator spans multiple namespaces, which grants broader permissions. + +If you are managing multiple {PlatformNameShort} instances with the same {OperatorPlatformNameShort} version, use the cluster-scoped operator, which uses a single operator to manage all {PlatformNameShort} custom resources in your cluster. + +If you need multiple operator versions in the same cluster, you must use the namespace-scoped operator. +The operator and the deployment share the same namespace. +This can also be helpful when debugging because the operator logs pertain to custom resources in that namespace only. + +For help with installing a namespace-scoped or cluster-scoped operator, see the following procedure. + +[IMPORTANT] +==== +You cannot deploy {PlatformNameShort} in the default namespace on your OpenShift cluster. The `aap` namespace is recommended. You can use a custom namespace, but it should run only {PlatformNameShort}. +==== + + +== Prerequisites * You have installed the {PlatformName} catalog in OperatorHub. * You have created a `StorageClass` object for your platform and a persistent volume claim (PVC) with `ReadWriteMany` access mode. See link:https://docs.openshift.com/container-platform/{OCPLatest}/storage/dynamic-provisioning.html[Dynamic provisioning] for details. * To run {OCP} clusters on Amazon Web Services (AWS) with `ReadWriteMany` access mode, you must add NFS or other storage.
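+For a concrete picture of where the channel choice applies, the following is a minimal OLM `Subscription` sketch (the package name, catalog source, and `aap` namespace are illustrative assumptions; verify them against the OperatorHub catalog in your cluster):
+
+----
+$ oc apply -f - <<EOF
+apiVersion: operators.coreos.com/v1alpha1
+kind: Subscription
+metadata:
+  name: ansible-automation-platform
+  namespace: aap
+spec:
+  channel: stable-2.5   # or cluster-scoped-2.5 for the cluster-scoped operator
+  name: ansible-automation-platform-operator
+  source: redhat-operators
+  sourceNamespace: openshift-marketplace
+EOF
+----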
@@ -20,5 +46,6 @@ ifdef::context[:parent-context: {context}] include::platform/proc-install-aap-operator.adoc[leveloffset=+2] + ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-installing-aap-operator-cli.adoc b/downstream/assemblies/platform/assembly-installing-aap-operator-cli.adoc index e2ba37cb62..47e385d26d 100644 --- a/downstream/assemblies/platform/assembly-installing-aap-operator-cli.adoc +++ b/downstream/assemblies/platform/assembly-installing-aap-operator-cli.adoc @@ -1,20 +1,14 @@ -// Used in -// titles/aap-operator-installation/ -//// -Retains the context of the parent assembly if this assembly is nested within another assembly. -For more information about nesting assemblies, see: https://redhat-documentation.github.io/modular-docs/#nesting-assemblies -See also the complementary step on the last line of this file. -//// +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] -[id="installing-aap-operator-cli"] -= Installing {OperatorPlatform} from the {OCPShort} CLI +[id="installing-aap-operator-cli_{context}"] += Installing {OperatorPlatformName} from the {OCP} CLI :context: installing-aap-operator-cli [role="_abstract"] -Use these instructions to install the {OperatorPlatform} on {OCP} from the {OCPShort} command-line interface (CLI) using the [command]`oc` command. +Use these instructions to install the {OperatorPlatformNameShort} on {OCP} from the {OCPShort} command-line interface (CLI) using the [command]`oc` command. == Prerequisites @@ -26,13 +20,21 @@ include::platform/proc-install-cli-aap-operator.adoc[leveloffset=+1] You can use the {OCPShort} CLI to fetch the web address and the password of the {ControllerNameStart} that you created. +== Fetching {Gateway} login details from the {OCPShort} CLI + +To log in to the {Gateway}, you need the web address and the password. + +include::platform/proc-cli-get-controller-address.adoc[leveloffset=+1] + include::platform/proc-cli-get-controller-pwd.adoc[leveloffset=+1] +include::platform/proc-cli-get-controller-pwd-decode.adoc[leveloffset=+1] + [role="_additional-resources"] == Additional resources -* For more information on running operators on OpenShift Container Platform, navigate to the link:{BaseURL}/openshift_container_platform/[OpenShift Container Platform product documentation] and click the _Operators - Working with Operators in OpenShift Container Platform_ guide. +* link:{BaseURL}/openshift_container_platform/[OpenShift Container Platform product documentation] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-installing-controller-operator.adoc b/downstream/assemblies/platform/assembly-installing-controller-operator.adoc index 9bff54c4c2..067ab7d3dc 100644 --- a/downstream/assemblies/platform/assembly-installing-controller-operator.adoc +++ b/downstream/assemblies/platform/assembly-installing-controller-operator.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + //// Retains the context of the parent assembly if this assembly is nested within another assembly.
For more information about nesting assemblies, see: https://redhat-documentation.github.io/modular-docs/#nesting-assemblies @@ -8,20 +10,19 @@ ifdef::context[:parent-context: {context}] [id="installing-controller-operator"] -= Installing and configuring {ControllerName} on {OCP} web console - += Configuring {ControllerName} on {OCP} web console -:context: installing-contr-operator +:context: installing-controller-operator [role="_abstract"] -You can use these instructions to install the {ControllerName} operator on {OCP}, specify custom resources, and deploy {PlatformNameShort} with an external database. +You can use these instructions to configure the {ControllerName} operator on {OCP}, specify custom resources, and deploy {PlatformNameShort} with an external database. {ControllerNameStart} configuration can be done through the {ControllerName} extra_settings or directly in the user interface after deployment. However, it is important to note that configurations made in extra_settings take precedence over settings made in the user interface (see the sketch later in this section). [NOTE] ==== -When an instance of {ControllerName} is removed, the associated PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new {ControllerName} instance in the same namespace. See xref:proc-find-delete-PVCs_{context}[Finding and deleting PVCs] for more information. +When an instance of {ControllerName} is removed, the associated PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new {ControllerName} instance in the same namespace. See link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#proc-find-delete-PVCs_installing-controller-operator[Finding and deleting PVCs] for more information. ==== @@ -30,25 +31,17 @@ When an instance of {ControllerName} is removed, the associated PVCs are not aut == Prerequisites * You have installed the {PlatformName} catalog in Operator Hub. -* For Controller, a default StorageClass must be configured on the cluster for the operator to dynamically create needed PVCs. This is not necessary if an external PostgreSQL database is configured. +* For {ControllerName}, a default StorageClass must be configured on the cluster for the operator to dynamically create needed PVCs. This is not necessary if an external PostgreSQL database is configured. * For Hub, a StorageClass that supports ReadWriteMany must be available on the cluster to dynamically create the PVC needed for the content, redis, and api pods. If it is not the default StorageClass on the cluster, you can specify it when creating your AutomationHub object. -== Installing the {ControllerName} operator -Use this procedure to install the {ControllerName} operator. - -.Procedure - -. Navigate to menu:Operators[Installed Operators], then click on the *Ansible Automation Platform* operator. -. Locate the *Automation controller* tab, then click btn:[Create instance]. - -You can proceed with configuring the instance using either the Form View or YAML view.
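+To make the extra_settings precedence described above concrete, here is a sketch (the deployment name `my-controller`, the `aap` namespace, and the chosen setting are placeholders for illustration):
+
+----
+$ oc patch automationcontroller my-controller -n aap --type=merge \
+    -p '{"spec":{"extra_settings":[{"setting":"MAX_PAGE_SIZE","value":"500"}]}}'
+# This value now takes precedence over any value set for the same option in the UI.
+----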
- - -include::platform/proc-creating-controller-form-view.adoc[leveloffset=+2] include::platform/proc-configuring-controller-image-pull-policy.adoc[leveloffset=+2] + include::platform/proc-configuring-controller-ldap-security.adoc[leveloffset=+2] + include::platform/proc-configuring-controller-route-options.adoc[leveloffset=+2] + include::platform/proc-controller-ingress-options.adoc[leveloffset=+2] + include::platform/proc-operator-external-db-controller.adoc[leveloffset=+1] include::platform/proc-find-delete-PVCs.adoc[leveloffset=+1] @@ -56,7 +49,7 @@ include::platform/proc-find-delete-PVCs.adoc[leveloffset=+1] [role="_additional-resources"] == Additional resources -* For more information on running operators on OpenShift Container Platform, navigate to the link:{BaseURL}/openshift_container_platform/[OpenShift Container Platform product documentation] and click the _Operators - Working with Operators in OpenShift Container Platform_ guide. +* link:{BaseURL}/openshift_container_platform/[OpenShift Container Platform product documentation] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-installing-hub-operator.adoc b/downstream/assemblies/platform/assembly-installing-hub-operator.adoc index dbc6a7648e..1b9e980ef0 100644 --- a/downstream/assemblies/platform/assembly-installing-hub-operator.adoc +++ b/downstream/assemblies/platform/assembly-installing-hub-operator.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + //// Retains the context of the parent assembly if this assembly is nested within another assembly. For more information about nesting assemblies, see: https://redhat-documentation.github.io/modular-docs/#nesting-assemblies @@ -8,18 +10,18 @@ ifdef::context[:parent-context: {context}] [id="installing-hub-operator"] -= Installing and configuring {HubName} on {OCP} web console += Configuring {HubName} on {OCP} web console :context: installing-hub-operator [role="_abstract"] -You can use these instructions to install the {HubName} operator on {OCP}, specify custom resources, and deploy {PlatformNameShort} with an external database. +You can use these instructions to configure the {HubName} operator on {OCP}, specify custom resources, and deploy {PlatformNameShort} with an external database. {HubNameStart} configuration can be done through the {HubName} pulp_settings or directly in the user interface after deployment. However, it is important to note that configurations made in pulp_settings take precedence over settings made in the user interface. Hub settings should always be set as lowercase on the Hub custom resource specification. [NOTE] ==== -When an instance of {HubName} is removed, the PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new {HubName} instance in the same namespace. See xref:proc-find-delete-PVCs_{context}[Finding and deleting PVCs] for more information. +When an instance of {HubName} is removed, the PVCs are not automatically deleted. This can cause issues during migration if the new deployment has the same name as the previous one. Therefore, it is recommended that you manually remove old PVCs before deploying a new {HubName} instance in the same namespace. 
See link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#proc-find-delete-PVCs_installing-hub-operator[Finding and deleting PVCs] for more information. ==== @@ -27,33 +29,45 @@ When an instance of {HubName} is removed, the PVCs are not automatically deleted == Prerequisites -* You have installed the {PlatformName} operator in Operator Hub. +* You have installed the {OperatorPlatformNameShort} in Operator Hub. -== Installing the {HubName} operator -Use this procedure to install the {HubName} operator. +// commenting out below as encouraging users to use platform gateway for installation, only covering configuration here [gmurray] +// == Installing the {HubName} operator +// Use this procedure to install the {HubName} operator. -.Procedure +// .Procedure -. Navigate to menu:Operators[Installed Operators]. -. Locate the *Automation hub* entry, then click btn:[Create instance]. +// . Navigate to menu:Operators[Installed Operators]. +// . Locate the *Automation hub* entry, then click btn:[Create instance]. include::platform/con-storage-options-for-operator-installation-on-ocp.adoc[leveloffset=+2] + include::platform/proc-provision-ocp-storage-with-readwritemany.adoc[leveloffset=+3] + include::platform/proc-provision-ocp-storage-amazon-s3.adoc[leveloffset=+3] + include::platform/proc-provision-ocp-storage-azure-blob.adoc[leveloffset=+3] + include::platform/proc-hub-route-options.adoc[leveloffset=+2] + include::platform/proc-hub-ingress-options.adoc[leveloffset=+2] -include::platform/proc-configure-ldap-hub-ocp.adoc[leveloffset=+1] + include::platform/proc-access-hub-operator-ui.adoc[leveloffset=+1] + include::platform/proc-operator-external-db-hub.adoc[leveloffset=+1] + include::platform/proc-enable-hstore-extension.adoc[leveloffset=+2] + include::platform/proc-find-delete-PVCs.adoc[leveloffset=+1] + include::platform/ref-ocp-additional-configs.adoc[leveloffset=+1] +include::platform/proc-aap-add-allowed-registries.adoc[leveloffset=+1] + [role="_additional-resources"] == Additional resources -* For more information on running operators on OpenShift Container Platform, navigate to the link:{BaseURL}/openshift_container_platform/[OpenShift Container Platform product documentation] and click the _Operators - Working with Operators in OpenShift Container Platform_ guide. +* link:{BaseURL}/openshift_container_platform/[OpenShift Container Platform product documentation] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-inventory-file-importing.adoc b/downstream/assemblies/platform/assembly-inventory-file-importing.adoc index ef1d6c8d17..519ea1ef53 100644 --- a/downstream/assemblies/platform/assembly-inventory-file-importing.adoc +++ b/downstream/assemblies/platform/assembly-inventory-file-importing.adoc @@ -1,8 +1,10 @@ +:_mod-docs-content-type: ASSEMBLY + [id="assembly-inventory-file-importing"] = Inventory File Importing -With {ControllerNameStart} you can select an inventory file from source control, rather than creating one from scratch. +With {ControllerName}, you can select an inventory file from source control, rather than creating one from scratch. //This function is the same as for custom inventory scripts, except that the contents are obtained from source control instead of editing their contents in a browser.
The files are non-editable, and as inventories are updated at the source, the inventories within the projects are also updated accordingly, including the `group_vars` and `host_vars` files or directory associated with them. SCM types can consume both inventory files and scripts. @@ -21,9 +23,8 @@ For example, if importing from a sourced `.ini` file, you can add the following Similarly, group descriptions also default to _imported_, but can also be overridden by `_awx_description`. -To use old inventory scripts in source control, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-inventories#ref-controller-export-old-scripts[Export old inventory scripts] in the _{ControllerUG}_. +To use old inventory scripts in source control, see link:{URLControllerUserGuide}/controller-inventories#ref-controller-export-old-scripts[Export old inventory scripts] in _{ControllerUG}_. //include::platform/con-controller-custom-dynamic-inv-scripts.adoc[leveloffset=+1] -include::platform/ref-controller-scm-inv-source-fields.adoc[leveloffset=+1] - +include::platform/ref-controller-scm-inv-source-fields.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-inventory-introduction.adoc b/downstream/assemblies/platform/assembly-inventory-introduction.adoc index 82c3b60fb8..80ea03dfc6 100644 --- a/downstream/assemblies/platform/assembly-inventory-introduction.adoc +++ b/downstream/assemblies/platform/assembly-inventory-introduction.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + //[id="con-inventory-introduction_{context}"] = About the installer inventory file @@ -17,9 +19,11 @@ The following table shows possible locations: [cols="30%,70%",options="header"] |==== | Installer | Location -| *Bundle tar* | `/ansible-automation-platform-setup-bundle-` -| *Non-bundle tar* | `/ansible-automation-platform-setup-` | *RPM* | `/opt/ansible-automation-platform/installer` +| *RPM bundle tar* | `/ansible-automation-platform-setup-bundle-` +| *RPM non-bundle tar* | `/ansible-automation-platform-setup-` +| *Container bundle tar* | `/ansible-automation-platform-containerized-setup-bundle-` +| *Container non-bundle tar* | `/ansible-automation-platform-containerized-setup-` |==== You can verify the hosts in your inventory using the command: @@ -34,15 +38,20 @@ ansible all -i Controller node B ++ +Controller node A --> Controller node C ++ +Controller node B --> Controller node C ++ +You can force the listener by setting ++ +`receptor_listener=True` ++ +However, a connection Controller B --> A is likely to be rejected as that connection already exists. ++ +This means that nothing connects to Controller A as Controller A is creating the connections to the other nodes, and the following command does not return anything on Controller A: ++ +`[root@controller1 ~]# ss -ntlp | grep 27199 [root@controller1 ~]#` +==== .{InsightsName} [options="header"] @@ -244,10 +167,10 @@ a|Open *only* if the internal database is used along with another component. 
Oth |link:https://console.redhat.com[https://console.redhat.com:443] |General account services, subscriptions |link:https://catalog.redhat.com[https://catalog.redhat.com:443] |Indexing execution environments |link:https://sso.redhat.com[https://sso.redhat.com:443] |TCP -|link:https://automation-hub-prd.s3.amazonaws.com[https://automation-hub-prd.s3.amazonaws.com:443] + -link:https://automation-hub-prd.s3.us-east-2.amazonaws.com/[https://automation-hub-prd.s3.us-east-2.amazonaws.com:443/]| Firewall access +|\https://automation-hub-prd.s3.amazonaws.com + +\https://automation-hub-prd.s3.us-east-2.amazonaws.com| Firewall access |link:https://galaxy.ansible.com[https://galaxy.ansible.com:443] |Ansible Community curated Ansible content -|link:https://ansible-galaxy-ng.s3.dualstack.us-east-1.amazonaws.com[https://ansible-galaxy-ng.s3.dualstack.us-east-1.amazonaws.com:443] | Dual Stack IPv6 endpoint for Community curated Ansible content repository +|\https://ansible-galaxy-ng.s3.dualstack.us-east-1.amazonaws.com | Dual Stack IPv6 endpoint for Community curated Ansible content repository |link:https://registry.redhat.io[https://registry.redhat.io:443] |Access to container images provided by Red Hat and partners |link:https://cert.console.redhat.com[https://cert.console.redhat.com:443] |Red Hat and partner curated Ansible Collections |=== @@ -265,16 +188,28 @@ link:https://automation-hub-prd.s3.us-east-2.amazonaws.com/[https://automation-hub-prd.s3.us-east-2.amazonaws.com:443/] [IMPORTANT] ==== -Image manifests and filesystem blobs are served directly from `registry.redhat.io`. -However, from 1 May 2023, filesystem blobs are served from `quay.io` instead. -To avoid problems pulling container images, you must enable outbound connections to the listed `quay.io` hostnames. +As of *April 1, 2025*, `quay.io` is adding three additional endpoints. As a result, customers must adjust the allow/block lists within their firewall systems to include the following endpoints: + +* `cdn04.quay.io` +* `cdn05.quay.io` +* `cdn06.quay.io` + +To avoid problems pulling container images, customers must allow outbound TCP connections (ports 80 and 443) to the following hostnames: + +* `cdn.quay.io` +* `cdn01.quay.io` +* `cdn02.quay.io` +* `cdn03.quay.io` +* `cdn04.quay.io` +* `cdn05.quay.io` +* `cdn06.quay.io` -This change should be made to any firewall configuration that specifically enables outbound connections to `registry.redhat.io`. +This change should be made to any firewall configuration that specifically enables outbound connections to `registry.redhat.io` or `registry.access.redhat.com`. Use the hostnames instead of IP addresses when configuring firewall rules. -After making this change, you can continue to pull images from `registry.redhat.io`. -You do not require a `quay.io` login, or need to interact with the `quay.io` registry directly in any way to continue pulling Red Hat container images. +After making this change, you can continue to pull images from `registry.redhat.io` or `registry.access.redhat.com`. You do not need a `quay.io` login, or to interact with the `quay.io` registry directly in any way, to continue pulling Red Hat container images. -For more information, see link:https://access.redhat.com/articles/6999582[Firewall changes for container image pulls]. +For more information, see link:https://access.redhat.com/articles/7084334[Firewall changes for container image pulls 2024/2025].
==== +// emurtoug: This note is also included in the Managing content guide \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-operator-add-execution-nodes.adoc b/downstream/assemblies/platform/assembly-operator-add-execution-nodes.adoc index 8bf218fe42..12d32d9c46 100644 --- a/downstream/assemblies/platform/assembly-operator-add-execution-nodes.adoc +++ b/downstream/assemblies/platform/assembly-operator-add-execution-nodes.adoc @@ -1,12 +1,14 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] [id="operator-add-execution-nodes_{context}"] -= Adding execution nodes to {PlatformNameShort} Operator += Adding execution nodes to {OperatorPlatformName} :context: operator-upgrade -You can enable the {PlatformNameShort} Operator with execution nodes by downloading and installing the install bundle. +You can enable the {OperatorPlatformNameShort} with execution nodes by downloading and installing the install bundle. include::platform/proc-add-operator-execution-nodes.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-operator-configure-aap-components.adoc b/downstream/assemblies/platform/assembly-operator-configure-aap-components.adoc new file mode 100644 index 0000000000..1c480cc4b8 --- /dev/null +++ b/downstream/assemblies/platform/assembly-operator-configure-aap-components.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: ASSEMBLY + +ifdef::context[:parent-context: {context}] + +[id="operator-configure-aap-components_{context}"] + +:context: operator-configure-aap-components + += Configuring {PlatformName} components on {OperatorPlatformName} + +After you have installed {OperatorPlatformNameShort} and set up your {PlatformNameShort} components, you can configure them to suit your requirements. + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-operator-configure-gateway.adoc b/downstream/assemblies/platform/assembly-operator-configure-gateway.adoc new file mode 100644 index 0000000000..3dda310abf --- /dev/null +++ b/downstream/assemblies/platform/assembly-operator-configure-gateway.adoc @@ -0,0 +1,24 @@ +:_mod-docs-content-type: ASSEMBLY + +ifdef::context[:parent-context: {context}] + + +[id="operator-configure-gateway"] += Configuring {Gateway} on {OCP} web console + +You can use these instructions to further configure the {Gateway} operator on {OCP}, specify custom resources, and deploy {PlatformNameShort} with an external database.
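+For the external database path, a common pattern (a sketch; the secret name, key names, and values are assumptions to verify against your operator version) is to create a Kubernetes secret that the {Gateway} spec can reference:
+
+----
+$ oc create secret generic external-postgres-configuration -n aap \
+    --from-literal=host=db.example.com \
+    --from-literal=port=5432 \
+    --from-literal=database=gateway \
+    --from-literal=username=gateway \
+    --from-literal=password='<password>' \
+    --from-literal=sslmode=prefer \
+    --from-literal=type=unmanaged
+----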
+ +:context: operator-configure-gateway + +include::platform/proc-operator-external-db-gateway.adoc[leveloffset=+1] + +include::platform/proc-operator-troubleshoot-ext-db.adoc[leveloffset=+1] + +include::platform/proc-operator-enable-https-redirect.adoc[leveloffset=+1] + +include::platform/proc-operator-config-csrf-gateway.adoc[leveloffset=+1] + +include::platform/proc-operator-aap-faq.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-operator-install-operator.adoc b/downstream/assemblies/platform/assembly-operator-install-operator.adoc new file mode 100644 index 0000000000..bd9e4730ab --- /dev/null +++ b/downstream/assemblies/platform/assembly-operator-install-operator.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: ASSEMBLY + +ifdef::context[:parent-context: {context}] + +[id="operator-install-operator_{context}"] + += Installing {OperatorPlatformName} on {OCP} + +:context: operator-install-operator + +As a system administrator, you can use {OperatorPlatformNameShort} to deploy new {PlatformNameShort} instances in your OpenShift environment. + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-operator-install-planning.adoc b/downstream/assemblies/platform/assembly-operator-install-planning.adoc index 0f20c18ffc..d31752d6f6 100644 --- a/downstream/assemblies/platform/assembly-operator-install-planning.adoc +++ b/downstream/assemblies/platform/assembly-operator-install-planning.adoc @@ -1,26 +1,30 @@ +:_mod-docs-content-type: ASSEMBLY ifdef::context[:parent-context: {context}] - - -[id="operator-install-planning"] -= Planning your {PlatformName} operator installation on {OCP} - +[id="operator-install-planning_{context}"] += Planning your {OperatorPlatformName} installation on {OCP} :context: operator-install-planning [role="_abstract"] {PlatformName} is supported on both Red Hat Enterprise Linux and Red Hat OpenShift. OpenShift operators help install and automate day-2 operations of complex, distributed software on {OCP}. The {OperatorPlatformNameShort} enables you to deploy and manage {PlatformNameShort} components on {OCP}. You can use this section to help plan your {PlatformName} installation on your {OCP} environment. Before installing, review the supported installation scenarios to determine which meets your requirements.
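+As a preview of the end state you are planning for, a minimal custom resource sketch (the kind and apiVersion follow the operator's published examples as we understand them; the name, namespace, and admin-password secret naming pattern are illustrative):
+
+----
+$ oc apply -f - <<EOF
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+  namespace: aap
+EOF
+# The operator typically stores the generated admin password in a secret:
+$ oc get secret myaap-admin-password -n aap \
+    -o jsonpath='{.data.password}' | base64 --decode
+----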
-include::platform/con-about-operator.adoc[leveloffset=2] -include::platform/ref-operator-ocp-version.adoc[leveloffset=2] -include::platform/con-ocp-supported-install.adoc[leveloffset=2] -include::platform/con-operator-custom-resources.adoc[leveloffset=2] -include::platform/con-operator-additional-resources.adoc[leveloffset=2] +include::platform/con-about-operator.adoc[leveloffset=+1] + +include::platform/ref-operator-ocp-version.adoc[leveloffset=+1] + +include::platform/con-ocp-supported-install.adoc[leveloffset=+1] + +include::platform/con-operator-custom-resources.adoc[leveloffset=+1] + +include::platform/con-operator-csrf-management.adoc[leveloffset=+1] + +include::platform/con-operator-additional-resources.adoc[leveloffset=+1] ifdef::parent-context[:context: {parent-context}] diff --git a/downstream/assemblies/platform/assembly-operator-upgrade.adoc b/downstream/assemblies/platform/assembly-operator-upgrade.adoc index c8abe644bf..3703232a7c 100644 --- a/downstream/assemblies/platform/assembly-operator-upgrade.adoc +++ b/downstream/assemblies/platform/assembly-operator-upgrade.adoc @@ -1,21 +1,31 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] [id="operator-upgrade_{context}"] -= Upgrading {OperatorPlatform} on {OCPShort} += Upgrading {OperatorPlatformName} on {OCP} :context: operator-upgrade [role="_abstract"] -The {OperatorPlatform} simplifies the installation, upgrade and deployment of new {PlatformName} instances in your {OCPShort} environment. +The {OperatorPlatformNameShort} simplifies the installation, upgrade, and deployment of new {PlatformName} instances in your {OCPShort} environment. + +include::platform/con-operator-upgrade-overview.adoc[leveloffset=+1] include::platform/con-operator-upgrade-considerations.adoc[leveloffset=+1] + include::platform/con-operator-upgrade-prereq.adoc[leveloffset=+1] + +include::platform/con-operator-channel-upgrade.adoc[leveloffset=+1] + include::platform/proc-operator-upgrade.adoc[leveloffset=+1] +include::platform/proc-operator-create_crs.adoc[leveloffset=+1] +include::assembly-aap-post-upgrade.adoc[leveloffset=+1] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-planning-installation.adoc b/downstream/assemblies/platform/assembly-planning-installation.adoc index f46526e3aa..c4caab3621 100644 --- a/downstream/assemblies/platform/assembly-planning-installation.adoc +++ b/downstream/assemblies/platform/assembly-planning-installation.adoc @@ -1,16 +1,16 @@ +:_mod-docs-content-type: ASSEMBLY +[id="planning-installation"] ifdef::context[:parent-context: {context}] -[id="planning-installation"] = Planning your {PlatformName} installation - :context: planning [role="_abstract"] {PlatformName} is supported on both {RHEL} and Red Hat OpenShift. Use this guide to plan your {PlatformName} installation on {RHEL}. -To install {PlatformName} on your {OCP} environment, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/index[Deploying the Red Hat Ansible Automation Platform operator on OpenShift Container Platform]. +To install {PlatformName} on your {OCP} environment, see link:{URLOperatorInstallation}[{TitleOperatorInstallation}]. 
ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-planning-mesh.adoc b/downstream/assemblies/platform/assembly-planning-mesh.adoc index 07340516cc..9ebccf69cc 100644 --- a/downstream/assemblies/platform/assembly-planning-mesh.adoc +++ b/downstream/assemblies/platform/assembly-planning-mesh.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] [id="assembly-planning-mesh"] @@ -9,7 +11,6 @@ ifdef::operator-mesh[] endif::operator-mesh[] :context: planning-mesh - [role="_abstract"] ifdef::mesh-VM[] The following topics contain information to help plan an {AutomationMesh} deployment in your VM-based {PlatformNameShort} environment. diff --git a/downstream/assemblies/platform/assembly-platform-install-overview.adoc b/downstream/assemblies/platform/assembly-platform-install-overview.adoc index e928ba7551..6d96a6d539 100644 --- a/downstream/assemblies/platform/assembly-platform-install-overview.adoc +++ b/downstream/assemblies/platform/assembly-platform-install-overview.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] @@ -9,25 +11,30 @@ ifdef::context[:parent-context: {context}] [role="_abstract"] -The {PlatformName} installation program offers you flexibility, allowing you to install {PlatformNameShort} by using a number of supported installation scenarios. Starting with {PlatformNameShort} {PlatformVers}, the installation scenarios include the optional deployment of {EDAcontroller}, which introduces the automated resolution of IT requests. +The {PlatformName} installation program offers you flexibility, allowing you to install {PlatformNameShort} by using several supported installation scenarios. Regardless of the installation scenario you choose, installing {PlatformNameShort} involves the following steps: xref:proc-editing-installer-inventory-file_platform-install-scenario[Editing the {PlatformName} installer inventory file]:: The {PlatformNameShort} installer inventory file allows you to specify your installation scenario and describe host deployments to Ansible. The examples provided in this document show the parameter specifications needed to install that scenario for your deployment. -xref:proc-running-setup-script_platform-install-scenario[Running the {PlatformName} installer setup script]:: The setup script installs your private automation hub by using the required parameters defined in the inventory file. +xref:proc-running-setup-script_platform-install-scenario[Running the {PlatformName} installer setup script]:: The setup script installs {PlatformNameShort} by using the required parameters defined in the inventory file; see the sketch after this list. + +xref:proc-verify-aap-installation_platform-install-scenario[Verifying your {PlatformNameShort} installation]:: After installing {PlatformNameShort}, you can verify that the installation has been successful by logging in to the platform UI and confirming that the expected functionality is available. -xref:proc-verify-controller-installation_platform-install-scenario[Verifying {ControllerName} installation]:: After installing {PlatformNameShort}, you can verify that the installation has been successful by logging in to the {ControllerName}.
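+As a concrete sketch of the inventory-edit and setup-script steps (the directory name and the `-i` flag reflect the installer defaults to the best of our knowledge; verify them against your installer version):
+
+----
+$ cd ansible-automation-platform-setup-bundle-<version>
+$ vi inventory          # describe your hosts and set the required variables
+$ sudo ./setup.sh -i inventory
+----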
+// Removing to consolidate AAP installation verification - you verify by logging into the gateway rather than logging into each component's UI - AAP-17771 +// xref:proc-verify-controller-installation_platform-install-scenario[Verifying {ControllerName} installation]:: After installing {PlatformNameShort}, you can verify that the installation has been successful by logging in to the {ControllerName}. -xref:proc-verify-hub-installation_platform-install-scenario[Verifying {HubName} installation]:: After installing {PlatformNameShort}, you can verify that the installation has been successful by logging in to the {HubName}. +// xref:proc-verify-hub-installation_platform-install-scenario[Verifying {HubName} installation]:: After installing {PlatformNameShort}, you can verify that the installation has been successful by logging in to the {HubName}. -xref:proc-verify-eda-controller-installation_platform-install-scenario[Verifying {EDAcontroller} installation]:: After installing {PlatformNameShort}, you can verify that the installation has been successful by logging in to the {EDAcontroller}. +// xref:proc-verify-eda-controller-installation_platform-install-scenario[Verifying {EDAcontroller} installation]:: After installing {PlatformNameShort}, you can verify that the installation has been successful by logging in to the {EDAcontroller}. //xref:assembly-platform-whats-next_platform-install-scenario[Post-installation steps]:: After successful installation, you can begin using the features of {PlatformNameShort}. [role="_additional-resources"] .Additional resources -For more information about the supported installation scenarios, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/index[{PlatformName} Planning Guide]. + +* For more information about the supported installation scenarios, see link:{LinkPlanningGuide}. +* For more information about available topologies, see link:{LinkTopologies}. include::platform/con-aap-installation-prereqs.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-platform-install-scenario.adoc b/downstream/assemblies/platform/assembly-platform-install-scenario.adoc index 8298cc4c84..83b206737b 100644 --- a/downstream/assemblies/platform/assembly-platform-install-scenario.adoc +++ b/downstream/assemblies/platform/assembly-platform-install-scenario.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] @@ -10,57 +12,86 @@ ifdef::context[:parent-context: {context}] :context: platform-install-scenario [role="_abstract"] -{PlatformNameShort} is a modular platform. You can deploy {ControllerName} with other automation platform components, such as {HubName} and {EDAcontroller}. For more information about the components provided with {PlatformNameShort}, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/planning-installation#ref-platform-components[{PlatformName} components] in the {PlatformName} Planning Guide. +{PlatformNameShort} is a modular platform. The {Gateway} deploys automation platform components, such as {ControllerName}, {HubName}, and {EDAcontroller}. -There are several supported installation scenarios for {PlatformName}. To install {PlatformName}, you must edit the inventory file parameters to specify your installation scenario.
You can use one of the following as a basis for your own inventory file: +For more information about the components provided with {PlatformNameShort}, see link:{URLPlanningGuide}/ref-aap-components[{PlatformName} components] in {TitlePlanningGuide}. + +There are several supported installation scenarios for {PlatformName}. To install {PlatformName}, you must edit the inventory file parameters to specify your installation scenario. You can use the link:{URLTopologies}/rpm-topologies#example_enterprise_inventory_file[enterprise installer] as a basis for your own inventory file. + +// New install scenarios including platform gateway AAP-17771 +//* xref:ref-gateway-controller-ext-db[Single platform gateway and {ControllerName} with an external (installer managed) database] +//* xref:ref-gateway-controller-hub-ext-db[Single platform gateway, {ControllerName}, and {HubName} with an external (installer managed) database] +//* xref:ref-gateway-controller-hub-eda-ext-db[Single platform gateway, {ControllerName}, {HubName}, and {EDAcontroller} node with an external (installer managed) database] + +.Additional resources +For a comprehensive list of predefined variables used in installation inventory files, see link:{URLInstallationGuide}/appendix-inventory-files-vars[Inventory file variables]. + +// Removed for install scenario consolidation AAP-17771 +// * xref:ref-single-controller-ext-installer-managed-db[Single {ControllerName} with external (installer managed) database] +// * xref:ref-single-controller-hub-ext-database-inventory[Single {ControllerName} and single {HubName} with external (installer managed) database] +// * xref:ref-single-controller-hub-eda-with-managed-db[Single {ControllerName}, single {HubName}, and single event-driven ansible controller node with external (installer managed ) database] //[ifowler] Removed for AAP-18700 Install Guide Scenario Consolidation -//* xref:ref-standlone-platform-inventory_platform-install-scenario[Standalone automation controller with external (installer managed) database] -* xref:ref-single-controller-ext-installer-managed-db[Single {ControllerName} with external (installer managed) database] -//[ifowler] Removed for AAP-18700 Install Guide Scenario Consolidation +//* xref:ref-standlone-platform-inventory_platform-install-scenario[Standalone automation controller with external (installer managed) database] //* xref:ref-single-controller-ext-customer-managed-db_platform-install-scenario[Single {ControllerName} with external (customer provided) database] //* xref:ref-standlone-platform-ext-database-inventory_platform-install-scenario[{PlatformNameShort} with an external (installer managed) database] //* xref:ref-example-platform-ext-database-customer-provided_platform-install-scenario[{PlatformNameShort} with an external (customer provided) database] //* xref:ref-single-eda-controller-with-internal-db_platform-install-scenario[Single {EDAcontroller} node with internal database] //* xref:ref-standlone-hub-inventory_platform-install-scenario[Standalone {HubName} with internal database] -* xref:ref-single-controller-hub-ext-database-inventory[Single {ControllerName} and single {HubName} with external (installer managed) database] -//[ifowler] Removed for AAP-18700 Install Guide Scenario Consolidation //* xref:ref-standalone-hub-ext-database-customer-provided_platform-install-scenario[Single {HubName} with external (customer provided) database] // xref:ref-ldap-config-on-pah_platform-install-scenario[LDAP configuration on {PrivateHubName}] -* 
xref:ref-single-controller-hub-eda-with-managed-db[Single {ControllerName}, single {HubName}, and single event-driven ansible controller node with external (installer managed ) database] - include::platform/proc-editing-inventory-file.adoc[leveloffset=+1] include::platform/con-install-scenario-examples.adoc[leveloffset=+1] include::platform/con-install-scenario-recommendations.adoc[leveloffset=+2] +//Added for AAP-29120 +include::platform/ref-gateway-controller-ext-db.adoc[leveloffset=+3] +include::platform/ref-gateway-controller-hub-ext-db.adoc[leveloffset=+3] +include::platform/ref-gateway-controller-hub-eda-ext-db.adoc[leveloffset=+3] +include::platform/con-ha-hub-installation.adoc[leveloffset=+3] +include::platform/proc-install-ha-hub-selinux.adoc[leveloffset=+3] +include::platform/proc-configure-pulpcore-service.adoc[leveloffset=+4] +include::platform/proc-apply-selinux-context.adoc[leveloffset=+4] +include::hub/hub/proc-configure-content-signing-on-pah.adoc[leveloffset=+3] +include::platform/proc-add-eda-safe-plugin-var.adoc[leveloffset=+3] + +include::platform/proc-set-registry-username-password.adoc[leveloffset=+2] +//[emcwhinn] Removing for AAP-29246 as content is being moved to one guide in 2.4 customer portal +//include::platform/con-eda-2-5-with-controller-2-4.adoc[leveloffset=+3] //[ifowler] Removed for AAP-18700 Install Guide Scenario Consolidation //include::platform/ref-platform-non-inst-database-inventory.adoc[leveloffset=+3] -include::platform/ref-single-controller-ext-installer-managed-db.adoc[leveloffset=+3] -//[ifowler] Removed for AAP-18700 Install Guide Scenario Consolidation //include::platform/ref-single-controller-ext-customer-managed-db.adoc[leveloffset=+3] //include::platform/ref-example-platform-ext-database-inventory.adoc[leveloffset=+3] //include::platform/ref-example-platform-ext-database-customer-provided.adoc[leveloffset=+3] //include::platform/ref-single-eda-controller-with-internal-db.adoc[leveloffset=+3] //include::platform/ref-standalone-hub-inventory.adoc[leveloffset=+3] -include::platform/ref-standalone-controller-hub-ext-database-inventory.adoc[leveloffset=+3] -include::platform/ref-connect-hub-to-rhsso.adoc[leveloffset=+4] -include::platform/con-ha-hub-installation.adoc[leveloffset=+4] -include::platform/proc-install-ha-hub-selinux.adoc[leveloffset=+4] -include::platform/proc-configure-pulpcore-service.adoc[leveloffset=+4] -include::platform/proc-apply-selinux-context.adoc[leveloffset=+4] -include::hub/hub/proc-configure-content-signing-on-pah.adoc[leveloffset=+3] -include::platform/ref-ldap-config-on-pah.adoc[leveloffset=+3] -include::platform/ref-ldap-referrals.adoc[leveloffset=+3] -include::platform/ref-single-controller-hub-eda-with-managed-db.adoc[leveloffset=+3] +// include::platform/ref-standalone-controller-hub-ext-database-inventory.adoc[leveloffset=+3] +//[rjgrange] Removed for AAP-22613 Removing all references to SSO and LDAP installation +//include::platform/ref-connect-hub-to-rhsso.adoc[leveloffset=+4] + + +//[rjgrange] Removed for AAP-22613 Removing all references to SSO and LDAP installation +//include::platform/ref-ldap-config-on-pah.adoc[leveloffset=+3] +//include::platform/ref-ldap-referrals.adoc[leveloffset=+3] +// include::platform/ref-single-controller-hub-eda-with-managed-db.adoc[leveloffset=+3] //[ifowler] Removed for AAP-18700 Install Guide Scenario Consolidation //include::platform/ref-standalone-hub-ext-database-customer-provided.adoc[leveloffset=+3] // dcdacosta - removed this assembly because the modules are included 
above. include::assembly-installing-high-availability-hub.adoc[leveloffset=+3] +// sayjadha - Added platform/con-backup-aap.adoc as part of AAP-39133: RPM installation - Document use of compression when performing a backup. The backup procedure was missing in RPM install guide, and added info. about new compression variables. include::platform/con-backup-aap.adoc[leveloffset=+1] + +include::platform/ref-redis-config-enterprise-topology.adoc[leveloffset=+3] include::platform/proc-running-setup-script.adoc[leveloffset=+1] -include::platform/proc-verify-controller-installation.adoc[leveloffset=+1] -include::platform/ref-controller-configs.adoc[leveloffset=+2] -include::platform/proc-verify-hub-installation.adoc[leveloffset=+1] -include::platform/ref-hub-configs.adoc[leveloffset=+2] -include::platform/proc-verify-eda-controller-installation.adoc[leveloffset=+1] +include::platform/proc-verify-aap-installation.adoc[leveloffset=+1] +include::platform/con-adding-subscription-manifest.adoc[leveloffset=+1] + +// Removing to consolidate AAP installation verification - you verify by logging into the gateway rather than logging into each component's UI - AAP-17771 +// include::platform/proc-verify-controller-installation.adoc[leveloffset=+1] +// include::platform/ref-controller-configs.adoc[leveloffset=+2] +// include::platform/proc-verify-hub-installation.adoc[leveloffset=+1] +// include::platform/ref-hub-configs.adoc[leveloffset=+2] +// include::platform/proc-verify-eda-controller-installation.adoc[leveloffset=+1] + //[ifowler] Removed for AAP-18700 Install Guide Scenario Consolidation moved to Operations Guide //include::assembly-platform-whats-next.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-pod-spec-modifications.adoc b/downstream/assemblies/platform/assembly-pod-spec-modifications.adoc index 91f24f1ff7..95036570e4 100644 --- a/downstream/assemblies/platform/assembly-pod-spec-modifications.adoc +++ b/downstream/assemblies/platform/assembly-pod-spec-modifications.adoc @@ -1,13 +1,27 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] :context: performance-considerations -[id="assembly-pod-spec-modifications"] +[id="assembly-pod-spec-modifications_{context}"] + = Pod specification modifications +A pod in Kubernetes is the smallest deployable compute unit, consisting of one or more containers sharing networking and storage on a single host. +{PlatformName} uses a default pod specification, which can be customized with a user-defined YAML or JSON document.
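+A trimmed sketch of what such a user-defined pod specification might look like (modeled loosely on the default container group pod spec; the image, namespace, and resource values are illustrative only):
+
+----
+$ cat <<'EOF' > custom-pod-spec.yml
+apiVersion: v1
+kind: Pod
+metadata:
+  namespace: aap
+spec:
+  serviceAccountName: default
+  automountServiceAccountToken: false
+  containers:
+    - name: worker
+      image: quay.io/ansible/awx-ee:latest
+      args: ['ansible-runner', 'worker', '--private-data-dir=/runner']
+      resources:
+        requests:
+          cpu: 250m
+          memory: 100Mi
+EOF
+----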
+ +include::platform/con-pod-specification-mods.adoc[leveloffset=+1] + +include::platform/proc-customizing-pod-specs.adoc[leveloffset=+1] + +include::platform/proc-enable-pods-ref-images.adoc[leveloffset=+1] + +include::platform/ref-resource-management-pods-containers.adoc[leveloffset=+1] -include::platform/ref-requests-limits.adoc[leveloffset=+1] -include::platform/ref-resource-types.adoc[leveloffset=+1] + +include::platform/ref-requests-limits.adoc[leveloffset=+2] + +include::platform/ref-resource-types.adoc[leveloffset=+2] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-setting-up-automation-mesh.adoc b/downstream/assemblies/platform/assembly-setting-up-automation-mesh.adoc index f73e98cff7..f4ebb28861 100644 --- a/downstream/assemblies/platform/assembly-setting-up-automation-mesh.adoc +++ b/downstream/assemblies/platform/assembly-setting-up-automation-mesh.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] @@ -13,13 +15,18 @@ ifdef::context[:parent-context: {context}] Configure the {PlatformNameShort} installer to set up {AutomationMesh} for your Ansible environment. Perform additional tasks to customize your installation, such as importing a Certificate Authority (CA) certificate. include::platform/con-install-mesh.adoc[leveloffset=+1] + +include::platform/proc-editing-inventory-file.adoc[leveloffset=+1] + +include::platform/proc-running-setup-script.adoc[leveloffset=+1] + include::platform/proc-import-mesh-ca.adoc[leveloffset=+1] [role="_additional-resources"] .Additional resources -* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/platform-system-requirements[{PlatformName} System Requirements] +* link:{URLPlanningGuide}/platform-system-requirements[System Requirements] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-setup-postgresql-ext-database.adoc b/downstream/assemblies/platform/assembly-setup-postgresql-ext-database.adoc new file mode 100644 index 0000000000..fe69b56cbc --- /dev/null +++ b/downstream/assemblies/platform/assembly-setup-postgresql-ext-database.adoc @@ -0,0 +1,31 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="setting-up-a-customer-provided-external-database"] ifdef::context[:parent-context: {context}] + += Setting up a customer-provided (external) database + +[IMPORTANT] +==== +* When using an external database with {PlatformNameShort}, you must create and maintain that database. Ensure that you clear your external database when uninstalling {PlatformNameShort}. + +* {PlatformName} requires the customer-provided (external) database to have ICU support. + +* During configuration of an external database, you must check the external database coverage. For more information, see link:https://access.redhat.com/articles/4010491[{PlatformName} Database Scope of Coverage]. +==== + +There are two possible scenarios for setting up an external database: + +. An external database with PostgreSQL admin credentials +.
An external database without PostgreSQL admin credentials + +include::platform/proc-setup-ext-db-with-admin-creds.adoc[leveloffset=+1] + +include::platform/proc-setup-ext-db-without-admin-creds.adoc[leveloffset=+1] + +include::platform/proc-enable-hstore-extension.adoc[leveloffset=+1] + +include::platform/proc-configure-ext-db-mtls.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-specify-dedicated-nodes.adoc b/downstream/assemblies/platform/assembly-specify-dedicated-nodes.adoc index 95658ce33a..3fa34c70f5 100644 --- a/downstream/assemblies/platform/assembly-specify-dedicated-nodes.adoc +++ b/downstream/assemblies/platform/assembly-specify-dedicated-nodes.adoc @@ -1,8 +1,11 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] :context: performance-considerations -[id="assembly-specify-dedicted-nodes"] +[id="assembly-specify-dedicted-nodes_{context}"] + = Specifying dedicated nodes A Kubernetes cluster runs on top of many virtual machines or nodes (generally anywhere between 2 and 20 nodes). @@ -15,6 +18,12 @@ Schedule the control plane nodes to run on different nodes to the automation job If the control plane pods share nodes with the job pods, the control plane can become resource starved and degrade the performance of the whole application. include::platform/ref-assign-pods-to-nodes.adoc[leveloffset=+1] + include::platform/proc-specify-nodes-job-execution.adoc[leveloffset=+1] -include::platform/proc-set-custom-pod-timeout.adoc[leveloffset=+1] + +include::platform/ref-set-custom-pod-timeout.adoc[leveloffset=+1] + include::platform/ref-schedule-jobs-worker-nodes.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-system-requirements.adoc b/downstream/assemblies/platform/assembly-system-requirements.adoc index ee275df358..d029196628 100644 --- a/downstream/assemblies/platform/assembly-system-requirements.adoc +++ b/downstream/assemblies/platform/assembly-system-requirements.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + ifdef::context[:parent-context: {context}] @@ -9,19 +11,35 @@ Use this information when planning your {PlatformName} installations and designi .Prerequisites -* You can obtain root access either through the `sudo` command, or through privilege escalation. For more on privilege escalation see link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_privilege_escalation.html[Understanding privilege escalation]. +* You can obtain root access either through the `sudo` command, or through privilege escalation. For more on privilege escalation, see link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_privilege_escalation.html[Understanding privilege escalation]. * You can de-escalate privileges from root to users such as AWX, PostgreSQL, {EDAName}, or Pulp. -* You have configured an NTP client on all nodes. For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_upgrade_and_migration_guide/migrate-isolated-execution-nodes#automation_controller_configuration_requirements[Configuring NTP server using Chrony]. +* You have configured an NTP client on all nodes.
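+For the customer-provided (external) database scenarios described earlier, a minimal provisioning sketch (host, role, and database names are placeholders; each {PlatformNameShort} component needs its own database, and the exact options vary by component and version):
+
+----
+$ psql -h db.example.com -U postgres \
+    -c "CREATE USER aap_user WITH PASSWORD '<password>';"
+$ psql -h db.example.com -U postgres \
+    -c "CREATE DATABASE gateway OWNER aap_user ENCODING 'UTF8';"
+----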
+// emurtough commented out link to upgrade and migration guide - to be replaced once the guide is published +// For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_upgrade_and_migration_guide/migrate-isolated-execution-nodes#automation_controller_configuration_requirements[Configuring NTP server using Chrony]. + +// emurtough commented out files to address duplication across 2.5 doc set 9/18/2024 +// ddacosta added conditional tags to share content between install guide and planning guide + +ifdef::aap-plan[] +include::platform/ref-RPM-system-requirements.adoc[leveloffset=+1] +include::platform/ref-containerized-system-requirements.adoc[leveloffset=+1] +include::platform/ref-OCP-system-requirements.adoc[leveloffset=+1] +endif::aap-plan[] +ifdef::aap-install[] include::platform/ref-system-requirements.adoc[leveloffset=+1] +include::platform/ref-gateway-system-requirements.adoc[leveloffset=+1] include::platform/ref-controller-system-requirements.adoc[leveloffset=+1] include::platform/ref-automation-hub-requirements.adoc[leveloffset=+1] include::platform/ref-ha-hub-reqs.adoc[leveloffset=+2] include::platform/ref-eda-system-requirements.adoc[leveloffset=+1] include::platform/ref-postgresql-requirements.adoc[leveloffset=+1] include::platform/proc-setup-postgresql-ext-database.adoc[leveloffset=+2] +include::platform/proc-postgresql-enable-mtls-authentication.adoc[leveloffset=+2] +include::platform/proc-postgresql-use-custom-certificates.adoc[leveloffset=+2] include::platform/proc-enable-hstore-extension.adoc[leveloffset=+2] include::platform/proc-benchmark-postgresql.adoc[leveloffset=+2] +endif::aap-install[] ifdef::parent-context[:context: {parent-context}] ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-ug-controller-attributes-custom-notifications.adoc b/downstream/assemblies/platform/assembly-ug-controller-attributes-custom-notifications.adoc index 69016b5fdb..3cacdd971a 100644 --- a/downstream/assemblies/platform/assembly-ug-controller-attributes-custom-notifications.adoc +++ b/downstream/assemblies/platform/assembly-ug-controller-attributes-custom-notifications.adoc @@ -1,7 +1,9 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-attributes-custom-notifications"] = Supported attributes for custom notifications Learn about the list of supported job attributes and the proper syntax for constructing the message text for notifications. 
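+For example, a custom notification message assembled from supported attributes might look like the following (a sketch based on the default message pattern; attribute availability varies by notification type and version):
+
+----
+{{ job_friendly_name }} #{{ job.id }} '{{ job.name }}' finished with status {{ job.status }}: {{ url }}
+----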
-include::platform/ref-controller-supported-attributes.adoc[leveloffset=+1] \ No newline at end of file +include::platform/ref-controller-supported-attributes.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-ug-controller-instance-groups.adoc b/downstream/assemblies/platform/assembly-ug-controller-instance-groups.adoc index 323ad42826..94734a7197 100644 --- a/downstream/assemblies/platform/assembly-ug-controller-instance-groups.adoc +++ b/downstream/assemblies/platform/assembly-ug-controller-instance-groups.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-instance-groups"] = Managing Instance Groups @@ -8,11 +10,13 @@ The following view displays the capacity levels based on policy algorithms: image::ug-instance-groups_list_view.png[Instance groups list view] -.Additional resources +For more information, see: -* For more information about the policy or rules associated with instance groups, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-instance-and-container-groups#controller-instance-groups[Instance Groups] section of the _{ControllerAG}_. -* For more information on connecting your instance group to a container, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-instance-and-container-groups#controller-container-groups[Container Groups]. +* xref:con-controller-instance-groups[Instance groups] +* xref:controller-container-groups[Container groups] include::platform/proc-controller-create-instance-group.adoc[leveloffset=+1] + include::platform/proc-controller-associate-instances-to-instance-group.adoc[leveloffset=+2] -include::platform/proc-controller-view-jobs-associated-with-instance-group.adoc[leveloffset=+2] \ No newline at end of file + +include::platform/proc-controller-view-jobs-associated-with-instance-group.adoc[leveloffset=+2] diff --git a/downstream/assemblies/platform/assembly-ug-controller-job-slicing.adoc b/downstream/assemblies/platform/assembly-ug-controller-job-slicing.adoc index adc3350020..59db74548d 100644 --- a/downstream/assemblies/platform/assembly-ug-controller-job-slicing.adoc +++ b/downstream/assemblies/platform/assembly-ug-controller-job-slicing.adoc @@ -1,3 +1,8 @@ +:_mod-docs-content-type: ASSEMBLY + +ifdef::context[:parent-context: {context}] + + [id="controller-job-slicing"] = Job slicing @@ -14,9 +19,12 @@ When this number is greater than `1`, {ControllerName} generates a workflow from The inventory is distributed evenly among the slice jobs. The workflow job is then started, and proceeds as though it were a normal workflow. -When launching a job, the API returns either a job resource (if job_slice_count = 1) or a workflow job resource. +When launching a job, the API returns either a job resource (if `job_slice_count = 1`) or a workflow job resource. The corresponding User Interface (UI) redirects to the appropriate screen to display the status of the run.
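+As a sketch, you can observe this difference from the command line when launching a template whose `job_slice_count` is greater than `1`. The host, credentials, and template ID in this example are placeholders:
+
+[source,bash]
+----
+# Launching a sliced job template returns a workflow job resource;
+# with job_slice_count = 1, the same request returns a job resource.
+curl -s -X POST -u admin:password \
+  https://controller.example.com/api/v2/job_templates/42/launch/ | grep '"type"'
+----
+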
include::platform/con-controller-job-slice-considerations.adoc[leveloffset=+1] include::platform/ref-controller-job-slice-execution-behavior.adoc[leveloffset=+1] include::platform/proc-controller-search-job-slices.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-ug-controller-job-templates.adoc b/downstream/assemblies/platform/assembly-ug-controller-job-templates.adoc index 37af94ba2a..0305b48f5e 100644 --- a/downstream/assemblies/platform/assembly-ug-controller-job-templates.adoc +++ b/downstream/assemblies/platform/assembly-ug-controller-job-templates.adoc @@ -1,57 +1,80 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-job-templates"] = Job templates +You can create both Job templates and Workflow job templates from {MenuAETemplates}. + +For Workflow job templates, see link:{URLControllerUserGuide}/controller-workflow-job-templates[Workflow job templates]. + A job template is a definition and set of parameters for running an Ansible job. Job templates are useful to run the same job many times. They also encourage the reuse of Ansible Playbook content and collaboration between teams. -The *Templates* list view shows job templates that are currently available. -The default view is collapsed (Compact), showing the template name, template type, and the timestamp of the last job that ran using that template. -You can click the arrow image:arrow.png[Arrow,15,15] icon next to each entry to expand and view more information. -This list is sorted alphabetically by name, but you can sort by other criteria, or search by various fields and attributes of a template. - -//image::ug-job-templates-home.png[Job templates home] - -From this screen you can launch image:rightrocket.png[Rightrocket,15,15], edit image:leftpencil.png[Leftpencil,15,15], copy image:copy.png[Copy,15,15] and delete image:delete-button.png[Delete,15.15] a job template. +include::platform/ref-controller-intro-job-template.adoc[leveloffset=+1] -[NOTE] -==== -You can use job templates to build a workflow template. -Templates that show the *Workflow Visualizer* image:visualizer.png[Visualizer, 15,15] icon next to them are workflow templates. -Clicking the icon allows you to build a workflow graphically. -Many parameters in a job template enable you to select *Prompt on Launch* that you can change at the workflow level, and do not affect the values assigned at the job template level. -For instructions, see the xref:controller-workflow-visualizer[Workflow Visualizer] section. 
-==== +include::platform/proc-set-domain-of-interest.adoc[leveloffset=+1] include::platform/proc-controller-create-job-template.adoc[leveloffset=+1] -:context: templates + include::platform/proc-controller-adding-permissions.adoc[leveloffset=+1] -:!context: templates + include::platform/proc-controller-delete-job-template.adoc[leveloffset=+1] + include::platform/con-controller-work-with-notifications.adoc[leveloffset=+1] + include::platform/con-controller-view-completed-jobs.adoc[leveloffset=+1] + include::platform/proc-controller-scheduling-job-templates.adoc[leveloffset=+1] + include::platform/con-controller-surveys.adoc[leveloffset=+1] + include::platform/proc-controller-create-survey.adoc[leveloffset=+2] + include::platform/ref-controller-optional-survey-questions.adoc[leveloffset=+2] + include::platform/proc-controller-launch-job-template.adoc[leveloffset=+1] + +include::platform/ref-controller-job-template-variables.adoc[leveloffset=+2] + include::platform/proc-controller-copy-a-job-template.adoc[leveloffset=+1] -include::platform/con-controller-fact-scan-job-templates.adoc[leveloffset=+1] -include::platform/ref-controller-fact-scan-playbooks.adoc[leveloffset=+2] -include::platform/ref-controller-supported-oses.adoc[leveloffset=+2] -include::platform/ref-controller-pre-scan-setup.adoc[leveloffset=+2] -include::platform/ref-controller-custom-fact-scans.adoc[leveloffset=+2] -include::platform/con-controller-fact-caching.adoc[leveloffset=+2] -include::platform/con-controller-benefits-fact-caching.adoc[leveloffset=+2] + +//Removed at AAP-45082 as Controller 3.2 is out of date. +//include::platform/con-controller-fact-scan-job-templates.adoc[leveloffset=+1] + +include::platform/ref-controller-fact-scan-playbooks.adoc[leveloffset=+1] + +include::platform/ref-controller-supported-oses.adoc[leveloffset=+1] + +include::platform/ref-controller-pre-scan-setup.adoc[leveloffset=+1] + +include::platform/ref-controller-custom-fact-scans.adoc[leveloffset=+1] + +include::platform/con-controller-fact-caching.adoc[leveloffset=+1] + +include::platform/con-controller-benefits-fact-caching.adoc[leveloffset=+1] + include::platform/con-controller-cloud-credentials.adoc[leveloffset=+1] -include::platform/ref-controller-openstack-cloud.adoc[leveloffset=+2] + +include::platform/ref-controller-openstack-cloud.adoc[leveloffset=+1] + include::platform/ref-controller-aws-cloud.adoc[leveloffset=+2] + include::platform/ref-controller-google-cloud.adoc[leveloffset=+2] + include::platform/ref-controller-azure-cloud.adoc[leveloffset=+2] + include::platform/ref-controller-vmware-cloud.adoc[leveloffset=+2] + include::platform/con-controller-provisioning-callbacks.adoc[leveloffset=+1] + include::platform/proc-controller-enable-provision-callbacks.adoc[leveloffset=+2] + +include::platform/proc-controller-use-REST-manually.adoc[leveloffset=+2] + include::platform/proc-controller-pass-extra-variables-provisioning-callbacks.adoc[leveloffset=+2] + include::platform/ref-controller-extra-variables.adoc[leveloffset=+1] + include::platform/con-controller-relaunch-job-template.adoc[leveloffset=+2] diff --git a/downstream/assemblies/platform/assembly-ug-controller-jobs.adoc b/downstream/assemblies/platform/assembly-ug-controller-jobs.adoc index acd3c097d7..8a21eb4d5d 100644 --- a/downstream/assemblies/platform/assembly-ug-controller-jobs.adoc +++ b/downstream/assemblies/platform/assembly-ug-controller-jobs.adoc @@ -1,8 +1,14 @@ +ifdef::context[:parent-context: {context}] + +:_mod-docs-content-type: ASSEMBLY + [id="controller-jobs"] 
= Jobs in {ControllerName} -A job is an instance of {ControllerName} launching an Ansible playbook against an inventory of hosts. +:context: jobs-in-controller + +A job is an instance of {ControllerName} launching an Ansible Playbook against an inventory of hosts. The *Jobs* list view displays a list of jobs and their statuses, shown as completed successfully, failed, or as an active (running) job. The default view is collapsed (Compact) with the job name, status, job type, start, and finish times. @@ -13,13 +19,15 @@ image::ug-jobs-list-all-expanded.png[Jobs list expanded] From this screen you can complete the following tasks: +* In the *Domains* taskbar you can specify a domain to make relevant resources easily accessible. +Click the image:wrench.png[Wrench,15,15] icon to edit the existing labels or btn:[Add Domain] to set up your own. * View details and standard output of a particular job * Relaunch image:rightrocket.png[Launch,15,15] jobs -* Cancel or Remove selected jobs +* Cancel or delete selected jobs The relaunch operation only applies to relaunches of playbook runs and does not apply to project or inventory updates, system jobs, and workflow jobs. -When a job relaunches, the *Jobs Output* view is displayed. -Selecting any type of job also takes you to the *Job Output* view for that job, where you can filter jobs by various criteria: +When a job relaunches, the *Output* view is displayed. +Selecting any type of job also takes you to the *Output* view for that job, where you can filter jobs by various criteria: image::ug-job-details-view-filters.png[Job details view filters] @@ -27,24 +35,50 @@ image::ug-job-details-view-filters.png[Job details view filters] * The *Event* option in the *Search output* list enables you to filter by the events of interest, such as errors, host failures, host retries, and items skipped. You can include as many events in the filter as necessary. //* The *Advanced* option is a refined search that gives you a combination of including or excluding criteria, searching by key, or by lookup type. -For more information on using the search, refer to the xref:assembly-controller-search[Search] section. +For more information about using the search, see the xref:assembly-controller-search[Search] section. include::platform/con-controller-inventory-sync-jobs.adoc[leveloffset=+1] + include::platform/ref-controller-inventory-sync-details.adoc[leveloffset=+2] + include::platform/con-controller-scm-inventory-jobs.adoc[leveloffset=+1] + include::platform/ref-controller-scm-inventory-details.adoc[leveloffset=+2] + include::platform/con-controller-playbook-run-jobs.adoc[leveloffset=+1] + include::platform/ref-controller-playbook-run-search.adoc[leveloffset=+2] -include::platform/ref-controller-host-details.adoc[leveloffset=+2] + +//Commenting this out until I know what it's talking about. 
+//include::platform/ref-controller-host-details.adoc[leveloffset=+2] + include::platform/ref-controller-playbook-run-details.adoc[leveloffset=+2] + +include::platform/con-controller-playbook-access-info-sharing.adoc[leveloffset=+2] + +include::platform/ref-controller-isolation-functionality.adoc[leveloffset=+2] + include::platform/con-controller-capacity-determination.adoc[leveloffset=+1] + include::platform/con-controller-resource-determination-capacity.adoc[leveloffset=+2] + include::platform/ref-controller-memory-relative-capacity.adoc[leveloffset=+3] + include::platform/ref-controller-cpu-relative-capacity.adoc[leveloffset=+3] + include::platform/con-controller-capacity-job-impacts.adoc[leveloffset=+2] + include::platform/con-controller-impact-of-job-types.adoc[leveloffset=+3] + include::platform/proc-controller-select-capacity.adoc[leveloffset=+3] + include::platform/con-controller-job-branch-overriding.adoc[leveloffset=+1] + include::platform/con-controller-source-tree-copy.adoc[leveloffset=+2] + include::platform/con-controller-project-revision-behavior.adoc[leveloffset=+2] + include::platform/ref-controller-git-refspec.adoc[leveloffset=+2] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-ug-controller-notifications.adoc b/downstream/assemblies/platform/assembly-ug-controller-notifications.adoc index 3704f9b7f9..7cde97ebef 100644 --- a/downstream/assemblies/platform/assembly-ug-controller-notifications.adoc +++ b/downstream/assemblies/platform/assembly-ug-controller-notifications.adoc @@ -1,6 +1,8 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-notifications"] -= Notifications += Notifiers A xref:controller-notification-types[Notification type] such as Email, Slack or a Webhook, is an instance of a Notification Template, and has a name, description and configuration defined in the Notification template. @@ -22,21 +24,39 @@ In this case, you associate the notification template with the job template at ` Users and teams are also able to define their own notifications that can be attached to arbitrary jobs. 
include::platform/con-controller-notification-hierarchy.adoc[leveloffset=+1] + include::platform/con-controller-notification-workflow.adoc[leveloffset=+1] + include::platform/proc-controller-create-notification-template.adoc[leveloffset=+1] + include::platform/con-controller-notification-types.adoc[leveloffset=+1] + include::platform/ref-controller-notification-email.adoc[leveloffset=+2] + include::platform/ref-controller-notification-grafana.adoc[leveloffset=+2] + include::platform/ref-controller-notification-irc.adoc[leveloffset=+2] + include::platform/ref-controller-notification-mattermost.adoc[leveloffset=+2] + include::platform/ref-controller-notification-pager-duty.adoc[leveloffset=+2] + include::platform/ref-controller-notification-rocketchat.adoc[leveloffset=+2] + include::platform/ref-controller-notification-slack.adoc[leveloffset=+2] + include::platform/ref-controller-notification-twilio.adoc[leveloffset=+2] + include::platform/ref-controller-notification-webhook.adoc[leveloffset=+2] + include::platform/ref-controller-notification-webhook-payloads.adoc[leveloffset=+3] + include::platform/proc-controller-create-custom-notifications.adoc[leveloffset=+1] + include::platform/con-controller-enable-notifications.adoc[leveloffset=+1] + include::platform/con-controller-configure-hostname-notifications.adoc[leveloffset=+1] + include::platform/proc-controller-reset-tower-base.adoc[leveloffset=+2] + include::platform/ref-controller-notifications-api.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-ug-controller-schedules.adoc b/downstream/assemblies/platform/assembly-ug-controller-schedules.adoc index 62da9665f2..541627d10f 100644 --- a/downstream/assemblies/platform/assembly-ug-controller-schedules.adoc +++ b/downstream/assemblies/platform/assembly-ug-controller-schedules.adoc @@ -1,12 +1,17 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-schedules"] = Schedules +:context: schedules-controller + From the navigation panel, click {MenuAESchedules} to access your configured schedules. -The schedules list can be sorted by any of the attributes from each column using the directional arrows. +The schedules list can be sorted by any of the attributes from each column by using the directional arrows. You can also search by name, date, or the name of the month in which a schedule runs. -Each schedule has options to enable or disable that schedule using the *On* or *Off* toggle next to the schedule name. +Use the *On* or *Off* toggle to stop an active schedule or activate a stopped schedule. + Click the Edit image:leftpencil.png[Edit,15,15] icon to edit a schedule. image::ug-schedules-sample-list.png[Schedules sample list] @@ -20,5 +25,9 @@ Type:: This identifies whether the schedule is associated with a source control Next run:: The next scheduled run of this task. 
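+Each schedule stores its recurrence as an iCalendar (RFC 5545) rule. As a sketch, the rule for a schedule that runs every Monday at noon Eastern time might look like the following; the start stamp is a placeholder:
+
+[source,text]
+----
+DTSTART;TZID=America/New_York:20250106T120000 RRULE:INTERVAL=1;FREQ=WEEKLY;BYDAY=MO
+----
+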
include::platform/proc-controller-adding-new-schedule.adoc[leveloffset=+1] + +include::platform/proc-controller-add-new-schedule-from-resource.adoc[leveloffset=+1] + include::platform/proc-controller-define-schedule-rules.adoc[leveloffset=+2] + include::platform/proc-controller-define-schedule-exceptions.adoc[leveloffset=+2] diff --git a/downstream/assemblies/platform/assembly-ug-controller-setting-up-insights.adoc b/downstream/assemblies/platform/assembly-ug-controller-setting-up-insights.adoc index 54d887e300..20d2495fea 100644 --- a/downstream/assemblies/platform/assembly-ug-controller-setting-up-insights.adoc +++ b/downstream/assemblies/platform/assembly-ug-controller-setting-up-insights.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-setting-up-insights"] = Setting up {InsightsName} Remediations @@ -11,9 +13,13 @@ Red Hat Insights users create a maintenance plan to group the fixes and can crea {ControllerNameStart} tracks the maintenance plan playbooks through a Red Hat Insights project. Authentication to Red Hat Insights through Basic Authorization is backed by a special credential, which must first be established in {ControllerName}. + To run a Red Hat Insights maintenance plan, you need a Red Hat Insights project and inventory. include::platform/proc-controller-create-insights-credential.adoc[leveloffset=+1] + include::platform/proc-controller-create-insights-project.adoc[leveloffset=+1] + include::platform/con-controller-create-insights-inventory.adoc[leveloffset=+1] + include::platform/proc-controller-remediate-insights-inventory.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-ug-controller-work-with-webhooks.adoc b/downstream/assemblies/platform/assembly-ug-controller-work-with-webhooks.adoc index 8eae1af8d3..7efedd2971 100644 --- a/downstream/assemblies/platform/assembly-ug-controller-work-with-webhooks.adoc +++ b/downstream/assemblies/platform/assembly-ug-controller-work-with-webhooks.adoc @@ -1,15 +1,17 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-work-with-webhooks"] = Working with Webhooks -A Webhook enables you to execute specified commands between applications over the web. +Use webhooks to run specified commands between applications over the web. {ControllerNameStart} currently provides webhook integration with GitHub and GitLab. -Set up a webhook using the following services: +Set up a webhook by using the following services: * xref:controller-set-up-github-webhook[Setting up a GitHub webhook] * xref:controller-set-up-gitlab-webhook[Setting up a GitLab webhook] -* xref:controller-view-payload-output[Viewing a payload output] +* xref:controller-view-payload-output[Viewing the payload output] The webhook post-status-back functionality for GitHub and GitLab is designed to work only under certain CI events. 
Receiving another kind of event results in messages such as the following in the service log: @@ -17,5 +19,7 @@ Receiving another kind of event results in messages such as the following in the service log: `awx.main.models.mixins Webhook event did not have a status API endpoint associated, skipping.` include::platform/proc-controller-set-up-github-webhook.adoc[leveloffset=+1] + include::platform/proc-controller-set-up-gitlab-webhook.adoc[leveloffset=+1] + include::platform/proc-controller-view-payload-output.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-ug-controller-workflow-job-templates.adoc b/downstream/assemblies/platform/assembly-ug-controller-workflow-job-templates.adoc index 0c989b0b38..b382facea8 100644 --- a/downstream/assemblies/platform/assembly-ug-controller-workflow-job-templates.adoc +++ b/downstream/assemblies/platform/assembly-ug-controller-workflow-job-templates.adoc @@ -1,7 +1,15 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-workflow-job-templates"] = Workflow job templates +:context: workflow-job-templates + +You can create both Job templates and Workflow job templates from {MenuAETemplates}. + +For Job templates, see xref:controller-job-templates[Job templates]. + A workflow job template links together a sequence of disparate resources that tracks the full set of jobs that were part of the release process as a single unit. These resources include the following: @@ -10,11 +18,16 @@ These resources include the following: * Project syncs * Inventory source syncs -The *Templates* list view shows the workflow and job templates that are currently available. -The default view is collapsed (Compact), showing the template name, template type, and the statuses of the jobs that have run by using that template. -You can click the arrow next to each entry to expand and view more information. +The *Automation Templates* page shows the workflow and job templates that are currently available. + +//The default view is to show each template as a card, showing the template name and template type. + +Select the template name to display more information about the template, including when it last ran. This list is sorted alphabetically by name, but you can sort by other criteria, or search by various fields and attributes of a template. -From this screen you can launch image:rightrocket.png[Launch,15,15], edit image:leftpencil.png[Edit,15,15], and copy image:copy.png[Copy,15,15] a workflow job template. + +From this screen you can launch image:rightrocket.png[Launch icon,15,15], edit image:leftpencil.png[Edit icon,15,15], and duplicate image:copy.png[Duplicate icon,15,15] a workflow job template. + +//From the template card you can launch image:rightrocket.png[Launch,15,15], edit image:leftpencil.png[Leftpencil,15,15] a template, or, using the {MoreActionsIcon} icon, you can duplicate image:copy.png[Duplicate,15,15] or delete image:delete-button.png[Delete,15,15] a template. Only workflow templates have the workflow visualizer image:visualizer.png[Workflow visualizer,15,15] icon as a shortcut for accessing the workflow editor. @@ -25,20 +38,33 @@ image::ug-wf-templates-home.png[Workflow templates home] Workflow templates can be used as building blocks for another workflow template. You can enable *Prompt on Launch* by setting up several settings in a workflow template, which you can edit at the workflow job template level. These do not affect the values assigned at the individual workflow template level.
-For further instructions, see the xref:controller-workflow-visualizer[Workflow Visualizer] section. +For further instructions, see the xref:controller-workflow-visualizer[Workflow visualizer] section. ==== include::platform/proc-controller-create-workflow-template.adoc[leveloffset=+1] + include::platform/con-controller-work-with-permissions.adoc[leveloffset=+1] + include::platform/con-controller-workflow-notifications.adoc[leveloffset=+1] + include::platform/con-controller-view-completed-workflow-jobs.adoc[leveloffset=+1] + include::platform/proc-controller-scheduling-workflow-job-templates.adoc[leveloffset=+1] + include::platform/con-controller-workflow-job-surveys.adoc[leveloffset=+1] + include::platform/con-controller-workflow-visualizer.adoc[leveloffset=+1] + include::platform/proc-controller-build-workflow.adoc[leveloffset=+2] + include::platform/ref-controller-approval-nodes.adoc[leveloffset=+2] + include::platform/proc-controller-building-nodes-scenarios.adoc[leveloffset=+2] + include::platform/proc-controller-edit-nodes.adoc[leveloffset=+2] + include::platform/proc-controller-launch-workflow-template.adoc[leveloffset=+1] + include::platform/proc-controller-copy-workflow-job-template.adoc[leveloffset=+1] + include::platform/ref-controller-workflow-job-template-extra-variables.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-ug-controller-workflows.adoc b/downstream/assemblies/platform/assembly-ug-controller-workflows.adoc index 8805d3989c..be22cd8552 100644 --- a/downstream/assemblies/platform/assembly-ug-controller-workflows.adoc +++ b/downstream/assemblies/platform/assembly-ug-controller-workflows.adoc @@ -1,8 +1,12 @@ +:_mod-docs-content-type: ASSEMBLY + [id="controller-workflows"] = Workflows in {ControllerName} -Workflows enable you to configure a sequence of disparate job templates (or workflow templates) that may or may not share inventory, playbooks, or permissions. +:context: workflows-controller + +Workflows enable you to configure a sequence of disparate job templates (or workflow templates) that might or might not share inventory, playbooks, or permissions. Workflows have `admin` and `execute` permissions, similar to job templates. A workflow accomplishes the task of tracking the full set of jobs that were part of the release process as a single unit. @@ -12,7 +16,7 @@ These nodes can be jobs, project syncs, or inventory syncs. A template can be part of different workflows or used multiple times in the same workflow. A copy of the graph structure is saved to a workflow job when you launch the workflow. -The following example shows a workflow that contains all three, as well as a workflow job template: +The following example shows a workflow that has all three, and a workflow job template: image::ug-node-all-scenarios-wf.png[Node in workflow] @@ -21,7 +25,9 @@ Nodes linking to a job template which has prompt-driven fields (job_type, job_ta Job templates that prompt for a credential or inventory, without defaults, are not available for inclusion in a workflow. 
include::platform/con-controller-workflow-scenarios.adoc[leveloffset=+1] + include::platform/ref-controller-workflows-extra-variables.adoc[leveloffset=+1] + include::platform/con-controller-workflow-states.adoc[leveloffset=+1] -include::platform/con-controller-role-based-access-controls.adoc[leveloffset=+1] +include::platform/con-controller-role-based-access-controls.adoc[leveloffset=+1] diff --git a/downstream/assemblies/platform/assembly-update-container.adoc b/downstream/assemblies/platform/assembly-update-container.adoc new file mode 100644 index 0000000000..41c9c9491e --- /dev/null +++ b/downstream/assemblies/platform/assembly-update-container.adoc @@ -0,0 +1,11 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="update-container"] + += Container-based {PlatformNameShort} + + +To update your container-based {PlatformNameShort}, start by reviewing the update considerations. You can then download the latest version of the {PlatformNameShort} installer, configure the `inventory` file in the installation bundle to reflect your environment, and then run the installer. + +include::platform/proc-update-aap-container.adoc[leveloffset=+1] +include::platform/proc-backup-aap-container.adoc[leveloffset=+1] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-update-ocp.adoc b/downstream/assemblies/platform/assembly-update-ocp.adoc new file mode 100644 index 0000000000..d73beca14e --- /dev/null +++ b/downstream/assemblies/platform/assembly-update-ocp.adoc @@ -0,0 +1,12 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="update-ocp"] + += Updating {PlatformName} on {OCP} + +You can use an upgrade patch to update your operator-based {PlatformNameShort}. + +include::platform/proc-update-aap-on-ocp.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/assembly-update-rpm.adoc b/downstream/assemblies/platform/assembly-update-rpm.adoc new file mode 100644 index 0000000000..62d19f7d14 --- /dev/null +++ b/downstream/assemblies/platform/assembly-update-rpm.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="update-rpm"] + += RPM-based {PlatformNameShort} + +To update your RPM-based {PlatformNameShort}, start by reviewing the update considerations. You can then download the latest version of the {PlatformNameShort} installer, configure the `inventory` file in the installation bundle to reflect your environment, and then run the installer. + +include::platform/con-update-planning.adoc[leveloffset=+1] +include::assembly-choosing-obtaining-installer.adoc[leveloffset=+1] +include::platform/proc-backup-aap-rpm.adoc[leveloffset=+1] +include::platform/proc-inventory-file-setup-rpm.adoc[leveloffset=+1] +include::platform/proc-running-setup-script-for-updates.adoc[leveloffset=+1] \ No newline at end of file diff --git a/downstream/assemblies/platform/assembly-using-custom-tls-certificates.adoc b/downstream/assemblies/platform/assembly-using-custom-tls-certificates.adoc new file mode 100644 index 0000000000..c5efd3cae5 --- /dev/null +++ b/downstream/assemblies/platform/assembly-using-custom-tls-certificates.adoc @@ -0,0 +1,38 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="using-custom-tls-certificates"] +ifdef::context[:parent-context: {context}] + += Using custom TLS certificates + +{PlatformName} uses X.509 certificate and key pairs to secure traffic both internally between {PlatformNameShort} components and externally for public UI and API connections. 
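+If you plan to provide your own certificates, you can check a certificate and key pair with `openssl` before installation. This is a minimal sketch; the file names are placeholders:
+
+[source,bash]
+----
+# Show the certificate's subject and validity window
+openssl x509 -in my-cert.pem -noout -subject -dates
+
+# Confirm that an RSA private key matches the certificate
+openssl x509 -in my-cert.pem -noout -modulus | openssl md5
+openssl rsa -in my-key.pem -noout -modulus | openssl md5
+----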
+ +There are two primary ways to manage TLS certificates for your {PlatformNameShort} deployment: + +. {PlatformNameShort} generated certificates (this is the default) +. User-provided certificates + +// AAP generated certificates +include::platform/con-installer-generated-certs.adoc[leveloffset=+1] + +== User-provided certificates + +To use your own TLS certificates and keys to replace some or all of the self-signed certificates generated during installation, you can set specific variables in your inventory file. These certificates and keys must be generated by a public or organizational CA in advance so that they are available during the installation process. + +// Option 1: Use a custom CA to generate all TLS certificates +include::platform/proc-use-custom-ca-certs.adoc[leveloffset=+2] + +// Option 2: Provide custom TLS certificates for each service +include::platform/proc-provide-custom-tls-certs-per-service.adoc[leveloffset=+2] + +// Considerations for Option 2 +include::platform/con-certs-per-service-considerations.adoc[leveloffset=+2] + +// Providing a custom CA certificate +include::platform/proc-provide-custom-ca-cert.adoc[leveloffset=+2] + +// Receptor certificate considerations +include::platform/con-receptor-cert-considerations.adoc[leveloffset=+1] + +ifdef::parent-context[:context: {parent-context}] +ifndef::parent-context[:!context:] diff --git a/downstream/assemblies/platform/eda b/downstream/assemblies/platform/eda new file mode 120000 index 0000000000..cca4c84ae0 --- /dev/null +++ b/downstream/assemblies/platform/eda @@ -0,0 +1 @@ +../../modules/eda \ No newline at end of file diff --git a/downstream/assemblies/playbooks/assembly-open-source-license.adoc b/downstream/assemblies/playbooks/assembly-open-source-license.adoc new file mode 100644 index 0000000000..ae4033b19c --- /dev/null +++ b/downstream/assemblies/playbooks/assembly-open-source-license.adoc @@ -0,0 +1,5 @@ +[id="assembly-open-source-license"] + += Open source license + +include::../aap-common/gplv3-license-text.adoc[leveloffset=+1] \ No newline at end of file diff --git a/downstream/assemblies/terraform-aap/assembly-terraform-integrating-from-aap.adoc b/downstream/assemblies/terraform-aap/assembly-terraform-integrating-from-aap.adoc new file mode 100644 index 0000000000..22dbd9f42c --- /dev/null +++ b/downstream/assemblies/terraform-aap/assembly-terraform-integrating-from-aap.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="terraform-integrating-from-aap"] + += Integrating from {PlatformNameShort} + +Use the procedures in this section to set up the integration from {PlatformNameShort}. You need to create a credential, build an execution environment, and launch a job template in {PlatformNameShort}. 
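+As a sketch of the execution environment step, a minimal `execution-environment.yml` that adds the `cloud.terraform` collection might look like the following. The base image and collection source are assumptions; follow the procedures below for the supported definition:
+
+[source,yaml]
+----
+---
+version: 3
+images:
+  base_image:
+    name: registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel9:latest
+dependencies:
+  galaxy:
+    collections:
+      - name: cloud.terraform
+----
+
+You can then build the image with `ansible-builder build -t my-terraform-ee` and push it to a registry that your {PlatformNameShort} deployment can pull from.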
+ +include::terraform-aap/proc-terraform-creating-credential.adoc[leveloffset=+1] + +include::terraform-aap/proc-terraform-building-execution-environment.adoc[leveloffset=+1] + +include::terraform-aap/proc-terraform-creating-launching-job-template.adoc[leveloffset=+1] \ No newline at end of file diff --git a/downstream/assemblies/terraform-aap/assembly-terraform-integrating-from-terraform.adoc b/downstream/assemblies/terraform-aap/assembly-terraform-integrating-from-terraform.adoc new file mode 100644 index 0000000000..13fb87cebe --- /dev/null +++ b/downstream/assemblies/terraform-aap/assembly-terraform-integrating-from-terraform.adoc @@ -0,0 +1,7 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="terraform-integrating-from-terraform"] + += Integrating from {Terraform} + +If you have already provisioned your environment from {TerraformEnterpriseShortName}, you can use the {Terraform} official provider for {PlatformNameShort} to apply {PlatformNameShort} automation to Day 2 tasks and to manage infrastructure updates and lifecycle events. For more information about integrating from {TerraformEnterpriseShortName}, see the link:https://developer.hashicorp.com/terraform/enterprise[{Terraform} documentation] and the link:https://registry.terraform.io/providers/ansible/aap/latest[{PlatformNameShort} official provider] in the {Terraform} registry. diff --git a/downstream/assemblies/terraform-aap/assembly-terraform-introduction.adoc b/downstream/assemblies/terraform-aap/assembly-terraform-introduction.adoc new file mode 100644 index 0000000000..b5eb7be8cc --- /dev/null +++ b/downstream/assemblies/terraform-aap/assembly-terraform-introduction.adoc @@ -0,0 +1,9 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="terraform-introduction"] + += About this integration + +The integration of {PlatformName} and {TerraformEnterpriseFullName} offers a powerful solution for streamlining IT operations. This collaboration combines the strengths of both tools to save time and effort while reducing risks in complex IT environments. + +include::terraform-aap/con-terraform-intro.adoc[leveloffset=+1] \ No newline at end of file diff --git a/downstream/assemblies/terraform-aap/assembly-terraform-migrating-from-community.adoc b/downstream/assemblies/terraform-aap/assembly-terraform-migrating-from-community.adoc new file mode 100644 index 0000000000..158520e16c --- /dev/null +++ b/downstream/assemblies/terraform-aap/assembly-terraform-migrating-from-community.adoc @@ -0,0 +1,9 @@ +:_mod-docs-content-type: ASSEMBLY + +[id="terraform-migrating-from-community-terraform"] + += Migrating from the community version of Terraform + +If you are using {TerraformCommunityName} Edition (TCE) and want to use {PlatformNameShort}, you must migrate to {TerraformEnterpriseShortName} (TFE) or {TerraformCloudShortName}.
+ +include::terraform-aap/proc-terraform-migrating-from-community.adoc[leveloffset=+1] \ No newline at end of file diff --git a/downstream/assemblies/terraform-aap/terraform-aap b/downstream/assemblies/terraform-aap/terraform-aap new file mode 120000 index 0000000000..3bf2bf51a6 --- /dev/null +++ b/downstream/assemblies/terraform-aap/terraform-aap @@ -0,0 +1 @@ +../../modules/terraform-aap \ No newline at end of file diff --git a/downstream/assemblies/topologies/assembly-appendix-topology-resources.adoc b/downstream/assemblies/topologies/assembly-appendix-topology-resources.adoc new file mode 100644 index 0000000000..fc7542e791 --- /dev/null +++ b/downstream/assemblies/topologies/assembly-appendix-topology-resources.adoc @@ -0,0 +1,9 @@ +[id="appendix-topology-resources"] += Additional resources for tested deployment models + +This appendix lists the additional resources relevant to the tested deployment models outlined in {TitleTopologies}. + +* For additional information about each of the tested topologies described in this document, see the link:https://github.com/ansible/test-topologies/[test-topologies GitHub repository]. + +* For questions about IBM Cloud-specific configurations or issues, see link:https://www.ibm.com/mysupport[IBM support]. + diff --git a/downstream/assemblies/topologies/assembly-container-topologies.adoc b/downstream/assemblies/topologies/assembly-container-topologies.adoc new file mode 100644 index 0000000000..34506d2d90 --- /dev/null +++ b/downstream/assemblies/topologies/assembly-container-topologies.adoc @@ -0,0 +1,11 @@ +[id="container-topologies"] + += Container topologies + +The containerized installer deploys {PlatformNameShort} on {RHEL} by using Podman, which runs the platform in containers on host machines. Customers manage the product and infrastructure lifecycle. + +//Container growth topology +include::topologies/ref-cont-a-env-a.adoc[leveloffset=+1] + +//Container enterprise topology +include::topologies/ref-cont-b-env-a.adoc[leveloffset=+1] diff --git a/downstream/assemblies/topologies/assembly-ocp-topologies.adoc b/downstream/assemblies/topologies/assembly-ocp-topologies.adoc new file mode 100644 index 0000000000..e4bfd573ba --- /dev/null +++ b/downstream/assemblies/topologies/assembly-ocp-topologies.adoc @@ -0,0 +1,17 @@ +[id="ocp-topologies"] + += Operator topologies + +The {OperatorPlatformNameShort} uses Red Hat OpenShift Operators to deploy {PlatformNameShort} within Red Hat OpenShift. Customers manage the product and infrastructure lifecycle. + +[IMPORTANT] +==== +You can only install a single instance of the {OperatorPlatformNameShort} into a single namespace. +Installing multiple instances in the same namespace can lead to improper operation for both Operator instances. +==== + +//OCP growth topology +include::topologies/ref-ocp-a-env-a.adoc[leveloffset=+1] + +//OCP enterprise topology +include::topologies/ref-ocp-b-env-a.adoc[leveloffset=+1] diff --git a/downstream/assemblies/topologies/assembly-overview-tested-deployment-models.adoc b/downstream/assemblies/topologies/assembly-overview-tested-deployment-models.adoc new file mode 100644 index 0000000000..391c04bb42 --- /dev/null +++ b/downstream/assemblies/topologies/assembly-overview-tested-deployment-models.adoc @@ -0,0 +1,11 @@ +[id="overview-tested-deployment-models"] + += Overview of tested deployment models + +Red Hat tests {PlatformNameShort} {PlatformVers} with a defined set of topologies to give you opinionated deployment options.
Deploy all components of {PlatformNameShort} so that all features and capabilities are available for use without the need to take further action. + +Red Hat tests the installation of {PlatformNameShort} {PlatformVers} based on a defined set of infrastructure topologies or reference architectures. Enterprise organizations can use one of the {EnterpriseTopologyPlural} for production deployments to ensure the highest level of uptime, performance, and continued scalability. Organizations or deployments that are resource constrained can use a {GrowthTopology}. + +It is possible to install {PlatformNameShort} on different infrastructure topologies and with different environment configurations. Red Hat does not fully test topologies outside of published reference architectures. Red Hat recommends using a tested topology for all new deployments and provides commercially reasonable support for deployments that meet minimum requirements. + +include::topologies/ref-installation-deployment-models.adoc[leveloffset=+1] diff --git a/downstream/assemblies/topologies/assembly-rpm-topologies.adoc b/downstream/assemblies/topologies/assembly-rpm-topologies.adoc new file mode 100644 index 0000000000..ee511d71c6 --- /dev/null +++ b/downstream/assemblies/topologies/assembly-rpm-topologies.adoc @@ -0,0 +1,12 @@ +:_mod-docs-content-type: ASSEMBLY +[id="rpm-topologies"] + += RPM topologies + +The RPM installer deploys {PlatformNameShort} on {RHEL} by using RPMs to install the platform on host machines. Customers manage the product and infrastructure lifecycle. + +//RPM growth topology +include::topologies/ref-rpm-a-env-a.adoc[leveloffset=+1] + +//RPM enterprise topology +include::topologies/ref-rpm-b-env-a.adoc[leveloffset=+1] diff --git a/downstream/assemblies/topologies/topologies b/downstream/assemblies/topologies/topologies new file mode 120000 index 0000000000..e20855697b --- /dev/null +++ b/downstream/assemblies/topologies/topologies @@ -0,0 +1 @@ +../../modules/topologies \ No newline at end of file diff --git a/downstream/assemblies/troubleshooting-aap/assembly-diagnosing-the-problem.adoc b/downstream/assemblies/troubleshooting-aap/assembly-diagnosing-the-problem.adoc index 21aacb1a57..357b5051e9 100644 --- a/downstream/assemblies/troubleshooting-aap/assembly-diagnosing-the-problem.adoc +++ b/downstream/assemblies/troubleshooting-aap/assembly-diagnosing-the-problem.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: ASSEMBLY [id="diagnosing-the-problem"] @@ -6,4 +7,5 @@ To start troubleshooting {PlatformNameShort}, use the `must-gather` command on {OCPShort} or the `sos` utility on a {VMBase} to collect configuration and diagnostic information. You can attach the output of these utilities to your support case. 
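+At their simplest, the two collection commands look like the following sketch; see the procedures that follow for the full options for {PlatformNameShort}:
+
+[source,bash]
+----
+# On OpenShift Container Platform, collect cluster diagnostics
+oc adm must-gather
+
+# On a RHEL host in a VM-based installation, collect an sos report
+sos report
+----
+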
include::troubleshooting-aap/proc-troubleshoot-must-gather.adoc[leveloffset=+1] -include::troubleshooting-aap/proc-troubleshoot-sosreport.adoc[leveloffset=+1] \ No newline at end of file + +include::troubleshooting-aap/proc-troubleshoot-sosreport.adoc[leveloffset=+1] diff --git a/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-backup-recovery.adoc b/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-backup-recovery.adoc index a35a7bfb35..a5436bf521 100644 --- a/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-backup-recovery.adoc +++ b/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-backup-recovery.adoc @@ -3,6 +3,6 @@ = Backup and recovery -* For information about performing a backup and recovery of {PlatformNameShort}, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-backup-and-restore#doc-wrapper[Backup and restore] in the Automation Controller Administration Guide. +* For information about performing a backup and recovery of {PlatformNameShort}, see link:{URLControllerAdminGuide}/controller-backup-and-restore[Backup and restore] in _{TitleControllerAdminGuide}_. -* For information about troubleshooting backup and recovery for installations of {OperatorPlatform} on {OCPShort}, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_operator_backup_and_recovery_guide/aap-troubleshoot-backup-recover[Troubleshooting] section in the Red{nbsp}Hat {OperatorPlatform} Backup and Recovery Guide. \ No newline at end of file +* For information about troubleshooting backup and recovery for installations of {OperatorPlatformNameShort} on {OCPShort}, see the link:{URLOperatorBackup}/assembly-aap-troubleshoot-backup-recover[Troubleshooting] section in _{TitleOperatorBackup}_. diff --git a/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-controller.adoc b/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-controller.adoc index c9615b50d5..786946dc94 100644 --- a/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-controller.adoc +++ b/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-controller.adoc @@ -3,6 +3,6 @@ = Resources for troubleshooting {ControllerName} -* For information about troubleshooting {ControllerName}, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#controller-troubleshooting[Troubleshooting automation controller] in the Automation Controller Administration Guide. +* For information about troubleshooting {ControllerName}, see link:{URLControllerAdminGuide}/controller-troubleshooting[Troubleshooting {ControllerName}] in _{TitleControllerAdminGuide}_. -* For information about troubleshooting the performance of {ControllerName}, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/assembly-controller-improving-performance#ref-controller-performance-troubleshooting[Performance troubleshooting for automation controller] in the Automation Controller Administration Guide. \ No newline at end of file +* For information about troubleshooting the performance of {ControllerName}, see link:{URLControllerAdminGuide}/assembly-controller-improving-performance#ref-controller-performance-troubleshooting[Performance troubleshooting for {ControllerName}] in _{TitleControllerAdminGuide}_. 
\ No newline at end of file diff --git a/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-jobs.adoc b/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-jobs.adoc index 12855b2479..1999ca1215 100644 --- a/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-jobs.adoc +++ b/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-jobs.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: ASSEMBLY [id="troubleshoot-jobs"] @@ -5,8 +6,12 @@ Troubleshoot issues with jobs. -include::troubleshooting-aap/proc-troubleshoot-job-localhost.adoc[leveloffset=+1] +// Michelle - commenting out for now as it refers to upgrade info +// include::troubleshooting-aap/proc-troubleshoot-job-localhost.adoc[leveloffset=+1] include::troubleshooting-aap/proc-troubleshoot-job-resolve-module.adoc[leveloffset=+1] + include::troubleshooting-aap/proc-troubleshoot-job-timeout.adoc[leveloffset=+1] + include::troubleshooting-aap/proc-troubleshoot-job-pending.adoc[leveloffset=+1] -include::troubleshooting-aap/proc-troubleshoot-job-permissions.adoc[leveloffset=+1] \ No newline at end of file + +include::troubleshooting-aap/proc-troubleshoot-job-permissions.adoc[leveloffset=+1] diff --git a/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-networking.adoc b/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-networking.adoc index 86ff2ca94b..47c34db517 100644 --- a/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-networking.adoc +++ b/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-networking.adoc @@ -6,3 +6,5 @@ Troubleshoot networking issues. include::troubleshooting-aap/proc-troubleshoot-subnet-conflict.adoc[leveloffset=+1] + +include::troubleshooting-aap/proc-troubleshoot-ssl-tls-issues.adoc[leveloffset=+1] diff --git a/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-playbooks.adoc b/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-playbooks.adoc index 5473d6ab0f..c88e776aca 100644 --- a/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-playbooks.adoc +++ b/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-playbooks.adoc @@ -3,4 +3,7 @@ = Playbooks -You can use {Navigator} to interactively troubleshoot your playbook. For more information about troubleshooting a playbook with {Navigator}, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_content_navigator_creator_guide/assembly-troubleshooting-navigator_ansible-navigator[Troubleshooting Ansible content with {Navigator}] in the Automation Content Navigator Creator Guide. +You can use {Navigator} to interactively troubleshoot your playbook. +For more information about troubleshooting a playbook with {Navigator}, see +link:{URLNavigatorGuide}/assembly-troubleshooting-navigator_ansible-navigator[Troubleshooting Ansible content with {Navigator}] +in _{TitleNavigatorGuide}_. diff --git a/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-upgrade.adoc b/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-upgrade.adoc new file mode 100644 index 0000000000..3a12be1aa0 --- /dev/null +++ b/downstream/assemblies/troubleshooting-aap/assembly-troubleshoot-upgrade.adoc @@ -0,0 +1,8 @@ + +[id="troubleshoot-upgrade"] + += Upgrading + +Troubleshoot issues when upgrading to {PlatformNameShort} 2.5.
+ +include::troubleshooting-aap/proc-troubleshoot-upgrade-issues.adoc[leveloffset=+1] diff --git a/downstream/attributes/attributes.adoc b/downstream/attributes/attributes.adoc index f0391dba2f..258ff4f26c 100644 --- a/downstream/attributes/attributes.adoc +++ b/downstream/attributes/attributes.adoc @@ -7,15 +7,20 @@ :CentralAuthStart: Central authentication :CentralAuth: central authentication :PlatformVers: 2.5 -//The Ansible-core version required to install AAP +:PostgresVers: PostgreSQL 15 +//The ansible-core version used to install AAP :CoreInstVers: 2.14 -//The Ansible-core version used by the AAP control plane and EEs -:CoreUseVers: 2.15 -:PlatformDownloadUrl: https://access.redhat.com/downloads/content/480/ver=2.5/rhel---9/2.4/x86_64/product-software +//The ansible-core version used by the AAP control plane and EEs +:CoreUseVers: 2.16 +:PlatformDownloadUrl: https://access.redhat.com/downloads/content/480/ver=2.5/rhel---9/2.5/x86_64/product-software :BaseURL: https://docs.redhat.com/en/documentation :VMBase: VM-based installation +:Installer: installation program :OperatorBase: operator-based installation :ContainerBase: container-based installation +:PlatformDashboard: platform dashboard +:Gateway: platform gateway +:GatewayStart: Platform gateway // Event-Driven Ansible :EDAName: Event-Driven Ansible @@ -29,12 +34,25 @@ :AnsibleContentParser: content parser tool :ibmwatsonxcodeassistant: IBM watsonx Code Assistant +// Ansible Lightspeed intelligent assistant (chatbot and AI) +:AAPchatbot: Ansible Lightspeed intelligent assistant +:RHELAI: Red Hat Enterprise Linux AI +:OCPAI: Red Hat OpenShift AI +:IBMwatsonxai: IBM watsonx.ai +:OpenAI: OpenAI +:AzureOpenAI: Microsoft Azure OpenAI + + // AAP on Clouds :AAPonAzureName: Red Hat Ansible Automation Platform on Microsoft Azure :AAPonAzureNameShort: Ansible Automation Platform on Microsoft Azure :AWS: Amazon Web Services -:GCP: Google Cloud Platform :Azure: Microsoft Azure +:MSEntraID: Microsoft Entra ID +:SaaSonAWS: Red Hat Ansible Automation Platform Service on AWS +:SaaSonAWSShort: Ansible Automation Platform Service on AWS +// AAP on GCP has been deprecated +:GCP: Google Cloud Platform // Automation Mesh :AutomationMesh: automation mesh @@ -43,7 +61,8 @@ :RunnerRpm: Ansible-runner rpm/container // Operators -:OperatorPlatform: Ansible Automation Platform Operator +:OperatorPlatformName: Red Hat Ansible Automation Platform Operator +:OperatorPlatformNameShort: Ansible Automation Platform Operator :OperatorHub: Ansible Automation Platform Hub Operator :OperatorController: Ansible Automation Platform Controller Operator :OperatorResource: Ansible Automation Platform Resource Operator @@ -90,11 +109,12 @@ :MeshConnect: automation mesh connector :MeshReceptor: automation mesh receptor :ControllerGS: Getting started with automation controller -:ControllerUG: Automation controller User Guide -:ControllerAG: Automation controller Administration Guide +:ControllerUG: Using automation execution +:ControllerAG: Configuring automation execution :Analytics: Automation Analytics - +// Red Hat Edge Manager +:RedHatEdge: Red Hat Edge Manager // Execution environments :ExecEnvNameStart: Automation execution environments @@ -108,10 +128,23 @@ :Runner: Ansible Runner :Role: Role ARG Spec -// Ansible developer tools -:ToolsName: Ansible developer tools +// Terraform +:TerraformEnterpriseFullName: IBM HashiCorp Terraform +:TerraformEnterpriseShortName: Terraform Enterprise +:TerraformCloudShortName: HCP Terraform +:TerraformCommunityName: Terraform 
Community +:Terraform: Terraform + +// Ansible development tools +:ToolsName: Ansible development tools :AAPRHDH: Ansible plug-ins for Red Hat Developer Hub +:AAPRHDHShort: Ansible plug-ins :RHDH: Red Hat Developer Hub +:RHDHVers: 1.4 +:RHDHShort: RHDH +:SelfService: Ansible Automation Platform self-service technology preview +:SelfServiceShort: self-service technology preview +:SelfServiceShortStart: Self-service technology preview :Builder: Ansible Builder :Navigator: automation content navigator :NavigatorStart: Automation content navigator @@ -130,7 +163,7 @@ :Console: console.redhat.com // Satellite attributes -:SatelliteVers: 6.15 +:SatelliteVers: 6.16 // OpenShift attributes :OCP: Red Hat OpenShift Container Platform @@ -138,6 +171,8 @@ :OCPLatest: 4.15 :ODF: Red Hat OpenShift Data Foundation :ODFShort: OpenShift Data Foundation +:OCPV: Red Hat OpenShift Virtualization +:OCPVShort: OpenShift Virtualization // Red Hat products :RHSSO: Red Hat Single Sign-On @@ -153,9 +188,21 @@ :DocumentationFeedback: providing-feedback.adoc :Boilerplate: aap-common/boilerplate.adoc +//Open Source licenses +:Apache: apache-2.0-license.adoc +:GNU3: gplv3-license.adoc +:OpenSourceA: aap-common/open-source-apache.adoc +:OpenSourceG: aap-common/open-source-gnu3.adoc + // Linux platforms :RHEL: Red Hat Enterprise Linux +// Topologies +:GrowthTopology: growth topology +:GrowthTopologyPlural: growth topologies +:EnterpriseTopology: enterprise topology +:EnterpriseTopologyPlural: enterprise topologies + // 2.5 Gateway Menu selections // These menu selections were based on the UI build environment dated 05/03/24 and should be verified against the final build before GA // Top level menu definitions for use only when selections go 3 levels deep. @@ -207,30 +254,30 @@ // FYI Automation Execution and Automation Decisions Projects will be under 1 selection in the 2.5-next or later. 
:MenuADProjects: menu:{MenuAD}[Projects] :MenuADDecisionEnvironments: menu:{MenuAD}[Decision Environments] -:MenuADWebhooks: menu:{MenuAD}[Webhooks] +:MenuADEventStreams: menu:{MenuAD}[Event Streams] :MenuADCredentials: menu:{MenuAD}[Infrastructure > Credentials] :MenuADCredentialType: menu:{MenuAD}[Infrastructure > Credential Types] -:MenuAECredentials: menu:{MenuTopAE}[Infrastructure > Credentials] -:MenuAECredentialType: menu:{MenuTopAE}[Infrastructure > Credential Types] + + // Automation Content (aka automation hub menu selections) // In 2.5EA the Automation Content selection will open a hub ui instance in a new tab/browser so the menu definitions will not change until 2.5-next -:MenuACNamespaces: menu:Collections[Namespaces] -:MenuACCollections: menu:Collections[Collections] -:MenuACExecEnvironments: menu:Execution Environments[Execution Environments] +:MenuACNamespaces: menu:{MenuTopAC}[Namespaces] +:MenuACCollections: menu:{MenuTopAC}[Collections] +:MenuACExecEnvironments: menu:{MenuTopAC}[Execution Environments] // Automation Content > Administration -:MenuACAdminSignatureKeys: menu:Signature Keys[] -:MenuACAdminRepositories: menu:Collections[Repositories] -:MenuACAdminRemoteRegistries: menu:Execution Environments[Remote Registries] -:MenuACAdminTasks: menu:Task Management[] -:MenuACAdminCollectionApproval: menu:Collections[Approval] -:MenuACAdminRemotes: menu:Collections[Remotes] -:MenuACAPIToken: menu:Collections[API token] +:MenuACAdminSignatureKeys: menu:{MenuTopAC}[Signature Keys] +:MenuACAdminRepositories: menu:{MenuTopAC}[Repositories] +:MenuACAdminRemoteRegistries: menu:{MenuTopAC}[Remote Registries] +:MenuACAdminTasks: menu:{MenuTopAC}[Task Management] +:MenuACAdminCollectionApproval: menu:{MenuTopAC}[Collection Approvals] +:MenuACAdminRemotes: menu:{MenuTopAC}[Remotes] +:MenuACAPIToken: menu:{MenuTopAC}[API token] //Each of the services previously had selections for access which will be centralized, ultimately these should be changed to use the attributes in Access Management menu selections once automation hub is provide in the full ui platform experience in 2.5-next -:MenuHubUsers: menu:User Access[Users] +:MenuHubUsers: menu:{MenuAM}[Users] :MenuHubGroups: menu:User Access[Groups] -:MenuHubRoles: menu:User Access[Roles] +:MenuHubRoles: menu:{MenuAM}[Roles] // Automation Analytics menu selections - According to mockups, analytics will be included in the Gateway nav only includes Automation Calculator, Host Metrics and Subscription Usage, other settings are also included on the Ansible dashboard on the Hybrid Cloud Console https://console.redhat.com/ansible/ansible-dashboard :MenuAAReports: menu:{MenuAA}[Reports] @@ -267,10 +314,198 @@ :MenuSetSubscription: menu:{MenuAEAdminSettings}[Subscription] :MenuSetGateway: menu:{MenuAEAdminSettings}[Platform gateway] :MenuSetUserPref: menu:{MenuAEAdminSettings}[User Preferences] -:MenuSetSystem: menu:{MenuAEAdminSettings}[System] -:MenuSetJob: menu:{MenuAEAdminSettings}[Job] -:MenuSetLogging: menu:{MenuAEAdminSettings}[Logging] -:MenuSetTroubleshooting: menu:{MenuAEAdminSettings}[Troubleshooting] +:MenuSetSystem: menu:{MenuAEAdminSettings}[Automation Execution > System] +:MenuSetJob: menu:{MenuAEAdminSettings}[Automation Execution > Job] +:MenuSetLogging: menu:{MenuAEAdminSettings}[Automation Execution > Logging] +:MenuSetTroubleshooting: menu:{MenuAEAdminSettings}[Automation Execution > Troubleshooting] +:MenuSetPolicy: menu:{MenuAEAdminSettings}[Automation Execution > Policy] // Not yet implemented but look to be in the future 
+
+// Title and link attributes
+//
+// titles/troubleshooting-aap
+:TitleTroubleshootingAAP: Troubleshooting Ansible Automation Platform
+:URLTroubleshootingAAP: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/troubleshooting_ansible_automation_platform
+:LinkTroubleshootingAAP: {URLTroubleshootingAAP}[{TitleTroubleshootingAAP}]
+//
+// titles/self-service-install
+:TitleSelfServiceInstall: Installing Ansible Automation Platform self-service technology preview
+:URLSelfServiceInstall: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/installing_ansible_automation_platform_self-service_technology_preview
+:LinkSelfServiceInstall: {URLSelfServiceInstall}[{TitleSelfServiceInstall}]
+//
+// titles/self-service-using
+:TitleSelfServiceUsing: Using Ansible Automation Platform self-service technology preview
+:URLSelfServiceUsing: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/using_ansible_automation_platform_self-service_technology_preview
+:LinkSelfServiceUsing: {URLSelfServiceUsing}[{TitleSelfServiceUsing}]
+//
+// titles/aap-plugin-rhdh-install
+:TitlePluginRHDHInstall: Installing Ansible plug-ins for Red Hat Developer Hub
+:URLPluginRHDHInstall: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/installing_ansible_plug-ins_for_red_hat_developer_hub
+:LinkPluginRHDHInstall: {URLPluginRHDHInstall}[{TitlePluginRHDHInstall}]
+//
+// titles/aap-plugin-rhdh-using
+:TitlePluginRHDHUsing: Using Ansible plug-ins for Red Hat Developer Hub
+:URLPluginRHDHUsing: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/using_ansible_plug-ins_for_red_hat_developer_hub
+:LinkPluginRHDHUsing: {URLPluginRHDHUsing}[{TitlePluginRHDHUsing}]
+//
+// titles/aap-operations-guide
+:TitleAAPOperationsGuide: Operating Ansible Automation Platform
+:URLAAPOperationsGuide: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/operating_ansible_automation_platform
+:LinkAAPOperationsGuide: {URLAAPOperationsGuide}[{TitleAAPOperationsGuide}]
+//
+// titles/eda/eda-user-guide
+:TitleEDAUserGuide: Using automation decisions
+:URLEDAUserGuide: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/using_automation_decisions
+:LinkEDAUserGuide: {URLEDAUserGuide}[{TitleEDAUserGuide}]
+//
+// titles/upgrade
+:TitleUpgrade: RPM upgrade and migration
+:URLUpgrade: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/rpm_upgrade_and_migration
+:LinkUpgrade: {URLUpgrade}[{TitleUpgrade}]
+//
+// titles/aap-operator-installation
+:TitleOperatorInstallation: Installing on OpenShift Container Platform
+:URLOperatorInstallation: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/installing_on_openshift_container_platform
+:LinkOperatorInstallation: {URLOperatorInstallation}[{TitleOperatorInstallation}]
+//
+// titles/aap-installation-guide
+:TitleInstallationGuide: RPM installation
+:URLInstallationGuide: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/rpm_installation
+:LinkInstallationGuide: {URLInstallationGuide}[{TitleInstallationGuide}]
+//
+// titles/aap-planning-guide
+:TitlePlanningGuide: Planning your installation
+:URLPlanningGuide: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/planning_your_installation
+:LinkPlanningGuide: {URLPlanningGuide}[{TitlePlanningGuide}]
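Each guide is declared as a `Title*`/`URL*`/`Link*` triplet, with the `Link*` attribute composing the other two into a ready-made cross-reference. A minimal sketch of how a topic might consume one of the triplets defined above; the surrounding sentence is hypothetical:

-----
For recovery procedures, see {LinkTroubleshootingAAP}.

// After attribute substitution this behaves like:
// {URLTroubleshootingAAP}[Troubleshooting Ansible Automation Platform]
-----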
+//
+// titles/operator-mesh
+:TitleOperatorMesh: Automation mesh for managed cloud or operator environments
+:URLOperatorMesh: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_mesh_for_managed_cloud_or_operator_environments
+:LinkOperatorMesh: {URLOperatorMesh}[{TitleOperatorMesh}]
+//
+// titles/automation-mesh
+:TitleAutomationMesh: Automation mesh for VM environments
+:URLAutomationMesh: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_mesh_for_vm_environments
+:LinkAutomationMesh: {URLAutomationMesh}[{TitleAutomationMesh}]
+//
+// titles/ocp_performance_guide
+:TitleOCPPerformanceGuide: Performance considerations for operator environments
+:URLOCPPerformanceGuide: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/performance_considerations_for_operator_environments
+:LinkOCPPerformanceGuide: {URLOCPPerformanceGuide}[{TitleOCPPerformanceGuide}]
+//
+// titles/security-guide
+:TitleSecurityGuide: Implementing security automation
+:URLSecurityGuide: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/implementing_security_automation
+:LinkSecurityGuide: {URLSecurityGuide}[{TitleSecurityGuide}]
+//
+// titles/playbooks/playbooks-getting-started
+:TitlePlaybooksGettingStarted: Getting started with playbooks
+:URLPlaybooksGettingStarted: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_playbooks
+:LinkPlaybooksGettingStarted: {URLPlaybooksGettingStarted}[{TitlePlaybooksGettingStarted}]
+//
+// titles/playbooks/playbooks-reference
+:TitlePlaybooksReference: Reference guide to Ansible Playbooks
+:URLPlaybooksReference: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/reference_guide_to_ansible_playbooks
+:LinkPlaybooksReference: {URLPlaybooksReference}[{TitlePlaybooksReference}]
+//
+// titles/release-notes
+:TitleReleaseNotes: Release notes
+:URLReleaseNotes: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/release_notes
+:LinkReleaseNotes: {URLReleaseNotes}[{TitleReleaseNotes}]
+//
+// titles/controller/controller-user-guide
+:TitleControllerUserGuide: Using automation execution
+:URLControllerUserGuide: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/using_automation_execution
+:LinkControllerUserGuide: {URLControllerUserGuide}[{TitleControllerUserGuide}]
+//
+// titles/controller/controller-admin-guide
+:TitleControllerAdminGuide: Configuring automation execution
+:URLControllerAdminGuide: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/configuring_automation_execution
+:LinkControllerAdminGuide: {URLControllerAdminGuide}[{TitleControllerAdminGuide}]
+//
+// titles/controller/controller-api-overview
+:TitleControllerAPIOverview: Automation execution API overview
+:URLControllerAPIOverview: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_execution_api_overview
+:LinkControllerAPIOverview: {URLControllerAPIOverview}[{TitleControllerAPIOverview}]
+//
+// titles/aap-operator-backup
+:TitleOperatorBackup: Backup and recovery for operator environments
+:URLOperatorBackup: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/backup_and_recovery_for_operator_environments
+:LinkOperatorBackup: {URLOperatorBackup}[{TitleOperatorBackup}]
+//
+// titles/central-auth
+:TitleCentralAuth: Access management and authentication
+:URLCentralAuth: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/access_management_and_authentication
+:LinkCentralAuth: {URLCentralAuth}[{TitleCentralAuth}]
+//
+// titles/getting-started
+:TitleGettingStarted: Getting started with Ansible Automation Platform
+:URLGettingStarted: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_ansible_automation_platform
+:LinkGettingStarted: {URLGettingStarted}[{TitleGettingStarted}]
+//
+// titles/aap-containerized-install
+:TitleContainerizedInstall: Containerized installation
+:URLContainerizedInstall: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/containerized_installation
+:LinkContainerizedInstall: {URLContainerizedInstall}[{TitleContainerizedInstall}]
+//
+// titles/navigator-guide
+:TitleNavigatorGuide: Using content navigator
+:URLNavigatorGuide: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/using_content_navigator
+:LinkNavigatorGuide: {URLNavigatorGuide}[{TitleNavigatorGuide}]
+//
+// titles/aap-hardening
+:TitleHardening: Hardening and compliance
+:URLHardening: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/hardening_and_compliance
+:LinkHardening: {URLHardening}[{TitleHardening}]
+//
+// titles/builder
+:TitleBuilder: Creating and using execution environments
+:URLBuilder: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/creating_and_using_execution_environments
+:LinkBuilder: {URLBuilder}[{TitleBuilder}]
+//
+// titles/hub/managing-content
+:TitleHubManagingContent: Managing automation content
+:URLHubManagingContent: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/managing_automation_content
+:LinkHubManagingContent: {URLHubManagingContent}[{TitleHubManagingContent}]
+//
+// titles/analytics
+:TitleAnalytics: Using automation analytics
+:URLAnalytics: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/using_automation_analytics
+:LinkAnalytics: {URLAnalytics}[{TitleAnalytics}]
+//
+// titles/develop-automation-content
+:TitleDevelopAutomationContent: Developing automation content
+:URLDevelopAutomationContent: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/developing_automation_content
+:LinkDevelopAutomationContent: {URLDevelopAutomationContent}[{TitleDevelopAutomationContent}]
+//
+// titles/topologies
+:TitleTopologies: Tested deployment models
+:URLTopologies: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/tested_deployment_models
+:LinkTopologies: {URLTopologies}[{TitleTopologies}]
+//
+// titles/edge-manager/edge-manager-user-guide
+:TitleEdgeManager: Managing device fleets with the Red Hat Edge Manager
+:URLEdgeManager: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/managing_device_fleets_with_the_red_hat_edge_manager
+:LinkEdgeManager: {URLEdgeManager}[{TitleEdgeManager}]
+//
+// titles/aap-migration
+:TitleMigration: Ansible Automation Platform migration
+:URLMigration: {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/ansible_automation_platform_migration
+:LinkMigration: {URLMigration}[{TitleMigration}]
+//
+// Lightspeed branch titles/lightspeed-user-guide
+:TitleLightspeedUserGuide: Red Hat Ansible Lightspeed with IBM watsonx Code Assistant User Guide
+:URLLightspeedUserGuide: {BaseURL}/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_user_guide
+:LinkLightspeedUserGuide: {URLLightspeedUserGuide}[{TitleLightspeedUserGuide}]
+//
+// Clouds branch titles/aap-on-azure
+:TitleAzureGuide: Red Hat Ansible Automation Platform on Microsoft Azure Guide
+:URLAzureGuide:
{BaseURL}/ansible_on_clouds/2.x_latest/html/red_hat_ansible_automation_platform_on_microsoft_azure_guide +:LinkAzureGuide: {URLAzureGuide}[{TitleAzureGuide}] +// +// Clouds branch titles/saas-aws +:TitleSaaSAWSGuide: Red Hat Ansible Automation Platform Service on AWS +:URLSaaSAWSGuide: {BaseURL}/ansible_on_clouds/2.x_latest/html/red_hat_ansible_automation_platform_service_on_aws +:LinkSaaSAWSGuide: {URLSaaSAWSGuide}[{TitleSaaSAWSGuide}] diff --git a/downstream/images/AAP_dashboard_2.5.png b/downstream/images/AAP_dashboard_2.5.png new file mode 100644 index 0000000000..b9f17caa35 Binary files /dev/null and b/downstream/images/AAP_dashboard_2.5.png differ diff --git a/downstream/images/Subscription_tab.png b/downstream/images/Subscription_tab.png new file mode 100644 index 0000000000..808333ea97 Binary files /dev/null and b/downstream/images/Subscription_tab.png differ diff --git a/downstream/images/aap-ansible-lightspeed-intelligent-assistant.png b/downstream/images/aap-ansible-lightspeed-intelligent-assistant.png new file mode 100644 index 0000000000..f7ee73a1d3 Binary files /dev/null and b/downstream/images/aap-ansible-lightspeed-intelligent-assistant.png differ diff --git a/downstream/images/aap-network-ports-protocols.png b/downstream/images/aap-network-ports-protocols.png index 7e558fa42e..418d4684b8 100644 Binary files a/downstream/images/aap-network-ports-protocols.png and b/downstream/images/aap-network-ports-protocols.png differ diff --git a/downstream/images/account-linking-flow.png b/downstream/images/account-linking-flow.png new file mode 100644 index 0000000000..7a865445f0 Binary files /dev/null and b/downstream/images/account-linking-flow.png differ diff --git a/downstream/images/activity_stream_details.png b/downstream/images/activity_stream_details.png new file mode 100644 index 0000000000..75ebde3fe9 Binary files /dev/null and b/downstream/images/activity_stream_details.png differ diff --git a/downstream/images/activity_stream_page.png b/downstream/images/activity_stream_page.png new file mode 100644 index 0000000000..18ee045d90 Binary files /dev/null and b/downstream/images/activity_stream_page.png differ diff --git a/downstream/images/am-apple-team-map-example.png b/downstream/images/am-apple-team-map-example.png new file mode 100644 index 0000000000..106c5ed79f Binary files /dev/null and b/downstream/images/am-apple-team-map-example.png differ diff --git a/downstream/images/am-do-not-escalate-privileges.png b/downstream/images/am-do-not-escalate-privileges.png new file mode 100644 index 0000000000..92caa1dfdf Binary files /dev/null and b/downstream/images/am-do-not-escalate-privileges.png differ diff --git a/downstream/images/am-escalate-privileges.png b/downstream/images/am-escalate-privileges.png new file mode 100644 index 0000000000..c6d69306dd Binary files /dev/null and b/downstream/images/am-escalate-privileges.png differ diff --git a/downstream/images/am-mapping-order.png b/downstream/images/am-mapping-order.png new file mode 100644 index 0000000000..dc4b0fd8fb Binary files /dev/null and b/downstream/images/am-mapping-order.png differ diff --git a/downstream/images/am-org-mapping-full-annotation.png b/downstream/images/am-org-mapping-full-annotation.png new file mode 100644 index 0000000000..e273e2a87e Binary files /dev/null and b/downstream/images/am-org-mapping-full-annotation.png differ diff --git a/downstream/images/am-org-mapping.png b/downstream/images/am-org-mapping.png new file mode 100644 index 0000000000..90ef0ae166 Binary files /dev/null and 
b/downstream/images/am-org-mapping.png differ diff --git a/downstream/images/ansible-network-ports-protocols.png b/downstream/images/ansible-network-ports-protocols.png deleted file mode 100644 index 39f7c612aa..0000000000 Binary files a/downstream/images/ansible-network-ports-protocols.png and /dev/null differ diff --git a/downstream/images/automation_analytics.png b/downstream/images/automation_analytics.png new file mode 100644 index 0000000000..c60744f773 Binary files /dev/null and b/downstream/images/automation_analytics.png differ diff --git a/downstream/images/change_subscription.png b/downstream/images/change_subscription.png new file mode 100644 index 0000000000..83637502ad Binary files /dev/null and b/downstream/images/change_subscription.png differ diff --git a/downstream/images/chatbot-icon.png b/downstream/images/chatbot-icon.png new file mode 100644 index 0000000000..428cdaaf7a Binary files /dev/null and b/downstream/images/chatbot-icon.png differ diff --git a/downstream/images/cont-a-env-a.png b/downstream/images/cont-a-env-a.png new file mode 100644 index 0000000000..03be405778 Binary files /dev/null and b/downstream/images/cont-a-env-a.png differ diff --git a/downstream/images/cont-b-env-a.png b/downstream/images/cont-b-env-a.png new file mode 100644 index 0000000000..5f182ec244 Binary files /dev/null and b/downstream/images/cont-b-env-a.png differ diff --git a/downstream/images/credential-types-drop-down-menu.png b/downstream/images/credential-types-drop-down-menu.png index 9ed1f34abd..b54f1a59f9 100644 Binary files a/downstream/images/credential-types-drop-down-menu.png and b/downstream/images/credential-types-drop-down-menu.png differ diff --git a/downstream/images/credentials-create-github-app-lookup-credential.png b/downstream/images/credentials-create-github-app-lookup-credential.png new file mode 100644 index 0000000000..b035513451 Binary files /dev/null and b/downstream/images/credentials-create-github-app-lookup-credential.png differ diff --git a/downstream/images/credentials-github-app-target-secret-info.png b/downstream/images/credentials-github-app-target-secret-info.png new file mode 100644 index 0000000000..11cd083028 Binary files /dev/null and b/downstream/images/credentials-github-app-target-secret-info.png differ diff --git a/downstream/images/delete-button.png b/downstream/images/delete-button.png index 5c3dedf8ee..afd0c081ce 100644 Binary files a/downstream/images/delete-button.png and b/downstream/images/delete-button.png differ diff --git a/downstream/images/devtools-extension-navigator-output.png b/downstream/images/devtools-extension-navigator-output.png new file mode 100644 index 0000000000..789fc805fe Binary files /dev/null and b/downstream/images/devtools-extension-navigator-output.png differ diff --git a/downstream/images/devtools-extension-navigator-tasks.png b/downstream/images/devtools-extension-navigator-tasks.png new file mode 100644 index 0000000000..aa6f5bd5ba Binary files /dev/null and b/downstream/images/devtools-extension-navigator-tasks.png differ diff --git a/downstream/images/devtools-reopen-in-container.png b/downstream/images/devtools-reopen-in-container.png new file mode 100644 index 0000000000..3047f23bdf Binary files /dev/null and b/downstream/images/devtools-reopen-in-container.png differ diff --git a/downstream/images/eda-event-details.png b/downstream/images/eda-event-details.png index b107ef0780..2ebb7df88f 100644 Binary files a/downstream/images/eda-event-details.png and b/downstream/images/eda-event-details.png differ diff 
--git a/downstream/images/eda-event-streams-mapping-UI.png b/downstream/images/eda-event-streams-mapping-UI.png new file mode 100644 index 0000000000..16e2f972f5 Binary files /dev/null and b/downstream/images/eda-event-streams-mapping-UI.png differ diff --git a/downstream/images/eda-event-streams-swapping-sources.png b/downstream/images/eda-event-streams-swapping-sources.png new file mode 100644 index 0000000000..d83d34189f Binary files /dev/null and b/downstream/images/eda-event-streams-swapping-sources.png differ diff --git a/downstream/images/eda-forwarding-event-to-activation-toggle.png b/downstream/images/eda-forwarding-event-to-activation-toggle.png new file mode 100644 index 0000000000..3e46f739c1 Binary files /dev/null and b/downstream/images/eda-forwarding-event-to-activation-toggle.png differ diff --git a/downstream/images/eda-latest-event-streams-mapping.png b/downstream/images/eda-latest-event-streams-mapping.png new file mode 100644 index 0000000000..ad9250ed00 Binary files /dev/null and b/downstream/images/eda-latest-event-streams-mapping.png differ diff --git a/downstream/images/eda-payload-body-event-streams.png b/downstream/images/eda-payload-body-event-streams.png new file mode 100644 index 0000000000..d41506290e Binary files /dev/null and b/downstream/images/eda-payload-body-event-streams.png differ diff --git a/downstream/images/eda-rule-audit-event-streams.png b/downstream/images/eda-rule-audit-event-streams.png new file mode 100644 index 0000000000..d4c2a456d2 Binary files /dev/null and b/downstream/images/eda-rule-audit-event-streams.png differ diff --git a/downstream/images/eda-rule-audit-list-view.png b/downstream/images/eda-rule-audit-list-view.png index ea8b62bcdd..ccaedfe0d7 100644 Binary files a/downstream/images/eda-rule-audit-list-view.png and b/downstream/images/eda-rule-audit-list-view.png differ diff --git a/downstream/images/eda-verify-event-streams.png b/downstream/images/eda-verify-event-streams.png new file mode 100644 index 0000000000..1014005611 Binary files /dev/null and b/downstream/images/eda-verify-event-streams.png differ diff --git a/downstream/images/eda-verify-rulebook-attachment.png b/downstream/images/eda-verify-rulebook-attachment.png new file mode 100644 index 0000000000..d136609011 Binary files /dev/null and b/downstream/images/eda-verify-rulebook-attachment.png differ diff --git a/downstream/images/ee-create-new.png b/downstream/images/ee-create-new.png new file mode 100644 index 0000000000..bd9b7a93a9 Binary files /dev/null and b/downstream/images/ee-create-new.png differ diff --git a/downstream/images/gw-clustered-redis.png b/downstream/images/gw-clustered-redis.png new file mode 100644 index 0000000000..743d8c05dc Binary files /dev/null and b/downstream/images/gw-clustered-redis.png differ diff --git a/downstream/images/gw-single-node-redis.png b/downstream/images/gw-single-node-redis.png new file mode 100644 index 0000000000..a02e42d9d2 Binary files /dev/null and b/downstream/images/gw-single-node-redis.png differ diff --git a/downstream/images/hosts_jobs_details.png b/downstream/images/hosts_jobs_details.png new file mode 100644 index 0000000000..757452dba1 Binary files /dev/null and b/downstream/images/hosts_jobs_details.png differ diff --git a/downstream/images/job-settings-full.png b/downstream/images/job-settings-full.png new file mode 100644 index 0000000000..ae00c4c174 Binary files /dev/null and b/downstream/images/job-settings-full.png differ diff --git a/downstream/images/logging-settings.png 
b/downstream/images/logging-settings.png new file mode 100644 index 0000000000..f9137a70c3 Binary files /dev/null and b/downstream/images/logging-settings.png differ diff --git a/downstream/images/logging-splunk-controller-example.png b/downstream/images/logging-splunk-controller-example.png new file mode 100644 index 0000000000..a568f61516 Binary files /dev/null and b/downstream/images/logging-splunk-controller-example.png differ diff --git a/downstream/images/ocp-a-env-a.png b/downstream/images/ocp-a-env-a.png new file mode 100644 index 0000000000..03fa781c3b Binary files /dev/null and b/downstream/images/ocp-a-env-a.png differ diff --git a/downstream/images/ocp-b-env-a.png b/downstream/images/ocp-b-env-a.png new file mode 100644 index 0000000000..22b6f99a95 Binary files /dev/null and b/downstream/images/ocp-b-env-a.png differ diff --git a/downstream/images/platform_gateway_full.png b/downstream/images/platform_gateway_full.png new file mode 100644 index 0000000000..99882b4d69 Binary files /dev/null and b/downstream/images/platform_gateway_full.png differ diff --git a/downstream/images/platform_gateway_settings_page.png b/downstream/images/platform_gateway_settings_page.png new file mode 100644 index 0000000000..f58f14cd5a Binary files /dev/null and b/downstream/images/platform_gateway_settings_page.png differ diff --git a/downstream/images/project-create-git-github-app.png b/downstream/images/project-create-git-github-app.png new file mode 100644 index 0000000000..f0c5198045 Binary files /dev/null and b/downstream/images/project-create-git-github-app.png differ diff --git a/downstream/images/project-sync-github-app.png b/downstream/images/project-sync-github-app.png new file mode 100644 index 0000000000..506821caa6 Binary files /dev/null and b/downstream/images/project-sync-github-app.png differ diff --git a/downstream/images/project-update-launch-cache-timeout.png b/downstream/images/project-update-launch-cache-timeout.png index 15b97d7531..20d11486b5 100644 Binary files a/downstream/images/project-update-launch-cache-timeout.png and b/downstream/images/project-update-launch-cache-timeout.png differ diff --git a/downstream/images/rhaap-sign-in-page.png b/downstream/images/rhaap-sign-in-page.png new file mode 100644 index 0000000000..4d29ba64e4 Binary files /dev/null and b/downstream/images/rhaap-sign-in-page.png differ diff --git a/downstream/images/rhdh-ansible-plugin-architecture.png b/downstream/images/rhdh-ansible-plugin-architecture.png new file mode 100644 index 0000000000..5178397f0a Binary files /dev/null and b/downstream/images/rhdh-ansible-plugin-architecture.png differ diff --git a/downstream/images/rhdh-check-devtools-container.png b/downstream/images/rhdh-check-devtools-container.png new file mode 100644 index 0000000000..8fbaf48e36 Binary files /dev/null and b/downstream/images/rhdh-check-devtools-container.png differ diff --git a/downstream/images/rhdh-check-plugin-config.png b/downstream/images/rhdh-check-plugin-config.png new file mode 100644 index 0000000000..fde6bb35b4 Binary files /dev/null and b/downstream/images/rhdh-check-plugin-config.png differ diff --git a/downstream/images/rhdh-feedback-form.png b/downstream/images/rhdh-feedback-form.png new file mode 100644 index 0000000000..27c4ea9013 Binary files /dev/null and b/downstream/images/rhdh-feedback-form.png differ diff --git a/downstream/images/rhdh-plugin-dashboard.png b/downstream/images/rhdh-plugin-dashboard.png new file mode 100644 index 0000000000..2a0ca54da9 Binary files /dev/null and 
b/downstream/images/rhdh-plugin-dashboard.png differ diff --git a/downstream/images/rhdh-plugin-registry.png b/downstream/images/rhdh-plugin-registry.png new file mode 100644 index 0000000000..f7b0e1a0e1 Binary files /dev/null and b/downstream/images/rhdh-plugin-registry.png differ diff --git a/downstream/images/rhdh-vscode-run-playbook.png b/downstream/images/rhdh-vscode-run-playbook.png new file mode 100644 index 0000000000..4521589f43 Binary files /dev/null and b/downstream/images/rhdh-vscode-run-playbook.png differ diff --git a/downstream/images/rpm-a-env-a.png b/downstream/images/rpm-a-env-a.png new file mode 100644 index 0000000000..19ccb5d084 Binary files /dev/null and b/downstream/images/rpm-a-env-a.png differ diff --git a/downstream/images/rpm-b-env-a.png b/downstream/images/rpm-b-env-a.png new file mode 100644 index 0000000000..57686baafb Binary files /dev/null and b/downstream/images/rpm-b-env-a.png differ diff --git a/downstream/images/self-service-create-oauth-app.png b/downstream/images/self-service-create-oauth-app.png new file mode 100644 index 0000000000..d3ac47c2aa Binary files /dev/null and b/downstream/images/self-service-create-oauth-app.png differ diff --git a/downstream/images/self-service-generate-oauth-token.png b/downstream/images/self-service-generate-oauth-token.png new file mode 100644 index 0000000000..9cecc9252c Binary files /dev/null and b/downstream/images/self-service-generate-oauth-token.png differ diff --git a/downstream/images/self-service-plugin-registry.png b/downstream/images/self-service-plugin-registry.png new file mode 100644 index 0000000000..ea1512c5e5 Binary files /dev/null and b/downstream/images/self-service-plugin-registry.png differ diff --git a/downstream/images/self-service-pod-env-variables.png b/downstream/images/self-service-pod-env-variables.png new file mode 100644 index 0000000000..e13b22a0cc Binary files /dev/null and b/downstream/images/self-service-pod-env-variables.png differ diff --git a/downstream/images/self-service-sign-in-page.png b/downstream/images/self-service-sign-in-page.png new file mode 100644 index 0000000000..5e616783f1 Binary files /dev/null and b/downstream/images/self-service-sign-in-page.png differ diff --git a/downstream/images/self-service-templates-view.png b/downstream/images/self-service-templates-view.png new file mode 100644 index 0000000000..b894277e3a Binary files /dev/null and b/downstream/images/self-service-templates-view.png differ diff --git a/downstream/images/self-service-verify-helm-install.png b/downstream/images/self-service-verify-helm-install.png new file mode 100644 index 0000000000..495b0d4e75 Binary files /dev/null and b/downstream/images/self-service-verify-helm-install.png differ diff --git a/downstream/images/self-service-view-deployment-logs.png b/downstream/images/self-service-view-deployment-logs.png new file mode 100644 index 0000000000..a106161782 Binary files /dev/null and b/downstream/images/self-service-view-deployment-logs.png differ diff --git a/downstream/images/self-service-view-install-messages.png b/downstream/images/self-service-view-install-messages.png new file mode 100644 index 0000000000..92316ff091 Binary files /dev/null and b/downstream/images/self-service-view-install-messages.png differ diff --git a/downstream/images/settings_subscription_page.png b/downstream/images/settings_subscription_page.png new file mode 100644 index 0000000000..d6287bea5e Binary files /dev/null and b/downstream/images/settings_subscription_page.png differ diff --git 
a/downstream/images/sort-order-example.png b/downstream/images/sort-order-example.png index 4d1ac14901..5392b9dcb7 100644 Binary files a/downstream/images/sort-order-example.png and b/downstream/images/sort-order-example.png differ
diff --git a/downstream/images/subscriptions_first-page.png b/downstream/images/subscriptions_first-page.png new file mode 100644 index 0000000000..8a704b779e Binary files /dev/null and b/downstream/images/subscriptions_first-page.png differ
diff --git a/downstream/images/sun-icon.png b/downstream/images/sun-icon.png new file mode 100644 index 0000000000..75a2bb529a Binary files /dev/null and b/downstream/images/sun-icon.png differ
diff --git a/downstream/images/svg/OCP-A_Env-A-R.svg b/downstream/images/svg/OCP-A_Env-A-R.svg new file mode 100644 index 0000000000..43e202c963 --- /dev/null +++ b/downstream/images/svg/OCP-A_Env-A-R.svg @@ -0,0 +1,661 @@
[661 added lines of SVG source omitted: OpenShift topology diagram of the platform gateway, automation controller, automation hub, and Event-Driven Ansible deployments (web, task, API, worker, activation, scheduler, and event stream pods) with their ingress and service objects, the Ansible Automation Platform operator, and in-cluster Postgres and Redis pods with PVCs; labeled ports: 80/443 HTTP(S) ingress, PostgreSQL 5432, Redis 6379 for job control/caching.]
diff --git a/downstream/images/svg/OCP-B_Env-A-R.svg b/downstream/images/svg/OCP-B_Env-A-R.svg new file mode 100644 index 0000000000..8a58e98de2 --- /dev/null +++ b/downstream/images/svg/OCP-B_Env-A-R.svg @@ -0,0 +1,660 @@
[660 added lines of SVG source omitted: variant of the OpenShift topology with external Postgres and Redis; same deployments, ingress and service objects, and port labels (80/443, 5432, 6379).]
diff --git a/downstream/images/svg/st_CONT_B_Env_A-R.svg b/downstream/images/svg/st_CONT_B_Env_A-R.svg new file mode 100644 index 0000000000..9d20cd8681 --- /dev/null +++ b/downstream/images/svg/st_CONT_B_Env_A-R.svg @@ -0,0 +1,1070 @@
[1070 added lines of SVG source omitted: containerized enterprise topology diagram with an HA proxy/load balancer in front of duplicated platform gateway, automation controller, automation hub, and Event-Driven Ansible containers (each with local Redis), an automation mesh of hop and execution nodes, and an external Postgres database; labeled ports: 80/443 HTTP(S) ingress, PostgreSQL 5432, Redis 6379/16379, Receptor 27199 for work/job execution, and 50051 gRPC.]
diff --git a/downstream/images/svg/st_Cont-A_Env-A-R.svg b/downstream/images/svg/st_Cont-A_Env-A-R.svg new file mode 100644 index 0000000000..b65483f408 --- /dev/null +++ b/downstream/images/svg/st_Cont-A_Env-A-R.svg @@ -0,0 +1,404 @@
[404 added lines of SVG source omitted: containerized growth topology diagram of a single Ansible Automation Platform host running platform gateway, Event-Driven Ansible, automation controller, and automation hub containers plus Postgres, Redis, automation mesh, and execution containers behind an example proxy/load balancer; labeled ports: 80/443, 5432, 6379, and Receptor 27199.]
diff --git a/downstream/images/svg/st_Network-R.svg b/downstream/images/svg/st_Network-R.svg new file mode 100644 index 0000000000..7b1d36eb84 --- /dev/null +++ b/downstream/images/svg/st_Network-R.svg @@ -0,0 +1,1020 @@
[1020 added lines of SVG source omitted: network ports and protocols diagram mapping gateway, controller, hub, Event-Driven Ansible, database, Redis, ingress, hop, and execution nodes to clients, managed nodes, external event systems, and external storage (NFS/S3); labeled ports: 80/443 HTTP(S), 8443 default gateway port, PostgreSQL 5432, Redis 6379/16379, Receptor 27199, 50051 gRPC, and varied external protocols such as SSH/WinRM/HTTP to managed nodes.]
diff --git a/downstream/images/svg/st_Proxy.svg b/downstream/images/svg/st_Proxy.svg new file mode 100644 index 0000000000..509fefdfac --- /dev/null +++ b/downstream/images/svg/st_Proxy.svg @@ -0,0 +1,370 @@
[370 added lines of SVG source omitted: proxy deployment diagram of a control plane VPC with platform gateway, automation controller, Event-Driven Ansible, automation hub, and database VMs behind an internet gateway, load balancer, and Squid proxy, with default and restricted security groups; labeled traffic: HTTPS through proxy port 3128, SSH port 22, API/UI access over 80/443, and Receptor 27199 incoming only during install.]
diff --git a/downstream/images/svg/st_RPM_A_Env_A-R.svg b/downstream/images/svg/st_RPM_A_Env_A-R.svg new file mode 100644 index 0000000000..4453cfb456 --- /dev/null +++ b/downstream/images/svg/st_RPM_A_Env_A-R.svg @@ -0,0 +1,367 @@
[367 added lines of SVG source omitted: RPM growth topology diagram of platform gateway (co-located Redis and Postgres), automation controller, automation hub, Event-Driven Ansible, and execution VMs joined by automation mesh; labeled ports: 80/443, 5432, 6379, and Receptor 27199.]
diff --git a/downstream/images/svg/st_RPM_A_Env_B-R.svg b/downstream/images/svg/st_RPM_A_Env_B-R.svg new file mode 100644 index 0000000000..d0123b6892 --- /dev/null +++ b/downstream/images/svg/st_RPM_A_Env_B-R.svg @@ -0,0 +1,379 @@
[379 added lines of SVG source omitted: variant of the RPM growth topology with the same VMs and port labels (80/443, 5432, 6379, 27199).]
diff --git a/downstream/images/svg/st_RPM_B_Env_A-R.svg b/downstream/images/svg/st_RPM_B_Env_A-R.svg new file mode 100644 index 0000000000..726553ceb2 --- /dev/null +++ b/downstream/images/svg/st_RPM_B_Env_A-R.svg @@ -0,0 +1,1039 @@
[1039 added lines of SVG source omitted: RPM enterprise topology diagram with an HA proxy/load balancer in front of duplicated platform gateway, automation controller, automation hub, and Event-Driven Ansible VMs (each with local Redis), hop and execution nodes, and an external Postgres database; labeled ports: 80/443, 5432, 6379/16379, Receptor 27199, and 50051 gRPC.]
diff --git a/downstream/images/svg/st_RPM_B_Env_B.svg b/downstream/images/svg/st_RPM_B_Env_B.svg new file mode 100644 index 0000000000..e5c825bfea --- /dev/null +++ b/downstream/images/svg/st_RPM_B_Env_B.svg @@ -0,0 +1,785 @@
[785 added lines of SVG source omitted, truncated in this extract: RPM enterprise topology variant showing an automation mesh of hop and execution nodes with Receptor port 27199.]
+ + + Execution node + + Sheet.7 + + + + Sheet.62 + Automation hub 2.4 x + + + + Automation hub 2.4 x + + Sheet.88 + Automation hub VM + + Sheet.64 + + + + + + Automation hub VM + + + Sheet.89 + Automation hub VM + + Sheet.63 + + + + + + Automation hub VM + + + Sheet.80 + + + + Sheet.81 + + + + Sheet.82 + + + + Sheet.83 + + + + Sheet.84 + + + + Sheet.86 + 80/443 http(s) ingress + + + + 80/443 http(s) ingress + + Sheet.90 + PostgreSQL: 5432-database + + + + PostgreSQL: 5432-database + + Sheet.91 + Redis: (1)6379 – job control/caching + + + + Redis: (1)6379 – job control/caching + + Sheet.92 + Receptor:27199 – work/job execution + + + + Receptor:27199 – work/job execution + + Sheet.97 + + + + Sheet.3 + + + + Sheet.96 + Port 80/443 + + + + Port 80/443 + + Sheet.41 + HA proxy/ load balancer + + + + HA proxy/ load balancer + + Sheet.42 + + + + Sheet.65 + + + + Sheet.46 + Port 27199 + + + + Port 27199 + + Sheet.102 + + + + Sheet.1 + + Sheet.8 + + + + Sheet.12 + + + + Sheet.17 + + + + Sheet.19 + + + + Sheet.22 + + + + Sheet.85 + + + + + Sheet.29 + + Sheet.55 + + + + Sheet.87 + Event-Driven ansible + + + + Event-Driven ansible + + Sheet.100 + + + + Sheet.101 + + + + Sheet.103 + + + + Sheet.122 + Event-Driven ansible VM + + + + Event-Driven ansible VM + + Sheet.123 + redis + + + + redis + + Sheet.124 + redis + + + + redis + + Sheet.125 + redis + + + + redis + + Sheet.126 + Event-Driven ansible VM + + + + Event-Driven ansible VM + + Sheet.127 + Event-Driven ansible VM + + + + Event-Driven ansible VM + + + Sheet.56 + + Sheet.6 + + + + Sheet.15 + Platform gateway + + + + Platform gateway + + Sheet.23 + + + + Sheet.30 + + + + Sheet.105 + + + + Sheet.106 + Platform gateway VM + + + + Platform gateway VM + + Sheet.107 + redis + + + + redis + + Sheet.57 + redis + + + + redis + + Sheet.58 + redis + + + + redis + + Sheet.28 + Platform gateway VM + + + + Platform gateway VM + + Sheet.33 + Platform gateway VM + + + + Platform gateway VM + + + Sheet.4 + + + + Sheet.14 + Automation controller 2.4 x + + + + Automation controller 2.4 x + + Sheet.21 + + + + Sheet.16 + Automation controller VM + + + Automation controller VM + + Sheet.32 + + + + Sheet.31 + Automation controller VM + + + Automation controller VM + + Sheet.118 + + + + Sheet.115 + + + + Sheet.35 + + + + Sheet.44 + + + + Sheet.43 + + + + Sheet.95 + + + + Sheet.59 + + + + Sheet.108 + + + + Sheet.53 + + + + Sheet.54 + + + + Sheet.109 + Port 16379 + + + + Port 16379 + + Sheet.60 + Port 80/443 + + + + Port 80/443 + + Sheet.110 + + + + Sheet.61 + + + + Sheet.112 + Port 16379 + + + + Port 16379 + + Sheet.66 + + + + Sheet.26 + + + + Sheet.104 + Port 80/443 + + + + Port 80/443 + + Sheet.73 + + + + Sheet.74 + + + + Sheet.99 + Port 5432 + + + + Port 5432 + + Sheet.20 + Port 5432 + + + + Port 5432 + + Sheet.45 + Port 5432 + + + + Port 5432 + + Sheet.128 + + + + Sheet.40 + Port 80/443 + + + + Port 80/443 + + Sheet.129 + + + + Sheet.130 + + + + Sheet.132 + + + + Sheet.133 + Port80/443 + + + + Port80/443 + + Sheet.134 + Port80/443 + + + + Port80/443 + + Sheet.136 + + + + Sheet.68 + + + + Sheet.70 + + + + Sheet.38 + Port 5432 + + + + Port 5432 + + Sheet.137 + + + + Sheet.138 + + + + Sheet.69 + Postgres external database + + + + Postgres external database + + Sheet.139 + + + + Sheet.37 + + + + Sheet.98 + + + + Sheet.36 + Port 80/443 + + + + Port 80/443 + + Sheet.39 + + + + Sheet.113 + + + + Sheet.47 + + + + Sheet.114 + + + + Sheet.140 + + + + Sheet.131 + + + + Sheet.141 + + + + Sheet.72 + + + + Sheet.135 + Port443 + + + + Port443 + + Sheet.142 + + + + Sheet.94 + + + + 
diff --git a/downstream/images/system-settings-full.png b/downstream/images/system-settings-full.png
new file mode 100644
index 0000000000..c949e1f31d
Binary files /dev/null and b/downstream/images/system-settings-full.png differ
diff --git a/downstream/images/system-settings-page.png b/downstream/images/system-settings-page.png
new file mode 100644
index 0000000000..c9c5e8c5e9
Binary files /dev/null and b/downstream/images/system-settings-page.png differ
diff --git a/downstream/images/system_settings_page.png b/downstream/images/system_settings_page.png
new file mode 100644
index 0000000000..ed329b561f
Binary files /dev/null and b/downstream/images/system_settings_page.png differ
diff --git a/downstream/images/troubleshooting_options.png b/downstream/images/troubleshooting_options.png
new file mode 100644
index 0000000000..6dc2374af7
Binary files /dev/null and b/downstream/images/troubleshooting_options.png differ
diff --git a/downstream/images/ug-job-details-for-example-job.png b/downstream/images/ug-job-details-for-example-job.png
index d2335a0661..0a5a3a4bdd 100644
Binary files a/downstream/images/ug-job-details-for-example-job.png and b/downstream/images/ug-job-details-for-example-job.png differ
diff --git a/downstream/images/ug-job-details-view-filters.png b/downstream/images/ug-job-details-view-filters.png
index 18db229bb6..0c4aa6da59 100644
Binary files a/downstream/images/ug-job-details-view-filters.png and b/downstream/images/ug-job-details-view-filters.png differ
diff --git a/downstream/images/ug-jobs-events-summary.png b/downstream/images/ug-jobs-events-summary.png
index aa7d41ca78..8d230009f4 100644
Binary files a/downstream/images/ug-jobs-events-summary.png and b/downstream/images/ug-jobs-events-summary.png differ
diff --git a/downstream/images/ug-jobs-list-all-expanded.png b/downstream/images/ug-jobs-list-all-expanded.png
index f701619120..d37641ea3b 100644
Binary files a/downstream/images/ug-jobs-list-all-expanded.png and b/downstream/images/ug-jobs-list-all-expanded.png differ
diff --git a/downstream/images/ug-schedules-sample-list.png b/downstream/images/ug-schedules-sample-list.png
index 3d428b8ed5..b0e4022e7d 100644
Binary files a/downstream/images/ug-schedules-sample-list.png and b/downstream/images/ug-schedules-sample-list.png differ
diff --git a/downstream/images/ug-scm-project-branching-emphasized.png b/downstream/images/ug-scm-project-branching-emphasized.png
index 0936e3d2ca..84041d07da 100644
Binary files a/downstream/images/ug-scm-project-branching-emphasized.png and b/downstream/images/ug-scm-project-branching-emphasized.png differ
diff --git a/downstream/images/ug-sliced-job-shown-jobs-list-view.png b/downstream/images/ug-sliced-job-shown-jobs-list-view.png
index 5f99307c29..e1243c5ade 100644
Binary files a/downstream/images/ug-sliced-job-shown-jobs-list-view.png and b/downstream/images/ug-sliced-job-shown-jobs-list-view.png differ
diff --git a/downstream/images/ug-wf-add-template.png b/downstream/images/ug-wf-add-template.png
index f048be3d74..f29ad85ca6 100644
Binary files a/downstream/images/ug-wf-add-template.png and b/downstream/images/ug-wf-add-template.png differ
diff --git a/downstream/images/ug-wf-approval-node.png b/downstream/images/ug-wf-approval-node.png
index b37e860dd9..3f7c4bc04a 100644
Binary files a/downstream/images/ug-wf-approval-node.png and b/downstream/images/ug-wf-approval-node.png differ
diff --git a/downstream/images/ug-wf-create-sibling-node.png b/downstream/images/ug-wf-create-sibling-node.png
index 1e064e7352..9f13f87cfd 100644
Binary files a/downstream/images/ug-wf-create-sibling-node.png and b/downstream/images/ug-wf-create-sibling-node.png differ
diff --git a/downstream/images/ug-wf-dropdown-list.png b/downstream/images/ug-wf-dropdown-list.png
index c9c0196816..95c10fca50 100644
Binary files a/downstream/images/ug-wf-dropdown-list.png and b/downstream/images/ug-wf-dropdown-list.png differ
diff --git a/downstream/images/ug-wf-editor-convergent-node-all.png b/downstream/images/ug-wf-editor-convergent-node-all.png
index 8415682c5f..cb7bf6d340 100644
Binary files a/downstream/images/ug-wf-editor-convergent-node-all.png and b/downstream/images/ug-wf-editor-convergent-node-all.png differ
diff --git a/downstream/images/user_preferences_page.png b/downstream/images/user_preferences_page.png
new file mode 100644
index 0000000000..c77ef8abd7
Binary files /dev/null and b/downstream/images/user_preferences_page.png differ
diff --git a/downstream/images/vscode-extensions-icon.png b/downstream/images/vscode-extensions-icon.png
new file mode 100644
index 0000000000..2e693328ed
Binary files /dev/null and b/downstream/images/vscode-extensions-icon.png differ
diff --git a/downstream/images/vscode-remote-icon.png b/downstream/images/vscode-remote-icon.png
new file mode 100644
index 0000000000..473846b34a
Binary files /dev/null and b/downstream/images/vscode-remote-icon.png differ
diff --git a/downstream/images/workflow.png b/downstream/images/workflow.png
new file mode 100644
index 0000000000..d0765e76d5
Binary files /dev/null and b/downstream/images/workflow.png differ
diff --git a/downstream/images/wrench.png b/downstream/images/wrench.png
new file mode 100644
index 0000000000..eca0a05381
Binary files /dev/null and b/downstream/images/wrench.png differ
diff --git a/downstream/modules/aap-hardening/.platform b/downstream/modules/aap-hardening/.platform
new file mode 120000
index 0000000000..1d58796b7d
--- /dev/null
+++ b/downstream/modules/aap-hardening/.platform
@@ -0,0 +1 @@
+../platform
\ No newline at end of file
diff --git a/downstream/modules/aap-hardening/con-aap-additional-software.adoc b/downstream/modules/aap-hardening/con-aap-additional-software.adoc
index a60ec75177..f18c25bdb3 100644
--- a/downstream/modules/aap-hardening/con-aap-additional-software.adoc
+++ b/downstream/modules/aap-hardening/con-aap-additional-software.adoc
@@ -7,6 +7,9 @@
 [role="_abstract"]
-When installing the {PlatformNameShort} components on {RHEL} servers, the {RHEL} servers should be dedicated to that use alone. Additional server capabilities should not be installed in addition to {PlatformNameShort}, as this is an unsupported configuration and may affect the security and performance of the {PlatformNameShort} software.
+When installing the {PlatformNameShort} components on {RHEL} servers, the {RHEL} servers should be dedicated to that use alone.
+Additional server capabilities must not be installed alongside {PlatformNameShort}, as this is an unsupported configuration and might affect the security and performance of the {PlatformNameShort} software.
-Similarly, when {PlatformNameShort} is deployed on a {RHEL} host, it installs software like the nginx web server, the Pulp software repository, and the PostgreSQL database server.
This software should not be modified or used in a more generic fashion (for example, do not use nginx to server additional website content or PostgreSQL to host additional databases) as this is an unsupported configuration and may affect the security and performance of {PlatformNameShort}. The configuration of this software is managed by the {PlatformNameShort} installer, and any manual changes might be undone when performing upgrades.
\ No newline at end of file
+Similarly, when {PlatformNameShort} is deployed on a {RHEL} host, it installs software like the nginx web server, the Pulp software repository, and the PostgreSQL database server (unless a user-provided external database is used).
+This software should not be modified or used in a more generic fashion (for example, do not use nginx to serve additional website content or PostgreSQL to host additional databases) as this is an unsupported configuration and might affect the security and performance of {PlatformNameShort}.
+The configuration of this software is managed by the {PlatformNameShort} {Installer}, and any manual changes might be undone when performing upgrades.
\ No newline at end of file
diff --git a/downstream/modules/aap-hardening/con-automation-use-secrets.adoc b/downstream/modules/aap-hardening/con-automation-use-secrets.adoc
index 7b9ae4c4f5..9a17b93071 100644
--- a/downstream/modules/aap-hardening/con-automation-use-secrets.adoc
+++ b/downstream/modules/aap-hardening/con-automation-use-secrets.adoc
@@ -13,10 +13,18 @@
 * Secret tokens and passwords for external services defined in {ControllerName} settings.
 * “password” type survey field entries.
-You can grant users and teams the ability to use these credentials without actually exposing the credential to the user. This means that if a user moves to a different team or leaves the organization, you don’t have to re-key all of your systems.
+You can grant users and teams the ability to use these credentials without actually exposing the credential to the user. This means that if a user moves to a different team or leaves the organization, you do not have to re-key all of your systems.
-{ControllerName} uses SSH (or the Windows equivalent) to connect to remote hosts . To pass the key from the {ControllerName} to SSH, the key must be decrypted before it can be written to a named pipe. {ControllerNameStart} then uses that pipe to send the key to SSH (so that it is never written to disk). If passwords are used, the {ControllerName} handles those by responding directly to the password prompt and decrypting the password before writing it to the prompt.
+{ControllerNameStart} uses SSH (or the Windows equivalent) to connect to remote hosts.
+To pass the key from {ControllerName} to SSH, the key must be decrypted before it can be written to a named pipe.
+{ControllerNameStart} then uses that pipe to send the key to SSH (so that it is never written to disk).
+If passwords are used, {ControllerName} handles those by responding directly to the password prompt and decrypting the password before writing it to the prompt.
 As an administrator with superuser access, you can define a custom credential type in a standard format using a YAML/JSON-like definition, enabling the assignment of new credential types to jobs and inventory updates. This enables you to define a custom credential type that works in ways similar to existing credential types.
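+For example, you can create a custom credential type that injects an API token for a third-party web service into an environment variable. The following is a minimal sketch of such a definition; the field and variable names are illustrative, not a documented example. The input configuration declares the fields stored with the credential, and the injector configuration exposes them to jobs:
+
+[source,yaml]
+----
+# Input configuration: fields saved with the credential.
+# Fields marked "secret: true" are stored encrypted.
+fields:
+  - id: api_token
+    type: string
+    label: API Token
+    secret: true
+required:
+  - api_token
+----
+
+[source,yaml]
+----
+# Injector configuration: expose the stored token to jobs
+# as an environment variable.
+env:
+  MY_SERVICE_API_TOKEN: '{{ api_token }}'
+----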
Your playbook or custom inventory script can then consume the injected environment variable.
-To encrypt secret fields, {PlatformNameShort} uses AES in CBC mode with a 256-bit key for encryption, PKCS7 padding, and HMAC using SHA256 for authentication. The encryption/decryption process derives the AES-256 bit encryption key from the `SECRET_KEY`, the field name of the model field, and the database-assigned auto-incremented record ID. Thus, if any attribute used in the key generation process changes, {PlatformNameShort} fails to correctly decrypt the secret. {PlatformNameShort} is designed such that the `SECRET_KEY` is never readable in playbooks {PlatformNameShort} launches, so that these secrets are never readable by {PlatformNameShort} users, and no secret field values are ever made available through the {PlatformNameShort} REST API. If a secret value is used in a playbook, you must use `no_log` on the task so that it is not accidentally logged. For more information, see link:https://docs.ansible.com/ansible/latest/reference_appendices/logging.html#protecting-sensitive-data-with-no-log[Protecting sensitive data with no log].
+To encrypt secret fields, {PlatformNameShort} uses the _Advanced Encryption Standard_ (AES) in _Cipher Block Chaining_ (CBC) mode with a 256-bit key for encryption, _Public-Key Cryptography Standards_ (PKCS7) padding, and _Hash-Based Message Authentication Code_ (HMAC) using SHA256 for authentication.
+The encryption/decryption process derives the 256-bit AES encryption key from the `SECRET_KEY`, the field name of the model field, and the database-assigned auto-incremented record ID.
+Thus, if any attribute used in the key generation process changes, {PlatformNameShort} fails to correctly decrypt the secret.
+{PlatformNameShort} is designed such that the `SECRET_KEY` is never readable in playbooks {PlatformNameShort} launches.
+This means that these secrets are never readable by {PlatformNameShort} users, and no secret field values are ever made available through the {PlatformNameShort} REST API.
+If a secret value is used in a playbook, you must use `no_log` on the task so that it is not accidentally logged. For more information, see link:https://docs.ansible.com/ansible/latest/reference_appendices/logging.html#protecting-sensitive-data-with-no-log[Protecting sensitive data with no log].
diff --git a/downstream/modules/aap-hardening/con-compliance-profile-considerations.adoc b/downstream/modules/aap-hardening/con-compliance-profile-considerations.adoc
new file mode 100644
index 0000000000..f8c7f601d2
--- /dev/null
+++ b/downstream/modules/aap-hardening/con-compliance-profile-considerations.adoc
@@ -0,0 +1,9 @@
+[id="con-compliance-profile-considerations"]
+
+= Compliance profile considerations
+
+In many environments, {PlatformNameShort} might need to be installed on {RHEL} systems where security controls have been applied to meet the requirements of a compliance profile such as CIS Critical Security Controls, _Payment Card Industry Data Security Standard_ (PCI DSS), the DISA STIG, or a similar profile.
+In these environments, there is a specific set of security controls that might need to be modified for {PlatformNameShort} to run properly.
+Apply any compliance profile controls to the {RHEL} servers being used for {PlatformNameShort} before installation, and then modify the following security controls as required.
+
+In environments where these controls are required, discuss waiving the controls with your security auditor.
diff --git a/downstream/modules/aap-hardening/con-credential-management-planning.adoc b/downstream/modules/aap-hardening/con-credential-management-planning.adoc
index 31667f2ea3..f3283e6424 100644
--- a/downstream/modules/aap-hardening/con-credential-management-planning.adoc
+++ b/downstream/modules/aap-hardening/con-credential-management-planning.adoc
@@ -7,14 +7,22 @@
 [role="_abstract"]
-{ControllerNameStart} uses credentials to authenticate requests to jobs against machines, synchronize with inventory sources, and import project content from a version control system. {ControllerNameStart} manages three sets of secrets:
+{PlatformName} uses credentials to authenticate job requests against machines, synchronize with inventory sources, and import project content from a version control system.
-* User passwords for *local automation controller users*. See the xref:con-user-authentication-planning_{context}[User Authentication Planning] section of this guide for additional details.
-* Secrets for automation controller *operational use* (database password, message bus password, and so on).
+{ControllerNameStart} manages three sets of secrets:
+
+* User passwords for *local {PlatformNameShort} users*.
+//See the xref:con-user-authentication-planning_{context}[User Authentication Planning] section of this guide for additional details.
+* Secrets for {PlatformNameShort} *operational use* (database password, message bus password, and so on).
 * Secrets for *automation use* (SSH keys, cloud credentials, external password vault credentials, and so on).
 Implementing a privileged access or credential management solution to protect credentials from compromise is a highly recommended practice. Organizations should audit the use of, and provide additional programmatic control over, access and privilege escalation.
-You can further secure automation credentials by ensuring they are unique and stored only in {ControllerName}. Services such as OpenSSH can be configured to allow credentials on connections only from specific addresses. Use different credentials for automation from those used by system administrators to log into a server. Although direct access should be limited where possible, it can be used for disaster recovery or other ad-hoc management purposes, allowing for easier auditing.
+You can further secure automation credentials by ensuring they are unique and stored only in {ControllerName}.
+Services such as OpenSSH can be configured to allow credentials on connections only from specific addresses.
+Use different credentials for automation from those used by system administrators to log in to a server.
+Although direct access should be limited where possible, it can be used for disaster recovery or other ad-hoc management purposes, allowing for easier auditing.
-Different automation jobs might need to access a system at different levels.
+For example, you can have low-level system automation that applies patches and performs security baseline checking, while a higher-level piece of automation deploys applications.
+By using different keys or credentials for each piece of automation, the effect of any one key vulnerability is minimized. This also allows for easy baseline auditing.
diff --git a/downstream/modules/aap-hardening/con-day-two-operations.adoc b/downstream/modules/aap-hardening/con-day-two-operations.adoc
index f7e587847b..bbb3efb314 100644
--- a/downstream/modules/aap-hardening/con-day-two-operations.adoc
+++ b/downstream/modules/aap-hardening/con-day-two-operations.adoc
@@ -7,4 +7,4 @@
 [role="_abstract"]
-Day 2 Operations include Cluster Health and Scaling Checks, including Host, Project, and environment level Sustainment. You should continually analyze configuration and security drift.
\ No newline at end of file
+Day 2 operations include cluster health and scaling checks, including host-, project-, and environment-level sustainment. You must continually analyze configuration and security drift.
\ No newline at end of file
diff --git a/downstream/modules/aap-hardening/con-deployment-methods.adoc b/downstream/modules/aap-hardening/con-deployment-methods.adoc
new file mode 100644
index 0000000000..33e4ef6c3a
--- /dev/null
+++ b/downstream/modules/aap-hardening/con-deployment-methods.adoc
@@ -0,0 +1,16 @@
+[id="con-deployment-methods"]
+
+= {PlatformName} deployment methods
+
+There are three different installation methods for {PlatformNameShort}:
+
+* RPM-based on {RHEL}
+* Container-based on {RHEL}
+* Operator-based on {OCP}
+
+This document offers guidance on hardening {PlatformNameShort} when installed using either of the first two installation methods (RPM-based or container-based).
+This document further recommends using the container-based installation method for new deployments, as the RPM-based {Installer} will be deprecated in a future release.
+
+For further information, see link:{URLReleaseNotes}/aap-2.5-deprecated-features#aap-2.5-deprecated-features[Deprecated features].
+
+Operator-based deployments are not described in this document.
\ No newline at end of file
diff --git a/downstream/modules/aap-hardening/con-dns-ntp-service-planning.adoc b/downstream/modules/aap-hardening/con-dns-ntp-service-planning.adoc
index 23a419897d..d403ad3c80 100644
--- a/downstream/modules/aap-hardening/con-dns-ntp-service-planning.adoc
+++ b/downstream/modules/aap-hardening/con-dns-ntp-service-planning.adoc
@@ -1,6 +1,8 @@
 // Module included in the following assemblies:
 // downstream/assemblies/assembly-hardening-aap.adoc
-[ide="con-dns-ntp-service-planning.adoc_{context}"]
+[id="con-dns-ntp-service-planning_{context}"]
-= DNS, NTP, and service planning
\ No newline at end of file
+= DNS, NTP, and service planning
+
+When installing {PlatformNameShort}, DNS and NTP configurations are crucial for a successful deployment and proper operation.
\ No newline at end of file
diff --git a/downstream/modules/aap-hardening/con-external-credential-vault.adoc b/downstream/modules/aap-hardening/con-external-credential-vault.adoc
index 39db35c1e8..13a05ea2c3 100644
--- a/downstream/modules/aap-hardening/con-external-credential-vault.adoc
+++ b/downstream/modules/aap-hardening/con-external-credential-vault.adoc
@@ -7,8 +7,11 @@
 [role="_abstract"]
-Secrets management is an essential component of maintaining a secure automation platform.
We recommend the following secrets management practices:
+Secrets management is an essential component of maintaining a secure automation platform.
+We recommend the following secrets management practices:
-* Ensure that there are no unauthorized users with access to the system, and ensure that only users who require access are granted it. {ControllerNameStart} encrypts sensitive information such as passwords and API tokens, but also stores the key to decryption. Authorized users potentially have access to everything.
+* Ensure that there are no unauthorized users with access to the system, and ensure that only users who require access are granted it.
+{ControllerNameStart} encrypts sensitive information such as passwords and API tokens, but also stores the key to decryption.
+Authorized users potentially have access to everything.
-* Use an external system to manage secrets. In cases where credentials need to be updated, an external system can retrieve updated credentials with less complexity than an internal system. External systems for managing secrets include CyberArk, HashiCorp Vault, {Azure} Key Management, and others. For more information, see the link:https://docs.ansible.com/automation-controller/4.4/html/userguide/credential_plugins.html#secret-management-system[Secret Management System] section of the {ControllerUG} v4.4.
\ No newline at end of file
+* Use an external system to manage secrets. In cases where credentials need to be updated, an external system can retrieve updated credentials with less complexity than an internal system. External systems for managing secrets include CyberArk, HashiCorp Vault, {Azure} Key Management, and others. For more information, see the link:https://docs.ansible.com/automation-controller/4.4/html/userguide/credential_plugins.html#secret-management-system[Secret Management System] section of {ControllerUG}.
\ No newline at end of file
diff --git a/downstream/modules/aap-hardening/con-install-secure-host.adoc b/downstream/modules/aap-hardening/con-install-secure-host.adoc
index fd23a45dc8..6b42b46f05 100644
--- a/downstream/modules/aap-hardening/con-install-secure-host.adoc
+++ b/downstream/modules/aap-hardening/con-install-secure-host.adoc
@@ -7,8 +7,16 @@
 [role="_abstract"]
-The {PlatformNameShort} installer can be run from one of the infrastructure servers, such as an {ControllerName}, or from an external system that has SSH access to the {PlatformNameShort} infrastructure servers. The {PlatformNameShort} installer is also used not just for installation, but for subsequent day-two operations, such as backup and restore, as well as upgrades. This guide recommends performing installation and day-two operations from a dedicated external server, hereafter referred to as the installation host. Doing so eliminates the need to log in to one of the infrastructure servers to run these functions. The installation host must only be used for management of {PlatformNameShort} and must not run any other services or software.
+The {PlatformNameShort} {Installer} can be run from one of the infrastructure servers, such as an {ControllerName}, or from an external system that has SSH access to the {PlatformNameShort} infrastructure servers.
+The {PlatformNameShort} {Installer} is used not just for installation, but also for subsequent day-two operations, such as backup and restore, as well as upgrades.
+This guide recommends performing installation and day-two operations from a dedicated external server, hereafter referred to as the installation host.
+Doing so eliminates the need to log in to one of the infrastructure servers to run these functions.
+The installation host must only be used for management of {PlatformNameShort} and must not run any other services or software.
-The installation host must be a {RHEL} server that has been installed and configured in accordance with link:{BaseURL}/red_hat_enterprise_linux/8/html/security_hardening/index[Security hardening for Red Hat Enterprise Linux] and any security profile requirements relevant to your organization (CIS, STIG, and so on). Obtain the {PlatformNameShort} installer as described in the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_planning_guide/index#choosing_and_obtaining_a_red_hat_ansible_automation_platform_installer[Automation Platform Planning Guide], and create the installer inventory file as describe in the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_installation_guide/index#proc-editing-installer-inventory-file_platform-install-scenario[Automation Platform Installation Guide]. This inventory file is used for upgrades, adding infrastructure components, and day-two operations by the installer, so preserve the file after installation for future operational use.
+The installation host must be a {RHEL} server that has been installed and configured in accordance with link:{BaseURL}/red_hat_enterprise_linux/9/html/security_hardening/index[Security hardening for Red Hat Enterprise Linux] and any security profile requirements relevant to your organization (CIS, STIG, and so on).
+Obtain the {PlatformNameShort} {Installer} as described in the link:{URLPlanningGuide}/choosing_and_obtaining_a_red_hat_ansible_automation_platform_installer[Planning your installation], and create the installer inventory file as described in the link:{URLInstallationGuide}/assembly-platform-install-scenario#proc-editing-installer-inventory-file_platform-install-scenario[Editing the Red Hat Ansible Automation Platform installer inventory file].
+This inventory file is used for upgrades, adding infrastructure components, and day-two operations by the {Installer}, so preserve the file after installation for future operational use.
-Access to the installation host must be restricted only to those personnel who are responsible for managing the {PlatformNameShort} infrastructure. Over time, it will contain sensitive information, such as the installer inventory (which contains the initial login credentials for {PlatformNameShort}), copies of user-provided PKI keys and certificates, backup files, and so on. The installation host must also be used for logging in to the {PlatformNameShort} infrastructure servers through SSH when necessary for infrastructure management and maintenance.
+Access to the installation host must be restricted only to those personnel who are responsible for managing the {PlatformNameShort} infrastructure.
+Over time, it will contain sensitive information, such as the installation inventory (which contains the initial login credentials for {PlatformNameShort}), copies of user-provided PKI keys and certificates, backup files, and so on.
+The installation host must also be used for logging in to the {PlatformNameShort} infrastructure servers through SSH when necessary for infrastructure management and maintenance.
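+For example, because the installation inventory contains the initial login credentials for {PlatformNameShort}, restrict it to the owning user on the installation host (the path shown is illustrative):
+
+----
+$ chmod 0600 /home/admin/aap-install/inventory
+----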
diff --git a/downstream/modules/aap-hardening/con-installation.adoc b/downstream/modules/aap-hardening/con-installation.adoc
index 64589fd3c9..ed4d5fcaa8 100644
--- a/downstream/modules/aap-hardening/con-installation.adoc
+++ b/downstream/modules/aap-hardening/con-installation.adoc
@@ -7,4 +7,6 @@
 [role="_abstract"]
-There are installation-time decisions that affect the security posture of {PlatformNameShort}. The installation process includes setting a number of variables, some of which are relevant to the hardening of the {PlatformNameShort} infrastructure. Before installing {PlatformNameShort}, consider the guidance in the installation section of this guide.
\ No newline at end of file
+There are installation-time decisions that affect the security posture of {PlatformNameShort}.
+The installation process includes setting a number of variables, some of which are relevant to the hardening of the {PlatformNameShort} infrastructure.
+Before installing {PlatformNameShort}, consider the guidance in the installation section of this guide.
\ No newline at end of file
diff --git a/downstream/modules/aap-hardening/con-logging-log-capture.adoc b/downstream/modules/aap-hardening/con-logging-log-capture.adoc
index 230010399b..2deb4a45a7 100644
--- a/downstream/modules/aap-hardening/con-logging-log-capture.adoc
+++ b/downstream/modules/aap-hardening/con-logging-log-capture.adoc
@@ -7,8 +7,22 @@
 [role="_abstract"]
-Visibility and analytics is an important pillar of Enterprise Security and Zero Trust Architecture. Logging is key to capturing actions and auditing. You can manage logging and auditing by using the built-in audit support described in the link:{BaseURL}/red_hat_enterprise_linux/9/html/security_hardening/auditing-the-system_security-hardening[Auditing the system] section of the Security hardening for {RHEL} guide. Controller's built-in logging and activity stream support {ControllerName} logs all changes within {ControllerName} and automation logs for auditing purposes. More detailed information is available in the link:https://docs.ansible.com/automation-controller/latest/html/administration/logging.html[Logging and Aggregation] section of the {ControllerName} documentation.
+Visibility and analytics is an important pillar of Enterprise Security and Zero Trust Architecture.
+Logging is key to capturing actions and auditing.
+You can manage logging and auditing by using the built-in audit support described in the link:{BaseURL}/red_hat_enterprise_linux/9/html/security_hardening/auditing-the-system_security-hardening[Auditing the system] section of the Security hardening for {RHEL} guide.
+{PlatformNameShort}'s built-in logging and activity stream log all changes within {PlatformName}, along with automation logs, for auditing purposes.
+More detailed information is available in the link:{URLControllerAdminGuide}/assembly-controller-logging-aggregation[Logging and Aggregation] section of {TitleControllerAdminGuide}.
-This guide recommends that you configure {PlatformNameShort} and the underlying {RHEL} systems to collect logging and auditing centrally, rather than reviewing it on the local system. {ControllerNameStart} must be configured to use external logging to compile log records from multiple components within the controller server. The events occurring must be time-correlated to conduct accurate forensic analysis. This means that the controller server must be configured with an NTP server that is also used by the logging aggregator service, as well as the targets of the controller.
The correlation must meet certain industry tolerance requirements. In other words, there might be a varying requirement that time stamps of different logged events must not differ by any amount greater than X seconds. This capability should be available in the external logging service.
+This guide recommends that you configure {PlatformNameShort} and the underlying {RHEL} systems to collect logging and auditing centrally, rather than reviewing it on the local system.
+{PlatformNameShort} must be configured to use external logging to compile log records from multiple components within the {PlatformNameShort} server.
+The events occurring must be time-correlated to conduct accurate forensic analysis.
+This means that the {PlatformNameShort} server must be configured with an NTP server that is also used by the logging aggregator service, as well as the targets of {PlatformNameShort}.
+The correlation must meet certain industry tolerance requirements.
+In other words, there might be a varying requirement that time stamps of different logged events must not differ by any amount greater than _x_ seconds.
+This capability should be available in the external logging service.
-Another critical capability of logging is the ability to use cryptography to protect the integrity of log tools. Log data includes all information (for example, log records, log settings, and log reports) needed to successfully log information system activity. It is common for attackers to replace the log tools or inject code into the existing tools to hide or erase system activity from the logs. To address this risk, log tools must be cryptographically signed so that you can identify when the log tools have been modified, manipulated, or replaced. For example, one way to validate that the log tool(s) have not been modified, manipulated or replaced is to use a checksum hash against the tool file(s). This ensures the integrity of the tool(s) has not been compromised.
+Another critical capability of logging is the ability to use cryptography to protect the integrity of log tools. Log data includes all information (for example, log records, log settings, and log reports) needed to successfully log information system activity.
+It is common for attackers to replace the log tools or inject code into the existing tools to hide or erase system activity from the logs.
+To address this risk, log tools must be cryptographically signed so that you can identify when the log tools have been modified, manipulated, or replaced.
+For example, one way to validate that the log tool(s) have not been modified, manipulated, or replaced is to use a checksum hash against the tool file(s).
+This ensures the integrity of the tool(s) has not been compromised.
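+A minimal sketch of such a checksum validation, using `sha256sum` (the tool path is illustrative):
+
+----
+$ sha256sum /usr/sbin/rsyslogd > rsyslogd.sha256
+$ sha256sum --check rsyslogd.sha256
+/usr/sbin/rsyslogd: OK
+----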
diff --git a/downstream/modules/aap-hardening/con-network-firewall-services-planning.adoc b/downstream/modules/aap-hardening/con-network-firewall-services-planning.adoc
index ba5de6ccfd..b564aa5f58 100644
--- a/downstream/modules/aap-hardening/con-network-firewall-services-planning.adoc
+++ b/downstream/modules/aap-hardening/con-network-firewall-services-planning.adoc
@@ -3,20 +3,22 @@
 [id="con-network-firewall-services_{context}"]
-= Network, firewall, and network services planning for {PlatformNameShort}
+//= Network, firewall, and network services planning for {PlatformNameShort}
 [role="_abstract"]
-{PlatformNameShort} requires access to a network to integrate to external auxiliary services and to manage target environments and resources such as hosts, other network devices, applications, cloud services. The link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_planning_guide/index#ref-network-ports-protocols_planning[network ports and protocols] section of the {PlatformNameShort} planning guide describes how {PlatformNameShort} components interact on the network as well as which ports and protocols are used, as shown in the following diagram:
+//{PlatformNameShort} requires access to a network to integrate to external auxiliary services and to manage target environments and resources such as hosts, other network devices, applications, cloud services.
+//The link:{URLPlanningGuide}?ref-network-ports-protocols_planning[network ports and protocols] section of {TitlePlanningGuide} describes how {PlatformNameShort} components interact on the network as well as which ports and protocols are used, as shown in the following diagram:
-.{PlatformNameShort} Network ports and protocols
-image::aap-network-ports-protocols.png[Interaction of {PlatformNameShort} components on the network with information about the ports and protocols that are used.]
+//.{PlatformNameShort} Network ports and protocols
+//image::aap-network-ports-protocols.png[Interaction of {PlatformNameShort} components on the network with information about the ports and protocols that are used.]
-When planning firewall or cloud network security group configurations related to {PlatformNameShort}, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_planning_guide/index#ref-network-ports-protocols_planning[Network ports and protocols] section of the {PlatformNameShort} Planning Guide to understand what network ports need to be opened on a firewall or security group.
+When planning firewall or cloud network security group configurations related to {PlatformNameShort}, see the
+"Network Ports" section of your chosen topology in {LinkTopologies} to understand what network ports need to be opened on a firewall or security group.
-For more information on using a load balancer, and for outgoing traffic requirements for services compatible with {PlatformNameShort}. Consult the Red Hat Knowledgebase article link:https://access.redhat.com/solutions/6756251[What ports need to be opened in the firewall for {PlatformNameShort} 2 Services?]. For internet-connected systems, this article also defines the outgoing traffic requirements for services that {PlatformNameShort} can be configured to use, such as {HubNameMain}, {InsightsName}, {Galaxy}, the registry.redhat.io container image registry, and so on.
+//For more information on using a load balancer, and for outgoing traffic requirements for services compatible with {PlatformNameShort}. Consult the Red Hat Knowledgebase article link:https://access.redhat.com/solutions/6756251[What ports need to be opened in the firewall for {PlatformNameShort} 2 Services?]. For internet-connected systems, this article also defines the outgoing traffic requirements for services that {PlatformNameShort} can be configured to use, such as {HubNameMain}, {InsightsName}, {Galaxy}, the registry.redhat.io container image registry, and so on.
-For internet-connected systems, this article also defines the outgoing traffic requirements for services that {PlatformNameShort} can be configured to use, such as Red Hat {HubName}, {InsightsShort}, {Galaxy}, the registry.redhat.io container image registry, and so on.
+For internet-connected systems, the link:{URLPlanningGuide}/ref-network-ports-protocols_planning[Networks and Protocols] section of {TitlePlanningGuide} defines the outgoing traffic requirements for services that {PlatformNameShort} can be configured to use, such as Red Hat {HubName}, {InsightsShort}, {Galaxy}, the registry.redhat.io container image registry, and so on.
 Restrict access to the ports used by the {PlatformNameShort} components to protected networks and clients. The following restrictions are highly recommended:
diff --git a/downstream/modules/aap-hardening/con-planning-considerations.adoc b/downstream/modules/aap-hardening/con-planning-considerations.adoc
index f6f8617464..28595eb98b 100644
--- a/downstream/modules/aap-hardening/con-planning-considerations.adoc
+++ b/downstream/modules/aap-hardening/con-planning-considerations.adoc
@@ -7,16 +7,14 @@
 [role="_abstract"]
-When planning an {PlatformNameShort} installation, ensure that the following components are included:
+{PlatformName} is composed of the following primary components:
-* Installer-manged components
-** {ControllerNameStart}
-** {EDAcontroller}
-** {PrivateHubNameStart}
-* PostgreSQL database (if not external)
-** External services
-** {InsightsName}
-** {HubNameStart}
-** `registry.redhat.io` (default {ExecEnvShort} container registry)
+* {ControllerNameStart}
+* {AutomationMeshStart}
+* {PrivateHubNameStart}
+* {EDAcontroller}
-See the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/platform-system-requirements[system requirements] section of the _{PlatformName} Planning Guide_ for additional information.
+A PostgreSQL database is also provided, although a user-provided PostgreSQL database can be used as well.
+Red Hat recommends that customers always deploy all components of {PlatformNameShort} so that all features and capabilities are available for use without the need to take further action.
+
+For further information, see link:{URLPlanningGuide}/ref-aap-components#ref-aap-components[{PlatformName} Architecture].
diff --git a/downstream/modules/aap-hardening/con-platform-components.adoc b/downstream/modules/aap-hardening/con-platform-components.adoc
index e3768d3ad3..3e3f5bd50e 100644
--- a/downstream/modules/aap-hardening/con-platform-components.adoc
+++ b/downstream/modules/aap-hardening/con-platform-components.adoc
@@ -7,8 +7,8 @@
 [role="_abstract"]
-{PlatformNameShort} is a modular platform that includes {ControllerName}, {HubName}, {EDAcontroller}, and {InsightsShort}.
+{PlatformNameShort} is a modular platform composed of separate components that can be connected together, including {ControllerName}, {Gateway}, {HubName}, and {EDAcontroller}.
 [role="_additional-resources"]
 .Additional resources
-For more information about the components provided within {PlatformNameShort}, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/ref-aap-components[Red Hat Ansible Automation Platform components] in the _Red Hat Ansible Automation Platform Planning Guide_.
+For more information about the components provided within {PlatformNameShort}, see link:{URLPlanningGuide}/ref-aap-components[Red Hat Ansible Automation Platform components] in _{TitlePlanningGuide}_.
diff --git a/downstream/modules/aap-hardening/con-product-overview.adoc b/downstream/modules/aap-hardening/con-product-overview.adoc
index f3c42ba54d..daaf77320b 100644
--- a/downstream/modules/aap-hardening/con-product-overview.adoc
+++ b/downstream/modules/aap-hardening/con-product-overview.adoc
@@ -7,6 +7,11 @@
 [role="_abstract"]
-Ansible is an open source, command-line IT automation software application written in Python. You can use {PlatformNameShort} to configure systems, deploy software, and orchestrate advanced workflows to support application deployment, system updates, and more. Ansible’s main strengths are simplicity and ease of use. It also has a strong focus on security and reliability, featuring minimal moving parts. It uses secure, well-known communication protocols like SSH, HTTPS, and WinRM for transport and uses a human-readable language that is designed for getting started quickly without extensive training.
+Ansible is an open source, command-line IT automation software application written in Python.
+You can use {PlatformNameShort} to configure systems, deploy software, and orchestrate advanced workflows to support application deployment, system updates, and more.
+Ansible's main strengths are simplicity and ease of use. It also has a strong focus on security and reliability, featuring minimal moving parts. It uses secure, well-known communication protocols like SSH, HTTPS, and WinRM for transport and uses a human-readable language that is designed for getting started quickly without extensive training.
-{PlatformNameShort} enhances the Ansible language with enterprise-class features, such as Role-Based Access Controls (RBAC), centralized logging and auditing, credential management, job scheduling, and complex automation workflows. With {PlatformNameShort} you get certified content from our robust partner ecosystem; added security, reporting, and analytics; and life cycle technical support to scale automation across your organization. {PlatformNameShort} simplifies the development and operation of automation workloads for managing enterprise application infrastructure life cycles. It works across multiple IT domains including operations, networking, security, and development, as well as across diverse hybrid environments.
\ No newline at end of file
+{PlatformNameShort} enhances the Ansible language with enterprise-class features, such as _Role-Based Access Controls_ (RBAC), centralized logging and auditing, credential management, job scheduling, and complex automation workflows.
+With {PlatformNameShort} you get certified content from our robust partner ecosystem; added security, reporting, and analytics; and life cycle technical support to scale automation across your organization.
+{PlatformNameShort} simplifies the development and operation of automation workloads for managing enterprise application infrastructure life cycles.
+It works across multiple IT domains including operations, networking, security, and development, as well as across diverse hybrid environments.
\ No newline at end of file
diff --git a/downstream/modules/aap-hardening/con-protect-sensitive-data-no-log.adoc b/downstream/modules/aap-hardening/con-protect-sensitive-data-no-log.adoc
new file mode 100644
index 0000000000..60924f07cd
--- /dev/null
+++ b/downstream/modules/aap-hardening/con-protect-sensitive-data-no-log.adoc
@@ -0,0 +1,5 @@
+[id="con-protect-sensitive-data-no-log"]
+
+= Protecting sensitive data with no_log
+
+If you save Ansible output to a log, you expose any secret data in your Ansible output, such as passwords and usernames.
+To keep sensitive values out of your logs, mark tasks that expose them with the `no_log: True` attribute. However, the `no_log` attribute does not affect debugging output, so be careful not to debug playbooks in a production environment.
\ No newline at end of file
diff --git a/downstream/modules/aap-hardening/con-rbac.adoc b/downstream/modules/aap-hardening/con-rbac.adoc
index dee75d460a..b970355605 100644
--- a/downstream/modules/aap-hardening/con-rbac.adoc
+++ b/downstream/modules/aap-hardening/con-rbac.adoc
@@ -7,17 +7,21 @@
 [role="_abstract"]
-As an administrator, you can use the Role-Based Access Controls (RBAC) built into {ControllerName} to delegate access to server inventories, organizations, and more. Administrators can also centralize the management of various credentials, allowing end users to leverage a needed secret without ever exposing that secret to the end user. RBAC controls allow the controller to help you increase security and streamline management.
+As an administrator, you can use the _Role-Based Access Controls_ (RBAC) built into the {Gateway} to delegate access to server inventories, organizations, and more.
+Administrators can also centralize the management of various credentials, enabling end users to use a needed secret without ever exposing that secret to the end user.
+RBAC controls allow {PlatformNameShort} to help you increase security and streamline management.
-RBAC is the practice of granting roles to users or teams. RBACs are easiest to think of in terms of Roles which define precisely who or what can see, change, or delete an “object” for which a specific capability is being set.
+RBAC is the practice of granting roles to users or teams.
+RBAC is easiest to think of in terms of Roles, which define precisely who or what can see, change, or delete an “object” for which a specific capability is being set.
-There are a few main concepts that you should become familiar with regarding {ControllerName}'s RBAC design–roles, resources, and users. Users can be members of a role, which gives them certain access to any resources associated with that role, or any resources associated with “descendant” roles.
+There are a few main concepts that you should become familiar with regarding {PlatformNameShort}'s RBAC design: roles, resources, and users.
+Users can be members of a role, which gives them certain access to any resources associated with that role, or any resources associated with “descendant” roles.
-A role is essentially a collection of capabilities. Users are granted access to these capabilities and the controller’s resources through the roles to which they are assigned or through roles inherited through the role hierarchy.
+A role is essentially a collection of capabilities. Users are granted access to these capabilities and {ControllerName}'s resources through the roles to which they are assigned or through roles inherited through the role hierarchy.
 Roles associate a group of capabilities with a group of users. All capabilities are derived from membership within a role. Users receive capabilities only through the roles to which they are assigned or through roles they inherit through the role hierarchy. All members of a role have all capabilities granted to that role. Within an organization, roles are relatively stable, while users and capabilities are both numerous and may change rapidly. Users can have many roles.
-For further detail on Role Hierarchy, access inheritance, Built in Roles, permissions, personas, Role Creation, and so on see link:https://docs.ansible.com/automation-controller/latest/html/userguide/security.html#role-based-access-controls[Role-Based Access Controls].
+For further detail on Role Hierarchy, access inheritance, Built-in Roles, permissions, personas, Role Creation, and so on, see link:{URLCentralAuth}/gw-managing-access[Managing access with Role-Based access controls].
 The following is an example of an organization with roles and resource permissions:
@@ -26,6 +30,12 @@ image::aap_ref_arch_2.4.1.png[Reference architecture for an example of an organi
 User access is based on managing permissions to system objects (users, groups, namespaces) rather than by assigning permissions individually to specific users. You can assign permissions to the groups you create. You can then assign users to these groups. This means that each user in a group has the permissions assigned to that group.
-Groups created in Automation Hub can range from system administrators responsible for governing internal collections, configuring user access, and repository management to groups with access to organize and upload internally developed content to Automation Hub. For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_hub/assembly-user-access#ref-permissions[{HubNameStart} permissions] for consistency.
+Teams created in {HubName} can range from system administrators responsible for governing internal collections, configuring user access, and managing repositories, to groups with access to organize and upload internally developed content to {HubName}.
-View-only access can be enabled for further lockdown of the {PrivateHubName}. By enabling view-only access, you can grant access for users to view collections or namespaces on your {PrivateHubName} without the need for them to log in. View-only access allows you to share content with unauthorized users while restricting their ability to only view or download source code, without permissions to edit anything on your {PrivateHubName}. Enable view-only access for your {PrivateHubName} by editing the inventory file found on your {PlatformName} installer.
+//TBD link to getting started with hub
+//For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_hub/assembly-user-access#ref-permissions[{HubNameStart} permissions] for consistency.
+
+View-only access can be enabled for further lockdown of the {PrivateHubName}.
+By enabling view-only access, you can grant access for users to view collections or namespaces on your {PrivateHubName} without the need for them to log in.
+View-only access allows you to share content with unauthenticated users while restricting them to viewing or downloading source code, without permissions to edit anything on your {PrivateHubName}. +Enable view-only access for your {PrivateHubName} by editing the inventory file found on your {PlatformName} installer. diff --git a/downstream/modules/aap-hardening/con-rhel-host-planning.adoc b/downstream/modules/aap-hardening/con-rhel-host-planning.adoc index 52ba269f28..fa4ca7ce8f 100644 --- a/downstream/modules/aap-hardening/con-rhel-host-planning.adoc +++ b/downstream/modules/aap-hardening/con-rhel-host-planning.adoc @@ -7,6 +7,10 @@ [role="_abstract"] -The security of {PlatformNameShort} relies in part on the configuration of the underlying {RHEL} servers. For this reason, the underlying {RHEL} hosts for each {PlatformNameShort} component must be installed and configured in accordance with the link:{BaseURL}/red_hat_enterprise_linux/8/html-single/security_hardening/index[Security hardening for {RHEL} 8] or link:{BaseURL}/red_hat_enterprise_linux/9/html-single/security_hardening/index[Security hardening for {RHEL} 9] (depending on which operating system will be used), as well as any security profile requirements (CIS, STIG, HIPAA, and so on) used by your organization. +The security of {PlatformNameShort} relies in part on the configuration of the underlying {RHEL} servers. +For this reason, the underlying {RHEL} hosts for each {PlatformNameShort} component must be installed and configured in accordance with the link:{BaseURL}/red_hat_enterprise_linux/8/html-single/security_hardening/index[Security hardening for {RHEL} 8] or link:{BaseURL}/red_hat_enterprise_linux/9/html-single/security_hardening/index[Security hardening for {RHEL} 9] (depending on which operating system is used), as well as any security profile requirements (_Center for Internet Security_ (CIS), _Security Technical Implementation Guide_ (STIG), _Health Insurance Portability and Accountability Act_ (HIPAA), and so on) used by your organization. +This document recommends {RHEL} 9 for all new deployments. +When using the container-based installation method, {RHEL} 9 is required. -Note that applying certain security controls from the STIG or other security profiles may conflict with {PlatformNameShort} support requirements. Some examples are listed in the xref:con-controller-stig-considerations_{context}[{ControllerNameStart} STIG considerations] section, although this is not an exhaustive list. To maintain a supported configuration, be sure to discuss any such conflicts with your security auditors so the {PlatformNameShort} requirements are understood and approved. +//Note that applying certain security controls from the STIG or other security profiles may conflict with {PlatformNameShort} support requirements. +//Some examples are listed in the xref:con-controller-stig-considerations_{context}[{ControllerNameStart} STIG considerations] section, although this is not an exhaustive list. To maintain a supported configuration, be sure to discuss any such conflicts with your security auditors so the {PlatformNameShort} requirements are understood and approved.
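+
+For example, you can audit a host against your chosen profile with the OpenSCAP tools shipped with {RHEL} before installing {PlatformNameShort}. The following is a minimal sketch that assumes the CIS profile on {RHEL} 9; substitute the profile and report path that your organization requires:
+
+----
+# Install the SCAP scanner and the SCAP Security Guide content
+$ sudo dnf install -y openscap-scanner scap-security-guide
+
+# Evaluate the host against the CIS profile and write an HTML report
+$ sudo oscap xccdf eval \
+  --profile xccdf_org.ssgproject.content_profile_cis \
+  --report /tmp/cis-report.html \
+  /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml
+----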
diff --git a/downstream/modules/aap-hardening/con-user-authentication-planning.adoc b/downstream/modules/aap-hardening/con-user-authentication-planning.adoc index cfaff9004b..b6e2e250c5 100644 --- a/downstream/modules/aap-hardening/con-user-authentication-planning.adoc +++ b/downstream/modules/aap-hardening/con-user-authentication-planning.adoc @@ -5,19 +5,10 @@ = User authentication planning -[role="_abstract"] +There are two types of user accounts to consider in an {PlatformNameShort} environment: -When planning for access to the {PlatformNameShort} user interface or API, be aware that user accounts can either be local or mapped to an external authentication source such as LDAP. This guide recommends that where possible, all primary user accounts should be mapped to an external authentication source. Using external account sources eliminates a source of error when working with permissions in this context and minimizes the amount of time devoted to maintaining a full set of users exclusively within {PlatformNameShort}. This includes accounts assigned to individual persons as well as for non-person entities such as service accounts used for external application integration. Reserve any local administrator accounts such as the default "admin" account for emergency access or "break glass" scenarios where the external authentication mechanism is not available. +* Infrastructure accounts: user accounts on the {RHEL} servers that run the {PlatformNameShort} services. +* Application accounts: user accounts for the {PlatformNameShort} web UI and API. -[NOTE] -==== -The {EDAcontroller} does not currently support external authentication, only local accounts. -==== -For user accounts on the {RHEL} servers that run the {PlatformNameShort} services, follow your organizational policies to determine if individual user accounts should be local or from an external authentication source. Only users who have a valid need to perform maintenance tasks on the {PlatformNameShort} components themselves should be granted access to the underlying {RHEL} servers, as the servers will have configuration files that contain sensitive information such as encryption keys and service passwords. Because these individuals must have privileged access to maintain {PlatformNameShort} services, minimizing the access to the underlying {RHEL} servers is critical. Do not grant sudo access to the root account or local {PlatformNameShort} service accounts (awx, pulp, postgres) to untrusted users. - -[NOTE] -==== -The local {PlatformNameShort} service accounts such as awx, pulp, and postgres are created and managed by the {PlatformNameShort} installer. These particular accounts on the underlying {RHEL} hosts cannot come from an external authentication source.
-==== diff --git a/downstream/modules/aap-hardening/platform b/downstream/modules/aap-hardening/platform new file mode 120000 index 0000000000..01b1259b79 --- /dev/null +++ b/downstream/modules/aap-hardening/platform @@ -0,0 +1 @@ +../platform/ \ No newline at end of file diff --git a/downstream/modules/aap-hardening/proc-configure-centralized-logging.adoc b/downstream/modules/aap-hardening/proc-configure-centralized-logging.adoc deleted file mode 100644 index 577f2060e1..0000000000 --- a/downstream/modules/aap-hardening/proc-configure-centralized-logging.adoc +++ /dev/null @@ -1,93 +0,0 @@ -// Module included in the following assemblies: -// downstream/assemblies/assembly-hardening-aap.adoc - -[id="proc-configure-centralized-logging_{context}"] - -= Configure centralized logging - -A critical capability of logging is the ability for the {ControllerName} to detect and take action to mitigate a failure, such as reaching storage capacity, which by default shuts down the controller. This guide recommends that the application server be part of a high availability system. When this is the case, {ControllerName} will take the following steps to mitigate failure: - -* If the failure was caused by the lack of log record storage capacity, the application must continue generating log records if possible (automatically restarting the log service if necessary), overwriting the oldest log records in a first-in-first-out manner. -* If log records are sent to a centralized collection server and communication with this server is lost or the server fails, the application must queue log records locally until communication is restored or until the log records are retrieved manually. Upon restoration of the connection to the centralized collection server, action must be taken to synchronize the local log data with the collection server. - -To verify the rsyslog configuration for each {ControllerName} host, complete the following steps for each {ControllerName}: - -The administrator must check the rsyslog configuration for each {ControllerName} host to verify the log rollover against a organizationally defined log capture size. To do this, use the following steps, and correct using the configuration steps as required: - -. Check the `LOG_AGGREGATOR_MAX_DISK_USAGE_GB` field in the {ControllerName} configuration. On the host, execute: -+ ----- -awx-manage print_settings LOG_AGGREGATOR_MAX_DISK_USAGE_GB ----- -+ -If this field is not set to the organizationally defined log capture size, then follow the configuration steps. - -. Check `LOG_AGGREGATOR_MAX_DISK_USAGE_PATH` field in the {ControllerName} configuration for the log file location to `/var/lib/awx`. On the host, execute: -+ ----- -awx-manage print_settings LOG_AGGREGATOR_MAX_DISK_USAGE_PATH ----- -+ -If this field is not set to `/var/lib/awx`, then follow these configuration steps: -+ --- -.. Open a web browser and navigate to \https:///api/v2/settings/logging/, where is the fully-qualified hostname of your {ControllerName}. If the btn:[Log In] option is displayed, click it, log in as an {ControllerName} adminstrator account, and continue. - -.. In the Content section, modify the following values, then click btn:[PUT]: -+ -* LOG_AGGREGATOR_MAX_DISK_USAGE_GB = -* LOG_AGGREGATOR_MAX_DISK_USAGE_PATH = `/var/lib/awx` --- -+ -Note that this change will need to be made on each {ControllerName} in a load-balanced scenario. - -All user session data must be logged to support troubleshooting, debugging and forensic analysis for visibility and analytics. 
Without this data from the controller’s web server, important auditing and analysis for event investigations will be lost. To verify that the system is configured to ensure that user session data is logged, use the following steps: - -For each {ControllerName} host, navigate to console Settings >> System >> Miscellaneous System. - -. Click btn:[Edit]. -. Set the following: -* Enable Activity Stream = On -* Enable Activity Stream for Inventory Sync = On -* Organization Admins Can Manage Users and Teams = Off -* All Users Visible to Organization Admins = On -. Click btn:[Save] - -To set up logging to any of the aggregator types, read the documentation on link:https://docs.ansible.com/automation-controller/latest/html/administration/logging.html#logging-aggregator-services[supported log aggregators] and configure your log aggregator using the following steps: - -. Navigate to {PlatformNameShort}. -. Click btn:[Settings]. -. Under the list of System options, select Logging settings. -. At the bottom of the Logging settings screen, click btn:[Edit]. -. Set the configurable options from the fields provided: -* Enable External Logging: Click the toggle button to btn:[ON] if you want to send logs to an external log aggregator. The UI requires the Logging Aggregator and Logging Aggregator Port fields to be filled in before this can be done. -* Logging Aggregator: Enter the hostname or IP address you want to send logs. -* Logging Aggregator Port: Specify the port for the aggregator if it requires one. -* Logging Aggregator Type: Select the aggregator service from the drop-down menu: -** Splunk -** Loggly -** Sumologic -** Elastic stack (formerly ELK stack) -* Logging Aggregator Username: Enter the username of the logging aggregator if required. -* Logging Aggregator Password/Token: Enter the password of the logging aggregator if required. -* Log System Tracking Facts Individually: Click the tooltip icon for additional information, whether or not you want to turn it on, or leave it off by default. -* Logging Aggregator Protocol: Select a connection type (protocol) to communicate with the log aggregator. Subsequent options vary depending on the selected protocol. -* Logging Aggregator Level Threshold: Select the level of severity you want the log handler to report. -* TCP Connection Timeout: Specify the connection timeout in seconds. This option is only applicable to HTTPS and TCP log aggregator protocols. -* Enable/disable HTTPS certificate verification: Certificate verification is enabled by default for HTTPS log protocol. Click the toggle button to btn:[OFF] if you do not want the log handler to verify the HTTPS certificate sent by the external log aggregator before establishing a connection. -* Loggers to Send Data to the Log Aggregator Form: All four types of data are pre-populated by default. Click the tooltip icon next to the field for additional information on each data type. Delete the data types you do not want. -* Log Format For API 4XX Errors: Configure a specific error message. -. Click btn:[Save] to apply the settings or btn:[Cancel] to abandon the changes. -. To verify if your configuration is set up correctly, btn:[Save] first then click btn:[Test]. This sends a test log message to the log aggregator using the current logging configuration in the {ControllerName}. You should check to make sure this test message was received by your external log aggregator. - -A {ControllerName} account is automatically created for any user who logs in with an LDAP username and password. 
These users can automatically be placed into organizations as regular users or organization administrators. This means that logging should be turned on when LDAP integration is in use. You can enable logging messages for the SAML adapter the same way you can enable logging for LDAP. - -The following steps enable the LDAP logging: - -To enable logging for LDAP, you must set the level to DEBUG in the Settings configuration window. - -. Click btn:[Settings] from the left navigation pane and select Logging settings from the System list of options. -. Click btn:[Edit]. -. Set the Logging Aggregator Level Threshold field to Debug. -. Click btn:[Save] to save your changes. - diff --git a/downstream/modules/aap-hardening/proc-configure-ldap-logging.adoc b/downstream/modules/aap-hardening/proc-configure-ldap-logging.adoc new file mode 100644 index 0000000000..3121a1b8fc --- /dev/null +++ b/downstream/modules/aap-hardening/proc-configure-ldap-logging.adoc @@ -0,0 +1,39 @@ +[id="proc-configure-ldap-logging"] + += Configuring LDAP logging + +To enable logging for LDAP, use the following procedure. + +.Procedure + +. Edit the gateway settings file: .. On {PlatformNameShort} {PlatformVers} Containerized, the file is `~/aap/gateway/etc/settings.py` (as the user running the {Gateway} container). .. On {PlatformNameShort} {PlatformVers} RPM-based, the file is `/etc/ansible-automation-platform/gateway/settings.py`. ++ +---- + (...) + CACHES['fallback']['LOCATION'] = '/var/cache/ansible-automation-platform/gateway' + + LOGGING['loggers']['aap']['level'] = 'INFO' + LOGGING['loggers']['ansible_base']['level'] = 'INFO' + LOGGING['loggers']['django_auth_ldap']['level'] = 'DEBUG' ###### add this line + + (...) +---- + +. Restart the {Gateway} service or container: +.. On {PlatformNameShort} {PlatformVers} Containerized, restart the {Gateway} service so that it restarts the {Gateway} container: ++ +[NOTE] +==== +Ensure that you run `systemctl` with the `--user` flag as follows: ++ +`$ systemctl --user restart automation-gateway` +==== +.. On {PlatformNameShort} {PlatformVers} RPM-based, use the `automation-gateway-service` command: ++ +`# automation-gateway-service restart` diff --git a/downstream/modules/aap-hardening/proc-disaster-recovery-operations.adoc b/downstream/modules/aap-hardening/proc-disaster-recovery-operations.adoc index b7c99acf39..bf2c8e1d1d 100644 --- a/downstream/modules/aap-hardening/proc-disaster-recovery-operations.adoc +++ b/downstream/modules/aap-hardening/proc-disaster-recovery-operations.adoc @@ -7,7 +7,7 @@ [role="_abstract"] -Taking regular backups of {PlatformNameShort} is a critical part of disaster recovery planning. Both backups and restores are performed using the installer, so these actions should be performed from the dedicated installation host described earlier in this document. Refer to the link:https://docs.ansible.com/automation-controller/latest/html/administration/backup_restore.html[Backing Up and Restoring] section of the {ControllerName} documentation for further details on how to perform these operations. +Taking regular backups of {PlatformNameShort} is a critical part of disaster recovery planning. Both backups and restores are performed using the {Installer}, so these actions should be performed from the dedicated installation host described earlier in this document.
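+For example, the following is a minimal sketch of taking a backup with the RPM-based {Installer}; the destination path is illustrative:
+
+----
+# Run from the installation program directory on the dedicated installation host
+$ ./setup.sh -e 'backup_dest=/path/to/backup_dir/' -b
+----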
Refer to the link:https://docs.ansible.com/automation-controller/latest/html/administration/backup_restore.html[Backing Up and Restoring] section of the {ControllerName} documentation for further details on how to perform these operations. An important aspect of backups is that they contain a copy of the database as well as the secret key used to decrypt credentials stored in the database, so the backup files should be stored in a secure encrypted location. This ensures that access to endpoint credentials is properly protected. Access to backups should be limited only to {PlatformNameShort} administrators who have root shell access to {ControllerName} and the dedicated installation host. @@ -18,11 +18,11 @@ The two main reasons an {PlatformNameShort} administrator needs to back up their In all cases, the recommended and safest process is to always use the same versions of PostgreSQL and {PlatformNameShort} to back up and restore the environment. -Using some redundancy on the system is highly recommended. If the secrets system is down, the {ControllerName} cannot fetch the information and can fail in a way that would be recoverable once the service is restored. If you believe the SECRET_KEY {ControllerName} generated for you has been compromised and has to be regenerated, you can run a tool from the installer that behaves much like the {ControllerName} backup and restore tool. +Using some redundancy on the system is highly recommended. If the secrets system is down, the {ControllerName} cannot fetch the information and can fail in a way that would be recoverable once the service is restored. If you believe the SECRET_KEY that {ControllerName} generated for you has been compromised and has to be regenerated, you can run a tool from the {Installer} that behaves much like the {ControllerName} backup and restore tool. To generate a new secret key, perform the following steps: . Back up your {PlatformNameShort} database before you do anything else! Follow the procedure described in the link:https://docs.ansible.com/automation-controller/latest/html/administration/backup_restore.html[Backing Up and Restoring Controller] section. . Using the inventory from your install (same inventory with which you run backups/restores), run `setup.sh -k`. -A backup copy of the prior key is saved in `/etc/tower/`. +A backup copy of the previous key is saved in `/etc/tower/`. diff --git a/downstream/modules/aap-hardening/proc-fapolicyd.adoc b/downstream/modules/aap-hardening/proc-fapolicyd.adoc index 3e84fbc19a..74fd4879e9 100644 --- a/downstream/modules/aap-hardening/proc-fapolicyd.adoc +++ b/downstream/modules/aap-hardening/proc-fapolicyd.adoc @@ -7,14 +7,18 @@ [role="_abstract"] -The {RHEL} 8 STIG requires the fapolicyd daemon to be running. However, {PlatformNameShort} is not currently supported when fapolicyd enforcing policy, as this causes failures during the installation and operation of {PlatformNameShort}. Because of this, the installer runs a pre-flight check that will halt installation if it discovers that fapolicyd is enforcing policy. This guide recommends setting fapolicyd to permissive mode on the {ControllerName} using the following steps: +A compliance policy might require the `fapolicyd` daemon to be running. +However, {PlatformNameShort} is not currently supported when `fapolicyd` is enforcing policy, as this causes failures during both installation and operation of {PlatformNameShort}.
+Because of this, the installation program runs a pre-flight check that halts installation if it discovers that `fapolicyd` is enforcing policy. +This guide recommends setting `fapolicyd` to permissive mode on {PlatformNameShort} using the following steps: . Edit the file `/etc/fapolicyd/fapolicyd.conf` and set `permissive = 1`. -. Restart the service with the command `sudo systemctl restart fapolicyd.service`. +. Restart the service with the command + + `sudo systemctl restart fapolicyd.service` -In environments where STIG controls are routinely audited, discuss waiving the fapolicy-related STIG controls with your security auditor. [NOTE] ==== -If the {RHEL} 8 STIG is also applied to the installation host, the default fapolicyd configuration causes the {PlatformNameShort} installer to fail. In this case, the recommendation is to set fapolicyd to permissive mode on the installation host. +If this security control is also applied to the installation host, the default `fapolicyd` configuration causes the {PlatformNameShort} installation program to fail. In this case, the recommendation is to set `fapolicyd` to permissive mode on the installation host. ==== diff --git a/downstream/modules/aap-hardening/proc-file-systems-mounted-noexec.adoc b/downstream/modules/aap-hardening/proc-file-systems-mounted-noexec.adoc index bfd7caf517..2103f24faf 100644 --- a/downstream/modules/aap-hardening/proc-file-systems-mounted-noexec.adoc +++ b/downstream/modules/aap-hardening/proc-file-systems-mounted-noexec.adoc @@ -7,13 +7,14 @@ [role="_abstract"] -The {RHEL} 8 STIG requires that a number of file systems are mounted with the `noexec` option to prevent execution of binaries located in these file systems. The {PlatformNameShort} installer runs a preflight check that will fail if any of the following file systems are mounted with the `noexec` option: +A compliance profile might require that certain file systems are mounted with the `noexec` option to prevent execution of binaries located in these file systems. The {PlatformNameShort} RPM-based installation program runs a preflight check that fails if any of the following file systems are mounted with the `noexec` option: * `/tmp` * `/var` * `/var/tmp` -To install {PlatformNameShort}, you must re-mount these file systems with the `noexec` option removed. Once installation is complete, proceed with the following steps: +To install {PlatformNameShort}, you must re-mount these file systems with the `noexec` option removed. +When installation is complete, proceed with the following steps: . Reapply the `noexec` option to the `/tmp` and `/var/tmp` file systems. . Change the {ControllerName} job execution path from `/tmp` to an alternate directory that does not have the `noexec` option enabled. diff --git a/downstream/modules/aap-hardening/proc-implement-security-control.adoc b/downstream/modules/aap-hardening/proc-implement-security-control.adoc new file mode 100644 index 0000000000..0313949c56 --- /dev/null +++ b/downstream/modules/aap-hardening/proc-implement-security-control.adoc @@ -0,0 +1,47 @@ +[id="proc-implement-security-control"] + += Implementing security control + +Some of the following examples of meeting compliance requirements come from the US DoD _Security Technical Implementation Guide_ (STIG), but are grounded in integrity and security best practices. + +{ControllerNameStart} must use external log providers that can collect user activity logs in independent, protected repositories to prevent modification or repudiation.
+{ControllerNameStart} must be configured to use external logging to compile log records from multiple components within the server. +The logged events must be time-correlated to support accurate forensic analysis. +In addition, the correlation must meet certain tolerance criteria. + + +The following steps implement the security control: + +.Procedure +. Log in to {ControllerName} as an administrator. +. From the navigation panel, select {MenuSetLogging}. +. On the *Logging settings* page, click btn:[Edit]. +. Set the following fields: + +* Set *Logging Aggregator* to `Not configured`. This is the default. +* Set *Enable External Logging* to `On`. +* Set *Logging Aggregator Level Threshold* to `DEBUG`. +* Set *TCP Connection Timeout* to 5 (the default) or to the organizational timeout. +* Set *Enable/disable HTTPS certificate verification* to `On`. +. Click btn:[Save]. + +{ControllerNameStart} must allocate log record storage capacity and shut down by default upon log failure (unless availability is an overriding concern). +It is critical that when a system is at risk of failing to process logs, it detects and takes action to mitigate the failure. +Log processing failures include software/hardware errors, failures in the log capturing mechanisms, and log storage capacity being reached or exceeded. +During a failure, the application server must be configured to shut down unless the application server is part of a high availability system. +When availability is an overriding concern, other approved actions in response to a log failure are as follows: + +. If the failure was caused by the lack of log record storage capacity, the application must continue generating log records if possible (automatically restarting the log service if necessary), overwriting the oldest log records in a first-in-first-out manner. +. If log records are sent to a centralized collection server and communication with this server is lost or the server fails, the application must queue log records locally until communication is restored or until the log records are retrieved manually. +Upon restoration of the connection to the centralized collection server, action must be taken to synchronize the local log data with the collection server. ++ +The following steps implement the security control: + +.. Open a web browser and navigate to the logging API, `/api/v2/settings/logging/` ++ +Ensure that you are authenticated as an {ControllerName} administrator. +.. In the *Content* section, modify the following values: + +** `LOG_AGGREGATOR_ACTION_MAX_DISK_USAGE_GB` = organization-defined requirement for log buffering. +** `LOG_AGGREGATOR_MAX_DISK_USAGE_PATH` = `/var/lib/awx` +.. Click btn:[PUT]. \ No newline at end of file diff --git a/downstream/modules/aap-hardening/proc-implement-security-controller.adoc b/downstream/modules/aap-hardening/proc-implement-security-controller.adoc new file mode 100644 index 0000000000..804a2246c1 --- /dev/null +++ b/downstream/modules/aap-hardening/proc-implement-security-controller.adoc @@ -0,0 +1,31 @@ +[id="proc-implement-security-controller"] + += Implementing security control for each host + +{ControllerNameStart}'s log files must be accessible only by explicitly defined privileges. +If the confidentiality of {ControllerName} log files is compromised, an attacker can identify key information about the system that they might not otherwise be able to obtain, and use it to enumerate further information for privilege escalation or lateral movement.
+ +To implement the security control, use the following procedure: + +.Procedure +. As a system administrator for each {ControllerName} host, set the permissions and owner of the {ControllerName} NGINX log directory: + +* `chmod 770 /var/log/nginx` +* `chown nginx:root /var/log/nginx` + +. Set the permissions and owner of the {ControllerName} log directory: + +* `chmod 770 /var/log/tower` +* `chown awx:awx /var/log/tower` + +. Set the permissions and owner of the {ControllerName} supervisor log directory: + +* `chmod 770 /var/log/supervisor/` +* `chown root:root /var/log/supervisor/` + +{ControllerNameStart} must be configured to fail over to another system in the event of log subsystem failure. +{ControllerNameStart} hosts must be capable of failing over to another {ControllerName} host, which can handle application and logging functions upon detection of an application log processing failure. +This enables continual operation of the application and logging functions while minimizing the loss of operation for the users and loss of log data. + +* If {ControllerName} is not in a _high availability_ (HA) configuration, the administrator must reinstall {ControllerName}. + diff --git a/downstream/modules/aap-hardening/proc-implement-security-for-admin.adoc b/downstream/modules/aap-hardening/proc-implement-security-for-admin.adoc new file mode 100644 index 0000000000..6b1d3ad794 --- /dev/null +++ b/downstream/modules/aap-hardening/proc-implement-security-for-admin.adoc @@ -0,0 +1,28 @@ +// Module included in the following assemblies: +// downstream/assemblies/assembly-hardening-aap.adoc + +[id="proc-implement-security-for-admin"] + += Implementing security control for system administrators + +{ControllerNameStart} must generate the appropriate log records. +{ControllerNameStart}'s web server must log all details related to user sessions in support of troubleshooting, debugging, and forensic analysis. +Without a data logging feature, the organization loses an important auditing and analysis tool for event investigations. + +Use the following procedure to implement the security control as a system administrator for each {ControllerName} host: + +.Procedure +. From the navigation panel, select {MenuSetSystem}. The System Settings page is displayed. +. Click btn:[Edit]. +. Set the following: + +* *Enable Activity Stream* = On +* *Enable Activity Stream for Inventory Sync* = On +* *Organization Admins Can Manage Users and Teams* = On +* *All Users Visible to Organization Admins* = On +. Click btn:[Save]. + + + + + diff --git a/downstream/modules/aap-hardening/proc-install-user-pki.adoc b/downstream/modules/aap-hardening/proc-install-user-pki.adoc index 5e472e662d..20e84fa6c7 100644 --- a/downstream/modules/aap-hardening/proc-install-user-pki.adoc +++ b/downstream/modules/aap-hardening/proc-install-user-pki.adoc @@ -7,42 +7,55 @@ [role="_abstract"] -By default, {PlatformNameShort} creates self-signed PKI certificates for the infrastructure components of the platform. Where an existing PKI infrastructure is available, certificates must be generated for the {ControllerName}, {PrivateHubName}, {EDAcontroller}, and the postgres database server. Copy the certificate files and their relevant key files to the installer directory, along with the CA certificate used to verify the certificates. +By default, {PlatformNameShort} creates self-signed _Public Key Infrastructure_ (PKI) certificates for the infrastructure components of the platform.
+Where an existing PKI infrastructure is available, certificates must be generated for the {ControllerName}, {PrivateHubName}, {EDAcontroller}, and the postgres database server. +Copy the certificate files and their relevant key files to the installation program directory, along with the CA certificate used to verify the certificates. Use the following inventory variables to configure the infrastructure components with the new certificates. .PKI certificate inventory variables -|=== -| *Variable* | *Details* -| `custom_ca_cert` | The file name of the CA certificate located in the installer directory. +|=== +| *RPM Variable* | *Containerized Variable* | *Details* +| `custom_ca_cert` | `custom_ca_cert` | The path to the custom CA certificate file. -| `web_server_ssl_cert` | The file name of the {ControllerName} PKI certificate located in the installer directory. +If set, this installs a custom CA certificate to the system truststore. -| `web_server_ssl_key` | The file name of the {ControllerName} PKI key located in the installer directory. +| `web_server_ssl_cert` | `controller_tls_cert` | The file name of the {ControllerName} PKI certificate located in the installation program directory. -| `automationhub_ssl_cert` | The file name of the {PrivateHubName} PKI certificate located in the installer directory. +| `web_server_ssl_key` | `controller_tls_key` | The file name of the {ControllerName} PKI key located in the installation program directory. -| `automationhub_ssl_key` | The file name of the {PrivateHubName} PKI key located in the installer directory. +| `automationhub_ssl_cert` | `hub_tls_cert` | The file name of the {PrivateHubName} PKI certificate located in the installation program directory. -| `postgres_ssl_cert` | The file name of the database server PKI certificate located in the installer directory. This variable is only needed for the installer-managed database server, not if a third-party database is used. +| `automationhub_ssl_key` | `hub_tls_key` | The file name of the {PrivateHubName} PKI key located in the installation program directory. -| `postgres_ssl_key` | The file name of the database server PKI certificate located in the installer directory. This variable is only needed for the installer-managed database server, not if a third-party database is used. +| `postgres_ssl_cert` | `postgresql_tls_cert` | The file name of the database server PKI certificate located in the installation program directory. This variable is only needed for the installer-managed database server, not if a third-party database is used. -| `automationedacontroller_ssl_cert` | The file name of the {EDAcontroller} PKI certificate located in the installer directory. +| `postgres_ssl_key` | `postgresql_tls_key` | The file name of the database server PKI key located in the installation program directory. This variable is only needed for the installer-managed database server, not if a third-party database is used. -| `automationedacontroller_ssl_key` | The file name of the {EDAcontroller} PKI key located in the installer directory. -|=== +| `automationedacontroller_ssl_cert` | `eda_tls_cert` | The file name of the {EDAcontroller} PKI certificate located in the installation program directory. -When multiple {ControllerName} are deployed with a load balancer, the `web_server_ssl_cert` and `web_server_ssl_key` are shared by each controller. To prevent hostname mismatches, the certificate's Common Name (CN) must match the DNS FQDN used by the load balancer.
This also applies when deploying multiple {PrivateHubName} and the `automationhub_ssl_cert` and `automationhub_ssl_key` variables. If your organizational policies require unique certificates for each service, each certificate requires a Subject Alt Name (SAN) that matches the DNS FQDN used for the load-balanced service. To install unique certificates and keys on each {ControllerName}, the certificate and key variables in the installation inventory file must be defined as per-host variables instead of in the `[all:vars]` section. For example: +| `automationedacontroller_ssl_key` | `eda_tls_key` | The file name of the {EDAcontroller} PKI key located in the installation program directory. +| - | `gateway_tls_cert` | The file name of the {Gateway} PKI certificate located in the installation program directory. +| - | `gateway_tls_key` | The file name of the {Gateway} PKI key located in the installation program directory. +|=== + +When multiple {Gateway}s are deployed with a load balancer, `gateway_tls_cert` and `gateway_tls_key` are shared by each {Gateway}. +To prevent hostname mismatches, the certificate's _Common Name_ (CN) must match the DNS FQDN used by the load balancer. +//This also applies when deploying multiple {PrivateHubName} and the `automationhub_ssl_cert` and `automationhub_ssl_key` variables. +If your organizational policies require unique certificates for each service, each certificate requires a _Subject Alt Name_ (SAN) that matches the DNS FQDN used for the load-balanced service. +To install unique certificates and keys on each {Gateway}, the certificate and key variables in the installation inventory file must be defined as per-host variables instead of in the `[all:vars]` section. +For example: ---- +[automationgateway] +gateway0.example.com gateway_tls_cert=/path/to/cert0 gateway_tls_key=/path/to/key0 +gateway1.example.com gateway_tls_cert=/path/to/cert1 gateway_tls_key=/path/to/key1 + [automationcontroller] controller0.example.com web_server_ssl_cert=/path/to/cert0 web_server_ssl_key=/path/to/key0 controller1.example.com web_server_ssl_cert=/path/to/cert1 web_server_ssl_key=/path/to/key1 controller2.example.com web_server_ssl_cert=/path/to/cert2 web_server_ssl_key=/path/to/key2 ----- ----- [automationhub] hub0.example.com automationhub_ssl_cert=/path/to/cert0 automationhub_ssl_key=/path/to/key0 hub1.example.com automationhub_ssl_cert=/path/to/cert1 automationhub_ssl_key=/path/to/key1 diff --git a/downstream/modules/aap-hardening/proc-namespaces.adoc b/downstream/modules/aap-hardening/proc-namespaces.adoc index 581b5e94c6..e306d1d11c 100644 --- a/downstream/modules/aap-hardening/proc-namespaces.adoc +++ b/downstream/modules/aap-hardening/proc-namespaces.adoc @@ -7,9 +7,10 @@ [role="_abstract"] -The {RHEL} 8 STIG requires that the kernel setting `user.max_user_namespaces` is set to "0", but only if Linux containers are not in use. Because {PlatformNameShort} uses containers as part of its {ExecEnvShort} capability, this STIG control does not apply to the {ControllerName}. +A compliance profile might require that the kernel setting `user.max_user_namespaces` is set to "0", to prevent the launch of Linux containers. +The DISA STIG, for example, specifically requires this control but only if Linux containers are not required. Because {PlatformNameShort} can be installed and operated in containers and also uses containers as part of its {ExecEnvShort} capability, Linux containers are required and this control must be disabled.
-To check the `user.max_user_namespaces` kernel setting, complete the following steps: +To check the `user.max_user_namespaces` kernel setting, complete the following steps on each {PlatformNameShort} component in the installation inventory: . Log in to your {ControllerName} at the command line. . Run the command `sudo sysctl user.max_user_namespaces`. diff --git a/downstream/modules/aap-hardening/ref-aap-account-planning.adoc b/downstream/modules/aap-hardening/ref-aap-account-planning.adoc new file mode 100644 index 0000000000..019c00b5b3 --- /dev/null +++ b/downstream/modules/aap-hardening/ref-aap-account-planning.adoc @@ -0,0 +1,40 @@ +[id="ref-aap-account-planning"] + += {PlatformNameShort} account planning + +{PlatformNameShort} user accounts for accessing the user interface or API can either be local (stored in the {PlatformNameShort} database) or mapped to an external authentication source, such as a _Lightweight Directory Access Protocol_ (LDAP) server. +This guide recommends that where possible, all primary user accounts should be mapped to an external authentication source. +Using external account sources eliminates a source of error when working with permissions in this context and minimizes the amount of time devoted to maintaining a full set of users exclusively within {PlatformNameShort}. +This includes accounts assigned to individual persons as well as accounts for non-person entities, such as service accounts used for external application integration. +Reserve any local accounts, such as the default “admin” account, for emergency access or “break glass” scenarios where the external authentication mechanism is not available. + +{PlatformNameShort} supports the following external authentication sources: + +* LDAP +* SAML +* TACACS+ +* Radius +* Azure Active Directory +* Google OAuth +* Generic OIDC +* Keycloak +* GitHub +* GitHub Organization +* GitHub team +* GitHub enterprise +* GitHub enterprise organization +* GitHub enterprise team + +Choose an authentication mechanism that adheres to your organization's authentication policies and refer to link:{LinkCentralAuth} to understand the prerequisites for the relevant authentication mechanism. +The authentication mechanism used must ensure that the authentication-related traffic between {PlatformNameShort} and the authentication back-end is encrypted when the traffic occurs on a public or non-secure network (for example, LDAPS or LDAP over TLS, or HTTPS for OAuth2 and SAML providers). + +In the {PlatformNameShort} UI, any “system administrator” account can edit, change, and update any inventory or automation definition. Restrict these account privileges to the minimum set of users possible for low-level automation controller configuration and disaster recovery. + +[NOTE] +==== +{PlatformNameShort} {PlatformVers} introduces a new central authentication mechanism used by all of the platform components: + +* {ControllerNameStart} +* {PrivateHubNameStart} +* {EDAcontroller} + +Before {PlatformVers}, each of these components had its own authentication configuration.
+==== diff --git a/downstream/modules/aap-hardening/ref-aap-operational-secrets.adoc b/downstream/modules/aap-hardening/ref-aap-operational-secrets.adoc new file mode 100644 index 0000000000..1f430614d1 --- /dev/null +++ b/downstream/modules/aap-hardening/ref-aap-operational-secrets.adoc @@ -0,0 +1,153 @@ +// Module included in the following assemblies: +// downstream/assemblies/assembly-hardening-aap.adoc + +[id="ref-aap-operational-secrets_{context}"] + += {PlatformNameShort} operational secrets + +{PlatformNameShort} uses several secrets (passwords, keys, and so on) operationally. +These secrets are stored unencrypted on the various {PlatformNameShort} servers, as each component service must read them at startup. +All files are protected by Unix permissions, and restricted to the root user or the appropriate service account user. +These files should be routinely monitored to ensure there has been no unauthorized access or modification. + +== RPM-based installation secrets + +The following table provides the location of these secrets for RPM-based installations of {PlatformNameShort}. + +.{PlatformNameShort} operational secrets + +|=== +2+| *{ControllerNameStart} secrets* +| *File path* | *Description* +| `/etc/tower/SECRET_KEY` | A secret key used for encrypting automation secrets in the database. If the `SECRET_KEY` changes or is unknown, no encrypted fields in the database will be accessible. + +| `/etc/tower/tower.cert` + +`/etc/tower/tower.key` | SSL certificate and key for the {ControllerName} web service. + +A self-signed certificate and key are installed by default; you can provide a locally appropriate certificate and key. + +For more information, see link:{URLHardening}/hardening-aap#proc-install-user-pki_hardening-aap[Installing with user-provided PKI certificates]. + +| `/etc/tower/conf.d/postgres.py` | Contains the password used by {ControllerName} to connect to the database, unless TLS authentication is used for the database connection. + +| `/etc/tower/conf.d/channels.py` | Contains the secret used by {ControllerName} for websocket broadcasts. + +| `/etc/tower/conf.d/gateway.py` | Contains the key used by {ControllerName} to sync state with the {Gateway}. + +2+| *{GatewayStart} secrets* +| *File path* | *Description* + +| `/etc/ansible-automation-platform/gateway/SECRET_KEY` | A secret key used for encrypting automation secrets in the database. +If the `SECRET_KEY` changes or is unknown, the {Gateway} cannot access the encrypted secrets in the database. + +| `/etc/ansible-automation-platform/gateway/gateway.cert` | SSL certificate for the {Gateway} web service. + +A self-signed certificate is installed by default, although a user-provided certificate and key pair can be used. + +For more information, see link:{URLHardening}/hardening-aap#proc-install-user-pki_hardening-aap[Installing with user-provided PKI certificates]. + +| `/etc/ansible-automation-platform/gateway/gateway.key` | SSL key for the {Gateway} web service. + +A self-signed certificate is installed by default, although a user-provided certificate and key pair can be used. + +For more information, see link:{URLHardening}/hardening-aap#proc-install-user-pki_hardening-aap[Installing with user-provided PKI certificates]. + +| `/etc/ansible-automation-platform/gateway/cache.cert` | SSL certificate used for mutual TLS (mTLS) authentication with the Redis cache used by the {Gateway}.
+ +| `/etc/ansible-automation-platform/gateway/cache.key` | SSL key used for mutual TLS (mTLS) authentication with the Redis cache used by the {Gateway}. + +| `/etc/ansible-automation-platform/gateway/settings.py` | Contains the password used by the {Gateway} to connect to the database, unless TLS authentication is used for the database connection. +Also contains the password used to connect to the Redis cache used by the {Gateway}. + +2+| *{HubNameStart} secrets* +| *File path* | *Description* +| `/etc/pulp/settings.py` | Contains the password used by {HubName} to connect to the database, unless TLS authentication is used for the database connection. Contains the Django secret key used by the {HubName} web service. + +| `/etc/pulp/certs/token_public_key.pem` | OpenSSL public key in PEM format for the {HubName} EE token authentication. It is generated by default from the `token_private_key.pem` file. + +| `/etc/pulp/certs/token_private_key.pem` | OpenSSL private key in PEM format for the {HubName} EE token authentication. It is generated by default, although a user can provide their own private key with the `pulp_token_auth_key` installation inventory variable. + +| `/etc/pulp/certs/pulp_webserver.crt` | SSL certificate for the {HubName} web service. + +A self-signed certificate is installed by default, although a user-provided certificate and key pair can be used. + +For more information, see link:{URLHardening}/hardening-aap#proc-install-user-pki_hardening-aap[Installing with user-provided PKI certificates]. + +| `/etc/pulp/certs/pulp_webserver.key` | SSL key for the {HubName} web service. + +A self-signed certificate is installed by default, although a user-provided certificate and key pair can be used. + +For more information, see link:{URLHardening}/hardening-aap#proc-install-user-pki_hardening-aap[Installing with user-provided PKI certificates]. + +| `/etc/pulp/certs/database_fields.symmetric.key` | A key used for encrypting sensitive fields in the {HubName} database table. + +If the key changes or is unknown, {HubName} cannot access the encrypted fields in the database. + +2+| *{EDAName} secrets* +| *File path* | *Description* +| `/etc/ansible-automation-platform/eda/SECRET_KEY` | A secret key used for encrypting fields in the {EDAName} controller database table. + +If the `SECRET_KEY` changes or is unknown, the {EDAName} controller cannot access the encrypted fields in the database. + +| `/etc/ansible-automation-platform/eda/settings.yaml` | Contains the password used by the {EDAName} controller to connect to the database, unless TLS authentication is used for the database connection. + +Contains the password used to connect to the Redis cache used by the {EDAName} controller. + +Contains the key used by the {EDAName} controller to sync state with the {Gateway}. + +| `/etc/ansible-automation-platform/eda/server.cert` | SSL certificate for the {EDAName} controller web service. + +A self-signed certificate is installed by default, although a user-provided certificate and key pair can be used. + +For more information, see link:{URLHardening}/hardening-aap#proc-install-user-pki_hardening-aap[Installing with user-provided PKI certificates]. + +| `/etc/ansible-automation-platform/eda/server.key` | SSL key for the {EDAName} controller web service. + +A self-signed certificate is installed by default, although a user-provided certificate and key pair can be used.
+ +For more information, see link:{URLHardening}/hardening-aap#proc-install-user-pki_hardening-aap[Installing with user-provided PKI certificates]. + +| `/etc/ansible-automation-platform/eda/cache.cert` | SSL certificate used for mutual TLS (mTLS) authentication with the Redis cache used by the {EDAName} controller. + +| `/etc/ansible-automation-platform/eda/cache.key` | SSL key used for mutual TLS (mTLS) authentication with the Redis cache used by the {EDAName} controller. + +| `/etc/ansible-automation-platform/eda/websocket.cert` | SSL certificate for the {EDAName} controller websocket endpoint. + +A self-signed certificate is installed by default, although a user-provided certificate and key pair can be used. + +For more information, see link:{URLHardening}/hardening-aap#proc-install-user-pki_hardening-aap[Installing with user-provided PKI certificates]. + +| `/etc/ansible-automation-platform/eda/websocket.key` | SSL key for the {EDAName} controller websocket endpoint. + +A self-signed certificate is installed by default, although a user-provided certificate and key pair can be used. + +For more information, see link:{URLHardening}/hardening-aap#proc-install-user-pki_hardening-aap[Installing with user-provided PKI certificates]. + +2+| *Redis secrets* +| *File path* | *Description* +| `/etc/ansible-automation-platform/ca/ansible-automation-platform-managed-ca-cert.crt` | SSL certificate for the internal self-signed certificate authority used by the {Installer} to generate the default self-signed certificates for each component service. + +| `/etc/ansible-automation-platform/ca/ansible-automation-platform-managed-ca-cert.key` | SSL key for the internal self-signed certificate authority used by the {Installer} to generate the default self-signed certificates for each component service. +|=== + +[NOTE] +==== +Some of these file locations reflect the previous product name of {ControllerName}, formerly named Ansible Tower. +==== + +== Container-based installation secrets + +The secrets listed for RPM-based installations are also used in container-based installations, but they are stored in a different manner. +Container-based installations of {PlatformName} use Podman secrets to store operational secrets. +These secrets can be listed using the `podman secret list` command. + +By default, Podman stores data in the home directory of the user who installed and runs the containerized {PlatformName} services. +Podman secrets are stored in the file `$HOME/.local/share/containers/storage/secrets/filedriver/secretsdata.json` as base64-encoded strings, so while they are not in plain text, the values are only obfuscated. + +The data stored in a Podman secret can be shown using the command `podman secret inspect --showsecret <secret_name>`. + +This file should be routinely monitored to ensure there has been no unauthorized access or modification.
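+
+For example, the following is a minimal sketch of reviewing the stored secrets and the permissions on the secrets store; `<secret_name>` is a placeholder:
+
+----
+# List the Podman secrets created by the containerized installer
+$ podman secret list
+
+# Display the decoded value of a single secret
+$ podman secret inspect --showsecret <secret_name>
+
+# Confirm that only the installing user can read the secrets store
+$ stat -c '%U %a %n' ~/.local/share/containers/storage/secrets/filedriver/secretsdata.json
+----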
+ + + diff --git a/downstream/modules/aap-hardening/ref-architecture.adoc b/downstream/modules/aap-hardening/ref-architecture.adoc index c171814398..b1e932630d 100644 --- a/downstream/modules/aap-hardening/ref-architecture.adoc +++ b/downstream/modules/aap-hardening/ref-architecture.adoc @@ -3,15 +3,24 @@ [id="ref-architecture_{context}"] -= {PlatformNameShort} reference architecture += {PlatformNameShort} deployment topologies [role="_abstract"] -For large-scale production environments with availability requirements, this guide recommends deploying the components described in section 2.1 of this guide using the instructions in the xref:ref-architecture_{context}[reference architecture] documentation for {PlatformName} on {RHEL}. While some variation may make sense for your specific technical requirements, following the reference architecture results in a supported production-ready environment. +Install {PlatformNameShort} {PlatformVers} based on one of the documented and tested deployment reference architectures defined in link:{LinkTopologies}. +Enterprise organizations should use one of the enterprise reference architectures for production deployments to ensure the highest level of uptime, performance, and continued scalability. +Organizations or deployments that are resource constrained can use a "growth" reference architecture. +Review the link:{LinkTopologies} document to determine the reference architecture that best suits your requirements. +The chosen reference architecture includes planning information such as an architecture diagram, the number of {RHEL} servers required, the network ports and protocols used by the deployment, and load balancer information for enterprise architectures. + +It is possible to install {PlatformNameShort} on different infrastructure reference architectures and with different environment configurations. Red Hat does not fully test architectures outside of published reference architectures. Red Hat recommends using a tested reference architecture for all new deployments and provides commercially reasonable support for deployments that meet minimum requirements. + +//This diagram might need to be updated. +The following diagram shows a tested container-based enterprise architecture: .Reference architecture overview -image::aap-ref-architecture-322.png[Reference architecture for an example setup of an {PlatformNameShort} deployment for large scale production environments] +image::cont-b-env-a.png[Infrastructure reference architecture that Red Hat has tested that customers can use when self-managing {PlatformNameShort}] -{EDAName} is a new feature of {PlatformNameShort} {PlatformVers} that was not available when the reference architecture detailed in Figure 1: Reference architecture overview was originally written. Currently, the supported configuration is a single {ControllerName}, single {HubName}, and single {EDAController} node with external (installer managed) database. For an organization interested in {EDAName}, the recommendation is to install according to the configuration documented in the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/assembly-platform-install-scenario#ref-single-controller-hub-eda-with-managed-db[{PlatformNameShort} Installation Guide]. This document provides additional clarifications when {EDAName} specific hardening configuration is required.
+//{EDAName} is a new feature of {PlatformNameShort} {PlatformVers} that was not available when the reference architecture detailed in Figure 1: Reference architecture overview was originally written. Currently, the supported configuration is a single {ControllerName}, single {HubName}, and single {EDAController} node with external (installer managed) database. For an organization interested in {EDAName}, the recommendation is to install according to the configuration documented in the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/assembly-platform-install-scenario#ref-single-controller-hub-eda-with-managed-db[{PlatformNameShort} Installation Guide]. This document provides additional clarifications when {EDAName} specific hardening configuration is required. -For smaller production deployments where the full reference architecture may not be needed, this guide recommends deploying {PlatformNameShort} with a dedicated PostgreSQL database server whether managed by the installer or provided externally. +//For smaller production deployments where the full reference architecture may not be needed, this guide recommends deploying {PlatformNameShort} with a dedicated PostgreSQL database server whether managed by the installer or provided externally. diff --git a/downstream/modules/aap-hardening/ref-configure-centralized-logging.adoc b/downstream/modules/aap-hardening/ref-configure-centralized-logging.adoc new file mode 100644 index 0000000000..5c7693eb6c --- /dev/null +++ b/downstream/modules/aap-hardening/ref-configure-centralized-logging.adoc @@ -0,0 +1,26 @@ +[id="ref-configure-centralized-logging"] + += Configure centralized logging + +Centralized logging is essential for monitoring system security and performing large-scale log analysis. +The _Confidentiality, Integrity, and Availability_ (CIA) triad, which originated from a combination of ideas from the US military and government, is the foundational model for proper security system development and best practices. Centralized logging falls under the Integrity aspect, helping to identify whether data or systems have been tampered with. +Logging to a centralized system enables troubleshooting automation across multiple systems by collecting all logs in a single location, making it easier to identify issues, analyze trends, and correlate events from different servers, especially in a complex {PlatformNameShort} deployment. +Manually checking individual machines would be time consuming, so centralized logging is valuable for debugging in addition to meeting security best practices. +This supports overall system health and stability and assists in identifying potential security threats. +In addition to the logging configuration itself, take into consideration how to handle logging failures caused by exhausted storage capacity or hardware failure, and whether a high availability architecture is required. + +There are several additional benefits, including: + +* The data is sent in JSON format over an HTTP connection using minimal service-specific tweaks engineered in a custom handler or through an imported library. +The types of data that are most useful to the controller are job fact data, job events/job runs, activity stream data, and log messages. +* Deeper insights into the automation process by analyzing logs from different parts of the infrastructure, including playbook execution details, task outcomes, and system events.
diff --git a/downstream/modules/aap-hardening/ref-dns-load-balancing.adoc b/downstream/modules/aap-hardening/ref-dns-load-balancing.adoc
index 7690611d70..2b82a8d995 100644
--- a/downstream/modules/aap-hardening/ref-dns-load-balancing.adoc
+++ b/downstream/modules/aap-hardening/ref-dns-load-balancing.adoc
@@ -7,22 +7,6 @@
 
 [role="_abstract"]
-When using a load balancer with {PlatformNameShort} as described in the reference architecture, an additional FQDN is needed for each load-balanced component ({ControllerName} and {PrivateHubName}).
+When using a load balancer with {PlatformNameShort} as described in the deployment topology, an additional FQDN is needed for the load balancer.
+For example, an FQDN such as `aap.example.com` might be used for the load balancer, which in turn directs traffic to each of the {Gateway} components defined in the installation inventory.
 
-For example, if the following hosts are defined in the {PlatformNameShort} installer inventory file:
-
------
-[automationcontroller]
-controller0.example.com
-controller1.example.com
-controller2.example.com
-
-[automationhub]
-hub0.example.com
-hub1.example.com
-hub2.example.com
------
-
-Then the load balancer can use the FQDNs `controller.example.com` and `hub.example.com` for the user-facing name of these {PlatformNameShort} services.
-
-When a load balancer is used in front of the {PrivateHubName}, the installer must be aware of the load balancer FQDN. Before installing {PlatformNameShort}, in the installation inventory file set the `automationhub_main_url` variable to the FQDN of the load balancer. For example, to match the previous example, you would set the variable to `automationhub_main_url = hub.example.com`.
diff --git a/downstream/modules/aap-hardening/ref-dns.adoc b/downstream/modules/aap-hardening/ref-dns.adoc
index e256ae0797..49c5e5c1c1 100644
--- a/downstream/modules/aap-hardening/ref-dns.adoc
+++ b/downstream/modules/aap-hardening/ref-dns.adoc
@@ -1,8 +1,10 @@
-// Moduel included in the following assemblies:
+// Module included in the following assemblies:
 // downstream/assemblies/assembly-hardeing-aap.adoc
+[id="ref-dns"]
 
 = DNS
 
 [role="_abstract"]
-When installing {PlatformNameShort}, the installer script checks that certain infrastructure servers are defined with a Fully Qualified Domain Name (FQDN) in the installer inventory. This guide recommends that all {PlatformNameShort} infrastructure nodes have a valid FQDN defined in DNS which resolves to a routable IP address, and that these FQDNs be used in the installer inventory file. \ No newline at end of file
+When installing {PlatformNameShort}, the {Installer} script checks that certain infrastructure servers are defined with a _Fully Qualified Domain Name_ (FQDN) in the installer inventory.
+This guide recommends that all {PlatformNameShort} infrastructure nodes have a valid FQDN defined in DNS that resolves to a routable IP address, and that these FQDNs be used in the installer inventory file.
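+
+As an illustration of the FQDN and load balancer guidance in these modules, the DNS records and inventory entries might look like the following sketch. The host names, IP addresses, and group name are assumptions; the group name follows the RPM-based installer convention and might differ for your installer:
+
+----
+; DNS records: each node has a routable FQDN, and the
+; load balancer owns the user-facing name
+gateway1.example.com.  IN A  192.0.2.11
+gateway2.example.com.  IN A  192.0.2.12
+aap.example.com.       IN A  192.0.2.10   ; load balancer virtual IP
+
+# Corresponding {Gateway} entries in the installer inventory
+[automationgateway]
+gateway1.example.com
+gateway2.example.com
+----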
\ No newline at end of file
diff --git a/downstream/modules/aap-hardening/ref-infrastructure-as-code.adoc b/downstream/modules/aap-hardening/ref-infrastructure-as-code.adoc
index 1b8f700125..afe4a056dc 100644
--- a/downstream/modules/aap-hardening/ref-infrastructure-as-code.adoc
+++ b/downstream/modules/aap-hardening/ref-infrastructure-as-code.adoc
@@ -3,11 +3,13 @@
 [id="ref-infrastructure-as-code_{context}"]
 
-= Use infrastructure as code paradigm
+= Use a configuration as code paradigm
 
 [role="_abstract"]
-The Red Hat Community of Practice has created a set of automation content available via collections to manage {PlatformNameShort} infrastructure and configuration as code. This enables automation of the platform itself through Infrastructure as Code (IaC) or Configuration as Code (CaC). While many of the benefits of this approach are clear, there are critical security implications to consider.
+The Red Hat Community of Practice has created a set of automation content available through collections to manage {PlatformNameShort} infrastructure and configuration as code.
+This enables automation of the platform itself through _Configuration as Code_ (CaC).
+While many of the benefits of this approach are clear, there are security implications to consider.
 
 The following Ansible content collections are available for managing {PlatformNameShort} components using an infrastructure as code methodology, all of which are found on the link:https://console.redhat.com/ansible/automation-hub[Ansible Automation Hub]:
 
@@ -16,21 +18,25 @@ The following Ansible content collections are available for managing {PlatformNa
 | *Validated Collection* | *Collection Purpose*
 
 | `infra.aap_utilities` | Ansible content for automating day 1 and day 2 operations of {PlatformNameShort}, including installation, backup and restore, certificate management, and more.
 
-| `infra.controller_configuration` | A collection of roles to manage {ControllerName} components, including managing users and groups (RBAC), projects, job templates and workflows, credentials, and more.
-
-| `infra.ah_configuration` | Ansible content for interacting with {HubName}, including users and groups (RBAC), collection upload and management, collection approval, managing the {ExecEnvShort} image registry, and more.
+| `infra.aap_configuration` | A collection of roles to manage the creation of {PlatformNameShort} components, including users and groups (RBAC), projects, job templates and workflows, credentials, and more. This collection includes functionality from the older `infra.controller_configuration`, `infra.ah_configuration`, and `infra.eda_configuration` collections, and should be used in their place with {PlatformNameShort} {PlatformVers}.
 
 | `infra.ee_utilities` | A collection of roles for creating and managing {ExecEnvShort} images, or migrating from the older Tower virtualenvs to execution environments.
 |===
 
-Many organizations use CI/CD platforms to configure pipelines or other methods to manage this type of infrastructure. However, using {PlatformNameShort} natively, a webhook can be configured to link a Git-based repository natively. In this way, Ansible can respond to git events to trigger Job Templates directly. This removes the need for external CI components from this overall process and thus reduces the attack surface.
+Many organizations use CI/CD platforms to configure pipelines or other methods to manage this type of infrastructure.
+However, {PlatformNameShort} can natively link a Git-based repository through a configured webhook.
+In this way, Ansible can respond to Git events to trigger Job Templates directly.
+This removes the need for external CI components from this overall process and thus reduces the attack surface.
 
-These practices allow version control of all infrastructure and configuration. Apply Git best practices to ensure proper code quality inspection prior to being synchronized into {PlatformNameShort}. Relevant Git best practices include the following:
+These practices enable version control of all infrastructure and configuration.
+Apply Git best practices to ensure proper code quality inspection before content is synchronized into {PlatformNameShort}. Relevant Git best practices include the following:
 
 * Creating pull requests.
 * Ensuring that inspection tools are in place.
 * Ensuring that no plain text secrets are committed.
 * Ensuring that pre-commit hooks and any other policies are followed.
 
-IaC also encourages using external vault systems which removes the need to store any sensitive data in the repository, or deal with having to individually vault files as needed. For more information on using external vault systems, see section xref:con-external-credential-vault_{context}[2.3.2.3 External credential vault considerations] within this guide.
+CaC also encourages using external vault systems, which removes the need to store any sensitive data in the repository or to individually vault files as needed.
+This is particularly important when storing {PlatformNameShort} configuration in a source code repository, as {ControllerName} credentials and {EDAName} credentials must be provided to the collection variables in plain text, which should not be committed to a source repository.
+For more information on using external vault systems, see the link:{URLHardening}/hardening-aap#con-external-credential-vault_hardening-aap[External credential vault considerations] section in this guide.
diff --git a/downstream/modules/aap-hardening/ref-infrastructure-server-account-planning.adoc b/downstream/modules/aap-hardening/ref-infrastructure-server-account-planning.adoc
new file mode 100644
index 0000000000..870e9abe41
--- /dev/null
+++ b/downstream/modules/aap-hardening/ref-infrastructure-server-account-planning.adoc
@@ -0,0 +1,13 @@
+[id="ref-infrastructure-server-account-planning"]
+
+= Infrastructure server account planning
+
+For user accounts on the RHEL servers that run {PlatformNameShort} services, follow your organizational policies to determine whether individual user accounts should be local or should use an external authentication source.
+Only users who have a valid need to perform maintenance tasks on the {PlatformNameShort} components themselves should be granted access to the underlying RHEL servers, as the servers store configuration files that contain sensitive information, such as encryption keys and service passwords.
+Because these individuals must have privileged access to maintain {PlatformNameShort} services, minimizing access to the underlying RHEL servers is critical. Do not grant untrusted users sudo access to the root account or to the local {PlatformNameShort} service accounts (`awx`, `pulp`, `postgres`).
+
+[NOTE]
+====
+Some local service accounts are created and managed by the RPM-based installation program.
+These particular accounts on the underlying RHEL hosts cannot come from an external authentication source.
+====
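+
+As a hedged illustration of scoping privileged access on the underlying RHEL servers, a sudoers drop-in might grant a trusted maintenance group only the specific commands it needs rather than a general root shell. The group name and command list are assumptions to adapt to your policy:
+
+----
+# /etc/sudoers.d/aap-maintenance (illustrative)
+# Members of aap-admins may manage platform services and read logs,
+# but are not granted an unrestricted root shell
+%aap-admins ALL=(root) /usr/bin/systemctl restart automation-controller.service, /usr/bin/journalctl
+----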
diff --git a/downstream/modules/aap-hardening/ref-initial-configuration.adoc b/downstream/modules/aap-hardening/ref-initial-configuration.adoc
index 697b7fa705..f039b974b1 100644
--- a/downstream/modules/aap-hardening/ref-initial-configuration.adoc
+++ b/downstream/modules/aap-hardening/ref-initial-configuration.adoc
@@ -7,9 +7,16 @@
 
 [role="_abstract"]
-Granting access to certain parts of the system exposes security vulnerabilities. Apply the following practices to help secure access:
+Granting access to certain parts of the system exposes security vulnerabilities.
+Apply the following practices to help secure access:
 
-* Minimize access to system administrative accounts. There is a difference between the user interface (web interface) and access to the operating system that the {ControllerName} is running on. A system administrator or root user can access, edit, and disrupt any system application. Anyone with root access to the controller has the potential ability to decrypt those credentials, and so minimizing access to system administrative accounts is crucial for maintaining a secure system.
-* Minimize local system access. {ControllerNameStart} should not require local user access except for administrative purposes. Non-administrator users should not have access to the controller system.
-* Enforce separation of duties. Different components of automation may need to access a system at different levels. Use different keys or credentials for each component so that the effect of any one key or credential vulnerability is minimized.
-* Restrict {ControllerName} to the minimum set of users possible for low-level controller configuration and disaster recovery only. In a controller context, any controller ‘system administrator’ or ‘superuser’ account can edit, change, and update any inventory or automation definition in the controller. \ No newline at end of file
+* Minimize access to system administrative accounts.
+There is a difference between the user interface (web interface) and access to the operating system that {ControllerName} is running on.
+A system administrator or superuser can access, edit, and disrupt any system application.
+Anyone with root access to {ControllerName} has the potential ability to decrypt the credentials stored there, so minimizing access to system administrative accounts is crucial for maintaining a secure system.
+* Minimize local system access. {ControllerNameStart} should not require local user access except for administrative purposes.
+Non-administrator users should not have access to the {ControllerName} system.
+* Enforce separation of duties.
+Different components of automation might need to access a system at different levels.
+Use different keys or credentials for each component so that the effect of any one key or credential vulnerability is minimized (see the example after this list).
+* Restrict {ControllerName} to the minimum set of users possible for low-level {ControllerName} configuration and disaster recovery only. In an {ControllerName} context, any {ControllerName} 'system administrator' or 'superuser' account can edit, change, and update any inventory or automation definition in {ControllerName}.
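+
+A minimal sketch of the separation-of-duties guidance: generate a distinct SSH key pair per automation area so that one compromised credential does not expose the others. The key names and comments are illustrative:
+
+----
+$ ssh-keygen -t ed25519 -f ~/.ssh/id_network_automation -C "network automation"
+$ ssh-keygen -t ed25519 -f ~/.ssh/id_linux_patching -C "linux patching"
+# Store each private key as a separate credential in {ControllerName},
+# scoped only to the job templates that need it.
+----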
\ No newline at end of file
diff --git a/downstream/modules/aap-hardening/ref-interactive-session-timeout.adoc b/downstream/modules/aap-hardening/ref-interactive-session-timeout.adoc
new file mode 100644
index 0000000000..42abd08032
--- /dev/null
+++ b/downstream/modules/aap-hardening/ref-interactive-session-timeout.adoc
@@ -0,0 +1,24 @@
+[id="ref-interactive-session-timeout"]
+
+= Interactive session timeout
+
+A compliance profile might require that an interactive session timeout be enforced.
+For example, the DISA STIG requires that all users be automatically logged out after 15 minutes of inactivity.
+The installation process often requires an hour or more to complete, and this control can stop the installation process and log out the user before installation is complete.
+The same also applies to day-two operations such as backup and restore, which in production environments often take longer than the recommended interactive session timeout.
+During these operations, increase the interactive session timeout to ensure the operation is successful.
+
+There are multiple ways in which this control can be enforced, including shell timeout variables, setting the idle session timeout for `systemd-logind`, or setting SSH connection timeouts, and different compliance profiles can use one or more of these methods.
+The one that most often interrupts the installation and day-two operations is the idle session timeout for `systemd-logind`, which was introduced in the DISA STIG version V2R1 ({RHEL} 8) and V2R2 ({RHEL} 9). To increase the idle session timeout for `systemd-logind`, as the root user:
+
+* Edit the file `/etc/systemd/logind.conf`.
+* If the `StopIdleSessionSec` setting is set to zero, no change is needed.
+* If the `StopIdleSessionSec` setting is non-zero, the session is terminated after that number of seconds.
++
+Set a higher value, for example `StopIdleSessionSec=7200`, to increase the timeout, then run `systemctl restart systemd-logind` to apply the change.
+* Log out of the interactive session entirely and log back in to ensure the new setting applies to the current login session.
+
+[NOTE]
+====
+This change only needs to be made on the installation host, or if an installation host is not used, the host where the {PlatformNameShort} {Installer} is run.
+====
\ No newline at end of file
diff --git a/downstream/modules/aap-hardening/ref-ntp.adoc b/downstream/modules/aap-hardening/ref-ntp.adoc
index 54e86ce600..360fa1b4b4 100644
--- a/downstream/modules/aap-hardening/ref-ntp.adoc
+++ b/downstream/modules/aap-hardening/ref-ntp.adoc
@@ -7,6 +7,8 @@
 
 [role="_abstract"]
-Configure each server in the {PlatformNameShort} infrastructure to synchronize time with an NTP pool or your organization's NTP service. This ensures that logging and auditing events generated by {PlatformNameShort} have an accurate time stamp, and that any scheduled jobs running from the {ControllerName} execute at the correct time.
+Configure each server in the {PlatformNameShort} infrastructure to synchronize time with a _Network Time Protocol_ (NTP) pool or your organization's NTP service.
+This ensures that logging and auditing events generated by {PlatformNameShort} have an accurate time stamp, and that any scheduled jobs running from the {ControllerName} execute at the correct time.
+This also prevents the systems within {PlatformNameShort} from rejecting messages from one another because of timeouts.
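+
+A minimal sketch of the corresponding chrony configuration; the pool address is an assumption, replace it with your organization's NTP service:
+
+----
+# /etc/chrony.conf (excerpt)
+pool ntp.example.com iburst
+
+# Enable the service and verify synchronization
+$ sudo systemctl enable --now chronyd
+$ chronyc tracking
+----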
 For information on configuring the chrony service for NTP synchronization, see link:{BaseURL}/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/configuring-time-synchronization_configuring-basic-system-settings#using-chrony_configuring-time-synchronization[Using Chrony] in the {RHEL} documentation.
diff --git a/downstream/modules/aap-hardening/ref-security-variables-install-inventory.adoc b/downstream/modules/aap-hardening/ref-security-variables-install-inventory.adoc
index ee96dc49a7..ee2b0c1c64 100644
--- a/downstream/modules/aap-hardening/ref-security-variables-install-inventory.adoc
+++ b/downstream/modules/aap-hardening/ref-security-variables-install-inventory.adoc
@@ -7,29 +7,72 @@
 
 [role="_abstract"]
-The installation inventory file defines the architecture of the {PlatformNameShort} infrastructure, and provides a number of variables that can be used to modify the initial configuration of the infrastructure components. For more information on the installer inventory, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_installation_guide/index#proc-editing-installer-inventory-file_platform-install-scenario[Ansible Automation Platform Installation Guide].
+The installation inventory file defines the architecture of the {PlatformNameShort} infrastructure and provides a number of variables that can be used to modify the initial configuration of the infrastructure components.
+For more information on the installer inventory, see link:{URLPlanningGuide}/about_the_installer_inventory_file[About the installer inventory file].
 
-The following table lists a number of security-relevant variables and their recommended values for creating the installation inventory.
+The following table lists a number of security-relevant variables and their recommended values for an RPM-based deployment.
 
 .Security-relevant inventory variables
+[cols="33%,33%,33%",options="header"]
 |===
-| *Variable* | *Recommended Value* | *Details*
-| `postgres_use_ssl` | true | The installer configures the installer-managed Postgres database to accept SSL-based connections when this variable is set.
+| *RPM deployment variable* | *Recommended Value* | *Details*
 
-| `pg_sslmode` | verify-full | By default, when the controller connects to the database, it tries an encrypted connection, but it is not enforced. Setting this variable to "verify-full" requires a mutual TLS negotiation between the controller and the database. The `postgres_use_ssl` variable must also be set to "true" for this `pg_sslmode` to be effective.
+| `postgres_use_ssl` | true | The installation program configures the installer-managed Postgres database to accept SSL-based connections when this variable is set to `true`.
+
+The default for this variable is `false`, which means SSL/TLS is not used for PostgreSQL connections.
+
+When set to `true`, the platform connects to PostgreSQL by using SSL/TLS.
+
+| `pg_sslmode` `automation_gateway_pg_sslmode` `automationhub_pg_sslmode` `automationcontroller_pg_sslmode` | verify-full | These variables control mutual TLS (mTLS) authentication to the database.
+By default, when each service connects to the database, it tries an encrypted connection, but it is not enforced.
+
+Setting these variables to `verify-full` enforces an mTLS negotiation between the service and the database.
+The `postgres_use_ssl` variable must also be set to `true` for this `pg_sslmode` setting to be effective.
+*NOTE*: If a third-party database is used instead of the installer-managed database, the third-party database must be set up independently to accept mTLS connections.
 
-| `nginx_disable_https` | false | If set to "true", this variable disables HTTPS connections to the controller. The default is "false", so if this variable is absent from the installer inventory it is effectively the same as explicitly defining the variable to "false".
+| `nginx_disable_hsts` `automation_gateway_disable_hsts` `automationhub_disable_hsts` `automationcontroller_disable_hsts` | false | If set to `true`, these variables disable _HTTP Strict Transport Security_ (HSTS) for each of the component web services.
+
+The default is `false`. If these variables are absent from the installer inventory, it is effectively equivalent to defining the variables as `false`.
+|===
 
-| `automationhub_disable_https` | false | If set to "true", this variable disables HTTPS connections to the {PrivateHubName}. The default is "false", so if this variable is absent from the installer inventory it is effectively the same as explicitly defining the variable to "false".
+The following table lists a number of security-relevant variables and their recommended values for a container-based deployment.
 
-| `automationedacontroller_disable_https` | false | If set to "true", this variable disables HTTPS connections to the {EDAcontroller}. The default is "false", so if this variable is absent from the installer inventory it is effectively the same as explicitly defining the variable to "false".
+.Security-relevant containerized inventory variables
+[cols="33%,33%,33%",options="header"]
 |===
+| *Container deployment variable* | *Recommended Value* | *Details*
+| `postgresql_disable_tls` | false | If set to `true`, this variable disables TLS connections to the installer-managed PostgreSQL database.
+
+The default is `false`.
+
+If this variable is absent from the installer inventory, it is effectively equivalent to defining the variable as `false`.
 
-In scenarios such as the reference architecture where a load balancer is used with multiple controllers or hubs, SSL client connections can be terminated at the load balancer or passed through to the individual {PlatformNameShort} servers. If SSL is being terminated at the load balancer, this guide recommends that the traffic gets re-encrypted from the load balancer to the individual {PlatformNameShort} servers, to ensure that end-to-end encryption is in use. In this scenario, the `*_disable_https` variables listed in Table 2.3 would remain the default value of "false".
+| `controller_pg_sslmode` `gateway_pg_sslmode` `hub_pg_sslmode` `eda_pg_sslmode` | verify-full a| These variables control mutual TLS (mTLS) authentication to the database.
+
+By default, when each service connects to the database, it tries an encrypted connection, but it is not enforced. Setting these variables to `verify-full` enforces an mTLS negotiation between the service and the database.
 
 [NOTE]
 ====
-This guide recommends using an external database in production environments, but for development and testing scenarios the database could be co-located on the {ControllerName}. Due to current PostgreSQL 13 limitations, setting `pg_sslmode = verify-full` when the database is co-located on the {ControllerName} results in an error validating the host name during TLS negotiation. Until this issue is resolved, an external database must be used to ensure mutual TLS authentication between the {ControllerName} and the database.
+If a third-party database is used instead of the installer-managed database, the third-party database must be set up independently to accept mTLS connections.
+====
+
+| `controller_nginx_disable_https` `gateway_nginx_disable_https` `hub_nginx_disable_https` `eda_nginx_disable_https` | `false` | If set to `true`, these variables disable HTTPS connections to each of the component web services.
+
+The default is `false`.
+
+If these variables are absent from the installer inventory, it is effectively equivalent to defining the variables as `false`.
+
+| `controller_nginx_disable_hsts` `gateway_nginx_disable_hsts` `hub_nginx_disable_hsts` `eda_nginx_disable_hsts` | `false` | If set to `true`, these variables disable _HTTP Strict Transport Security_ (HSTS) for each of the component web services.
+
+The default is `false`.
+
+If these variables are absent from the installer inventory, it is effectively equivalent to defining the variables as `false`.
+|===
+
+
+In an enterprise architecture where a load balancer is used in front of multiple {Gateway}s, SSL client connections can be terminated at the load balancer or passed through to the individual {PlatformNameShort} servers.
+If SSL is terminated at the load balancer, this guide recommends that the traffic is re-encrypted from the load balancer to the individual {PlatformNameShort} servers.
+This ensures that end-to-end encryption is in use.
+In this scenario, the `*_disable_https` variables listed are set to the default value of `false`.
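+
+Taken together, a hardened RPM-based installer inventory might carry the recommended values as in the following sketch; treat it as an illustration rather than a complete inventory:
+
+----
+[all:vars]
+# Encrypt and mutually authenticate all database connections
+postgres_use_ssl=true
+pg_sslmode=verify-full
+automation_gateway_pg_sslmode=verify-full
+automationhub_pg_sslmode=verify-full
+automationcontroller_pg_sslmode=verify-full
+
+# Keep HTTPS and HSTS enabled by leaving the disable variables at their defaults
+nginx_disable_hsts=false
+automation_gateway_disable_hsts=false
+automationhub_disable_hsts=false
+automationcontroller_disable_hsts=false
+----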
diff --git a/downstream/modules/aap-hardening/ref-sensitive-variables-install-inventory.adoc b/downstream/modules/aap-hardening/ref-sensitive-variables-install-inventory.adoc
index 60a08b2883..e4262d0071 100644
--- a/downstream/modules/aap-hardening/ref-sensitive-variables-install-inventory.adoc
+++ b/downstream/modules/aap-hardening/ref-sensitive-variables-install-inventory.adoc
@@ -7,24 +7,34 @@
 
 [role="_abstract"]
-The installation inventory file contains a number of sensitive variables, mainly those used to set the initial passwords used by {PlatformNameShort}, that are normally kept in plain text in the inventory file. To prevent unauthorized viewing of these variables, you can keep these variables in an encrypted link:https://docs.ansible.com/ansible/latest/vault_guide/index.html[Ansible vault]. To do this, go to the installer directory and create a vault file:
+The installation inventory file contains a number of sensitive variables, mainly those used to set the initial passwords used by {PlatformNameShort}, which are normally kept in plain text in the inventory file. To prevent unauthorized viewing of these variables, you can keep these variables in an encrypted link:https://docs.ansible.com/ansible/latest/vault_guide/index.html[Ansible vault].
 
-* `cd /path/to/ansible-automation-platform-setup-bundle-2.4-1-x86_64`
-* `ansible-vault create vault.yml`
+To do this, go to the installer directory:
 
-You will be prompted for a password to the new Ansible vault. Do not lose the vault password because it is required every time you need to access the vault file, including during day-two operations and performing backup procedures. You can secure the vault password by storing it in an encrypted password manager or in accordance with your organizational policy for storing passwords securely.
+`cd /path/to/ansible-automation-platform-setup-bundle-2.5-1-x86_64`
+
+Then create a vault file:
+
+`ansible-vault create vault.yml`
+
+You are prompted for a password to the new Ansible vault.
+Do not lose the vault password because it is required every time you need to access the vault file, including during day-two operations and backup procedures.
+You can secure the vault password by storing it in an encrypted password manager or in accordance with your organizational policy for storing passwords securely.
 
 Add the sensitive variables to the vault, for example:
 
+//Added containerized variables RPM/containerized:
+
 ----
-admin_password:
-pg_password:
-automationhub_admin_password:
-automationhub_pg_password:
-automationhub_ldap_bind_password:
-automationedacontroller_admin_password:
-automationedacontroller_pg_password:
+admin_password/controller_admin_password:
+pg_password/controller_pg_password:
+automationhub_admin_password/hub_admin_password:
+automationhub_pg_password/hub_pg_password:
+automationedacontroller_admin_password/eda_admin_password:
+automationedacontroller_pg_password/eda_pg_password:
+-/gateway_admin_password:
+-/gateway_pg_password:
 ----
 
-Make sure these variables are not also present in the installation inventory file. To use the new Ansible vault with the installer, run it with the command `./setup.sh -e @vault.yml -- --ask-vault-pass`.
+Make sure these variables are not also present in the installation inventory file. To use the new Ansible vault with the {Installer}, run it with the command `./setup.sh -e @vault.yml -- --ask-vault-pass`.
diff --git a/downstream/modules/aap-hardening/ref-sudo-nopasswd.adoc b/downstream/modules/aap-hardening/ref-sudo-nopasswd.adoc
index 24064f37bd..e431ca45af 100644
--- a/downstream/modules/aap-hardening/ref-sudo-nopasswd.adoc
+++ b/downstream/modules/aap-hardening/ref-sudo-nopasswd.adoc
@@ -5,9 +5,21 @@
 
 = Sudo and NOPASSWD
 
-[role="_abstract"]
+A compliance profile might require that all users with sudo privileges provide a password (that is, the `NOPASSWD` directive must not be used in a sudoers file).
+The {PlatformNameShort} installation program runs many tasks as a privileged user, and by default expects to be able to elevate privileges without a password.
+To provide a password to the {Installer} for elevating privileges, append the following option when launching the RPM installer script:
 
-The {RHEL} 8 STIG requires that all users with sudo privileges must provide a password (that is, the "NOPASSWD" directive must not be used in a sudoers file). The {PlatformNameShort} installer runs many tasks as a privileged user, and by default expects to be able to elevate privileges without a password. To provide a password to the installer for elevating privileges, append the following options when launching the installer script: `./setup.sh -- –-ask-become-pass`.
+`./setup.sh -- --ask-become-pass`
+
+For the container-based {Installer}:
+
+`ansible-playbook -i <inventory> ansible.containerized_installer.install --ask-become-pass`
+
+When the {Installer} is run, you are prompted for the user's password to elevate privileges.
+
+[NOTE]
+====
+Using the `--ask-become-pass` option also applies when running the {Installer} for day-two operations such as backup and restore.
+====
 
-This also applies when running the installer script for day-two operations such as backup and restore.
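+
+Before running the {Installer} on a hardened host, you can check whether any existing sudoers entry still uses `NOPASSWD`; a simple hedged check, assuming the rules live in the default locations:
+
+----
+$ sudo grep -rnE 'NOPASSWD' /etc/sudoers /etc/sudoers.d/
+----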
diff --git a/downstream/modules/aap-migration/con-artifact-structure.adoc b/downstream/modules/aap-migration/con-artifact-structure.adoc
new file mode 100644
index 0000000000..0ec6612ca5
--- /dev/null
+++ b/downstream/modules/aap-migration/con-artifact-structure.adoc
@@ -0,0 +1,24 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="artifact-structure"]
+= Artifact structure
+
+The migration artifact serves as a comprehensive package containing all necessary components to successfully transfer your {PlatformNameShort} deployment.
+
+Structure the artifact as follows:
+
+----
+artifact/
+├── manifest.yml
+├── secrets.yml
+├── sha256sum.txt
+├── controller/
+│   ├── controller.pgc
+│   └── custom_configs/
+│       ├── foo.py
+│       └── bar.py
+├── gateway/
+│   └── gateway.pgc
+└── hub/
+    └── hub.pgc
+----
diff --git a/downstream/modules/aap-migration/con-containerized-to-managed-prerequisites.adoc b/downstream/modules/aap-migration/con-containerized-to-managed-prerequisites.adoc
new file mode 100644
index 0000000000..286b40bf0c
--- /dev/null
+++ b/downstream/modules/aap-migration/con-containerized-to-managed-prerequisites.adoc
@@ -0,0 +1,18 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="containerized-to-managed-prerequisites"]
+= Prerequisites for migrating from a container-based deployment to a Managed {PlatformNameShort} deployment
+
+Before migrating from a container-based deployment to a Managed {PlatformNameShort} deployment, ensure that you meet the following prerequisites:
+
+* You have a source container-based deployment of {PlatformNameShort}.
+* The source deployment is on the latest release of the {PlatformNameShort} version you are on.
+* You have a target Managed {PlatformNameShort} deployment.
+* You have enabled local authentication on the source deployment before the migration.
+* A local administrator account must be functional on the source deployment before migration. Verify this by performing a successful login to the source deployment.
+* You have a plan to retain a backup throughout the migration process and to ensure that your existing {PlatformNameShort} deployment remains active until your migration has completed successfully.
+* You have a plan for any environment changes based on the migration from a self-hosted {PlatformNameShort} deployment to a Managed {PlatformNameShort} deployment:
+** Job log retention changes from a customer-configured option to 30 days.
+** Network changes occur when moving the control plane to the managed service.
+** {AutomationMeshStart} requires reconfiguration.
+* You must reconfigure or re-create SSO identity providers post-migration to account for URL changes.
diff --git a/downstream/modules/aap-migration/con-containerized-to-ocp-prerequisites.adoc b/downstream/modules/aap-migration/con-containerized-to-ocp-prerequisites.adoc
new file mode 100644
index 0000000000..eace6038e1
--- /dev/null
+++ b/downstream/modules/aap-migration/con-containerized-to-ocp-prerequisites.adoc
@@ -0,0 +1,15 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="containerized-to-ocp-prerequisites"]
+= Prerequisites for migrating from a container-based deployment to an {OCPShort} deployment
+
+Before migrating from a container-based deployment to an {OCPShort} deployment, ensure that you meet the following prerequisites:
+
+* You have a source container-based deployment of {PlatformNameShort}.
+* The source deployment is on the latest async release of the version you are on.
+* You have a target {OCPShort} environment ready.
+* The target deployment is on the latest release of the {PlatformNameShort} version you are on.
+* You have an {OperatorPlatformNameShort} available.
+* You have decided between an internal and an external database configuration.
+* You have decided between an internal and an external Redis configuration.
+* There is network connectivity between source and target environments.
diff --git a/downstream/modules/aap-migration/con-introduction-and-objectives.adoc b/downstream/modules/aap-migration/con-introduction-and-objectives.adoc
new file mode 100644
index 0000000000..c9854e14d0
--- /dev/null
+++ b/downstream/modules/aap-migration/con-introduction-and-objectives.adoc
@@ -0,0 +1,25 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="introduction-and-objectives"]
+= Introduction and objectives
+
+This document outlines the necessary steps and considerations for migrating between different {PlatformNameShort} deployment types for {PlatformNameShort} {PlatformVers}. Specifically, it focuses on these migration paths:
+
+[options="header"]
+|===
+|Source environment | Target environment
+
+|RPM-based {PlatformNameShort} | Container-based {PlatformNameShort}
+|RPM-based {PlatformNameShort} | {OCPShort}
+|RPM-based {PlatformNameShort} | Managed {PlatformNameShort}
+|Container-based {PlatformNameShort} | {OCPShort}
+|Container-based {PlatformNameShort} | Managed {PlatformNameShort}
+|===
+
+Migrations outside of those listed are not supported at this time.
+
+The primary goals of this document are to:
+
+* Document all components and configurations that must be migrated between {PlatformNameShort} deployment types.
+* Provide step-by-step migration workflows for different deployment scenarios.
+* Identify potential challenges and unknowns that require further investigation.
diff --git a/downstream/modules/aap-migration/con-manifest-file.adoc b/downstream/modules/aap-migration/con-manifest-file.adoc
new file mode 100644
index 0000000000..501b7b8d19
--- /dev/null
+++ b/downstream/modules/aap-migration/con-manifest-file.adoc
@@ -0,0 +1,20 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="manifest-file"]
+= Manifest file
+
+The `manifest.yml` file serves as the primary metadata document for the migration artifact, containing critical versioning and component information from your source environment.
+
+Structure the manifest as follows:
+
+----
+aap_version: X.Y # The version being migrated
+platform: rpm # The source platform type
+components:
+  - name: controller
+    version: x.y.z
+  - name: hub
+    version: x.y.z
+  - name: gateway
+    version: x.y.z
+----
diff --git a/downstream/modules/aap-migration/con-migration-process-overview.adoc b/downstream/modules/aap-migration/con-migration-process-overview.adoc
new file mode 100644
index 0000000000..e1d8338542
--- /dev/null
+++ b/downstream/modules/aap-migration/con-migration-process-overview.adoc
@@ -0,0 +1,14 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="migration-process-overview"]
+= Migration process overview
+
+The migration between {PlatformNameShort} installation types follows this general workflow:
+
+. Prepare and assess the source environment - Prepare and assess the existing source environment for migration.
+. Export the source environment - Extract the necessary data and configurations from the source environment.
+. Create and verify the migration artifact - Package all collected data and configurations into a migration artifact.
+. Prepare and assess the target environment - Prepare and assess the new target environment for migration.
+. Import the migration content to the target environment - Transfer the migration artifact into the prepared target environment.
+. Reconcile the target environment post-import - Address any inconsistencies and reconfigure services in the target environment after import.
+. Validate the target environment - Confirm the migrated environment is fully operational.
diff --git a/downstream/modules/aap-migration/con-out-of-scope.adoc b/downstream/modules/aap-migration/con-out-of-scope.adoc
new file mode 100644
index 0000000000..bb86c3b744
--- /dev/null
+++ b/downstream/modules/aap-migration/con-out-of-scope.adoc
@@ -0,0 +1,15 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="out-of-scope"]
+= Out of scope
+
+This guide is focused on the core components of {PlatformNameShort}. The following items are currently out of scope for the migration processes described in this document:
+
+* {EDAName}: Configuration and content for {EDAName} must be manually recreated in the target environment.
+* Instance groups: Instance group configurations must be manually recreated after migration.
+* Hub content: Content hosted in {HubName} must be manually reimported or reconfigured.
+* Custom Certificate Authority (CA) for receptor mesh: Custom CA configurations for receptor mesh must be manually reconfigured.
+* Disconnected environments: The migration process for disconnected environments is not covered in this guide.
+* Execution environments (other than the default one): Custom execution environments must be rebuilt or reimported manually.
+
+At the time of writing this guide, the content and configuration for these items are expected to be re-created, imported, or configured manually in the target environment. These out-of-scope items might be added as supported components in future updates to this migration guide.
diff --git a/downstream/modules/aap-migration/con-rpm-to-containerized-prerequisites.adoc b/downstream/modules/aap-migration/con-rpm-to-containerized-prerequisites.adoc
new file mode 100644
index 0000000000..e63c6d4587
--- /dev/null
+++ b/downstream/modules/aap-migration/con-rpm-to-containerized-prerequisites.adoc
@@ -0,0 +1,14 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="rpm-to-containerized-prerequisites"]
+= Prerequisites for migrating from an RPM deployment to a containerized deployment
+
+Before migrating from an RPM-based deployment to a container-based deployment, ensure you meet the following prerequisites:
+
+* You have a source RPM-based deployment of {PlatformNameShort}.
+* The source RPM-based deployment is on the latest async release of the version you are on.
+* You have a target environment prepared for a container-based deployment of {PlatformNameShort}.
+* The target deployment is on the latest release of the {PlatformNameShort} version you are on.
+* You have downloaded the containerized installer.
+* You have enough storage for database dumps and backups.
+* There is network connectivity between source and target environments (see the example checks after this list).
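+
+A minimal sketch of how you might verify the storage and connectivity prerequisites before starting; the paths, host name, and ports are assumptions to adapt to your environment:
+
+----
+# Check free space on the filesystem that will hold database dumps and backups
+$ df -h /tmp
+
+# Confirm the target host is reachable on the PostgreSQL and HTTPS ports
+$ nc -zv target.example.com 5432
+$ nc -zv target.example.com 443
+----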
diff --git a/downstream/modules/aap-migration/con-rpm-to-managed-prerequisites.adoc b/downstream/modules/aap-migration/con-rpm-to-managed-prerequisites.adoc
new file mode 100644
index 0000000000..24234723cc
--- /dev/null
+++ b/downstream/modules/aap-migration/con-rpm-to-managed-prerequisites.adoc
@@ -0,0 +1,18 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="rpm-to-managed-prerequisites"]
+= Prerequisites for migrating from an RPM-based deployment to a Managed {PlatformNameShort} deployment
+
+Before migrating from an RPM-based deployment to a Managed {PlatformNameShort} deployment, ensure you meet the following prerequisites:
+
+* You have a source RPM-based deployment of {PlatformNameShort}.
+* The source deployment is on the latest release of the {PlatformNameShort} version you are on.
+* You have a target Managed {PlatformNameShort} deployment.
+* You have enabled local authentication on the source deployment before the migration.
+* A local administrator account must be functional on the source deployment before migration. Verify this by performing a successful login to the source deployment.
+* You have a plan to retain a backup throughout the migration process and to ensure that your existing {PlatformNameShort} deployment remains active until your migration has completed successfully.
+* You have a plan for any environment changes based on the migration from a self-hosted {PlatformNameShort} deployment to a Managed {PlatformNameShort} deployment:
+** Job log retention changes from a customer-configured option to 30 days.
+** Network changes occur when moving the control plane to the managed service.
+** {AutomationMeshStart} requires reconfiguration.
+* You must reconfigure or re-create SSO identity providers post-migration to account for URL changes.
diff --git a/downstream/modules/aap-migration/con-rpm-to-ocp-prerequisites.adoc b/downstream/modules/aap-migration/con-rpm-to-ocp-prerequisites.adoc
new file mode 100644
index 0000000000..56ab73f851
--- /dev/null
+++ b/downstream/modules/aap-migration/con-rpm-to-ocp-prerequisites.adoc
@@ -0,0 +1,15 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="rpm-to-ocp-prerequisites"]
+= Prerequisites for migrating from an RPM-based deployment to an {OCPShort} deployment
+
+Before migrating from an RPM-based deployment to an {OCPShort} deployment, ensure you meet the following prerequisites:
+
+* You have a source RPM-based deployment of {PlatformNameShort}.
+* The source RPM-based deployment is on the latest async release of the version you are on.
+* You have a target {OCPShort} environment ready.
+* The target deployment is on the latest release of the {PlatformNameShort} version you are on.
+* You have an {OperatorPlatformNameShort} available.
+* You have decided between an internal and an external database configuration.
+* You have decided between an internal and an external Redis configuration.
+* There is network connectivity between source and target environments.
diff --git a/downstream/modules/aap-migration/con-secrets-file.adoc b/downstream/modules/aap-migration/con-secrets-file.adoc
new file mode 100644
index 0000000000..2a23dadf0e
--- /dev/null
+++ b/downstream/modules/aap-migration/con-secrets-file.adoc
@@ -0,0 +1,23 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="secrets-file"]
+= Secrets file
+
+The `secrets.yml` file in the migration artifact includes essential Django `SECRET_KEY` values and other sensitive data required for authentication between services.
+
+Structure the secrets file as follows:
+
+----
+controller_pg_database:
+controller_secret_key:
+gateway_pg_database:
+gateway_secret_key:
+hub_pg_database:
+hub_secret_key:
+hub_db_fields_encryption_key:
+----
+
+[NOTE]
+====
+Ensure that the `secrets.yml` file is encrypted and kept in a secure location.
+====
\ No newline at end of file
diff --git a/downstream/modules/aap-migration/proc-containerized-post-import.adoc b/downstream/modules/aap-migration/proc-containerized-post-import.adoc
new file mode 100644
index 0000000000..30a198f567
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-containerized-post-import.adoc
@@ -0,0 +1,83 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="containerized-post-import"]
+= Reconciling the target environment post-import
+
+Perform the following post-import reconciliation steps to ensure your target environment is fully functional and correctly configured.
+
+.Procedure
+. Deprovision the {Gateway} configuration.
++
+SSH to the host serving a {Gateway} container as the same rootless user used in the source environment export, and run the following commands to remove the {Gateway} proxy configuration:
++
+----
+$ podman exec -it automation-gateway bash
+----
++
+----
+$ aap-gateway-manage migrate
+----
++
+----
+$ aap-gateway-manage shell_plus
+>>> HTTPPort.objects.all().delete(); ServiceNode.objects.all().delete(); ServiceCluster.objects.all().delete()
+----
+
+. Transfer custom configurations and settings.
++
+Edit the inventory file and apply any relevant `extra_settings` to each component by using the `component_extra_settings` variables.
+
+. Re-run the installation program on the target environment by using the same inventory file from the installation.
+
+. Validate instances for automation execution.
++
+SSH to the host serving an `automation-controller-task` container as the rootless user, and run the following commands to validate and remove instances that are orphaned from the source artifact:
++
+----
+$ podman exec -it automation-controller-task bash
+----
++
+----
+$ awx-manage list_instances
+----
++
+Find nodes that are no longer part of this cluster. A good indicator is nodes with 0 capacity, as they have failed their health checks:
++
+----
+[ungrouped capacity=0]
+[DISABLED] node1.example.org capacity=0 node_type=hybrid version=X.Y.Z heartbeat="..."
+[DISABLED] node2.example.org capacity=0 node_type=execution version=ansible-runner-X.Y.Z heartbeat="..."
+----
++
+Remove those nodes with `awx-manage`, leaving only the `aap-controller-task` instance:
++
+----
+awx-manage deprovision_instance --host=node1.example.org
+awx-manage deprovision_instance --host=node2.example.org
+----
+
+. Repair orphaned {HubName} content links for Pulp.
++
+Run the following command from any host that has direct access to the {HubName} address:
++
+----
+$ curl -d '{"verify_checksums": true}' -X POST -k https://<hub_address>/api/galaxy/pulp/api/v3/repair/ -u <username>:<password>
+----
+
+. Reconcile instance groups configuration:
+.. Go to {MenuInfrastructureInstanceGroups}.
+.. Select the *Instance Group* and then select the *Instances* tab.
+.. Associate or disassociate instances as required.
+
+. Reconcile decision environments and credentials:
+.. Go to {MenuADDecisionEnvironments}.
+.. Edit each decision environment that references a registry URL that is either unrelated to or no longer accessible from the new environment. For example, the {HubName} decision environment might require modification for the target {HubName} environment.
+.. Select each credential associated with these decision environments and ensure that their addresses align with the new environment.
+
+. Reconcile execution environments and credentials:
+.. Go to {MenuInfrastructureExecEnvironments}.
+.. Check each {ExecEnvShort} image and verify its address against the new environment.
+.. Go to {MenuAECredentials}.
+.. Edit each credential and ensure that all environment-specific information aligns with the new environment.
+
+. Verify any further customizations or configurations after the migration, such as RBAC rules with instance groups.
diff --git a/downstream/modules/aap-migration/proc-containerized-source-environment-export.adoc b/downstream/modules/aap-migration/proc-containerized-source-environment-export.adoc
new file mode 100644
index 0000000000..f68ed734fe
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-containerized-source-environment-export.adoc
@@ -0,0 +1,172 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="containerized-source-environment-export"]
+= Exporting the source environment
+
+From your source environment, export the data and configurations needed for migration.
+
+.Procedure
+. Verify that the PostgreSQL database version is PostgreSQL version 15.
++
+You can verify your current PostgreSQL version by connecting to your database server and running the following command as the `postgres` user:
++
+----
+$ psql -c 'SELECT version();'
+----
++
+[IMPORTANT]
+====
+PostgreSQL version 15 is a strict requirement for the migration process to succeed. If running PostgreSQL 13 or earlier, upgrade to version 15 before proceeding with the migration.
+
+If using an {PlatformNameShort} managed database, re-run the installation program to upgrade the PostgreSQL version. If using a customer-provided (external) database, contact your database administrator or service provider to confirm the version and arrange for an upgrade if required.
+====
+
+. Create a complete backup of the source environment:
++
+----
+$ ansible-playbook -i <inventory> ansible.containerized_installer.backup
+----
+
+. Get the connection settings from one node in each of the component groups.
++
+** Access the {ControllerName} node and run:
++
+----
+$ podman exec -it automation-controller-task bash -c 'awx-manage print_settings | grep DATABASES'
+----
++
+** Access the {HubName} node and run:
++
+----
+$ podman exec -it automation-hub-api bash -c "pulpcore-manager diffsettings | grep '^DATABASES'"
+----
+** Access the {Gateway} node and run:
++
+----
+$ podman exec -it automation-gateway bash -c "aap-gateway-manage print_settings | grep '^DATABASES'"
+----
+
+. Validate the database size and make sure you have enough space on the filesystem for the `pg_dump`.
++
+You can verify the database sizes by connecting to your database server and running the following command as the `postgres` user:
++
+----
+$ podman exec -it postgresql bash -c 'psql -c "\l+"'
+----
++
+Adjust the filesystem size or mount an external filesystem as needed before performing the next step.
++
+[NOTE]
+====
+This procedure assumes that all target files are sent to the `/tmp` filesystem. You might want to adjust the commands to match your environment's needs.
+====
+
+. Stage the manually created artifact on the {Gateway} node.
++
+----
+# mkdir -p /tmp/backups/artifact/{controller,gateway,hub}
+----
++
+----
+# mkdir -p /tmp/backups/artifact/controller/custom_configs
+----
++
+----
+# touch /tmp/backups/artifact/secrets.yml
+----
++
+----
+# cd /tmp/backups/artifact/
+----
+. Perform database dumps of all components on the {Gateway} node within the artifact created previously.
++
+To run the `psql` and `pg_restore` commands, you must create a temporary container and run the commands inside it. This command must be run from the database node.
++
+----
+$ podman run -it --rm --name postgresql_restore_temp --network host --volume ~/aap/tls/extracted:/etc/pki/ca-trust/extracted:z --volume ~/aap/postgresql/server.crt:/var/lib/pgsql/server.crt:ro,z --volume ~/aap/postgresql/server.key:/var/lib/pgsql/server.key:ro,z --volume /tmp/backups/artifact:/var/lib/pgsql/backups:ro,z registry.redhat.io/rhel8/postgresql-15:latest bash
+----
++
+[NOTE]
+====
+This command assumes the image `registry.redhat.io/rhel8/postgresql-15:latest`. If you are missing the image, check the available images for the user with `podman images`.
+====
++
+The command above opens a shell inside the container named `postgresql_restore_temp` and has the artifact mounted into `/var/lib/pgsql/backups`. This command also mounts the PostgreSQL certificates so that the correct certificates can be resolved.
++
+----
+bash-4.4$ cd /var/lib/pgsql/backups
+bash-4.4$ psql -h <hostname> -U <username> -d <database> -t -c 'SHOW server_version;' # ensure connectivity to db
+bash-4.4$ pg_dump -h <hostname> -U <username> -d <database> --clean --create -Fc -f <component>/<component>.pgc
+bash-4.4$ ls -ld <component>/<component>.pgc
+bash-4.4$ echo "<component>_pg_database: <database>" >> secrets.yml ## Add the DB name for the component to the secrets file
+----
++
+After collecting this data, exit from this temporary container.
+
+. Export the secrets from the containerized environment from one node of each component group.
++
+For each step below, use the `root` user to run the commands.
++
+.. Access the {ControllerName} node, gather the secret key, and add it to the `controller_secret_key` value in the `secrets.yml` file.
++
+----
+$ podman secret inspect --showsecret --format "{{.SecretData}}" controller_secret_key
+----
++
+.. Access the {HubName} node, gather the secret key, and add it to the `hub_secret_key` value in the `secrets.yml` file.
++
+----
+$ podman secret inspect --showsecret --format "{{.SecretData}}" hub_secret_key
+----
++
+.. Access the {HubName} node, gather the `database_fields.symmetric.key` value, and add it to the `hub_db_fields_encryption_key` value in the `secrets.yml` file.
++
+----
+$ podman secret inspect --showsecret --format "{{.SecretData}}" hub_database_fields
+----
++
+.. Access the {Gateway} node, gather the secret key, and add it to the `gateway_secret_key` value in the `secrets.yml` file.
++
+----
+$ podman secret inspect --showsecret --format "{{.SecretData}}" gateway_secret_key
+----
+
+. Export {ControllerName} custom configurations.
++
+If any `extra_settings` exist in your containerized installation inventory, copy them into a new file and save it under `/tmp/backups/artifact/controller/custom_configs`.
+
+. Package the artifact.
++
+----
+# cd /tmp/backups/artifact/
+# [ -f sha256sum.txt ] && rm -f sha256sum.txt; find . -type f -name "*.pgc" -exec sha256sum {} \; >> sha256sum.txt
+# cat sha256sum.txt
+# cd /tmp/backups/
+# tar cf artifact.tar artifact
+# sha256sum artifact.tar > artifact.tar.sha256
+# sha256sum --check artifact.tar.sha256
+# tar tvf artifact.tar
+----
++
+Example output of `tar tvf artifact.tar`:
++
+----
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:48 artifact/
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:33 artifact/controller/
+-rw-r--r-- ansible/ansible 732615 2025-05-08 16:26 artifact/controller/controller.pgc
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:33 artifact/controller/custom_configs/
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:11 artifact/gateway/
+-rw-r--r-- ansible/ansible 231155 2025-05-08 16:28 artifact/gateway/gateway.pgc
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:26 artifact/hub/
+-rw-r--r-- ansible/ansible 29252002 2025-05-08 16:26 artifact/hub/hub.pgc
+-rw-r--r-- ansible/ansible 614 2025-05-08 16:24 artifact/secrets.yml
+-rw-r--r-- ansible/ansible 338 2025-05-08 16:48 artifact/sha256sum.txt
+----
+
+. Download the `artifact.tar` and `artifact.tar.sha256` files to your local machine, or transfer them to the target node with the `scp` command.
+
+[role="_additional-resources"]
+.Additional resources
+
+* link:{URLContainerizedInstall}/aap-containerized-installation#backing-up-containerized-ansible-automation-platform[Backing up containerized {PlatformNameShort}]
diff --git a/downstream/modules/aap-migration/proc-containerized-source-environment-preparation-assessment.adoc b/downstream/modules/aap-migration/proc-containerized-source-environment-preparation-assessment.adoc
new file mode 100644
index 0000000000..8423ce83ef
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-containerized-source-environment-preparation-assessment.adoc
@@ -0,0 +1,25 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="containerized-environment-source-prep"]
+= Preparing and assessing the source environment
+
+Before beginning your migration, document your current containerized deployment. This documentation serves as a reference throughout the migration process and is critical for properly configuring your target environment.
+
+.Procedure
+. Document the full topology of your current containerized deployment:
+.. Map out all servers, nodes, and their roles (for example, control nodes, execution nodes, and database servers).
+.. Note the hostname, IP address, and function of each server in your deployment.
+.. Document the network configuration between components.
+. Document {PlatformNameShort} version information:
+.. Record the exact {PlatformNameShort} version (X.Y) currently deployed.
+. Document the specific version of each component:
+.. {ControllerNameStart} version
+.. {HubNameStart} version
+.. {GatewayStart} version
+. Document the database configuration:
+.. Database names for each component
+.. Database users and roles
+.. Connection parameters and authentication methods
+.. Any custom PostgreSQL configurations or optimizations
+. Identify all custom configurations and settings.
+
+. Export {ControllerName} custom configurations.
++
+If any `extra_settings` exist in your containerized installation inventory, copy them into a new file and save it under `/tmp/backups/artifact/controller/custom_configs`.
+
+. Package the artifact.
++
+----
+# cd /tmp/backups/artifact/
+# [ -f sha256sum.txt ] && rm -f sha256sum.txt; find . -type f -name "*.pgc" -exec sha256sum {} \; >> sha256sum.txt
+# cat sha256sum.txt
+# cd /tmp/backups/
+# tar cf artifact.tar artifact
+# sha256sum artifact.tar > artifact.tar.sha256
+# sha256sum --check artifact.tar.sha256
+# tar tvf artifact.tar
+----
++
+Example output of `tar tvf artifact.tar`:
++
+----
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:48 artifact/
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:33 artifact/controller/
+-rw-r--r-- ansible/ansible 732615 2025-05-08 16:26 artifact/controller/controller.pgc
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:33 artifact/controller/custom_configs/
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:11 artifact/gateway/
+-rw-r--r-- ansible/ansible 231155 2025-05-08 16:28 artifact/gateway/gateway.pgc
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:26 artifact/hub/
+-rw-r--r-- ansible/ansible 29252002 2025-05-08 16:26 artifact/hub/hub.pgc
+-rw-r--r-- ansible/ansible 614 2025-05-08 16:24 artifact/secrets.yml
+-rw-r--r-- ansible/ansible 338 2025-05-08 16:48 artifact/sha256sum.txt
+----
+
+. Download the `artifact.tar` and `artifact.tar.sha256` to your local machine or transfer them to the target node with the `scp` command.
+
+[role="_additional-resources"]
+.Additional resources
+
+* link:{URLContainerizedInstall}/aap-containerized-installation#backing-up-containerized-ansible-automation-platform[Backing up containerized {PlatformNameShort}]
diff --git a/downstream/modules/aap-migration/proc-containerized-source-environment-preparation-assessment.adoc b/downstream/modules/aap-migration/proc-containerized-source-environment-preparation-assessment.adoc
new file mode 100644
index 0000000000..8423ce83ef
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-containerized-source-environment-preparation-assessment.adoc
@@ -0,0 +1,25 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="containerized-environment-source-prep"]
+= Preparing and assessing the source environment
+
+Before beginning your migration, document your current containerized deployment. This documentation serves as a reference throughout the migration process and is critical for properly configuring your target environment.
+
+.Procedure
+. Document the full topology of your current containerized deployment:
+.. Map out all servers, nodes, and their roles (for example, control nodes, execution nodes, database servers).
+.. Note the hostname, IP address, and function of each server in your deployment.
+.. Document the network configuration between components.
+. Document {PlatformNameShort} version information:
+.. Record the exact {PlatformNameShort} version (X.Y) currently deployed.
+. Document the specific version of each component:
+.. {ControllerNameStart} version
+.. {HubNameStart} version
+.. {GatewayStart} version
+. Document the database configuration:
+.. Database names for each component
+.. Database users and roles
+.. Connection parameters and authentication methods
+.. Any custom PostgreSQL configurations or optimizations
+. Identify all custom configurations and settings.
+. Document container resource allocations and volumes.
diff --git a/downstream/modules/aap-migration/proc-containerized-target-import.adoc b/downstream/modules/aap-migration/proc-containerized-target-import.adoc
new file mode 100644
index 0000000000..0133260beb
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-containerized-target-import.adoc
@@ -0,0 +1,151 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="containerized-target-import"]
+= Importing the migration content to the target environment
+
+To import your migration content into the target environment, stop the containerized services, import the database dumps, and then restart the services.
+
+.Procedure
+. Stop the containerized services, except the database.
+.. In all nodes, if Performance Co-Pilot is configured, run the following command:
++
+----
+$ systemctl --user stop pcp
+----
++
+.. Access the {ControllerName} node and run:
++
+----
+$ systemctl --user stop automation-controller-task automation-controller-web automation-controller-rsyslog
+$ systemctl --user stop receptor
+----
++
+.. Access the {HubName} node and run:
++
+----
+$ systemctl --user stop automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2
+----
++
+.. Access the {EDAName} node and run:
++
+----
+$ systemctl --user stop automation-eda-scheduler automation-eda-daphne automation-eda-web automation-eda-api automation-eda-worker-1 automation-eda-worker-2 automation-eda-activation-worker-1 automation-eda-activation-worker-2
+----
++
+.. Access the {Gateway} node and run:
++
+----
+$ systemctl --user stop automation-gateway automation-gateway-proxy
+----
++
+.. Access the {Gateway} node when using standalone Redis, or all nodes from the Redis group in your inventory file when using clustered Redis, and run:
++
+----
+$ systemctl --user stop redis-unix redis-tcp
+----
++
+[NOTE]
+====
+In an enterprise deployment, the components run on different nodes. Run the commands on each component node.
+====
+
+. Import database dumps to the containerized environment.
+.. If you are using an {PlatformNameShort} managed database, you must create a temporary container to run the `psql` and `pg_restore` commands. Run this command from the database node.
++
+----
+$ podman run -it --rm --name postgresql_restore_temp --network host --volume ~/aap/tls/extracted:/etc/pki/ca-trust/extracted:z --volume ~/aap/postgresql/server.crt:/var/lib/pgsql/server.crt:ro,z --volume ~/aap/postgresql/server.key:/var/lib/pgsql/server.key:ro,z --volume ~/artifact:/var/lib/pgsql/backups:ro,z registry.redhat.io/rhel8/postgresql-15:latest bash
+----
++
+[NOTE]
+====
+This command opens a shell inside the container named `postgresql_restore_temp` with the artifact mounted at `/var/lib/pgsql/backups`. Additionally, it mounts the PostgreSQL certificates to ensure that you can resolve the correct certificates.
+
+The command assumes the image `registry.redhat.io/rhel8/postgresql-15:latest` is available. If you are missing the image, check the available images for the user with `podman images`.
+
+It also assumes that the artifact is located in the current user's home folder. If the artifact is located elsewhere, replace `~/artifact` with the required path.
+====
++
+.. If you are using a customer-provided (external) database, you can run the `psql` and `pg_restore` commands from any node that has these commands installed and that has access to the database. Contact your database administrator if you are unsure.
++
+.. From inside the container, access the database and ensure the users have the `CREATEDB` role.
++
+----
+bash-4.4$ psql -h <hostname> -U postgres
+postgres=# \l
+List of databases
+Name | Owner | Encoding | Collate | Ctype | Access privileges
+--------------------+------------------+----------+-----------+------------+-------------------
+automationedacontroller | eda | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
+automationhub | automationhub | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
+awx | awx | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
+gateway | gateway | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
+(4 rows)
+----
++
+.. For each component database, add the `CREATEDB` role to its `Owner`. For example:
++
+----
+postgres=# ALTER ROLE awx WITH CREATEDB;
+postgres=# \q
+----
++
+Replace `awx` with the database owner.
++
+.. With the `CREATEDB` role in place, go to the path where the artifact is mounted and run the `pg_restore` commands.
++
+----
+bash$ cd /var/lib/pgsql/backups
+bash$ pg_restore --clean --create --no-owner -h <hostname> -U <username> -d template1 <component>/<component name>.pgc
+----
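++
+For example, assuming hypothetical values where the {ControllerName} database `awx` is owned by user `awx` on database host `db.example.com`, the restore command looks like this:
++
+----
+bash$ pg_restore --clean --create --no-owner -h db.example.com -U awx -d template1 controller/controller.pgc
+----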
++
+.. After the restore, remove the `CREATEDB` role from the user. For example:
++
+----
+postgres=# ALTER ROLE awx WITH NOCREATEDB;
+postgres=# \q
+----
++
+Replace `awx` with each user that has the role.
+
+. Start the containerized services, except the database.
+.. In all nodes, if Performance Co-Pilot is configured, run the following command:
++
+----
+$ systemctl --user start pcp
+----
++
+.. Access the {ControllerName} node and run:
++
+----
+$ systemctl --user start automation-controller-task automation-controller-web automation-controller-rsyslog
+$ systemctl --user start receptor
+----
++
+.. Access the {HubName} node and run:
++
+----
+$ systemctl --user start automation-hub-api automation-hub-content automation-hub-web automation-hub-worker-1 automation-hub-worker-2
+----
++
+.. Access the {EDAName} node and run:
++
+----
+$ systemctl --user start automation-eda-scheduler automation-eda-daphne automation-eda-web automation-eda-api automation-eda-worker-1 automation-eda-worker-2 automation-eda-activation-worker-1 automation-eda-activation-worker-2
+----
++
+.. Access the {Gateway} node and run:
++
+----
+$ systemctl --user start automation-gateway automation-gateway-proxy
+----
++
+.. Access the {Gateway} node when using standalone Redis, or all nodes from the Redis group in your inventory when using clustered Redis, and run:
++
+----
+$ systemctl --user start redis-unix redis-tcp
+----
++
+[NOTE]
+====
+In an enterprise deployment, the components run on different nodes. Run the commands on each component node.
+====
diff --git a/downstream/modules/aap-migration/proc-containerized-target-prep.adoc b/downstream/modules/aap-migration/proc-containerized-target-prep.adoc
new file mode 100644
index 0000000000..0f9eda990b
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-containerized-target-prep.adoc
@@ -0,0 +1,76 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="containerized-target-prep"]
+= Preparing and assessing the target environment
+
+To prepare your target environment, perform the following steps.
+
+.Procedure
+
+. Validate the file system home folder size and make sure it has enough space to transfer the artifact.
+. Transfer the artifact to the nodes where you will be working by using `scp` or any preferred file transfer method. It is recommended that you work from the {Gateway} node because it has access to most systems. However, if you have access or file system space limitations due to the PostgreSQL dumps, work from the database node instead.
+. Download the latest version of containerized {PlatformNameShort} from the link:{PlatformDownloadUrl}[{PlatformNameShort} download page].
+. Validate the artifact checksum.
+. Extract the artifact in the home folder of the user running the containers.
++
+----
+$ cd ~
+----
++
+----
+$ sha256sum --check artifact.tar.sha256
+----
++
+----
+$ tar xf artifact.tar
+----
++
+----
+$ cd artifact
+----
++
+----
+$ sha256sum --check sha256sum.txt
+----
+
+. Generate an inventory file for the containerized deployment.
++
+Configure the inventory file to match the same topology as the source environment. Configure the component database names and the `secret_key` values seen in the `secrets.yml` file from the artifact. You can do this by either setting the extra variables in the inventory file or by using the `secrets.yml` file as an additional variables file when running the installation program.
++
+.. Option 1: Extra variables in the inventory file
++
+----
+$ egrep 'pg_database|_key' inventory
+controller_pg_database=<value>
+controller_secret_key=<value>
+gateway_pg_database=<value>
+gateway_secret_key=<value>
+hub_pg_database=<value>
+hub_secret_key=<value>
+_hub_database_fields=<value>
+----
++
+[NOTE]
+====
+The `_hub_database_fields` value comes from the `hub_db_fields_encryption_key` value in your secret.
+====
++
+.. Option 2: Additional variables file
++
+----
+$ ansible-playbook -i inventory ansible.containerized_installer.install -e @~/artifact/secrets.yml -e "_hub_database_fields='{{ hub_db_fields_encryption_key }}'"
+----
+
+. Install and configure the containerized target environment.
+. Verify that the PostgreSQL database is on version 15.
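++
+For example, you can check the server version from any node that has the `psql` client installed, assuming hypothetical connection values:
++
+----
+$ psql -h db.example.com -U postgres -t -c 'SHOW server_version;'
+----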
+. Create a backup of the initial containerized environment.
++
+----
+$ ansible-playbook -i inventory ansible.containerized_installer.backup
+----
+
+. Ensure the fresh installation is functional.
+
+[role="_additional-resources"]
+.Additional resources
+* link:{URLContainerizedInstall}/aap-containerized-installation#backing-up-containerized-ansible-automation-platform[Backing up containerized {PlatformNameShort}]
diff --git a/downstream/modules/aap-migration/proc-containerized-validation.adoc b/downstream/modules/aap-migration/proc-containerized-validation.adoc
new file mode 100644
index 0000000000..81f242c410
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-containerized-validation.adoc
@@ -0,0 +1,58 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="containerized-validation"]
+= Validating the target environment
+
+After completing the migration, validate your target environment to ensure all components are functional and operating as expected.
+
+.Procedure
+. Verify all migrated components are functional.
++
+To ensure that all components have been successfully migrated, verify that each component is operational and accessible:
++
+.. {GatewayStart}: Access the {PlatformNameShort} URL at `https://<gateway hostname>/` and verify that the dashboard loads correctly. Check that the {Gateway} service is running and properly connected to {ControllerName}.
+.. {ControllerNameStart}: Under *Automation Execution*, check that projects, inventories, and job templates are present and properly configured.
+.. {HubNameStart}: Under *Automation Content*, verify that collections, namespaces, and their contents are visible.
+.. {EDAName} (if applicable): Under *Automation Decisions*, verify that rule audits, rulebook activations, and projects are accessible.
++
+For each component, check the logs to ensure there are no startup errors or warnings:
++
+----
+podman logs <container name>
+----
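++
+If you are unsure of the container names, you can list the running containers first, for example:
++
+----
+podman ps --format "{{.Names}}"
+----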
+
+. Test workflows and automation processes.
++
+After you have confirmed that all components are functional, test critical automation workflows to ensure they operate correctly in the containerized environment:
++
+.. Run job templates: Run several key job templates, including those with dependencies on various credential types.
+.. Test workflow templates: Run workflow templates to ensure that workflow nodes run in the correct order and that the workflow completes successfully.
+.. Verify execution environments: Ensure that jobs run in the appropriate execution environments and can access required dependencies.
+.. Check job artifacts: Verify that job artifacts are properly stored and accessible.
+.. Validate job scheduling: Test scheduled jobs to ensure they run at the expected times.
+
+. Validate user access and permissions.
++
+Confirm that user accounts, teams, and roles were correctly migrated:
++
+.. User authentication: Test login functionality with various user accounts to ensure authentication works correctly.
+.. Role-based access controls: Verify that users have appropriate permissions for organizations, projects, inventories, and job templates.
+.. Team memberships: Confirm that team memberships and team-based permissions are intact.
+.. API access: Test API tokens and ensure that API access is functioning properly.
+.. SSO integration (if applicable): Verify that Single Sign-On authentication is working correctly.
+
+. Confirm content synchronization and availability.
++
+Ensure that all content sources are properly configured and accessible:
++
+** Collection synchronization: Check that you can synchronize collections from a remote.
+** Collection upload: Check that you can upload collections.
+** Collection repositories: Verify that collections are available in {HubName} and can be used in execution environments.
+** Project synchronization: Check that projects can sync content from source control repositories.
+** External content sources: Test synchronization from {HubName} and {Galaxy} (if configured).
+** Execution environment availability: Confirm that all required execution environments are available and can be accessed by the execution nodes.
+** Content dependencies: Verify that content dependencies are correctly resolved when running jobs.
+
+[role="_additional-resources"]
+.Additional resources
+* link:{URLContainerizedInstall}/troubleshooting-containerized-ansible-automation-platform[Troubleshooting containerized {PlatformNameShort}]
diff --git a/downstream/modules/aap-migration/proc-managed-post-import.adoc b/downstream/modules/aap-migration/proc-managed-post-import.adoc
new file mode 100644
index 0000000000..70f53a6398
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-managed-post-import.adoc
@@ -0,0 +1,21 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="managed-post-import"]
+= Reconciling the target environment post-migration
+
+After a successful migration, perform the following tasks:
+
+.Procedure
+. Log in to the Managed {PlatformNameShort} instance by using the local administrator account to confirm that data was properly imported.
+. You might need to perform the following actions based on the configuration of the source deployment:
+.. Reconfigure SSO authenticators and mappings to reflect the new URLs.
+.. Update {PrivateHubName} content to reflect the new URLs.
+... Run the following command to update the {HubName} repositories:
++
+----
+curl -d '{"verify_checksums": true }' -X POST -k https://<gateway hostname>/api/galaxy/pulp/api/v3/repair/ -u <username>:<password>
+----
+... Perform a sync on any repositories configured in {HubName}.
+... Push any custom execution environments from the source {HubName} to the target {HubName}.
+.. Reconfigure {AutomationMesh}.
+. Following migration, you can open support tickets to request standard SRE tasks, such as configuring custom certificates, a custom domain, or connectivity through private endpoints.
diff --git a/downstream/modules/aap-migration/proc-managed-target-migration.adoc b/downstream/modules/aap-migration/proc-managed-target-migration.adoc
new file mode 100644
index 0000000000..f340f17b91
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-managed-target-migration.adoc
@@ -0,0 +1,20 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="managed-target-migration"]
+= Migrating to Managed {PlatformNameShort}
+
+.Prerequisites
+* You have a migration artifact from your source environment.
+
+.Procedure
+
+. Submit a link:https://access.redhat.com/support/cases/#/case/new/get-support?caseCreate=true[support ticket] on the Red Hat Customer Portal requesting a migration to Managed {PlatformNameShort}.
++
+Include the following information in the support ticket:
++
+** Source installation type (RPM, Containerized, OpenShift)
+** Managed {PlatformNameShort} URL or deployment name
+** Source version (installer or Operator version)
+. The Ansible Site Reliability Engineering (SRE) team provides instructions in the support ticket on how to upload the resulting migration artifact to secure storage for processing.
+. The Ansible SRE team imports the migration artifact into the identified target instance and notifies the customer through the support ticket.
+. The Ansible SRE team notifies you when the migration is complete.
diff --git a/downstream/modules/aap-migration/proc-ocp-post-import.adoc b/downstream/modules/aap-migration/proc-ocp-post-import.adoc
new file mode 100644
index 0000000000..faf58635f4
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-ocp-post-import.adoc
@@ -0,0 +1,13 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="ocp-post-import"]
+= Reconciling the target environment post-import
+
+After importing your migration artifact, perform the following steps to reconcile your target environment.
+
+.Procedure
+. Modify the Django `SECRET_KEY` secrets to match the source platform.
+. Deprovision and reconfigure {Gateway} service nodes.
+. Re-run {Gateway} nodes and services register logic.
+. Convert container-specific settings to {OCPShort}-appropriate formats.
+. Reconcile container resource allocations to {OCPShort} resources.
diff --git a/downstream/modules/aap-migration/proc-ocp-target-import.adoc b/downstream/modules/aap-migration/proc-ocp-target-import.adoc
new file mode 100644
index 0000000000..9ca5aa4241
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-ocp-target-import.adoc
@@ -0,0 +1,393 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="ocp-target-import"]
+= Importing the migration content to the target environment
+
+To import your environment, scale down {PlatformNameShort} components, restore databases, replace encryption secrets, and scale services back up.
+
+[NOTE]
+====
+This procedure assumes you have the latest version of {PlatformNameShort}, named `aap`, deployed in the default `aap` namespace, with all default database names and database users.
+====
+
+.Procedure
+
+. Begin by scaling down the {PlatformNameShort} deployment using `idle_aap`.
++
+----
+oc patch ansibleautomationplatform aap --type merge -p '{"spec":{"idle_aap":true}}'
+----
++
+Wait for component pods to stop. Only the six Operator pods remain running.
++
+----
+NAME READY STATUS RESTARTS AGE
+pod/aap-controller-migration-4.6.13-5swc6 0/1 Completed 0 160m
+pod/aap-gateway-operator-controller-manager-6b75c95458-4zrxv 2/2 Running 0 26h
+pod/ansible-lightspeed-operator-controller-manager-b674c55b8-qncjp 2/2 Running 0 45h
+pod/automation-controller-operator-controller-manager-6b79d48d4cchn 2/2 Running 0 45h
+pod/automation-hub-operator-controller-manager-5cd674c984-5njfj 2/2 Running 0 45h
+pod/eda-server-operator-controller-manager-645f4db5-d2flt 2/2 Running 0 45h
+pod/resource-operator-controller-manager-86b8f7bb54-cvz6d 2/2 Running 0 45h
+----
+
+. Scale down the {PlatformNameShort} Gateway Operator and {OperatorController}.
++
+----
+oc scale --replicas=0 deployment aap-gateway-operator-controller-manager automation-controller-operator-controller-manager
+----
++
+Example output:
++
+----
+deployment.apps/aap-gateway-operator-controller-manager scaled
+deployment.apps/automation-controller-operator-controller-manager scaled
+----
+
+. Scale up the idled Postgres `StatefulSet`.
++
+----
+oc scale --replicas=1 statefulset.apps/aap-postgres-15
+----
+
+. Create a temporary Persistent Volume Claim (PVC) with appropriate settings and sizing.
++
+`aap-temp-pvc.yaml`
++
+----
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: aap-temp-pvc
+  namespace: aap
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 200Gi
+----
++
+----
+oc create -f aap-temp-pvc.yaml
+----
+
+. Obtain the existing PostgreSQL image to use for the temporary deployment.
++
+----
+echo $(oc get pod/aap-postgres-15-0 -o jsonpath="{.spec.containers[*].image}")
+----
+
+. Create a temporary PostgreSQL deployment with the mounted temporary PVC.
++
+`aap-temp-postgres.yaml`
++
+----
+kind: Deployment
+apiVersion: apps/v1
+metadata:
+  name: aap-temp-postgres
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: aap-temp-postgres
+  template:
+    metadata:
+      labels:
+        app: aap-temp-postgres
+    spec:
+      containers:
+        - name: aap-temp-postgres
+          image: <PostgreSQL image from the previous step>
+          command:
+            - /bin/sh
+            - '-c'
+            - sleep infinity
+          imagePullPolicy: Always
+          securityContext:
+            runAsNonRoot: true
+            allowPrivilegeEscalation: false
+          volumeMounts:
+            - name: aap-temp-pvc
+              mountPath: /tmp/aap-temp-pvc
+      volumes:
+        - name: aap-temp-pvc
+          persistentVolumeClaim:
+            claimName: aap-temp-pvc
+----
++
+----
+oc create -f aap-temp-postgres.yaml
+----
+
+. Copy the export artifact to the temporary PostgreSQL pod.
++
+First, obtain the pod name and set it as an environment variable:
++
+----
+export AAP_TEMP_POSTGRES=$(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-temp-postgres)
+----
++
+Test the environment variable:
++
+----
+echo $AAP_TEMP_POSTGRES
+----
++
+Example output:
++
+----
+aap-temp-postgres-7b6c57f87f-s2ldp
+----
++
+Copy the artifact and checksum to the PVC:
++
+----
+oc cp artifact.tar $AAP_TEMP_POSTGRES:/tmp/aap-temp-pvc/
+oc cp artifact.tar.sha256 $AAP_TEMP_POSTGRES:/tmp/aap-temp-pvc/
+----
+
+. Restore databases to {PlatformNameShort} PostgreSQL using the temporary PostgreSQL pod.
++
+First, obtain PostgreSQL passwords for all three databases and the PostgreSQL admin password:
++
+----
+echo
+for secret in aap-controller-postgres-configuration aap-hub-postgres-configuration aap-gateway-postgres-configuration
+do
+  echo $secret
+  echo "PASSWORD: `oc get secrets $secret -o jsonpath="{.data['password']}" | base64 -d`"
+  echo "USER: `oc get secrets $secret -o jsonpath="{.data['username']}" | base64 -d`"
+  echo "DATABASE: `oc get secrets $secret -o jsonpath="{.data['database']}" | base64 -d`"
+  echo
+done && echo "POSTGRES ADMIN PASSWORD: `oc get secrets aap-gateway-postgres-configuration -o jsonpath="{.data['postgres_admin_password']}" | base64 -d`"
+----
++
+Enter the temporary PostgreSQL deployment and change directory to the mounted PVC containing the copied artifact:
++
+----
+oc exec -it deployment.apps/aap-temp-postgres -- /bin/bash
+----
++
+Inside the pod, change directory to `/tmp/aap-temp-pvc` and list its contents:
++
+----
+cd /tmp/aap-temp-pvc && ls -l
+----
++
+Example output:
++
+----
+total 2240
+-rw-r--r-- 1 1000900000 1000900000 2273280 Jun 13 17:41 artifact.tar
+-rw-r--r-- 1 1000900000 1000900000 79 Jun 13 17:42 artifact.tar.sha256
+drwxrws---. 2 root 1000900000 16384 Jun 13 17:40 lost+found
+----
++
+Verify the archive:
++
+----
+sha256sum --check artifact.tar.sha256
+----
++
+Example output:
++
+----
+artifact.tar: OK
+----
++
+Extract the artifact and verify its contents:
++
+----
+tar xf artifact.tar && cd artifact && sha256sum --check sha256sum.txt
+----
++
+Example output:
++
+----
+./controller/controller.pgc: OK
+./gateway/gateway.pgc: OK
+./hub/hub.pgc: OK
+----
++
+Drop the {ControllerName} database:
++
+----
+dropdb -h aap-postgres-15 automationcontroller
+----
++
+Alter the user temporarily with the `CREATEDB` role:
++
+----
+postgres=# ALTER USER automationcontroller WITH CREATEDB;
+----
++
+Create the database:
++
+----
+createdb -h aap-postgres-15 -U automationcontroller automationcontroller
+----
++
+Revert the temporary user permission:
++
+----
+postgres=# ALTER USER automationcontroller WITH NOCREATEDB;
+----
++
+Restore the {ControllerName} database:
++
+----
+pg_restore --clean-if-exists --no-owner -h aap-postgres-15 -U automationcontroller -d automationcontroller controller/controller.pgc
+----
++
+Restore the {HubName} database:
++
+----
+pg_restore --clean-if-exists --no-owner -h aap-postgres-15 -U automationhub -d automationhub hub/hub.pgc
+----
++
+Restore the {Gateway} database:
++
+----
+pg_restore --clean-if-exists --no-owner -h aap-postgres-15 -U gateway -d gateway gateway/gateway.pgc
+----
++
+Exit the pod:
++
+----
+exit
+----
+
+. Replace the database field encryption secrets with the corresponding values from the `secrets.yml` file in the artifact.
++
+----
+oc set data secret/aap-controller-secret-key secret_key="<controller_secret_key value>"
+oc set data secret/aap-db-fields-encryption-secret secret_key="<gateway_secret_key value>"
+oc set data secret/aap-hub-db-fields-encryption database_fields.symmetric.key="<hub_db_fields_encryption_key value>"
+----
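++
+For example, rather than pasting each value manually, you can read it directly from the extracted artifact. This is a sketch that assumes the artifact is extracted in the current directory and that `secrets.yml` uses simple single-line `key: value` entries:
++
+----
+oc set data secret/aap-controller-secret-key secret_key="$(awk '/^controller_secret_key:/ {print $2}' artifact/secrets.yml)"
+----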
+
+. Clean up the temporary PostgreSQL deployment and PVC.
++
+----
+oc delete -f aap-temp-postgres.yaml
+----
++
+----
+oc delete -f aap-temp-pvc.yaml
+----
+
+. Scale the {Gateway} and {ControllerName} Operators back up and wait for the {Gateway} Operator reconciliation loop to complete.
++
+The PostgreSQL `StatefulSet` returns to idle.
++
+----
+oc scale --replicas=1 deployment aap-gateway-operator-controller-manager automation-controller-operator-controller-manager
+----
++
+Example output:
++
+----
+deployment.apps/aap-gateway-operator-controller-manager scaled
+deployment.apps/automation-controller-operator-controller-manager scaled
+----
++
+----
+oc logs -f $(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-gateway-operator)
+----
++
+Wait for reconciliation to stop.
++
+Example output:
++
+----
+META: ending play
+{"level":"info", "ts":"2025-06-12T15:41:29Z","logger":"runner", "msg": "Ansible-runner exited successfully", "job": "5672263053238024330","name":"aap", "namespace": "aap"}
+PLAY RECAP ***********
+localhost : ok=45 changed=0 unreachable=0 failed=0 skipped=63 rescued=0 ignored=0
+----
+
+. Scale {PlatformNameShort} back up using `idle_aap`.
++
+----
+oc patch ansibleautomationplatform aap --type merge -p '{"spec":{"idle_aap":false}}'
+----
++
+Example output:
++
+----
+ansibleautomationplatform.aap.ansible.com/aap patched
+----
+
+. Wait for the `aap-gateway` pod to be running and clean up old service endpoints.
++
+Wait for the pod to be running.
++
+Example output:
++
+----
+pod/aap-gateway-6c989b846c-47b91 2/2 Running 0 45s
+----
++
+----
+for i in HTTPPort Route ServiceNode; do oc exec -it deployment.apps/aap-gateway -- aap-gateway-manage shell -c 'from aap_gateway_api.models import '$i'; print('$i'.objects.all().delete())'; done
+----
++
+Example output:
++
+----
+(23, {'aap_gateway_api.ServiceAPIRoute': 4, 'aap_gateway_api.AdditionalRoute': 7, 'aap_gateway_api.Route': 11, 'aap_gateway_api.HTTPPort': 1})
+(0, {})
+(4, {'aap_gateway_api.ServiceNode': 4})
+----
+. Run `awx-manage` to deprovision instances.
++
+Obtain the {ControllerName} pod:
++
+----
+export AAP_CONTROLLER_POD=$(oc get pods --no-headers -o custom-columns=":metadata.name" | grep aap-controller-task)
+----
++
+Test the environment variable:
++
+----
+echo $AAP_CONTROLLER_POD
+----
++
+Example output:
++
+----
+aap-controller-task-759b6d9759-r59q9
+----
++
+Enter the {ControllerName} pod and list the instances:
++
+----
+oc exec -it $AAP_CONTROLLER_POD -- /bin/bash
+awx-manage list_instances
+----
++
+Example output:
++
+----
+bash-4.4$
+[controlplane capacity=642 policy=100%]
+aap-controller-task-759b6d9759-r59q9 capacity=642 node_type=control version=4.6.15 heartbeat="2025-06-12 21:39:48"
+node1.example.org capacity=0 node_type=hybrid version=4.6.13 heartbeat="2025-05-30 17:22:11"
+[default capacity=0 policy=100%]
+node1.example.org capacity=0 node_type=hybrid version=4.6.13 heartbeat="2025-05-30 17:22:11"
+node2.example.org capacity=0 node_type=execution version=ansible-runner-2.4.1 heartbeat="2025-05-30 17:22:08"
+----
++
+Remove the old nodes with `awx-manage`, leaving only `aap-controller-task`:
++
+----
+awx-manage deprovision_instance --host=node1.example.org
+awx-manage deprovision_instance --host=node2.example.org
+----
+
+. Run the `curl` command to repair {HubName} filesystem data.
++
+----
+curl -d '{"verify_checksums": true}' -X POST -k https://<gateway hostname>/api/galaxy/pulp/api/v3/repair/ -u <username>:<password>
+----
diff --git a/downstream/modules/aap-migration/proc-ocp-target-prep.adoc b/downstream/modules/aap-migration/proc-ocp-target-prep.adoc
new file mode 100644
index 0000000000..b185703ea4
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-ocp-target-prep.adoc
@@ -0,0 +1,15 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="ocp-target-prep"]
+= Preparing and assessing the target environment
+
+To prepare and assess your target environment, perform the following steps.
+
+.Procedure
+
+. Configure {OperatorPlatformNameShort} for an {PlatformNameShort} deployment.
+. Set up the database configuration (internal or external).
+. Set up the Redis configuration (internal or external).
+. Install {PlatformNameShort} using {OperatorPlatformNameShort}.
+. Create a backup of the initial {OCPShort} deployment.
+. Verify the fresh installation is functional.
diff --git a/downstream/modules/aap-migration/proc-ocp-validation.adoc b/downstream/modules/aap-migration/proc-ocp-validation.adoc
new file mode 100644
index 0000000000..c48b56d9f4
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-ocp-validation.adoc
@@ -0,0 +1,13 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="ocp-validation"]
+= Validating the target environment
+
+To validate your migrated environment, perform the following steps.
+
+.Procedure
+. Verify all migrated components are functional.
+. Test workflows and automation processes.
+. Validate user access and permissions.
+. Confirm content synchronization and availability.
+. Test integration with {OCPShort}-specific features.
diff --git a/downstream/modules/aap-migration/proc-rpm-environment-source-prep.adoc b/downstream/modules/aap-migration/proc-rpm-environment-source-prep.adoc
new file mode 100644
index 0000000000..d7f0a99882
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-rpm-environment-source-prep.adoc
@@ -0,0 +1,23 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rpm-environment-source-prep"]
+= Preparing and assessing the source environment
+
+Before beginning your migration, document your current RPM deployment. This documentation serves as a reference throughout the migration process and is critical for properly configuring your target environment.
+
+.Procedure
+. Document the full topology of your current RPM deployment:
+.. Map out all servers, nodes, and their roles (for example, control nodes, execution nodes, database servers).
+.. Note the hostname, IP address, and function of each server in your deployment.
+.. Document the network configuration between components.
+. Document {PlatformNameShort} version information:
+.. Record the exact {PlatformNameShort} version (X.Y) currently deployed.
+. Document the specific version of each component:
+.. {ControllerNameStart} version
+.. {HubNameStart} version
+.. {GatewayStart} version
+. Document the database configuration:
+.. Database names for each component
+.. Database users and roles
+.. Connection parameters and authentication methods
+.. Any custom PostgreSQL configurations or optimizations
diff --git a/downstream/modules/aap-migration/proc-rpm-source-environment-export.adoc b/downstream/modules/aap-migration/proc-rpm-source-environment-export.adoc
new file mode 100644
index 0000000000..e400c402c0
--- /dev/null
+++ b/downstream/modules/aap-migration/proc-rpm-source-environment-export.adoc
@@ -0,0 +1,191 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rpm-source-environment-export"]
+= Exporting the source environment
+
+From your source environment, export the data and configurations needed for migration.
+
+.Procedure
+. Verify the PostgreSQL database version is PostgreSQL version 15.
++
+You can verify your current PostgreSQL version by connecting to your database server and running the following command as the `postgres` user:
++
+----
+$ psql -c 'SELECT version();'
+----
++
+[IMPORTANT]
+====
+PostgreSQL version 15 is a strict requirement for the migration process to succeed. If running PostgreSQL 13 or earlier, upgrade to version 15 before proceeding with the migration.
+
+If using an {PlatformNameShort} managed database, re-run the installation program to upgrade the PostgreSQL version. If using a customer-provided (external) database, contact your database administrator or service provider to confirm the version and arrange for an upgrade if required.
+====
++
+. Create a complete backup of the source environment:
++
+----
+$ ./setup.sh -e 'backup_dest=/path/to/backup_dir/' -b
+----
++
+. Get the connection settings from one node of each component group.
++
+For each command, access the host and become the `root` user.
++
+** Access the {ControllerName} node and run:
++
+----
+# awx-manage print_settings | grep '^DATABASES'
+----
+** Access the {HubName} node and run:
++
+----
+# grep '^DATABASES' /etc/pulp/settings.py
+----
+** Access the {Gateway} node and run:
++
+----
+# aap-gateway-manage print_settings | grep '^DATABASES'
+----
+
+. Stage the manually created artifact on the {Gateway} node.
++
+----
+# mkdir -p /tmp/backups/artifact/{controller,gateway,hub}
+----
++
+----
+# mkdir -p /tmp/backups/artifact/controller/custom_configs
+----
++
+----
+# touch /tmp/backups/artifact/secrets.yml
+----
++
+----
+# cd /tmp/backups/artifact/
+----
+
+. Validate the database size and make sure you have enough space on the filesystem for the `pg_dump`.
++
+You can verify the database sizes by connecting to your database server and running the following command as the `postgres` user:
++
+----
+$ psql -c '\l+'
+----
++
+Adjust the filesystem size or mount an external filesystem as needed before performing the next step.
++
+[NOTE]
+====
+This procedure assumes that all target files will be sent to the `/tmp` filesystem. You must adjust the commands to match your environment's needs.
+====
++
+. Perform database dumps of all components on the {Gateway} node within the artifact you created.
++
+----
+# psql -h <hostname> -U <username> -d <database name> -t -c 'SHOW server_version;' # ensure connectivity to the database
+----
++
+----
+# pg_dump -h <hostname> -U <username> -d <database name> --clean --create -Fc -f <component>/<component name>.pgc
+----
++
+----
+# ls -ld <component>/<component name>.pgc
+----
++
+----
+# echo "<component>_pg_database: <database name>" >> secrets.yml ## Add the database name for the component to the secrets file
+----
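++
+For example, assuming hypothetical values where the {ControllerName} database is named `awx`, owned by user `awx`, and hosted on `db.example.com`:
++
+----
+# pg_dump -h db.example.com -U awx -d awx --clean --create -Fc -f controller/controller.pgc
+# echo "controller_pg_database: awx" >> secrets.yml
+----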
+
+. Export secrets from the RPM environment from one node of each component group.
++
+For each of the following steps, use the `root` user to run the commands.
++
+** Access the {ControllerName} node, gather the secret key, and add it to the `controller_secret_key` value in the `secrets.yml` file.
++
+----
+# cat /etc/tower/SECRET_KEY
+----
+** Access the {HubName} node, gather the secret key, and add it to the `hub_secret_key` value in the `secrets.yml` file.
++
+----
+# grep 'SECRET_KEY' /etc/pulp/settings.py | awk -F'=' '{ print $2}'
+----
+** Access the {HubName} node, gather the `database_fields.symmetric.key` value, and add it to the `hub_db_fields_encryption_key` value in the `secrets.yml` file.
++
+----
+# cat /etc/pulp/certs/database_fields.symmetric.key
+----
+** Access the {Gateway} node, gather the secret key, and add it to the `gateway_secret_key` value in the `secrets.yml` file.
++
+----
+# cat /etc/ansible-automation-platform/gateway/SECRET_KEY
+----
+
+. Export {ControllerName} custom configurations.
++
+If any custom settings exist in `/etc/tower/conf.d`, copy them to `/tmp/backups/artifact/controller/custom_configs`.
++
+The following configuration files on {ControllerName} are managed by the installation program and are not considered custom:
+
+* `/etc/tower/conf.d/postgres.py`
+* `/etc/tower/conf.d/channels.py`
+* `/etc/tower/conf.d/caching.py`
+* `/etc/tower/conf.d/cluster_host_id.py`
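+
+For example, if a hypothetical custom settings file `/etc/tower/conf.d/custom_logging.py` exists, copy it into the artifact:
+
+----
+# cp /etc/tower/conf.d/custom_logging.py /tmp/backups/artifact/controller/custom_configs/
+----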
+
+. Package the artifact.
++
+----
+# cd /tmp/backups/artifact/
+----
++
+----
+# [ -f sha256sum.txt ] && rm -f sha256sum.txt; find . -type f -name "*.pgc" -exec sha256sum {} \; >> sha256sum.txt
+----
++
+----
+# cat sha256sum.txt
+----
++
+----
+# cd /tmp/backups/
+----
++
+----
+# tar cf artifact.tar artifact
+----
++
+----
+# sha256sum artifact.tar > artifact.tar.sha256
+----
++
+----
+# sha256sum --check artifact.tar.sha256
+----
++
+----
+# tar tvf artifact.tar
+----
++
+Example output of `tar tvf artifact.tar`:
++
+----
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:48 artifact/
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:33 artifact/controller/
+-rw-r--r-- ansible/ansible 732615 2025-05-08 16:26 artifact/controller/controller.pgc
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:33 artifact/controller/custom_configs/
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:11 artifact/gateway/
+-rw-r--r-- ansible/ansible 231155 2025-05-08 16:28 artifact/gateway/gateway.pgc
+drwxr-xr-x ansible/ansible 0 2025-05-08 16:26 artifact/hub/
+-rw-r--r-- ansible/ansible 29252002 2025-05-08 16:26 artifact/hub/hub.pgc
+-rw-r--r-- ansible/ansible 614 2025-05-08 16:24 artifact/secrets.yml
+-rw-r--r-- ansible/ansible 338 2025-05-08 16:48 artifact/sha256sum.txt
+----
+
+. Download the `artifact.tar` and `artifact.tar.sha256` to your local machine or transfer them to the target node with the `scp` command.
+
+[role="_additional-resources"]
+.Additional resources
+
+* link:{URLInstallationGuide}/assembly-platform-install-scenario#con-backup-aap_platform-install-scenario[Backing up your {PlatformNameShort} instance]
diff --git a/downstream/modules/aap-migration/ref-migration-artifact-checklist.adoc b/downstream/modules/aap-migration/ref-migration-artifact-checklist.adoc
new file mode 100644
index 0000000000..d067be57bd
--- /dev/null
+++ b/downstream/modules/aap-migration/ref-migration-artifact-checklist.adoc
@@ -0,0 +1,31 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="migration-artifact-checklist"]
+= Migration artifact creation checklist
+
+Use this checklist to verify the migration artifact.
+
+* Database dumps: Include complete database dumps for each component.
+** Ensure the {ControllerName} database (`controller.pgc`) is present in the artifact.
+** Ensure the {HubName} database (`hub.pgc`) is present in the artifact.
+** Ensure the {Gateway} database (`gateway.pgc`) is present in the artifact.
+
+* Secret dumps: Export and include all security-related information.
+** Validate that all secret values are present in the `secrets.yml` file.
+
+* Custom configurations: Package all customizations from the source environment.
+** Validate that any custom Python scripts or modules (for example `foo.py`, `bar.py`) are present in the artifact.
+** Document any non-standard configurations or environment-specific settings.
+
+* Database information: Document database details.
+** Include the database names for all components.
+** Document database users and required permissions.
+** Note any database-specific configurations or optimizations.
+
+* Verification: Ensure artifact integrity and completeness.
+** Verify that all required files are included in the artifact.
+** Verify that checksums exist for all included database files.
+** Test the artifact's structure and accessibility.
+** Consider encrypting the artifact for secure transfer to the target environment.
+
+* Document any known limitations or special considerations.
diff --git a/downstream/modules/analytics/con-jobs-explorer.adoc b/downstream/modules/analytics/con-jobs-explorer.adoc
index e412928e5f..d1d5a98656 100644
--- a/downstream/modules/analytics/con-jobs-explorer.adoc
+++ b/downstream/modules/analytics/con-jobs-explorer.adoc
@@ -35,4 +35,4 @@ You can click on the arrow icon next to the job *Id/Name* column to view more de

== Reviewing job details on {ControllerName}

-Click the job in the *Id/Name* column to view the job itself on the {ControllerName} job details page. For more information on viewing job details on {ControllerName}, see _Jobs_ in the {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-jobs[Automation Controller User Guide].
+Click the job in the *Id/Name* column to view the job itself on the {ControllerName} job details page. For more information on job settings for {ControllerName}, see Jobs in automation controller in the {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/using_automation_execution/controller-jobs[{TitleControllerUserGuide}].
\ No newline at end of file
diff --git a/downstream/modules/analytics/con-review-savings-calculations.adoc b/downstream/modules/analytics/con-review-savings-calculations.adoc
index 29d3addccd..6e72f8a094 100644
--- a/downstream/modules/analytics/con-review-savings-calculations.adoc
+++ b/downstream/modules/analytics/con-review-savings-calculations.adoc
@@ -6,7 +6,7 @@

= Review savings calculations for your automation plans

-The {planner} offers a calculation of how much time and money you can save by automating a job. {InsightsName} takes data from the plan details and the associated job template to provide you with an accurate projection of your cost savings when you complete this savings plan.
+The {planner} offers a calculation of how much time and money you can save by automating a job. Automation analytics takes data from the plan details and the associated job template to provide you with an accurate projection of your cost savings when you complete this savings plan.

To do so, navigate to your savings planner page, click the name of an existing plan, then navigate to the *Statistics* tab.
diff --git a/downstream/modules/analytics/proc-ignoring-nested-workflows-jobs.adoc b/downstream/modules/analytics/proc-ignoring-nested-workflows-jobs.adoc index 3573e9ae6e..934b2fea97 100644 --- a/downstream/modules/analytics/proc-ignoring-nested-workflows-jobs.adoc +++ b/downstream/modules/analytics/proc-ignoring-nested-workflows-jobs.adoc @@ -12,5 +12,5 @@ Select the settings icon on the *Job Explorer* view and use the toggle switch to Nested workflows allow you to create workflow job templates that call other workflow job templates. Nested workflows promotes reuse, as modular components, of workflows that include existing business logic and organizational requirements in automating complex processes and operations. -To learn more about nested workflows, see _Workflows_ in the {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-workflows[Automation Controller User Guide]. +To learn more about nested workflows, see Workflows in automation controller in the {BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/using_automation_execution/controller-workflows[{TitleControllerUserGuide}]. ==== diff --git a/downstream/modules/analytics/proc-link-plan-job-template.adoc b/downstream/modules/analytics/proc-link-plan-job-template.adoc index 47d7018e2d..5ee526ee80 100644 --- a/downstream/modules/analytics/proc-link-plan-job-template.adoc +++ b/downstream/modules/analytics/proc-link-plan-job-template.adoc @@ -6,7 +6,7 @@ = Link a savings plan to a job template -You can associate a job template to a savings plan to allow {InsightsShort} to provide a more accurate time and cost savings estimate for completing this savings plan. +You can associate a job template to a savings plan to allow automation analytics to provide a more accurate time and cost savings estimate for completing this savings plan. .Procedure . From the navigation panel, select {MenuAASavingsPlanner}. diff --git a/downstream/modules/analytics/proc-review-reports.adoc b/downstream/modules/analytics/proc-review-reports.adoc index 193155c99d..5d60d1b884 100644 --- a/downstream/modules/analytics/proc-review-reports.adoc +++ b/downstream/modules/analytics/proc-review-reports.adoc @@ -15,4 +15,4 @@ To view reports about your Ansible automation environment, proceed with the foll Each report presents data to monitor your Ansible automation environment. Use the filter toolbar on each report to adjust your graph view. -NOTE: We are constantly adding new reports to the system. If you have ideas for new reports that would be helpful for your team, please contact your account representative or log a feature enhancement for {InsightsShort}. +NOTE: We are constantly adding new reports to the system. If you have ideas for new reports that would be helpful for your team, please contact your account representative or log a feature enhancement for automation analytics. diff --git a/downstream/modules/builder/con-about-ee.adoc b/downstream/modules/builder/con-about-ee.adoc index 71d00bcacf..554a5905e3 100644 --- a/downstream/modules/builder/con-about-ee.adoc +++ b/downstream/modules/builder/con-about-ee.adoc @@ -7,10 +7,12 @@ All automation in {PlatformName} runs on container images called {ExecEnvName}. {ExecEnvNameStart} create a common language for communicating automation dependencies, and offer a standard way to build and distribute the automation environment. 
+Red Hat provides supported execution environments for you to use in the link:https://catalog.redhat.com/search?gs&q=execution%20environments&searchType=containers[Red Hat ecosystem catalog].
+
An {ExecEnvNameSing} should contain the following:

-* Ansible Core 2.15 or later
-* Python 3.8-3.11
+* Ansible Core 2.16 or later
+* Python 3.11 or later
* {Runner}
* Ansible content collections and their dependencies
* System dependencies
diff --git a/downstream/modules/builder/con-additional-build-files.adoc b/downstream/modules/builder/con-additional-build-files.adoc
index 12c57fa0e3..c4c9f817de 100644
--- a/downstream/modules/builder/con-additional-build-files.adoc
+++ b/downstream/modules/builder/con-additional-build-files.adoc
@@ -2,15 +2,18 @@

= Additional build files

-You can add any external file to the build context directory by referring or copying them to the `additional_build_steps` section of the definition file. The format is a list of dictionary values, each with a `src` and `dest` key and value.
+You can add any external files to the build context directory by referring to them or copying them in the `additional_build_files` section of the definition file. The format is a list of dictionary values, each with a `src` and `dest` key and value.

Each list item must be a dictionary containing the following required keys:

-`src`:: Specifies the source files to copy into the build context directory. This can be an absolute path (for example, `/home/user/.ansible.cfg`), or a path that is relative to the {ExecEnvShort} file. Relative paths can be glob expressions matching one or more files (for example, `files/*.cfg`).
+`src`:: Specifies the source files to copy into the build context directory.
+This can be an absolute path (for example, `/home/user/.ansible.cfg`), or a path that is relative to the {ExecEnvShort} file.
+Relative paths can be glob expressions matching one or more files (for example, `files/*.cfg`).

[NOTE]
====
Absolute paths can not include a regular expression. If `src` is a directory, the entire contents of that directory are copied to `dest`.
====

-`dest`:: Specifies a subdirectory path underneath the `_build` subdirectory of the build context directory that contains the source files (for example, `files/configs`). This can not be an absolute path or contain `..` within the path. {Builder} creates this directory for you if it does not already exist.
+`dest`:: Specifies a subdirectory path underneath the `_build` subdirectory of the build context directory that contains the source files (for example, `files/configs`).
+This cannot be an absolute path or contain `..` within the path. {Builder} creates this directory for you if it does not already exist.
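+
+For example, an entry that copies a local CA certificate into the build context looks like this:
+
+----
+additional_build_files:
+  - src: files/rootCA.crt
+    dest: configs
+----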
diff --git a/downstream/modules/builder/con-additional-custom-build-steps.adoc b/downstream/modules/builder/con-additional-custom-build-steps.adoc
index 336e769094..be71ad8d2c 100644
--- a/downstream/modules/builder/con-additional-custom-build-steps.adoc
+++ b/downstream/modules/builder/con-additional-custom-build-steps.adoc
@@ -2,9 +2,11 @@

= Additional custom build steps

-You can specify custom build commands for any build phase in the `additional_build_steps` section of the definition file. This allows fine-grained control over the build phases.
+You can specify custom build commands for any build phase in the `additional_build_steps` section of the definition file.
+This allows fine-grained control over the build phases.

-Use the `prepend_` and `append_` commands to add directives to the `Containerfile` that run either before or after the main build steps are executed. The commands must conform to any rules required for the runtime system.
+Use the `prepend_` and `append_` commands to add directives to the `Containerfile` that run either before or after the main build steps are executed.
+The commands must conform to any rules required for the runtime system.

See the following table for a list of values that can be used in `additional_build_steps`:

@@ -12,33 +14,26 @@ See the following table for a list of values that can be used in `additional_bui
|===
| Value | Description

-| `prepend_base`
-| Allows you to insert commands before building the base image.
+| `prepend_base` | Allows you to insert commands before building the base image.

-| `append_base`
-| Allows you to insert commands after building the base image.
+| `append_base` | Allows you to insert commands after building the base image.

-| `prepend_galaxy`
-| Allows you to insert before building the galaxy image.
+| `prepend_galaxy` | Allows you to insert commands before building the galaxy image.

-| `append_galaxy`
-| Allows you to insert after building the galaxy image.
+| `append_galaxy` | Allows you to insert commands after building the galaxy image.

-| `prepend_builder`
-| Allows you to insert commands before building the Python builder image.
+| `prepend_builder` | Allows you to insert commands before building the Python builder image.

-| `append_builder`
-| Allows you to insert commands after building the Python builder image.
+| `append_builder` | Allows you to insert commands after building the Python builder image.

-| `prepend_final`
-| Allows you to insert before building the final image.
+| `prepend_final` | Allows you to insert commands before building the final image.

-| `append_final`
-| Allows you to insert after building the final image.
+| `append_final` | Allows you to insert commands after building the final image.
|===

-The syntax for `additional_build_steps` supports both multi-line strings and lists. See the following examples:
+The syntax for `additional_build_steps` supports both multi-line strings and lists.
+See the following examples:

.A multi-line string entry
[example]
@@ -59,3 +54,26 @@ append_final:
- RUN echo This is a post-install command!
- RUN ls -la /etc
----
====
+
+.Copying arbitrary files to {ExecEnvShort}s
+[example]
+====
+----
+additional_build_files:
+  # copy arbitrary files next to this EE def into the build context - we can refer to them later...
+  - src: files/rootCA.crt
+    dest: configs
+
+additional_build_steps:
+  prepend_base:
+    # copy a custom CA cert into the base image and recompute the trust database
+    # because this is in "base", all stages will inherit (including the final EE)
+    - COPY _build/configs/rootCA.crt /usr/share/pki/ca-trust-source/anchors
+    - RUN update-ca-trust
+----
+====
+The `additional_build_files` section enables you to add `rootCA.crt` to the build context directory.
+When this file is copied to the build context directory, it can be used in the build process.
+To use the file, copy it from the build context directory by using the `COPY` directive specified in the `prepend_base` step of the `additional_build_steps` section.
+You can then perform any action based on the copied file, such as updating the CA certificate trust database by running `update-ca-trust`, as shown in this example.
+
diff --git a/downstream/modules/builder/con-build-an-ee-with-env-variables.adoc b/downstream/modules/builder/con-build-an-ee-with-env-variables.adoc
new file mode 100644
index 0000000000..f2990718e5
--- /dev/null
+++ b/downstream/modules/builder/con-build-an-ee-with-env-variables.adoc
@@ -0,0 +1,19 @@
+[id="con-build-an-ee-with-env-variables"]
+
+= Build {ExecEnvShort}s with environment variables
+
+The following example file specifies environment variables that might be required for the build process.
+
+To achieve this functionality, it uses the `ENV` variable definition in the `prepend_base` step of the `additional_build_steps` section.
+
+[example]
+====
+----
+---
+additional_build_steps:
+  prepend_base:
+    - ENV FOO=bar
+    - RUN echo $FOO > /tmp/file1.txt
+----
+====
+The same environment variables can be used in later stages of the build process.
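+
+For example, because the other stages build on the customized base image, a value set with `ENV` in `prepend_base` remains visible in a later step such as `append_final`. This is a hypothetical sketch:
+
+----
+additional_build_steps:
+  prepend_base:
+    - ENV FOO=bar
+  append_final:
+    - RUN echo $FOO > /tmp/file2.txt
+----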
diff --git a/downstream/modules/builder/con-build-ee-with-env-vars-for-galaxy.adoc b/downstream/modules/builder/con-build-ee-with-env-vars-for-galaxy.adoc
new file mode 100644
index 0000000000..bdd3b602b9
--- /dev/null
+++ b/downstream/modules/builder/con-build-ee-with-env-vars-for-galaxy.adoc
@@ -0,0 +1,30 @@
+[id="con-build-ee-with-env-vars-for-galaxy"]
+
+= Building {ExecEnvShort}s with environment variables for Galaxy configuration
+
+{Builder} schema 3 enables you to perform complex scenarios such as specifying custom Galaxy configurations.
+You can use this approach to pass sensitive information, such as authentication tokens, into the {ExecEnvShort} build without leaking them into the final execution environment image.
+
+The following example uses {Galaxy} Server environment variables.
+
+----
+additional_build_steps:
+  prepend_galaxy:
+    # Environment variables used for Galaxy client configurations
+    - ENV ANSIBLE_GALAXY_SERVER_LIST=automation_hub
+    - ENV ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_URL=https://console.redhat.com/api/automation-hub/content/xxxxxxx-synclist/
+    - ENV ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_AUTH_URL=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token
+    # define a custom build arg env passthru - we still also have to pass
+    # `--build-arg ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN` to get it to pick it up from the env
+    - ARG ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN
+
+options:
+  package_manager_path: /usr/bin/microdnf # downstream images use non-standard package manager
+----
+
+You can provide environment variables such as `ANSIBLE_GALAXY_SERVER_LIST`, `ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_URL`, and `ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_AUTH_URL` by using the `ENV` directive. For more information, see the link:https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#configuring-the-ansible-galaxy-client[Galaxy User Guide] in the Ansible documentation.
+
+For security reasons, you must not store sensitive information in `ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN`.
+You can use the `ARG` directive to receive sensitive information from the user as input.
+Use the `--build-arg` option to provide this information when invoking the `ansible-builder` command.
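+
+For example, a hypothetical invocation that reads the token from the environment and passes it through as a build argument:
+
+----
+$ export ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN=<token>
+$ ansible-builder build --build-arg ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN -t my-ee:latest
+----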
\ No newline at end of file diff --git a/downstream/modules/builder/con-building-definition-file.adoc b/downstream/modules/builder/con-building-definition-file.adoc index 3bf85b2d85..c5079e0fa0 100644 --- a/downstream/modules/builder/con-building-definition-file.adoc +++ b/downstream/modules/builder/con-building-definition-file.adoc @@ -2,12 +2,16 @@ = Building a definition file -After you install {Builder}, you can create a definition file that {Builder} uses to create your {ExecEnvNameSing} image. {Builder} makes an {ExecEnvNameSing} image by reading and validating your definition file, then creating a `Containerfile`, and finally passing the `Containerfile` to Podman, which then packages and creates your {ExecEnvNameSing} image. The definition file that you create must be in `yaml` format and contain different sections. The default definition filename, if not provided, is `execution-environment.yml`. For more information on the parts of a definition file, see xref:assembly-definition-file-breakdown[Breakdown of definition file content]. +You can use {Builder} to create an {ExecEnvShort}. +Building a new {ExecEnvShort} involves a definition that specifies which content you want to include in your {ExecEnvShort}, such as collections, Python requirements, and system-level packages. + +After you install {Builder}, you can create a definition file that {Builder} uses to create your {ExecEnvNameSing} image. +{Builder} makes an {ExecEnvNameSing} image by reading and validating your definition file, then creating a `Containerfile`, and finally passing the `Containerfile` to Podman, which then packages and creates your {ExecEnvNameSing} image. +The definition file that you create must be in `YAML` format, with a `.yaml` or `.yml extension`, and contain different sections. +The default definition filename, if not provided, is `execution-environment.yml`. For more information on the parts of a definition file, see xref:con-definition-file-breakdown[Breakdown of definition file content]. The following is an example of a version 3 definition file. Each definition file must specify the major version number of the {Builder} feature set it uses. If not specified, {Builder} defaults to version 1, making most new features and definition keywords unavailable. -.Definition file example -==== ---- version: 3 @@ -26,8 +30,8 @@ images: <3> name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel9:latest # Custom package manager path for the RHEL based images - options: <4> - package_manager_path: /usr/bin/microdnf +options: <4> + package_manager_path: /usr/bin/microdnf additional_build_steps: <5> prepend_base: @@ -49,7 +53,6 @@ additional_build_steps: <5> - RUN echo This is a post-install command! - RUN ls -la /etc ---- -==== <1> Lists default values for build arguments. <2> Specifies the location of various requirements files. @@ -57,7 +60,6 @@ additional_build_steps: <5> <4> Specifies options that can affect builder runtime functionality. <5> Commands for additional custom build steps. -[role="_additional-resources"] .Additional resources -* For more information about the definition file content, see xref:assembly-definition-file-breakdown[Breakdown of definition file content]. -* To read more about the differences between {Builder} versions 2 and 3, see the link:https://docs.ansible.com/ansible/latest/porting_guides/porting_guide_3.html[Ansible 3 Porting Guide]. 
+* For more information about the definition file content, see xref:con-definition-file-breakdown[Breakdown of definition file content]. +* To read more about the differences between {Builder} versions 2 and 3, see the link:https://ansible.readthedocs.io/projects/builder/en/latest/porting_guides/porting_guide/[Ansible Builder Porting Guide]. diff --git a/downstream/modules/builder/con-container_file.adoc b/downstream/modules/builder/con-container_file.adoc index 4521842fae..aed74638bd 100644 --- a/downstream/modules/builder/con-container_file.adoc +++ b/downstream/modules/builder/con-container_file.adoc @@ -2,10 +2,15 @@ = Containerfile -After your definition file is created, {Builder} reads and validates it, creates a `Containerfile` and container build context, and optionally passes these to Podman to build your {ExecEnvNameSing} image. The container build occurs in several distinct stages: `base` , `galaxy`, `builder`, and `final`. The image build steps (along with any corresponding custom `prepend_` and `append_` steps defined in `additional_build_steps`) are: +After you have created your definition file, {Builder} reads and validates it, creates a `Containerfile` and container build context, and optionally passes these to Podman to build your {ExecEnvNameSing} image. +The container build occurs in several distinct stages: `base`, `galaxy`, `builder`, and `final`. The image build steps (along with any corresponding custom `prepend_` and `append_` steps defined in `additional_build_steps`) are: -. During the `base` build stage, the specified base image is (optionally) customized with components required by other build stages, including Python, `pip`, `ansible-core`, and `ansible-runner`. The resulting image is then validated to ensure that the required components are available (as they may have already been present in the base image). Ephemeral copies of the resulting customized `base` image are used as the base for all other build stages. -. During the `galaxy` build stage, collections specified by the definition file are downloaded and stored for later installation during the `final` build stage. Python and system dependencies declared by the collections, if any, are also collected for later analysis. -. During the `builder` build stage, Python dependencies declared by collections are merged with those listed in the definition file. This final set of Python dependencies is downloaded and built as Python wheels and stored for later installation during the `final` build stage. +. During the `base` build stage, the specified base image is (optionally) customized with components required by other build stages, including Python, `pip`, `ansible-core`, and `ansible-runner`. +The resulting image is then validated to ensure that the required components are available (as they may have already been present in the base image). +Ephemeral copies of the resulting customized `base` image are used as the base for all other build stages. +. During the `galaxy` build stage, collections specified by the definition file are downloaded and stored for later installation during the `final` build stage. +Python and system dependencies declared by the collections, if any, are also collected for later analysis. +. During the `builder` build stage, Python dependencies declared by collections are merged with those listed in the definition file. +This final set of Python dependencies is downloaded and built as Python wheels and stored for later installation during the `final` build stage. .
During the `final` build stage, the previously-downloaded collections are installed, along with system packages and any previously-built Python packages that were declared as dependencies by the collections or listed in the definition file. //Note if a diagram with the Main step actions gets created, it should be included here. Check with @nitzmahone diff --git a/downstream/modules/builder/con-definition-dependencies.adoc b/downstream/modules/builder/con-definition-dependencies.adoc index 48909b41bf..860e838050 100644 --- a/downstream/modules/builder/con-definition-dependencies.adoc +++ b/downstream/modules/builder/con-definition-dependencies.adoc @@ -1,5 +1,7 @@ [id="con-definition-dependencies"] += Definition dependencies + You can include dependencies that must be installed into the final image in the dependencies section of your definition file. To avoid issues with your {ExecEnvNameSing} image, make sure that the entries for Galaxy, Python, and system point to a valid requirements file, or are valid content for their respective file types. diff --git a/downstream/modules/builder/con-definition-file-breakdown.adoc b/downstream/modules/builder/con-definition-file-breakdown.adoc new file mode 100644 index 0000000000..896c33fcb8 --- /dev/null +++ b/downstream/modules/builder/con-definition-file-breakdown.adoc @@ -0,0 +1,7 @@ +[id="con-definition-file-breakdown"] + += Breakdown of definition file content + +You must provide a definition file to build {ExecEnvName}s with {Builder}, because it specifies the content that is included in the {ExecEnvNameSing} container image. + +The following sections break down the different parts of a definition file. \ No newline at end of file diff --git a/downstream/modules/builder/con-ee-precedence.adoc b/downstream/modules/builder/con-ee-precedence.adoc index cf24802f1a..a752c080be 100644 --- a/downstream/modules/builder/con-ee-precedence.adoc +++ b/downstream/modules/builder/con-ee-precedence.adoc @@ -2,13 +2,13 @@ = {ExecEnvNameStart} precedence -Project updates will always use the control plane {ExecEnvName} by default, however, jobs will use the first available {ExecEnvName} as follows: +Project updates always use the control plane {ExecEnvName} by default; however, jobs use the first available {ExecEnvName} as follows: . The `execution_environment` defined on the template (job template or inventory source) that created the job. . The `default_environment` defined on the project that the job uses. . The `default_environment` defined on the organization of the job. . The `default_environment` defined on the organization of the inventory the job uses. -. The current `DEFAULT_EXECUTION_ENVIRONMENT` setting (configurable at `api/v2/settings/jobs/`) +. The current `DEFAULT_EXECUTION_ENVIRONMENT` setting (configurable at `api/v2/settings/system/`, as shown in the sketch after this list). . Any image from the `GLOBAL_JOB_EXECUTION_ENVIRONMENTS` setting. . Any other global {ExecEnvShort}.
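For example, you can inspect or update the `DEFAULT_EXECUTION_ENVIRONMENT` setting by calling the settings endpoint directly. This is a sketch only: the host name, credentials, and {ExecEnvShort} ID are placeholders, and your deployment may require token authentication instead:

----
# View the current settings, including DEFAULT_EXECUTION_ENVIRONMENT
$ curl -s -u admin:password https://controller.example.com/api/v2/settings/system/

# Set the default to the execution environment with ID 1
$ curl -s -u admin:password -X PATCH \
  -H "Content-Type: application/json" \
  -d '{"DEFAULT_EXECUTION_ENVIRONMENT": 1}' \
  https://controller.example.com/api/v2/settings/system/
----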
diff --git a/downstream/modules/builder/con-galaxy-dependencies.adoc b/downstream/modules/builder/con-galaxy-dependencies.adoc index 2643713161..31fc292fd5 100644 --- a/downstream/modules/builder/con-galaxy-dependencies.adoc +++ b/downstream/modules/builder/con-galaxy-dependencies.adoc @@ -7,12 +7,8 @@ The entry `requirements.yml` can be a relative path from the directory of the {E The content might look like the following: -.Galaxy entry -[example] -==== ---- collections: - community.aws - kubernetes.core ---- -==== diff --git a/downstream/modules/builder/con-optional-build-command-arguments.adoc b/downstream/modules/builder/con-optional-build-command-arguments.adoc index 1952a053a4..870d01ee55 100644 --- a/downstream/modules/builder/con-optional-build-command-arguments.adoc +++ b/downstream/modules/builder/con-optional-build-command-arguments.adoc @@ -2,13 +2,12 @@ = Optional build command arguments -The `-t` flag will tag your {ExecEnvNameSing} image with a specific name. For example, the following command will build an image named `my_first_ee_image`: +The `-t` flag tags your {ExecEnvNameSing} image with a specific name. +For example, the following command builds an image named `my_first_ee_image`: -==== ---- $ ansible-builder build -t my_first_ee_image ---- -==== [NOTE] ==== If you do not use `-t` with `build`, an image called `ansible-execution-env` is @@ -17,13 +16,10 @@ If you do not use `-t` with `build`, an image called `ansible-execution-env` is If you have multiple definition files, you can specify which one to use by including the `-f` flag: -[[example1]] -==== ---- $ ansible-builder build -f another-definition-file.yml -t another_ee_image ---- -==== -In <> {Builder} will use the specifications provided in the file named `another-definition-file.yml` instead of the default `execution-environment.yml` to build an {ExecEnvNameSing} image named `another_ee_image`. +In this example, {Builder} uses the specifications provided in the file named `another-definition-file.yml` instead of the default `execution-environment.yml` to build an {ExecEnvNameSing} image named `another_ee_image`. For other specifications and flags that you can use with the `build` command, enter `ansible-builder build --help` to see a list of additional options. diff --git a/downstream/modules/builder/con-python-dependencies.adoc b/downstream/modules/builder/con-python-dependencies.adoc index 54c589b110..b3a205cfd8 100644 --- a/downstream/modules/builder/con-python-dependencies.adoc +++ b/downstream/modules/builder/con-python-dependencies.adoc @@ -2,13 +2,13 @@ = Python -The `python` entry in the definition file points to a valid requirements file or to an inline list of Python requirements in PEP508 format for the `pip install -r ...` command. +The `python` entry in the definition file points to a valid requirements file or to an inline list of Python requirements in link:https://ansible.readthedocs.io/projects/builder/en/latest/porting_guides/porting_guide_v3.1/#pep-508-standard[PEP508] format for the `pip install -r ...` command. -The entry `requirements.txt` is a file that installs extra Python requirements on top of what the Collections already list as their Python dependencies. It may be listed as a relative path from the directory of the {ExecEnvNameSing} definition's folder, or an absolute path.
The contents of a `requirements.txt` file should be formatted like the following example, similar to the standard output from a `pip freeze` command: +The entry `requirements.txt` is a file that installs extra Python requirements on top of what the Collections already list as their Python dependencies. +It might be listed as a relative path from the directory of the {ExecEnvNameSing} definition's folder, or an absolute path. The contents of a `requirements.txt` file are similar to the standard output from a `pip freeze` command. + +The content might look like the following: -.Python entry -[example] -==== ---- boto>=2.49.0 botocore>=1.12.249 @@ -24,4 +24,3 @@ requests-oauthlib openstacksdk>=0.13 ovirt-engine-sdk-python>=4.4.10 ---- -==== diff --git a/downstream/modules/builder/con-system-dependencies.adoc b/downstream/modules/builder/con-system-dependencies.adoc index 4ffd3a5c88..f45fa9735c 100644 --- a/downstream/modules/builder/con-system-dependencies.adoc +++ b/downstream/modules/builder/con-system-dependencies.adoc @@ -2,17 +2,17 @@ = System -The `system` entry in the definition points to a link:https://docs.opendev.org/opendev/bindep/latest/readme.html[bindep] requirements file or to an inline list of bindep entries, which install system-level dependencies that are outside of what the collections already include as their dependencies. It can be listed as a relative path from the directory of the {ExecEnvNameSing} definition's folder, or as an absolute path. At a minimum, the the collection(s) must specify necessary requirements for `[platform:rpm]`. +The `system` entry in the definition points to a link:https://docs.opendev.org/opendev/bindep/latest/readme.html[bindep] requirements file or to an inline list of bindep entries, which install system-level dependencies that are outside of what the collections already include as their dependencies. +The `system` entry can be listed as a relative path from the directory of the {ExecEnvNameSing} definition's folder, or as an absolute path. +At a minimum, the collections must specify necessary requirements for `[platform:rpm]`. -To demonstrate this, the following is an example `bindep.txt` file that adds the `libxml2` and `subversion` packages to a container: +To demonstrate this, the following is an example `bindep.txt` file that adds the `libxml2` and `subversion` packages to a container. + +The content might look like the following: -.System entry -[example] -==== ---- libxml2-devel [platform:rpm] subversion [platform:rpm] ---- -==== Entries from multiple collections are combined into a single file. This is processed by `bindep` and then passed to `dnf`. Only requirements with no profiles or no runtime requirements will be installed to the image. diff --git a/downstream/modules/builder/con-why-builder.adoc b/downstream/modules/builder/con-why-builder.adoc index 6bf24afe72..2fe5935078 100644 --- a/downstream/modules/builder/con-why-builder.adoc +++ b/downstream/modules/builder/con-why-builder.adoc @@ -2,6 +2,5 @@ = Why use {Builder}? -Before {Builder} was developed, {PlatformName} users could run into dependency issues and errors when creating custom virtual environments or containers that included all of the required dependencies installed.
- -Now, with {Builder}, you can easily create a customizable {ExecEnvName} definition file that specifies the content you want included in your {ExecEnvName} such as Ansible Core, Python, Collections, third-party Python requirements, and system level packages. This allows you to fulfill all of the necessary requirements and dependencies to get jobs running. +With {Builder}, you can easily create a customizable {ExecEnvName} definition file that specifies the content you want included in your {ExecEnvName}, such as Ansible Core, Python, Collections, third-party Python requirements, and system-level packages. +This enables you to fulfill all of the necessary requirements and dependencies to get jobs running. diff --git a/downstream/modules/builder/con-why-ee.adoc b/downstream/modules/builder/con-why-ee.adoc index a5731138c9..09ebdb75a1 100644 --- a/downstream/modules/builder/con-why-ee.adoc +++ b/downstream/modules/builder/con-why-ee.adoc @@ -10,7 +10,7 @@ In addition to speed, portability, and flexibility, {ExecEnvName} provide the fo * They ensure that automation runs consistently across multiple platforms and make it possible to incorporate system-level dependencies and collection-based content. * They give {PlatformName} administrators the ability to provide and manage automation environments to meet the needs of different teams. +* {ExecEnvNameStart}s enable automation teams to define, build, and update their automation environments themselves. +System administrators can provide execution environments, but each organization administrator can also provide their own. * They allow automation to be easily scaled and shared between teams by providing a standard way of building and distributing the automation environment. -* They enable automation teams to define, build, and update their automation environments themselves. -* {ExecEnvNameStart} provide a common language to communicate automation dependencies. diff --git a/downstream/modules/builder/proc-creating-containerfile-no-image.adoc b/downstream/modules/builder/proc-creating-containerfile-no-image.adoc index bf741f7afd..d1ed89c6f5 100644 --- a/downstream/modules/builder/proc-creating-containerfile-no-image.adoc +++ b/downstream/modules/builder/proc-creating-containerfile-no-image.adoc @@ -1,6 +1,7 @@ [id="proc-creating-containerfile-no-image"] = Creating a Containerfile without building an image + If you are required to use shared container images built in sandboxed environments for security reasons, you can create a shareable `Containerfile`. ---- diff --git a/downstream/modules/builder/proc-customize-ee-image.adoc b/downstream/modules/builder/proc-customize-ee-image.adoc index b0c17b6727..a55b137402 100644 --- a/downstream/modules/builder/proc-customize-ee-image.adoc +++ b/downstream/modules/builder/proc-customize-ee-image.adoc @@ -4,13 +4,15 @@ Ansible Controller includes the following default execution environments: -* `Minimal` - Includes the latest Ansible-core 2.15 release along with {Runner}, but does not include collections or other content +* `Minimal` - The `ansible-automation-platform-25` image includes the latest Ansible-core 2.16 release along with {Runner}, but does not include collections or other content. The `ansible-automation-platform-24` image includes the Ansible-core 2.15 release along with {Runner}, but does not include collections or other content.
++ +While supported {ExecEnvShort}s cover many automation prerequisites, minimal {ExecEnvShort}s are the recommended basis for your own custom images, so that you keep full control over dependencies and their versions. * `EE Supported` - Minimal, plus all Red Hat-supported collections and dependencies While these environments cover many automation use cases, you can add additional items to customize these containers for your specific needs. The following procedure adds the `kubernetes.core` collection to the `ee-minimal` default image: .Procedure -. Log in to `registry.redhat.io` via Podman: +. Log in to `registry.redhat.io` using Podman: + ---- $ podman login -u="[username]" -p="[token/hash]" registry.redhat.io @@ -24,14 +26,14 @@ podman pull registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:l . Configure your {Builder} files to specify the required base image and any additional content to add to the new {ExecEnvShort} image. .. For example, to add the link:https://galaxy.ansible.com/kubernetes/core[Kubernetes Core Collection from Galaxy] to the image, use the Galaxy entry: + -==== ---- collections: - kubernetes.core ---- -==== -.. For more information about definition files and their content, see the <>. -. In the {ExecEnvShort} definition file, specify the original `ee-minimal` container's URL and tag in the `EE_BASE_IMAGE` field. In doing so, your final `execution-environment.yml` file will look like the following: + +.. For more information about definition files and their content, see the xref:con-definition-file-breakdown[Breakdown of definition file content] section. +. In the {ExecEnvShort} definition file, specify the original `ee-minimal` container's URL and tag in the `EE_BASE_IMAGE` field. +In doing so, your final `execution-environment.yml` file appears similar to the following: + .A customized `execution-environment.yml` file [example] @@ -40,7 +42,7 @@ collections: version: 3 images: - base_image: 'registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel9:latest' + base_image: 'registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel9:latest' dependencies: galaxy: @@ -68,13 +70,13 @@ If you do not use `-t` with `build`, an image called `ansible-execution-env` is + * Use the `podman images` command to confirm that your new container image is in that list: + -.Output of a `podman images` command with the image `new-ee` -==== +The following shows the output of a `podman images` command with the image `new-ee`. ++ ---- REPOSITORY TAG IMAGE ID CREATED SIZE localhost/new-ee latest f5509587efbb 3 minutes ago 769 MB ---- -==== + . Verify that the collection is installed: + ----- @@ -91,7 +93,7 @@ $ podman tag [username]/new-ee [automation-hub-IP-address]/[username]/new-ee + [NOTE] ===== -You must have `admin` or appropriate container repository permissions for {HubName} to push a container. For more information, see the _Manage containers in {PrivateHubName}_ section in link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/managing_content_in_automation_hub/index#managing-containers-hub[Managing content in automation hub]. +You must have `admin` or appropriate container repository permissions for {HubName} to push a container. For more information, see link:{URLHubManagingContent}/index#managing-containers-hub[Manage containers in private automation hub].
===== + ----- diff --git a/downstream/modules/builder/proc-executing-build.adoc b/downstream/modules/builder/proc-executing-build.adoc index d4148d6abf..fba705845c 100644 --- a/downstream/modules/builder/proc-executing-build.adoc +++ b/downstream/modules/builder/proc-executing-build.adoc @@ -2,18 +2,28 @@ = Building the {ExecEnvNameSing} image -After you create a definition file, you can proceed to build an {ExecEnvNameSing} image. +When you have created a definition file, you can proceed to build an {ExecEnvNameSing} image. + +[NOTE] +==== +The {ExecEnvShort} image that you build must support the architecture that {PlatformNameShort} is deployed on. +==== .Prerequisites * You have created a definition file. .Procedure + To build an {ExecEnvNameSing} image, run the following from the command line: + ---- $ ansible-builder build ---- By default, {Builder} looks for a definition file named `execution-environment.yml` but a different file path can be specified as an argument with the `-f` flag: + +For example: + [subs=+quotes] ---- $ ansible-builder build -f _definition-file-name_.yml diff --git a/downstream/modules/builder/proc-installing-builder.adoc b/downstream/modules/builder/proc-installing-builder.adoc index 7715b5360e..aaaff4257c 100644 --- a/downstream/modules/builder/proc-installing-builder.adoc +++ b/downstream/modules/builder/proc-installing-builder.adoc @@ -3,13 +3,23 @@ = Installing {Builder} .Prerequisites + * You have installed the Podman container runtime. -* You have valid subscriptions attached on the host. Doing so allows you to access the subscription-only resources needed to install `ansible-builder`, and ensures that the necessary repository for `ansible-builder` is automatically enabled. See link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_planning_guide/index#proc-attaching-subscriptions_planning[Attaching your Red Hat {PlatformNameShort} subscription] for more information. +* You have valid subscriptions attached on the host. Doing so allows you to access the subscription-only resources needed to install `ansible-builder`, and ensures that the necessary repository for `ansible-builder` is automatically enabled. +See link:{URLCentralAuth}/assembly-gateway-licensing#proc-attaching-subscriptions[Attaching your Red Hat {PlatformNameShort} subscription] for more information. ++ +[NOTE] +==== +To install the developer tools without consuming a valid {PlatformName} managed node entitlement, you can use MCT4589, Red Hat Ansible Developer, Standard (10 Managed Nodes), which is free. +This subscription is for enabling users of {PlatformNameShort}, and it requires the approval of the Ansible Business Unit.
+==== .Procedure -* In your terminal, run the following command to install {Builder} and activate your {PlatformNameShort} repo: +* Run the following command to install {Builder} and activate your {PlatformNameShort} repo: + ---- -# dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-builder +# dnf install --enablerepo=ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms ansible-builder ---- diff --git a/downstream/modules/builder/ref-build-args-base-image.adoc b/downstream/modules/builder/ref-build-args-base-image.adoc index f0252009a0..fe11b7bd10 100644 --- a/downstream/modules/builder/ref-build-args-base-image.adoc +++ b/downstream/modules/builder/ref-build-args-base-image.adoc @@ -1,21 +1,31 @@ [id="ref-build-args-base-image"] -= Build args and base image += Build arguments -The `build_arg_defaults` section of the definition file is a dictionary whose keys can provide default values for arguments to {Builder}. See the following table for a list of values that can be used in `build_arg_defaults`: +The `build_arg_defaults` section of the definition file is a dictionary whose keys can provide default values for arguments to {Builder}. + +The following table lists values that can be used in `build_arg_defaults`: [cols="a,a"] |=== | Value | Description -| `ANSIBLE_GALAXY_CLI_COLLECTION_OPTS` -| Allows the user to pass arbitrary arguments to the ansible-galaxy CLI during the collection installation phase. For example, the –pre flag to enable the installation of pre-release collections, or -c to disable verification of the server's SSL certificate. +| `ANSIBLE_GALAXY_CLI_COLLECTION_OPTS` | Enables the user to pass arbitrary arguments to the `ansible-galaxy` CLI during the collection installation phase. + +For example, the `--pre` flag to enable the installation of pre-release collections, or `-c` to disable verification of the server's SSL certificate. -| `ANSIBLE_GALAXY_CLI_ROLE_OPTS` -| Allows the user to pass any flags, such as –no-deps, to the role installation. +| `ANSIBLE_GALAXY_CLI_ROLE_OPTS` | Enables the user to pass any flags, such as `--no-deps`, to the role installation. |=== +[NOTE] +==== +It is generally easier (especially in a pipeline context) to customize the base image into a custom base image by using Podman first, and then call `ansible-builder` on this custom image. +==== + The values given inside `build_arg_defaults` will be hard-coded into the `Containerfile`, so these values will persist if `podman build` is called manually. -NOTE: If the same variable is specified in the CLI `--build-arg` flag, the CLI value will take higher precedence. +[NOTE] +==== +If the same variable is specified in the CLI `--build-arg` flag, the CLI value takes higher precedence. +==== diff --git a/downstream/modules/builder/ref-definition-file-images.adoc b/downstream/modules/builder/ref-definition-file-images.adoc index 8f21512c58..a83a41c966 100644 --- a/downstream/modules/builder/ref-definition-file-images.adoc +++ b/downstream/modules/builder/ref-definition-file-images.adoc @@ -2,21 +2,22 @@ = Images -The `images` section of the definition file identifies the base image. Verification of signed container images is supported with the `podman` container runtime. +The `images` section of the definition file identifies the base image. +Verification of signed container images is supported with the `podman` container runtime.
-See the following table for a list of values that you can use in `images`: +The following table shows a list of values that you can use in `images`: [cols="a,a"] |=== | Value | Description -| `base_image` -| Specifies the parent image for the {ExecEnvNameSing} which enables a new image to be built that is based on an existing image. This is typically a supported {ExecEnvShort} base image such as _ee-minimal_ or _ee-supported_, but it can also be an {ExecEnvShort} image that you have created and want to customize further. +| `base_image` | Specifies the parent image for the {ExecEnvNameSing} which enables a new image to be built that is based on an existing image. +This is typically a supported {ExecEnvShort} base image such as _ee-minimal_ or _ee-supported_, but it can also be an {ExecEnvShort} image that you have created and want to customize further. -A `name` key is required for the container image to use. Specify the `signature _original_name` key if the image is mirrored within your repository, but is signed with the image's original signature key. Image names must contain a tag, such as `:latest`. +A `name` key is required for the container image to use. +Specify the `signature_original_name` key if the image is mirrored within your repository, but is signed with the image's original signature key. +Image names must contain a tag, such as `:latest`. The default image is `registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest`. |=== - -NOTE: If the same variable is specified in the CLI `--build-arg` flag, the CLI value will take higher precedence. diff --git a/downstream/modules/builder/ref-example-yaml-image-files.adoc b/downstream/modules/builder/ref-example-yaml-image-files.adoc new file mode 100644 index 0000000000..69adbea867 --- /dev/null +++ b/downstream/modules/builder/ref-example-yaml-image-files.adoc @@ -0,0 +1,30 @@ +[id="ref-example-yaml-image-files"] + += Example YAML file to build an image + +The `ansible-builder build` command takes an {ExecEnvShort} definition as an input. +It outputs the build context necessary for building an {ExecEnvShort} image, and then builds that image. +The image can be rebuilt with the build context elsewhere, and produces the same result. +By default, the builder searches for a file named `execution-environment.yml` in the current directory. + +The following example `execution-environment.yml` file can be used as a starting point: + +---- +version: 3 +dependencies: + galaxy: requirements.yml +---- + +The content of `requirements.yml`: + +---- +--- +collections: + - name: awx.awx +---- + +To build an {ExecEnvShort} by using the preceding files, run the following command: + +---- +ansible-builder build +... +STEP 7: COMMIT my-awx-ee +--> 09c930f5f6a +09c930f5f6ac329b7ddb321b144a029dbbfcc83bdfc77103968b7f6cdfc7bea2 +Complete! The build context can be found at: context +---- + +In addition to producing a ready-to-use container image, the build context is preserved. +This can be rebuilt at a different time or location with the tools of your choice, such as `docker build` or `podman build`.
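For example, a later rebuild from the preserved `context` directory with Podman might look like the following sketch; the image tag is illustrative:

----
$ podman build -f context/Containerfile -t my-awx-ee context
----

Because the context contains the generated `Containerfile` and the files it references, the rebuild does not require {Builder} to be installed.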
diff --git a/downstream/modules/builder/ref-scenario-update-hub-ca-cert.adoc b/downstream/modules/builder/ref-scenario-update-hub-ca-cert.adoc index 5929caa169..d337a02f98 100644 --- a/downstream/modules/builder/ref-scenario-update-hub-ca-cert.adoc +++ b/downstream/modules/builder/ref-scenario-update-hub-ca-cert.adoc @@ -2,8 +2,6 @@ = Updating the {HubName} CA certificate - -[role="_abstract"] Use this example to customize the default definition file to include a CA certificate to the `additional-build-files` section, move the file to the appropriate directory and, finally, run the command to update the dynamic configuration of CA certificates to allow the system to trust this CA certificate. .Prerequisites diff --git a/downstream/modules/builder/ref-scenario-using-authentication-ee.adoc b/downstream/modules/builder/ref-scenario-using-authentication-ee.adoc index 979fea7571..131064b1f6 100644 --- a/downstream/modules/builder/ref-scenario-using-authentication-ee.adoc +++ b/downstream/modules/builder/ref-scenario-using-authentication-ee.adoc @@ -2,14 +2,13 @@ = Using {HubName} authentication details when building {ExecEnvName} - -[role="_abstract"] Use the following example to customize the default definition file to pass {HubName} authentication details into the {ExecEnvNameSing} build without exposing them in the final {ExecEnvNameSing} image. .Prerequisites -* You have link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_hub/hub-create-api-token[created an {HubName} API token] and stored it in a secure location, for example in a file named `token.txt`. +* You have created an API token, as described in link:{URLHubManagingContent}/managing-cert-valid-content#proc-create-api-token[Retrieving the API token for your Red Hat Certified Collection], and stored it in a secure location, for example in a file named `token.txt`. * Define a build argument that gets populated with the {HubName} API token: + ---- export ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_TOKEN=$(cat ) ---- @@ -24,3 +23,6 @@ additional_build_steps: - ENV ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_URL=https://console.redhat.com/api/automation-hub/content/-synclist/ - ENV ANSIBLE_GALAXY_SERVER_AUTOMATION_HUB_AUTH_URL=https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token ----- + +.Additional resources +* For information about the different parts of an automation execution environment definition file, see xref:con-definition-file-breakdown[Breakdown of definition file content]. \ No newline at end of file diff --git a/downstream/modules/core/con-about-ansible-cli.adoc b/downstream/modules/core/con-about-ansible-cli.adoc deleted file mode 100644 index 353e4c0229..0000000000 --- a/downstream/modules/core/con-about-ansible-cli.adoc +++ /dev/null @@ -1,14 +0,0 @@ - -[id="con-about-ansible-cli_{context}"] - -= About the Ansible command line interface - -[role="_abstract"] - -Using Ansible on the command line is a useful way to run tasks that you do not repeat very often. The recommended way to handle repeated tasks is to write a playbook.
- -An ad hoc command for Ansible on the command line follows this structure: - ------ -$ ansible [pattern] -m [module] -a "[module options]" ------ diff --git a/downstream/modules/core/con-ansible-playbooks.adoc b/downstream/modules/core/con-ansible-playbooks.adoc deleted file mode 100644 index fca65435c3..0000000000 --- a/downstream/modules/core/con-ansible-playbooks.adoc +++ /dev/null @@ -1,11 +0,0 @@ - - -[id="con-about-ansible-playbooks_{context}"] - -= About Ansible Playbooks - -[role="_abstract"] - -Playbooks are files written in YAML that contain specific sets of human-readable instructions, or “plays”, that you send to run on a single target or groups of targets. - -Playbooks can be used to manage configurations of and deployments to remote machines, as well as sequence multi-tier rollouts involving rolling updates. Use playbooks to delegate actions to other hosts, interacting with monitoring servers and load balancers along the way. Once written, playbooks can be used repeatedly across your enterprise for automation. diff --git a/downstream/modules/core/con-ansible-roles.adoc b/downstream/modules/core/con-ansible-roles.adoc deleted file mode 100644 index 06a7587e9e..0000000000 --- a/downstream/modules/core/con-ansible-roles.adoc +++ /dev/null @@ -1,14 +0,0 @@ - -[id="con-about-ansible-roles_{context}"] - -= About Ansible Roles - -[role="_abstract"] - -A role is Ansible's way of bundling automation content in addition to loading related vars, files, tasks, handlers, and other artifacts automatically by utilizing a known file structure. Instead of creating huge playbooks with hundreds of tasks, you can use roles to break the tasks apart into smaller, more discrete units of work. - -You can find roles for provisioning infrastructure, deploying applications, and all of the tasks you do every day on {Galaxy}. Filter your search by *Type* and select *Role*. Once you find a role that you are interested in, you can download it by using the `ansible-galaxy` command that comes bundled with Ansible: - ------ -$ ansible-galaxy role install username.rolename ------ diff --git a/downstream/modules/core/con-content-collections.adoc b/downstream/modules/core/con-content-collections.adoc deleted file mode 100644 index 2d85df961f..0000000000 --- a/downstream/modules/core/con-content-collections.adoc +++ /dev/null @@ -1,42 +0,0 @@ - -[id="con-content-collections_{context}"] - - - -= About Content Collections - - -[role="_abstract"] - -An Ansible Content Collection is a ready-to-use toolkit for automation. It includes several types of content such as playbooks, roles, modules, and plugins all in one place. The following diagram shows the basic structure of a collection: - -.... -collection/ -├── docs/ -├── galaxy.yml -├── meta/ -│ └── runtime.yml -├── plugins/ -│ ├── modules/ -│ │ └── module1.py -│ ├── inventory/ -│ ├── lookup/ -│ ├── filter/ -│ └── .../ -├── README.md -├── roles/ -│ ├── role1/ -│ ├── role2/ -│ └── .../ -├── playbooks/ -│ ├── files/ -│ ├── vars/ -│ ├── templates/ -│ ├── playbook1.yml -│ └── tasks/ -└── tests/ - ├── integration/ - └── unit/ -.... - -In {PlatformName}, {HubName} serves as the source for {CertifiedName}. 
diff --git a/downstream/modules/dev-guide/con-architecture-overview.adoc b/downstream/modules/dev-guide/con-architecture-overview.adoc deleted file mode 100644 index 97f225a514..0000000000 --- a/downstream/modules/dev-guide/con-architecture-overview.adoc +++ /dev/null @@ -1,14 +0,0 @@ - -[id="con-architecture-overview_introduction"] - - -= Architecture overview - - -[role="_abstract"] -The following list shows the arrangements and uses of tools available on {PlatformNameShort} 2.0, along with how they can be utilized: - -* {Navigator} only -- can be used today in {PlatformNameShort} 1.2 -* {Navigator} + downloaded {ExecEnvName} — used directly on laptop/workstation -* {Navigator} + downloaded {ExecEnvName} + {ControllerName} — for pushing/executing locally → remotely -* {Navigator} + {ControllerName} + {Builder} + Layered custom EE — provides even more control over utilized content for how to execute automation jobs diff --git a/downstream/modules/dev-guide/con-content-workflows.adoc b/downstream/modules/dev-guide/con-content-workflows.adoc deleted file mode 100644 index c355c775f6..0000000000 --- a/downstream/modules/dev-guide/con-content-workflows.adoc +++ /dev/null @@ -1,18 +0,0 @@ - -[id="con-content-workflows_introduction"] - - -= About content workflows - - -[role="_abstract"] -Before {PlatformName} 2.0, an automation content developer may have needed so many Python virtual environments that they required their own automation in order to manage them. To reduce this level of complexity, {PlatformNameShort} 2.0 is moving away from virtual environments and using containers, referred to as {ExecEnvName}, instead, as they are straightforward to build and manage and are more shareable across teams and orgs. - -As {ControllerName} shifts to using {ExecEnvName}, tools like {Navigator} and {Builder} ensure that you can take advantage of those {ExecEnvName} locally within your own development system. - - -[role="_additional-resources"] -.Additional resources - -* See the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_creator_guide/tools#con-about-ansible-navigator_tools[Automation Content Navigator Creator Guide] for more on using {Navigator}. -* For more information on {Builder}, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/creating_and_consuming_execution_environments/index[Creating and Consuming Execution Environments]. diff --git a/downstream/modules/dev-guide/proc-create-collections.adoc b/downstream/modules/dev-guide/proc-create-collections.adoc deleted file mode 100644 index dc167dc244..0000000000 --- a/downstream/modules/dev-guide/proc-create-collections.adoc +++ /dev/null @@ -1,50 +0,0 @@ -[id="creating-collections"] - - - -= Creating collections - -[role="_abstract"] -You can create your own Collections locally with the {Galaxy} CLI tool. You can use the `collection` subcommand to activate the Collection-specific commands. - - -.Prerequisites - -* You have Ansible-core version 2.15 or newer installed in your development environment. - - -.Procedure - -. In your terminal, go to where you want your namespace root directory to be. For simplicity, this should be a path in link:https://docs.ansible.com/ansible/latest/reference_appendices/config.html#collections-paths[COLLECTIONS_PATH] but that is not required. -. Run the following command, replacing `my_namespace` and `my_collection_name` with your own values: -+ ------ -$ ansible-galaxy collection init . 
------ -+ -[NOTE] -==== -Make sure you have the proper permissions to upload to a namespace by checking under the *My Content* tab on galaxy.ansible.com or {Console}/ansible/automation-hub -==== - -The earlier command will create a directory named from the namespace argument (if one does not already exist) and then create a directory under that with the Collection name. Inside of that directory will be the default or "skeleton" Collection. This is where you can add your roles or plugins and start working on developing your own Collection. - -In relation to execution environments, Collection developers can declare requirements for their content by providing the appropriate metadata in {Builder}. - -Requirements from a Collection can be recognized in these ways: - -* A file `meta/execution-environment.yml`, which references the Python or `bindep` requirements files. -* A file named `requirements.txt`, which includes information about the Python dependencies, and is sometimes found at the root level of the Collection. -* A file named `bindep.txt`, which includes system-level dependencies, and is sometimes found at the root level of the Collection. -* If any of these files are in the `build_ignore` of the Collection, {Builder} will not pick up on these. The `build_ignore` section filters any files or directories that should not be included in the build artifact. - -Collection maintainers can verify that ansible-builder recognizes the requirements they expect by using the `introspect` command: - ------ -$ ansible-builder introspect --sanitize ~/.ansible/collections/ ------ - -[role="_additional-resources"] -.Additional resources - -* For more information about creating collections, see link:https://docs.ansible.com/ansible/latest/dev_guide/developing_collections.html#creating-collections[Creating collections] in the Ansible _Developer Guide_. diff --git a/downstream/modules/dev-guide/proc-create-execution-environment.adoc b/downstream/modules/dev-guide/proc-create-execution-environment.adoc deleted file mode 100644 index f66d913180..0000000000 --- a/downstream/modules/dev-guide/proc-create-execution-environment.adoc +++ /dev/null @@ -1,18 +0,0 @@ -[id="creating-execution-environments"] - - - -= Creating {ExecEnvName} - -[role="_abstract"] -An {ExecEnvName} definition file will specify:: - -* An Ansible version -* A Python version (defaults to system Python) -* A set of required Python libraries -* Zero or more Content Collections (optional) -* Python dependencies for those specific Collections - -The concept of specifying a set of Collections for an environment is to resolve and install their dependencies. The Collections themselves are not required to be installed on the machine that you are generating the {ExecEnvName} on. - -An {ExecEnvName} is built from this definition, and results in a container image. Please read the {Builder} documentation to learn the steps involved in creating these images. diff --git a/downstream/modules/dev-guide/proc-create-playbooks.adoc b/downstream/modules/dev-guide/proc-create-playbooks.adoc deleted file mode 100644 index 036a1a3e83..0000000000 --- a/downstream/modules/dev-guide/proc-create-playbooks.adoc +++ /dev/null @@ -1,39 +0,0 @@ -[id="creating-playbooks"] - - - -= Creating playbooks - -[role="_abstract"] -Playbooks contain one or more plays. A basic play contains the following sections: - -* Name: a brief description of the overall function of the playbook, which assists in keeping it readable and organized for all users. 
-* Hosts: identifies the target(s) for Ansible to run against. -* Become statements: this optional statement can be set to `true`/`yes` to enable privilege escalation using a become plugin (such as `sudo`, `su`, `pfexec`, `doas`, `pbrun`, `dzdo`, `ksu`). -* Tasks: this is the list actions that get executed against each host in the play. - -.Example playbook - ------ -- name: Set Up a Project and Job Template - hosts: host.name.ip - become: true - - tasks: - - name: Create a Project - ansible.controller.project: - name: Job Template Test Project - state: present - scm_type: git - scm_url: https://github.com/ansible/ansible-tower-samples.git - - - name: Create a Job Template - ansible.controller.job_template: - name: my-job-1 - project: Job Template Test Project - inventory: Demo Inventory - playbook: hello_world.yml - job_type: run - state: present - ------ diff --git a/downstream/modules/dev-guide/proc-create-role.adoc b/downstream/modules/dev-guide/proc-create-role.adoc deleted file mode 100644 index 860066c49e..0000000000 --- a/downstream/modules/dev-guide/proc-create-role.adoc +++ /dev/null @@ -1,61 +0,0 @@ -[id="creating-roles"] - - - -= Creating roles - -[role="_abstract"] -You can create roles by using the {Galaxy} CLI tool. You can access Role-specific commands from the `roles` subcommand. - ------ -ansible-galaxy role init ------ - -Standalone roles outside of Collections are still supported, but create new roles inside of a Collection to take advantage of all the features {PlatformNameShort} has to offer. - -.Procedure - -. In your terminal, go to the `roles` directory inside a collection. -. Create a role called `role_name` inside the collection: -+ ------ -$ ansible-galaxy role init my_role ------ -+ -The collection now includes a role named `my_role` inside the `roles` directory: -+ ------ - ~/.ansible/collections/ansible_collections// - ... - └── roles/ - └── my_role/ - ├── .travis.yml - ├── README.md - ├── defaults/ - │ └── main.yml - ├── files/ - ├── handlers/ - │ └── main.yml - ├── meta/ - │ └── main.yml - ├── tasks/ - │ └── main.yml - ├── templates/ - ├── tests/ - │ ├── inventory - │ └── test.yml - └── vars/ - └── main.yml ------ -+ -. A custom role skeleton directory can be supplied by using the `--role-skeleton` argument. This allows organizations to create standardized templates for new roles to suit their needs. - - ansible-galaxy role init my_role --role-skeleton ~/role_skeleton - -This will create a role named `my_role` by copying the contents of `~/role_skeleton` into `my_role`. The contents of `role_skeleton` can be any files or folders that are valid inside a role directory. - - -[role="_additional-resources"] -.Additional resources - -* For more information about creating roles, see link:https://galaxy.ansible.com/docs/contributing/creating_role.html[Creating roles] in the {Galaxy} documentation. diff --git a/downstream/modules/dev-guide/proc-downloading-base-ees.adoc b/downstream/modules/dev-guide/proc-downloading-base-ees.adoc deleted file mode 100644 index a17ed8a586..0000000000 --- a/downstream/modules/dev-guide/proc-downloading-base-ees.adoc +++ /dev/null @@ -1,28 +0,0 @@ - - -[id="downloading-base-ees"] - - - -= Downloading base {ExecEnvName} - -[role="_abstract"] -Base images that ship with {PlatformNameShort} 2.0 are hosted on the Red Hat Ecosystem Catalog (registry.redhat.io). - -.Prerequisites - -* You have a valid {PlatformName} subscription. - -.Procedure - -. Log in to registry.redhat.io -+ ------ -$ podman login registry.redhat.io ------ -+ -. 
Pull the base images from the registry -+ ------ -$ podman pull registry.redhat.io/aap/ ------ diff --git a/downstream/modules/dev-guide/proc-list-custom-venv-associations.adoc b/downstream/modules/dev-guide/proc-list-custom-venv-associations.adoc deleted file mode 100644 index 63a373f3ec..0000000000 --- a/downstream/modules/dev-guide/proc-list-custom-venv-associations.adoc +++ /dev/null @@ -1,39 +0,0 @@ - - -[id="list-custom-venvs-associations"] - - - -= Viewing objects associated with a custom virtual environment - - -[role="_abstract"] -View the organizations, jobs, and inventory sources associated with a custom virtual environment using the `awx-manage` command. - - -.Procedure - -. SSH into your {ControllerName} instance and run: -+ ------ -$ awx-manage custom_venv_associations /path/to/venv ------ - -A list of associated objects will appear. - ------ -inventory_sources: -- id: 15 - name: celery -job_templates: -- id: 9 - name: Demo Job Template @ 2:40:47 PM -- id: 13 - name: elephant -organizations -- id: 3 - name: alternating_bongo_meow -- id: 1 - name: Default -projects: [] ------ diff --git a/downstream/modules/dev-guide/proc-list-custom-virt-envs.adoc b/downstream/modules/dev-guide/proc-list-custom-virt-envs.adoc deleted file mode 100644 index 0a28e57c79..0000000000 --- a/downstream/modules/dev-guide/proc-list-custom-virt-envs.adoc +++ /dev/null @@ -1,31 +0,0 @@ - - -[id="list-custom-virts"] - - - -= Listing custom virtual environments - - -[role="_abstract"] -You can list the virtual environments on your {ControllerName} instance using the `awx-manage` command. - - -.Procedure - -. SSH into your {ControllerName} instance and run: -+ ------ -$ awx-manage list_custom_venvs ------ - -A list of discovered virtual environments will appear. - ------ -# Discovered virtual environments: -/var/lib/awx/venv/testing -/var/lib/venv/new_env - -To export the contents of a virtual environment, re-run while supplying the path as an argument: -awx-manage export_custom_venv /path/to/venv ------ diff --git a/downstream/modules/devtools/con-devtools-plan-roles-collection.adoc b/downstream/modules/devtools/con-devtools-plan-roles-collection.adoc new file mode 100644 index 0000000000..c6a66a9a2d --- /dev/null +++ b/downstream/modules/devtools/con-devtools-plan-roles-collection.adoc @@ -0,0 +1,10 @@ +:_mod-docs-content-type: CONCEPT + +[id="plan-roles-collection_{context}"] += Planning your collection + +Organize smaller bundles of curated automation into separate collections for specific functions, rather than creating one big general collection for all of your roles. + +For example, you could store roles that manage the networking for an internal system called `myapp` in a `company_namespace.myapp_network` collection, +and store roles that manage and deploy networking in AWS in a collection called `company_namespace.aws_net`. + diff --git a/downstream/modules/devtools/con-devtools-requirements.adoc b/downstream/modules/devtools/con-devtools-requirements.adoc new file mode 100644 index 0000000000..6f09cf87f6 --- /dev/null +++ b/downstream/modules/devtools/con-devtools-requirements.adoc @@ -0,0 +1,26 @@ +:_mod-docs-content-type: CONCEPT + +[id="devtools-requirements_{context}"] + += Requirements + +[role="_abstract"] +To install and use {ToolsName}, you must meet the following requirements. +Extra requirements for Windows installations and containerized installations are indicated in the procedures. + +* Python 3.10 or later. +* {VSCode} (Visual Studio Code) with the Ansible extension added. 
See +xref:devtools-install-vsc_installing-devtools[Installing {VSCode}]. +* For containerized installations, the Microsoft Dev Containers {VSCode} extension. See +xref:devtools-ms-dev-containers-ext_installing-devtools[Installing and configuring the Dev Containers extension]. +* A containerization platform, for example Podman, Podman Desktop, Docker, or Docker Desktop. ++ +[NOTE] +==== +The installation procedure for {ToolsName} on Windows covers the use of Podman Desktop only. +See link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/developing_automation_content/index#devtools-install-podman-desktop-wsl_installing-devtools[Requirements for Ansible development tools on Windows]. +==== +* You have a Red Hat account and you can log in to the Red Hat container registry at `registry.redhat.io`. +For information about logging in to `registry.redhat.io`, see +xref:devtools-setup-registry-redhat-io_installing-devtools[Authenticating with the Red Hat container registry]. + diff --git a/downstream/modules/devtools/con-devtools-roles-collection-prerequisites.adoc b/downstream/modules/devtools/con-devtools-roles-collection-prerequisites.adoc new file mode 100644 index 0000000000..2b30223c52 --- /dev/null +++ b/downstream/modules/devtools/con-devtools-roles-collection-prerequisites.adoc @@ -0,0 +1,15 @@ +:_mod-docs-content-type: CONCEPT + +[id="devtools-roles-collection-prerequisites_{context}"] += Prerequisites + +* You have installed {VSCode} and the Ansible extension. +* You have installed the Microsoft Dev Containers extension in {VSCode}. +* You have installed {ToolsName}. +* You have installed a containerization platform, for example Podman, Podman Desktop, Docker, or Docker Desktop. +* You have a Red Hat account and you can log in to the Red Hat container registry at `registry.redhat.io`. +For information about logging in to `registry.redhat.io`, see +xref:devtools-setup-registry-redhat-io_installing-devtools[Authenticating with the Red Hat container registry]. +// * Considerations about environments / isolation (ADE / devcontainer files) + + diff --git a/downstream/modules/devtools/con-installation-prereqs.adoc b/downstream/modules/devtools/con-installation-prereqs.adoc new file mode 100644 index 0000000000..102e34db8b --- /dev/null +++ b/downstream/modules/devtools/con-installation-prereqs.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: CONCEPT + +[id="self-service-installation-prereqs_{context}"] += Prerequisites + +* A valid subscription to {PlatformName}. +* {PlatformName} 2.5. +* An {PlatformNameShort} instance with the appropriate permissions to create an OAuth application. +* An {OCPShort} instance (version 4.12 or newer) with the appropriate permissions within your project to create an application. +* You have installed the OpenShift CLI (`oc`). +See the +link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/cli_tools/openshift-cli-oc#cli-getting-started[Getting started with the OpenShift CLI] +chapter of the _Understanding OpenShift Container Platform_ guide. +* You have installed Helm 3.10 or newer. +See the link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/building_applications/working-with-helm-charts#installing-helm[Installing Helm] +chapter of the _OpenShift Container Platform Building applications_ guide.
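As a quick sanity check before you begin, you can confirm the CLI prerequisites from a terminal. This is a sketch only; the exact output depends on your installed versions:

----
$ oc version     # confirms the OpenShift CLI is installed
$ helm version   # the reported version should be 3.10 or newer
----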
+ diff --git a/downstream/modules/devtools/con-rhdh-install-ocp-prereqs.adoc b/downstream/modules/devtools/con-rhdh-install-ocp-prereqs.adoc new file mode 100644 index 0000000000..c76ec76b52 --- /dev/null +++ b/downstream/modules/devtools/con-rhdh-install-ocp-prereqs.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: CONCEPT + +[id="rhdh-install-ocp-prereqs_{context}"] += Prerequisites + +* {RHDH} installed on {OCP}. +** For Helm installation, follow the steps in the +https://docs.redhat.com/en/documentation/red_hat_developer_hub/{RHDHVers}/html/installing_red_hat_developer_hub_on_openshift_container_platform/index#assembly-install-rhdh-ocp-helm[Installing Red Hat Developer Hub on OpenShift Container Platform with the Helm chart] +section of _Installing Red Hat Developer Hub on OpenShift Container Platform_. +** For Operator installation, follow the steps in the +https://docs.redhat.com/en/documentation/red_hat_developer_hub/{RHDHVers}/html/installing_red_hat_developer_hub_on_openshift_container_platform/index#assembly-install-rhdh-ocp-operator[Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator] +section of _Installing Red Hat Developer Hub on OpenShift Container Platform_. +* A valid subscription to {PlatformName}. +* An {OCPShort} instance with the appropriate permissions within your project to create an application. +* The {RHDH} instance can query the automation controller API. +* Optional: To use the integrated learning paths, you must have outbound access to developers.redhat.com. + diff --git a/downstream/modules/devtools/con-rhdh-recommended-preconfig.adoc b/downstream/modules/devtools/con-rhdh-recommended-preconfig.adoc new file mode 100644 index 0000000000..59c88dd504 --- /dev/null +++ b/downstream/modules/devtools/con-rhdh-recommended-preconfig.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: CONCEPT + +[id="rhdh-recommended-preconfig_{context}"] += Recommended {RHDHShort} preconfiguration + +Red Hat recommends performing the following initial configuration tasks in {RHDHShort}. +However, you can install the {AAPRHDH} before completing these tasks. + +* link:{BaseURL}/red_hat_developer_hub/{RHDHVers}/html/authentication/index[Setting up authentication in {RHDHShort}] +* link:{BaseURL}/red_hat_developer_hub/{RHDHVers}/html/authorization/index[Installing and configuring RBAC in {RHDHShort}] + +[NOTE] +==== +Red Hat provides a link:https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml[repository of software templates for {RHDHShort}] that uses the `publish:github` action. +To use these software templates, you must install the required GitHub dynamic plugins. +==== + diff --git a/downstream/modules/devtools/con-self-service-telemetry-data.adoc b/downstream/modules/devtools/con-self-service-telemetry-data.adoc new file mode 100644 index 0000000000..f75c9bfd1c --- /dev/null +++ b/downstream/modules/devtools/con-self-service-telemetry-data.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: CONCEPT + +[id="self-service-telemetry-data_{context}"] += Telemetry data collected by Red Hat + +Red Hat collects and analyzes the following data: + +* Events of page visits and clicks on links or buttons. +* System-related information, for example, locale, timezone, user agent including browser and OS details. +* Page-related information, for example, title, category, extension name, URL, path, referrer, and search parameters. +* Anonymized IP addresses, recorded as `0.0.0.0`.
+* Anonymized username hashes, which are unique identifiers used solely to identify the number of unique users of the RHDH application. +// * Feedback and sentiment provided in the feedback form. + diff --git a/downstream/modules/devtools/proc-configure-extension-settings.adoc b/downstream/modules/devtools/proc-configure-extension-settings.adoc deleted file mode 100644 index 7cb5f8d153..0000000000 --- a/downstream/modules/devtools/proc-configure-extension-settings.adoc +++ /dev/null @@ -1,35 +0,0 @@ -[id="configure-extension-settings"] - -= Configuring Ansible extension settings - -[role="_abstract"] - -The Ansible {VSCode} extension supports multiple configuration options. -You can configure the settings for the extension on a user level, on a workspace level, or for a particular directory. -Workspace settings are stored within your workspace and only apply when the current workspace is opened. - -It is useful to configure settings at the workspace level for the following reasons: - -* If you define and maintain configurations specific to your playbook project, you can customize your Ansible development environment for individual projects without altering your preferred setup for other work. -* You can have different settings for a Python project, an Ansible project, and a C++ project, each optimized for the respective stack without the need to manually reconfigure settings each time you switch projects. -* If you include workspace settings when setting up version control for a project you want to share with your team, everyone uses the same configuration for that project. - -.Procedure - -. To open the Ansible extension settings, click the *Extensions* icon in the activity bar. -. Select the Ansible extension, and click the *Manage* icon ({SettingsIcon}) and then btn:[Extension Settings] to display the extension settings. -+ -Alternatively, select menu:Code[Settings > Settings] to open the *Settings* page. -Enter `Ansible` in the search bar to display the extension settings. -. Select the *Workspace* tab to configure your settings for the current {VSCode} workspace. - -The Ansible extension settings are pre-populated. - -* Check the *Ansible > Validation > Lint: Enabled* box to enable ansible-lint. -* Check the *Ansible Execution Environment: Enabled* box to use an execution environment. -* Specify the execution environment image you want to use in the *Ansible > Execution Environment: image* field. -* To use Ansible Lightspeed, check the *Ansible > Lightspeed: Enabled* box, and enter the URL for Lightspeed. - -// The settings are documented on the link:https://marketplace.visualstudio.com/items?itemName=redhat.ansible[Ansible VS Code Extension by Red Hat page] in the VisualStudio marketplace documentation. - - diff --git a/downstream/modules/devtools/proc-debugging-playbook.adoc b/downstream/modules/devtools/proc-debugging-playbook.adoc index 1a4fa2d9c3..88e6d3bbfb 100644 --- a/downstream/modules/devtools/proc-debugging-playbook.adoc +++ b/downstream/modules/devtools/proc-debugging-playbook.adoc @@ -1,30 +1,12 @@ -[id="debugging-playbook"] +[id="debugging-playbook_{context}"] +:_mod-docs-content-type: PROCEDURE = Debugging your playbook -[role="_abstract"] -The Ansible extension provides syntax highlighting and assists you with indentation in `.yml` files. +Learn how to use {VSCode} to identify and understand error messages in playbooks. -The following rules apply for playbook files: - -* Every playbook file must finish with a blank line. 
-* Trailing spaces at the end of lines are not allowed.
-* Every playbook and every play require an identifier (name).
-
-== Inline help
-
-* If you hover your mouse over a keyword or a module name, the Ansible extension provides documentation:
-+
-image::ansible-lint-keyword-help.png[Ansible-lint showing no errors in a playbook]
-* If you begin to type the name of a module, for example `ansible.builtin.ping`, the extension provides a list of suggestions.
-Select one of the suggestions to autocomplete the line.
+. The following playbook contains multiple errors. Copy and paste it into a new file in {VSCode}.
 +
-image::ansible-lint-module-completion.png[Ansible-lint showing no errors in a playbook]
-
-== Error messages
-
-The following playbook contains multiple errors:
-
 ----
 - name:
   hosts: localhost
@@ -32,16 +14,16 @@ The following playbook contains multiple errors:
   - name:
     ansible.builtin.ping:
 ----
-
++
 The errors are indicated with a wavy underline in {VSCode}.
-Hover your mouse over an error to view the details:
-
+. Hover your mouse over an error to view the details:
++
 image::ansible-lint-errors.png[Popup message explaining a playbook error]
-
-The errors are listed in the *Problems* tab of the {VSCode} terminal.
-Playbook files that contain errors are indicated with a number in the Explorer pane:
-
+. Playbook files that contain errors are indicated with a number in the *Explorer* pane.
+. Select the *Problems* tab of the {VSCode} terminal to view a list of the errors.
++
 image::ansible-lint-errors-explorer.png[Playbook errors shown in Problems tab and explorer list]
++
+`$[0].tasks[0].name None is not of type 'string'` indicates that the playbook does not have a label.
 
-`$[0].tasks[0].name None is not of type 'string'` indicates that the playbook does not have a label.
diff --git a/downstream/modules/devtools/proc-devtools-create-aap-job.adoc b/downstream/modules/devtools/proc-devtools-create-aap-job.adoc
new file mode 100644
index 0000000000..eef0a27009
--- /dev/null
+++ b/downstream/modules/devtools/proc-devtools-create-aap-job.adoc
@@ -0,0 +1,24 @@
+[id="create-aap-job_{context}"]
+:_mod-docs-content-type: PROCEDURE
+
+= Running your playbook in {PlatformNameShort}
+
+To run your playbook in {PlatformNameShort}, you must create a project in {ControllerName} for the repository where you stored your playbook project.
+You can then create a job template for each playbook from the project.
+
+.Procedure
+
+. In a browser, log in to {ControllerName}.
+. Configure a Source Control credential type for your source control system if necessary. See the
+link:{URLControllerUserGuide}/controller-credentials#controller-create-credential[Creating new credentials]
+section of _{TitleControllerUserGuide}_ for more details.
+. In {ControllerName}, create a project for the GitHub repository where you stored your playbook project. Refer to the
+link:{URLControllerUserGuide}/controller-projects[Projects]
+chapter of _{TitleControllerUserGuide}_.
+. Create a job template that uses a playbook from the project that you created. Refer to the
+link:{URLControllerUserGuide}/controller-job-templates[Job Templates]
+chapter of _{TitleControllerUserGuide}_.
+. Run your playbook from {ControllerName} by launching the job template.
Refer to the +link:{URLControllerUserGuide}/controller-job-templates#controller-launch-job-template[Launching a job template] +section of _{TitleControllerUserGuide}_. + diff --git a/downstream/modules/devtools/proc-devtools-create-new-role-in-collection.adoc b/downstream/modules/devtools/proc-devtools-create-new-role-in-collection.adoc new file mode 100644 index 0000000000..3fe714a063 --- /dev/null +++ b/downstream/modules/devtools/proc-devtools-create-new-role-in-collection.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: PROCEDURE + +[id="devtools-create-new-role-in-collection_{context}"] += Creating a new role in your collection + +.Procedure + +. To create a new role, copy the default `run` role directory that was scaffolded when you created the collection. +. Define the tasks that you want your role to perform in the `tasks/main.yml` file. +If you are creating a role to reuse tasks in an existing playbook, +copy the content in the tasks block of your playbook YAML file. +Remove the whitespace to the left of the tasks. +Use `ansible-lint` in {VSCode} to check your YAML code. +. If your role depends on another role, add the dependency in the `meta/main.yml` file. diff --git a/downstream/modules/devtools/proc-devtools-docs-roles-collection.adoc b/downstream/modules/devtools/proc-devtools-docs-roles-collection.adoc new file mode 100644 index 0000000000..ffbda03a1b --- /dev/null +++ b/downstream/modules/devtools/proc-devtools-docs-roles-collection.adoc @@ -0,0 +1,32 @@ +:_mod-docs-content-type: PROCEDURE + +[id="devtools-docs-roles-collection_{context}"] += Adding documentation for your roles collection + +It is important to provide documentation for your roles and roles collection, so that other users understand what your roles do and how to use them. + +. To add documentation for a role, navigate to the role directory. +. Open the `README.md` file in an editor. +This file was added in the role directory when you scaffolded your collection directory. +. Provide the following information in the `README.md` files for every role in your collection: +** Role description: A brief summary of what the role does +** Requirements: List the collections, libraries, and required installations +** Dependencies +** Role variables: Provide the following information about the variables your role uses. +*** Description +*** Defaults +*** Example values +*** Required variables +** Example playbook: Show an example of a playbook that uses your role. +Use comments in the playbook to help users understand where to set variables. ++ +The `README.md` file in link:https://github.com/redhat-cop/controller_configuration/tree/devel/roles/ad_hoc_command_cancel[`controller_configuration.ad_hoc_command_cancel`] is an example of a role with standard documentation. +. To add documentation for your roles collection, navigate to the collection directory. +. In the `README.md` file for your collection, provide the following information: +** Collection description: Describe what the collection does. +** Requirements: List required collections. +** List the roles as a component of the collection. +** Using the collection: Describe how to run the components of the collection. +** Add a troubleshooting section. +** Versioning: Describe the release cycle of your collection. 
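+
+For example, a collection `README.md` skeleton that covers these sections might look like the following.
+This outline is only a suggestion; the version numbers and role descriptions are illustrative:
+
+----
+# company_namespace.myapp_network
+
+## Description
+
+Roles for configuring the myapp network services.
+
+## Requirements
+
+- ansible-core 2.15 or later (example value)
+- Collections listed in the galaxy.yml dependencies
+
+## Roles
+
+- acl_config: configures access control lists
+- tacacs: configures TACACS+ settings
+
+## Using this collection
+
+Reference roles by their FQCN, for example company_namespace.myapp_network.acl_config.
+
+## Troubleshooting
+
+Known issues and where to report problems.
+
+## Versioning
+
+Release cycle and changelog location.
+----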
+ diff --git a/downstream/modules/devtools/proc-devtools-extension-run-ansible-navigator.adoc b/downstream/modules/devtools/proc-devtools-extension-run-ansible-navigator.adoc new file mode 100644 index 0000000000..63f370eacd --- /dev/null +++ b/downstream/modules/devtools/proc-devtools-extension-run-ansible-navigator.adoc @@ -0,0 +1,30 @@ +[id="extension-run-ansible-navigator_{context}"] +:_mod-docs-content-type: PROCEDURE + += Running your playbook with `ansible-navigator` + +.Prerequisites + +* In the Ansible extension settings, enable the use of an execution environment in *Ansible Execution Environment > Enabled*. +* Enter the path or URL for the execution environment image in *Ansible > Execution Environment: Image*. + +.Procedure + +. To run a playbook, right-click the playbook name in the Explorer pane, then select menu:Run Ansible Playbook via[Run playbook via ansible-navigator run]. ++ +The output is displayed in the *Terminal* tab of the {VSCode} terminal. +The *Successful* status indicates that the playbook ran successfully. ++ +image:devtools-extension-navigator-output.png[Output for ansible-navigator execution] +. Enter the number next to a play to step into the play results. +The example playbook only contains one play. +Enter `0` to view the status of the tasks executed in the play. ++ +image:devtools-extension-navigator-tasks.png[Tasks in ansible-navigator output] ++ +Type the number next to a task to review the task results. + +For more information on running playbooks with {Navigator}, see +link:{URLNavigatorGuide}/assembly-execute-playbooks-navigator_ansible-navigator#proc-execute-playbook-tui_execute-playbooks-navigator[Executing a playbook from automation content navigator] +in the _{TitleNavigatorGuide}_ Guide. + diff --git a/downstream/modules/devtools/proc-devtools-extension-run-ansible-playbook.adoc b/downstream/modules/devtools/proc-devtools-extension-run-ansible-playbook.adoc new file mode 100644 index 0000000000..361a1070d7 --- /dev/null +++ b/downstream/modules/devtools/proc-devtools-extension-run-ansible-playbook.adoc @@ -0,0 +1,16 @@ +[id="extension-run-ansible-playbook_{context}"] +:_mod-docs-content-type: PROCEDURE + += Running your playbook with `ansible-playbook` + +.Procedure + +* To run a playbook, right-click the playbook name in the *Explorer* pane, then select menu:Run Ansible Playbook via[Run playbook via `ansible-playbook`]. ++ +image:ansible-playbook-run.png[Run playbook via ansible-playbook] + +The output is displayed in the *Terminal* tab of the {VSCode} terminal. +The `ok=2` and `failed=0` messages indicate that the playbook ran successfully. + +image:ansible-playbook-success.png[Success message for ansible-playbook execution] + diff --git a/downstream/modules/devtools/proc-devtools-extension-set-language.adoc b/downstream/modules/devtools/proc-devtools-extension-set-language.adoc new file mode 100644 index 0000000000..c4c5c848f7 --- /dev/null +++ b/downstream/modules/devtools/proc-devtools-extension-set-language.adoc @@ -0,0 +1,44 @@ +[id="devtools-extension-set-language_{context}"] +:_mod-docs-content-type: PROCEDURE + += Associating the Ansible language to YAML files + +[role="_abstract"] + +The Ansible {VSCode} extension works only when the language associated with a file is set to Ansible. +The extension provides features that help create Ansible playbooks, such as auto-completion, hover, and diagnostics. + +The Ansible {VSCode} extension automatically associates the Ansible language with some files. 
+The procedures below describe how to set the language for files that are not recognized as Ansible files. + +.Manually associating the Ansible language to YAML files + +The following procedure describes how to manually assign the Ansible language to a YAML file that is open in {VSCode}. + +. Open or create a YAML file in {VSCode}. +. Hover the cursor over the language identified in the status bar at the bottom of the {VSCode} window to open the *Select Language Mode* list. +. Select *Ansible* in the list. ++ +The language shown in the status bar at the bottom of the {VSCode} window for the file is changed to Ansible. + +.Adding persistent file association for the Ansible language to `settings.json` + +Alternatively, you can add file association for the Ansible language in your `settings.json` file. + +. Open the `settings.json` file: +.. Click menu:View[Command Palette] to open the command palette. +.. Enter `Workspace settings` in the search box and select *Open Workspace Settings (JSON)*. +. Add the following code to `settings.json`. ++ +---- +{ + ... + + "files.associations": { + "*plays.yml": "ansible", + "*init.yml": "yaml", + } +} +---- + + diff --git a/downstream/modules/devtools/proc-devtools-extension-settings.adoc b/downstream/modules/devtools/proc-devtools-extension-settings.adoc new file mode 100644 index 0000000000..60592de68b --- /dev/null +++ b/downstream/modules/devtools/proc-devtools-extension-settings.adoc @@ -0,0 +1,38 @@ +[id="devtools-extension-settings_{context}"] +:_mod-docs-content-type: PROCEDURE + += Configuring Ansible extension settings + +[role="_abstract"] + +The Ansible extension supports multiple configuration options. + +You can configure the settings for the extension on a user level, on a workspace level, or for a particular directory. +User-based settings are applied globally for any instance of VS Code that is opened. +Workspace settings are stored within your workspace and only apply when the current workspace is opened. + +It is useful to configure settings for your workspace for the following reasons: + +* If you define and maintain configurations specific to your playbook project, +you can customize your Ansible development environment for individual projects without altering your preferred setup for other work. +You can have different settings for a Python project, an Ansible project, and a C++ project, each optimized for the respective stack without the need to manually reconfigure settings each time you switch projects. +* If you include workspace settings when setting up version control for a project you want to share with your team, everyone uses the same configuration for that project. + +.Procedure + +. Open the Ansible extension settings: +.. Click the 'Extensions' icon in the activity bar. +.. Select the Ansible extension, and click the 'gear' icon and then *Extension Settings* to display the extension settings. ++ +Alternatively, click menu:Code[Settings>Settings] to open the *Settings* page. +.. Enter `Ansible` in the search bar to display the settings for the extension. +. Select the *Workspace* tab to configure your settings for the current {VSCode} workspace. +. The Ansible extension settings are pre-populated. +Modify the settings to suit your requirements: +** Check the menu:Ansible[Validation > Lint: Enabled] box to enable ansible-lint. +** Check the `Ansible Execution Environment: Enabled` box to use an {ExecEnvShort}. +** Specify the {ExecEnvShort} image you want to use in the *Ansible > Execution Environment: image* field. 
+** To use {LightspeedShortName}, check the *Ansible > Lightspeed: Enabled* box, and enter the URL for Lightspeed. + +The settings are documented on the link:https://marketplace.visualstudio.com/items?itemName=redhat.ansible[Ansible {VSCode} Extension by Red Hat page] in the VisualStudio marketplace documentation. + diff --git a/downstream/modules/devtools/proc-devtools-inspect-playbook.adoc b/downstream/modules/devtools/proc-devtools-inspect-playbook.adoc new file mode 100644 index 0000000000..0ccde249c3 --- /dev/null +++ b/downstream/modules/devtools/proc-devtools-inspect-playbook.adoc @@ -0,0 +1,18 @@ +[id="inspect-playbook_context_{context}"] +:_mod-docs-content-type: PROCEDURE + += Inspecting your playbook + +[role="_abstract"] +The Ansible {VSCode} extension provides inline help, syntax highlighting, and assists you with indentation in `.yml` files. + +. Open a playbook in {VSCode}. +. Hover your mouse over a keyword or a module name: the Ansible extension provides documentation: ++ +image::ansible-lint-keyword-help.png[Ansible-lint showing no errors in a playbook] +. If you begin to type the name of a module, for example `ansible.builtin.ping`, the extension provides a list of suggestions. ++ +Select one of the suggestions to autocomplete the line. ++ +image::ansible-lint-module-completion.png[Ansible-lint showing no errors in a playbook] + diff --git a/downstream/modules/devtools/proc-devtools-install-container.adoc b/downstream/modules/devtools/proc-devtools-install-container.adoc new file mode 100644 index 0000000000..b895eb9119 --- /dev/null +++ b/downstream/modules/devtools/proc-devtools-install-container.adoc @@ -0,0 +1,54 @@ +[id="devtools-install-container_{context}"] +:_mod-docs-content-type: PROCEDURE + += Installing {ToolsName} on a container inside {VSCode} + +The Dev Containers {VSCode} extension requires a `.devcontainer` file to store settings for your dev containers. +You must use the Ansible extension to scaffold a config file for your dev container, and reopen your directory in a container in {VSCode}. + +.Prerequisites + +* You have installed a containerization platform, for example Podman, Podman Desktop, Docker, or Docker Desktop. +* You have a Red Hat login and you have logged in to the Red Hat registry at `registry.redhat.io`. +For information about logging in to `registry.redhat.io`, see +xref:devtools-setup-registry-redhat-io_installing-devtools[Authenticating with the Red Hat container registry]. +* You have installed {VSCode}. +* You have installed the Ansible extension in {VSCode}. +* You have installed the link:https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers[Microsoft Dev Containers] extension in {VSCode}. +* If you are installing {ToolsName} on Windows, launch {VSCode} and connect to the WSL machine: +.. Click the `Remote` (image:vscode-remote-icon.png[Remote,15,15]) icon. +.. In the dropdown menu that appears, select the option to connect to the WSL machine. + +.Procedure + +. In {VSCode}, navigate to your project directory. +. Click the Ansible icon in the {VSCode} activity bar to open the Ansible extension. +. In the *Ansible Development Tools* section of the Ansible extension, scroll down to the *ADD* option and select *Devcontainer*. +. In the *Create a devcontainer* page, select the *Downstream* container image from the *Container image* options. ++ +This action adds `devcontainer.json` files for both Podman and Docker in a `.devcontainer` directory. +. 
Reopen or reload the project directory:
+** If {VSCode} detects that your directory contains a `devcontainer.json` file, the following notification appears:
++
+image::devtools-reopen-in-container.png[Reopen in container]
++
+Click *Reopen in Container*.
+** If the notification does not appear, click the `Remote` (image:vscode-remote-icon.png[Remote,15,15]) icon. In the dropdown menu that appears, select *Reopen in Container*.
+. Select the dev container for Podman or Docker according to the containerization platform you are using.
++
+The *Remote ()* status in the {VSCode} Status bar displays `opening Remote` and a notification indicates the progress in opening the container.
+
+.Verification
+When the directory reopens in a container, the *Remote ()* status displays `Dev Container: ansible-dev-container`.
+
+
+[NOTE]
+====
+The base image for the container is a Universal Base Image Minimal (UBI Minimal) image that uses `microdnf` as a package manager.
+The `dnf` and `yum` package managers are not available in the container.
+
+For information about using `microdnf` in containers based on UBI Minimal images, see
+link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/building_running_and_managing_containers/assembly_adding-software-to-a-ubi-container_building-running-and-managing-containers#proc_adding-software-in-a-minimal-ubi-container_assembly_adding-software-to-a-ubi-container[Adding software in a minimal UBI container]
+in the Red Hat Enterprise Linux _Building, running, and managing containers_ guide.
+====
+
diff --git a/downstream/modules/devtools/proc-devtools-install-podman-desktop-wsl.adoc b/downstream/modules/devtools/proc-devtools-install-podman-desktop-wsl.adoc
new file mode 100644
index 0000000000..b4af8aeb3f
--- /dev/null
+++ b/downstream/modules/devtools/proc-devtools-install-podman-desktop-wsl.adoc
@@ -0,0 +1,56 @@
+[id="devtools-install-podman-desktop-wsl_{context}"]
+:_mod-docs-content-type: PROCEDURE
+
+= Requirements for {ToolsName} on Windows
+
+[role="_abstract"]
+If you are installing {ToolsName} on a container in {VSCode} on Windows, there are extra requirements:
+
+* Windows Subsystem for Linux (WSL2)
+* Podman Desktop
+
+.Procedure
+
+. Install WSL2 without a distribution:
++
+----
+$ wsl --install --no-distribution
+----
+. Use `cgroupsv2` by disabling `cgroupsv1` for WSL2:
++
+Edit the `%USERPROFILE%/.wslconfig` file and add the following lines to force `cgroupv2` usage:
++
+----
+[wsl2]
+kernelCommandLine = cgroup_no_v1="all"
+----
+. Install Podman Desktop. Follow the instructions in
+link:https://podman-desktop.io/docs/installation/windows-install[Installing Podman Desktop and Podman on Windows]
+in the Podman Desktop documentation.
++
+You do not need to change the default settings in the set-up wizard.
+. Ensure that the Podman machine is using `cgroupsv2`:
++
+----
+$ podman info | findstr cgroup
+----
+. Test Podman Desktop:
++
+----
+$ podman run hello
+----
+. Configure the settings for Podman Desktop.
+Add a `%USERPROFILE%\bin\docker.bat` file with the following content:
++
+----
+@echo off
+podman %*
+----
++
+This avoids having to install Docker as required by the {VSCode} `Dev Containers` extension.
+. Add the `%USERPROFILE%\bin` directory to the `PATH`:
+.. Select *Settings* and search for "Edit environment variables for your account" to display all of the user environment variables.
+.. Highlight "Path" in the top user variables box, click btn:[Edit] and add the path.
+..
Click btn:[Save] to set the path for any new console that you open. + + diff --git a/downstream/modules/devtools/proc-devtools-install.adoc b/downstream/modules/devtools/proc-devtools-install-rpm.adoc similarity index 59% rename from downstream/modules/devtools/proc-devtools-install.adoc rename to downstream/modules/devtools/proc-devtools-install-rpm.adoc index 5548275f93..32e77dd181 100644 --- a/downstream/modules/devtools/proc-devtools-install.adoc +++ b/downstream/modules/devtools/proc-devtools-install-rpm.adoc @@ -1,16 +1,23 @@ -[id="devtools-install_context"] +[id="devtools-install_{context}"] +:_mod-docs-content-type: PROCEDURE -= Installing {ToolsName} from an RPM package += Installing {ToolsName} from a package on RHEL [role="_abstract"] -{ToolsName} is bundled in the {PlatformNameShort} RPM (Red Hat Package Manager) package. -// As an {PlatformNameShort} administrator, you can install {ToolsName} when you are installing {PlatformNameShort}. -Refer to the {PlatformNameShort} guide for more information on installing {PlatformNameShort}. +{ToolsName} are bundled in the {PlatformNameShort} RPM (Red Hat Package Manager) package. +Refer to the _link:{LinkInstallationGuide}_ documentation for information on installing {PlatformNameShort}. .Prerequisites -* You have installed RHEL + +* You have installed RHEL 8 or RHEL 9. ++ +[NOTE] +==== +RPM installation is not supported on RHEL 10. +==== * You have registered your system with Red Hat Subscription Manager. +* You have installed a containerization platform, for example Podman or Docker. .Procedure @@ -18,7 +25,7 @@ Refer to the {PlatformNameShort} guide for more information on installing {Platf + [source,shell] ---- -$ su subscription-manager status +$ sudo subscription-manager status ---- + If Simple Content Access is enabled, the output contains the following message: @@ -30,17 +37,18 @@ Content Access Mode is set to Simple Content Access. + [source,shell] ---- -$ subscription-manager attach --pool= +$ sudo subscription-manager attach --pool= ---- . Install {Toolsname} with the following command: + [source,shell] ---- -$ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-dev-tools +$ sudo dnf install --enablerepo=ansible-automation-platform-2.5-for-rhel-8-x86_64-rpms ansible-dev-tools ---- + +[source,shell] ---- -$ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-dev-tools +$ sudo dnf install --enablerepo=ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms ansible-dev-tools ---- .Verification: @@ -73,15 +81,16 @@ On successful installation, you can view the help documentation for ansible-crea ---- $ ansible-creator --help -usage: ansible-creator [-h] [--version] {init} ... +usage: ansible-creator [-h] [--version] command ... -Tool to scaffold Ansible Content. Get started by looking at the help text. +The fastest way to generate all your ansible content. -options: - -h, --help show this help message and exit - --version Print ansible-creator version and exit. +Positional arguments: + command + add Add resources to an existing Ansible project. + init Initialize a new Ansible project. -Commands: - {init} The subcommand to invoke. - init Initialize an Ansible Collection. +Options: + --version Print ansible-creator version and exit. 
+ -h --help Show this help message and exit ---- diff --git a/downstream/modules/devtools/proc-devtools-install-vsc.adoc b/downstream/modules/devtools/proc-devtools-install-vsc.adoc new file mode 100644 index 0000000000..8df6ec3744 --- /dev/null +++ b/downstream/modules/devtools/proc-devtools-install-vsc.adoc @@ -0,0 +1,9 @@ +[id="devtools-install-vsc_{context}"] +:_mod-docs-content-type: PROCEDURE + += Installing {VScode} + +[role="_abstract"] + +* To install {VScode}, follow the instructions on the link:https://code.visualstudio.com/download[Download Visual Studio Code page] in the Visual Studio Code documentation. + diff --git a/downstream/modules/devtools/proc-devtools-install-vscode-extension.adoc b/downstream/modules/devtools/proc-devtools-install-vscode-extension.adoc new file mode 100644 index 0000000000..9b090a0f59 --- /dev/null +++ b/downstream/modules/devtools/proc-devtools-install-vscode-extension.adoc @@ -0,0 +1,34 @@ +[id="devtools-install-extension_{context}"] +:_mod-docs-content-type: PROCEDURE + += Installing the {VSCode} Ansible extension + +[role="_abstract"] + +The Ansible extension adds language support for Ansible to {VSCode}. +It incorporates {ToolsName} to facilitate creating and running automation content. + +For a full description of the Ansible extension, see the link:https://marketplace.visualstudio.com/items?itemName=redhat.ansible[Visual Studio Code Marketplace]. + +See link:https://red.ht/aap-lp-vscode-essentials[Learning path - Getting Started with the Ansible {VSCode} Extension] for tutorials on working with the extension. + +To install the Ansible {VSCode} extension: + +. Open {VSCode}. +. Click the *Extensions* (image:vscode-extensions-icon.png[Extensions,15,15]) icon in the Activity Bar, or click menu:View[Extensions], to display the *Extensions* view. +. In the search field in the *Extensions* view, type `Ansible Red Hat`. +. Select the Ansible extension and click btn:[Install]. + +When the language for a file is recognized as Ansible, the Ansible extension provides features such as auto-completion, hover, diagnostics, and goto. +The language identified for a file is displayed in the Status bar at the bottom of the {VSCode} window. + +The following files are assigned the Ansible language: + +* YAML files in a `/playbooks` directory +* Files with the following double extension: `.ansible.yml` or `.ansible.yaml` +* Certain YAML names recognized by Ansible, for example `site.yml` or `site.yaml` +* YAML files whose filename contains "playbook": `*playbook*.yml` or `*playbook*.yaml` + +If the extension does not identify the language for your playbook files as Ansible, follow the procedure in +xref:devtools-extension-set-language_installing-devtools[Associating the Ansible language to YAML files]. + diff --git a/downstream/modules/devtools/proc-devtools-migrate-existing-roles-collection.adoc b/downstream/modules/devtools/proc-devtools-migrate-existing-roles-collection.adoc new file mode 100644 index 0000000000..89d4cb6878 --- /dev/null +++ b/downstream/modules/devtools/proc-devtools-migrate-existing-roles-collection.adoc @@ -0,0 +1,89 @@ +:_mod-docs-content-type: PROCEDURE + +[id="devtools-migrate-existing-roles-collection_{context}"] += Migrating existing roles to your collection + +The directory for a standalone role has the following structure. +Your role might not contain all of these directories. 
+
+----
+my_role
+├── README.md
+├── defaults
+│   └── main.yml
+├── files
+├── handlers
+│   └── main.yml
+├── meta
+│   └── main.yml
+├── tasks
+│   └── main.yml
+├── templates
+├── tests
+│   ├── inventory
+│   └── test.yml
+└── vars
+    └── main.yml
+
+----
+
+An Ansible role has a defined directory structure with seven main standard directories.
+Each role must include at least one of these directories.
+You can omit any directories the role does not use.
+Each directory contains a `main.yml` file.
+
+.Procedure
+
+. If necessary, rename the directory that contains your role to reflect its content, for example, `acl_config` or `tacacs`.
++
+Roles in collections cannot have hyphens in their names. Use the underscore character (`_`) instead.
+. Copy the roles directories from your standalone role into the `roles/` directory in your collection.
++
+For example, in a collection called `myapp_network`, add your roles to the `myapp_network/roles/` directory.
+. Copy any plug-ins from your standalone roles into the `plugins/` directory for your new collection.
+The collection directory structure resembles the following.
++
+----
+company_namespace
+└── myapp_network
+    ├── ...
+    ├── galaxy.yml
+    ├── docs
+    ├── extensions
+    ├── meta
+    ├── plugins
+    ├── roles
+    │   ├── acl_config
+    │   │   ├── README.md
+    │   │   ├── defaults
+    │   │   ├── files
+    │   │   ├── handlers
+    │   │   ├── meta
+    │   │   ├── tasks
+    │   │   ├── templates
+    │   │   ├── tests
+    │   │   └── vars
+    │   ├── tacacs
+    │   │   ├── README.md
+    │   │   ├── defaults
+    │   │   ├── files
+    │   │   ├── handlers
+    │   │   ├── meta
+    │   │   ├── tasks
+    │   │   ├── templates
+    │   │   ├── tests
+    │   │   └── vars
+    │   └── run
+    ├── ...
+    ├── tests
+    └── vars
+
+----
++
+The `run` role is a default role directory that is created when you scaffold the collection.
+. Update your playbooks to use the fully qualified collection name (FQCN) for your new roles in your collection.
+
+Not every standalone role will seamlessly integrate into your collection without modification of the code.
+For example, if a third-party standalone role from Galaxy that contains a plug-in uses the `module_utils/` directory,
+then the plug-in itself has import statements that you must update to point to the `module_utils/` location in your new collection.
+
diff --git a/downstream/modules/devtools/proc-devtools-ms-dev-containers-ext.adoc b/downstream/modules/devtools/proc-devtools-ms-dev-containers-ext.adoc
new file mode 100644
index 0000000000..4bec861e18
--- /dev/null
+++ b/downstream/modules/devtools/proc-devtools-ms-dev-containers-ext.adoc
@@ -0,0 +1,24 @@
+[id="devtools-ms-dev-containers-ext_{context}"]
+:_mod-docs-content-type: PROCEDURE
+
+= Installing and configuring the `Dev Containers` extension
+
+If you are installing the containerized version of {ToolsName}, you must install the
+link:https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-containers[Microsoft Dev Containers]
+extension in {VSCode}.
+
+. Open {VSCode}.
+. Click the *Extensions* (image:vscode-extensions-icon.png[Extensions,15,15]) icon in the Activity Bar, or click menu:View[Extensions], to display the *Extensions* view.
+. In the search field in the *Extensions* view, type `Dev Containers`.
+. Select the Dev Containers extension from Microsoft and click btn:[Install].
+
+If you are using Podman or Podman Desktop as your containerization platform, you must modify the default settings in the `Dev Containers` extension.
+
+. Replace `docker` with `podman` in the `Dev Containers` extension settings:
+.. In {VSCode}, open the settings editor.
+.. Search for `@ext:ms-vscode-remote.remote-containers`.
+
+Alternatively, click the *Extensions* icon in the activity bar and click the gear icon for the `Dev Containers` extension.
+. Set `Dev > Containers:Docker Path` to `podman`.
+. Set `Dev > Containers:Docker Compose Path` to `podman-compose`.
+
diff --git a/downstream/modules/devtools/proc-devtools-publish-roles-collection-pah.adoc b/downstream/modules/devtools/proc-devtools-publish-roles-collection-pah.adoc
new file mode 100644
index 0000000000..8a06fcb597
--- /dev/null
+++ b/downstream/modules/devtools/proc-devtools-publish-roles-collection-pah.adoc
@@ -0,0 +1,29 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="devtools-publish-roles-collection-pah_{context}"]
+= Publishing your collection in {PrivateHubName}
+
+.Prerequisites
+
+* Package your collection into a tarball.
+Format your collection file name as follows:
+
+`<namespace>-<collection_name>-<version>.tar.gz`
+
+For example, `company_namespace-myapp_network-1.0.0.tar.gz`
+
+.Procedure
+
+. Create a namespace for your collection in {PrivateHubName}. See
+link:{URLHubManagingContent}/managing-collections-hub#proc-create-namespace[Creating a namespace]
+in the _{TitleHubManagingContent}_ guide.
+. Optional: Add information to your namespace. See
+link:{URLHubManagingContent}/managing-collections-hub#proc-edit-namespace[Adding additional information and resources to a namespace]
+in the _{TitleHubManagingContent}_ guide.
+. Upload your roles collection tarballs to your namespace. See
+link:{URLHubManagingContent}/managing-collections-hub#proc-uploading-collections[Uploading collections to your namespaces]
+in the _{TitleHubManagingContent}_ guide.
+. Approve your collection for internal publication. See
+link:{URLHubManagingContent}/managing-collections-hub#proc-approve-collection[Approving collections for internal publication]
+in the _{TitleHubManagingContent}_ guide.
+
diff --git a/downstream/modules/devtools/proc-devtools-run-playbook-extension.adoc b/downstream/modules/devtools/proc-devtools-run-playbook-extension.adoc
new file mode 100644
index 0000000000..e5fe9913aa
--- /dev/null
+++ b/downstream/modules/devtools/proc-devtools-run-playbook-extension.adoc
@@ -0,0 +1,13 @@
+[id="running-playbook-extension_{context}"]
+:_mod-docs-content-type: PROCEDURE
+
+= Running your playbook
+
+[role="_abstract"]
+
+The Ansible {VSCode} extension provides two options to run your playbook:
+
+* `ansible-playbook` runs the playbook on your local machine using Ansible Core.
+* `ansible-navigator` runs the playbook in an execution environment in the same manner that {PlatformNameShort} runs an automation job.
+You specify the base image for the execution environment in the Ansible extension settings.
+
diff --git a/downstream/modules/devtools/proc-devtools-save-scm.adoc b/downstream/modules/devtools/proc-devtools-save-scm.adoc
new file mode 100644
index 0000000000..b020de689f
--- /dev/null
+++ b/downstream/modules/devtools/proc-devtools-save-scm.adoc
@@ -0,0 +1,12 @@
+[id="devtools-save-scm_{context}"]
+:_mod-docs-content-type: PROCEDURE
+
+= Saving your project in SCM
+
+Save your playbook project as a repository in your source control management system, for example GitHub.
+
+.Procedure
+
+. Initialize your project directory as a git repository.
+. Push your project up to a source control system such as GitHub, as shown in the example below.
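+
+The following is a minimal command sequence for these steps, assuming that your default branch is named `main` and that you have already created an empty repository on GitHub.
+The directory, account, and repository names are placeholders; replace them with your own:
+
+----
+$ cd <playbook_project_directory>
+$ git init -b main
+$ git add .
+$ git commit -m "Initial commit"
+$ git remote add origin git@github.com:<account>/<repository>.git
+$ git push -u origin main
+----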
+ diff --git a/downstream/modules/devtools/proc-devtools-scaffold-roles-collection.adoc b/downstream/modules/devtools/proc-devtools-scaffold-roles-collection.adoc new file mode 100644 index 0000000000..211ff54afb --- /dev/null +++ b/downstream/modules/devtools/proc-devtools-scaffold-roles-collection.adoc @@ -0,0 +1,80 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2024-09-26 +:_mod-docs-content-type: PROCEDURE + +[id="devtools-scaffold-roles-collection_{context}"] += Scaffolding a collection for your roles + +You can scaffold a collection for your roles from the Ansible extension in {VSCode}. + +.Procedure + +. Open {VSCode}. +. Navigate to the directory where you want to create your roles collection. +. Click the Ansible icon in the {VSCode} activity bar to open the Ansible extension. +. Select *Get started* in the *Ansible content creator* section. ++ +The *Ansible content creator* tab opens. +. In the *Create* section, click *Ansible collection project*. ++ +The *Create new Ansible project* tab opens. +. In the form in the *Create Ansible project* tab, enter the following: +** *Namespace*: Enter a name for your namespace, for example `company_namespace`. +** *Collection*: Enter a name for your collection, for example, `myapp_network`. +** *Init path*: Enter the path to the directory where you want to scaffold your new collection. ++ +If you enter an existing directory name, the scaffolding process overwrites the contents of that directory. +The scaffold process only allows you to use an existing directory if you enable the Force option. + +*** If you are using the containerized version of Ansible development tools, +the destination directory path is relative to the container, not a path in your local system. +To discover the current directory name in the container, run the pwd command in a terminal in {VSCode}. +If the current directory in the container is `workspaces`, enter `workspaces//collections`. +*** If you are using a locally installed version of Ansible Dev tools, +enter the full path to the directory, for example `/user//path/to/`. +. Click btn:[Create]. + +.Verification + +The following message appears in the *Logs* pane of the *Create Ansible collection* tab. 
+// In this example, the destination directory name is + +---- +--------------------- ansible-creator logs --------------------- + + Note: collection company_namespace.myapp_network created at /path/to/collections/directory +---- + +The following directories and files are created in your `collections/` directory: + +---- +├── .devcontainer +├── .github +├── .gitignore +├── .isort.cfg +├── .pre-commit-config.yaml +├── .prettierignore +├── .vscode +├── CHANGELOG.rst +├── CODE_OF_CONDUCT.md +├── CONTRIBUTING +├── LICENSE +├── MAINTAINERS +├── README.md +├── changelogs +├── devfile.yaml +├── docs +├── extensions +├── galaxy.yml +├── meta +├── plugins +├── pyproject.toml +├── requirements.txt +├── roles +├── test-requirements.txt +├── tests +└── tox-ansible.ini + +---- + + diff --git a/downstream/modules/devtools/proc-devtools-set-up-ansible-config.adoc b/downstream/modules/devtools/proc-devtools-set-up-ansible-config.adoc new file mode 100644 index 0000000000..64c1511c74 --- /dev/null +++ b/downstream/modules/devtools/proc-devtools-set-up-ansible-config.adoc @@ -0,0 +1,16 @@ +[id="devtools-set-up-ansible-config_{context}"] +:_mod-docs-content-type: PROCEDURE + += Setting up the Ansible configuration file for your playbook project + +[role="_abstract"] +When you scaffolded your playbook project, an Ansible configuration file, `ansible.cfg`, +was added to the root directory of your project. + +If you have configured a default Ansible configuration file in `/etc/ansible/ansible.cfg`, +copy any settings that you want to reuse in your project from your default Ansible configuration file +to the `ansible.cfg` file in your project's root directory. + +To learn more about the Ansible configuration file, see +link:https://docs.ansible.com/ansible/latest/reference_appendices/config.html[Ansible Configuration Settings] +in the Ansible documentation. diff --git a/downstream/modules/devtools/proc-devtools-setup-registry-redhat-io.adoc b/downstream/modules/devtools/proc-devtools-setup-registry-redhat-io.adoc new file mode 100644 index 0000000000..bc6f46e694 --- /dev/null +++ b/downstream/modules/devtools/proc-devtools-setup-registry-redhat-io.adoc @@ -0,0 +1,50 @@ +[id="devtools-setup-registry-redhat-io_{context}"] +:_mod-docs-content-type: PROCEDURE + += Authenticating with the Red Hat container registry + +[role="_abstract"] +All container images available through the Red Hat container catalog are hosted on an image registry, +`registry.redhat.io`. +The registry requires authentication for access to images. + +To use the `registry.redhat.io` registry, you must have a Red Hat login. +This is the same account that you use to log in to the Red Hat Customer Portal (access.redhat.com) and manage your Red Hat subscriptions. + +[NOTE] +==== +If you are planning to install the {ToolsName} on a container inside {VSCode}, +you must log in to `registry.redhat.io` before launching {VSCode} so that {VSCode} can pull the +`devtools` container from `registry.redhat.io`. + +If you are running {ToolsName} on a container inside {VSCode} and you want to pull execution environments +or the `devcontainer` to use as an execution environment, +you must log in from a terminal prompt within the `devcontainer` from a terminal inside {VSCode}. +==== + +You can use the `podman login` or `docker login` commands with your credentials to access content on the registry. 
+
+Podman::
++
+----
+$ podman login registry.redhat.io
+Username: my_redhat_username
+Password: ***********
+----
+Docker::
++
+----
+$ docker login registry.redhat.io
+Username: my_redhat_username
+Password: ***********
+----
+
+For more information about Red Hat container registry authentication, see
+link:https://access.redhat.com/RegistryAuthentication[Red Hat Container Registry Authentication]
+on the Red Hat customer portal.
+
+// * If you are an organization administrator, you can create profiles for users in your organization and configure Red Hat customer portal access permissions for them.
+// Refer to link:https://access.redhat.com/start/learn:get-set-red-hat/resource/resources:create-and-manage-other-users[Create and manage other users] on the Red Hat customer portal for information.
+// * If you are a member of an organization, ask your administrator to create a Red Hat customer portal account for you.
+//Troubleshooting link:https://access.redhat.com/articles/3560571[Troubleshooting Authentication Issues with `registry.redhat.io`]
+
diff --git a/downstream/modules/devtools/proc-devtools-testing-playbook.adoc b/downstream/modules/devtools/proc-devtools-testing-playbook.adoc
new file mode 100644
index 0000000000..f4e73cefcf
--- /dev/null
+++ b/downstream/modules/devtools/proc-devtools-testing-playbook.adoc
@@ -0,0 +1,32 @@
+[id="test-playbook_{context}"]
+:_mod-docs-content-type: PROCEDURE
+
+= Testing your playbooks
+
+[role="_abstract"]
+
+To test your playbooks in your project, run them in a non-production environment such as a lab setup or a virtual machine.
+
+{NavigatorStart} (`ansible-navigator`) is a text-based user interface (TUI) for developing and troubleshooting Ansible content with execution environments.
+
+Running a playbook using `ansible-navigator` generates verbose output that you can inspect to check whether the playbook is running the way you expected.
+You can specify the execution environment that you want to run your playbooks on, so that your tests replicate the production setup on {PlatformNameShort}:
+
+* To run a playbook on an execution environment, run the following command from the terminal in {VSCode}:
++
+----
+$ ansible-navigator run <playbook_name> --eei <execution_environment_image>
+----
+For example, to execute a playbook called `site.yml` on the {PlatformNameShort} RHEL 9 minimal execution environment, run the following command from the terminal in {VSCode}.
++
+----
+$ ansible-navigator run site.yml --eei ee-minimal-rhel9
+----
+
+The output is displayed in the terminal.
+You can inspect the results and step into each play and task that was executed.
+
+For more information about running playbooks, refer to
+link:{URLNavigatorGuide}/assembly-execute-playbooks-navigator_ansible-navigator[Running Ansible playbooks with automation content navigator]
+in the _{TitleNavigatorGuide}_ guide.
+
diff --git a/downstream/modules/devtools/proc-devtools-use-roles-collections-aap.adoc b/downstream/modules/devtools/proc-devtools-use-roles-collections-aap.adoc
new file mode 100644
index 0000000000..fc29b8dd2e
--- /dev/null
+++ b/downstream/modules/devtools/proc-devtools-use-roles-collections-aap.adoc
@@ -0,0 +1,22 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="devtools-use-roles-collections-aap_{context}"]
+= Using your collection in projects in {PlatformName}
+
+To use your collection in {ControllerName}, you must add your collection to an
+{ExecEnvShort} and push it to {PrivateHubName}.
+
+The following procedure describes the workflow to add a collection to an {ExecEnvShort}.
Refer to
+link:{URLBuilder}/assembly-publishing-exec-env#proc-customize-ee-image[Customizing an existing automation execution environment image]
+in the _{TitleBuilder}_ guide for the commands to execute these steps.
+
+. You can pull an {ExecEnvShort} base image from {HubName},
+or you can add your collection to your own custom {ExecEnvShort}.
+. Add the collections that you want to include in the {ExecEnvShort}.
+. Build the new {ExecEnvShort}.
+. Verify that the collections are in the {ExecEnvShort}.
+. Tag the image and push it to {PrivateHubName}.
+. Pull your new image into your {ControllerName} instance.
+. The playbooks that use the roles in your collection must use the fully qualified collection name (FQCN) for the roles.
+
diff --git a/downstream/modules/devtools/proc-devtools-working-with-ee.adoc b/downstream/modules/devtools/proc-devtools-working-with-ee.adoc
new file mode 100644
index 0000000000..b5059c1537
--- /dev/null
+++ b/downstream/modules/devtools/proc-devtools-working-with-ee.adoc
@@ -0,0 +1,37 @@
+[id="working-with-ee_{context}"]
+:_mod-docs-content-type: PROCEDURE
+
+= Working with execution environments
+
+[role="_abstract"]
+
+You can view the automation execution environments provided by Red Hat in the
+link:https://catalog.redhat.com/search?searchType=containers&build_categories_list=Automation%20execution%20environment&p=1[Red Hat Ecosystem Catalog].
+
+Click on an execution environment for information on how to download it.
+
+. Log in to `registry.redhat.io` if you have not already done so.
++
+[NOTE]
+====
+If you are running {ToolsName} on a container inside {VSCode} and you want to pull execution environments
+or the `devcontainer` to use as an execution environment,
+you must log in to `registry.redhat.io` from a terminal prompt within the `devcontainer` inside {VSCode}.
+====
+. Using the information in the
+link:https://catalog.redhat.com/search?searchType=containers&build_categories_list=Automation%20execution%20environment&p=1[Red Hat Ecosystem Catalog], download the execution environment you need.
++
+For example, to download the minimal RHEL 9 base image, run the following command:
++
+----
+$ podman pull registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel9
+----
+
+You can build and create custom execution environments with `ansible-builder`.
+For more information about working with execution environments locally, see
+link:{LinkBuilder}.
+
+After customizing your execution environment, you can push your new image to the container registry in automation hub. See
+link:{URLBuilder}/index#assembly-publishing-exec-env[Publishing an automation execution environment]
+in the _{TitleBuilder}_ documentation.
+
diff --git a/downstream/modules/devtools/proc-devtools-writing-first-playbook.adoc b/downstream/modules/devtools/proc-devtools-writing-first-playbook.adoc
new file mode 100644
index 0000000000..98f76f7c46
--- /dev/null
+++ b/downstream/modules/devtools/proc-devtools-writing-first-playbook.adoc
@@ -0,0 +1,40 @@
+[id="writing-playbook_{context}"]
+:_mod-docs-content-type: PROCEDURE
+
+= Writing your first playbook
+
+[role="_abstract"]
+{ToolsName} help you to create and run playbooks in {VSCode}.
+
+.Prerequisites
+
+* You have installed and opened the Ansible {VSCode} extension.
+* You have opened a terminal in {VSCode}.
+* You have installed `ansible-dev-tools`.
+
+.Procedure
+
+. Create a new `.yml` file in {VSCode} for your playbook, for example `example_playbook.yml`. Put it in the same directory level as the example `site.yml` file.
+.
Add the following example code into the playbook file and save the file. +The playbook consists of a single play that executes a `ping` to your local machine. ++ +---- +--- +- name: My first play + hosts: localhost + tasks: + - name: Ping my hosts + ansible.builtin.ping: + +---- ++ +`Ansible-lint` runs in the background and displays errors in the *Problems* tab of the terminal. +There are no errors in this playbook: ++ +image::ansible-lint-no-errors.png[Ansible-lint showing no errors in a playbook] +. If you want to add new content to the playbook, use the following rules: +** Every playbook file must finish with a blank line. +** Trailing spaces at the end of lines are not allowed. +** Every playbook and every play require an identifier (name). +. Save your playbook file. + diff --git a/downstream/modules/devtools/proc-install-vscode-extension.adoc b/downstream/modules/devtools/proc-install-vscode-extension.adoc deleted file mode 100644 index 1c40f624fe..0000000000 --- a/downstream/modules/devtools/proc-install-vscode-extension.adoc +++ /dev/null @@ -1,30 +0,0 @@ -[id="install-vscode-extension"] - -= Installing the Ansible {VSCode} extension - -[role="_abstract"] - -The Ansible extension adds language support for Ansible to {VSCode}. -It incorporates {ToolsName} to facilitate creating and running automation content. - -For a full description of the Ansible extension, see the link:https://marketplace.visualstudio.com/items?itemName=redhat.ansible[Visual Studio Code Marketplace]. - -// See link:URL[Learning path - Getting Started with the Ansible {VSCode} Extension] for interactive training on working with the extension. - -To install the Ansible {VSCode} extension: - -. Click the *Extensions* icon in the {VSCode} Activity Bar, or select menu:View[Extensions], to display the *Extensions* view. -. In the search field in the *Extensions* view, type "Ansible Red Hat". -. Select the Ansible extension and click btn:[Install]. - -The Ansible extension becomes active when you open a workspace or directory that contains one of the following files: - -* Files with a `.yml`, `.yaml`, `.ansible.yml` or `.ansible.yaml` extension. -* Common YAML filenames recognized by Ansible, such as `site.yml` -* YAML files whose names contain "playbook". - -Open a `.yml` file in your workspace. The language identified for the file is displayed in the Status bar. - -When the language for a file is recognized as Ansible, the Ansible extension provides features for creating Ansible Playbooks and task files, such as auto-completion, hover, diagnostics, and goto. - - diff --git a/downstream/modules/devtools/proc-installing-vscode.adoc b/downstream/modules/devtools/proc-installing-vscode.adoc deleted file mode 100644 index a1b40b85a5..0000000000 --- a/downstream/modules/devtools/proc-installing-vscode.adoc +++ /dev/null @@ -1,10 +0,0 @@ -[id="installing-vscode_context"] - -= Installing {VSCode} - -[role="_abstract"] - -VS Code is a free open-source code editor available on Linux, Mac, and Windows. - -To install VS Code, follow the instructions on the link:https://code.visualstudio.com/download[Download Visual Studio Code page] in the Visual Studio Code documentation. 
- diff --git a/downstream/modules/devtools/proc-rhdh-add-custom-configmap.adoc b/downstream/modules/devtools/proc-rhdh-add-custom-configmap.adoc new file mode 100644 index 0000000000..077f33e738 --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-add-custom-configmap.adoc @@ -0,0 +1,11 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-add-custom-configmap_{context}"] += Adding a custom ConfigMap + +Create a {RHDH} ConfigMap following the procedure in the +link:{BaseURL}/openshift_container_platform/{OCPLatest}/html-single/nodes/index#configmaps[Creating and using config maps] section of the {OCPShort} _Nodes_ guide. +The following examples use a custom ConfigMap named `app-config-rhdh`. + +To edit your custom ConfigMap, log in to the OpenShift UI and navigate to menu:Select Project ( developerHubProj )[ConfigMaps > {developer-hub}-app-config > EditConfigMaps > app-config-rhdh]. + diff --git a/downstream/modules/devtools/proc-rhdh-add-devtools-container.adoc b/downstream/modules/devtools/proc-rhdh-add-devtools-container.adoc new file mode 100644 index 0000000000..3b129a1d8e --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-add-devtools-container.adoc @@ -0,0 +1,46 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-add-devtools-container_{context}"] += Adding the Ansible Developer Tools container + +You must update the Helm chart configuration to add an extra container. + +.Procedure + +. Log in to the OpenShift UI. +. Navigate to menu:Helm[developer-hub > Actions > upgrade > Yaml view] to open the Helm chart. +. Update the `extraContainers` section in the YAML file. ++ +Add the following code: ++ +---- +upstream: + backstage: + ... + extraContainers: + - command: + - adt + - server + image: >- + registry.redhat.io/ansible-automation-platform-25/ansible-dev-tools-rhel8:latest + imagePullPolicy: IfNotPresent + name: ansible-devtools-server + ports: + - containerPort: 8000 + ... +---- ++ +[NOTE] +==== +The image pull policy is `imagePullPolicy: IfNotPresent`. +The image is pulled only if it does not already exist on the node. +Update it to `imagePullPolicy: Always` if you always want to use the latest image. +==== +. Click btn:[Upgrade]. + +.Verification + +To verify that the container is running, check the container log: + +image::rhdh-check-devtools-container.png[View container log] + diff --git a/downstream/modules/devtools/proc-rhdh-add-plugin-config.adoc b/downstream/modules/devtools/proc-rhdh-add-plugin-config.adoc new file mode 100644 index 0000000000..64f5a9a2f4 --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-add-plugin-config.adoc @@ -0,0 +1,76 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-add-plugin-config_{context}"] += Adding the Ansible plug-ins configuration + +. In the OpenShift Developer UI, navigate to menu:Helm[developer-hub > Actions > Upgrade > Yaml view]. +. Update the Helm chart configuration to add the dynamic plug-ins in the {RHDH} instance. +Under the `plugins` section in the YAML file, add the dynamic plug-ins that you want to enable. ++ +---- +global: + ... 
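+  # Note added for clarity; these comments are not generated by the chart.
+  # Each entry under 'plugins' below registers one dynamic plug-in: the
+  # 'package' value points to the plug-in registry service, and the
+  # 'integrity' value is the checksum taken from the plug-in's
+  # corresponding .integrity file.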
+ plugins: + - disabled: false + integrity: + package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz' + pluginConfig: + dynamicPlugins: + frontend: + ansible.plugin-backstage-rhaap: + appIcons: + - importName: AnsibleLogo + name: AnsibleLogo + dynamicRoutes: + - importName: AnsiblePage + menuItem: + icon: AnsibleLogo + text: Ansible + path: /ansible + - disabled: false + integrity: + package: >- + http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz + pluginConfig: + dynamicPlugins: + backend: + ansible.plugin-scaffolder-backend-module-backstage-rhaap: null + - disabled: false + integrity: + package: >- + http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz + pluginConfig: + dynamicPlugins: + backend: + ansible.plugin-backstage-rhaap-backend: null +---- +. In the `package` sections, replace `x.y.z` in the plug-in filenames with the correct version numbers for the Ansible plug-ins. +. For each Ansible plug-in, update the integrity values using the corresponding `.integrity` file content. +. Click btn:[Upgrade]. ++ +The developer hub pods restart and the plug-ins are installed. + +.Verification + +To verify that the plug-ins have been installed, open the `install-dynamic-plugin` container logs and check that the Ansible plug-ins are visible in {RHDH}: + +. Open the Developer perspective for the {RHDH} application in the OpenShift Web console. +. Select the *Topology* view. +. Select the {RHDH} deployment pod to open an information pane. +. Select the *Resources* tab of the information pane. +. In the *Pods* section, click *View logs* to open the *Pod details* page. +. In the *Pod details* page, select the *Logs* tab. +. Select `install-dynamic-plugins` from the drop-down list of containers to view the container log. +. In the `install-dynamic-plugin` container logs, search for the Ansible plug-ins. ++ +The following example from the log indicates a successful installation for one of the plug-ins: ++ +----- +=> Successfully installed dynamic plugin http://plugin-registry-1:8080/ansible-plugin-backstage-rhaap-dynamic-1.1.0.tgz +----- ++ +The following image shows the container log in the *Pod details* page. +The version numbers and file names can differ. ++ +image::rhdh-check-plugin-config.png[container logs for install-dynamic-plugin] + diff --git a/downstream/modules/devtools/proc-rhdh-add-plugin-software-templates.adoc b/downstream/modules/devtools/proc-rhdh-add-plugin-software-templates.adoc new file mode 100644 index 0000000000..e6e8592681 --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-add-plugin-software-templates.adoc @@ -0,0 +1,28 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-add-plugin-software-templates_{context}"] += Adding Ansible plug-ins software templates + +Red Hat Ansible provides software templates for {RHDH} to provision new playbooks and collection projects based on Ansible best practices. + +.Procedure + +. Edit your custom {RHDH} config map, for example `app-config-rhdh`. +. Add the following code to your {RHDH} `app-config-rhdh.yaml` file. +---- +data: + app-config-rhdh.yaml: | + catalog: + ... + locations: + ... 
+ - type: url + target: https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml + rules: + - allow: [Template] +---- + +For more information, refer to the +link:{BaseURL}/red_hat_developer_hub/1.2/html-single/administration_guide_for_red_hat_developer_hub/assembly-admin-templates#assembly-admin-templates[Managing templates] +section of the _Administration guide for Red Hat Developer Hub_. + diff --git a/downstream/modules/devtools/proc-rhdh-add-pull-secret-helm.adoc b/downstream/modules/devtools/proc-rhdh-add-pull-secret-helm.adoc new file mode 100644 index 0000000000..8b17784177 --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-add-pull-secret-helm.adoc @@ -0,0 +1,30 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-add-pull-secret-helm_{context}"] += Adding a pull secret to the {RHDH} Helm configuration + +.Prerequisite + +The Ansible Development Container download requires a Red Hat Customer Portal account and Red Hat Service Registry account. + +.Procedure + +. Create a new link:https://access.redhat.com/terms-based-registry/[Red Hat Registry Service account], if required. +. Click the token name under the *Account name* column. +. Select the *OpenShift Secret* tab and follow the instructions to add the pull secret to your {RHDH} OpenShift project. +. Add the new secret to the {RHDH} Helm configuration, replacing `` with the name of the secret you generated on the Red Hat Registry Service Account website: ++ +---- +upstream: + backstage: + ... + image: + ... + pullSecrets: + - + ... + +---- + +For more information, refer to the link:https://access.redhat.com/RegistryAuthentication[Red Hat Container Registry documentation]. + diff --git a/downstream/modules/devtools/proc-rhdh-configure-aap-details.adoc b/downstream/modules/devtools/proc-rhdh-configure-aap-details.adoc new file mode 100644 index 0000000000..9e566fbc7e --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-configure-aap-details.adoc @@ -0,0 +1,44 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-configure-aap-details_{context}"] += Configuring Ansible Automation Platform details + +The Ansible plug-ins query your {PlatformNameShort} subscription status with the controller API using a token. + +[NOTE] +==== +The Ansible plug-ins continue to function regardless of the {PlatformNameShort} subscription status. +==== + +.Procedure + +. Create a Personal Access Token (PAT) with “Read” scope in automation controller, following the +link:{URLCentralAuth}/gw-token-based-authentication#assembly-controller-applications[Applications] +section of _{TitleCentralAuth}_. +. Edit your custom {RHDH} config map, for example `app-config-rhdh`. +. Add your {PlatformNameShort} details to `app-config-rhdh.yaml`. +.. Set the `baseURL` key with your automation controller URL. +.. Set the `token` key with the generated token value that you created in Step 1. +.. Set the `checkSSL` key to `true` or `false`. ++ +If `checkSSL` is set to `true`, the Ansible plug-ins verify whether the SSL certificate is valid. ++ +---- +data: + app-config-rhdh.yaml: | + ... + ansible: + ... + rhaap: + baseUrl: '' + token: '' + checkSSL: true +---- + +[NOTE] +==== +You are responsible for protecting your {RHDH} installation from external and unauthorized access. +Manage the backend authentication key like any other secret. +Meet strong password requirements, do not expose it in any configuration files, and only inject it into configuration files as an environment variable. 
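+
+For example, you can reference an environment variable in the configuration instead of a literal token value.
+The variable name `AAP_TOKEN` below is an illustration only, not a name that the plug-ins require:
+
+----
+ansible:
+  rhaap:
+    baseUrl: '<https://MyControllerUrl>'
+    token: ${AAP_TOKEN}
+    checkSSL: true
+----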
+====
+
diff --git a/downstream/modules/devtools/proc-rhdh-configure-devspaces.adoc b/downstream/modules/devtools/proc-rhdh-configure-devspaces.adoc
new file mode 100644
index 0000000000..3904f731e8
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-configure-devspaces.adoc
@@ -0,0 +1,44 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-configure-devspaces_{context}"]
+= Configuring OpenShift Dev Spaces
+
+When OpenShift Dev Spaces is configured for the Ansible plug-ins, users can click a link from the catalog item view in {RHDH} and edit their provisioned Ansible Git projects using Dev Spaces.
+
+[NOTE]
+====
+OpenShift Dev Spaces is optional; the plug-ins function without it.
+It is a separate Red Hat product and is not included in the {PlatformNameShort} or {RHDH} subscription.
+====
+
+If the OpenShift Dev Spaces link is not configured in the Ansible plug-ins,
+the *Go to OpenShift Dev Spaces dashboard* link in the *DEVELOP* section of the Ansible plug-ins landing page redirects users to the
+link:https://www.redhat.com/en/technologies/management/ansible/development-tools[Ansible development tools home page].
+
+.Prerequisites
+
+* A Dev Spaces installation.
+Refer to the
+link:{BaseURL}/red_hat_openshift_dev_spaces/3.14/html-single/administration_guide/installing-devspaces[Installing Dev Spaces]
+section of the _Red Hat OpenShift Dev Spaces Administration guide_.
+
+.Procedure
+
+. Edit your custom {RHDH} config map, for example `app-config-rhdh`.
+. Add the following code to your {RHDH} `app-config-rhdh.yaml` file.
++
+----
+data:
+  app-config-rhdh.yaml: |-
+    ansible:
+      devSpaces:
+        baseUrl: >-
+          https://<your-devspaces-url>
+----
+. Replace `<your-devspaces-url>` with your OpenShift Dev Spaces URL.
+. In the OpenShift Developer UI, select the `Red Hat Developer Hub` pod.
+. Open *Actions*.
+. Click *Restart rollout*.
+
diff --git a/downstream/modules/devtools/proc-rhdh-configure-devtools-server.adoc b/downstream/modules/devtools/proc-rhdh-configure-devtools-server.adoc
new file mode 100644
index 0000000000..81ce7f9044
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-configure-devtools-server.adoc
@@ -0,0 +1,29 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-configure-devtools-server_{context}"]
+= Configuring the Ansible Dev Tools Server
+
+The `creatorService` URL is required for the Ansible plug-ins to provision new projects using the provided software templates.
+
+.Procedure
+
+. Edit your custom {RHDH} config map, `app-config-rhdh`, that you created in
+xref:rhdh-add-custom-configmap_rhdh-ocp-required-installation[Adding a custom ConfigMap].
+. Add the following code to your {RHDH} `app-config-rhdh.yaml` file.
++
+----
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: app-config-rhdh
+...
+data:
+  app-config-rhdh.yaml: |-
+    ansible:
+      creatorService:
+        baseUrl: 127.0.0.1
+        port: '8000'
+...
+
+----
+
diff --git a/downstream/modules/devtools/proc-rhdh-configure-optional-integrations.adoc b/downstream/modules/devtools/proc-rhdh-configure-optional-integrations.adoc
new file mode 100644
index 0000000000..62ded10c5f
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-configure-optional-integrations.adoc
@@ -0,0 +1,13 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-configure-optional-integrations_{context}"]
+= Configuring Ansible plug-ins optional integrations
+
+The Ansible plug-ins provide integrations with {PlatformNameShort} and other optional Red Hat products.
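+
+For orientation, all of the integration settings described in the following sections live under the `ansible` key of the custom config map.
+The following is a consolidated sketch of that shape; the URLs and token are placeholder values, and each key is explained in its own section:
+
+----
+data:
+  app-config-rhdh.yaml: |-
+    ansible:
+      creatorService:
+        baseUrl: 127.0.0.1
+        port: '8000'
+      rhaap:
+        baseUrl: '<https://your-controller-url>'
+        token: '<personal-access-token>'
+        checkSSL: true
+      devSpaces:
+        baseUrl: '<https://your-devspaces-url>'
+      automationHub:
+        baseUrl: '<https://your-private-hub-url>'
+----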
+
+// Create a custom ConfigMap called `app-config-rhdh` as outlined in the
+// link:{BaseURL}/red_hat_developer_hub/1.2/html-single/administration_guide_for_red_hat_developer_hub/assembly-install-rhdh-ocp#proc-add-custom-app-file-openshift-helm_assembly-install-rhdh-ocp[Adding a custom application configuration file to OpenShift Container Platform using the Helm chart] of the _Administration guide for Red Hat Developer Hub_.
+//
+
+To edit your custom ConfigMap, log in to the OpenShift UI and navigate to menu:Select Project (developerHubProj)[ConfigMaps > {developer-hub}-app-config-rhdh > app-config-rhdh].
+
diff --git a/downstream/modules/devtools/proc-rhdh-configure-pah-url.adoc b/downstream/modules/devtools/proc-rhdh-configure-pah-url.adoc
new file mode 100644
index 0000000000..218af264d3
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-configure-pah-url.adoc
@@ -0,0 +1,42 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-configure-pah-url_{context}"]
+= Configuring the private automation hub URL
+
+{PrivateHubNameStart} provides a centralized, on-premise repository for certified Ansible collections, execution environments, and any additional vetted content provided by your organization.
+
+If the {PrivateHubName} URL is not configured in the Ansible plug-ins, users are redirected to the
+link:https://console.redhat.com/ansible/automation-hub[Red Hat Hybrid Cloud Console automation hub].
+
+[NOTE]
+====
+The {PrivateHubName} configuration is optional but recommended.
+The Ansible plug-ins function without it.
+====
+
+.Prerequisites
+* A {PrivateHubName} instance.
++
+For more information on installing {PrivateHubName}, refer to the installation guides in the
+link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}[{PlatformNameShort} documentation].
+
+.Procedure
+
+. Edit your custom {RHDH} config map, for example `app-config-rhdh`.
+. Add the following code to your {RHDH} `app-config-rhdh.yaml` file.
++
+----
+data:
+  app-config-rhdh.yaml: |-
+    ansible:
+      ...
+      automationHub:
+        baseUrl: '<https://MyOwnPAHUrl/>'
+      ...
+
+----
+. Replace `<\https://MyOwnPAHUrl/>` with your {PrivateHubName} URL.
+. In the OpenShift Developer UI, select the `Red Hat Developer Hub` pod.
+. Open *Actions*.
+. Click *Restart rollout*.
+
diff --git a/downstream/modules/devtools/proc-rhdh-configure-rbac.adoc b/downstream/modules/devtools/proc-rhdh-configure-rbac.adoc
new file mode 100644
index 0000000000..820d358f13
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-configure-rbac.adoc
@@ -0,0 +1,35 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-configure-rbac_{context}"]
+= Configuring Role Based Access Control
+
+{RHDH} offers Role-based Access Control (RBAC) functionality, which can be applied to the Ansible plug-ins content.
+
+Assign the following roles:
+
+* Members of the `admin:superUsers` group can select templates in the *Create* tab of the Ansible plug-ins to create playbook and collection projects.
+* Members of the `admin:users` group can view templates in the *Create* tab of the Ansible plug-ins.
+
+The following example adds RBAC to {RHDH}.
+
+----
+data:
+  app-config-rhdh.yaml: |
+    plugins:
+      ...
+    permission:
+      enabled: true
+      rbac:
+        admin:
+          users:
+            - name: user:default/<username>
+          superUsers:
+            - name: user:default/<username>
+----
+
+
+For more information about permission policies and managing RBAC, refer to the
+link:{BaseURL}/red_hat_developer_hub/{RHDHVers}/html-single/authorization/index[_Authorization_]
+guide for Red Hat Developer Hub.
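+
+As an illustrative sketch, assuming hypothetical accounts where `alice` and `bob` should only view templates while `carol` can also create projects, the same block can list several entity references:
+
+----
+permission:
+  enabled: true
+  rbac:
+    admin:
+      users:
+        - name: user:default/alice   # hypothetical viewer accounts
+        - name: user:default/bob
+      superUsers:
+        - name: user:default/carol   # hypothetical creator account
+----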
+
diff --git a/downstream/modules/devtools/proc-rhdh-configure-showcase-location.adoc b/downstream/modules/devtools/proc-rhdh-configure-showcase-location.adoc
new file mode 100644
index 0000000000..645e235780
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-configure-showcase-location.adoc
@@ -0,0 +1,34 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-configure-showcase-location_{context}"]
+= Configuring `showCaseLocation`
+
+You must configure `showCaseLocation` in your custom config map.
+
+// Prevents the following error:
+// cause: Error: Missing required config value at 'ansible.rhaap.showCaseLocation.type' in 'env'
+
+.Procedure
+
+. Edit your custom {RHDH} config map, `app-config-rhdh`, that you created in
+xref:rhdh-add-custom-configmap_rhdh-ocp-required-installation[Adding a custom ConfigMap].
+. Add the following code to your {RHDH} `app-config-rhdh.yaml` file.
++
+----
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: app-config-rhdh
+...
+data:
+  app-config-rhdh.yaml: |-
+    ansible:
+      rhaap:
+        ...
+        showCaseLocation:
+          type: file
+          target: '/tmp/aap-showcases/'
+...
+
+----
+
diff --git a/downstream/modules/devtools/proc-rhdh-create-plugin-registry.adoc b/downstream/modules/devtools/proc-rhdh-create-plugin-registry.adoc
new file mode 100644
index 0000000000..44e7855e36
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-create-plugin-registry.adoc
@@ -0,0 +1,45 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-create-plugin-registry_{context}"]
+= Creating a registry for the {AAPRHDHShort}
+
+Set up a registry in your OpenShift cluster to host the {AAPRHDHShort} and make them available for installation in {RHDH} ({RHDHShort}).
+
+.Procedure
+
+. Log in to your {OCPShort} instance with credentials to create a new application.
+. Open your {RHDH} OpenShift project.
++
+----
+$ oc project <your-rhdh-project>
+----
+. Run the following commands to create a plug-in registry build in the OpenShift cluster.
++
+----
+$ oc new-build httpd --name=plugin-registry --binary
+$ oc start-build plugin-registry --from-dir=$DYNAMIC_PLUGIN_ROOT_DIR --wait
+$ oc new-app --image-stream=plugin-registry
+----
+
+.Verification
+
+To verify that the plugin-registry was deployed successfully, open the *Topology* view in the *Developer* perspective on the {RHDH} application in the OpenShift Web console.
+
+. Click the plug-in registry to view the log.
++
+image::rhdh-plugin-registry.png[Developer perspective]
++
+(1) Developer hub instance
++
+(2) Plug-in registry
+. Click the terminal tab and log in to the container.
+. In the terminal, run `ls` to confirm that the `.tgz` plug-in files are in the plug-in registry.
++
+----
+ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz
+ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz
+ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz
+----
++
+The version numbers and file names can differ.
+
diff --git a/downstream/modules/devtools/proc-rhdh-create.adoc b/downstream/modules/devtools/proc-rhdh-create.adoc
new file mode 100644
index 0000000000..a26d4155d0
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-create.adoc
@@ -0,0 +1,60 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-create_{context}"]
+= Creating a project
+
+.Prerequisite
+
+* Ensure you have the correct access (RBAC) to view the templates in {RHDH}.
+Ask your administrator to assign access to you if necessary.
+
+.Procedure
+
+. Log in to your {RHDH} UI.
+. Click the Ansible `A` icon in the {RHDH} navigation panel.
+. Navigate to the *Overview* page.
+. Click *Create*.
+. Click *Create Ansible Git Project*. The *Available Templates* page opens.
+. Click *Create Ansible Playbook project*.
+. In the *Create Ansible Playbook Project* page, enter information for your new project in the form.
++
+You can see sample values for this form in the Example project.
++
+[options="header"]
+|===
+|Field |Description
+|Source code repository organization name or username
+|The name of your source code repository username or organization name
+|Playbook repository name
+|The name of your new Git repository
+|Playbook description (Optional)
+|A description of the new playbook project
+|Playbook project's collection namespace
+|The new playbook Git project creates an example collection folder for you.
+Enter a value for the collection namespace.
+|Playbook project's collection name
+|The name of the collection
+|Catalog Owner Name
+|The name of the Developer Hub catalog item owner.
+This is a Red Hat Developer Hub field.
+|System (Optional)
+|This is a {RHDH} field
+|===
++
+[NOTE]
+====
+Collection namespaces must follow Python module naming conventions.
+Collections must have short, all lowercase names.
+You can use underscores in the collection name if it improves readability.
+
+For more information, see the link:https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_in_groups.html#naming-conventions[Ansible Collection naming conventions documentation].
+====
+. Click *Review*.
+
diff --git a/downstream/modules/devtools/proc-rhdh-develop-projects-devspaces.adoc b/downstream/modules/devtools/proc-rhdh-develop-projects-devspaces.adoc
new file mode 100644
index 0000000000..3c2af0cc61
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-develop-projects-devspaces.adoc
@@ -0,0 +1,14 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-develop-projects-devspaces_{context}"]
+= Developing projects on Dev Spaces
+
+link:https://access.redhat.com/products/red-hat-openshift-dev-spaces[OpenShift Dev Spaces]
+is not included with your {PlatformNameShort} subscription or the {AAPRHDH}.
+
+The plug-ins provide context-aware links to edit your project in Dev Spaces.
+
+The Dev Spaces instance provides a default configuration that installs the Ansible VS Code extension and provides the Ansible command line tools.
+You can activate Ansible Lightspeed in the Ansible VS Code extension. For more information, refer to the
+link:{BaseURL}/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html-single/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_user_guide/index[Red Hat Ansible Lightspeed with IBM watsonx Code Assistant User Guide].
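+
+For illustration only, a minimal `devfile.yaml` of the kind included in provisioned projects might look like the following sketch.
+The component name and image are assumptions; the file generated for your project may differ:
+
+----
+schemaVersion: 2.2.0
+metadata:
+  name: ansible-demo
+components:
+  - name: tooling-container
+    container:
+      image: ghcr.io/ansible/community-ansible-dev-tools:latest  # provides the Ansible command line tools
+      memoryRequest: 256M
+----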
+
diff --git a/downstream/modules/devtools/proc-rhdh-develop-projects.adoc b/downstream/modules/devtools/proc-rhdh-develop-projects.adoc
new file mode 100644
index 0000000000..88b82333df
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-develop-projects.adoc
@@ -0,0 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-develop-projects_{context}"]
+= Developing projects
+
diff --git a/downstream/modules/devtools/proc-rhdh-devtools-sidecar.adoc b/downstream/modules/devtools/proc-rhdh-devtools-sidecar.adoc
new file mode 100644
index 0000000000..a8ef90fb61
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-devtools-sidecar.adoc
@@ -0,0 +1,7 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-devtools-sidecar_{context}"]
+= Adding the Ansible Development Tools sidecar container
+
+After the plug-ins are loaded, add the Ansible Development Container (`ansible-devtools-server`) to the {RHDH} pod as a sidecar container.
+
diff --git a/downstream/modules/devtools/proc-rhdh-download-plugins.adoc b/downstream/modules/devtools/proc-rhdh-download-plugins.adoc
new file mode 100644
index 0000000000..dd63db0a22
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-download-plugins.adoc
@@ -0,0 +1,45 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-download-plugins_{context}"]
+= Downloading the Ansible plug-ins files
+
+.Procedure
+
+. Download the latest `.tar` file for the plug-ins from the link:{PlatformDownloadUrl}[Red Hat Ansible Automation Platform Product Software downloads page].
+The format of the filename is `ansible-backstage-rhaap-bundle-x.y.z.tar.gz`.
+Substitute the Ansible plug-ins release version, for example `1.0.0`, for `x.y.z`.
+. Create a directory on your local machine to store the `.tar` files.
++
+----
+$ mkdir /path/to/<ansible-backstage-plugins-local-dir-name>
+----
+. Set an environment variable (`$DYNAMIC_PLUGIN_ROOT_DIR`) to represent the directory path.
++
+----
+$ export DYNAMIC_PLUGIN_ROOT_DIR=/path/to/<ansible-backstage-plugins-local-dir-name>
+----
+. Extract the `ansible-backstage-rhaap-bundle-x.y.z.tar.gz` contents to `$DYNAMIC_PLUGIN_ROOT_DIR`.
++
+----
+$ tar --exclude='*code*' -xzf ansible-backstage-rhaap-bundle-x.y.z.tar.gz -C $DYNAMIC_PLUGIN_ROOT_DIR
+----
++
+Substitute the Ansible plug-ins release version, for example `1.0.0`, for `x.y.z`.
+
+.Verification
+
+Run `ls` to verify that the extracted files are in the `$DYNAMIC_PLUGIN_ROOT_DIR` directory:
+
+----
+$ ls $DYNAMIC_PLUGIN_ROOT_DIR
+ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz
+ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz.integrity
+ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz
+ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz.integrity
+ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz
+ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz.integrity
+
+----
+
+The files with the `.integrity` file type contain the plug-in SHA value, which is used during the plug-in configuration.
+
diff --git a/downstream/modules/devtools/proc-rhdh-enable-rhdh-authentication.adoc b/downstream/modules/devtools/proc-rhdh-enable-rhdh-authentication.adoc
new file mode 100644
index 0000000000..c461aefbf6
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-enable-rhdh-authentication.adoc
@@ -0,0 +1,12 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-enable-rhdh-authentication_{context}"]
+= Enabling {RHDH} authentication
+
+{RHDH} (RHDH) provides integrations for multiple Source Control Management (SCM) systems.
+The plug-ins require these integrations to create repositories.
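+
+For example, a GitHub integration is declared under the `integrations` key of the app-config.
+This is a minimal sketch; the token value is a placeholder and should be supplied through a secret rather than written in plain text:
+
+----
+integrations:
+  github:
+    - host: github.com
+      token: ${GITHUB_TOKEN} # resolved from an environment variable
+----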
+ +Refer to the +link:{BaseURL}/red_hat_developer_hub/1.2/html-single/administration_guide_for_red_hat_developer_hub/index#enabling-authentication[Enabling authentication in Red Hat Developer Hub] +chapter of the _Administration guide for Red Hat Developer Hub_. + diff --git a/downstream/modules/devtools/proc-rhdh-execute-automation-devspaces.adoc b/downstream/modules/devtools/proc-rhdh-execute-automation-devspaces.adoc new file mode 100644 index 0000000000..1f388c8245 --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-execute-automation-devspaces.adoc @@ -0,0 +1,12 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-execute-automation-devspaces_{context}"] += Executing automation tasks in Dev Spaces + +The default configuration for Dev Spaces provides access to the Ansible command line tools. + +To execute an automation task in Dev Spaces from the VSCode user interface, +right-click a playbook name in the list of files and select *Run Ansible Playbook via ansible-navigator run* or *Run playbook via ansible-playbook*. + +image::rhdh-vscode-run-playbook.png[Run a playbook from VS Code] + diff --git a/downstream/modules/devtools/proc-rhdh-firewall-example-create-playbook.adoc b/downstream/modules/devtools/proc-rhdh-firewall-example-create-playbook.adoc new file mode 100644 index 0000000000..b19d16cfd4 --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-firewall-example-create-playbook.adoc @@ -0,0 +1,51 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-firewall-example-create-playbook_{context}"] += Create a new playbook project to configure a firewall + +Use the Ansible plug-ins to create a new Ansible Playbook project. + +. Click the Ansible `A` icon in the {RHDH} navigation panel. +. From the *Create* dropdown menu on the landing page, select *Create Ansible Git Project*. +. Click *Choose* in the *Create Ansible Playbook Project* software template. +. Fill in the following information in the *Create Ansible Playbook Project* page: + +[cols="3,1,3,3" options="header"] +|=== +|Field |Required |Description |Example value +|Source code repository organization name or username +|Yes +|The name of your source code repository username or organization name. +|`my_github_username` +|Playbook repository name +|Yes +|The name of your new Git repository. +|`rhel_firewall_config` +|Playbook description +|No +|A description of the new playbook project. +|`This playbook configures firewalls on Red Hat Enterprise Linux systems` +|Playbook project's collection namespace +|Yes +|The new playbook Git project creates an example collection folder for you. +Enter a value for the collection namespace. +|`my_galaxy_username` +|Playbook project's collection name +|Yes +|This is the name of the example collection. +|`rhel_firewall_config` +|Catalog Owner Name +|Yes +|The name of the Developer Hub catalog item owner. It is a Red Hat Developer Hub field. +|`my_rhdh_username` +|System +|No +|This is a Red Hat Developer Hub field. +|`my_rhdh_linux_system` +|=== + +[start=5] +. Click *Review*. +. Click *Create* to provision your new playbook project. +. Click *Open in catalog* to view your project. 
+
diff --git a/downstream/modules/devtools/proc-rhdh-firewall-example-discover.adoc b/downstream/modules/devtools/proc-rhdh-firewall-example-discover.adoc
new file mode 100644
index 0000000000..03eec004a3
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-firewall-example-discover.adoc
@@ -0,0 +1,27 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-firewall-example-discover_{context}"]
+= Discovering existing Ansible content for RHEL system roles
+
+Red Hat recommends that you use trusted automation content that has been tested and approved by Red Hat or your organization.
+
+{HubNameStart} is a central repository for discovering, downloading, and managing trusted content collections from Red Hat and its partners.
+{PrivateHubNameStart} provides an on-premise solution for managing content collections.
+
+. Click the Ansible `A` icon in the {RHDH} navigation panel.
+. Click *Discover existing collections*.
+. Click *Go to Automation Hub*.
++
+--
+** If {PrivateHubName} has been configured in the Ansible plug-ins, you are redirected to your {PrivateHubName} instance.
+** If {PrivateHubName} has not been configured in the Ansible plug-ins installation configuration,
+you are redirected to the Red Hat Hybrid Cloud Console (RHCC) automation hub.
+--
+In this example, you are redirected to the RHCC automation hub.
+. If you are prompted to log in, provide your Red Hat Customer Portal credentials.
+. Filter the collections with the `rhel firewall` keywords.
++
+The search returns the `rhel_system_roles` collection.
+
+The RHEL System Roles collection contains certified Ansible content that you can reuse to configure your firewall.
+
diff --git a/downstream/modules/devtools/proc-rhdh-firewall-example-edit.adoc b/downstream/modules/devtools/proc-rhdh-firewall-example-edit.adoc
new file mode 100644
index 0000000000..f4c43e22d4
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-firewall-example-edit.adoc
@@ -0,0 +1,27 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-firewall-example-edit_{context}"]
+= Editing your firewall playbook project
+
+The Ansible plug-ins integrate with OpenShift Dev Spaces to edit your Ansible projects.
+OpenShift Dev Spaces provides on-demand, web-based Integrated Development Environments (IDEs).
+
+Ansible Git projects provisioned using the Ansible plug-ins include best practice configurations for OpenShift Dev Spaces.
+These configurations include installing the Ansible VS Code extension and providing access from the IDE terminal to Ansible development tools,
+such as Ansible Navigator and Ansible Lint.
+
+[NOTE]
+====
+OpenShift Dev Spaces is optional and is not required to run the Ansible plug-ins.
+It is a separate Red Hat product and is not included in the {PlatformNameShort} or {RHDH} subscription.
+====
+
+This example assumes that OpenShift Dev Spaces has been configured in the Ansible plug-ins installation.
+
+.Procedure
+
+* In the *catalog item* view of your playbook project, click *Open Ansible project in OpenShift Dev Spaces*.
++
+A VS Code instance of OpenShift Dev Spaces opens in a new browser tab.
+It automatically loads your new Ansible Playbook Git project.
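+
+As a quick check, you can confirm that the Ansible development tools are available in the Dev Spaces IDE terminal.
+The commands below are illustrative; the exact versions in the output will differ:
+
+----
+$ ansible-navigator --version
+$ ansible-lint --version
+----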
+ diff --git a/downstream/modules/devtools/proc-rhdh-firewall-example-learn.adoc b/downstream/modules/devtools/proc-rhdh-firewall-example-learn.adoc new file mode 100644 index 0000000000..f9e350dbaa --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-firewall-example-learn.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-firewall-example-learn_{context}"] += Learning more about playbooks + +The first step is to learn more about Ansible playbooks using the available learning paths. + +. Click the Ansible `A` icon in the {RHDH} navigation panel. +. Click *Learn* and select the *Getting Started with Ansible Playbooks* learning path. +This redirects you to the Red Hat Developer website. +. If you are prompted to log in, create a Red Hat Developer account, or enter your details. +. Complete the learning path. + diff --git a/downstream/modules/devtools/proc-rhdh-firewall-example-new-playbook.adoc b/downstream/modules/devtools/proc-rhdh-firewall-example-new-playbook.adoc new file mode 100644 index 0000000000..9dd295201c --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-firewall-example-new-playbook.adoc @@ -0,0 +1,41 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-firewall-example-new-playbook_{context}"] += Creating a new playbook to automate the firewall configuration + +Create a new playbook and use the RHEL System Role collection to automate your {RHEL} firewall configuration. + +. In your Dev Spaces instance, click menu:File[New File]. +. Enter `firewall.yml` for the filename and click *OK* to save it in the root directory. +. Add the following lines to your `firewall.yml` file: ++ +---- +--- +- name: Open HTTPS and SSH on firewall + hosts: rhel + become: true + tasks: + - name: Use rhel system roles to allow https and ssh traffic + vars: + firewall: + - service: https + state: enabled + permanent: true + immediate: true + zone: public + - service: ssh + state: enabled + permanent: true + immediate: true + zone: public + ansible.builtin.include_role: + name: redhat.rhel_system_roles.firewall +---- + +[NOTE] +==== +You can use Ansible Lightspeed with IBM watsonx Code Assistant from the Ansible VS Code extension to help you generate playbooks. +For more information, refer to the +link:{BaseURL}/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html-single/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_user_guide/index[Ansible Lightspeed with IBM watsonx Code Assistant User Guide]. +==== + diff --git a/downstream/modules/devtools/proc-rhdh-install-dynamic-plugins-operator.adoc b/downstream/modules/devtools/proc-rhdh-install-dynamic-plugins-operator.adoc new file mode 100644 index 0000000000..7f2c35d849 --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-install-dynamic-plugins-operator.adoc @@ -0,0 +1,80 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-install-dynamic-plugins-operator_{context}"] += Installing the dynamic plug-ins + +To install the dynamic plugins, add them to your ConfigMap for your {RHDHShort} plugin settings (for example, `rhaap-dynamic-plugins-config`). + +If you have not already created a ConfigMap file for your {RHDHShort} plugin settings, +create one by following the procedure in the +link:{BaseURL}/openshift_container_platform/{OCPLatest}/html-single/nodes/index#configmaps[Creating and using config maps] section of the {OCPShort} _Nodes_ guide. + +The example ConfigMap used in the following procedure is called `rhaap-dynamic-plugins-config`. + +.Procedure + +. 
Select *ConfigMaps* in the navigation pane of the OpenShift console. +. Select the `rhaap-dynamic-plugins-config` ConfigMap from the list. +. Select the *YAML* tab to edit the `rhaap-dynamic-plugins-config` ConfigMap. +. In the `data.dynamic-plugins.yaml.plugins` block, add the three dynamic plug-ins from the plug-in registry. +** For the `integrity` hash values, use the `.integrity` files in your `$DYNAMIC_PLUGIN_ROOT_DIR` directory that correspond to each plug-in, for example use `ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz.integrity` for the `ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz` plug-in. +** Replace `x.y.z` with the correct version of the plug-ins. ++ +---- +kind: ConfigMap +apiVersion: v1 +metadata: + name: rhaap-dynamic-plugins-config +data: + dynamic-plugins.yaml: | + ... + plugins: + - disabled: false + package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz' + integrity: # Use hash in ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz.integrity + pluginConfig: + dynamicPlugins: + frontend: + ansible.plugin-backstage-rhaap: + appIcons: + - importName: AnsibleLogo + name: AnsibleLogo + dynamicRoutes: + - importName: AnsiblePage + menuItem: + icon: AnsibleLogo + text: Ansible + path: /ansible + - disabled: false + package: >- + http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz + integrity: # Use hash in ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz.integrity + pluginConfig: + dynamicPlugins: + backend: + ansible.plugin-backstage-rhaap-backend: null + - disabled: false + package: >- + http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz + integrity: # Use hash in ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz.integrity + pluginConfig: + dynamicPlugins: + backend: + ansible.plugin-scaffolder-backend-module-backstage-rhaap: null + - ... + +---- +. Click btn:[Save]. +. To view the progress of the rolling restart: +.. In the *Topology* view, select the deployment pod and click *View logs*. +.. Select `install-dynamic-plugins` from the list of containers. + +.Verification + +. In the OpenShift console, select the *Topology* view. +. Click the *Open URL* icon on the deployment pod to open your {RHDH} instance in a browser window. + +The Ansible plug-in is present in the navigation pane, and if you select *Administration*, +the installed plug-ins are listed in the *Plugins* tab. + + diff --git a/downstream/modules/devtools/proc-rhdh-operator-add-sidecar-container.adoc b/downstream/modules/devtools/proc-rhdh-operator-add-sidecar-container.adoc new file mode 100644 index 0000000000..9da60a4396 --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-operator-add-sidecar-container.adoc @@ -0,0 +1,55 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-operator-add-sidecar-container_{context}"] += Adding a sidecar container for {ToolsName} to the {RHDHShort} Operator Custom Resource + +Add a sidecar container for {ToolsName} in the Developer Hub pod. +To do this, you must modify the base ConfigMap for the {RHDH} deployment. + +. In the OpenShift console, select the *Topology* view. +. Click *More actions {MoreActionsIcon}* on the developer-hub instance and select *Edit backstage* to open the *Backstage details* page. +. Select the *YAML* tab. +. 
In the editing pane, add a `containers` block in the `spec.deployment.patch.spec.template.spec` block:
++
+----
+apiVersion: rhdh.redhat.com/v1alpha3
+kind: Backstage
+metadata:
+  name: developer-hub
+spec:
+  deployment:
+    patch:
+      spec:
+        template:
+          spec:
+            containers:
+              - command:
+                  - adt
+                  - server
+                image: registry.redhat.io/ansible-automation-platform-25/ansible-dev-tools-rhel8:latest
+                imagePullPolicy: Always
+                ports:
+                  - containerPort: 8000
+                    protocol: TCP
+                terminationMessagePolicy: File
+----
+. Click btn:[Save].
+
+[NOTE]
+====
+If you want to add extra environment variables to your deployment,
+you can add them in the `spec.application.extraEnvs` block:
+
+----
+spec:
+  application:
+    ...
+    extraEnvs:
+      envs:
+        - name: <variable-name>
+          value: <value>
+
+----
+
+====
+
diff --git a/downstream/modules/devtools/proc-rhdh-set-up-controller-project.adoc b/downstream/modules/devtools/proc-rhdh-set-up-controller-project.adoc
new file mode 100644
index 0000000000..7a10700f40
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-set-up-controller-project.adoc
@@ -0,0 +1,26 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-set-up-controller-project_{context}"]
+= Setting up a controller project to run your playbook project
+
+The Ansible plug-ins provide a link to {PlatformNameShort}.
+
+.Procedure
+
+. Log in to your {RHDH} UI.
+. Click the Ansible `A` icon in the {RHDH} navigation panel.
+. Click *Operate* to display a link to your {PlatformNameShort} instance.
++
+If {ControllerName} was not included in your plug-in installation, a link to the product feature page is displayed.
+. Click *Go to {PlatformNameShort}* to open your platform instance in a new browser tab.
++
+Alternatively, if your platform instance was not configured during the Ansible plug-in installation, navigate to your {ControllerName} instance in a browser and log in.
+. Log in to {ControllerName}.
+. Create a project in {PlatformNameShort} for the GitHub repository where you stored your playbook project.
+Refer to the
+link:{URLControllerUserGuide}/controller-projects[Projects]
+chapter of _{TitleControllerUserGuide}_.
+. Create a job template that uses a playbook from the project that you created.
+Refer to the
+link:{URLControllerUserGuide}/controller-workflow-job-templates[Workflow job templates]
+chapter of _{TitleControllerUserGuide}_.
+
diff --git a/downstream/modules/devtools/proc-rhdh-uninstall-ocp-helm.adoc b/downstream/modules/devtools/proc-rhdh-uninstall-ocp-helm.adoc
new file mode 100644
index 0000000000..a6637a61cc
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-uninstall-ocp-helm.adoc
@@ -0,0 +1,103 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-uninstall-ocp-helm_{context}"]
+= Uninstalling a Helm chart installation
+
+.Procedure
+
+. In {RHDH}, remove any software templates that use the `ansible:content:create` action.
+. In the OpenShift Developer UI, navigate to menu:Helm[developer-hub > Actions > Upgrade > Yaml view].
+. Remove the Ansible plug-ins configuration under the `plugins` section.
++
+----
+...
+global:
+...
+ plugins: + - disabled: false + integrity: + package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz' + pluginConfig: + dynamicPlugins: + frontend: + ansible.plugin-backstage-rhaap: + appIcons: + - importName: AnsibleLogo + name: AnsibleLogo + dynamicRoutes: + - importName: AnsiblePage + menuItem: + icon: AnsibleLogo + text: Ansible + path: /ansible + - disabled: false + integrity: + package: >- + http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz + pluginConfig: + dynamicPlugins: + backend: + ansible.plugin-scaffolder-backend-module-backstage-rhaap: null + - disabled: false + integrity: + package: >- + http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz + pluginConfig: + dynamicPlugins: + backend: + ansible.plugin-backstage-rhaap-backend: null +---- +. Remove the `extraContainers` section. ++ +---- +upstream: + backstage: | + ... + extraContainers: + - command: + - adt + - server + image: >- + registry.redhat.io/ansible-automation-platform-25/ansible-dev-tools-rhel8:latest + imagePullPolicy: IfNotPresent + name: ansible-devtools-server + ports: + - containerPort: 8000 + image: + pullPolicy: Always + pullSecrets: + - ... + - rhdh-secret-registry + ... +---- +. Click btn:[Upgrade]. +. Edit your custom {RHDH} config map, for example `app-config-rhdh`. +. Remove the `ansible` section. ++ +---- +data: + app-config-rhdh.yaml: | + ... + ansible: + analytics: + enabled: true + devSpaces: + baseUrl: '' + creatorService: + baseUrl: '127.0.0.1' + port: '8000' + rhaap: + baseUrl: '' + token: '' + checkSSL: true + automationHub: + baseUrl: '' + +---- +. Restart the {RHDH} deployment. +. Remove the `plugin-registry` OpenShift application. ++ +---- +oc delete all -l app=plugin-registry +---- + diff --git a/downstream/modules/devtools/proc-rhdh-uninstall-ocp-operator-plugins-cm.adoc b/downstream/modules/devtools/proc-rhdh-uninstall-ocp-operator-plugins-cm.adoc new file mode 100644 index 0000000000..06a64ef999 --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-uninstall-ocp-operator-plugins-cm.adoc @@ -0,0 +1,45 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-uninstall-ocp-operator-plugins-cm_{context}"] += Removing the {AAPRHDHShort} from the ConfigMap + +// (this section covers uninstalling plugins only, not unloading or updating the sidecar container) +// To uninstall the dynamic plugins, you must update the `rhaap-dynamic-plugins-config` ConfigMap + +.Procedure + +. Open the custom ConfigMap where you referenced the {AAPRHDHShort}. +For this example, the ConfigMap name is `rhaap-dynamic-plugins-config`. +. Locate the dynamic plug-ins in the `plugins:` block. ++ +** To disable the plug-ins, update the `disabled` attribute to `true` for the three plug-ins. +** To delete the plug-ins, delete the lines that reference the plug-ins from the `plugins:` block: ++ +---- + +kind: ConfigMap +apiVersion: v1 +metadata: + name: rhaap-dynamic-plugins-config +data: + dynamic-plugins.yaml: | + ... + plugins: # Remove the Ansible plug-ins entries below the ‘plugins’ YAML key + - disabled: false + package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz' + integrity: + ... + - disabled: false + package: >- + http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz + integrity: + ... + - disabled: false + package: >- + http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz + integrity: + ... 
+
+----
+. Click btn:[Save].
+
diff --git a/downstream/modules/devtools/proc-rhdh-uninstall-ocp-operator-rhdh-cm.adoc b/downstream/modules/devtools/proc-rhdh-uninstall-ocp-operator-rhdh-cm.adoc
new file mode 100644
index 0000000000..51c5d98d04
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-uninstall-ocp-operator-rhdh-cm.adoc
@@ -0,0 +1,43 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-uninstall-ocp-operator-rhdh-cm_{context}"]
+= Removing {PlatformNameShort} and Dev Spaces from the custom {RHDH} ConfigMap
+
+.Procedure
+
+. Open the custom {RHDH} ConfigMap where you added configuration for the templates and for connecting to {PlatformNameShort} and Dev Spaces.
+In this example, the {RHDH} ConfigMap name is `app-config-rhdh`.
++
+----
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: app-config-rhdh
+data:
+  app-config-rhdh.yaml: |
+    ...
+    catalog:
+      ...
+      locations: # Remove the YAML entry below the 'locations' YAML key
+        - type: url
+          target: https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml
+          rules:
+            - allow: [Template]
+      ...
+    # Remove the entire 'ansible' YAML key and all sub-entries
+    ansible:
+      devSpaces:
+        baseUrl: ''
+      creatorService:
+        baseUrl: '127.0.0.1'
+        port: '8000'
+      rhaap:
+        baseUrl: ''
+        token: ''
+        checkSSL: false
+
+----
+. Remove the `url` entry in the `locations:` block to delete the templates from the {RHDHShort} instance.
+. Remove the `ansible:` block to delete the Ansible-specific configuration.
+. Click btn:[Save].
+
diff --git a/downstream/modules/devtools/proc-rhdh-uninstall-ocp-operator-sidecar-container.adoc b/downstream/modules/devtools/proc-rhdh-uninstall-ocp-operator-sidecar-container.adoc
new file mode 100644
index 0000000000..263c51f15a
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-uninstall-ocp-operator-sidecar-container.adoc
@@ -0,0 +1,37 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-uninstall-ocp-operator-sidecar-container_{context}"]
+= Uninstalling the sidecar container
+
+To remove the sidecar container for {ToolsName} from the developer-hub pod,
+you must modify the base ConfigMap for the {RHDH} deployment.
+
+.Procedure
+
+. In the OpenShift console, select the *Topology* view.
+. Click *More actions {MoreActionsIcon}* on the developer-hub instance and select *Edit backstage* to edit the base ConfigMap.
+. Select the *YAML* tab.
+. In the editing pane, remove the `containers` block for the sidecar container from the `spec.deployment.patch.spec.template.spec` block:
++
+----
+...
+spec:
+  deployment:
+    patch:
+      spec:
+        template:
+          spec:
+            containers:
+              - command:
+                  - adt
+                  - server
+                image: ghcr.io/ansible/community-ansible-dev-tools:latest
+                imagePullPolicy: Always
+                ports:
+                  - containerPort: 8000
+                    protocol: TCP
+                terminationMessagePolicy: File
+
+----
+. Click btn:[Save].
+
diff --git a/downstream/modules/devtools/proc-rhdh-update-plugin-registry.adoc b/downstream/modules/devtools/proc-rhdh-update-plugin-registry.adoc
new file mode 100644
index 0000000000..97ea8005cf
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-update-plugin-registry.adoc
@@ -0,0 +1,46 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-update-plugin-registry_{context}"]
+= Updating the plug-in registry
+
+Rebuild your plug-in registry application in your OpenShift cluster with the latest Ansible plug-ins files.
+
+.Prerequisites
+
+* You have downloaded the Ansible plug-ins files.
+* You have set an environment variable, for example `$DYNAMIC_PLUGIN_ROOT_DIR`,
+to represent the path to the local directory where you have stored the `.tar` files.
+
+.Procedure
+
+. Log in to your {OCPShort} instance with credentials to create a new application.
+. Open your {RHDH} OpenShift project.
++
+----
+$ oc project <your-rhdh-project>
+----
+. Run the following command to update your plug-in registry build in the OpenShift cluster.
+The command assumes that `$DYNAMIC_PLUGIN_ROOT_DIR` represents the directory for your `.tar` files.
+Replace this in the command if you have chosen a different environment variable name.
++
+----
+$ oc start-build plugin-registry --from-dir=$DYNAMIC_PLUGIN_ROOT_DIR --wait
+----
+. When the build starts, the output displays the following message:
++
+----
+Uploading directory "/path/to/dynamic_plugin_root" as binary input for the build …
+Uploading finished
+build.build.openshift.io/plugin-registry-1 started
+----
+
+.Verification
+
+Verify that the `plugin-registry` has been updated.
+
+. In the OpenShift UI, click *Topology*.
+. Click the *redhat-developer-hub* icon to view the pods for the plug-in registry.
+. Click *View logs* for the plug-in registry pod.
+. Open the *Terminal* tab and run `ls` to view the `.tar` files in the plug-in registry.
+. Verify that the new `.tar` file has been uploaded.
+
diff --git a/downstream/modules/devtools/proc-rhdh-update-plugins-helm-version-numbers.adoc b/downstream/modules/devtools/proc-rhdh-update-plugins-helm-version-numbers.adoc
new file mode 100644
index 0000000000..a558cbe871
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-update-plugins-helm-version-numbers.adoc
@@ -0,0 +1,59 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-update-plugins-helm-version-numbers_{context}"]
+= Updating the Ansible plug-ins version numbers for a Helm installation
+
+.Procedure
+
+. Log in to your {OCPShort} instance.
+. In the OpenShift Developer UI, navigate to menu:Helm[developer-hub > Actions > Upgrade > Yaml view].
+. Update the Ansible plug-ins version numbers and associated `.integrity` file values.
++
+----
+...
+global:
+...
+  plugins:
+    - disabled: false
+      integrity: # Use the hash in ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz.integrity
+      package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz'
+      pluginConfig:
+        dynamicPlugins:
+          frontend:
+            ansible.plugin-backstage-rhaap:
+              appIcons:
+                - importName: AnsibleLogo
+                  name: AnsibleLogo
+              dynamicRoutes:
+                - importName: AnsiblePage
+                  menuItem:
+                    icon: AnsibleLogo
+                    text: Ansible
+                  path: /ansible
+    - disabled: false
+      integrity: # Use the hash in ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz.integrity
+      package: >-
+        http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz
+      pluginConfig:
+        dynamicPlugins:
+          backend:
+            ansible.plugin-scaffolder-backend-module-backstage-rhaap: null
+    - disabled: false
+      integrity: # Use the hash in ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz.integrity
+      package: >-
+        http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz
+      pluginConfig:
+        dynamicPlugins:
+          backend:
+            ansible.plugin-backstage-rhaap-backend: null
+
+----
+. Click btn:[Upgrade].
++
+The developer hub pods restart and the plug-ins are installed.
+
+.Verification
+
+. In the OpenShift UI, click *Topology*.
+. Make sure that the {RHDH} instance is available.
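+
+If you prefer the command line, a rollout check such as the following sketch can confirm the same thing.
+The deployment name is a placeholder; substitute the name used in your cluster:
+
+----
+$ oc rollout status deployment/<rhdh-deployment-name>
+$ oc logs deployment/<rhdh-deployment-name> -c install-dynamic-plugins | grep -i ansible
+----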
+ diff --git a/downstream/modules/devtools/proc-rhdh-update-plugins-operator-version-numbers.adoc b/downstream/modules/devtools/proc-rhdh-update-plugins-operator-version-numbers.adoc new file mode 100644 index 0000000000..1a70f2580e --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-update-plugins-operator-version-numbers.adoc @@ -0,0 +1,48 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-update-plugins-operator-version-numbers_{context}"] += Updating the Ansible plug-ins version numbers for an Operator installation + +.Procedure + +. Log in to your {OCPShort} instance. +. In the OpenShift UI, open the ConfigMap where you added the {AAPRHDHShort} during installation. +This example uses a ConfigMap file called `rhaap-dynamic-plugins-config`. +. Update `x.y.z` with the version numbers for the updated {AAPRHDHShort}. +. Update the integrity values for each plug-in with the `.integrity` value from the corresponding extracted {AAPRHDHShort} `.tar` file. +// For example, use the `.integrity` value from `ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz` for the `ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz.integrity` key. ++ +---- +kind: ConfigMap +apiVersion: v1 +metadata: + name: rhaap-dynamic-plugins-config +data: + dynamic-plugins.yaml: | + ... + plugins: # Update the Ansible plug-in entries below with the updated plugin versions + - disabled: false + package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz' + integrity: # Use hash in ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz.integrity + ... + - disabled: false + package: >- + http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz + integrity: # Use hash in ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz.integrity + ... + - disabled: false + package: >- + http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz + integrity: # Use hash in ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz.integrity + ... + +---- +. Click btn:[Save]. ++ +The developer hub pods restart and the plug-ins are installed. + +.Verification + +. In the OpenShift UI, click *Topology*. +. Make sure that the {RHDH} instance is available. + diff --git a/downstream/modules/devtools/proc-rhdh-view.adoc b/downstream/modules/devtools/proc-rhdh-view.adoc new file mode 100644 index 0000000000..0c78e60f73 --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-view.adoc @@ -0,0 +1,7 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-view_{context}"] += Viewing your projects + +To view the projects that you have created in the plug-in, navigate to the *Overview* page for the Ansible plug-in and click *My Items*. + diff --git a/downstream/modules/devtools/proc-rhdh-warning-aap-ooc.adoc b/downstream/modules/devtools/proc-rhdh-warning-aap-ooc.adoc new file mode 100644 index 0000000000..defc59e976 --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-warning-aap-ooc.adoc @@ -0,0 +1,19 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-warning-aap-ooc_{context}"] += {PlatformNameShort} subscription is out of compliance + +The following warning indicates that the Ansible plug-ins successfully retrieved the {PlatformNameShort} subscription status. +However, the subscription is out of compliance. + +---- +Subscription non-compliant +The connected Ansible Automation Platform subscription is out of compliance. +Contact your Red Hat account team to obtain a new subscription entitlement. +Learn more about account compliance. 
+---- + +. Contact your Red Hat account team to obtain a new subscription entitlement. +. Learn more about link:https://access.redhat.com/solutions/6988859[account compliance]. +. When the subscription is in compliance, restart the {RHDH} pod to initiate a new subscription query. + diff --git a/downstream/modules/devtools/proc-rhdh-warning-invalid-aap-config.adoc b/downstream/modules/devtools/proc-rhdh-warning-invalid-aap-config.adoc new file mode 100644 index 0000000000..a8eebf5b3b --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-warning-invalid-aap-config.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-warning-invalid-aap-config_{context}"] += Invalid Ansible Automation Platform configuration + +The following warning indicates that the {PlatformNameShort} configuration section is invalid or incomplete. + +---- +Invalid resource for Ansible Automation Platform +Verify that the resource url for Ansible Automation Platform are correctly configured in the Ansible plug-ins. +For help, please refer to the Ansible plug-ins installation guide. +---- + +. Verify that the `rhaap` section of the Ansible plug-ins ConfigMap is correctly configured and contains all the necessary entries. +For more information, refer to xref:rhdh-configure-aap-details_rhdh-ocp-required-installation[Configuring Ansible Automation Platform details]. +. After correcting the configuration, restart the {RHDH} pod to initiate a subscription query. + diff --git a/downstream/modules/devtools/proc-rhdh-warning-invalid-aap-subscription.adoc b/downstream/modules/devtools/proc-rhdh-warning-invalid-aap-subscription.adoc new file mode 100644 index 0000000000..5ef1e46c0c --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-warning-invalid-aap-subscription.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-warning-invalid-aap-subscription_{context}"] += Invalid {PlatformNameShort} subscription + +The following warning indicates that the Ansible plug-ins successfully retrieved the {PlatformNameShort} subscription status. +However, the subscription type is invalid for {PlatformNameShort}. + +---- +Invalid subscription +The connected Ansible Automation Platform subscription is invalid. +Contact your Red Hat account team, or start an Ansible Automation Platform trial. +---- + +. Contact your Red Hat account team to obtain a new subscription entitlement or link:http://red.ht/aap-rhdh-plugins-start-trial[start an {PlatformNameShort} trial]. +. When you have updated the subscription, restart the {RHDH} pod to initiate a new subscription query. + diff --git a/downstream/modules/devtools/proc-rhdh-warning-unable-authenticate-aap.adoc b/downstream/modules/devtools/proc-rhdh-warning-unable-authenticate-aap.adoc new file mode 100644 index 0000000000..f2e2c24b72 --- /dev/null +++ b/downstream/modules/devtools/proc-rhdh-warning-unable-authenticate-aap.adoc @@ -0,0 +1,19 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rhdh-warning-unable-authenticate-aap_{context}"] += Unable to authenticate to {PlatformNameShort} + +The following warning indicates that the Ansible plug-ins were not able to authenticate with {PlatformNameShort} to query the subscription status. + +---- +Unable to authenticate to Ansible Automation Platform +Verify that the authentication details for Ansible Automation Platform are correctly configured in the Ansible plug-ins. +For help, please refer to the Ansible plug-ins installation guide. +---- + +. 
Verify that the automation controller Personal Access Token (PAT) configured in the Ansible plug-ins is correct.
+For more information, refer to the
+link:{URLCentralAuth}/gw-token-based-authentication#proc-controller-apps-create-tokens[Adding tokens]
+section of _{TitleCentralAuth}_.
+. After correcting the authentication details, restart the {RHDH} pod to initiate a subscription query.
+
diff --git a/downstream/modules/devtools/proc-rhdh-warning-unable-connect-aap.adoc b/downstream/modules/devtools/proc-rhdh-warning-unable-connect-aap.adoc
new file mode 100644
index 0000000000..c52f76de4c
--- /dev/null
+++ b/downstream/modules/devtools/proc-rhdh-warning-unable-connect-aap.adoc
@@ -0,0 +1,17 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-warning-unable-connect-aap_{context}"]
+= Unable to connect to {PlatformNameShort}
+
+The following warning indicates that the automation controller details are not configured, or that the controller API cannot be reached to query the subscription status.
+
+----
+Unable to connect to Ansible Automation Platform
+Verify that Ansible Automation Platform is reachable and correctly configured in the Ansible plug-ins.
+To get help, please refer to the Ansible plug-ins installation guide.
+----
+
+. Verify that {PlatformNameShort} is reachable and correctly configured in the `rhaap` section of the ConfigMap.
+. Ensure that the `checkSSL` key is correctly set for your environment.
+. After correcting the configuration details, restart the {RHDH} pod to initiate a subscription query.
+
diff --git a/downstream/modules/devtools/proc-running-playbook.adoc b/downstream/modules/devtools/proc-running-playbook.adoc
deleted file mode 100644
index 303b180a20..0000000000
--- a/downstream/modules/devtools/proc-running-playbook.adoc
+++ /dev/null
@@ -1,36 +0,0 @@
-[id="running-playbook"]
-
-= Running your playbook
-
-[role="_abstract"]
-
-The Ansible {VSCode} extension provides two options to run your playbook:
-
-* `ansible-playbook` runs the playbook on your local machine using Ansible Core.
-* `ansible-navigator` runs the playbook in an execution environment in the same manner that {PlatformNameShort} runs an automation job.
-You specify the base image for the execution environment in the Ansible extension settings.
-
-== Running your playbook with `ansible-playbook`
-
-.Procedure
-
-* To run a playbook, right-click the playbook name in the Explorer pane, then select menu:Run Ansible Playbook via[Run playbook via `ansible-playbook`].
-
-image:ansible-playbook-run.png[Run playbook via ansible-playbook]
-
-The output is displayed in the *Terminal* tab of the {VSCode} terminal.
-The `ok=2` and `failed=0` messages indicate that the playbook ran successfully.
-
-image:ansible-playbook-success.png[Success message for ansible-playbook execution]
-
-== Running your playbook with `ansible-navigator`
-
-.Prerequisites
-
-* In the Ansible extension settings, enable the use of an execution environment in Ansible Execution Environment > Enabled.
-* Enter the path or URL for the execution environment image in Ansible > Execution Environment: Image.
-
-.Procedure
-
-* To run a playbook, right-click the playbook name in the Explorer pane, then select menu:Run Ansible Playbook via[Run playbook via ansible-navigator run].
- diff --git a/downstream/modules/devtools/proc-scaffolding-playbook-project.adoc b/downstream/modules/devtools/proc-scaffolding-playbook-project.adoc index 57e07294fb..99ab529880 100644 --- a/downstream/modules/devtools/proc-scaffolding-playbook-project.adoc +++ b/downstream/modules/devtools/proc-scaffolding-playbook-project.adoc @@ -1,4 +1,5 @@ -[id="scaffolding-playbook-project"] +[id="scaffolding-playbook-project_{context}"] +:_mod-docs-content-type: PROCEDURE = Scaffolding a playbook project @@ -15,19 +16,32 @@ The following steps describe the process for scaffolding a new playbook project .Procedure +. Open {VSCode}. . Click the Ansible icon in the {VSCode} activity bar to open the Ansible extension. -. Type kbd:[Ctrl+Shift+P] to display the VSCode command palette. -. In the input field, enter `Create new Ansible project`. The **Create Ansible Project** tab opens. -. Enter a name for the directory where you want to scaffold your new playbook project. +. Select *Get started* in the *Ansible content creator* section. ++ +The *Ansible content creator* tab opens. +. In the *Create* section, click *Ansible playbook project*. ++ +The *Create Ansible project* tab opens. +. In the form in the *Create Ansible project* tab, enter the following: ++ +* *Destination directory*: Enter the path to the directory where you want to scaffold your new playbook project. + [NOTE] ==== -If you enter a current directory name, the scaffolding process will overwrite the contents of that directory. +If you enter an existing directory name, the scaffolding process overwrites the contents of that directory. +The scaffold process only allows you to use an existing directory if you enable the `Force` option. ==== -. Add an organization name and a project name. -. Click btn:[Create] to begin creating your project. +** If you are using the containerized version of Ansible Dev tools, the destination directory path is relative to the container, not a path in your local system. To discover the current directory name in the container, run the `pwd` command in a terminal in VS Code. If the current directory in the container is `workspaces`, enter `workspaces/`. +** If you are using a locally installed version of Ansible Dev tools, enter the full path to the directory, for example `/user//projects/`. +* *SCM organization and SCM project*: Enter a name for the directory and subdirectory where you can store roles that you create for your playbooks. +. Enter a name for the directory where you want to scaffold your new playbook project. -After the project directory has been created, the following message appears in the Logs pane of the Create Ansible Project tab: +.Verification + +After the project directory has been created, the following message appears in the *Logs* pane of the *Create Ansible Project* tab. +In this example, the destination directory name is `destination_directory_name`. ---- ------------------ ansible-creator logs ------------------ @@ -37,17 +51,34 @@ After the project directory has been created, the following message appears in t The following directories and files are created in your project directory: ---- +$ tree -a -L 5 . 
+├── .devcontainer
+│   ├── devcontainer.json
+│   ├── docker
+│   │   └── devcontainer.json
+│   └── podman
+│       └── devcontainer.json
+├── .gitignore
 ├── README.md
 ├── ansible-navigator.yml
 ├── ansible.cfg
 ├── collections
-│ ├── ansible_collections
-│ └── requirements.yml
+│   ├── ansible_collections
+│   │   └── scm_organization_name
+│   │       └── scm_project_name
+│   └── requirements.yml
 ├── devfile.yaml
 ├── inventory
-│ ├── group_vars
-│ ├── host_vars
-│ └── hosts.yml
+│   ├── group_vars
+│   │   ├── all.yml
+│   │   └── web_servers.yml
+│   ├── host_vars
+│   │   ├── server1.yml
+│   │   ├── server2.yml
+│   │   ├── server3.yml
+│   │   ├── switch1.yml
+│   │   └── switch2.yml
+│   └── hosts.yml
 ├── linux_playbook.yml
 ├── network_playbook.yml
 └── site.yml
diff --git a/downstream/modules/devtools/proc-self-service-add-deployment-url-oauth-app.adoc b/downstream/modules/devtools/proc-self-service-add-deployment-url-oauth-app.adoc
new file mode 100644
index 0000000000..672e335534
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-add-deployment-url-oauth-app.adoc
@@ -0,0 +1,25 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-add-deployment-url-oauth-app_{context}"]
+= Adding the deployment URL to the OAuth application
+
+When you set up your OAuth application in {PlatformNameShort} before deploying {SelfServiceShort},
+you added placeholder text for the `Redirect URIs` value.
+
+You must update this value using the URL from the deployed application so that you can run automation on {PlatformNameShort} from {SelfServiceShort}.
+
+
+. Determine the `Redirect URI` from your OpenShift deployment:
+.. Open the URL for the deployment from the OpenShift console to display the sign-in page for {SelfServiceShort}.
+.. Copy the URL.
+.. To determine the `Redirect URI` value, append `/api/auth/rhaap/handler/frame` to the end of the deployment URL.
++
+For example, if the URL for the deployment is `\https://my-aap-self-service-tech-preview-backstage-myproject.mycluster.com`,
+then the `Redirect URI` value is `\https://my-aap-self-service-tech-preview-backstage-myproject.mycluster.com/api/auth/rhaap/handler/frame`.
+. Update the `Redirect URIs` field in the OAuth application in {PlatformNameShort}:
+.. In a browser, open your instance of {PlatformNameShort}.
+.. Navigate to {MenuAMAdminOauthApps}.
+.. In the list view, click the OAuth application you created.
+.. Replace the placeholder text in the `Redirect URIs` field with the value you determined from your OpenShift deployment.
+.. Click btn:[Save] to apply the changes.
+
diff --git a/downstream/modules/devtools/proc-self-service-add-scm-credentials-aap.adoc b/downstream/modules/devtools/proc-self-service-add-scm-credentials-aap.adoc
new file mode 100644
index 0000000000..cdddde0746
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-add-scm-credentials-aap.adoc
@@ -0,0 +1,31 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-add-scm-credentials-aap_{context}"]
+= Adding source control credentials for GitHub or Gitlab in {PlatformNameShort}
+
+.Prerequisite
+Ensure that the user who needs access to the repository has the appropriate permissions.
+
+.Procedure
+. Sign in to {PlatformNameShort} as an administrator.
+. Navigate to {MenuAECredentials}.
+. On the *Credentials* page, click btn:[Create credential].
+. Add the following details:
+** *Name*: Credential name.
+** *Organization*: The name of the organization with which the credential is associated.
+The default is *Default*.
++
+[NOTE]
+====
+Credentials in {PlatformNameShort} are always created under a specific organization.
+====
+** *Credential type*: Source Control.
+** *Username*: Your GitHub username or the Gitlab group name under which the repository is hosted.
++
+[NOTE]
+====
+{SelfServiceShortStart} does not support creating or modifying Gitlab repositories under personal user accounts. You must use a Gitlab Group instead.
+====
+
diff --git a/downstream/modules/devtools/proc-self-service-add-template.adoc b/downstream/modules/devtools/proc-self-service-add-template.adoc
new file mode 100644
index 0000000000..dbac2bae53
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-add-template.adoc
@@ -0,0 +1,26 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-add-template_{context}"]
+= Adding a template
+
+This procedure describes how to add a tile to the *Templates* view of your {SelfServiceShort} instance.
+
+.Prerequisite
+You have created repositories in your SCM for the templates that you want to use.
+
+.Procedure
+. In a browser, navigate to your {SelfServiceShort} instance and sign in with your {PlatformNameShort} credentials.
+. Navigate to the *Templates* page.
+. Click *Add template*.
+. Enter a valid GitHub URL for the template that you want to add.
+. Click *Analyze* to fetch the template.
+. After the template has been fetched, review the list of what will be imported and added to the catalog.
+. Click *Import*.
+
+.Verification
+After the import is complete, return to the *Templates* page to view the newly created template.
+You can now launch your template.
+// A populated *Templates* page resembles the following:
+
diff --git a/downstream/modules/devtools/proc-self-service-create-collection-repo.adoc b/downstream/modules/devtools/proc-self-service-create-collection-repo.adoc
new file mode 100644
index 0000000000..e200828cae
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-create-collection-repo.adoc
@@ -0,0 +1,29 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-create-collection-repo_{context}"]
+
+= Creating a repository for a collection
+
+. Locate the `.tar` file for the collection.
+. Create a new directory to store the unpacked files.
+. Run the following command to unpack the `.tar` file:
++
+----
+$ tar -xvf .tar.gz --directory
+----
+. Navigate to the extracted collection directory and initialize it as a Git repository:
++
+----
+$ cd
+$ git init
+----
+. Edit the collection if you want to modify the template.
++
+The Ansible template definitions are stored in the `extensions/patterns/` directory of the repository.
+. Push the repository to your SCM, as shown in the example after this procedure.
+** See link:https://docs.github.com/en/migrations/importing-source-code/using-the-command-line-to-import-source-code/adding-locally-hosted-code-to-github#adding-a-local-repository-to-github-using-git[Adding a local repository to GitHub using Git]
+in the GitHub documentation.
+** See link:https://docs.gitlab.com/topics/git/project[Create a project with git push] in the Gitlab documentation.
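+
+For example, a complete session might look like the following.
+This is only an illustrative sketch: the archive name `my_namespace-my_collection-1.0.0.tar.gz`, the directory `my-collection-repo`, and the remote `git@github.com:my-org/my-collection-repo.git` are hypothetical values; substitute your own, and use your repository's default branch if it is not `main`.
+
+----
+# Create a working directory and unpack the collection archive into it
+$ mkdir my-collection-repo
+$ tar -xvf my_namespace-my_collection-1.0.0.tar.gz --directory my-collection-repo
+
+# Initialize the directory as a Git repository and commit the contents
+$ cd my-collection-repo
+$ git init
+$ git add .
+$ git commit -m "Initial import of collection"
+
+# Add the remote repository that you created in your SCM and push
+$ git remote add origin git@github.com:my-org/my-collection-repo.git
+$ git push -u origin main
+----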
+
diff --git a/downstream/modules/devtools/proc-self-service-create-gh-pat.adoc b/downstream/modules/devtools/proc-self-service-create-gh-pat.adoc
new file mode 100644
index 0000000000..c2e7df08aa
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-create-gh-pat.adoc
@@ -0,0 +1,18 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-create-gh-pat_{context}"]
+= Creating a Personal access token (PAT) on GitHub
+
+. In a browser, log in to GitHub and navigate to the
+link:https://github.com/settings/tokens[Personal access tokens]
+page.
+. Click *Generate new token (classic)*.
+. In the *Select scopes:* section, enable the following:
+** repo
+** read:org
+** workflow (as needed)
+. Click *Generate token*.
+. Save the Personal access token.
+
diff --git a/downstream/modules/devtools/proc-self-service-create-gl-pat.adoc b/downstream/modules/devtools/proc-self-service-create-gl-pat.adoc
new file mode 100644
index 0000000000..81f1eb2ceb
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-create-gl-pat.adoc
@@ -0,0 +1,19 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-create-gl-pat_{context}"]
+= Creating a Personal access token (PAT) on Gitlab
+
+
+. In a browser, log in to Gitlab and navigate to the
+link:https://gitlab.com/-/user_settings/personal_access_tokens[Personal access tokens]
+page.
+. Click *Add new token*.
+. Provide a name and expiration date for the token.
+. In the *Scopes:* section, select the following:
+** read_repository
+** api
+. Click *Create personal access token*.
+. Save the Personal access token.
+
diff --git a/downstream/modules/devtools/proc-self-service-create-oauth-app.adoc b/downstream/modules/devtools/proc-self-service-create-oauth-app.adoc
new file mode 100644
index 0000000000..a354878de9
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-create-oauth-app.adoc
@@ -0,0 +1,37 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-create-oauth-app_{context}"]
+= Creating an OAuth application
+
+To use the Helm chart to deploy {SelfServiceShort}, you must have set up an OAuth application on your {PlatformNameShort} instance.
+However, you cannot run automation on your {PlatformNameShort} instance until you have deployed your {SelfServiceShort} Helm chart,
+because the OAuth configuration requires the URL for your deployment.
+
+Create the OAuth application on your {PlatformNameShort} instance,
+using a placeholder value for the deployment URL.
+
+After deploying {SelfServiceShort}, you must xref:self-service-add-deployment-url-oauth-app_self-service-accessing-deployment[replace the placeholder value with a URL derived from your deployment URL] in your OAuth application.
+
+The following steps describe how to create an OAuth application in the {PlatformNameShort} console.
+
+.Procedure
+. Open your {PlatformNameShort} instance in a browser and log in.
+. Navigate to {MenuAMAdminOauthApps}.
+. Click *Create OAuth Application*.
+. Complete the fields in the form.
+** *Name*: Add a name for your application.
+** *Organization*: Choose the organization.
+** *Authorization grant type*: Choose `Authorization code`.
+** *Client type*: Choose `Confidential`.
+** *Redirect URIs*: Add placeholder text for the deployment URL (for example `https://example.com`).
++
+image::self-service-create-oauth-app.png[Create OAuth application]
+. Click *Create OAuth application*.
++
+The *Application information* popup displays the `clientId` and `clientSecret` values.
+. Copy the `clientId` and `clientSecret` values and save them.
++
+These values are used in an OpenShift secret for {PlatformNameShort} authentication.
+
diff --git a/downstream/modules/devtools/proc-self-service-create-ocp-auth-secrets.adoc b/downstream/modules/devtools/proc-self-service-create-ocp-auth-secrets.adoc
new file mode 100644
index 0000000000..da1800c949
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-create-ocp-auth-secrets.adoc
@@ -0,0 +1,40 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-create-ocp-auth-secrets_{context}"]
+= Creating {PlatformNameShort} authentication secrets
+
+. Log in to your {OCPShort} instance.
+. Open your OpenShift project for {SelfServiceShort} in the *Administrator* view.
+. Click *Secrets* in the side pane.
+. Click *Create* to open the form for creating a new secret.
+. Select the *Key/Value* option.
+. Create a secret named `secrets-rhaap-self-service-preview`.
++
+[NOTE]
+====
+The secret must use this exact name.
+====
+. Add the following key-value pairs to the secret.
++
+[NOTE]
+====
+The secrets must use the exact key names specified below.
+====
++
+** Key: `aap-host-url`
++
+Value needed: {PlatformNameShort} instance URL
++
+** Key: `oauth-client-id`
++
+Value needed: {PlatformNameShort} OAuth client ID
++
+** Key: `oauth-client-secret`
++
+Value needed: {PlatformNameShort} OAuth client secret value
++
+** Key: `aap-token`
++
+Value needed: Token for {PlatformNameShort} user authentication (must have `write` access).
+. Click *Create* to create the secret.
+
diff --git a/downstream/modules/devtools/proc-self-service-create-pattern-loader-repo.adoc b/downstream/modules/devtools/proc-self-service-create-pattern-loader-repo.adoc
new file mode 100644
index 0000000000..df6ddf3a07
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-create-pattern-loader-repo.adoc
@@ -0,0 +1,21 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-create-pattern-loader-repo_{context}"]
+
+= Creating a repository for `ansible-pattern-loader`
+
+To use one of the pre-loaded tiles in {SelfServiceShort},
+you must create a copy of the link:https://github.com/ansible/ansible-pattern-loader[`ansible-pattern-loader`] repository in GitHub or Gitlab.
+
+. Clone the repository:
++
+----
+$ git clone git@github.com:ansible/ansible-pattern-loader.git
+----
+. Push the repository to your SCM.
+** See link:https://docs.github.com/en/migrations/importing-source-code/using-the-command-line-to-import-source-code/adding-locally-hosted-code-to-github#adding-a-local-repository-to-github-using-git[Adding a local repository to GitHub using Git]
+in the GitHub documentation.
+** See link:https://docs.gitlab.com/topics/git/project[Create a project with git push] in the Gitlab documentation.
+
diff --git a/downstream/modules/devtools/proc-self-service-create-scm-secrets.adoc b/downstream/modules/devtools/proc-self-service-create-scm-secrets.adoc
new file mode 100644
index 0000000000..47d52645d9
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-create-scm-secrets.adoc
@@ -0,0 +1,33 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-create-scm-secrets_{context}"]
+= Creating GitHub and Gitlab secrets
+
+. Log in to your {OCPShort} instance.
+. Open your OpenShift project for {SelfServiceShort}.
+. Click *Secrets* in the side pane.
+. Click *Create* to open the form for creating a new secret.
+. Select the *Key/Value* option.
+. Create a secret named `secrets-scm`.
++
+[NOTE]
+====
+The secret must use this exact name.
+====
+. Add the following key-value pairs to the secret.
+If you are using only one SCM, add just the key-value pair for that SCM.
++
+[NOTE]
+====
+The secrets must use the exact key names specified below.
+====
++
+** Key: `github-token`
++
+Value needed: GitHub Personal Access Token (PAT)
++
+** Key: `gitlab-token`
++
+Value needed: Gitlab Personal Access Token (PAT)
+. Click *Create* to create the secret.
+
diff --git a/downstream/modules/devtools/proc-self-service-deregister-dynamic-templates.adoc b/downstream/modules/devtools/proc-self-service-deregister-dynamic-templates.adoc
new file mode 100644
index 0000000000..7fdf3481e4
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-deregister-dynamic-templates.adoc
@@ -0,0 +1,20 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-deregister-dynamic-templates_{context}"]
+= Deregistering dynamically added templates
+
+Dynamically added templates are templates that you have added using *Add Template* in the {SelfServiceShort} console.
+
+. In a browser, navigate to the {SelfServiceShort} instance.
+. Click the catalog template name to navigate to the *Template detail* view.
+The navigation bar contains the *Unregister Template* option.
+. Click *Unregister Template*.
+. In the dialog, confirm that you want to deregister the template.
+. Click *Delete Entity* to unregister the template.
+
+.Verification
+In a browser, navigate to the *Templates* view for your {SelfServiceShort} instance.
+Verify that the template has been deleted.
+
diff --git a/downstream/modules/devtools/proc-self-service-deregister-preinstalled-templates.adoc b/downstream/modules/devtools/proc-self-service-deregister-preinstalled-templates.adoc
new file mode 100644
index 0000000000..da18f0c595
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-deregister-preinstalled-templates.adoc
@@ -0,0 +1,36 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-deregister-preinstalled-templates_{context}"]
+= Deregistering pre-installed templates
+
+{SelfServiceShortStart} comes preloaded with example templates to help you get started.
+To remove the preloaded templates from the *Templates* page, you must edit the Helm chart for your {SelfServiceShort} installation.
+
+. In a browser, navigate to your OpenShift project for {SelfServiceShort}.
+. Select the *Topology* view.
+. Open the Helm chart for your deployment.
+. Locate the `catalog.locations` section of the Helm chart:
++
+----
+    locations:
+      - type: file
+        target: /software-templates/seed.yaml
+        rules:
+          - allow: [Template]
+----
+. Comment out the `type`, `target`, and `rules` keys of `catalog.locations` by adding a `#` character:
++
+----
+    locations:
+      # - type: file
+      #   target: /software-templates/seed.yaml
+      #   rules:
+      #     - allow: [Template]
+----
+. Click *Create* to re-launch the deployment.
+
+.Verification
+In a browser, navigate to the *Templates* view for your {SelfServiceShort} instance.
+Verify that the templates have been deleted.
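+
+If you prefer to work from a terminal, you can apply the same override with `helm upgrade`.
+The following is only a sketch: the release name `my-self-service`, the chart reference `aap-charts/self-service`, and the file name `values-no-seed.yaml` are hypothetical values; substitute the names from your own installation.
+
+----
+# values-no-seed.yaml: remove the preloaded template location
+catalog:
+  locations: []
+----
+
+----
+# Re-use the values from the existing release and apply only the override
+$ helm upgrade my-self-service aap-charts/self-service --reuse-values -f values-no-seed.yaml
+----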
diff --git a/downstream/modules/devtools/proc-self-service-download-tar.adoc b/downstream/modules/devtools/proc-self-service-download-tar.adoc
new file mode 100644
index 0000000000..308ca9b486
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-download-tar.adoc
@@ -0,0 +1,46 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="rhdh-download-tar_{context}"]
+= Downloading the TAR files
+
+. Create a directory on your local machine to store the `.tar` files.
++
+----
+$ mkdir /path/to/
+----
+. Set an environment variable (`$DYNAMIC_PLUGIN_ROOT_DIR`) to represent the directory path.
++
+----
+$ export DYNAMIC_PLUGIN_ROOT_DIR=/path/to/
+----
+. Download the latest `.tar` file for the plug-ins from the
+link:{PlatformDownloadUrl}[Red Hat Ansible Automation Platform Product Software downloads page].
++
+The format of the filename is `ansible-backstage-rhaap-bundle-x.y.z.tar.gz`.
++
+Substitute the Ansible plug-ins release version, for example `1.0.0`, for `x.y.z`.
+. Extract the `ansible-backstage-rhaap-bundle-.tar.gz` contents to `$DYNAMIC_PLUGIN_ROOT_DIR`.
++
+----
+$ tar --exclude='*code*' -xzf ansible-backstage-rhaap-bundle-x.y.z.tar.gz -C $DYNAMIC_PLUGIN_ROOT_DIR
+----
++
+Substitute the Ansible plug-ins release version, for example `1.0.0`, for `x.y.z`.
+
+.Verification
+
+Run `ls` to verify that the extracted files are in the `$DYNAMIC_PLUGIN_ROOT_DIR` directory:
+
+----
+$ ls $DYNAMIC_PLUGIN_ROOT_DIR
+ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz
+ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz.integrity
+ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz
+ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz.integrity
+ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz
+ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz.integrity
+
+----
+
+The files with the `.integrity` file type contain the plug-in SHA value.
+
diff --git a/downstream/modules/devtools/proc-self-service-export-collection-pah.adoc b/downstream/modules/devtools/proc-self-service-export-collection-pah.adoc
new file mode 100644
index 0000000000..99fdd43885
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-export-collection-pah.adoc
@@ -0,0 +1,19 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-export-collection-pah_{context}"]
+= Exporting a collection from private automation hub
+
+. Log in to {PlatformNameShort}.
+. From the navigation panel, select {MenuACCollections}.
++
+The *Collections* page displays all collections across all repositories.
+You can search for a specific collection.
+. Click the collection that you want to export.
++
+The *Collection details* page opens.
+. From the *Install* tab, select *Download tarball*.
++
+The `.tar` file is downloaded to your default browser downloads folder.
+
diff --git a/downstream/modules/devtools/proc-self-service-generate-oauth-token.adoc b/downstream/modules/devtools/proc-self-service-generate-oauth-token.adoc
new file mode 100644
index 0000000000..e5aba10e9c
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-generate-oauth-token.adoc
@@ -0,0 +1,25 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-generate-oauth-token_{context}"]
+= Generating a token for user authentication
+
+You must create a token in {PlatformNameShort}.
+The token is used in an OpenShift secret for {PlatformNameShort} authentication.
+
+.Procedure
+. Log in to your instance of {PlatformNameShort} as the `admin` user.
+. Navigate to {MenuControllerUsers}.
+. Select the `admin` user.
+. Select the *Tokens* tab.
+. Click *Create Token*.
+. Select your OAuth application.
+In the *Scope* menu, select `Write`.
++
+image::self-service-generate-oauth-token.png[Create OAuth token]
+. Click *Create Token* to generate the token.
+. Save the new token.
++
+The token is used in an OpenShift secret that is fetched by the Helm chart.
+
diff --git a/downstream/modules/devtools/proc-self-service-install-helm-from-catalog.adoc b/downstream/modules/devtools/proc-self-service-install-helm-from-catalog.adoc
new file mode 100644
index 0000000000..2a480ec117
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-install-helm-from-catalog.adoc
@@ -0,0 +1,35 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-install-helm-from-catalog_{context}"]
+= Configuring the {SelfServiceShort} Helm chart from the OpenShift catalog
+
+.Prerequisites
+. You have created a project for {SelfServiceShort} in OpenShift.
+. You have created a plugin registry in your project.
+. You have set up secrets for {PlatformNameShort} authentication and SCM authentication.
+
+.Procedure
+. In a browser, navigate to your OpenShift project for {SelfServiceShort} that you created earlier.
+. Select the *Developer* view.
+. Click the *Helm* option in the OpenShift sidebar.
+. In the *Helm* page, click *Create* and select *Helm Release*.
+. Search for `AAP` in the Helm Charts filter,
+and select the `AAP Technical Preview: Self-service automation` chart.
+. In the modal dialog on the chart page, click *Create*.
+. Select the *YAML view* in the *Create Helm Release* page.
+. Locate the `clusterRouterBase` key in the YAML file and replace the placeholder value with the base URL of your OpenShift instance.
++
+The base URL is the URL portion of your OpenShift URL that follows `\https://console-openshift-console`,
+for example `apps.example.com`:
++
+----
+  redhat-developer-hub:
+    global:
+      clusterRouterBase: apps.example.com
+----
+. The Helm chart is set up for the Default {PlatformNameShort} organization.
++
+To update the Helm chart to use a different organization,
+update the value for the `catalog.providers.rhaap.orgs` key from `Default` to your {PlatformNameShort} organization name.
+. Click *Create* to launch the deployment.
+
diff --git a/downstream/modules/devtools/proc-self-service-install-verify.adoc b/downstream/modules/devtools/proc-self-service-install-verify.adoc
new file mode 100644
index 0000000000..1e60925627
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-install-verify.adoc
@@ -0,0 +1,17 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-install-verify_{context}"]
+= Verifying the installation
+
+. In a browser, log in to your OpenShift instance.
+. In the *Developer* view,
+navigate to the *Topology* view for the namespace where you deployed the Helm chart.
++
+The deployment appears: the label on the icon is `D`.
+The name of the deployment is `-backstage`,
+for example ``.
++
+While it is deploying, the icon is light blue.
+The color changes to dark blue when deployment is complete.
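+
+You can also watch the rollout from a terminal.
+The following is a sketch that assumes the deployment is named `my-aap-self-service-tech-preview-backstage` and runs in the `my-project` namespace; substitute your own deployment name and namespace.
+
+----
+# Wait until the deployment finishes rolling out
+$ oc rollout status deployment/my-aap-self-service-tech-preview-backstage -n my-project
+deployment "my-aap-self-service-tech-preview-backstage" successfully rolled out
+----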
+
+image::self-service-verify-helm-install.png[Deployment on OpenShift console]
diff --git a/downstream/modules/devtools/proc-self-service-launch-template.adoc b/downstream/modules/devtools/proc-self-service-launch-template.adoc
new file mode 100644
index 0000000000..fcb82da302
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-launch-template.adoc
@@ -0,0 +1,26 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-launch-template_{context}"]
+= Launching a template
+
+This procedure describes how to launch a template from a tile in the *Templates* view of your {SelfServiceShort} instance.
+
+.Procedure
+. In a browser, navigate to your {SelfServiceShort} instance and sign in with your {PlatformNameShort} credentials.
+. Navigate to the *Templates* page.
+The templates you have set up are displayed as tiles on the page.
+. In the template that you want to launch, click *Start*.
++
+A description of the template is displayed.
+. Click *Launch* to begin configuring the parameters for running the template.
+. Fill out the required fields.
+. Click *Next*.
+. Review the entered information.
+. Click *Create* to launch the template.
++
+The progress for the template execution is displayed.
+
+.Verification
+To view the log for the template execution, click *Show logs*.
+
diff --git a/downstream/modules/devtools/proc-self-service-ocp-project-setup-ui.adoc b/downstream/modules/devtools/proc-self-service-ocp-project-setup-ui.adoc
new file mode 100644
index 0000000000..878622137c
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-ocp-project-setup-ui.adoc
@@ -0,0 +1,20 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-ocp-project-setup-ui_{context}"]
+// Setting up a new project for {SelfServiceShort} in {OCPShort} UI
+= Setting up a project in the {OCPShort} web console
+
+You can use the {OCPShort} web console to create a project in your cluster.
+
+. In a browser, log in to the {OCPShort} web console.
+. Choose the *Developer* perspective.
+. Click the *Project* menu and select *Create project*.
+.. In the *Create Project* dialog box, enter a unique name in the *Name* field.
+*** Lowercase alphanumeric characters (`a-z`, `0-9`) and the hyphen character (`-`) are permitted for project names.
+*** The underscore (`_`) character is not permitted.
+*** The maximum length for project names is 63 characters.
+.. Optional: Enter a display name and description for your project.
+. Click btn:[Create] to create the project.
+
diff --git a/downstream/modules/devtools/proc-self-service-ocp-project-setup.adoc b/downstream/modules/devtools/proc-self-service-ocp-project-setup.adoc
new file mode 100644
index 0000000000..21679cef21
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-ocp-project-setup.adoc
@@ -0,0 +1,52 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-ocp-project-setup_{context}"]
+= Setting up an {OCPShort} project using `oc`
+
+. In a terminal, log in to {OCPShort} using your credentials:
++
+----
+oc login -u
+----
++
+For example:
++
+----
+$ oc login https://api..com:6443 -u kubeadmin
+WARNING: Using insecure TLS client config. Setting this option is not supported!
+
+Console URL: https://api..com:6443/console
+Authentication required for https://api..com:6443 (openshift)
+Username: kubeadmin
+Password:
+Login successful.
+
+You have access to 22 projects, the list has been suppressed. You can list all projects with 'oc projects'
+
+Using project "default".
+----
+. Create a new project. Use a unique project name.
++
+----
+$ oc new-project
+----
++
+Lowercase alphanumeric characters (`a-z`, `0-9`) and the hyphen character (`-`) are permitted for project names.
+The underscore (`_`) character is not permitted.
+The maximum length for project names is 63 characters.
++
+For example:
++
+----
+$ oc new-project
+
+Now using project "my-project" on server "https://openshift.example.com:6443".
+----
+. Open your new project:
++
+----
+$ oc project
+----
+
diff --git a/downstream/modules/devtools/proc-self-service-set-up-rbac.adoc b/downstream/modules/devtools/proc-self-service-set-up-rbac.adoc
new file mode 100644
index 0000000000..4b1ea9631e
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-set-up-rbac.adoc
@@ -0,0 +1,38 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-set-up-rbac_{context}"]
+= Setting up RBAC
+
+RBAC is set up in the Helm chart with the `admin` user set as the RBAC administrator (`rbac_admin`).
+
+This procedure describes how to create a role in {SelfServiceShort} that allows only a selected team to view and execute particular templates.
+
+.Prerequisites
+* As the admin user in your {PlatformNameShort} instance, you have created a user, for example `example-user`.
++
+See
+link:{URLCentralAuth}/gw-managing-access#proc-controller-creating-a-user[Creating a user]
+in the _{TitleCentralAuth}_ guide.
+* You have added this user as a member of a team, for example `example-team`.
++
+See
+link:{URLCentralAuth}/gw-managing-access#proc-gw-team-add-user[Adding users to a team]
+in the _{TitleCentralAuth}_ guide.
+
+.Procedure
+. In a browser, log in to your {SelfServiceShort} instance as the {PlatformNameShort} `admin` user.
+. In the navigation panel, select menu:Administration[RBAC].
+. In the *RBAC* view, click *Create*.
++
+The *Create Role* view appears.
++
+.. Enter a name for the role.
+.. Select the user or group that you want to allow to use the role.
+.. In the *Add Permission policies* section, select the plug-ins that you want to enable for the role.
+.. Select *Permission* in the list of plug-ins to configure the fine-grained permission policies for the role.
+. Click *Next*.
+. Review the settings that you have selected for the role.
+. Click *Create* to create the role.
+
diff --git a/downstream/modules/devtools/proc-self-service-setup-registry-image.adoc b/downstream/modules/devtools/proc-self-service-setup-registry-image.adoc
new file mode 100644
index 0000000000..8b2d0f8808
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-setup-registry-image.adoc
@@ -0,0 +1,47 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-setup-registry-image_{context}"]
+= Setting up the plugin registry image
+
+Set up a registry in your OpenShift cluster to host the plug-ins and make them available for installation.
+
+.Procedure
+
+. Log in to your {OCPShort} instance with credentials to create a new application.
+. Open your OpenShift project for {SelfServiceShort}.
++
+----
+$ oc project
+----
+. Run the following commands to create a plugin registry build in your OpenShift project.
++
+----
+$ oc new-build httpd --name=plugin-registry --binary
+$ oc start-build plugin-registry --from-dir=$DYNAMIC_PLUGIN_ROOT_DIR --wait
+$ oc new-app --image-stream=plugin-registry
+----
+
+.Verification
+
+Verify that the plugin-registry was deployed successfully:
+
+. Open the *Topology* view in the *Developer* perspective for your project in the OpenShift web console.
+. Select the plugin registry icon to open the *plugin-registry* details pane.
+. In the *Pods* section of the *plugin-registry* details pane, click *View logs* for the
+`plugin-registry-#########-####` pod.
+// Can't use multiple hashtags characters in Asciidoc: Asciidoctor interprets them as special characters.
++
+image::self-service-plugin-registry.png[Developer perspective]
++
+(1) Plug-in registry
+. Click the *terminal* tab and log in to the container.
+. In the terminal, run `ls` to confirm that the `.tar` files are in the plugin registry.
++
+----
+ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz
+ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz
+ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz
+----
++
+The version numbers and file names can differ.
+
diff --git a/downstream/modules/devtools/proc-self-service-share-credentials-aap.adoc b/downstream/modules/devtools/proc-self-service-share-credentials-aap.adoc
new file mode 100644
index 0000000000..56cf3d3954
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-share-credentials-aap.adoc
@@ -0,0 +1,27 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-share-credentials-aap_{context}"]
+= Sharing credential access with users and teams in {PlatformNameShort}
+
+You can grant users access to credentials based on their team membership.
+When you add a user as a member of a team,
+they inherit access to the credentials assigned to that team.
+
+. In a browser, navigate to your {PlatformNameShort} instance.
+. Navigate to {MenuAECredentials}.
+. On the *Credentials* page, open the credential you created for Gitlab or GitHub.
+. Select users or teams from the same organization:
+** Select the *Team Access* tab if you want to provide access to the credential for a team.
+** Select the *User Access* tab if you want to provide access to the credential for a user.
+. Click btn:[Add roles].
+. Click the checkbox beside the team or user you want to share the credentials with, and click btn:[Next].
+. Select the roles you want applied to the team or user and click btn:[Next].
+. Review the settings and click btn:[Finish].
++
+The *Add roles* dialog indicates whether the role assignments were successfully applied.
+. You can remove access to a role for a team by selecting the *Remove role* icon next to the team.
+This launches a confirmation dialog, asking you to confirm the removal.
+
diff --git a/downstream/modules/devtools/proc-self-service-sign-in.adoc b/downstream/modules/devtools/proc-self-service-sign-in.adoc
new file mode 100644
index 0000000000..36a0990fa6
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-sign-in.adoc
@@ -0,0 +1,26 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-sign-in_{context}"]
+= Signing in to {SelfServiceShort}
+
+.Prerequisites
+. You have configured an OAuth application in {PlatformNameShort} for {SelfServiceShort}.
+. You have configured a user account in {PlatformNameShort}.
+
+.Procedure
+
+. In a browser, navigate to the URL for {SelfServiceShort} to open the sign-in page.
++
+image::self-service-sign-in-page.png[Self-service sign-in page]
+. Click *Sign In*.
++
+The sign-in page for {PlatformNameShort} appears:
++
+image::rhaap-sign-in-page.png[{PlatformNameShort} sign-in page]
+. Enter your {PlatformNameShort} credentials and click *Log in*.
++
+The {SelfServiceShort} UI opens.
+. Click *Templates* to open a landing page where tiles representing templates are displayed.
+When the page is populated with templates, the layout resembles the following screenshot:
++
+image::self-service-templates-view.png[Templates view]
+
diff --git a/downstream/modules/devtools/proc-self-service-sync-frequency.adoc b/downstream/modules/devtools/proc-self-service-sync-frequency.adoc
new file mode 100644
index 0000000000..a6d01ddf90
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-sync-frequency.adoc
@@ -0,0 +1,34 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-sync-frequency_{context}"]
+= Adjusting synchronization frequency between {PlatformNameShort} and {SelfServiceShort}
+
+The Helm chart defines how frequently users,
+teams, and organization configuration information is synchronized from {PlatformNameShort} to {SelfServiceShort}.
+
+The frequency is set by the `catalog.providers.rhaap.schedule.frequency` key.
+By default, the synchronization occurs hourly.
+
+* To adjust the synchronization frequency, edit the value for the `catalog.providers.rhaap.schedule.frequency` key in the Helm chart.
++
+----
+      catalog:
+        ...
+        providers:
+          rhaap:
+            '{{- include "catalog.providers.env" . }}':
+              schedule:
+                frequency: {minutes: 60}
+                timeout: {seconds: 30}
+
+----
+
+[NOTE]
+====
+Increasing the synchronization frequency generates extra traffic.
+
+Bear this in mind when deciding the frequency, particularly if you have a large number of users.
+====
+
+// To run a synchronization outside of the scheduled frequency, restart your {SelfServiceShort} instance.
+
diff --git a/downstream/modules/devtools/proc-self-service-telemetry-disable.adoc b/downstream/modules/devtools/proc-self-service-telemetry-disable.adoc
new file mode 100644
index 0000000000..db266925c5
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-telemetry-disable.adoc
@@ -0,0 +1,30 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-telemetry-disable_{context}"]
+= Disabling telemetry data collection
+
+You can disable and enable the telemetry data collection feature for {SelfServiceShort} by updating the Helm chart for your {OCPShort} project.
+
+. Log in to the {OCPShort} console and open the project for {SelfServiceShort} in the *Developer* perspective.
+. Navigate to *Helm*.
+. Click the *More actions {MoreActionsIcon}* icon for your {SelfServiceShort} Helm chart and select *Upgrade*.
+. Select *YAML view*.
+. Locate the `redhat-developer-hub.global.dynamic.plugins` section of the Helm chart.
+. To disable telemetry data collection, add the following lines to the `redhat-developer-hub.global.dynamic.plugins` section.
++
+----
+redhat-developer-hub:
+  global:
+    ...
+    dynamic:
+      plugins:
+        - disabled: true
+          package: >-
+            ./dynamic-plugins/dist/backstage-community-plugin-analytics-provider-segment
+----
++
+To re-enable telemetry data collection, delete these lines.
+. Click btn:[Upgrade] to apply the changes to the Helm chart and restart the pod.
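+
+To confirm that the change was applied after the pod restarts, you can inspect the generated dynamic-plugins ConfigMap from a terminal.
+The following is a sketch that assumes the release is named `my-aap-self-service-tech-preview` and runs in the `my-project` namespace; substitute your own names.
+
+----
+# The analytics provider entry should now show disabled: true
+$ oc get configmap my-aap-self-service-tech-preview-dynamic-plugins -n my-project -o yaml | grep -A 2 analytics-provider-segment
+----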
+
diff --git a/downstream/modules/devtools/proc-self-service-verify-rbac.adoc b/downstream/modules/devtools/proc-self-service-verify-rbac.adoc
new file mode 100644
index 0000000000..771743ba90
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-verify-rbac.adoc
@@ -0,0 +1,18 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2025-05-05
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-verify-rbac_{context}"]
+= Verifying RBAC
+
+This procedure describes how to verify that the role you set up is working correctly.
+
+. Verify that users with permissions can use a template:
+.. Log in to {SelfServiceShort} as a user who is a member of a team that has been enabled to use a role.
+.. Verify that RBAC is applied and that the user can use the templates that you enabled for the role.
+. Log out of {SelfServiceShort}.
+. Verify that users without permissions cannot see or use a template:
+.. Log in to {SelfServiceShort} as a user who is not a member of the new team that has been enabled to use the role.
+.. Verify that RBAC is applied and that the user cannot use the templates that you enabled for the role.
+. Log out of {SelfServiceShort}.
+
diff --git a/downstream/modules/devtools/proc-self-service-view-configmap.adoc b/downstream/modules/devtools/proc-self-service-view-configmap.adoc
new file mode 100644
index 0000000000..88acc2491b
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-view-configmap.adoc
@@ -0,0 +1,14 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-view-configmap_{context}"]
+= Viewing the ConfigMaps
+
+. In a browser, open the project for your {SelfServiceShort} in your OpenShift instance.
+. In the *Developer* view,
+select *ConfigMaps* in the navigation pane.
+. Select the `-backstage-app-config` ConfigMap, for example `my-aap-self-service-tech-preview-backstage-app-config`.
+. Verify that the ConfigMap conforms with the values you updated in the Helm chart.
+. Return to the list of ConfigMaps and select the `-dynamic-plugins` ConfigMap,
+for example `my-aap-self-service-tech-preview-dynamic-plugins`.
+. Verify that the ConfigMap conforms with the expected plugin values.
+
diff --git a/downstream/modules/devtools/proc-self-service-view-deployment-logs.adoc b/downstream/modules/devtools/proc-self-service-view-deployment-logs.adoc
new file mode 100644
index 0000000000..0b82b79ddc
--- /dev/null
+++ b/downstream/modules/devtools/proc-self-service-view-deployment-logs.adoc
@@ -0,0 +1,45 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="self-service-view-deployment-logs_{context}"]
+= Viewing the deployment logs
+
+. In a browser, log in to your OpenShift instance.
+. In the *Developer* view,
+navigate to the *Topology* view for the namespace where you deployed the Helm chart.
++
+The deployment appears: the label on the icon is `D`.
++
+The name of the deployment is `-backstage`,
+for example ``.
+. Click the icon representing the deployment.
++
+The *Details* pane for the deployment opens.
+. Select the *Resources* tab.
+. Click *View logs* for the deployment pod in the *Pods* section:
++
+image::self-service-view-deployment-logs.png[Deployment on OpenShift console]
++
+The *Pod details* page opens for the deployment pod.
+. Select the *Logs* tab in the *Pod details* page.
+. To view the install messages,
+select the `install-dynamic-plugins` container from the *INIT CONTAINERS* section of the dropdown list of containers:
++
+image::self-service-view-install-messages.png[View install messages]
++
+The log stream displays the progress of the installation of the plug-ins from the plug-in registry.
++
+The log stream for successful installation of the plug-ins resembles the following output:
++
+----
+======= Installing dynamic plugin http://plugin-registry:8080/ansible-backstage-plugin-catalog-backend-module-rhaap-dynamic-0.1.0.tgz
+=> Grabbing package archive through `npm pack`
+=> Verifying package integrity
+=> Extracting package archive /dynamic-plugins-root/ansible-backstage-plugin-catalog-backend-module-rhaap-dynamic-0.1.0.tgz
+=> Removing package archive /dynamic-plugins-root/ansible-backstage-plugin-catalog-backend-module-rhaap-dynamic-0.1.0.tgz
+=> Successfully installed dynamic plugin http://plugin-registry:8080/ansible-backstage-plugin-catalog-backend-module-rhaap-dynamic-0.1.0.tgz
+----
+. Select the *Environment* tab in the *Pod details* page to view the environment variables for the containers.
+If you set additional environment variables in your Helm chart, check that they are listed here.
++
+image::self-service-pod-env-variables.png[Pod environment variables]
+
+
diff --git a/downstream/modules/devtools/proc-writing-playbook.adoc b/downstream/modules/devtools/proc-writing-playbook.adoc
deleted file mode 100644
index 1374505b45..0000000000
--- a/downstream/modules/devtools/proc-writing-playbook.adoc
+++ /dev/null
@@ -1,34 +0,0 @@
-[id="writing-playbook"]
-
-= Writing your first playbook
-
-[role="_abstract"]
-The instructions below describe how {ToolsName} help you to create and run your first playbook in {VSCode}.
-
-.Prerequisites
-
-.You have installed and opened the Ansible {VSCode} extension.
-.You have installed `ansible-devtools`.
-.You have set up and activated a Python virtual environment in {VSCode}.
-.You have opened a terminal in {VSCode}.
-
-.Procedure
-
-. Open a YAML file in {VSCode} for your playbook.
-You can create a new file or use the empty placeholder YAML file that you set up when you created the directory and playbook.
-. Add the following example code into the playbook file and save the file.
-The playbook consists of a single play that executes a ping to your local machine.
-+
-----
-- name: My first play
-  hosts: localhost
-  tasks:
-  - name: Ping my hosts
-    ansible.builtin.ping:
-
-----
-+
-`Ansible-lint` runs in the background and displays errors in the *Problems* tab of the terminal.
-There are no errors in this playbook:
-
-image::ansible-lint-no-errors.png[Ansible-lint showing no errors in a playbook]
diff --git a/downstream/modules/devtools/ref-devtools-components.adoc b/downstream/modules/devtools/ref-devtools-components.adoc
index a086e4f822..521be1f64c 100644
--- a/downstream/modules/devtools/ref-devtools-components.adoc
+++ b/downstream/modules/devtools/ref-devtools-components.adoc
@@ -1,26 +1,30 @@
-[id="devtools-components_context"]
+:_mod-docs-content-type: REFERENCE
+
+[id="devtools-components_{context}"]
 
 = {ToolsName} components
 
 [role="_abstract"]
-You can access most {ToolsName} from the Ansible {VSCode} extension, and others from the command line.
+You can operate some {ToolsName} from the {VSCode} UI when you have installed the Ansible extension,
+and the remainder from the command line.
+{VSCode} is a free open-source code editor available on Linux, macOS, and Windows.
-
-* Ansible {VSCode} extension:
-This is not packaged with the {PlatformNameShort} RPM package, but it is an integral part of the automation creation process.
-From the Ansible {VSCode} extension, you can use the {ToolsName} for the following tasks:
+Ansible {VSCode} extension::
+This is not packaged with the {PlatformNameShort} RPM package, but it is an integral part of the automation creation workflow.
+From the {VSCode} UI, you can use the {ToolsName} for the following tasks:
 +
 --
 ** Scaffold directories for a playbook project or a collection.
 ** Write playbooks with the help of syntax highlighting and auto-completion.
 ** Debug your playbooks with a linter.
-** Execute playbooks with Ansible Core with `ansible-playbook`.
+** Execute playbooks with Ansible Core using `ansible-playbook`.
 ** Execute playbooks in an execution environment with `ansible-navigator`.
 --
 +
 From the {VSCode} extension, you can also connect to {LightspeedFullName}.
 
-* Command-line {ToolsName}: you can perform the following tasks with {ToolsName} from the command line,
+Command-line {ToolsName}:: You can perform the following tasks with {ToolsName} from the command line,
 including the terminal in {VSCode}:
 ** Create an execution environment.
 ** Test your playbooks, roles, modules, plugins, and collections.
diff --git a/downstream/modules/devtools/ref-devtools-workflow.adoc b/downstream/modules/devtools/ref-devtools-workflow.adoc
index 5e1e51c94d..a61d2fc635 100644
--- a/downstream/modules/devtools/ref-devtools-workflow.adoc
+++ b/downstream/modules/devtools/ref-devtools-workflow.adoc
@@ -1,21 +1,30 @@
-[id="devtools-workflow_context"]
+:_mod-docs-content-type: REFERENCE
+
+[id="devtools-workflow_{context}"]
 
 = Workflow
 
 [role="_abstract"]
-In the build stage, you create a new playbook project within a virtual environment, using {VSCode}. The following is a typical workflow:
+== Create
+
+In the create stage, you create a new playbook project locally, using {VSCode}. The following is a typical workflow:
 
 . Install and run the Ansible extension in {VSCode}.
-. Create or open a workspace for your playbooks directory in {VSCode}.
-. Create and activate a Python virtual environment for your workspace and select it in {VSCode}.
 . Scaffold a playbook project from {VSCode}.
-. Add the collection names that your playbook uses to the requirements file.
-// . Use ansible-dev-environment to create a virtual environment for your project. This installs any dependencies from the requirements file.
-. Edit your playbook. Ansible-lint suggests corrections.
-. Add roles in the roles directory.
-. Create an execution environment that reflects the environment that {PlatformNameShort} uses.
-. Run your playbooks from the Ansible extension.
-// . As you develop your playbooks and roles, you can incorporate new dependencies into your virtual environment by re-running ansible-dev-environment.
-// . Use `molecule` to test your playbooks. Create one scenario for every playbook in your project.
+. Add playbook files to your project and edit them in {VSCode}.
+
+== Test
+
+. Debug your playbook with the help of `ansible-lint`.
+. Select or create an {ExecEnvNameSing} so that your local environment replicates the environment on {PlatformNameShort}.
+. Run your playbooks from {VSCode}, using `ansible-playbook` or using `ansible-navigator` with an {ExecEnvShort}.
+. Test your playbooks by running them on an {ExecEnvShort} that replicates your production environment.
+
+== Deploy
+
+. Push your playbook project to a source control repository.
+. Set up credentials on {PlatformNameShort} to pull from your source control repository and create a project for your playbook repository.
+. If you have created an {ExecEnvShort}, push it to {PrivateHubName}.
+. Create a job template on {PlatformNameShort} that runs a playbook from your project, and specify the {ExecEnvShort} that you want to use.
diff --git a/downstream/modules/devtools/ref-rhdh-about-plugins.adoc b/downstream/modules/devtools/ref-rhdh-about-plugins.adoc
new file mode 100644
index 0000000000..123313e0c3
--- /dev/null
+++ b/downstream/modules/devtools/ref-rhdh-about-plugins.adoc
@@ -0,0 +1,15 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="rhdh-about-plugins_{context}"]
+= {AAPRHDH}
+
+{AAPRHDH} deliver an Ansible-first {RHDH} user experience that simplifies automation for Ansible users of all skill levels.
+The Ansible plug-ins provide curated content and features to accelerate Ansible learner onboarding and streamline Ansible use case adoption across your organization.
+
+The Ansible plug-ins provide:
+
+* A customized home page and navigation tailored to Ansible users.
+* Curated Ansible learning paths to help users new to Ansible.
+* Software templates for creating Ansible playbook and collection projects that follow best practices.
+* Links to supported development environments and tools with opinionated configurations.
+
diff --git a/downstream/modules/devtools/ref-rhdh-about-rhdh.adoc b/downstream/modules/devtools/ref-rhdh-about-rhdh.adoc
new file mode 100644
index 0000000000..21bac74849
--- /dev/null
+++ b/downstream/modules/devtools/ref-rhdh-about-rhdh.adoc
@@ -0,0 +1,6 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="rhdh-about-rhdh_{context}"]
+= Red Hat Developer Hub
+
+{RHDH} (RHDH) serves as an open developer platform designed for building developer portals.
diff --git a/downstream/modules/devtools/ref-rhdh-architecture.adoc b/downstream/modules/devtools/ref-rhdh-architecture.adoc
new file mode 100644
index 0000000000..b8dc539e88
--- /dev/null
+++ b/downstream/modules/devtools/ref-rhdh-architecture.adoc
@@ -0,0 +1,7 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="rhdh-architecture_{context}"]
+= Architecture
+
+image::rhdh-ansible-plugin-architecture.png[Ansible plugin for Red Hat Developer Hub architecture]
+
diff --git a/downstream/modules/devtools/ref-rhdh-dashboard.adoc b/downstream/modules/devtools/ref-rhdh-dashboard.adoc
new file mode 100644
index 0000000000..a08bd4c941
--- /dev/null
+++ b/downstream/modules/devtools/ref-rhdh-dashboard.adoc
@@ -0,0 +1,32 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="rhdh-dashboard_{context}"]
+= Dashboard navigation
+
+When you log in to {RHDH} (RHDH), the main RHDH menu and dashboard are displayed.
+
+To view the dashboard for {AAPRHDH}, click *Ansible* in the {RHDH} navigation panel.
+
+image::rhdh-plugin-dashboard.png[Ansible plug-in dashboard]
+
+The plug-in dashboard illustrates the steps you need to take from learning about Ansible to deploying automation jobs from {PlatformNameShort}:
+
+* *Overview* displays the main dashboard page.
+* *Learn* provides links to resources curated by Red Hat that introduce you to Ansible and provide step-by-step examples to get you started.
+For more information, see
+xref:rhdh-learning_rhdh-using[Learning about Ansible].
+* *Discover existing collections* links to {PrivateHubName}, if configured in the plug-ins, or to {HubName} hosted on the Red Hat Hybrid Cloud Console.
+{HubNameStart} stores existing collections and execution environments that you can use in your projects. +For more information, see +xref:rhdh-discover-collections_rhdh-using[Discovering existing collections]. +* *Create* creates new projects in your configured Source Control Management platforms such as GitHub. +For more information, see +xref:rhdh-create_rhdh-using[Creating a project]. +* *Develop* links you to OpenShift Dev Spaces, if configured in the Ansible plug-ins installation. +OpenShift Dev Spaces provides on-demand, web-based Integrated Development Environments (IDEs), where you can develop automation content. +For more information, see +xref:rhdh-develop-projects_rhdh-using[Developing projects]. +* *Operate* connects you to {PlatformNameShort}, where you can create and run automation jobs that use the projects you have developed. +For more information, see +xref:rhdh-set-up-controller-project_rhdh-using[Setting up a controller project to run your playbook project]. + diff --git a/downstream/modules/devtools/ref-rhdh-discover-collections.adoc b/downstream/modules/devtools/ref-rhdh-discover-collections.adoc new file mode 100644 index 0000000000..07f9d797cc --- /dev/null +++ b/downstream/modules/devtools/ref-rhdh-discover-collections.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: REFERENCE + +[id="rhdh-discover-collections_{context}"] += Discovering existing collections + +From the *Overview* page in the Ansible plug-ins dashboard on {RHDH}, click *Discover Existing Collections*. + +The links in this pane provide access to the source of reusable automation content collections that you configured during plug-in installation. + +If you configured {PrivateHubName} when installing the plug-in, you can click *Go to Automation Hub* to view the collections and {ExecEnvShort}s that your enterprise has curated. + +If you did not configure a {PrivateHubName} URL when installing the plug-in, the *Discover existing collection* pane provides a link to Red Hat {HubName} on console.redhat.com. +You can explore certified and validated Ansible content collections on this site. + diff --git a/downstream/modules/devtools/ref-rhdh-full-aap-configmap-example.adoc b/downstream/modules/devtools/ref-rhdh-full-aap-configmap-example.adoc new file mode 100644 index 0000000000..233682570d --- /dev/null +++ b/downstream/modules/devtools/ref-rhdh-full-aap-configmap-example.adoc @@ -0,0 +1,41 @@ +:_mod-docs-content-type: REFERENCE + +[id="rhdh-full-aap-configmap-example_{context}"] += Full app-config-rhdh ConfigMap example for Ansible plug-ins entries + +---- +kind: ConfigMap +... +metadata: + name: app-config-rhdh + ... +data: + app-config-rhdh.yaml: |- + ansible: + creatorService: + baseUrl: 127.0.0.1 + port: '8000' + rhaap: + baseUrl: '' + token: '' + checkSSL: + showCaseLocation: + type: file + target: '/tmp/aap-showcases/' + # Optional integrations + devSpaces: + baseUrl: '' + automationHub: + baseUrl: '' + + ... + catalog: + locations: + - type: url + target: https://github.com/ansible/ansible-rhdh-templates/blob/main/all.yaml + rules: + - allow: [Template] + ... 
+ +---- + diff --git a/downstream/modules/devtools/ref-rhdh-full-helm-chart-ansible-plugins.adoc b/downstream/modules/devtools/ref-rhdh-full-helm-chart-ansible-plugins.adoc new file mode 100644 index 0000000000..ade9bd54a4 --- /dev/null +++ b/downstream/modules/devtools/ref-rhdh-full-helm-chart-ansible-plugins.adoc @@ -0,0 +1,65 @@ +:_mod-docs-content-type: REFERENCE + +[id="rhdh-full-helm-chart-ansible-plugins_{context}"] += Full Helm chart config example for Ansible plug-ins + +---- +global: + ... + dynamic: + ... + plugins: + - disabled: false + integrity: + package: 'http://plugin-registry:8080/ansible-plugin-backstage-rhaap-dynamic-x.y.z.tgz' + pluginConfig: + dynamicPlugins: + frontend: + ansible.plugin-backstage-rhaap: + appIcons: + - importName: AnsibleLogo + name: AnsibleLogo + dynamicRoutes: + - importName: AnsiblePage + menuItem: + icon: AnsibleLogo + text: Ansible + path: /ansible + - disabled: false + integrity: + package: >- + http://plugin-registry:8080/ansible-plugin-scaffolder-backend-module-backstage-rhaap-dynamic-x.y.z.tgz + pluginConfig: + dynamicPlugins: + backend: + ansible.plugin-scaffolder-backend-module-backstage-rhaap: null + - disabled: false + integrity: + package: >- + http://plugin-registry:8080/ansible-plugin-backstage-rhaap-backend-dynamic-x.y.z.tgz + pluginConfig: + dynamicPlugins: + backend: + ansible.plugin-backstage-rhaap-backend: null +... +upstream: + backstage: + ... + extraAppConfig: + - configMapRef: app-config-rhdh + filename: app-config-rhdh.yaml + extraContainers: + - command: + - adt + - server + image: >- + registry.redhat.io/ansible-automation-platform-25/ansible-dev-tools-rhel8:latest + imagePullPolicy: IfNotPresent + name: ansible-devtools-server + ports: + - containerPort: 8000 +... + + +---- + diff --git a/downstream/modules/devtools/ref-rhdh-learning.adoc b/downstream/modules/devtools/ref-rhdh-learning.adoc new file mode 100644 index 0000000000..293fcaf7cf --- /dev/null +++ b/downstream/modules/devtools/ref-rhdh-learning.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: REFERENCE + +[id="rhdh-learning_{context}"] += Learning about Ansible + +To learn more about getting started with automation, click *Learn* from the *Overview* page of the plug-in dashboard. +The *Learn* page provides the following options for learning: + +* *Learning Paths* lists a curated selection of learning tools hosted on developers.redhat.com that guide you through the foundations of working with Ansible, the Ansible {VSCode} extension, and using YAML. ++ +You can select other Ansible learning paths from the *Useful links* section. +* *Labs* are self-led labs that are designed to give you hands-on experience in writing Ansible content and using {ToolsName}. + diff --git a/downstream/modules/devtools/snippets b/downstream/modules/devtools/snippets new file mode 120000 index 0000000000..7bf6da9a51 --- /dev/null +++ b/downstream/modules/devtools/snippets @@ -0,0 +1 @@ +../../snippets \ No newline at end of file diff --git a/downstream/modules/eda/con-characterizing-your-workload.adoc b/downstream/modules/eda/con-characterizing-your-workload.adoc new file mode 100644 index 0000000000..cd064c636a --- /dev/null +++ b/downstream/modules/eda/con-characterizing-your-workload.adoc @@ -0,0 +1,12 @@ +[id="characterizing-your-workload"] + += Characterizing your workload + +[role="_abstract"] +In {EDAcontroller}, your workload includes the number of rulebook activations and events being received. Consider the following factors to characterize your {EDAcontroller} workload: + +. 
Number of simultaneous rulebook activations
+. Number of events received by {EDAcontroller}
+
+include::con-modifying-simultaneous-activations.adoc[leveloffset=+1]
+include::con-modifying-memory-limit.adoc[leveloffset=+1]
diff --git a/downstream/modules/eda/con-credential-types-injector-config.adoc b/downstream/modules/eda/con-credential-types-injector-config.adoc
new file mode 100644
index 0000000000..9029cb680d
--- /dev/null
+++ b/downstream/modules/eda/con-credential-types-injector-config.adoc
@@ -0,0 +1,22 @@
+:_mod-docs-content-type: CONCEPT
+[id="eda-cred-types-injector-config"]
+
+= Injector Configuration
+
+You can use Injector configuration to extract information from Input configuration fields and map it into injector types that can be sent to ansible-rulebook when running a rulebook activation. {EDAName} supports the following types of injectors:
+
+* Environment variables (`env`) - Used in source plugins for the underlying package or shared library.
+* Ansible extra variables (`extra_vars`) - Used for substitution in the rulebook conditions, actions, or source plugin parameters.
+* File-based templating (`file`) - Used to create file contents from the credential inputs, such as certificates and keys, which might be required by source plugins. File injectors provide a way to deliver these certificates and keys to ansible-rulebook at runtime without having to store them in decision environments. As a result, ansible-rulebook creates temporary files, and the file names can be accessed through `eda.filename` variables, which are automatically created for you after the files have been created (for instance, `{{eda.filename.my_cert}}`).
+
+[IMPORTANT]
+====
+When creating `extra_vars` in rulebook activations and credential type injectors, avoid using `eda` or `ansible` as key names because they conflict with internal usage and might cause failures in both rulebook activations and credential type creation.
+====
+
+Injectors enable you to adjust the fields so that they can be injected into a rulebook as one of these injector types, which cannot have duplicate keys at the top level. If you have two sources in a rulebook that both require parameters such as username and password, the injectors, along with the rulebook, help you adapt the arguments for each source.
+
+To view a sample injector and input, see the following GitHub gists, respectively:
+
+* link:https://gist.github.com/mkanoor/f080648917377da870bb002d4563294d[credential injectors]
+* link:https://gist.github.com/mkanoor/04c32b20addb7898af299a9254a46e61#file-gssapi-input-credential-type[gssapi input credential type]
\ No newline at end of file
diff --git a/downstream/modules/eda/con-credential-types-input-config.adoc b/downstream/modules/eda/con-credential-types-input-config.adoc
new file mode 100644
index 0000000000..a208494bdf
--- /dev/null
+++ b/downstream/modules/eda/con-credential-types-input-config.adoc
@@ -0,0 +1,52 @@
+:_mod-docs-content-type: CONCEPT
+[id="eda-cred-types-input-config"]
+
+= Input Configuration
+
+The Input configuration has two attributes:
+
+* `fields` - a collection of properties for a credential type.
+* `required` - a list of required fields.
+
+Fields can have multiple properties, depending on the credential type you select.
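+
+For example, the following minimal sketch of an input configuration (the field names and values are illustrative, not part of any shipped credential type) uses several of these properties:
+
+----
+# Illustrative field definitions; adapt the ids and labels to your credential type
+fields:
+  - id: host
+    type: string
+    label: Hostname
+    help_text: The server to connect to
+  - id: api_token
+    type: string
+    label: API token
+    secret: true
+  - id: my_certificate
+    type: string
+    label: Certificate
+    multiline: true
+required:
+  - host
+  - api_token
+----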
+
+.Input Configuration Field Properties
+[cols="a,a,a"]
+|===
+| Field | Description | Required
+
+h| id | Unique ID of the field; must be a string and stores the variable name | Yes
+
+h| type | The field type; can be string or boolean | No; the default is string
+
+h| label | Used by the UI when rendering the UI element | Yes
+
+h| secret | If true, the field value is encrypted | No; the default is false
+
+h| multiline | Set to true if the field contains multiline data, such as the contents of a file | No; the default is false
+
+h| help_text | The help text associated with this field | No
+
+|===
diff --git a/downstream/modules/eda/con-credentials-list-view.adoc b/downstream/modules/eda/con-credentials-list-view.adoc
index d56aec530b..e13277096d 100644
--- a/downstream/modules/eda/con-credentials-list-view.adoc
+++ b/downstream/modules/eda/con-credentials-list-view.adoc
@@ -1,12 +1,17 @@
+:_mod-docs-content-type: CONCEPT
 [id="eda-credentials-list-view"]
 
 = Credentials list view
 
-On the *Credentials* page, you can view the list of created credentials that you have created along with the *Type* of credential.
+When you log in to {PlatformNameShort} and select {MenuADCredentials}, the *Credentials* page displays a preloaded *Decision Environment Container Registry* credential. When you create your own credentials, they are added to this list view.
 .
-From the menu bar, you can search for credentials in the *Name* field.
+From the menu bar, you can search for credentials in the *Name* search field.
 
 You also have the following options in the menu bar:
 
-* Choose which columns are shown in the list view by clicking btn:[Manage columns].
+* Choose how fields are shown in the list view by clicking the btn:[Manage columns] icon. You have four options for arranging your fields:
+** *Column* - Shows the column in the table.
+** *Description* - Shows the column when the item is expanded as a full width description.
+** *Expanded* - Shows the column when the item is expanded as a detail.
+** *Hidden* - Hides the column.
 
 * Choose between a btn:[List view] or a btn:[Card view], by clicking the icons.
diff --git a/downstream/modules/eda/con-custom-credential-types.adoc b/downstream/modules/eda/con-custom-credential-types.adoc
new file mode 100644
index 0000000000..6f65adbdfe
--- /dev/null
+++ b/downstream/modules/eda/con-custom-credential-types.adoc
@@ -0,0 +1,22 @@
+:_mod-docs-content-type: CONCEPT
+[id="eda-custom-credential-types"]
+
+= Custom credential types
+
+As a system administrator, you can define a custom credential type that works in ways similar to existing credential types, using a YAML or JSON definition in a standard format.
+
+Each credential type displays its own unique configurations in the *Input Configuration* and the *Injector Configuration* fields, if applicable. Both YAML and JSON formats are supported in the configuration fields.
+
+Custom credentials support Ansible extra variables as a means of injecting their authentication information.
+
+You can attach one or more cloud, vault, and {PlatformName} credential types to a rulebook activation.
+
+[NOTE]
+====
+* When creating a new credential type, you must avoid collisions in the `extra_vars`.
+* Extra variable names must not start with *EDA_* because they are reserved.
+* You must have system administrator (superuser) permissions to create and edit a credential type and to view the *Injector configuration* field.
+====
+
+When you customize your own credential types, they are displayed on the *Credential Types* page along with a list of built-in credential types.
+
diff --git a/downstream/modules/eda/con-eda-author-event-filters.adoc b/downstream/modules/eda/con-eda-author-event-filters.adoc
new file mode 100644
index 0000000000..ec38ff555b
--- /dev/null
+++ b/downstream/modules/eda/con-eda-author-event-filters.adoc
@@ -0,0 +1,33 @@
+[id="eda-author-event-filters"]
+
+= Author event filters
+
+Event filters are functions in a Python module that perform transformations on the event data.
+They can remove, add, change, or move any data in the event data structure.
+Event filters take the event as the first argument; additional keyword arguments are provided by the filter's configuration in the rulebook.
+
+The basic structure is as follows:
+
+----
+# my_namespace.my_collection/extensions/eda/plugins/event_filter/my_filter.py
+def main(event: dict, arg1, arg2):
+    # Transform the event here, for example add, remove, or rename keys
+    return event
+----
+
+You can use this filter in a rulebook by adding it to the filters list in an event source:
+
+----
+sources:
+  - name: azure_service_bus
+    ansible.eda.azure_service_bus:
+      conn_str: "{{connection_str}}"
+      queue_name: "{{queue_name}}"
+    filters:
+      - my_namespace.my_collection.my_filter:
+          arg1: hello
+          arg2: world
+----
+
+.Additional resources
+See the event filter plugins in the link:https://github.com/ansible/event-driven-ansible/tree/main/extensions/eda/plugins/event_filter[ansible.eda collection] for more examples of how to author them.
diff --git a/downstream/modules/eda/con-eda-projects-list-view.adoc b/downstream/modules/eda/con-eda-projects-list-view.adoc
index 64887ae029..bb6c6ac36b 100644
--- a/downstream/modules/eda/con-eda-projects-list-view.adoc
+++ b/downstream/modules/eda/con-eda-projects-list-view.adoc
@@ -6,7 +6,7 @@ On the *Projects* page, you can view the projects that you have created along wi
 
 [NOTE]
 ====
-If a rulebook changes in source control you can re-sync a project by selecting the sync icon next to the project from the *Projects* list view.
+If a rulebook changes in source control, you can re-sync a project by selecting the sync icon next to the project from the *Projects* list view.
 The *Git hash* updates represent the latest commit on that repository.
 An activation must be restarted or recreated if you want to use the updated project.
 ====
diff --git a/downstream/modules/eda/con-eda-rulebook-activation-list-view.adoc b/downstream/modules/eda/con-eda-rulebook-activation-list-view.adoc
index c395e506e4..2d1384fdf3 100644
--- a/downstream/modules/eda/con-eda-rulebook-activation-list-view.adoc
+++ b/downstream/modules/eda/con-eda-rulebook-activation-list-view.adoc
@@ -2,12 +2,13 @@
 
 = Rulebook activation list view
 
-On the *Rulebook Activations* page, you can view the rulebook activations that you have created along with the *Activation status*, *Number of rules associated* with the rulebook, the *Fire count*, and *Restart count*.
+On the *Rulebook Activations* page, you can view the rulebook activations that you have created along with the *Status*, the *Number of rules* associated with the rulebook, the *Fire count*, and the *Restart count*.
 
-If the *Activation Status* is *Running*, it means that the rulebook activation is running in the background and executing the required actions according to the rules declared in the rulebook.
+If the *Status* is *Running*, it means that the rulebook activation is running in the background and executing the required actions according to the rules declared in the rulebook.
 
 You can view more details by selecting the activation from the *Rulebook Activations* list view.
 
-image::eda-rulebook-activations-list-view.png[Rulebook activation][width=25px]
+//[JMSelf] Remove this image for now
+//image::eda-rulebook-activations-list-view.png[Rulebook activation][width=25px]
 
 For all activations that have run, you can view the *Details* and *History* tabs to get more information about what happened.
diff --git a/downstream/modules/eda/con-event-streams.adoc b/downstream/modules/eda/con-event-streams.adoc
new file mode 100644
index 0000000000..20fa6f0e39
--- /dev/null
+++ b/downstream/modules/eda/con-event-streams.adoc
@@ -0,0 +1,39 @@
+
+[id="event-streams"]
+
+= Event streams
+
+[role="_abstract"]
+Event streams can send events from remote systems to {EDAcontroller}. In a typical setup, a server sends data to an event stream over the internet to an {EDAName} event stream receiver. When the data comes over the internet, the request must be authenticated. Depending on the webhook vendor or remote system, the authentication method can differ.
+
+{EDAcontroller} supports six different event stream types.
+
+.Event Stream Types
+[cols="a,a,a"]
+|===
+| Type | Description | Vendors
+
+h| HMAC | Hash-based Message Authentication Code (HMAC). Uses a shared secret between {EDAcontroller} and the vendor's webhook server. This guarantees message integrity. | GitHub
+
+h| Basic Authentication | Uses HTTP basic authentication. | Datadog, Dynatrace
+
+h| Token Authentication | Uses token authentication. Usually the HTTP header is *Authorization*, but some vendors, such as GitLab, use *X-Gitlab-Token*. | GitLab, ServiceNow
+
+h| OAuth2 | Uses Machine-to-Machine (M2M) mode with a grant type called *client credentials*. The token is opaque. | Dynatrace
+
+h| OAuth2 with JWT | Uses M2M mode with a grant type called *client credentials*. The token is a JSON Web Token (JWT). | Datadog
+
+h| ECDSA | Uses the Elliptic Curve Digital Signature Algorithm. | SendGrid, Twilio
+
+//[Jameria] Not currently supported; will leave commented out for now in the event that it is supported in the near future. h| Mutual TLS | Needs the vendor's CA certificate to be present in our servers at startup. This supports non-repudiation.
+// | PagerDuty
+|===
+
+{EDAcontroller} also supports four other specialized event streams that are based on the six basic event stream types:
+
+* GitLab Event Stream
+* GitHub Event Stream
+* ServiceNow Event Stream
+* Dynatrace Event Stream
+
+These specialized types limit the parameters you use by adding default values. For example, the GitHub Event Stream is a specialization of the HMAC Event Stream with many of the fields already populated. After the GitHub Event Stream credential has been saved, the recommended defaults for the GitHub Event Stream are displayed.
\ No newline at end of file
diff --git a/downstream/modules/eda/con-modifying-memory-limit.adoc b/downstream/modules/eda/con-modifying-memory-limit.adoc
new file mode 100644
index 0000000000..bcce7b419e
--- /dev/null
+++ b/downstream/modules/eda/con-modifying-memory-limit.adoc
@@ -0,0 +1,19 @@
+[id="modifying-memory-limit"]
+
+= Modifying the default memory limit for each rulebook activation
+
+[role="_abstract"]
+Memory usage is based on the number of events that {EDAcontroller} has to process.
+Each rulebook activation container has a 200 MB memory limit.
+For example, with 4 CPUs and 16 GB of RAM, one rulebook activation container with an assigned 200 MB memory limit cannot handle more than 150,000 events per minute.
+If the number of parallel running rulebook activations is higher, then the maximum number of events each rulebook activation can process is reduced.
+If there are too many incoming events at a very high rate, the container can run out of memory trying to process the events.
+This kills the container, and your rulebook activation fails with a status code of 137.
+
+To address this failure, you can increase the amount of memory allocated to rulebook activations so that they can process a high number of events at a high rate, by using one of the following procedures:
+
+* Modifying the default memory limit for each rulebook activation during installation
+* Modifying the default memory limit for each rulebook activation after installation
+
+include::proc-modifying-memory-during-install.adoc[leveloffset=+1]
+include::proc-modifying-memory-after-install.adoc[leveloffset=+1]
\ No newline at end of file
diff --git a/downstream/modules/eda/con-modifying-simultaneous-activations.adoc b/downstream/modules/eda/con-modifying-simultaneous-activations.adoc
new file mode 100644
index 0000000000..fe6b71e312
--- /dev/null
+++ b/downstream/modules/eda/con-modifying-simultaneous-activations.adoc
@@ -0,0 +1,22 @@
+[id="modifying-simultaneous-activations"]
+
+= Modifying the number of simultaneous rulebook activations
+
+[role="_abstract"]
+By default, {EDAcontroller} allows 12 rulebook activations per node. For example, with two worker or hybrid nodes, this results in a limit of 24 activations in total running simultaneously.
+If more than 24 rulebook activations are created, the expected behavior is that subsequent rulebook activations wait until there is an available rulebook activation worker.
+In this case, the rulebook activation status is displayed as *Pending* even if there is enough free memory and CPU on your {EDAcontroller} instance.
+To change this behavior, you must change the default maximum number of running rulebook activations.
+
+[NOTE]
+====
+* The value for `MAX_RUNNING_ACTIVATIONS` does not change when you modify the instance size, so you must adjust it manually.
+* If you are installing {EDAName} on {OCPShort}, the limit of 12 rulebook activations per node is a global value because there is no concept of worker nodes when installing {EDAName} on {OCPShort}. For more information, see link:{URLOperatorInstallation}/operator-install-operator_operator-platform-doc#modifying_the_number_of_simultaneous_rulebook_activations_during_or_after_event_driven_ansible_controller_installation[Modifying the number of simultaneous rulebook activations during or after {EDAcontroller} installation] in link:{LinkOperatorInstallation}.
+====
+
+include::proc-modifying-activations-during-install.adoc[leveloffset=+1]
+include::proc-modifying-activations-after-install.adoc[leveloffset=+1]
+
+.Additional Resources
+* For more information about rulebook activations, see link:https://access.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.4/html-single/event-driven_ansible_controller_user_guide/index#eda-rulebook-activations[Rulebook activations].
+* For more information about modifying simultaneous rulebook activations during or after installing {EDAName} on {OCPShort}, see the example in link:{URLOperatorInstallation}/appendix-operator-crs_appendix-operator-crs#eda_max_running_activations_yml[eda_max_running_activations_yml].
\ No newline at end of file
diff --git a/downstream/modules/eda/con-replacing-controller-tokens.adoc b/downstream/modules/eda/con-replacing-controller-tokens.adoc
new file mode 100644
index 0000000000..a97bbd5862
--- /dev/null
+++ b/downstream/modules/eda/con-replacing-controller-tokens.adoc
@@ -0,0 +1,6 @@
+[id="replacing-controller-tokens"]
+
+= Replacing controller tokens in {PlatformName} {PlatformVers}
+
+
+To use {EDAcontroller} in {PlatformName} {PlatformVers}, you must replace legacy controller tokens configured in your environment with {PlatformName} credentials because controller tokens have been deprecated.
\ No newline at end of file
diff --git a/downstream/modules/eda/con-system-level-monitoring.adoc b/downstream/modules/eda/con-system-level-monitoring.adoc
new file mode 100644
index 0000000000..38d3e7b718
--- /dev/null
+++ b/downstream/modules/eda/con-system-level-monitoring.adoc
@@ -0,0 +1,18 @@
+[id="system-level-monitoring"]
+
+= System level monitoring for {EDAcontroller}
+
+[role="_abstract"]
+After characterizing your workload to determine how many rulebook activations you are running in parallel and how many events you are receiving at any given point, you must consider monitoring your {EDAcontroller} host at the system level.
+Using system-level monitoring to review information about {EDAName}’s performance over time helps when diagnosing problems or when considering capacity for future growth.
+
+System-level monitoring includes the following information:
+
+* Disk I/O
+* RAM utilization
+* CPU utilization
+* Network traffic
+
+Higher CPU, RAM, or disk utilization can affect the overall performance of {EDAcontroller}.
+For example, a high utilization of any of these system-level resources indicates that either {EDAcontroller} is running too many rulebook activations, or some of the individual rulebook activations are using a high volume of resources.
+In this case, you must increase your system-level resources to support your workload.
diff --git a/downstream/modules/eda/proc-eda-activate-webhook.adoc b/downstream/modules/eda/proc-eda-activate-webhook.adoc
index 30c282a800..8150d6b9ff 100644
--- a/downstream/modules/eda/proc-eda-activate-webhook.adoc
+++ b/downstream/modules/eda/proc-eda-activate-webhook.adoc
@@ -6,7 +6,7 @@ In Openshift environments, you can allow webhooks to reach an activation-job-pod
 
 .Prerequisites
 
-* You have created a rulebook activation in the {EDAcontroller} Dashboard.
+* You have created a rulebook activation.
 
 [NOTE]
 ====
@@ -71,4 +71,4 @@ test-sync-bug-dynatrace.apps.aap-dt.ocp4.testing.ansible.com -d
 [NOTE]
 ====
 You do not need the port as it is specified on the Route (targetPort).
-====
\ No newline at end of file
+====
diff --git a/downstream/modules/eda/proc-eda-activation-keeps-restarting.adoc b/downstream/modules/eda/proc-eda-activation-keeps-restarting.adoc
new file mode 100644
index 0000000000..cd7b049a58
--- /dev/null
+++ b/downstream/modules/eda/proc-eda-activation-keeps-restarting.adoc
@@ -0,0 +1,23 @@
+[id="eda-activation-keeps-restarting"]
+
+= Activation keeps restarting
+
+Perform the following steps if your rulebook activation keeps restarting.
+
+.Procedure
+. Log in to {PlatformNameShort}.
+. From the navigation panel, select {MenuADRulebookActivations}.
+. From the *Rulebook Activations* page, select the activation in your list that keeps restarting. The Details page is displayed.
+. Click the *History* tab for more information and select the rulebook activation that keeps restarting. The Details tab is displayed and shows the output information.
+. Check the *Restart policy* field for your activation.
++
+There are three selections available: *On failure* (restarts a rulebook activation when the container process fails), *Always* (always restarts, regardless of success or failure, with no more than 5 restarts), or *Never* (never restarts when the container process ends).
++
+.. Confirm that your rulebook activation's *Restart policy* is set to *On failure*. If it is, the repeated restarts indicate that an issue is causing the activation to fail.
+.. To diagnose the problem, check the YAML code and the instance logs of the rulebook activation for errors.
+.. If you cannot find a solution with the restart policy values, proceed to the next steps related to the *Log level*.
+. Check your log level for your activation.
+.. If your default log level is *Error*, go back to the *Rulebook Activation* page and recreate your activation following the procedure in link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_decisions/eda-rulebook-activations#eda-set-up-rulebook-activation[Setting up a rulebook activation].
+.. Change the *Log level* to *Debug*.
+.. Run the activation again and navigate to the *History* tab from the activation details page.
+.. On the *History* page, click one of your recent activations and view the *Output*.
diff --git a/downstream/modules/eda/proc-eda-activation-stuck-pending.adoc b/downstream/modules/eda/proc-eda-activation-stuck-pending.adoc
new file mode 100644
index 0000000000..be0725f02b
--- /dev/null
+++ b/downstream/modules/eda/proc-eda-activation-stuck-pending.adoc
@@ -0,0 +1,25 @@
+[id="eda-activation-stuck-pending"]
+
+= Activation stuck in Pending state
+
+Perform the following steps if your rulebook activation is stuck in the *Pending* state.
+
+.Procedure
+
+. Confirm whether there are other running activations and whether you have reached the limits (for example, memory or CPU limits).
+.. If there are other activations running, terminate one or more of them, if possible.
+.. If not, check that the default worker, Redis, and activation worker are all running. If all systems are working as expected, check your eda-server internal logs in the worker, scheduler, API, and nginx containers and services to see if the problem can be determined.
++
+[NOTE]
+====
+These logs reveal the source of the issue, such as an exception thrown by the code, a runtime error with network issues, or an error with the rulebook code. If your internal logs do not provide information that leads to resolution, report the issue to Red Hat support.
+====
+
+.. If you need to make adjustments, see link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_decisions/eda-performance-tuning#modifying-simultaneous-activations[Modifying the number of simultaneous rulebook activations].
++
+[NOTE]
+====
+To adjust the maximum number of simultaneous activations for {OperatorPlatformNameShort} on {OCPShort} deployments, see link:{URLOperatorInstallation}/operator-install-operator_operator-platform-doc#modifying_the_number_of_simultaneous_rulebook_activations_during_or_after_event_driven_ansible_controller_installation[Modifying the number of simultaneous rulebook activations during or after {EDAcontroller} installation] in link:{LinkOperatorInstallation}.
+====
+
+
diff --git a/downstream/modules/eda/proc-eda-build-a-custom-decision-environment.adoc b/downstream/modules/eda/proc-eda-build-a-custom-decision-environment.adoc
index 828d56594f..1a2176da98 100644
--- a/downstream/modules/eda/proc-eda-build-a-custom-decision-environment.adoc
+++ b/downstream/modules/eda/proc-eda-build-a-custom-decision-environment.adoc
@@ -1,22 +1,31 @@
 [id="eda-build-a-custom-decision-environment"]
 
-= Building a custom decision environment for {EDAName} within {PlatformNameShort}
+= Building a custom decision environment for {EDAName}
 
-Use the following instructions if you need a custom decision environment to provide a custom maintained or third-party event source plugin that is not available in the default decision environment.
+Decision environments are {ExecEnvShort}s tailored to running Ansible rulebooks.
+
+Similar to {ExecEnvShort}s that run Ansible playbooks for {ControllerName}, decision environments are designed to run rulebooks for {EDAcontroller}.
+
+You can create a custom decision environment for {EDAName} that provides a custom-maintained or third-party event source plugin that is not available in the default decision environment.
 
 .Prerequisites
 
-* {PlatformNameShort} > = 2.4
+* {PlatformNameShort} >= 2.5
 * {EDAName}
 * {Builder} > = 3.0
 
 .Procedure
 
-* Add the `de-supported` decision environment. This image is built from a base image provided by Red Hat called `de-minimal`.
+* Use `de-minimal` as the base image with {Builder} to build your custom decision environments.
+The `de-minimal` base image is provided by Red Hat at link:https://catalog.redhat.com/software/containers/ansible-automation-platform-25/de-minimal-rhel9/650a5672a370728c710acaab[{PlatformNameShort} minimal decision environment].
+
 
-[NOTE]
+[IMPORTANT]
====
-Red Hat recommends using `de-minimal` as the base image with {Builder} to build your custom decision environments.
+* Use the correct {EDAcontroller} decision environment in {PlatformNameShort} to prevent rulebook activation failure.
+ +** If you want to connect {EDAcontroller} to {PlatformNameShort} 2.4, you must use `registry.redhat.io/ansible-automation-platform-24/de-minimal-rhel9:latest` +** If you want to connect {EDAcontroller} to {PlatformNameShort} {PlatformVers}, you must use `registry.redhat.io/ansible-automation-platform-25/de-minimal-rhel9:latest` ==== The following is an example of the {Builder} definition file that uses `de-minimal` as a base image to build a custom decision environment with the ansible.eda collection: @@ -25,7 +34,7 @@ version: 3 images: base_image: - name: 'registry.redhat.io/ansible-automation-platform-24/de-minimal-rhel8:latest' + name: 'registry.redhat.io/ansible-automation-platform-25/de-minimal-rhel9:latest' dependencies: galaxy: @@ -44,7 +53,7 @@ version: 3 images: base_image: - name: 'registry.redhat.io/ansible-automation-platform-24/de-minimal-rhel8:latest' + name: 'registry.redhat.io/ansible-automation-platform-25/de-minimal-rhel9:latest' dependencies: galaxy: diff --git a/downstream/modules/eda/proc-eda-cannot-connect-to-controller.adoc b/downstream/modules/eda/proc-eda-cannot-connect-to-controller.adoc new file mode 100644 index 0000000000..9376f32fbb --- /dev/null +++ b/downstream/modules/eda/proc-eda-cannot-connect-to-controller.adoc @@ -0,0 +1,12 @@ +[id="eda-cannot-connect-to-controller"] + += Cannot connect to the 2.5 {ControllerName} when running activations + +You might experience a failed connection to {ControllerName} when you run your activations. + +.Procedure +. To help resolve the issue, confirm that you have set up a {PlatformName} credential and have obtained the correct {ControllerName} URL. +.. If you have not set up a {PlatformName} credential, follow the procedures in link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_decisions/eda-set-up-rhaap-credential-type#eda-set-up-rhaap-credential[Setting up a {PlatformName} credential]. Ensure that this credential has the host set to the following URL format: https:///api/controller + +.. When you have completed this process, try setting up your rulebook activation again. + diff --git a/downstream/modules/eda/proc-eda-check-rule-audit-event-stream.adoc b/downstream/modules/eda/proc-eda-check-rule-audit-event-stream.adoc new file mode 100644 index 0000000000..d56071acaa --- /dev/null +++ b/downstream/modules/eda/proc-eda-check-rule-audit-event-stream.adoc @@ -0,0 +1,13 @@ +[id="eda-check-rule-audit-event-stream"] + += Check the Rule Audit for events on your new event stream + +When events have been sent and received by {EDAcontroller}, you can confirm that actions have been triggered by going to the Rule Audit page and viewing the event stream results. + +.Procedure +. Log in to {PlatformNameShort}. +. From the navigation panel, select {MenuADRuleAudit}. ++ +If your rulebook activation received the event data from the event stream type you selected, the Rule Audit page displays the results for *Status*, *Rulebook activation*, and the *Last fired date* fields. 
+
+//[JMSelf]Remove screen shot for now
+//image:eda-rule-audit-event-streams.png[Rule audit - Event stream]
diff --git a/downstream/modules/eda/proc-eda-config-remote-sys-to-events.adoc b/downstream/modules/eda/proc-eda-config-remote-sys-to-events.adoc
new file mode 100644
index 0000000000..d9820c9936
--- /dev/null
+++ b/downstream/modules/eda/proc-eda-config-remote-sys-to-events.adoc
@@ -0,0 +1,32 @@
+[id="eda-config-remote-sys-to-events"]
+
+= Configuring your remote system to send events
+
+After you have created your event stream, you must configure your remote system to send events to {EDAcontroller}. The method used for this configuration varies, depending on the vendor for the event stream credential type you select.
+
+.Prerequisites
+
+* The URL that was generated when you created your event stream
+* Secrets or passwords that you set up in your event stream credential
+
+.Procedure
+
+The following example demonstrates how to configure webhooks in a remote system, such as GitHub, to send events to {EDAcontroller}. Each vendor has unique methods for configuring your remote system to send events to {EDAcontroller}.
+
+. Log in to your GitHub repository.
+. Click *Your profile name → Your repositories*.
++
+[NOTE]
+====
+If you do not have a repository, click *New* to create a new one, select an owner, add a *Repository name*, and click *Create repository*.
+====
+
+. Navigate to *Settings* (toolbar).
+. In the *General* navigation pane, select *Webhooks*.
+. Click *Add webhook*.
+. In the *Payload URL* field, paste the URL you saved when you created your event stream.
+. Select *application/json* in the *Content type* list.
+. Enter your *Secret*.
+. Click *Add webhook*.
+
+After the webhook has been added, it attempts to send a test payload to ensure that there is connectivity between the two systems (GitHub and {EDAcontroller}). If it can successfully send the data, a green check mark is displayed next to the *Webhook URL* with the message *Last delivery was successful*.
\ No newline at end of file
diff --git a/downstream/modules/eda/proc-eda-copy-rulebook-activation.adoc b/downstream/modules/eda/proc-eda-copy-rulebook-activation.adoc
new file mode 100644
index 0000000000..4c7875429a
--- /dev/null
+++ b/downstream/modules/eda/proc-eda-copy-rulebook-activation.adoc
@@ -0,0 +1,33 @@
+[id="eda-copy-rulebook-activation"]
+
+= Duplicating a rulebook activation
+
+When setting up a new rulebook activation with field inputs that are similar to one of your existing rulebook activations, you can use the *Duplicate rulebook activation* feature instead of manually entering input into each field. While setting up rulebook activations can be a lengthy process, the ability to duplicate the required fields from an existing activation saves time and, in some cases, reduces the possibility of human error.
+
+.Procedure
+
+. On the Rulebook Activations page, click the *More Actions* icon *{MoreActionsIcon}* on the row of the activation you want to duplicate. The More Actions list is displayed with three options:
+** *Restart rulebook activation*
+** *Duplicate rulebook activation*
+** *Delete rulebook activation*
+. Select btn:[Duplicate rulebook activation].
++
+A message is displayed: " duplicated." Initially, the newly duplicated activation is displayed as disabled on the Rulebook Activations page with the same name as the original activation followed by a time stamp in 24-hour format (for example, @ 18:43:27).
++
+[IMPORTANT]
+====
+The original rulebook activation continues to run after you have duplicated it. If you try to enable the duplicated activation without editing the fields (including the *Name* field) to distinguish it from the original, a message is displayed reminding you that the rulebook activation was duplicated from an original, and enabling it might fail or result in duplicate jobs and other complications.
+====
+
+. Before you run the duplicated rulebook activation, edit the fields by completing the following:
+.. Next to the duplicated rulebook activation, click the *Edit* icon. This takes you to the Edit form.
+.. Edit the desired fields.
++
+[NOTE]
+====
+Ensure that you have given your newly duplicated activation a meaningful *Name* that distinguishes it from the original activation.
+====
+. Toggle the btn:[Enable rulebook activation] button to the on position.
+. After confirming that all of your edits are complete, click btn:[Save rulebook activation].
++
+This initiates the rulebook activation, and if it runs successfully, the status changes to *Running* or *Completed*.
diff --git a/downstream/modules/eda/proc-eda-create-event-stream-credential.adoc b/downstream/modules/eda/proc-eda-create-event-stream-credential.adoc
new file mode 100644
index 0000000000..837841d008
--- /dev/null
+++ b/downstream/modules/eda/proc-eda-create-event-stream-credential.adoc
@@ -0,0 +1,32 @@
+[id="eda-create-event-stream-credential"]
+
+= Creating an event stream credential
+
+You must create an event stream credential before you can use an event stream.
+
+.Prerequisites
+
+* Each event stream must have exactly one credential.
+
+.Procedure
+
+. Log in to the {PlatformNameShort} Dashboard.
+. From the navigation panel, select {MenuADCredentials}.
+. Click btn:[Create credential].
+. Insert the following:
++
+Name:: Insert the name.
+Description:: This field is optional.
+Organization:: Click the list to select an organization or select *Default*.
+Credential type:: Click the list to select your credential type.
++
+[NOTE]
+====
+When you select the credential type, the *Type Details* section is displayed with fields that are applicable to the credential type you selected.
+====
+
+Type Details:: Add the requested information for the credential type you selected. For example, if you selected the GitHub Event Stream credential type, you are required to add an HMAC secret (a symmetrical shared secret) between {EDAcontroller} and the remote server.
+
+. Click btn:[Create credential].
+
+The Details page is displayed. From there or the *Credentials* list view, you can edit or delete it.
diff --git a/downstream/modules/eda/proc-eda-create-event-stream.adoc b/downstream/modules/eda/proc-eda-create-event-stream.adoc
new file mode 100644
index 0000000000..580e5822e6
--- /dev/null
+++ b/downstream/modules/eda/proc-eda-create-event-stream.adoc
@@ -0,0 +1,48 @@
+[id="eda-create-event-stream"]
+
+= Creating an event stream
+
+You can create event streams that can be attached to a rulebook activation.
+
+.Prerequisites
+
+* If you plan to attach your event stream to a rulebook activation, ensure that your activation has a decision environment and project already set up.
+* If you plan to connect to {ControllerName} to run your rulebook activation, ensure that you have created a {PlatformName} credential type in addition to the decision environment and project. For more information, see xref:eda-set-up-rhaap-credential[Setting up a {PlatformName} credential].
+
+.Procedure
+
+. Log in to {PlatformNameShort}.
+. From the navigation panel, select {MenuADEventStreams}.
+. Click btn:[Create event stream].
+. Insert the following:
++
+Name:: Insert the name.
+Organization:: Click the list to select an organization or select *Default*.
+Event stream type:: Select the event stream type you prefer.
++
+[NOTE]
+====
+This list displays at least 10 default event stream types that can be used to authenticate the connection coming from your remote server.
+====
+Credentials:: Select a credential from the list, preferably the one you created for your event stream.
+Headers:: Enter HTTP header keys, separated by commas, that you want to include in the event payload. To include all headers, leave the field empty.
+
+Forward events to rulebook activation:: Use this option to enable or disable the capability of forwarding events to rulebook activations.
++
+[NOTE]
+====
+The event stream's event forwarding can be disabled for testing purposes while diagnosing connections and evaluating the incoming data. Disabling the *Forward events to rulebook activation* option allows you to test the event stream connection with the remote system, analyze the header and payload, and, if necessary, diagnose credential issues. This ensures that events are not forwarded to rulebook activations, which could inadvertently trigger rules and conditions while you are in test mode. Some enterprises might have policies to change secrets and passwords at a regular cadence. You can enable or disable this option at any time after the event stream is created.
+====
+
+. Click btn:[Create event stream].
+
+After you create your event stream, the following occurs:
+
+* The Details page is displayed. From there or the Event Streams list view, you can edit or delete it. Also, the Event Streams page shows all of the event streams you have created and the following columns for each event: *Events received*, *Last event received*, and *Event stream type*. As the first two columns receive external data through the event stream, they are continuously updated to let you know they are receiving events from remote systems.
+* If you disabled the event stream, the Details page is displayed with a warning message, *This event stream is disabled*.
+* Your new event stream generates a URL that is necessary when you configure the webhook on the remote system that sends events.
+
+[NOTE]
+====
+After an event stream is created, the associated credential cannot be deleted until the event stream it is attached to is deleted.
+====
diff --git a/downstream/modules/eda/proc-eda-delete-controller-token.adoc b/downstream/modules/eda/proc-eda-delete-controller-token.adoc
new file mode 100644
index 0000000000..eb9d9895b4
--- /dev/null
+++ b/downstream/modules/eda/proc-eda-delete-controller-token.adoc
@@ -0,0 +1,18 @@
+[id="eda-delete-controller-token"]
+
+= Deleting controller tokens
+
+Before you can set up {PlatformName} credentials, you must delete any existing controller tokens.
+
+.Prerequisites
+* You have deleted all rulebook activations that use controller tokens.
+
+.Procedure
+
+. Log in to the {PlatformNameShort} Dashboard.
+. From the top navigation panel, select your profile.
+. Click *User details*.
+. Select the *Tokens* tab.
+. Delete all of your previous controller tokens.
+
+After deleting the controller tokens and rulebook activations, proceed with xref:eda-set-up-rhaap-credential[Setting up a {PlatformName} credential].
diff --git a/downstream/modules/eda/proc-eda-delete-credential.adoc b/downstream/modules/eda/proc-eda-delete-credential.adoc index da26398250..3e55b84931 100644 --- a/downstream/modules/eda/proc-eda-delete-credential.adoc +++ b/downstream/modules/eda/proc-eda-delete-credential.adoc @@ -1,13 +1,22 @@ +:_mod-docs-content-type: PROCEDURE [id="eda-delete-credential"] = Deleting a credential +You can delete credentials if they are no longer needed for your organization. + .Procedure . Delete the credential by using one of these methods: * From the *Credentials* list view, click the btn:[More Actions] icon *{MoreActionsIcon}* next to the desired credential and click btn:[Delete credential]. * From the *Credentials* list view, select the name of the credential, click the btn:[More Actions] icon *{MoreActionsIcon}* next to btn:[Edit credential], and click btn:[Delete credential]. . In the pop-up window, select *Yes, I confirm that I want to delete this credential*. ++ +[NOTE] +==== +If your credential is still in use by other resources in your organization, a warning message is displayed letting you know that the credential cannot be deleted. Also, if your credential is being used in an event stream, you cannot delete it until the event stream is deleted or attached to a different credential. In general, avoid deleting a credential that is in use because it can lead to broken activations. +==== . Click btn:[Delete credential]. -You can delete multiple credentials at a time by selecting the checkbox next to each credential and clicking the btn:[More Actions] icon *{MoreActionsIcon}* in the menu bar and then clicking btn:[Delete selected credentials]. +.Results +You can delete multiple credentials at a time by selecting the checkbox next to each credential, clicking the btn:[More Actions] icon *{MoreActionsIcon}* in the menu bar, and then clicking btn:[Delete selected credentials]. diff --git a/downstream/modules/eda/proc-eda-delete-project.adoc b/downstream/modules/eda/proc-eda-delete-project.adoc index 3bf025b256..ef0a7ba5d5 100644 --- a/downstream/modules/eda/proc-eda-delete-project.adoc +++ b/downstream/modules/eda/proc-eda-delete-project.adoc @@ -2,8 +2,12 @@ = Deleting a project +If you need to delete a project, the {EDAcontroller} interface provides multiple options. + .Procedure -. From the *Projects* list view, select the btn:[More Actions] icon *{MoreActionsIcon}* next to the desired project. +. To delete a project, complete one of the following: +* From the *Projects* list view, select the checkbox next to the desired project, and click the btn:[More Actions] icon *{MoreActionsIcon}* from the page menu. +* From the *Projects* list view, click the btn:[More Actions] icon *{MoreActionsIcon}* next to the desired project. . Select btn:[Delete project]. -. In the popup window, select btn:[Yes, I confirm that I want to delete this project]. +. In the *Permanently delete projects* window, select btn:[Yes, I confirm that I want to delete this project]. . Select btn:[Delete project]. 
\ No newline at end of file diff --git a/downstream/modules/eda/proc-eda-delete-rulebook-activations-with-cont-tokens.adoc b/downstream/modules/eda/proc-eda-delete-rulebook-activations-with-cont-tokens.adoc new file mode 100644 index 0000000000..8f96a3a7e8 --- /dev/null +++ b/downstream/modules/eda/proc-eda-delete-rulebook-activations-with-cont-tokens.adoc @@ -0,0 +1,17 @@ +[id="eda-delete-rulebook-activations-with-cont-tokens"] + += Deleting rulebook activations with controller tokens + +To replace the controller tokens, you must delete the rulebook activations that were associated with them. + +.Procedure + +. Log in to the {PlatformNameShort} Dashboard. +. From the top navigation panel, select {MenuADRulebookActivations}. +. Select the rulebook activations that have controller tokens. +. Select the btn:[More Actions] icon *{MoreActionsIcon}* next to the *Rulebook Activation enabled/disabled* toggle. +. Select btn:[Delete rulebook activation]. +. In the window, select btn:[Yes, I confirm that I want to delete these X rulebook activations]. +. Select btn:[Delete rulebook activations]. + + diff --git a/downstream/modules/eda/proc-eda-delete-rulebook-activations.adoc b/downstream/modules/eda/proc-eda-delete-rulebook-activations.adoc index 0a510d8df0..da43e61516 100644 --- a/downstream/modules/eda/proc-eda-delete-rulebook-activations.adoc +++ b/downstream/modules/eda/proc-eda-delete-rulebook-activations.adoc @@ -2,7 +2,9 @@ = Deleting rulebook activations +.Procedure + . Select the btn:[More Actions] icon *{MoreActionsIcon}* next to the *Rulebook Activation enabled/disabled* toggle. . Select btn:[Delete rulebook activation]. -. In the popup window, select btn:[Yes, I confirm that I want to delete these X rulebook activations]. +. In the window, select btn:[Yes, I confirm that I want to delete these X rulebook activations]. . Select btn:[Delete rulebook activations]. diff --git a/downstream/modules/eda/proc-eda-duplicate-credential.adoc b/downstream/modules/eda/proc-eda-duplicate-credential.adoc new file mode 100644 index 0000000000..b6c221caf0 --- /dev/null +++ b/downstream/modules/eda/proc-eda-duplicate-credential.adoc @@ -0,0 +1,22 @@ +:_mod-docs-content-type: PROCEDURE +[id="eda-duplicate-credential"] + += Duplicating a credential + +When setting up a new credential with field inputs that are similar to your existing credentials, you can use the *Duplicate credential* feature in the Details tab to duplicate information instead of manually entering it. While setting up credentials can be a lengthy process, the ability to duplicate the required fields from an existing credential saves time and, in some cases, reduces the possibility of human error. + +.Procedure + +. On the Credentials list page, click the name of the credential that you want to duplicate. This takes you to the *Details* tab of the credential. +. Click btn:[Duplicate credential] in the top right of the Details tab. ++ +[NOTE] +==== +You can also click the btn:[Duplicate credential] icon next to the desired credential on the Credentials list page. +==== +A message is displayed confirming that your selected credential has been duplicated: " duplicated." +. Click the btn:[Back to credentials] tab to view the credential you just duplicated. ++ +The duplicated credential is displayed with the same name as the original credential followed by a time stamp in 24-hour format (for example, * @ 17:26:30*). +. Edit the details you prefer for your duplicated credential. +. Click btn:[Save credential]. 
\ No newline at end of file
diff --git a/downstream/modules/eda/proc-eda-edit-credential.adoc b/downstream/modules/eda/proc-eda-edit-credential.adoc
index aadb801f03..c6fb83e64e 100644
--- a/downstream/modules/eda/proc-eda-edit-credential.adoc
+++ b/downstream/modules/eda/proc-eda-edit-credential.adoc
@@ -1,7 +1,10 @@
+:_mod-docs-content-type: PROCEDURE
 [id="eda-edit-credential"]
 
 = Editing a credential
 
+You can edit existing credentials to ensure the appropriate level of access for your organization.
+
 .Procedure
 
 . Edit the credential by using one of these methods:
diff --git a/downstream/modules/eda/proc-eda-edit-rulebook-activation.adoc b/downstream/modules/eda/proc-eda-edit-rulebook-activation.adoc
new file mode 100644
index 0000000000..2900f76ba7
--- /dev/null
+++ b/downstream/modules/eda/proc-eda-edit-rulebook-activation.adoc
@@ -0,0 +1,27 @@
+[id="eda-edit-rulebook-activation"]
+
+= Editing a rulebook activation
+
+You can edit a rulebook activation after you have created or run it, either to correct field inputs (such as the log level, the restart policy, or whether auditing is on or off) or to help mitigate issues caused by failures.
+
+.Procedure
+
+. On the Rulebook Activations page, next to the activation you want to edit, toggle the btn:[Rulebook Activation enabled] button to the off position first to disable the activation.
++
+The *Disable rulebook activations* message is displayed asking you to confirm that you want to disable the activation.
+. Select the *Yes, I confirm that I want to disable these <1> rulebook activations* checkbox and click btn:[Disable rulebook activations].
+. Next to the rulebook activation, click the *Edit* icon. This takes you to the Edit form.
++
+[NOTE]
+====
+You can also access the *Edit* feature by clicking the rulebook activation on the Rulebook Activations page, toggling the btn:[Rulebook activation enabled] button to the off position, confirming that you want to disable the activation, and clicking the btn:[Edit rulebook activation] button on the top right of the page to access the Edit form.
+====
+. Edit the desired fields.
++
+[NOTE]
+====
+If you prefer to run your activation immediately, you can toggle the btn:[Rulebook activation enabled] button to the on position, and then save your changes.
+====
+. Click btn:[Save rulebook activation].
++
+This takes you back to the Rulebook Activations page.
diff --git a/downstream/modules/eda/proc-eda-editing-a-project.adoc b/downstream/modules/eda/proc-eda-editing-a-project.adoc
index c09adc25f8..79edf34ba2 100644
--- a/downstream/modules/eda/proc-eda-editing-a-project.adoc
+++ b/downstream/modules/eda/proc-eda-editing-a-project.adoc
@@ -4,8 +4,7 @@
 
 .Procedure
 
-. From the *Projects* list view, select the btn:[More Actions] icon *{MoreActionsIcon}* next to the desired project.
-. Select btn:[Edit project].
+. From the *Projects* list view, select the btn:[More Actions] icon *{MoreActionsIcon}* next to the desired project. The Edit page is displayed.
 . Enter the required changes and select btn:[Save project].
-
-image::eda-edit-project.png[Edit project]
+//[J. Self]replace the following image, if possible
+//::eda-edit-project.png[Edit project]
\ No newline at end of file
diff --git a/downstream/modules/eda/proc-eda-enable-rulebook-activations.adoc b/downstream/modules/eda/proc-eda-enable-rulebook-activations.adoc
index 9657ef0aa3..7bce0494f4 100644
--- a/downstream/modules/eda/proc-eda-enable-rulebook-activations.adoc
+++ b/downstream/modules/eda/proc-eda-enable-rulebook-activations.adoc
@@ -2,6 +2,8 @@
 
 = Enabling and disabling rulebook activations
 
+.Procedure
+
 . Select the switch on the row level to enable or disable your chosen rulebook.
-. In the popup window, select btn:[Yes, I confirm that I want to enable/disable these X rulebook activations].
+. In the window, select btn:[Yes, I confirm that I want to enable/disable these X rulebook activations].
 . Select btn:[Enable/Disable rulebook activation].
diff --git a/downstream/modules/eda/proc-eda-event-streams-not-sending-events.adoc b/downstream/modules/eda/proc-eda-event-streams-not-sending-events.adoc
new file mode 100644
index 0000000000..9767e7c22c
--- /dev/null
+++ b/downstream/modules/eda/proc-eda-event-streams-not-sending-events.adoc
@@ -0,0 +1,43 @@
+[id="eda-event-streams-not-sending-events"]
+
+= Event streams not sending events to activation
+
+If you are using event streams to send events to your rulebook activations, occasionally those events might not be successfully routed to your rulebook activation.
+
+.Procedure
+* Try the following options to resolve this issue.
+.. Ensure that each of your event streams in {EDAcontroller} is _not_ in *Test* mode. While in *Test* mode, activations do not receive the events.
+.. Verify that the origin service is sending the request properly.
+.. Check that the network connection to your {Gateway} instance is stable. If you have set up event streams, this is the entry point of the event stream request from the sender.
+.. Verify that the proxy in the {Gateway} is running.
+.. Confirm that the event stream worker is up and running, and able to process the request.
+.. Verify that your credential is correctly set up in the event stream.
+.. Confirm that the request complies with the authentication mechanism determined by the configured credential (for example, basic authentication must contain a header with the credentials, and HMAC must contain a signature of the content in a header).
++
+[NOTE]
+====
+The credentials might have been changed in {EDAcontroller}, but not updated in the origin service.
+====
+
+.. Verify that the rulebook that is running in the activation reacts to these events, that is, that you defined the event source _and_ added rules that consume the incoming events. Otherwise, the event reaches the activation, but nothing is triggered. For a minimal example, see the sketch after this procedure.
+.. If you are using self-signed certificates, you might want to disable certificate validation when sending webhooks from vendors. Most of the vendors have an option to disable certificate validation for testing or non-production environments.
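+
+The following is a minimal rulebook sketch for the verification step above (the webhook source, condition, and action are illustrative examples, not a prescribed configuration); its ruleset defines an event source and a rule that consumes the incoming events:
+
+----
+# Illustrative rulebook: the webhook source and condition are examples only
+- name: Verify incoming events are consumed
+  hosts: all
+  sources:
+    - ansible.eda.webhook:
+        host: 0.0.0.0
+        port: 5000
+  rules:
+    - name: React to any event with a payload
+      condition: event.payload is defined
+      action:
+        debug:
+          msg: "Received an event, so the source and rule are wired correctly"
+----
+
+If an activation running a rulebook such as this one receives events but your own rulebook does not trigger, the problem is likely in your rulebook's conditions rather than in the event stream.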
diff --git a/downstream/modules/eda/proc-eda-replace-sources-with-event-streams.adoc b/downstream/modules/eda/proc-eda-replace-sources-with-event-streams.adoc
new file mode 100644
index 0000000000..70db42435c
--- /dev/null
+++ b/downstream/modules/eda/proc-eda-replace-sources-with-event-streams.adoc
@@ -0,0 +1,116 @@
+[id="eda-replace-sources-with-event-streams"]
+
+= Replacing sources and attaching event streams to activations
+
+When you create rulebook activations, you can use event streams to swap out source mappings in rulebook activations and simplify routing from external sources to {EDAcontroller}.
+
+There are several key points to keep in mind regarding source mapping:
+
+. An event stream can only be used once in a rulebook source swap. If you have multiple sources in the rulebook, you can only replace each source once.
+. The source mapping happens only in the current rulebook activation. You must repeat this process for any other activations using the same rulebook.
+. The source mapping is valid only if the rulebook is not modified. If the rulebook is modified during the source mapping process, the source mapping fails and must be repeated.
+. If the rulebook is modified after the source mapping has been created and a *Restart* happens, the rulebook activation fails.
+
+
+.Procedure
+
+. Log in to {PlatformNameShort}.
+. From the navigation panel, select {MenuADRulebookActivations}.
+. Click btn:[Create rulebook activation].
+. Insert the following:
++
+Name:: Insert the name.
+Description:: This field is optional.
+Organization:: Enter your organization name or select *Default* from the list.
+Project:: Projects are a logical collection of rulebooks. This field is optional.
++
+[NOTE]
+====
+Although this field is optional, selecting a project helps refine your list of rulebook choices.
+====
+
+Rulebook:: Rulebooks are shown according to the project selected. Select a rulebook.
++
+[NOTE]
+====
+After you have selected a rulebook, the Event streams field is enabled. You can click the gear icon to display the Event streams mapping form.
+====
+
+Event streams:: All the event streams available and set up to forward events to rulebook activations are displayed. If you have not created any event streams, this field remains disabled.
++
+Click the gear icon to display the Event streams mapping UI.
++
+image:eda-latest-event-streams-mapping.png[Event streams mapping UI]
++
+Complete the following fields:
++
+Rulebook source::: A rulebook can contain multiple sources across multiple rulesets. You can map the same rulebook in multiple activations to multiple event streams. While managing event streams, unnamed sources are assigned temporary names (__SOURCE {n}) for identification purposes.
++
+Select __SOURCE_1 from the list.
++
+Event stream::: Select your event stream name from the list.
++
+Click btn:[Save].
++
+Event streams can replace matching sources in a rulebook, and are server-side webhooks that enable you to connect various event sources to your rulebook activations. Source types that can be replaced with the event stream's source of type ansible.eda.pg_listener include ansible.eda.webhook and other compatible webhook source plugins. Replacing selected sources affects this activation only, and modifies the rulebook's source type, source name, and arguments. Filters, rules, conditions, and actions are all unaffected.
++
+You can select which source you want to replace with a single event stream. If there are multiple sources in your rulebook, you can choose to replace each one of them with event streams, but you are not required to replace each one. The following image displays which sources can be replaced.
++
+image:eda-event-streams-swapping-sources.png[Event streams replacement sources]
++
+The items in pink demonstrate the sources that can be replaced: source type, source name, and arguments. The remaining items (filters, rules, and actions) are not replaced.
++
+Credential:: Select zero or more credentials for this rulebook activation. This field is optional.
++
+[NOTE]
+====
+The credentials that are displayed in this field are customized based on your rulebook activation and only include the following credential types: Vault, {PlatformName}, or any custom credential types that you have created. For more information about credentials, see link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/using_automation_decisions/index#eda-credentials[Credentials].
+====
++
+Decision environment:: A decision environment is a container image used to run Ansible rulebooks.
++
+[NOTE]
+====
+In {EDAcontroller}, you cannot customize the pull policy of the decision environment. By default, it follows the behavior of the always policy. Every time an activation is started, the system tries to pull the most recent version of the image.
+====
+Restart policy:: This is the policy that determines how an activation should restart after the container process running the source plugin ends.
+*** Policies:
+... *Always*: This restarts the rulebook activation immediately, regardless of whether it ends successfully or not, and occurs no more than 5 times.
+... *Never*: This never restarts a rulebook activation when the container process ends.
+... *On failure*: This restarts the rulebook activation after 60 seconds by default, only when the container process fails, and occurs no more than 5 times.
+Log level:: This field defines the severity and type of content in your logged events.
+*** Levels:
+... *Error*: Logs that contain error messages that are displayed in the *History* tab of an activation.
+... *Info*: Logs that contain useful information about rulebook activations, such as a success or failure, triggered action names and their related action events, and errors.
+... *Debug*: Logs that contain information that is only useful during the debug phase and might be of little value during production.
+This log level includes both error and info level data.
+Service name:: This defines a service name for Kubernetes to configure inbound connections if the activation exposes a port. This field is optional.
+Rulebook activation enabled?:: This automatically enables the rulebook activation to run.
+Variables:: The variables for the rulebook are in a JSON or YAML format.
+The content would be equivalent to the file passed through the `--vars` flag of the `ansible-rulebook` command.
+Options:: Check the *Skip audit events* option if you do not want to see your events in the Rule Audit.
+. Click btn:[Create rulebook activation].
++
+After you create your rulebook activation, the *Details* page is displayed.
++
+You can navigate to the Event streams page to confirm your events have been received.
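+
+For reference, the following sketch (the source name, condition, and job template are illustrative assumptions, not a required setup) shows a rulebook with a single named webhook source. Because the source is named, it appears under that name rather than as __SOURCE_1 in the *Rulebook source* list, and mapping it to an event stream swaps only the source type, name, and arguments; the rule, its condition, and its action are unaffected:
+
+----
+# Illustrative rulebook: source name, condition, and job template are examples
+- name: Respond to alerts
+  hosts: all
+  sources:
+    - name: my_alert_source
+      ansible.eda.webhook:
+        host: 0.0.0.0
+        port: 5000
+  rules:
+    - name: Restart the service on alert
+      condition: event.payload.alert == "down"
+      action:
+        run_job_template:
+          name: Restart service
+          organization: Default
+----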
\ No newline at end of file diff --git a/downstream/modules/eda/proc-eda-resend-webhook-data-event-streams.adoc b/downstream/modules/eda/proc-eda-resend-webhook-data-event-streams.adoc new file mode 100644 index 0000000000..f92c7dd90d --- /dev/null +++ b/downstream/modules/eda/proc-eda-resend-webhook-data-event-streams.adoc @@ -0,0 +1,13 @@ +[id="eda-resend-webhook-data-event-streams"] + += Resending webhook data from your event stream type + +After you have replaced your sources with the event stream you created, you can resend data from the event stream to ensure that it is attached to your rulebook activation. In the example shared earlier, the GitHub event stream was used. The following example demonstrates how to resend webhook data if you were using a GitHub event stream. + +.Procedure +. Go back to the *GitHub Webhook / Manage webhook* page. +. Click the *Recent Deliveries* tab. +. Click the btn:[ellipsis]. +. Click btn:[Redeliver]. A *Redeliver payload?* window is displayed with a delivery message. +. Click *Yes, redeliver this payload*. +. Return to {PlatformNameShort} to check your rule audit. \ No newline at end of file diff --git a/downstream/modules/eda/proc-eda-restart-rulebook-activations.adoc b/downstream/modules/eda/proc-eda-restart-rulebook-activations.adoc index 0ae85f2aab..7347a30c71 100644 --- a/downstream/modules/eda/proc-eda-restart-rulebook-activations.adoc +++ b/downstream/modules/eda/proc-eda-restart-rulebook-activations.adoc @@ -7,7 +7,9 @@ You can only restart a rulebook activation if it is currently enabled and the restart policy was set to *Always* when it was created. ==== +.Procedure + . Select the btn:[More Actions] icon *{MoreActionsIcon}* next to *Rulebook Activation enabled/disabled* toggle. . Select btn:[Restart rulebook activation]. -. In the popup window, select btn:[Yes, I confirm that I want to restart these X rulebook activations]. +. In the window, select btn:[Yes, I confirm that I want to restart these X rulebook activations]. . Select btn:[Restart rulebook activations]. diff --git a/downstream/modules/eda/proc-eda-set-up-credential-types.adoc b/downstream/modules/eda/proc-eda-set-up-credential-types.adoc new file mode 100644 index 0000000000..a5bd0af425 --- /dev/null +++ b/downstream/modules/eda/proc-eda-set-up-credential-types.adoc @@ -0,0 +1,100 @@ +:_mod-docs-content-type: PROCEDURE +[id="eda-set-up-new-credential-types"] + += Creating a new credential type + +You can create a credential type to use with a source plugin that you select based on the supported default credential types. You can make your credential type available to a team or individuals. + +.Procedure + +. Log in to the {PlatformNameShort} Dashboard. +. From the navigation panel, select {MenuADCredentialType}. +. Click btn:[Create credential type]. +. Insert the following: ++ +Name:: Insert the name. +Description:: This field is optional. +. In the *Input Configuration* field, specify an input schema that defines a set of ordered fields for that type. The format can be in YAML or JSON: ++ +*YAML* ++ +[literal, options="nowrap" subs="+attributes"] +---- +fields: + - type: string + id: username + label: Username + - type: string + id: password + label: Password + secret: true +required: + - username + - password +---- ++ +View more YAML examples at the link:https://yaml.org/spec/1.2.2/[YAML page].
++ +*JSON* ++ +[literal, options="nowrap" subs="+attributes"] +---- +{ +"fields": [ + { + "type": "string", + "id": "username", + "label": "Username" + }, + { + "secret": true, + "type": "string", + "id": "password", + "label": "Password" + } + ], + "required": ["username", "password"] +} +---- ++ +View more JSON examples at link:https://www.json.org/json-en.html[The JSON website]. + +. In the *Injector Configuration* field, enter environment variables or extra variables that specify the values a credential type can inject. +The format can be in YAML or JSON (see examples in the previous step). ++ +The following configuration in JSON format shows how each field is used: ++ +[literal, options="nowrap" subs="+attributes"] +---- + +{ + "extra_vars": { + "some_extra_var": "{{ username }}:{{ password }}" + } +} +---- + +. Click btn:[Create credential type]. ++ +Your newly created credential type is displayed in the list of credential types. ++ +//[JMS] Hide images for now +//image:credential-types-new-listed.png[New credential type] + +. Click the btn:[Edit credential type] image:leftpencil.png[Edit,15,15] icon to modify the credential type options. + +.Verification + +* Verify that the newly created credential type can be selected from the *Credential Type* list when creating a new credential. +//[JMS] Hide images for now; outdated //+ //image:credential-types-new-listed-verify.png[Verify new credential type] + +.Next steps +* On the *Edit* page, you can modify the details or delete the credential. +* If the *Delete* option is disabled, this means that the credential type is being used by a credential, and you must remove the credential type from all the credentials that use it before you can delete it. + +.Additional resources + +For information about how to create a new credential, see xref:eda-set-up-credential[Setting up credentials]. diff --git a/downstream/modules/eda/proc-eda-set-up-credential.adoc b/downstream/modules/eda/proc-eda-set-up-credential.adoc index 7cdb5d0a93..d6288fd043 100644 --- a/downstream/modules/eda/proc-eda-set-up-credential.adoc +++ b/downstream/modules/eda/proc-eda-set-up-credential.adoc @@ -1,37 +1,37 @@ +:_mod-docs-content-type: PROCEDURE [id="eda-set-up-credential"] = Setting up credentials -Create a credential to use with a private repository (GitHub or GitLab) or a private container registry. +You can create a credential to use with a source plugin or a private container registry that you select. You can make your credential available to a team or individuals. -[IMPORTANT] -==== -If you are using a GitHub or GitLab repository, use the `basic auth` method. -Both SCM servers are officially supported. -You can use any SCM provider that supports `basic auth`. -==== +//[IMPORTANT] +//==== +//If you are using a GitHub or GitLab repository, use the `basic auth` method. +//Both SCM servers are officially supported. +//You can use any SCM provider that supports `basic auth`. +//==== .Procedure // ddacosta: I'm not sure whether there will be an EDA specific dashboard in the gateway. Step 1 might need to change to something like "Log in to AAP". // Also, Credentials will be centrally defined at the platform level for 2.5. Steps here should be verified/rewritten as appropriate and possibly relocated to Authentication docs -. Log in to the {EDAcontroller} Dashboard. -. From the navigation panel, select {MenuAMCredentials}. +. Log in to the {PlatformNameShort} Dashboard. +. From the navigation panel, select {MenuADCredentials}. . Click btn:[Create credential]. .
Insert the following: + Name:: Insert the name. Description:: This field is optional. -Credential type:: The options available are a GitHub personal access token, a GitLab personal access token, or a container registry. -Username:: Insert the username. -Token:: Insert a token that allows you to authenticate to your destination. +Organization:: Click the list to select an organization or select *Default*. +Credential type:: Click the list to select your credential type. + [NOTE] ==== -If you are using a container registry, the token field can be a token or a password, depending on the registry provider. -If you are using the {PlatformNameShort} hub registry, insert the password for that in the token field. -==== -+ -. Click btn:[Create credential]. +When you select the credential type, the *Type Details* section is displayed with fields that are applicable to the credential type you chose. +==== + +. Complete the fields that are applicable to the credential type you selected. +. Click btn:[Create credential]. -After saving the credential, the credentials details page is displayed. -From there or the *Credentials* list view, you can edit or delete it. +.Next steps +After saving the credential, the credential's details page is displayed. From there or the *Credentials* list view, you can edit or delete it. \ No newline at end of file diff --git a/downstream/modules/eda/proc-eda-set-up-new-decision-environment.adoc b/downstream/modules/eda/proc-eda-set-up-new-decision-environment.adoc index 2de25c071e..14ae014b49 100644 --- a/downstream/modules/eda/proc-eda-set-up-new-decision-environment.adoc +++ b/downstream/modules/eda/proc-eda-set-up-new-decision-environment.adoc @@ -1,29 +1,30 @@ [id="eda-set-up-new-decision-environment"] = Setting up a new decision environment -// [ddacosta] I don't think there will be an EDA specific dashboard in the gateway. This might need to be changed to reflect the changes for 2.5. -The following steps describe how to import a decision environment into your {EDAcontroller} Dashboard. + +You can import a default or custom decision environment into your {EDAcontroller}. .Prerequisites -* You are logged in to the {EDAcontroller} Dashboard as a Content Consumer. * You have set up a credential, if necessary. For more information, see the xref:eda-set-up-credential[Setting up credentials] section. -* You have pushed a decision environment image to an image repository or you chose to use the image `de-supported` provided at link:http://registry.redhat.io/[registry.redhat.io]. +* You have pushed a decision environment image to an image repository or you chose to use the `de-minimal` image located in link:http://registry.redhat.io/[registry.redhat.io]. .Procedure -// ddacosta I'm not sure whether there will be an EDA specific dashboard in the gateway. Step 1 might need to change to something like "Log in to AAP". -. Navigate to the {EDAcontroller} Dashboard. -. From the navigation panel, select {MenuADDecisionEnvironments}. + +. Log in to {PlatformNameShort}. +. Navigate to {MenuADDecisionEnvironments}. +. Click btn:[Create decision environment]. . Insert the following: + Name:: Insert the name. Description:: This field is optional. +Organization:: Select an organization to associate with the decision environment. Image:: This is the full image location, including the container registry, image name, and version tag. -Credential:: This field is optional. This is the token needed to utilize the decision environment image.
+Credential:: This field is optional. This is the credential needed to use the decision environment image. . Select btn:[Create decision environment]. -Your decision environment is now created and can be managed on the *Decision Environments* screen. +Your decision environment is now created and can be managed on the *Decision Environments* page. After saving the new decision environment, the decision environment's details page is displayed. From there or the *Decision Environments* list view, you can edit or delete it. diff --git a/downstream/modules/eda/proc-eda-set-up-new-project.adoc b/downstream/modules/eda/proc-eda-set-up-new-project.adoc index a933cfec2c..84ccb95c6b 100644 --- a/downstream/modules/eda/proc-eda-set-up-new-project.adoc +++ b/downstream/modules/eda/proc-eda-set-up-new-project.adoc @@ -2,32 +2,46 @@ = Setting up a new project +You can set up projects to manage and store your rulebooks in {EDAcontroller}. + .Prerequisites // [ddacosta] I'm not sure whether there will be an EDA specific dashboard in the gateway. Step 1 might need to change to something like "Log in to AAP". -* You are logged in to the {EDAcontroller} Dashboard as a Content Consumer. +* You are logged in to the {PlatformNameShort} Dashboard as a Content Consumer. * You have set up a credential, if necessary. For more information, see the xref:eda-set-up-credential[Setting up credentials] section. * You have an existing repository containing rulebooks that are integrated with playbooks contained in a repository to be used by {ControllerName}. .Procedure // [ddacosta] I'm not sure whether there will be an EDA specific dashboard in the gateway. Step 1 might need to change to something like "Log in to AAP". -. Log in to the {EDAcontroller} Dashboard. -. From the navigation panel, select *{MenuADProjects}*. +. Log in to the {PlatformNameShort} Dashboard. +. Navigate to *{MenuADProjects}*. +. Click btn:[Create project]. . Insert the following: + Name:: Enter project name. Description:: This field is optional. -SCM type:: Git is the only SCM type available for use. -SCM URL:: HTTP[S] protocol address of a repository, such as GitHub or GitLab. +Source control type:: Git is the only source control type available for use. This field is optional. +Source control URL:: Enter the Git, SSH, or HTTP[S] protocol address of a repository, such as GitHub or GitLab. This field is not editable. ++ +[NOTE] +==== +This field accepts an SSH private key or private key passphrase. To enable the use of these private keys, your project URL must begin with `git@`. +==== +Proxy:: This is used to access HTTP or HTTPS servers. This field is optional. +Source control branch/tag/commit:: This is the branch to check out. In addition to branches, you can input tags, commit hashes, and arbitrary refs. Some commit hashes and refs might not be available unless you also provide a custom refspec. This field is optional. +Source control refspec:: A refspec to fetch (passed to the Ansible git module). This parameter allows access to references through the branch field that are not otherwise available. This field is optional. +For more information, see link:https://docs.ansible.com/ansible/latest/collections/ansible/builtin/git_module.html#examples[Examples]. +Source control credential:: You must have this credential to use the source control URL. This field is optional. +Content signature validation credential:: Enable content signing to verify that the content has remained secure when a project is synced.
If the content has been tampered with, the job does not run. This field is optional. +Options:: The Verify SSL option is enabled by default. Enabling this option verifies the SSL certificate when the project is imported over HTTPS. + [NOTE] ==== -You cannot edit the SCM URL after you create the project. +You can disable this option if you have a local repository that uses self-signed certificates. ==== -Credential:: This field is optional. This is the token needed to utilize the SCM URL. . Select btn:[Create project]. -Your project is now created and can be managed in the *Projects* screen. +Your project is now created and can be managed on the *Projects* page. After saving the new project, the project's details page is displayed. From there or the *Projects* list view, you can edit or delete it. diff --git a/downstream/modules/eda/proc-eda-set-up-rhaap-credential.adoc b/downstream/modules/eda/proc-eda-set-up-rhaap-credential.adoc new file mode 100644 index 0000000000..6d349bbc42 --- /dev/null +++ b/downstream/modules/eda/proc-eda-set-up-rhaap-credential.adoc @@ -0,0 +1,40 @@ +[id="eda-set-up-rhaap-credential"] + += Setting up a {PlatformName} credential + +You can create a {PlatformName} credential type to run your rulebook activations. + +.Prerequisites + +* You have created a user. +* You have obtained the URL and the credentials to access {ControllerName}. + + +.Procedure + +. Log in to the {PlatformNameShort} Dashboard. +. From the navigation panel, select {MenuADCredentials}. +. Click btn:[Create credential]. +. Insert the following: ++ +Name:: Insert the name. +Description:: This field is optional. +Organization:: Click the list to select an organization or select *Default*. +Credential type:: Click the list and select *{PlatformName}*. ++ +[NOTE] +==== +When you select the credential type, the *Type Details* section is displayed with fields that are applicable to the credential type you chose. +==== +. In the required {PlatformName} field, enter your automation controller URL. ++ +[NOTE] +==== +For {EDAcontroller} {PlatformVers} with {ControllerName} 2.4, use the following example: \https:// + +For {PlatformNameShort} {PlatformVers}, use the following example: \https:///api/controller/ +==== +. Enter a valid *Username* and *Password*, or *Oauth Token*. +. Click btn:[Create credential]. + +After you create this credential, you can use it to configure your rulebook activations. \ No newline at end of file diff --git a/downstream/modules/eda/proc-eda-set-up-rulebook-activation.adoc b/downstream/modules/eda/proc-eda-set-up-rulebook-activation.adoc index a0628bd76f..d9ec4c87f7 100644 --- a/downstream/modules/eda/proc-eda-set-up-rulebook-activation.adoc +++ b/downstream/modules/eda/proc-eda-set-up-rulebook-activation.adoc @@ -4,21 +4,30 @@ .Prerequisites // [ddacosta] I'm not sure whether there will be an EDA specific dashboard in the gateway. Step 1 might need to change to something like "Log in to AAP". -* You are logged in to the {EDAcontroller} Dashboard as a Content Consumer. +* You are logged in to the {PlatformNameShort} Dashboard as a Content Consumer. * You have set up a project. * You have set up a decision environment. -* You have set up an {ControllerName} token. .Procedure // [ddacosta] I'm not sure whether there will be an EDA specific dashboard in the gateway. Step 1 might need to change to something like "Log in to AAP". -. Navigate to the {EDAcontroller} Dashboard. -. From the navigation panel, select {MenuADRulebookActivations}. +. Log in to {PlatformNameShort}. .
Navigate to {MenuADRulebookActivations}. +. Click btn:[Create rulebook activation]. . Insert the following: + Name:: Insert the name. Description:: This field is optional. -Project:: Projects are a logical collection of rulebooks. -Rulebook:: Rulebooks are shown according to the project selected. +Organization:: Enter your organization name or select *Default* from the list. +Project:: Projects are a logical collection of rulebooks. This field is optional. +Rulebook:: Rulebooks are displayed according to the project selected. +Credential:: Select 0 or more credentials for this rulebook activation. This field is optional. ++ +[NOTE] +==== +* The credentials that display in this field are customized based on your rulebook activation and only include the following credential types: Vault, {PlatformName}, or any custom credential types that you have created. For more information about credentials, see xref:eda-credentials[Credentials]. +* If you plan to use a {PlatformName} credential, you can select _only_ one {PlatformName} credential type for a rulebook activation. +==== + Decision environment:: Decision environments are a container image to run Ansible rulebooks. + [NOTE] @@ -27,18 +36,38 @@ In {EDAcontroller}, you cannot customize the pull policy of the decision environ By default, it follows the behavior of the *always* policy. Every time an activation is started, the system tries to pull the most recent version of the image. ==== -Restart policy:: This is a policy to decide when to restart a rulebook. +Restart policy:: This is the policy that determines how an activation should restart after the container process running the source plugin ends. *** Policies: -... Always: Restarts when a rulebook finishes -... Never: Never restarts a rulebook when it finishes -... On failure: Only restarts when it fails +... *Always*: This restarts the rulebook activation immediately, regardless of whether it ends successfully or not, and occurs no more than 5 times. +... *Never*: This never restarts a rulebook activation when the container process ends. +... *On failure*: This restarts the rulebook activation after 60 seconds by default, only when the container process fails, and occurs no more than 5 times. +Log level:: This field defines the severity and type of content in your logged events. +*** Levels: +... *Error*: Logs that contain error messages that are displayed in the *History* tab of an activation. +... *Info*: Logs that contain useful information about rulebook activations, such as a success or failure, triggered action names and their related action events, and errors. +... *Debug*: Logs that contain information that is only useful during the debug phase and might be of little value during production. +This log level includes both error and info level data. +Service name:: This defines a service name for Kubernetes to configure inbound connections if the activation exposes a port. This field is optional. Rulebook activation enabled?:: This automatically enables the rulebook activation to run. -Variables:: The variables for the rulebook are in a JSON/YAML format. +Variables:: The variables for the rulebook are in a JSON or YAML format. The content would be equivalent to the file passed through the `--vars` flag of ansible-rulebook command. ++ +[NOTE] +==== +In the context of {ControllerName} and {EDAcontroller}, you can use both `extra_vars` and credentials to store a variety of information.
However, credentials are the preferred method of storing sensitive information such as passwords or API keys because they offer better security and centralized management, whereas `extra_vars` are more suitable for passing dynamic, non-sensitive data. +==== +Options:: Check the *Skip audit events* option if you do not want to see your events in the Rule Audit. . Click btn:[Create rulebook activation]. -Your rulebook activation is now created and can be managed in the *Rulebook Activations* screen. +Your rulebook activation is now created and can be managed on the *Rulebook Activations* page. + +After saving the new rulebook activation, the rulebook activation's details page is displayed, with either a *Pending*, *Running*, or *Failed* status. +From there or the *Rulebook Activations* list view, you can restart or delete it. -After saving the new rulebook activation, the rulebook activation's details page is displayed. -From there or the *Rulebook Activations* list view you can edit or delete it. +[NOTE] +==== +Occasionally, when a source plugin shuts down, it causes a rulebook to exit gracefully after a certain amount of time. +When a rulebook activation shuts down, any tasks that are waiting to be performed are canceled, and an info level message is sent to the activation log. +For more information, see link:https://ansible.readthedocs.io/projects/rulebook/en/stable/rulebooks.html#[Rulebooks]. +==== diff --git a/downstream/modules/eda/proc-eda-verify-event-streams-work.adoc b/downstream/modules/eda/proc-eda-verify-event-streams-work.adoc new file mode 100644 index 0000000000..c28c61117d --- /dev/null +++ b/downstream/modules/eda/proc-eda-verify-event-streams-work.adoc @@ -0,0 +1,27 @@ +[id="eda-verify-event-streams"] + += Verifying your event streams work + +Verify that you can use your event stream to connect to a remote system and receive data. + +. Log in to {PlatformNameShort}. +. From the navigation panel, select {MenuADEventStreams}. +. Select the event stream that you created to validate connectivity and ensure that the event stream sends data to the rulebook activation. +. Verify that the events were received. The number of *Events received* is displayed along with a header that contains details about the event. ++ +//[JMS] Replace this image with one that shows a number for the Events received field. +image:eda-verify-event-streams.png[Verify event streams work] ++ +If you scroll down in the UI, you can also see the body of the payload with more information about the webhook. ++ +The *Header* and *Body* sections for the event stream are displayed on the Details page. They differ based on the vendor who is sending the event. The header and body can be used to check the attributes in the event payload, which helps you write conditions in your rulebook. ++ +//[JMSelf] Hide or maybe replace this image for now. //+ //image:eda-payload-body-event-streams.png[Payload body] ++ + . Toggle the *Forward events to rulebook activation* option to push your events to a rulebook activation. +This moves the event stream to production mode and makes it easy to attach to rulebook activations. ++ +When this option is toggled off, your ability to forward events to a rulebook activation is disabled and the *This event stream is disabled* message is displayed.
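+
+If you want to exercise the event stream without waiting on the remote system, you can also post a test payload manually. The following is a minimal sketch only, assuming a placeholder event stream URL and token-based authentication; copy the real URL and credential details from your event stream's details page, because the exact path and authentication scheme depend on how the event stream was created:
+
+----
+# Send a hand-built JSON payload to the event stream endpoint (placeholder URL and token)
+curl -X POST "https://<platform-gateway-host>/<your-event-stream-url>" \
+  -H "Authorization: Bearer <your-event-stream-token>" \
+  -H "Content-Type: application/json" \
+  -d '{"action": "push", "message": "test event"}'
+----
+
+If the event stream accepted the payload, the *Events received* counter on the details page should increase.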
\ No newline at end of file diff --git a/downstream/modules/eda/proc-eda-view-activation-output.adoc b/downstream/modules/eda/proc-eda-view-activation-output.adoc index c7e0e54460..1598be0edc 100644 --- a/downstream/modules/eda/proc-eda-view-activation-output.adoc +++ b/downstream/modules/eda/proc-eda-view-activation-output.adoc @@ -7,8 +7,9 @@ You can view the output of the activations in the *History* tab. .Procedure . Select the *History* tab to access the list of all the activation instances. An activation instance represents a single execution of the activation. -. Then select the activation instance in question, this will show you the *Output* produced by that specific execution. +. Select the activation instance you want to view. The *Output* for the activation instance is displayed. -image::eda-rulebook-activation-history.png[Rulebook activation history] +//[JMSelf] Remove this screenshot due to outdated view. +//image::eda-rulebook-activation-history.png[Rulebook activation history] To view events that came in and triggered an action, you can use the xref:eda-rule-audit[Rule Audit] section in the {EDAcontroller} Dashboard. diff --git a/downstream/modules/eda/proc-eda-view-rule-audit-actions.adoc b/downstream/modules/eda/proc-eda-view-rule-audit-actions.adoc index e0f52c1174..9e3487a6fa 100644 --- a/downstream/modules/eda/proc-eda-view-rule-audit-actions.adoc +++ b/downstream/modules/eda/proc-eda-view-rule-audit-actions.adoc @@ -5,7 +5,7 @@ .Procedure . From the navigation panel select *{MenuADRuleAudit}*. -. Select the desired rule, this brings you to the *Actions* tab. +. Select the desired rule, then select the *Actions* tab. From here you can view executed actions that were taken. -Some actions are linked out to {ControllerName} where you can view the output. +Some actions are linked out to {MenuTopAE} where you can view the output. diff --git a/downstream/modules/eda/proc-eda-view-rule-audit-details.adoc b/downstream/modules/eda/proc-eda-view-rule-audit-details.adoc index 09fdd402a7..a8e73af7dd 100644 --- a/downstream/modules/eda/proc-eda-view-rule-audit-details.adoc +++ b/downstream/modules/eda/proc-eda-view-rule-audit-details.adoc @@ -4,7 +4,8 @@ From the *Rule Audit* list view you can check the event that triggered specific actions. -image::eda-rule-audit-list-view.png[Rule audit list view] +//[JMSelf] Remove outdated image. +//image::eda-rule-audit-list-view.png[Rule audit list view] .Procedure . From the navigation panel select *{MenuADRuleAudit}*. diff --git a/downstream/modules/eda/proc-eda-view-rule-audit-events.adoc b/downstream/modules/eda/proc-eda-view-rule-audit-events.adoc index eebc5d1cda..f388e26f4f 100644 --- a/downstream/modules/eda/proc-eda-view-rule-audit-events.adoc +++ b/downstream/modules/eda/proc-eda-view-rule-audit-events.adoc @@ -9,4 +9,5 @@ This shows you the event that triggered actions. . Select an event to view the *Event log*, along with the *Source type* and *Timestamp*. -image::eda-event-details.png[Event details] +//[JMSelf] Hide/remove images and prepare for UI changes. The content should be clear without the image.
+//image::eda-event-details.png[Event details] diff --git a/downstream/modules/eda/proc-modifying-activations-after-install.adoc b/downstream/modules/eda/proc-modifying-activations-after-install.adoc new file mode 100644 index 0000000000..48d2c536e4 --- /dev/null +++ b/downstream/modules/eda/proc-modifying-activations-after-install.adoc @@ -0,0 +1,14 @@ +[id="modifying-activations-after-install"] + += Modifying the number of simultaneous rulebook activations after {EDAcontroller} installation + +[role="_abstract"] +By default, {EDAcontroller} allows 12 rulebook activations per node. For example, with two worker or hybrid nodes, this results in a limit of 24 activations in total running simultaneously. +You can modify this default value after installation by using the following procedure: + +.Procedure +. Navigate to the environment file at `/etc/ansible-automation-platform/eda/settings.yaml`. +. Set the maximum number of running activations that you need. +For example, `MAX_RUNNING_ACTIVATIONS = 16`. +. Use the following command to restart {EDAName} services: `automation-eda-controller-service restart` + diff --git a/downstream/modules/eda/proc-modifying-activations-during-install.adoc b/downstream/modules/eda/proc-modifying-activations-during-install.adoc new file mode 100644 index 0000000000..80bbd8b82c --- /dev/null +++ b/downstream/modules/eda/proc-modifying-activations-during-install.adoc @@ -0,0 +1,14 @@ +[id="modifying-activations-during-install"] + += Modifying the number of simultaneous rulebook activations during {EDAcontroller} installation + +[role="_abstract"] +By default, {EDAcontroller} allows 12 rulebook activations per node. For example, with two worker or hybrid nodes, this results in a limit of 24 activations in total running simultaneously. You can modify this default value during installation by using the following procedure: + +.Procedure +Provide a variable to the VM installer: + +. Navigate to the setup inventory file. +. Add `automationedacontroller_max_running_activations` in the [all:vars] section. +For example, `automationedacontroller_max_running_activations=16`. +. Run the setup. \ No newline at end of file diff --git a/downstream/modules/eda/proc-modifying-memory-after-install.adoc b/downstream/modules/eda/proc-modifying-memory-after-install.adoc new file mode 100644 index 0000000000..929fad12b9 --- /dev/null +++ b/downstream/modules/eda/proc-modifying-memory-after-install.adoc @@ -0,0 +1,13 @@ +[id="modifying-memory-after-install"] + += Modifying the default memory limit for each rulebook activation after installation + +[role="_abstract"] +By default, each rulebook activation container has a 200MB memory limit. +You can modify this default value after installation by using the following procedure: + +.Procedure +. Navigate to the environment file at `/etc/ansible-automation-platform/eda/settings.yaml`. +. Modify the default container memory limit. +For example, `PODMAN_MEM_LIMIT = '300m'`. +. Restart the {EDAcontroller} services using `automation-eda-controller-service restart`.
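+
+Taken together, a minimal sketch of the relevant settings after both post-installation changes might look like the following excerpt; the values shown are examples only and should be sized to your workload:
+
+----
+# /etc/ansible-automation-platform/eda/settings.yaml (excerpt, example values)
+MAX_RUNNING_ACTIVATIONS = 16
+PODMAN_MEM_LIMIT = '300m'
+----
+
+Restart the services afterward with `automation-eda-controller-service restart` so that both settings take effect.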
diff --git a/downstream/modules/eda/proc-modifying-memory-during-install.adoc b/downstream/modules/eda/proc-modifying-memory-during-install.adoc new file mode 100644 index 0000000000..cfd7d29008 --- /dev/null +++ b/downstream/modules/eda/proc-modifying-memory-during-install.adoc @@ -0,0 +1,13 @@ +[id="modifying-memory-during-install"] + += Modifying the default memory limit for each rulebook activation during installation + +[role="_abstract"] +By default, each rulebook activation container has a 200MB memory limit. +You can modify this default value during installation by using the following procedure: + +.Procedure +. Navigate to the setup inventory file. +. Add `automationedacontroller_podman_mem_limit` in the [all:vars] section. +For example, `automationedacontroller_podman_mem_limit='400m'`. +. Run the setup. diff --git a/downstream/modules/eda/ref-deploy-eda-controller-with-aap-operator-on-ocp.adoc b/downstream/modules/eda/ref-deploy-eda-controller-with-aap-operator-on-ocp.adoc deleted file mode 100644 index c9d9d55181..0000000000 --- a/downstream/modules/eda/ref-deploy-eda-controller-with-aap-operator-on-ocp.adoc +++ /dev/null @@ -1,7 +0,0 @@ -[id="deploying-eda-controller-with-aap-operator-on-ocp"] - -= Deploying {EDAcontroller} with {OperatorPlatform} on {OCPShort} - -{EDAName} is not limited to {PlatformNameShort} on VMs. You can also access this feature on {OperatorPlatform} on {OCPShort}. To deploy {EDAName} with {OperatorPlatform}, follow the instructions in link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/index#deploy-eda-controller-on-aap-operator-ocp[Deploying Event-Driven Ansible controller with Ansible Automation Platform Operator on OpenShift Container Platform]. - -After successful deployment, you can connect to event sources and resolve issues more efficiently. diff --git a/downstream/modules/eda/ref-eda-controller-install-builder.adoc b/downstream/modules/eda/ref-eda-controller-install-builder.adoc new file mode 100644 index 0000000000..b9ae42f0a5 --- /dev/null +++ b/downstream/modules/eda/ref-eda-controller-install-builder.adoc @@ -0,0 +1,11 @@ +[id="eda-controller-install-builder"] + += Installing ansible-builder + +To build images, you must have Podman or Docker installed, along with the `ansible-builder` Python package. + +The `--container-runtime` option must correspond to the Podman or Docker executable you intend to use. + +When building a decision environment image, it must support the architecture that {PlatformNameShort} is deployed with. + +For more information, see link:https://ansible.readthedocs.io/projects/builder/en/latest/#quickstart-for-ansible-builder[Quickstart for Ansible Builder] or link:{LinkBuilder}. diff --git a/downstream/modules/eda/ref-eda-logging-samples.adoc b/downstream/modules/eda/ref-eda-logging-samples.adoc new file mode 100644 index 0000000000..c91a1ddecf --- /dev/null +++ b/downstream/modules/eda/ref-eda-logging-samples.adoc @@ -0,0 +1,75 @@ +[id="eda-logging-samples"] + += Logging samples + +When the following APIs are called for each operation, you see the following audit logs: + +.Rulebook activation + +---- +1. Create + 1. 2024-08-15 14:13:20,384 aap_eda.api.views.activation INFO Action: Create / ResourceType: RulebookActivation / ResourceName: quick_start_project / ResourceID: 53 / Organization: Default +2. Read + 1. 
2024-08-15 14:21:26,844 aap_eda.api.views.activation INFO Action: Read / ResourceType: RulebookActivation / ResourceName: quick_start_activation / ResourceID: 1 / Organization: Default +3. Disable + 1. 2024-08-15 14:23:57,798 aap_eda.api.views.activation INFO Action: Disable / ResourceType: RulebookActivation / ResourceName: quick_start_activation / ResourceID: 1 / Organization: Default +4. Enable + 1. 2024-08-15 14:24:16,472 aap_eda.api.views.activation INFO Action: Enable / ResourceType: RulebookActivation / ResourceName: quick_start_activation / ResourceID: 1 / Organization: Default +5. Delete + 1. 2024-08-15 14:24:53,847 aap_eda.api.views.activation INFO Action: Delete / ResourceType: RulebookActivation / ResourceName: quick_start_activation / ResourceID: 1 / Organization: Default +6. Restart + 2024-08-15 14:24:34,169 aap_eda.api.views.activation INFO Action: Restart / ResourceType: RulebookActivation / ResourceName: quick_start_activation / ResourceID: 1 / Organization: Default +---- + +.EventStream Logs +---- +1. Create + 1. 2024-08-15 13:46:26,903 aap_eda.api.views.webhook INFO Action: Create / ResourceType: EventStream / ResourceName: ZackTest / ResourceID: 1 / Organization: Default +2. Update + 1. 2024-08-15 13:56:17,440 aap_eda.api.views.webhook INFO Action: Update / ResourceType: EventStream / ResourceName: ZackTest / ResourceID: 1 / Organization: Default +3. Read + 1. 2024-08-15 13:56:56,271 aap_eda.api.views.webhook INFO Action: Read / ResourceType: EventStream / ResourceName: ZackTest / ResourceID: 1 / Organization: Default +4. List + 1. 2024-08-15 13:56:17,492 aap_eda.api.views.webhook INFO Action: List / ResourceType: EventStream / ResourceName: * / ResourceID: * / Organization: * +5. Delete + 1. 2024-08-15 13:57:13,124 aap_eda.api.views.webhook INFO Action: Delete / ResourceType: EventStream / ResourceName: ZackTest / ResourceID: None / Organization: Default +---- + +.Decision Environment +---- +1. Create + 1. 2024-08-15 14:10:53,311 aap_eda.api.views.decision_environment INFO Action: Create / ResourceType: DecisionEnvironment / ResourceName: quick_start_de / ResourceID: 86 / Organization: Default +2. Read + 1. 2024-08-15 14:10:53,349 aap_eda.api.views.decision_environment INFO Action: Read / ResourceType: DecisionEnvironment / ResourceName: quick_start_de / ResourceID: 86 / Organization: Default +3. Update + 2024-08-15 14:11:20,970 aap_eda.api.views.decision_environment INFO Action: Update / ResourceType: DecisionEnvironment / ResourceName: quick_start_de / ResourceID: 86 / Organization: Default +4. Delete +2024-08-15 14:11:42,369 aap_eda.api.views.decision_environment INFO Action: Delete / ResourceType: DecisionEnvironment / ResourceName: quick_start_de / ResourceID: None / Organization: Default +---- + +.Project +---- +1. Create + 1. 2024-08-15 14:05:26,874 aap_eda.api.views.project INFO Action: Create / ResourceType: Project / ResourceName: quick_start_project / ResourceID: 86 / Organization: Default +2. Read + 1. 2024-08-15 14:05:26,913 aap_eda.api.views.project INFO Action: Read / ResourceType: Project / ResourceName: quick_start_project / ResourceID: 86 / Organization: Default +3. Update + 1. 2024-08-15 14:06:08,255 aap_eda.api.views.project INFO Action: Update / ResourceType: Project / ResourceName: quick_start_project / ResourceID: 86 / Organization: Default +4. Sync + 1. 2024-08-15 14:06:30,580 aap_eda.api.views.project INFO Action: Sync / ResourceType: Project / ResourceName: quick_start_project / ResourceID: 86 / Organization: Default +5. Delete + 1. 
2024-08-15 14:06:49,481 aap_eda.api.views.project INFO Action: Delete / ResourceType: Project / ResourceName: quick_start_project / ResourceID: 86 / Organization: Default +---- + +.Activation Start/Stop +---- +1. Start + 1. 2024-08-15 14:21:29,076 aap_eda.services.activation.activation_manager INFO Requested to start activation 1, starting. + 2024-08-15 14:21:29,093 aap_eda.services.activation.activation_manager INFO Creating a new activation instance for activation: 1 + 2024-08-15 14:21:29,104 aap_eda.services.activation.activation_manager INFO Starting container for activation instance: 1 +2. Stop + 1. eda-activation-worker-1 | 2024-08-15 14:40:52,547 aap_eda.services.activation.activation_manager INFO Stop operation requested for activation id: 2 Stopping activation. + eda-activation-worker-1 | 2024-08-15 14:40:52,550 aap_eda.services.activation.activation_manager INFO Activation 2 is already stopped. + eda-activation-worker-1 | 2024-08-15 14:40:52,550 aap_eda.services.activation.activation_manager INFO Activation manager activation id: 2 Activation restart scheduled for 1 second. + eda-activation-worker-1 | 2024-08-15 14:40:52,562 rq.worker INFO activation: Job OK (activation-2) +---- diff --git a/downstream/modules/eda/ref-performance-troubleshooting.adoc b/downstream/modules/eda/ref-performance-troubleshooting.adoc new file mode 100644 index 0000000000..275dff5418 --- /dev/null +++ b/downstream/modules/eda/ref-performance-troubleshooting.adoc @@ -0,0 +1,20 @@ +[id="performance-troubleshooting"] + += Performance troubleshooting for {EDAcontroller} + +[role="_abstract"] +Based on the default parameters within {EDAcontroller}, you might encounter scenarios that pose challenges to completing your workload. +The following section provides descriptions of these scenarios and troubleshooting guidance. + +* My activation status displays as “running”, but it is not processing the events. +** Ensure that you are using the correct event source in the rulebook activation. +If the event you are expecting is coming from a source other than what is in the rulebook, {EDAcontroller} does not process the event. + +* My activation status displays as “running”, and {EDAcontroller} is also receiving the events, but no actions are occurring. +** Ensure that you have set the correct conditions for matching the event and taking actions in the rulebook activation. + +* My activation keeps restarting in an infinite loop. +** By default, the restart policy for rulebook activations is set to *On failure*. Change the restart policy using the following procedure: +. Navigate to {MenuADRulebookActivations}. +. Select the *Restart Policy* list to display the options. +. Select the appropriate value: *On failure*, *Always*, or *Never*. diff --git a/downstream/modules/hub/con-approval-pipeline.adoc b/downstream/modules/hub/con-approval-pipeline.adoc index efd5a9b3a0..c6c77a71f1 100644 --- a/downstream/modules/hub/con-approval-pipeline.adoc +++ b/downstream/modules/hub/con-approval-pipeline.adoc @@ -1,18 +1,16 @@ -// Module included in the following assemblies: -// assembly-repo-management.adoc - +:_mod-docs-content-type: CONCEPT [id="con-approval-pipeline"] = Approval pipeline for custom repositories in {HubName} -In {HubName} you can approve collections into any repository marked with the `pipeline=approved` label. By default, {HubName} ships with one repository for approved content, but you have the option to add more from the repository creation screen.
You cannot directly publish into a repository marked with the `pipeline=approved` label. A collection must first go through a staging repository and be approved before being published into a 'pipleline=approved' repository. +In {HubName}, you can approve collections into any repository marked with the `pipeline=approved` label. By default, {HubName} includes one repository for approved content, but you have the option to add more from the repository creation screen. You cannot directly publish into a repository marked with the `pipeline=approved` label. A collection must first go through a staging repository and be approved before being published into a `pipeline=approved` repository. Auto approval:: When auto approve is enabled, any collection you upload to a staging repository is automatically promoted to all of the repositories marked as `pipeline=approved`. Approval required:: -When auto approve is disabled, the administrator can view the approval dashboard and see collections that have been uploaded into any of the staging repositories. Clicking btn:[Approve] displays a list of approved repositories. From this list, the administrator can select one or more repositories to which the content should be promoted. +When auto approve is disabled, the administrator can view the approval dashboard and see collections that have been uploaded into any of the staging repositories. Sorting by *Approved* displays a list of approved repositories. From this list, the administrator can select one or more repositories to which the content should be promoted. + If only one approved repository exists, the collection is automatically promoted into it and the administrator is not prompted to select a repository. diff --git a/downstream/modules/hub/con-approval.adoc b/downstream/modules/hub/con-approval.adoc index 43b210b120..a33f83bafc 100644 --- a/downstream/modules/hub/con-approval.adoc +++ b/downstream/modules/hub/con-approval.adoc @@ -1,9 +1,10 @@ +:_mod-docs-content-type: CONCEPT [id="con-approval"] = About Approval -You can manage uploaded collections in {HubName} by using the *Approval* feature located in the navigation panel. +You can manage uploaded collections in {HubName} by using the *Collection Approvals* feature located in the navigation panel. Approval Dashboard:: By default, the *Approval* dashboard lists all collections with *Needs Review* status. You can check these for inclusion in your *Published* repository. Viewing collection details:: You can view more information about the collection by clicking the *Version* number. -Filtering collections:: Filter collections by *Namespace*, *Collection Name* or *Repository*, to locate content and update its status. +Filtering collections:: Filter collections by *Namespace*, *Collection*, or *Repository* to locate content and update its status. diff --git a/downstream/modules/hub/con-container-registry.adoc b/downstream/modules/hub/con-container-registry.adoc index f412919450..5463811007 100644 --- a/downstream/modules/hub/con-container-registry.adoc +++ b/downstream/modules/hub/con-container-registry.adoc @@ -1,4 +1,4 @@ - +:_mod-docs-content-type: CONCEPT [id="container-registries"] @@ -6,13 +6,13 @@ [role="_abstract"] -The {HubName} container registry is used for storing and managing container images. -When you have built or sourced a container image, you can push that container image to the registry portion of {PrivateHubName} to create a container repository. +The {HubName} remote registry is used for storing and managing {ExecEnvShort}s.
+When you have built or sourced an {ExecEnvShort}, you can push that {ExecEnvShort} to the registry portion of {PrivateHubName} to create a container repository. [role="_additional-resources"] .Next steps -* Push a container image to the {HubName} container registry. +* Push an {ExecEnvShort} to the {HubName} remote registry. * Create a group with access to the container repository in the registry. * Add the new group to the container repository. * Add a README to the container repository to provide users with information and relevant links. diff --git a/downstream/modules/hub/con-repo-rbac.adoc b/downstream/modules/hub/con-repo-rbac.adoc index d325971e3e..19fe9091e4 100644 --- a/downstream/modules/hub/con-repo-rbac.adoc +++ b/downstream/modules/hub/con-repo-rbac.adoc @@ -1,9 +1,8 @@ -// Module included in the following assemblies: -// assembly-repo-management.adoc +:_mod-docs-content-type: CONCEPT [id="con-repo-rbac"] = Role based access control to restrict access to custom repositories -Use Role Based Access Control (RBAC) to restrict user access to custom repositories by defining access permissions based on user roles. By default, users can view all public repositories in their {HubName}, but they cannot modify a repository unless their role allows them access to do so. The same logic applies to other operations on the repository. For example, you can remove a user's ability to download content from a custom repository by changing their role permissions. See link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_hub/assembly-user-access[Configuring user access for your {PrivateHubName}] for information about managing user access in {HubName}. +Use Role Based Access Control (RBAC) to restrict user access to custom repositories by defining access permissions based on user roles. By default, users can view all public repositories in their {HubName}, but they cannot modify a repository unless their role allows them access to do so. The same logic applies to other operations on the repository. For example, you can remove a user's ability to download content from a custom repository by changing their role permissions. See link:{LinkCentralAuth} for information about managing user access to {HubName}. diff --git a/downstream/modules/hub/con-token-management-hub.adoc b/downstream/modules/hub/con-token-management-hub.adoc new file mode 100644 index 0000000000..1c019842d5 --- /dev/null +++ b/downstream/modules/hub/con-token-management-hub.adoc @@ -0,0 +1,24 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2024-11-19 + +:_mod-docs-content-type: CONCEPT + +[id="token-management-hub_{context}"] += Token management in {HubName} + +Before you can interact with {HubName} by uploading or downloading collections, you must create an API token. The {HubName} API token authenticates your `ansible-galaxy` client to the Red Hat {HubName} server. + +[NOTE] +==== +{HubNameStart} does not support basic authentication or authenticating through service accounts. You must authenticate using token management. +==== + +Your method for creating the API token differs according to the type of {HubName} that you are using: + +* {HubNameStart} uses offline token management. See xref:proc-create-api-token_cloud-sync[Creating the offline token in {HubName}]. + +* {PrivateHubNameStart} uses API token management. See xref:proc-create-api-token-pah_cloud-sync[Creating the API token in {PrivateHubName}]. 
+ +* If you are using Keycloak to authenticate your {PrivateHubName}, follow the procedure for xref:proc-create-api-token_cloud-sync[Creating the offline token in {HubName}]. + + diff --git a/downstream/modules/hub/proc-add-container-readme.adoc b/downstream/modules/hub/proc-add-container-readme.adoc index a70afd8298..4a846b559a 100644 --- a/downstream/modules/hub/proc-add-container-readme.adoc +++ b/downstream/modules/hub/proc-add-container-readme.adoc @@ -1,17 +1,4 @@ -//// -Base the file name and the ID on the module title. For example: -* file name: proc-doing-procedure-a.adoc -* ID: [id="doing-procedure-a_{context}"] -* Title: = Doing procedure A - -The ID is an anchor that links to the module. Avoid changing it after the module has been published to ensure existing links are not broken. -//// - -[id="proc-doing-one-procedure_{context}"] - -//// -The `context` attribute enables module reuse. Every module ID includes {context}, which ensures that the module has a unique ID even if it is reused multiple times in a guide. -//// +:_mod-docs-content-type: PROCEDURE = Adding a README to your container repository @@ -26,10 +13,10 @@ By default, the README is empty. * You have permissions to change containers. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to {HubName}. + +. Log in to {PlatformNameShort}. . From the navigation panel, select {MenuACExecEnvironments}. -. Select your container repository. +. Select your {ExecEnvShort}. . On the *Detail* tab, click btn:[Add]. . In the *Raw Markdown* text field, enter your README text in Markdown. . Click btn:[Save] when you are finished. diff --git a/downstream/modules/hub/proc-add-group-to-container-repo.adoc b/downstream/modules/hub/proc-add-group-to-container-repo.adoc index 4454281db4..d52df8fa30 100644 --- a/downstream/modules/hub/proc-add-group-to-container-repo.adoc +++ b/downstream/modules/hub/proc-add-group-to-container-repo.adoc @@ -1,22 +1,23 @@ +:_mod-docs-content-type: PROCEDURE [id="providing-access-to-containers"] -= Providing access to your container repository += Providing access to your {ExecEnvName} [role="_abstract"] -Provide access to your container repository for users who need to work with the images. -Adding a group allows you to modify the permissions the group can have to the container repository. -You can use this option to extend or restrict permissions based on what the group is assigned. +Provide access to your {ExecEnvName} for users who need to work with the images. +Adding a team allows you to modify the permissions the team can have to the container repository. +You can use this option to extend or restrict permissions based on what the team is assigned. .Prerequisites * You have *change container namespace* permissions. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to {HubName}. + +. Log in to {PlatformNameShort}. . From the navigation panel, select {MenuACExecEnvironments}. -. Select your container repository. -. From the *Access* tab, click btn:[Select a group]. -. Select the group or groups to which you want to grant access and click btn:[Next]. +. Select your {ExecEnvNameSing}. +. From the *Team Access* tab, click btn:[Add roles]. +. Select the team or teams to which you want to grant access and click btn:[Next]. . 
Select the roles that you want to add to this {ExecEnvShort} and click btn:[Next]. -. Click btn:[Add]. +. Click btn:[Finish]. diff --git a/downstream/modules/hub/proc-adding-an-execution-environment.adoc b/downstream/modules/hub/proc-adding-an-execution-environment.adoc index eb729a3453..920d5cae20 100644 --- a/downstream/modules/hub/proc-adding-an-execution-environment.adoc +++ b/downstream/modules/hub/proc-adding-an-execution-environment.adoc @@ -1,31 +1,32 @@ - +:_mod-docs-content-type: PROCEDURE [id="adding-an-execution-environment"] -= Adding an {ExecEnvShort} -{ExecEnvNameStart} are container images that make it possible to incorporate system-level dependencies and collection-based content. -Each {ExecEnvShort} allows you to have a customized image to run jobs, and each of them contain only what you need when running the job. += Adding and signing an {ExecEnvShort} +{ExecEnvNameStart} are container images that make it possible to incorporate system-level dependencies and collection-based content. Each {ExecEnvShort} allows you to have a customized image to run jobs, and each of them contains only what you need when running the job. .Procedure . From the navigation panel, select {MenuACExecEnvironments}. -. Click btn:[Add execution environment]. +. Click btn:[Create execution environment] and enter the relevant information in the fields that appear. -. Enter the name of the {ExecEnvShort}. +.. The *Name* field displays the name of the {ExecEnvShort} on your local registry. -. Optional: Enter the upstream name. +.. The *Upstream name* field is the name of the image on the remote server. -. Under *Registry*, select the name of the registry from the drop-down menu. +.. Under *Registry*, select the name of the registry from the drop-down menu. -. Enter tags in the *Add tag(s) to include* field. +.. Optional: Enter tags in the *Add tag(s) to include* field. If the field is blank, all the tags are passed. You must specify which repository-specific tags to pass. + +.. Optional: Enter tags to exclude in the *Add tag(s) to exclude* field. + +. Click btn:[Create {ExecEnvShort}]. You should see your new {ExecEnvShort} in the list that appears. + +. Sync and sign your new {ExecEnvNameSing}. -. The remaining fields are optional: -* *Currently included tags* -* *Add tag(s) to exclude* -* *Currently excluded tag(s)* -* *Description* +.. Click the btn:[More Actions] icon *{MoreActionsIcon}* and select *Sync execution environment*. -. Click btn:[Save]. +.. Click the btn:[More Actions] icon *{MoreActionsIcon}* and select *Sign execution environment*. -. Synchronize the image. +. Click your new {ExecEnvShort}. On the *Details* page, find the *Signed* label to confirm that your {ExecEnvShort} has been signed. diff --git a/downstream/modules/hub/proc-adding-collections-repository.adoc b/downstream/modules/hub/proc-adding-collections-repository.adoc index ebe2430159..70533d0649 100644 --- a/downstream/modules/hub/proc-adding-collections-repository.adoc +++ b/downstream/modules/hub/proc-adding-collections-repository.adoc @@ -1,6 +1,4 @@ -// Module included in the following assemblies: -// assembly-basic-repo-management.adoc - +:_mod-docs-content-type: PROCEDURE [id="proc-adding-collections-repository"] = Adding collections to an {HubName} repository @@ -9,7 +7,7 @@ After you create your repository, you can begin adding automation content collec .Procedure . From the navigation panel, select {MenuACAdminRepositories}. -.
Locate your repository in the list and click the btn:[More Actions] icon *{MoreActionsIcon}*, then select *Edit*. -. Select the *Collections version* tab. -. Click btn:[Add Collection] and select the collections that you want to add to your repository. +. Click your repository in the list. +. Select the *Collection versions* tab. +. Click btn:[Add Collections] and select the collections that you want to add to your repository. . Click btn:[Select]. diff --git a/downstream/modules/hub/proc-adding-containers-remotely-to-the-automation-hub.adoc b/downstream/modules/hub/proc-adding-containers-remotely-to-the-automation-hub.adoc index d2f595e4fb..f4a9bb6c34 100644 --- a/downstream/modules/hub/proc-adding-containers-remotely-to-the-automation-hub.adoc +++ b/downstream/modules/hub/proc-adding-containers-remotely-to-the-automation-hub.adoc @@ -1,22 +1,20 @@ -//Module included in the following assemblies: - +:_mod-docs-content-type: PROCEDURE [id="adding-containers-remotely-to-the-automation-hub"] = Adding containers remotely to {HubName} You can add containers remotely to {HubName} in one of the following two ways: -* Create Remotes -* Execution Environment +* By creating remotes +* By using an {ExecEnvNameSing} .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to {HubName}. +. Log in to {PlatformNameShort}. . From the navigation panel, select {MenuACAdminRemoteRegistries}. -. Click btn:[Add remote registry]. +. Click btn:[Create remote registry]. * In the *Name* field, enter the name of the registry where the container resides. * In the *URL* field, enter the URL of the registry where the container resides. * In the *Username* field, enter the username if necessary. * In the *Password* field, enter the password if necessary. -* Click btn:[Save]. +* Click btn:[Create remote registry]. diff --git a/downstream/modules/hub/proc-approve-collection.adoc b/downstream/modules/hub/proc-approve-collection.adoc index 816964c920..81a690fb92 100644 --- a/downstream/modules/hub/proc-approve-collection.adoc +++ b/downstream/modules/hub/proc-approve-collection.adoc @@ -1,10 +1,9 @@ -// Module included in the following assemblies: -// obtaining-token/master.adoc +:_mod-docs-content-type: PROCEDURE [id="proc-approve-collection"] = Approving collections for internal publication -You can approve collections uploaded to individual namespaces for internal publication and use. All collections awaiting review are located under the *Approval* tab in the *Staging* repository. +You can approve collections uploaded to individual namespaces for internal publication and use. All collections awaiting review are located in {MenuACAdminCollectionApproval}. .Prerequisites @@ -16,8 +15,7 @@ You can approve collections uploaded to individual namespaces for internal publi + Collections requiring approval have the status *Needs review*. + -. Select a collection to review. -. Click the *Version* to view the contents of the collection. -. Click btn:[Certify] to approve the collection. +. Find the collection you want to review in the list. You can also filter collections by *Namespace*, *Repository*, and *Status* using the search bar. +. Click the thumbs up icon to approve and sign the collection. Confirm your choice in the dialog box that appears. Approved collections are moved to the *Published* repository where users can view and download them for use.
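+
+Once a collection lands in the *Published* repository, users can pull it with the standard `ansible-galaxy` client. The following is a minimal sketch, assuming a hypothetical `my_namespace.my_collection` and an `ansible.cfg` whose galaxy server entry already points at your {HubName} instance with a valid API token:
+
+----
+# Install an approved collection from the published repository
+ansible-galaxy collection install my_namespace.my_collection
+----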
diff --git a/downstream/modules/hub/proc-basic-repo-sync.adoc b/downstream/modules/hub/proc-basic-repo-sync.adoc index b5193b9e52..c83d601e41 100644 --- a/downstream/modules/hub/proc-basic-repo-sync.adoc +++ b/downstream/modules/hub/proc-basic-repo-sync.adoc @@ -1,14 +1,13 @@ -// Module included in the following assemblies: -// assembly-repo-sync.adoc - +:_mod-docs-content-type: PROCEDURE [id="proc-basic-repo-sync"] += Synchronizing repositories in {HubName} .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to {HubName}. + +. Log in to {PlatformNameShort}. . From the navigation panel, select {MenuACAdminRepositories}. -. Locate your repository in the list and click *Sync*. +. Locate your repository in the list and click the btn:[More Actions] icon *{MoreActionsIcon}*, then select *Sync repository*. + All collections in the configured remote are downloaded to your custom repository. To check the status of the collection sync, select {MenuACAdminTasks} from the navigation panel. + @@ -19,4 +18,4 @@ To limit repository synchronization to specific collections within a remote, you [role="_additional-resources"] .Additional resources -For more information about using requirements files, see link:https://docs.ansible.com/ansible/latest/collections_guide/collections_installing.html#install-multiple-collections-with-a-requirements-file[Install multiple collections with a requirements file] in the _Using Ansible collections_ guide. +For more information about using requirements files, see link:{URLHubManagingContent}/managing-cert-valid-content#create-requirements-file_managing-cert-validated-content[Creating a requirements file]. diff --git a/downstream/modules/hub/proc-configure-ansible-galaxy-cli-verify.adoc b/downstream/modules/hub/proc-configure-ansible-galaxy-cli-verify.adoc index 7b92856c12..144c5c9973 100644 --- a/downstream/modules/hub/proc-configure-ansible-galaxy-cli-verify.adoc +++ b/downstream/modules/hub/proc-configure-ansible-galaxy-cli-verify.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: PROCEDURE [id="proc-configure-ansible-galaxy-cli-verify"] = Configuring Ansible-Galaxy CLI to verify collections @@ -5,7 +6,7 @@ You can configure Ansible-Galaxy CLI to verify collections. This ensures that downloaded collections are approved by your organization and have not been changed after they were uploaded to {HubName}. -If a collection has been signed by {HubName}, the server provides ASCII armored, GPG-detached signatures to verify the authenticity of `MANIFEST.json` before using it to verify the collection’s contents. +If a collection has been signed by {HubName}, the server provides ASCII armored, GPG-detached signatures to verify the authenticity of `MANIFEST.json` before using it to verify the collection's contents. You must opt into signature verification by link:https://docs.ansible.com/ansible/devel/reference_appendices/config.html#galaxy-gpg-keyring[configuring a keyring] for `ansible-galaxy` or providing the path with the `--keyring` option. .Prerequisites @@ -24,7 +25,7 @@ gpg --import --no-default-keyring --keyring ~/.ansible/pubring.kbx my-public-key + [NOTE] ==== -In addition to any signatures provided by the {HubName}, signature sources can also be provided in the requirements file and on the command line. +In addition to any signatures provided by {HubName}, signature sources can also be provided in the requirements file and on the command line. 
Signature sources should be URIs. ==== + @@ -61,8 +62,8 @@ Create a collection with `company_name.product` format. This format means that multiple products can have different collections under the company namespace. [discrete] -= How do I get a namespace on {HubNameMain}? += How do I get a namespace on {HubName}? -By default namespaces used on {Galaxy} are also used on {HubNameMain} by the Ansible partner team. +By default, namespaces used on {Galaxy} are also used on {HubName} by the Ansible partner team. For any queries and clarifications contact ansiblepartners@redhat.com. diff --git a/downstream/modules/hub/proc-configure-content-signing-on-pah.adoc b/downstream/modules/hub/proc-configure-content-signing-on-pah.adoc index 6297da4ff2..224d5b7ee3 100644 --- a/downstream/modules/hub/proc-configure-content-signing-on-pah.adoc +++ b/downstream/modules/hub/proc-configure-content-signing-on-pah.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: PROCEDURE [id="proc-configure-content-signing-on-pah"] = Configuring content signing on {PrivateHubName} @@ -53,7 +54,6 @@ else exit $STATUS fi ---- - + After you deploy a {PrivateHubName} with signing enabled to your {PlatformNameShort} cluster, new UI additions are displayed in collections. diff --git a/downstream/modules/hub/proc-configure-proxy-remote.adoc b/downstream/modules/hub/proc-configure-proxy-remote.adoc index 9969e092b4..74313edbbb 100644 --- a/downstream/modules/hub/proc-configure-proxy-remote.adoc +++ b/downstream/modules/hub/proc-configure-proxy-remote.adoc @@ -11,14 +11,14 @@ If your {PrivateHubName} is behind a network proxy, you can configure proxy sett .Prerequisites * You have valid *Modify Ansible repo content* permissions. -For more information on permissions, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_hub/assembly-user-access[Configuring user access for your {PrivateHubName}]. +For more information on permissions, see {LinkCentralAuth}. * You have a proxy URL and credentials from your local network administrator. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to {PrivateHubName}. + +. Log in to {PlatformNameShort}. . From the navigation panel, select {MenuACAdminRemotes}. -. In either the *rh-certified* or *Community* remote, click the btn:[More Actions] icon *{MoreActionsIcon}* and select *Edit*. +. In either the *rh-certified* or *Community* remote, click the btn:[More Actions] icon *{MoreActionsIcon}* and select *Edit remote*. . Expand the *Show advanced options* drop-down menu. . Enter your proxy URL, proxy username, and proxy password in the appropriate fields. -. Click btn:[Save]. +. Click btn:[Save remote]. diff --git a/downstream/modules/hub/proc-configuring-the-client-to-verify-signatures.adoc b/downstream/modules/hub/proc-configuring-the-client-to-verify-signatures.adoc index 2a9149809a..f4ce26902c 100644 --- a/downstream/modules/hub/proc-configuring-the-client-to-verify-signatures.adoc +++ b/downstream/modules/hub/proc-configuring-the-client-to-verify-signatures.adoc @@ -1,9 +1,9 @@ - +:_mod-docs-content-type: PROCEDURE [id="configuring-the-client-to-verify-signatures"] = Configuring the client to verify signatures -To ensure a container image pulled from the remote registry is properly signed, you must first configure the image with the proper public key in a policy file. 
+To ensure an {ExecEnvShort} pulled from the remote registry is properly signed, you must first configure the {ExecEnvShort} with the proper public key in a policy file. .Prerequisites * The client must have sudo privileges configured to verify signatures. @@ -75,7 +75,7 @@ name of your key file. > podman pull /: --tls-verify=false ---- -This response verifies the image has been signed with no errors. If the image is not signed, the command fails. +This response verifies the {ExecEnvShort} has been signed with no errors. If the {ExecEnvShort} is not signed, the command fails. .Additional resources * For more information about policy.json, see link:https://github.com/containers/image/blob/main/docs/containers-policy.json.5.md#signedby[documentation for containers-policy.json]. \ No newline at end of file diff --git a/downstream/modules/hub/proc-create-api-token-pah.adoc b/downstream/modules/hub/proc-create-api-token-pah.adoc index 5fefff2d10..b937f31ad5 100644 --- a/downstream/modules/hub/proc-create-api-token-pah.adoc +++ b/downstream/modules/hub/proc-create-api-token-pah.adoc @@ -1,30 +1,26 @@ -// Module included in the following assemblies: -// obtaining-token/master.adoc -[id="proc-create-api-token-pah"] +:_mod-docs-content-type: PROCEDURE +[id="proc-create-api-token-pah_{context}"] = Creating the API token in {PrivateHubName} -In {PrivateHubName}, you can create an API token using API token management. The API token is a secret token used to protect your content. +In {PrivateHubName}, you can create an API token using API token management. The API token is a secret token used to protect your content, so be sure to store it in a secure location. + +[NOTE] +==== +The API token does not expire. +==== .Prerequisites * Valid subscription credentials for {PlatformName}. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... + . Log in to your {PrivateHubName}. . From the navigation panel, select {MenuACAPIToken}. . Click btn:[Load Token]. . To copy the API token, click the btn:[Copy to clipboard] icon. . Paste the API token into a file and store in a secure location. -[IMPORTANT] ==== -The API token is a secret token used to protect your content. Store your API token in a secure location. -==== - +.Next step The API token is now available for configuring {HubName} as your default collections server or uploading collections using the `ansible-galaxy` command line tool. -[NOTE] ==== -The API token does not expire. -==== diff --git a/downstream/modules/hub/proc-create-api-token.adoc b/downstream/modules/hub/proc-create-api-token.adoc index c7b450b3a9..d4418d7b20 100644 --- a/downstream/modules/hub/proc-create-api-token.adoc +++ b/downstream/modules/hub/proc-create-api-token.adoc @@ -1,26 +1,20 @@ -// Module included in the following assemblies: -// obtaining-token/master.adoc -[id="proc-create-api-token"] -= Creating the API token in {HubName} +:_mod-docs-content-type: PROCEDURE +[id="proc-create-api-token_{context}"] += Creating the offline token in {HubName} -In {HubName}, you can create an API token by using *Token management*. The API token is a secret token used to protect your content. +In {HubName}, you can create an offline token using *Token management*. The offline token is a secret token used to protect your content, so be sure to store it in a secure location. .Procedure . 
Navigate to link:https://console.redhat.com/ansible/automation-hub/token/[{PlatformNameShort} on the Red Hat Hybrid Cloud Console]. . From the navigation panel, select menu:Automation Hub[Connect to Hub]. . Under *Offline token*, click btn:[Load Token]. -. Click the btn:[Copy to clipboard] icon to copy the API token. -. Paste the API token into a file and store in a secure location. +. Click the btn:[Copy to clipboard] icon to copy the offline token. +. Paste the token into a file and store in a secure location. -[IMPORTANT] -==== -The API token is a secret token used to protect your content. Store your API token in a secure location. -==== +.Additional resources +Your offline token expires after 30 days of inactivity. For more on obtaining a new offline token, see link:{URLHubManagingContent}/managing-cert-valid-content#con-offline-token-active_cloud-sync[Keeping your offline token active]. -The API token is now available for configuring {HubName} as your default collections server or for uploading collections by using the `ansible-galaxy` command line tool. +.Next step +The offline token is now available for configuring {HubName} as your default collections server or for uploading collections by using the `ansible-galaxy` command line tool. -[NOTE] -==== -The API token does not expire. -==== \ No newline at end of file diff --git a/downstream/modules/hub/proc-create-content-developers.adoc b/downstream/modules/hub/proc-create-content-developers.adoc index b17a2cb1b3..2b85217a62 100644 --- a/downstream/modules/hub/proc-create-content-developers.adoc +++ b/downstream/modules/hub/proc-create-content-developers.adoc @@ -1,26 +1,35 @@ +:_mod-docs-content-type: PROCEDURE [id="proc-create-content-developers"] -= Creating a new group for content curators += Creating a new team for content curators -You can create a new group in {PrivateHubName} designed to support content curation in your organization. This group can contribute internally developed collections for publication in {PrivateHubName}. +You can create a new team in {PlatformNameShort} designed to support content curation in your organization. This team can contribute internally-developed collections for publication in {PrivateHubName}. -To help content developers create a namespace and upload their internally developed collections to {PrivateHubName}, you must first create and edit a group and assign the required permissions. +To help content developers create a namespace and upload their internally developed collections to {PrivateHubName}, you must first create and edit a team and assign the required permissions. .Prerequisites -* You have administrative permissions in {PrivateHubName} and can create groups. +* You have administrative permissions in {PlatformNameShort} and can create teams. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to your {PrivateHubName}. -. From the navigation panel, select {MenuHubGroups} and click btn:[Create]. -. Enter *Content Engineering* as a *Name* for the group in the modal and click btn:[Create]. You have created the new group and the *Groups* page opens. -. On the *Permissions* tab, click btn:[Edit]. -. Under *Namespaces*, add permissions for *Add Namespace*, *Upload to Namespace*, and *Change Namespace*. -. Click btn:[Save]. + +. Log in to your {PlatformNameShort}. +. From the navigation panel, select {MenuAMTeams} and click btn:[Create team]. +. 
Enter *Content Engineering* as a *Name* for the team. +. Select an *Organization* for the team. +. Click btn:[Create team]. You have created the new team and the team Details page opens. +. Select the *Roles* tab and then select the *Automation Content* tab. +. Click btn:[Add roles]. +. Select *Namespace* from the *Resource type* list and click btn:[Next]. +. Select the namespaces that will receive the new roles and click btn:[Next]. +. Select the roles to apply to the selected namespaces and click btn:[Next]. +. Review your selections and click btn:[Finish]. +. Click btn:[Close] to complete the process. + -The new group is created with the permissions that you assigned. You can then add users to the group. +The new team is created with the permissions that you assigned. You can then add users to the team. + -. Click the *Users* tab on the *Groups* page. -. Click btn:[Add]. -. Select users and click btn:[Add]. +. Click the *Users* tab on the *Teams* page. +. Click btn:[Add users]. +. Select users and click btn:[Add users]. + +For further instructions on managing access with teams, see link:{URLCentralAuth}/gw-managing-access#assembly-controller-teams_gw-manage-rbac[Teams] in the {TitleCentralAuth} guide. \ No newline at end of file diff --git a/downstream/modules/hub/proc-create-credential.adoc b/downstream/modules/hub/proc-create-credential.adoc index 53aa31498b..d384b9cb26 100644 --- a/downstream/modules/hub/proc-create-credential.adoc +++ b/downstream/modules/hub/proc-create-credential.adoc @@ -1,25 +1,26 @@ +:_mod-docs-content-type: PROCEDURE [id="proc-create-credential"] -= Creating a credential in {ControllerName} += Creating a credential -To pull container images from a password or token-protected registry, you must create a credential in {ControllerName}. +To pull {ExecEnvName} images from a password or token-protected registry, you must create a credential. In earlier versions of {PlatformNameShort}, you were required to deploy a registry to store {ExecEnvShort} images. -On {PlatformNameShort} 2.0 and later, the system operates as if you already have a container registry up and running. -To store {ExecEnvShort} images, add the credentials of only your selected container registries. +On {PlatformNameShort} 2.0 and later, the system operates as if you already have a remote registry up and running. +To store {ExecEnvShort} images, add the credentials of only your selected remote registries. .Procedure -// For 2.5 this will be Log in to Ansible Automation Platform. From the navigation panel select Access Management > Credentials. Select the Automation Execution tab -. Navigate to {ControllerName}. -. From the navigation panel, select {MenuAMCredentials}. -. Click btn:[Add] to create a new credential. + +. Log in to {PlatformNameShort}. +. From the navigation panel, select {MenuAECredentials}. +. Click btn:[Create credential] to create a new credential. . Enter an authorization *Name*, *Description*, and *Organization*. -. Select the *Credential Type*. -. Enter the *Authentication URL*. This is the container registry address. -. Enter the *Username* and *Password or Token* required to log in to the container registry. +. In the *Credential Type* drop-down, select *Container Registry*. +. Enter the *Authentication URL*. This is the remote registry address. +. Enter the *Username* and *Password or Token* required to log in to the remote registry. . Optional: To enable SSL verification, select *Verify SSL*. -. Click btn:[Save]. +. Click btn:[Create credential]. 
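Before saving the credential, it can help to confirm the registry address and login details from a terminal. A quick sketch, assuming `registry.redhat.io` as the remote registry and `myuser` as a placeholder username:

----
# Verify that the registry accepts these credentials before entering them in the UI
$ podman login registry.redhat.io
Username: myuser
Password:
Login Succeeded!
----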
-Filling in at least one of the fields organization, user, or team is mandatory, and can be done through the user interface +Filling in at least one of the *Organization*, *User*, or *Team* fields is mandatory, and can be done through the user interface. //[dcd-This should be replaced with a link; otherwise, it's not helpful]For more information, please reference the Pulling from Protected Registries section of the Execution Environment documentation. diff --git a/downstream/modules/hub/proc-create-groups.adoc b/downstream/modules/hub/proc-create-groups.adoc index 352abdfce1..336d2211d6 100644 --- a/downstream/modules/hub/proc-create-groups.adoc +++ b/downstream/modules/hub/proc-create-groups.adoc @@ -2,9 +2,9 @@ // obtaining-token/master.adoc [id="proc-create-group"] -= Creating a new group in {PrivateHubName} += Creating a new team in {PrivateHubName} -You can create and assign permissions to a group in {PrivateHubName} that enables users to access specified features in the system. -By default, the *Admin* group in the {HubName} has all permissions assigned and is available on initial login. Use the credentials created when installing {PrivateHubName}. +You can create a team in {PrivateHubName} and assign it permissions that enable users to access specified features in the system. +By default, new teams do not have any assigned permissions. You can add permissions when first creating a team or edit an existing team to add or remove permissions. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/getting_started_with_automation_hub/index#proc-create-group[Creating a new group in {PrivateHubName}] in the Getting started with {HubName} guide. +For more information, see link:{URLCentralAuth}/gw-managing-access#assembly-controller-teams_gw-manage-rbac[Teams] in the {TitleCentralAuth} guide. diff --git a/downstream/modules/hub/proc-create-namespace.adoc b/downstream/modules/hub/proc-create-namespace.adoc index f101a287f0..2d071e15dd 100644 --- a/downstream/modules/hub/proc-create-namespace.adoc +++ b/downstream/modules/hub/proc-create-namespace.adoc @@ -1,20 +1,27 @@ +:_mod-docs-content-type: PROCEDURE [id="proc-create-namespace"] = Creating a namespace You can create a namespace to organize collections that your content developers upload to {HubName}. -When creating a namespace, you can assign a group in {HubName} as owners of that namespace. +When creating a namespace, you can assign a team in {HubName} as owners of that namespace. .Prerequisites * You have *Add Namespaces* and *Upload to Namespaces* permissions. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to your {PrivateHubName}. + +. Log in to your {PlatformNameShort}. . From the navigation panel, select {MenuACNamespaces}. -. Click btn:[Create] and enter a *namespace name*. -. Assign a group of *Namespace owners*. -. Click btn:[Create]. +. Click btn:[Create namespace] and enter a *Name* for your namespace. +. Optional: Enter a description, company, logo URL, resources, or useful links in the appropriate fields. +. Click btn:[Create namespace]. +. Select the *Team Access* tab and click btn:[Add roles] to assign roles to your namespace. +. Select the team to which you want to grant a role, then click btn:[Next]. +. Select the roles you want to apply to the selected team, and then click btn:[Next]. +. Review your selections and click btn:[Finish]. +. 
Click btn:[Close] to complete the process. -Your content developers can now upload collections to your new namespace and allow users in groups assigned as owners to upload collections. +.Next steps +Your content developers can now upload collections to your new namespace and allow users in teams assigned as owners to upload collections. diff --git a/downstream/modules/hub/proc-create-remote.adoc b/downstream/modules/hub/proc-create-remote.adoc index 2629860411..c6f5c9358c 100644 --- a/downstream/modules/hub/proc-create-remote.adoc +++ b/downstream/modules/hub/proc-create-remote.adoc @@ -1,6 +1,4 @@ -// Module included in the following assemblies: -// assembly-remote-management.adoc - +:_mod-docs-content-type: PROCEDURE [id="proc-create-remote_{context}"] = Creating a remote configuration in {HubName} @@ -8,18 +6,20 @@ You can use {PlatformName} to create a remote configuration to an external collection source. Then, you can sync the content from those collections to your custom repositories. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to {HubName}. + +. Log in to {PlatformNameShort}. . From the navigation panel, select {MenuACAdminRemotes}. -. Click btn:[Add Remote]. +. Click btn:[Create Remote]. . Enter a *Name* for the remote configuration. . Enter the *URL* for the remote server, including the path for the specific repository. + [NOTE] ==== -To find the remote server URL and repository path, navigate to {MenuACAdminRepositories}, select your repository, and click btn:[Copy CLI configuration]. +To find the remote server URL and repository path, navigate to {MenuACAdminRepositories}, select the btn:[More Actions] icon *{MoreActionsIcon}*, and select btn:[Copy CLI configuration]. ==== + +. To sync signed collections only, check the box labeled *Signed collections only*. +. To sync dependencies, check the box labeled *Sync all dependencies*. To turn off dependency syncing, leave this box unchecked. . Configure the credentials to the remote server by entering a *Token* or *Username* and *Password* required to access the external collection. + [NOTE] @@ -28,7 +28,7 @@ To generate a token from the navigation panel, select {MenuACAPIToken}, click bt ==== + . To access collections from {Console}, enter the *SSO URL* to sign in to the identity provider (IdP). -. Select or create a *YAML requirements* file to identify the collections and version ranges to synchronize with your custom repository. For example, to download only the kubernetes and AWS collection versions 5.0.0 or later the requirements file would look like this: +. Select or create a *Requirements file* to identify the collections and version ranges to synchronize with your custom repository. For example, to download only the kubernetes and AWS collection versions 5.0.0 or later the requirements file would look like this: + ----- Collections: @@ -37,12 +37,8 @@ Collections: version:”>=5.0.0” ----- + -[NOTE] -==== -All collection dependencies are downloaded during the Sync process. -==== -+ -. Optional: To configure your remote further, use the options available under *Advanced configuration*: + +. Optional: To configure your remote further, use the options available under *Show advanced options*: .. If there is a corporate proxy in place for your organization, enter a *Proxy URL*, *Proxy Username* and *Proxy Password*. .. 
Enable or disable transport layer security using the *TLS validation* checkbox. .. If digital certificates are required for authentication, enter a *Client key* and *Client certificate*. diff --git a/downstream/modules/hub/proc-create-repository.adoc b/downstream/modules/hub/proc-create-repository.adoc index 1bb21abc7e..bf2f4453b2 100644 --- a/downstream/modules/hub/proc-create-repository.adoc +++ b/downstream/modules/hub/proc-create-repository.adoc @@ -1,6 +1,4 @@ -// Module included in the following assemblies: -// assembly-basic-repo-management.adoc - +:_mod-docs-content-type: PROCEDURE [id="proc-create-repository"] = Creating a custom repository in {HubName} @@ -8,13 +6,13 @@ When you use {PlatformName} to create a repository, you can configure the repository to be private or hide it from search results. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to {HubName}. + +. Log in to {PlatformNameShort}. . From the navigation panel, select {MenuACAdminRepositories}. -. Click btn:[Add repository]. -. Enter a *Repository name*. +. Click btn:[Create repository]. +. Enter a *Name* for your repository. . In the *Description* field, describe the purpose of the repository. -. To retain previous versions of your repository each time you make a change, select *Retained number of versions*. The number of retained versions can range anywhere between 0 and unlimited. To save all versions, leave this set to null. +. To retain previous versions of your repository each time you make a change, enter a figure in the field labeled *Retained number of versions*. The number of retained versions can range anywhere between 0 and unlimited. To save all versions, leave this set to null. + [NOTE] ==== @@ -27,10 +25,10 @@ Staging:: Anyone is allowed to publish automation content into the repository. Approved:: Collections added to this repository are required to go through the approval process by way of the staging repository. When auto approve is enabled, any collection uploaded to a staging repository is automatically promoted to all of the approved repositories. None:: Any user with permissions on the repository can publish to the repository directly, and the repository is not part of the approval pipeline. + -. Optional: To hide the repository from search results, select *Hide from search*. This option is selected by default. +. Optional: To hide the repository from search results, select *Hide from search*. . Optional: To make the repository private, select *Make private*. This hides the repository from anyone who does not have permissions to view the repository. -. To sync the content from a remote repository into this repository, select *Remote* and select the remote that contains the collections you want included in your custom repository. For more information, see xref:proc-basic-repo-sync[Repository sync]. -. Click btn:[Save]. +. To sync the content from a remote repository into this repository, in the *Remote* field select the remote that contains the collections you want included in your custom repository. For more information, see xref:proc-basic-repo-sync[Repository sync]. +. Click btn:[Create repository]. 
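After the repository exists, content can also be published to it from the command line rather than through the UI. A minimal sketch, assuming a repository named `my-custom-repo` on a hub at `hub.example.com` (both placeholders) and an API token already configured for `ansible-galaxy`:

----
# Publish a collection tarball to the custom repository's content endpoint
$ ansible-galaxy collection publish my_namespace-my_collection-1.0.0.tar.gz \
  -s https://hub.example.com/api/galaxy/content/my-custom-repo/
----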
[role="_additional-resources"] .Next steps diff --git a/downstream/modules/hub/proc-create-requirements-file.adoc b/downstream/modules/hub/proc-create-requirements-file.adoc new file mode 100644 index 0000000000..ad939103c7 --- /dev/null +++ b/downstream/modules/hub/proc-create-requirements-file.adoc @@ -0,0 +1,33 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2024-09-18 +:_mod-docs-content-type: PROCEDURE + +[id="create-requirements-file_{context}"] += Creating a requirements file + +Use a requirements file to add collections to your {HubName}. Requirements files are in YAML format and list the collections that you want to install in your {HubName}. + +A standard `requirements.yml` file contains the following parameters: + +* `name`: the name of the collection formatted as `.` +* `version`: the collection version number + +.Procedure + +* Create your requirements file. ++ +In YAML format, collection information in your requirements file should look like this: ++ +[source,bash] +---- +collections: + name: namespace.collection_name + version: 1.0.0 +---- +[Important] +==== +Be sure to specify the collection version number, otherwise you will sync all collection versions. Syncing all versions can require more space than expected. +==== + +.Next step +To sync the collections in your requirements file, follow the steps in link:{URLHubManagingContent}/managing-cert-valid-content#proc-create-synclist[Syncing Ansible content collections]. \ No newline at end of file diff --git a/downstream/modules/hub/proc-create-synclist.adoc b/downstream/modules/hub/proc-create-synclist.adoc index 5239b778db..858b1415e5 100644 --- a/downstream/modules/hub/proc-create-synclist.adoc +++ b/downstream/modules/hub/proc-create-synclist.adoc @@ -1,19 +1,25 @@ -// Module included in the following assemblies: -// obtaining-token/master.adoc +:_mod-docs-content-type: PROCEDURE [id="proc-create-synclist"] -= Creating a synclist of Red Hat {CertifiedName} += Syncing Ansible content collections -You can create a synclist of curated {CertifiedCon} in {HubNameMain} on {Console}. +You can sync certified and validated collections in {HubNameMain} on {Console}. //[ddacosta]This needs to be checked. I don't see a Repositories selection in the console verion. I think the way I've rewritten is correct. -Your synclist repository is located on the {HubName} navigation panel under {MenuACAdminRepositories}, which is updated whenever you manage content within {CertifiedName}. +// [hherbly] Looks like there is no synclist info in console or the test instance; commenting out this info for 2.5 +// Your synclist repository is located on the {HubName} navigation panel under {MenuACAdminRepositories}, which is updated whenever you manage content within {CertifiedName}. -All {CertifiedName} are included by default in your initial organization synclist. +//All {CertifiedName} are included by default in your initial organization synclist. + +[NOTE] +==== +When syncing content, keep in mind that {HubName} does not check other repositories for dependencies. To avoid an error, turn off dependency downloading by editing your remote settings. See link:{URLHubManagingContent}/managing-collections-hub#proc-create-remote_remote-management[Creating a remote configuration in {HubName}] for more information. +==== .Prerequisites * You have a valid {PlatformNameShort} subscription. -* You have Organization Administrator permissions for {Console}. +* You have organization administrator permissions for {Console}. 
+* You have created a link:{URLHubManagingContent}/managing-cert-valid-content#create-requirements-file_cloud-sync[requirements file]. * The following domain names are part of either the firewall or the proxy's allowlist. They are required for successful connection and download of collections from {HubName} or Galaxy server: ** `galaxy.ansible.com` @@ -26,11 +32,28 @@ The following domain names must be in the allow list: ** `ansible-galaxy.s3.amazonaws.com` * SSL inspection is disabled either when using self signed certificates or for the Red Hat domains. +[IMPORTANT] +==== +Before you begin your content sync, consult the Knowledgebase article link:https://access.redhat.com/articles/7118757[Resource requirements for syncing automation content] to ensure that you have the resources to sync the collections you need. +==== + .Procedure -// ddacosta I don't know if a change will be needed here for Gateway as this is referring to the Console version of Hub. Will console pull in nav changes? Also, there is no repositories selection on the console version right now. -. Log in to `{Console}`. -. Navigate to menu:Automation Hub[Collections]. -. Set the toggle switch on each collection to exclude or include it on your synclist. -. To initiate the remote repository synchronization, navigate to {HubName} and select {MenuACAdminRepositories}. -. Click the btn:[More Actions] icon *{MoreActionsIcon}* and select *Sync* to initiate the remote repository synchronization to your {PrivateHubName}. -. Optional: If your remote repository is already configured, update the collections content that you made available to local users by manually synchronizing Red Hat {CertifiedName} to your {PrivateHubName}. + +. From the navigation panel, select {MenuACAdminRemotes}. +. Find the remote you want to sync from and click the pencil icon image:leftpencil.png[Edit,15,15] to edit. +. Find the field labeled *Requirements file*. There, you can either paste the contents of your requirements file, or upload the file from your hard drive by clicking the upload button. +. Click btn:[Save remote]. +. To begin synchronization, from the navigation panel select {MenuACAdminRepositories}. +. In the row containing the repository you want to sync, click the btn:[More Actions] icon *{MoreActionsIcon}* and select the image:sync.png[Sync repository,15,15] *Sync repository* option to initiate the remote repository synchronization to your {PrivateHubName}. +. On the modal that appears, you can toggle the following options: +* *Mirror*: Select if you want your repository content to mirror the remote repository's content. +* *Optimize*: Select if you want to sync only when no changes are reported by the remote server. +. Click btn:[Sync] to complete the sync. + +.Verification The *Sync status* column updates to notify you whether the synchronization is successful. + +* Navigate to {MenuACCollections} to confirm that your collections content has synchronized successfully. diff --git a/downstream/modules/hub/proc-delete-collection.adoc b/downstream/modules/hub/proc-delete-collection.adoc index c364a4bfa3..87f1c7c6e4 100644 --- a/downstream/modules/hub/proc-delete-collection.adoc +++ b/downstream/modules/hub/proc-delete-collection.adoc @@ -1,7 +1,7 @@ - +:_mod-docs-content-type: PROCEDURE [id="delete-collection"] -= Deleting a collection on {HubName} += Deleting a collection You can further manage your collections by deleting unwanted collections, if the collection is not dependent on other collections. 
The *Dependencies* tab on a collection displays a list of other collections that use the current collection. @@ -10,19 +10,15 @@ You can further manage your collections by deleting unwanted collections, if the * You have *Delete Collections* permissions. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to your {PrivateHubName}. + +. Log in to your {PlatformNameShort}. . From the navigation panel, select {MenuACCollections}. . Before deleting the collection, check to see if it has collections that are dependent on it: ** Click the *Dependencies* tab for that collection. If it is blank, you will be able to delete the collection. If the *Dependencies* tab is not blank, you must delete these dependencies before you can delete the collection. . Click the collection to delete. . Click the btn:[More Actions] icon *{MoreActionsIcon}*, and then select an option: -.. *Delete entire collection* to delete all versions in this collection. -.. *Delete version [number]* to delete the current version of this collection. You can change versions by using the *Version* drop-down menu. -+ -[NOTE] -==== -If the selected collection has any dependencies with other collections, these actions are disabled until you delete those dependencies. Click the *Dependencies* tab to see a list of dependencies to delete. -==== -+ +.. *Delete version from system* removes the specific version of the collection from the entire instance, including all repositories and namespaces. +.. *Delete version from repository* removes the specific version of the collection from the repository where it was uploaded. This does not affect the collection in other repositories or namespaces. +.. *Delete entire collection from repository* removes all versions of the entire collection from the repository where it was uploaded, but does not affect other repositories or namespaces. +.. *Delete entire collection from system* removes all versions of the entire collection from the instance, including all repositories and namespaces. . When the confirmation window opens, verify that the collection or version number is correct, and then select *Delete*. diff --git a/downstream/modules/hub/proc-delete-namespace.adoc b/downstream/modules/hub/proc-delete-namespace.adoc index 2498bcbd06..1cbd6dbb80 100644 --- a/downstream/modules/hub/proc-delete-namespace.adoc +++ b/downstream/modules/hub/proc-delete-namespace.adoc @@ -1,23 +1,27 @@ -// Module included in the following assemblies: -// assembly-working-with-namespaces.adoc +:_mod-docs-content-type: PROCEDURE [id="proc-delete-namespace"] = Deleting a namespace You can delete unwanted namespaces to manage storage on your {HubName} server. -You must first ensure that the namespace does not contain a collection with dependencies. +You must first ensure that the namespace you want to delete does not contain a collection with dependencies. .Prerequisites * The namespace you are deleting does not have a collection with dependencies. * You have *Delete namespace* permissions. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to your {PrivateHubName}. + +. Log in to your {PlatformNameShort}. . From the navigation panel, select {MenuACNamespaces}. . Click the namespace to be deleted. . 
Click the btn:[More Actions] icon *{MoreActionsIcon}*, then click btn:[Delete namespace]. + -NOTE: If the btn:[Delete namespace] button is disabled, the namespace contains a collection with dependencies. Review the collections in this namespace, and delete any dependencies. See link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_hub/uploading-content-hub#delete-collection[Deleting a collection on automation hub] for information. +[NOTE] +==== +If the btn:[Delete namespace] button is disabled, the namespace contains a collection with dependencies. Review the collections in this namespace, and delete any dependencies. +==== +// hherbly: LINK NEEDS UPDATE See link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_hub/uploading-content-hub#delete-collection[Deleting a collection on automation hub] for information. +.Result The namespace that you deleted, as well as its associated collections, is removed from the namespace list view. diff --git a/downstream/modules/hub/proc-deploying-your-system-for-container-signing.adoc b/downstream/modules/hub/proc-deploying-your-system-for-container-signing.adoc index bfb33d4014..671beef939 100644 --- a/downstream/modules/hub/proc-deploying-your-system-for-container-signing.adoc +++ b/downstream/modules/hub/proc-deploying-your-system-for-container-signing.adoc @@ -1,11 +1,13 @@ - +:_mod-docs-content-type: PROCEDURE [id="deploying-your-system-for-container-signing"] = Deploying your system for container signing -{HubNameStart} implements image signing to offer better security for the {ExecEnvShort} container images. -To deploy your system so that it is ready for container signing, create a signing script. +To deploy your system so that it is ready for container signing, first ensure that you have +link:{URLContainerizedInstall}/aap-containerized-installation#enabling-automation-hub-collection-and-container-signing_aap-containerized-installation[enabled automation content collection and container signing]. +Then you can create a signing script, or +link:{URLHubManagingContent}/managing-containers-hub#adding-an-execution-environment[add and sign an {ExecEnvShort}] manually. [NOTE] ==== @@ -58,10 +60,8 @@ automationhub_container_signing_service_key = /absolute/path/to/key/to/sign automationhub_container_signing_service_script = /absolute/path/to/script/that/signs ----- + -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Once installation is complete, navigate to your {HubName}. -. From the navigation panel, select {MenuACAdminSignatureKeys}. +. Once installation is complete, log in to {PlatformNameShort} and navigate to {MenuACAdminSignatureKeys}. . Ensure that you have a key titled *container-default*, or *container*-_anyname_. 
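If you do not already have a signing key pair, one way to produce the key referenced by `automationhub_container_signing_service_key` is to generate it with GPG before running the installer. A minimal sketch; the key parameters and email address are illustrative only:

----
# Generate a signing key non-interactively, then export the private key for the installer
$ gpg --batch --gen-key <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 4096
Name-Real: Automation Hub Container Signing
Name-Email: signing@example.com
Expire-Date: 0
%commit
EOF
$ gpg --armor --export-secret-keys signing@example.com > signing-key.asc
----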
diff --git a/downstream/modules/hub/proc-downloading-signature-public-keys.adoc b/downstream/modules/hub/proc-downloading-signature-public-keys.adoc index 08cfdc1e0a..6f439fed95 100644 --- a/downstream/modules/hub/proc-downloading-signature-public-keys.adoc +++ b/downstream/modules/hub/proc-downloading-signature-public-keys.adoc @@ -1,15 +1,14 @@ -//this module appears in assembly-collections-and-content-signing-in-pah - +:_mod-docs-content-type: PROCEDURE [id="proc-downloading-signature-public-keys"] = Downloading signature public keys -After you sign and approve collections, download the signature public keys from the {HubName} UI. +After you sign and approve collections, download the signature public keys from the {PlatformNameShort} UI. You must download the public key before you add it to the local system keyring. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to your {HubName}. + +. Log in to {PlatformNameShort}. . From the navigation panel, select {MenuACAdminSignatureKeys}. The Signature Keys dashboard displays a list of multiple keys: collections and container images. @@ -19,8 +18,8 @@ The Signature Keys dashboard displays a list of multiple keys: collections and c . Choose one of the following methods to download your public key: -* Select the menu icon and click btn:[Download Key] to download the public key. -* Select the public key from the list and click the _Copy to clipboard_ icon. -* Click the drop-down menu under the *_Public Key_* tab and copy the entire public key block. +* Click the btn:[Download Key] icon to download the public key. +* Click the btn:[Copy to clipboard] icon next to the public key you want to copy. +.Verification Use the public key that you copied to verify the content collection that you are installing. diff --git a/downstream/modules/hub/proc-edit-namespace.adoc b/downstream/modules/hub/proc-edit-namespace.adoc index f5b05842a6..97b25c4302 100644 --- a/downstream/modules/hub/proc-edit-namespace.adoc +++ b/downstream/modules/hub/proc-edit-namespace.adoc @@ -1,22 +1,23 @@ -// Module included in the following assemblies: -// obtaining-token/master.adoc +:_mod-docs-content-type: PROCEDURE [id="proc-edit-namespace"] = Adding additional information and resources to a namespace -You can add information and provide resources for your users to accompany collections included in the namespace. Add a logo and a description, and link users to your GitHub repository, issue tracker, or other online assets. You can also enter markdown text in the *Edit resources* tab to include more information. This is helpful to users who use your collection in their automation tasks. +You can add information and provide resources for your users to accompany collections included in the namespace. For example, you can add a logo and a description, and link users to your GitHub repository, issue tracker, or other online assets. You can also enter markdown text in the *Resources* field to include more information. This is helpful to users who use your collection in their automation tasks. .Prerequisites * You have *Change Namespaces* permissions. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to your {PrivateHubName}. + +. Log in to {PlatformNameShort}. . From the navigation panel, select {MenuACNamespaces}. -. 
Click the btn:[More Actions] icon *{MoreActionsIcon}* and select *Edit namespace*. -. In the *Edit details* tab, enter information in the fields. -. Click the *Edit resources* tab to enter markdown in the text field. -. Click btn:[Save]. +. Select the namespace you want to edit. +. Click btn:[Edit namespace]. +. Enter the relevant information in the fields. +. Optional: Enter markdown information in the *Resources* field. +. Click btn:[Save namespace]. -Your content developers can now upload collections to your new namespace, or allow users in groups assigned as owners to upload collections. +.Result +Your content developers can now upload collections to your new namespace, or allow users in teams assigned as owners to upload collections. diff --git a/downstream/modules/hub/proc-export-collection.adoc b/downstream/modules/hub/proc-export-collection.adoc index e4b8dfd37d..e3973abbec 100644 --- a/downstream/modules/hub/proc-export-collection.adoc +++ b/downstream/modules/hub/proc-export-collection.adoc @@ -1,6 +1,4 @@ -// Module included in the following assemblies: -// assembly-collection-import-export.adoc - +:_mod-docs-content-type: PROCEDURE [id="proc-export-collection"] = Exporting an automation content collection in {HubName} @@ -8,8 +6,8 @@ After collections are finalized, you can import them to a location where they can be distributed to others across your organization. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to {PrivateHubName}. + +. Log in to {PlatformNameShort}. . From the navigation panel, select {MenuACCollections}. The *Collections* page displays all collections across all repositories. You can search for a specific collection. -. Select the collection that you want to export. The collection details page opens. +. Click into the collection that you want to export. The collection details page opens. . From the *Install* tab, select *Download tarball*. The .tar file is downloaded to your default browser downloads folder. You can now import it to the location of your choosing. diff --git a/downstream/modules/hub/proc-import-collection.adoc b/downstream/modules/hub/proc-import-collection.adoc index b3891d5b00..dd5b555331 100644 --- a/downstream/modules/hub/proc-import-collection.adoc +++ b/downstream/modules/hub/proc-import-collection.adoc @@ -1,6 +1,4 @@ -// Module included in the following assemblies: -// assembly-collection-import-export.adoc - +:_mod-docs-content-type: PROCEDURE [id="proc-import-collection"] = Importing an automation content collection in {HubName} @@ -8,15 +6,17 @@ As an automation content creator, you can import a collection to use in a custom repository. To use a collection in your custom repository, you must first import the collection into your namespace so the {HubName} administrator can approve it. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to {HubName}. + +. Log in to {PlatformNameShort}. . From the navigation panel, select {MenuACNamespaces}. The *Namespaces* page displays all of the namespaces available. -. Click btn:[View Collections]. +. Select the namespace to which you want to add your collection. +. Select the *Collections* tab. . Click btn:[Upload Collection]. -. Navigate to the collection tarball file, select the file and click btn:[Open]. -. 
Click btn:[Upload]. +. Enter or browse to select a collection file. +. Select the repository pipeline to add the collection. The choices are *Staging repos* and *Repositories without pipeline*. +. Click btn:[Upload collection]. + -The *My Imports* screen displays a summary of tests and notifies you if the collection upload is successful or has failed. +The *Imports* screen displays a summary of tests and notifies you if the collection upload is successful or has failed. To find your imports, on your namespace click the btn:[More Actions] icon *{MoreActionsIcon}* and select *Imports*. + [NOTE] ==== diff --git a/downstream/modules/hub/proc-obtain-images.adoc b/downstream/modules/hub/proc-obtain-images.adoc index 56e3593dc4..dbc55dfcfe 100644 --- a/downstream/modules/hub/proc-obtain-images.adoc +++ b/downstream/modules/hub/proc-obtain-images.adoc @@ -1,15 +1,13 @@ - - +:_mod-docs-content-type: PROCEDURE [id="obtain-images"] - - -= Pulling images for use in {HubName} += Pulling {ExecEnvShort}s for use in {HubName} [role="_abstract"] -Before you can push container images to your {PrivateHubName}, you must first pull them from an existing registry and tag them for use. The following example details how to pull an image from the Red Hat Ecosystem Catalog (registry.redhat.io). +Before you can push {ExecEnvShort}s to your {PrivateHubName}, you must first pull them from an existing registry and tag them for use. The following example details how to pull an {ExecEnvShort} from the Red Hat Ecosystem Catalog (registry.redhat.io). .Prerequisites -You have permissions to pull images from registry.redhat.io. + +* You have permissions to pull {ExecEnvName} from registry.redhat.io. .Procedure @@ -20,17 +18,16 @@ $ podman login registry.redhat.io ----- + . Enter your username and password. -. Pull a container image: +. Pull an {ExecEnvShort}: + [subs="+quotes"] ----- -$ podman pull registry.redhat.io/____:____ +$ podman pull registry.redhat.io/____:____ ----- - .Verification -To verify that the image you recently pulled is contained in the list, take these steps: +To verify that the {ExecEnvShort} you recently pulled is contained in the list, take these steps: . List the images in local storage: + @@ -38,9 +35,10 @@ To verify that the image you recently pulled is contained in the list, take thes $ podman images ----- + -. Check the image name, and verify that the tag is correct. +. Check the {ExecEnvShort} name, and verify that the tag is correct. [role="_additional-resources"] .Additional resources -* See link:https://redhat-connect.gitbook.io/catalog-help/[Red Hat Ecosystem Catalog Help] for information on registering and getting images. +* See link:https://redhat-connect.gitbook.io/catalog-help/[Red Hat Ecosystem Catalog Help] for information on registering and getting {ExecEnvShort}s. + diff --git a/downstream/modules/hub/con-offline-token-active.adoc b/downstream/modules/hub/proc-offline-token-active.adoc similarity index 50% rename from downstream/modules/hub/con-offline-token-active.adoc rename to downstream/modules/hub/proc-offline-token-active.adoc index c52d398eb5..d9d2add50a 100644 --- a/downstream/modules/hub/con-offline-token-active.adoc +++ b/downstream/modules/hub/proc-offline-token-active.adoc @@ -1,15 +1,15 @@ - -[id="con-offline-token-active"] +:_mod-docs-content-type: PROCEDURE +[id="con-offline-token-active_{context}"] = Keeping your offline token active -Offline tokens expire after 30 days of inactivity. 
You can keep your offline token from expiring by periodically refreshing your offline token. +Offline tokens expire after 30 days of inactivity. You can keep your offline token from expiring by keeping it active. -Keeping an online token active is useful when an application performs an action on behalf of the user; for example, this allows the application to perform a routine data backup when the user is offline. +Keeping an offline token active is useful when an application performs an action on behalf of the user; for example, this allows the application to perform a routine data backup when the user is offline. [NOTE] ==== -If your offline token expires, you must request a new one. +If your offline token expires, you must xref:proc-create-api-token_cloud-sync[obtain a new one]. ==== .Procedure diff --git a/downstream/modules/hub/proc-provide-remote-access.adoc b/downstream/modules/hub/proc-provide-remote-access.adoc index 0677ef0640..d43d17f951 100644 --- a/downstream/modules/hub/proc-provide-remote-access.adoc +++ b/downstream/modules/hub/proc-provide-remote-access.adoc @@ -1,6 +1,4 @@ -// Module included in the following assemblies: -// assembly-remote-management.adoc - +:_mod-docs-content-type: PROCEDURE [id="proc-provide-remote-access_{context}"] = Providing access to a remote configuration @@ -8,11 +6,12 @@ After you create a remote configuration, you must provide access to it before anyone can use it. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to {PrivateHubName}. + +. Log in to {PlatformNameShort}. . From the navigation panel, select {MenuACAdminRemotes}. -. Locate your repository in the list, click the btn:[More Actions] icon *{MoreActionsIcon}*, and select *Edit*. -. Select the *Access* tab. -. Select a group for *Repository owners*. See link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_hub/assembly-user-access[Configuring user access for your {PrivateHubName}] for information about implementing user access. -. Select the appropriate roles for the selected group. -. Click btn:[Save]. +. Click into your repository in the list, and then select the *Team Access* tab. +. Click btn:[Add roles]. +. Select the team to which you want to grant a role, then click btn:[Next]. +. Select the roles you want to apply to the selected team, and then click btn:[Next]. +. Review your selections and click btn:[Finish]. +. Click btn:[Close] to complete the process. diff --git a/downstream/modules/hub/proc-provide-repository-access.adoc b/downstream/modules/hub/proc-provide-repository-access.adoc index d69c7ae3f1..83c6854c17 100644 --- a/downstream/modules/hub/proc-provide-repository-access.adoc +++ b/downstream/modules/hub/proc-provide-repository-access.adoc @@ -1,6 +1,4 @@ -// Module included in the following assemblies: -// assembly-basic-repo-management.adoc - +:_mod-docs-content-type: PROCEDURE [id="proc-provide-repository-access"] = Providing access to a custom {HubName} repository @@ -8,14 +6,16 @@ By default, private repositories and the automation content collections are hidden from all users in the system. Public repositories can be viewed by all users, but cannot be modified. Use this procedure to provide access to your custom repository. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. 
From the navigation ... + . Log in to {PrivateHubName}. . From the navigation panel, select {MenuACAdminRepositories}. -. Locate your repository in the list and click the btn:[More Actions] icon *{MoreActionsIcon}*, then select *Edit*. -. Select the *Access* tab. -. Select a group for *Repository owners*. -+ -See link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_hub/assembly-user-access[Configuring user access for your {PrivateHubName}] for information about implementing user access. -+ -. Select the roles you want assigned to the selected group. -. Click btn:[Save]. +. Click into your repository in the list and select the *Team Access* tab. +. Click btn:[Add roles]. +. Select the team to which you want to grant a role, then click btn:[Next]. +. Select the roles you want to apply to the selected team, and then click btn:[Next]. +. Review your selections and click btn:[Finish]. +. Click btn:[Close] to complete the process. + +.Additional resources +See {LinkCentralAuth} for more information about implementing user access. + diff --git a/downstream/modules/hub/proc-pull-image.adoc b/downstream/modules/hub/proc-pull-image.adoc index 80bd9c4509..108e1d7e61 100644 --- a/downstream/modules/hub/proc-pull-image.adoc +++ b/downstream/modules/hub/proc-pull-image.adoc @@ -1,10 +1,11 @@ +:_mod-docs-content-type: PROCEDURE [id="pulling-image"] = Pulling an image [role="_abstract"] -You can pull images from the {HubName} container registry to make a copy to your local machine. +You can pull {ExecEnvName} from the {HubName} remote registry to make a copy on your local machine. .Prerequisites @@ -12,9 +13,9 @@ You can pull images from the {HubName} container registry to make a copy to your .Procedure -. If you are pulling container images from a password or token-protected registry, xref:proc-create-credential[create a credential in {ControllerName}] before pulling the image. +. If you are pulling {ExecEnvName} from a password or token-protected registry, xref:proc-create-credential[create a credential] before pulling the {ExecEnvName}. . From the navigation panel, select {MenuACExecEnvironments}. -. Select your container repository. +. Select your {ExecEnvName}. . In the *Pull this image* entry, click btn:[Copy to clipboard]. . Paste and run the command in your terminal. diff --git a/downstream/modules/hub/proc-push-container.adoc b/downstream/modules/hub/proc-push-container.adoc index 8ed64ec309..ab17cbb99c 100644 --- a/downstream/modules/hub/proc-push-container.adoc +++ b/downstream/modules/hub/proc-push-container.adoc @@ -1,26 +1,25 @@ - - +:_mod-docs-content-type: PROCEDURE [id="push-containers"] -= Pushing a container image to {PrivateHubName} += Pushing an {ExecEnvShort} to {PrivateHubName} [role="_abstract"] -You can push tagged container images to {PrivateHubName} to create new containers and populate the container registry. +You can push tagged {ExecEnvShort}s to {PrivateHubName} to create new containers and populate the remote registry. .Prerequisites * You have permissions to create new containers. -* You have the FQDN or IP address of the {HubName} instance. +* You have the FQDN or IP address of the {PlatformNameShort} instance. .Procedure -. 
Log in to Podman using your {PlatformNameShort} location and credentials: + [subs="+quotes"] ----- -$ podman login -u=____ -p=____ ____ +$ podman login -u=____ -p=____ ____ ----- + [WARNING] @@ -28,11 +27,11 @@ $ podman login -u=____ -p=____ ____ Let Podman prompt you for your password when you log in. Entering your password at the same time as your username can expose your password to the shell history. ==== + -. Push your container image to your {HubName} container registry: +. Push your {ExecEnvShort} to your {HubName} remote registry: + [subs="+quotes"] ----- -$ podman push ____/____ +$ podman push ____/____ ----- .Troubleshooting @@ -42,8 +41,8 @@ This may lead to image-layer digest changes and a failed push operation, resulti .Verification -. Log in to your {HubName}. -//[ddacosta] I see no such selection. Should this be changed to Execution Environments > Remote Registries? If so, replace with {MenuACAdminRemoteRegistries} -. Navigate to menu:Container Registry[]. +. Log in to your {PlatformNameShort}. + +. Navigate to {MenuACExecEnvironments}. . Locate the container in the container repository list. diff --git a/downstream/modules/hub/proc-pushing-container-images-from-your-local.adoc b/downstream/modules/hub/proc-pushing-container-images-from-your-local.adoc index 8cdc4ab79d..1028d10f55 100644 --- a/downstream/modules/hub/proc-pushing-container-images-from-your-local.adoc +++ b/downstream/modules/hub/proc-pushing-container-images-from-your-local.adoc @@ -1,56 +1,56 @@ - +:_mod-docs-content-type: PROCEDURE [id="pushing-container-images-from-your-local"] = Pushing container images from your local environment -Use the following procedure to sign images on a local system and push those signed images to the {HubName} registry. +Use the following procedure to sign an {ExecEnvNameSing} on a local system and push the signed {ExecEnvShort} to the {HubName} registry. .Procedure -. From a terminal, log into podman, or any container client currently in use: +. From a terminal, log in to Podman, or any container client currently in use: + ---- > podman pull ---- + -. After the image is pulled, add tags (for example: latest, rc, beta, or version numbers, such as 1.0; 2.3, and so on): +. After the {ExecEnvShort} is pulled, add tags (for example: latest, rc, beta, or version numbers, such as 1.0, 2.3, and so on): + ---- > podman tag /: ---- + -. Sign the image after changes have been made, and push it back up to the {HubName} registry: +. Sign the {ExecEnvShort} after changes have been made, and push it back up to the {HubName} registry: + ---- > podman push /: --tls-verify=false --sign-by ---- + -If the image is not signed, it can only be pushed with any current signature embedded. Alternatively, you can use the following script to push the image without signing it: +If you do not sign the {ExecEnvShort}, it is pushed with only the signatures, if any, that are already embedded in it. Alternatively, you can use the following command to push the {ExecEnvShort} without signing it: + ---- > podman push /: --tls-verify=false ---- + -. Once the image has been pushed, navigate to your {HubName}. - -. From the navigation panel, select {MenuACExecEnvironments}. +. After the {ExecEnvShort} has been pushed, navigate to {MenuACExecEnvironments}. . To display the new {ExecEnvShort}, click the *Refresh* icon. -. Click the name of the image to view your pushed image. +. Click the name of the image to view your pushed image. .Troubleshooting -The details page in {HubName} indicates whether or not an image has been signed.
If the details page indicates that an image is *Unsigned*, you can sign the image from {HubName} using the following steps: +The details page for each {ExecEnvShort} indicates whether it has been signed. If the details page indicates that an image is *Unsigned*, you can sign the {ExecEnvShort} from {HubName} using the following steps: -. Click the image name to navigate to the details page. +. Click the {ExecEnvShort} name to navigate to the details page. . Click the btn:[More Actions] icon *{MoreActionsIcon}*. Three options are available: +* *Sign {ExecEnvShort}* * *Use in Controller* -* *Delete* -* *Sign* +* *Delete {ExecEnvShort}* + -. Click *Sign* from the drop-down menu. +. Click *Sign {ExecEnvShort}* from the drop-down menu. -The signing service signs the image. -After the image is signed, the status changes to "signed". +.Verification +The signing service signs the {ExecEnvShort}. +After the {ExecEnvShort} is signed, the status changes to "signed". diff --git a/downstream/modules/hub/proc-reject-collections.adoc b/downstream/modules/hub/proc-reject-collections.adoc index 5f46dfb1da..354232ca04 100644 --- a/downstream/modules/hub/proc-reject-collections.adoc +++ b/downstream/modules/hub/proc-reject-collections.adoc @@ -1,10 +1,11 @@ +:_mod-docs-content-type: PROCEDURE [id="proc-reject-collections"] = Rejecting collections uploaded for review -You can reject collections uploaded to individual namespaces. All collections awaiting review are located under the *Approval* tab in the *Staging* repository. +You can reject collections uploaded to individual namespaces. All collections awaiting review are located in {MenuACAdminCollectionApproval}. -Collections requiring approval have the status *Needs review*. Click the *Version* to view the contents of the collection. +Collections requiring approval have the status *Needs review*. .Prerequisites @@ -13,7 +14,8 @@ Collections requiring approval have the status *Needs review*. Click the *Versio .Procedure . From the navigation panel, select {MenuACAdminCollectionApproval}. -. Locate the collection to review. -. Click btn:[Reject] to decline the collection. +. Find the collection you want to review in the list. You can also filter collections by Namespace, Repository, and Status using the search bar. +. Click the thumbs down icon to reject the collection. Confirm your choice in the modal that appears. +.Verification Collections you decline for publication are moved to the *Rejected* repository. diff --git a/downstream/modules/hub/proc-revert-repository-version.adoc b/downstream/modules/hub/proc-revert-repository-version.adoc index 0c5ea68389..c27503510f 100644 --- a/downstream/modules/hub/proc-revert-repository-version.adoc +++ b/downstream/modules/hub/proc-revert-repository-version.adoc @@ -1,6 +1,4 @@ -// Module included in the following assemblies: -// assembly-basic-repo-management.adoc - +:_mod-docs-content-type: PROCEDURE [id="proc-revert-repository-version"] = Revert to a different {HubName} repository version @@ -8,9 +6,9 @@ When automation content collections are added or removed from a repository, a new version is created. If a change to your repository causes a problem, you can revert to a previous version. Reverting is a safe operation and does not delete collections from the system, but rather, changes the content associated with the repository. The number of versions saved is defined in the *Retained number of versions* setting when a xref:proc-create-repository[repository is created]. 
.Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to {PrivateHubName}. + +. Log in to {PlatformNameShort}. . From the navigation panel, select {MenuACAdminRepositories}. -. Locate your repository in the list and click the btn:[More Actions] icon *{MoreActionsIcon}*, then select *Edit*. +. Click into your repository in the list and then select the *Versions* tab. . Locate the version you want to revert to and click the btn:[More Actions] icon *{MoreActionsIcon}*, and select *Revert to this version*. -. Click btn:[Revert]. +. Check the box confirming your selection, and then click btn:[Revert to repository version]. diff --git a/downstream/modules/hub/proc-review-collection-imports.adoc b/downstream/modules/hub/proc-review-collection-imports.adoc index 2911d6a6c6..c58580d966 100644 --- a/downstream/modules/hub/proc-review-collection-imports.adoc +++ b/downstream/modules/hub/proc-review-collection-imports.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: PROCEDURE [id="proc-review-collection-imports"] = Reviewing your namespace import logs @@ -14,11 +15,11 @@ Import log:: activities executed during the collection import * You have access to a namespace to which you can upload collections. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to your {PrivateHubName}. + +. Log in to your {PlatformNameShort}. . From the navigation panel, select {MenuACNamespaces}. . Select a namespace. -. Click the btn:[More Actions] icon *{MoreActionsIcon}* and select *My imports*. +. Click the btn:[More Actions] icon *{MoreActionsIcon}* and select *Imports*. . Use the search field or locate an imported collection from the list. . Click the imported collection. . Review collection import details to determine the status of the collection in your namespace. diff --git a/downstream/modules/hub/proc-set-community-remote.adoc b/downstream/modules/hub/proc-set-community-remote.adoc index d2353ad6d8..6d9a1a6b4e 100644 --- a/downstream/modules/hub/proc-set-community-remote.adoc +++ b/downstream/modules/hub/proc-set-community-remote.adoc @@ -1,15 +1,21 @@ -// Module included in the following assemblies: -// obtaining-token/master.adoc +:_mod-docs-content-type: PROCEDURE [id="proc-set-community-remote"] -= Configuring the community remote repository and syncing {Galaxy} collections +ifndef::operationG[] += Configuring the community remote repository to sync {Galaxy} collections You can edit the *community* remote repository to synchronize chosen collections from {Galaxy} to your {PrivateHubName}. By default, your {PrivateHubName} community repository directs to `galaxy.ansible.com/api/`. +endif::operationG[] +ifdef::operationG[] += Configuring Proxy settings on {HubName} + +If your private automation hub is behind a network proxy, you can configure proxy settings on the remote to sync content located outside of your local network. +endif::operationG[] .Prerequisites * You have *Modify Ansible repo content* permissions. -For more information on permissions, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_hub/assembly-user-access[Configuring user access for your {PrivateHubName}]. +For more information on permissions, see link:{LinkCentralAuth}. 
* You have a `requirements.yml` file that identifies those collections to synchronize from {Galaxy} as in the following example: + .Requirements.yml example @@ -22,18 +28,19 @@ collections: ----- .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to {HubName}. + +. Log in to {PlatformNameShort}. . From the navigation panel, select {MenuACAdminRemotes}. -. In the *Community* remote, click the btn:[More Actions] icon *{MoreActionsIcon}* and select *Edit*. -. In the *YAML requirements* field, click btn:[Browse] and locate the `requirements.yml` file on your local machine. -. Click btn:[Save]. -+ -You can now synchronize collections identified in your `requirements.yml` file from {Galaxy} to your {PrivateHubName}. +. In the *Details* tab in the *Community* remote, click btn:[Edit remote]. +. In the *YAML requirements* field, paste the contents of your `requirements.yml` file. +. Click btn:[Save remote]. + +.Result +You can now synchronize collections identified in your `requirements.yml` file from {Galaxy} to your {PrivateHubName}. + +.Next steps +See link:{URLHubManagingContent}/managing-cert-valid-content#assembly-synclists[Synchronizing Ansible content collections in {HubName}] for syncing steps. + -. Click the btn:[More Actions] icon *{MoreActionsIcon}* and select *Sync* to sync collections from {Galaxy} and {HubNameMain}. -.Verification -The *Sync status* notification updates to notify you of completion or failure of {Galaxy} collections synchronization to your {HubNameMain}. -* Select *Community* from the collections content drop-down list to confirm successful synchronization. diff --git a/downstream/modules/hub/proc-set-rhcertified-remote.adoc b/downstream/modules/hub/proc-set-rhcertified-remote.adoc index 7ddc8c7c99..9eb3d26061 100644 --- a/downstream/modules/hub/proc-set-rhcertified-remote.adoc +++ b/downstream/modules/hub/proc-set-rhcertified-remote.adoc @@ -1,39 +1,32 @@ -// Module included in the following assemblies: -// obtaining-token/master.adoc -[id="proc-set-rhcertified-remote"] -= Configuring the rh-certified remote repository and synchronizing {CertifiedColl} +:_mod-docs-content-type: PROCEDURE +[id="proc-set-rhcertified-remote_{context}"] += Configuring the rh-certified remote repository to sync {CertifiedName} You can edit the *rh-certified* remote repository to synchronize collections from {HubName} hosted on {Console} to your {PrivateHubName}. By default, your {PrivateHubName} `rh-certified` repository includes the URL for the entire group of {CertifiedName}. To use only those collections specified by your organization, a {PrivateHubName} administrator can upload manually-created requirements files from the `rh-certified` remote. -For more information about using requirements files, see link:https://docs.ansible.com/ansible/latest/collections_guide/collections_installing.html#install-multiple-collections-with-a-requirements-file[Install multiple collections with a requirements file] in the _Using Ansible collections_ guide. - If you have collections `A`, `B`, and `C` in your requirements file, and a new collection `X` is added to {Console} that you want to use, you must add `X` to your requirements file for {PrivateHubName} to synchronize it. - .Prerequisites * You have valid *Modify Ansible repo content* permissions. 
-For more information on permissions, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_hub/assembly-user-access[Configuring user access for your {PrivateHubName}]. +For more information on permissions, see link:{LinkCentralAuth}. * You have retrieved the Sync URL and API Token from the {HubName} hosted service on {Console}. -* You have configured access to port 443. This is required for synchronizing certified collections. For more information, see the {HubName} table in the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/ref-network-ports-protocols_planning[Network ports and protocols] chapter of the {PlatformName} Planning Guide. +* You have configured access to port 443. This is required for synchronizing certified collections. For more information, see the {HubName} table in the link:{URLPlanningGuide}/ref-network-ports-protocols_planning[Network ports and protocols] chapter of {TitlePlanningGuide}. .Procedure -//[ddacosta] For 2.5 this will be Log in to Ansible Automation Platform and select Automation Content. Automation hub opens in a new tab. From the navigation ... -. Log in to your {PrivateHubName}. + +. Log in to your {PlatformNameShort}. . From the navigation panel, select {MenuACAdminRemotes}. -. In the *rh-certified* remote repository, click the btn:[More Actions] icon *{MoreActionsIcon}* and click btn:[Edit]. +. In the *rh-certified* remote repository, click btn:[Edit remote]. . In the *URL* field, paste the *Sync URL*. . In the *Token* field, paste the token you acquired from {Console}. -. Click btn:[Save]. -+ -You can now synchronize collections between your organization synclist on {Console} and your {PrivateHubName}. -+ -. Click the btn:[More Actions] icon *{MoreActionsIcon}* and select *Sync*. +. Click btn:[Save remote]. -.Verification -The *Sync status* notification updates to notify you that the Red Hat Certified Content Collections synchronization is complete. +.Result +You can now synchronize collections from {Console} to your {PrivateHubName}. -* Select *Red Hat Certified* from the collections content drop-down list to confirm that your collections content has synchronized successfully. +.Next steps +See link:{URLHubManagingContent}/managing-cert-valid-content#assembly-synclists[Synchronizing Ansible content collections in {HubName}] for syncing steps. diff --git a/downstream/modules/hub/proc-sync-image.adoc b/downstream/modules/hub/proc-sync-image.adoc index 1cfc99be65..826e6ed38b 100644 --- a/downstream/modules/hub/proc-sync-image.adoc +++ b/downstream/modules/hub/proc-sync-image.adoc @@ -3,8 +3,8 @@ [id="proc-sync-image-adoc_{context}"] = Syncing images from a container repository -You can pull images from the {HubName} container registry to sync an image to your local machine. -To sync an image from a remote container registry, you must first configure a remote registry. +You can pull {ExecEnvName} from the {HubName} remote registry to sync an image to your local machine. +To sync an {ExecEnvNameSing} from a remote registry, you must first configure a remote registry. .Prerequisites @@ -20,16 +20,16 @@ You must have permission to view and pull from a private container repository. + [NOTE] ==== -Some container registries are aggressive with rate limiting. +Some remote registries are aggressive with rate limiting. Set a rate limit under *Advanced Options*. ==== + . From the navigation panel, select {MenuACExecEnvironments}. -. 
Click btn:[Add execution environment] in the page header. +. Click btn:[Create execution environment] in the page header. . Select the registry you want to pull from. -The *Name* field displays the name of the image displayed on your local registry. +The *Name* field displays the name of the {ExecEnvName} displayed on your local registry. + [NOTE] ==== @@ -38,7 +38,7 @@ For example, if the upstream name is set to "alpine" and the *Name* field is "lo ==== + . Set a list of tags to include or exclude. -Syncing images with a large number of tags is time consuming and uses a lot of disk space. +Syncing {ExecEnvName} with a large number of tags is time consuming and uses a lot of disk space. [role="_additional-resources"] .Additional resources diff --git a/downstream/modules/hub/proc-tag-image.adoc b/downstream/modules/hub/proc-tag-image.adoc index b71fa08364..ab2a925212 100644 --- a/downstream/modules/hub/proc-tag-image.adoc +++ b/downstream/modules/hub/proc-tag-image.adoc @@ -1,25 +1,23 @@ - - +:_mod-docs-content-type: PROCEDURE [id="proc-tag-image"] = Tagging container images [role="_abstract"] -Tag images to add an additional name to images stored in your {HubName} container repository. If no tag is added to an image, {HubName} defaults to `latest` for the name. +Tag {ExecEnvName} to add an additional name to {ExecEnvName} stored in your {HubName} container repository. If no tag is added to an {ExecEnvNameSing}, {HubName} defaults to `latest` for the name. .Prerequisites -* You have *change image tags* permissions. +* You have *change {ExecEnvNameSing} tags* permissions. .Procedure . From the navigation panel, select {MenuACExecEnvironments}. -. Select your container repository. +. Select your {ExecEnvName}. . Click the *Images* tab. . Click the btn:[More Actions] icon *{MoreActionsIcon}*, and click btn:[Manage tags]. . Add a new tag in the text field and click btn:[Add]. . Optional: Remove *current tags* by clicking btn:[x] on any of the tags for that image. -. Click btn:[Save]. .Verification * Click the *Activity* tab and review the latest changes. diff --git a/downstream/modules/hub/proc-tag-pulled-image.adoc b/downstream/modules/hub/proc-tag-pulled-image.adoc index b7e5ae9176..09bb55e4dd 100644 --- a/downstream/modules/hub/proc-tag-pulled-image.adoc +++ b/downstream/modules/hub/proc-tag-pulled-image.adoc @@ -1,36 +1,34 @@ - - +:_mod-docs-content-type: PROCEDURE [id="tag-pulled-images"] -= Tagging images for use in {HubName} += Tagging {ExecEnvShort}s for use in {HubName} [role="_abstract"] -After you pull images from a registry, tag them for use in your {PrivateHubName} container registry. +After you pull {ExecEnvShort}s from a registry, tag them for use in your {PrivateHubName} remote registry. .Prerequisites -* You have pulled a container image from an external registry. +* You have pulled an {ExecEnvShort} from an external registry. * You have the FQDN or IP address of the {HubName} instance. .Procedure -* Tag a local image with the {HubName} container repository: +* Tag a local {ExecEnvShort} with the {HubName} container repository: + [subs="+quotes"] ----- -$ podman tag registry.redhat.io/____:____ ____/____ +$ podman tag registry.redhat.io/____:____ ____/____ ----- .Verification - . List the images in local storage: + ----- $ podman images ----- + -. Verify that the image you recently tagged with your {HubName} information is contained in the list. +. Verify that the {ExecEnvShort} you recently tagged with your {HubName} information is contained in the list. 
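As a worked example of the tagging flow above, the following commands use a hypothetical image path and {HubName} FQDN; both values are placeholders for illustration and are not taken from this change:

-----
# Tag a pulled execution environment for a private automation hub at hub.example.com
$ podman tag registry.redhat.io/ansible-automation-platform-25/ee-supported-rhel9:latest hub.example.com/ee-supported-rhel9:latest

# Confirm the new hub-prefixed tag appears in local storage
$ podman images
-----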
diff --git a/downstream/modules/hub/proc-uploading-collections.adoc b/downstream/modules/hub/proc-uploading-collections.adoc index 0c648447fc..5bc461a046 100644 --- a/downstream/modules/hub/proc-uploading-collections.adoc +++ b/downstream/modules/hub/proc-uploading-collections.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: PROCEDURE [id="proc-uploading-collections"] = Uploading collections to your namespaces @@ -13,14 +14,18 @@ Format your collection file name as follows: account” and entering their credentials (username and password). If the login is successful, they may be prompted to link their account with another component account for example, {HubName} and {ControllerName}. If the login credentials are the same for both {HubName} and {ControllerName}, account linking is automatically done for that user. + +After successful account linking, user accounts from both components are merged into a `gateway:legacy external password` authenticator. If user accounts are not automatically merged into the `gateway:legacy external password` authenticator, you must auto migrate directly to LDAP without linking accounts. + +For more information about account linking, see link:{URLUpgrade}/aap-post-upgrade#account-linking_aap-post-upgrade[Linking your accounts]. \ No newline at end of file diff --git a/downstream/modules/platform/con-aap-migrate-LDAP-users.adoc b/downstream/modules/platform/con-aap-migrate-LDAP-users.adoc new file mode 100644 index 0000000000..aa86051dac --- /dev/null +++ b/downstream/modules/platform/con-aap-migrate-LDAP-users.adoc @@ -0,0 +1,24 @@ +:_mod-docs-content-type: CONCEPT + + + +[id="con-migrate-LDAP-users_{context}"] + += Migrating LDAP users + +[role="_abstract"] + +As a platform administrator upgrading from {PlatformNameShort} 2.4 to 2.5, you must migrate your LDAP user accounts if you want to continue using LDAP authentication capabilities after the upgrade. Follow the steps in this procedure to ensure the smoothest possible LDAP user migration. + +There are two primary scenarios for migrating users from legacy authentication systems to LDAP-based authentication: + +. Legacy user login and account linking +. Migration to LDAP without account linking + +== Key considerations + +*LDAP configurations are not migrated automatically during upgrade to 2.5:* While the legacy LDAP authentication settings are carried over during the upgrade process and allow seamless initial access to the platform after upgrade, LDAP configurations must be manually migrated over to a new {PlatformNameShort} 2.5 LDAP configuration. The legacy configuration acts as a reference to preserve existing authentication capabilities and facilitate the migration process. The legacy authentication configuration should not be modified directly or used after migration is complete. + +*UID collision risk:* LDAP and legacy password authenticators both use usernames as the UID. This can cause UID collisions between accounts that share the same username but are owned by different people. Any user accounts that cannot be safely auto-migrated because of UID conflicts must be migrated manually to ensure proper handling. You can manually migrate these users through the API `/api/gateway/v1/authenticator_users/` before setting auto-migrations.
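For instance, a minimal sketch of reviewing those records before enabling auto-migration, assuming the gateway API accepts basic authentication and using placeholder host and credential values:

-----
# List the user records attached to each configured authenticator
$ curl -s -u <admin_username>:<password> \
  https://<gateway_host>/api/gateway/v1/authenticator_users/
-----

Checking this output for duplicate usernames helps identify the accounts that require manual migration.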
+ +*Do not log in using legacy LDAP authentication if you do not have a user account in the platform prior to the upgrade:* Instead, you must link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#proc-migrate-LDAP-users[auto migrate directly to LDAP without linking accounts]. diff --git a/downstream/modules/platform/con-aap-migrate-SAML-users.adoc b/downstream/modules/platform/con-aap-migrate-SAML-users.adoc new file mode 100644 index 0000000000..11cc0bf04e --- /dev/null +++ b/downstream/modules/platform/con-aap-migrate-SAML-users.adoc @@ -0,0 +1,43 @@ +:_mod-docs-content-type: CONCEPT + + + +[id="con-migrate-SAML-users_{context}"] + += Migrating Single Sign-On (SSO) users + +[role="_abstract"] + +When upgrading from {PlatformNameShort} 2.4 to 2.5, you must migrate your Single Sign-On (SSO) user accounts if you want to continue using SSO capabilities after the upgrade. Follow the steps in this procedure to ensure a smooth SSO user migration. + +== Key considerations + +*SSO configurations are not migrated automatically during upgrade to 2.5:* While the legacy authentication settings are carried over during the upgrade process and allow seamless initial access to the platform after upgrade, SSO configurations must be manually migrated over to a new {PlatformNameShort} 2.5 authentication configuration. The legacy configuration acts as a reference to preserve existing authentication capabilities and facilitate the migration process. The legacy authentication configuration should not be modified directly or used after migration is complete. + +*SSO migration is supported in the UI:* Migration of legacy SSO accounts is supported in the 2.5 UI, and is done by selecting your legacy authenticator from the *Auto migrate users from* list when you configure a new authentication method. This is the legacy authenticator from which to automatically migrate users to a new {Gateway} authentication configuration. + +*Migration of SSO must happen before users log in and start account linking:* You must enable the *Auto migrate users to* setting _after_ configuring SSO in 2.5 and _before_ any users log in. + +//Removed for AAP-41494 +//.Prerequisites +//You have configured a SSO authentication method in the {Gateway} following the steps in link:{URLCentralAuth}/gw-configure-authentication#gw-config-authentication-type[Configuring an authentication type]. This will be the configuration that you will migrate your previous SSO users to. + +[NOTE] +==== +{PlatformNameShort} 2.4 SSO configurations are renamed during the upgrade process and are displayed in the *Authentication Methods* list view with a prefix to indicate a legacy configuration: for example, `legacy_sso-saml-`. The *Authentication type* is also listed as *legacy sso*. These configurations cannot be modified. +==== + +//[This procedure is obsolete now that migration is supported in the UI AAP-41494] +//.Procedure +//. Log in to the {Gateway} API. +//. Go to `/api/gateway/v1/authenticators/`, locate the legacy authenticator and click the link. +//. This opens the HTML form for that authenticator. +//. Select the new {Gateway} authenticator from the *Auto migrate users to* list. +//. Click btn:[PUT]. + +After you set up the auto migrate functionality, you can log in with SSO in the {Gateway}, and any matching accounts from the legacy SSO authenticator are linked automatically.
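For example, to locate the legacy authenticator, you can list the configured authenticators from the gateway API. This is a sketch with placeholder host and credentials, not values from this document; entries prefixed with `legacy_sso-saml-` are the renamed 2.4 configurations:

-----
# List all configured authenticators and look for the legacy_sso-saml- prefix
$ curl -s -u <admin_username>:<password> \
  https://<gateway_host>/api/gateway/v1/authenticators/
-----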
+ +[role="_additional-resources"] .Additional resources + +link:https://interact.redhat.com/share/baxthgXBQZ3kSRKPLn5L[{PlatformNameShort} 2.4 to 2.5. Linking accounts post upgrade, and Setting up SAML authentication] \ No newline at end of file diff --git a/downstream/modules/platform/con-aap-migrate-normal-users.adoc b/downstream/modules/platform/con-aap-migrate-normal-users.adoc new file mode 100644 index 0000000000..3af7888cb5 --- /dev/null +++ b/downstream/modules/platform/con-aap-migrate-normal-users.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: CONCEPT + + + +[id="aap-migrate-normal-users_{context}"] + += Migrating normal users + +[role="_abstract"] + +When you upgrade from {PlatformNameShort} 2.4 to 2.5, your existing user account is automatically migrated to a single platform account. However, if you have multiple component accounts (such as {ControllerName}, {PrivateHubName}, and {EDAName}), your accounts must be linked to use the centralized features of the platform. + +[role="_additional-resources"] + +== Additional resources + +* link:{URLCentralAuth}/gw-managing-access#proc-controller-creating-a-user[Creating a user] diff --git a/downstream/modules/platform/con-aap-migration-considerations.adoc b/downstream/modules/platform/con-aap-migration-considerations.adoc index 453523b5a0..2b31a7a693 100644 --- a/downstream/modules/platform/con-aap-migration-considerations.adoc +++ b/downstream/modules/platform/con-aap-migration-considerations.adoc @@ -1,7 +1,11 @@ +:_mod-docs-content-type: CONCEPT + [id="aap-migration-considerations"] = Migration considerations [role="_abstract"] -If you are upgrading from {PlatformNameShort} 1.2 on {OCPShort} 3 to {PlatformNameShort} 2.x on {OCPShort} 4, you must provision a fresh {OCPShort} version 4 cluster and then migrate the {PlatformNameShort} to the new cluster. +If you are upgrading from any version of {PlatformNameShort} older than 2.4, you must first upgrade to {PlatformNameShort} 2.4. +If you are on {OCPShort} 3 and you want to upgrade to {OCPShort} 4, you must provision a fresh {OCPShort} version 4 cluster and then migrate the {PlatformNameShort} to the new cluster. + diff --git a/downstream/modules/platform/con-aap-migration-prepare.adoc b/downstream/modules/platform/con-aap-migration-prepare.adoc index dba1b6eff6..ad64619c9a 100644 --- a/downstream/modules/platform/con-aap-migration-prepare.adoc +++ b/downstream/modules/platform/con-aap-migration-prepare.adoc @@ -1,12 +1,14 @@ +:_mod-docs-content-type: CONCEPT + [id="aap-migration-prepare"] = Preparing for migration [role="_abstract"] -Before migrating your current {PlatformNameShort} deployment to {OperatorPlatform}, you need to back up your existing data, create k8s secrets for your secret key and postgresql configuration. +Before migrating your current {PlatformNameShort} deployment to {OperatorPlatformNameShort}, you must back up your existing data and create Kubernetes secrets for your secret key and PostgreSQL configuration. [NOTE] ==== -If you are migrating both {ControllerName} and {HubName} instances, repeat the steps in xref:create-secret-key-secret_aap-migration[Creating a secret key secret] and xref:create-postresql-secret_aap-migration[Creating a postgresql configuration secret] for both and then proceed to xref:aap-data-migration_aap-migration[Migrating data to the {PlatformNameShort} Operator].
+If you are migrating both {ControllerName} and {HubName} instances, repeat the steps in link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#create-secret-key-secret_aap-migration[Creating a secret key secret] and link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#create-postresql-secret_aap-migration[Creating a postgresql configuration secret] for both and then proceed to link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#aap-data-migration_aap-migration[Migrating data to the {PlatformNameShort} Operator]. ==== diff --git a/downstream/modules/platform/con-aap-upgrade-planning.adoc b/downstream/modules/platform/con-aap-upgrade-planning.adoc index 73b7ca8a25..4eb42064cb 100644 --- a/downstream/modules/platform/con-aap-upgrade-planning.adoc +++ b/downstream/modules/platform/con-aap-upgrade-planning.adoc @@ -1,36 +1,45 @@ +:_mod-docs-content-type: CONCEPT + [id="aap-upgrade-planning_{context}"] = {PlatformNameShort} upgrade planning - + [role="_abstract"] Before you begin the upgrade process, review the following considerations to plan and prepare your {PlatformNameShort} deployment: -[discrete] -== {ControllerNameStart} - -* Even if you have a valid license from a previous version, you must provide your credentials or a subscriptions manifest upon upgrading to the latest version of {ControllerName}. -* If you need to upgrade {RHEL} and {ControllerName}, you must first backup and restore your {ControllerName} data. -* Clustered upgrades require special attention to instance and instance groups before upgrading. - -[role="_additional-resources"] -.Additional resources -* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-managing-subscriptions#controller-importing-subscriptions[Importing a subscription] -* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-backup-and-restore[Backup and restore] -* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-clustering[Clustering] +* See link:{URLPlanningGuide}/platform-system-requirements[System requirements] in the {TitlePlanningGuide} guide to ensure you have the topologies that fit your use case. ++ +[NOTE] +==== +2.4 to 2.5 upgrades now include link:{URLPlanningGuide}/ref-aap-components#con-about-platform-gateway_planning[{GatewayStart}]. Ensure you review the 2.5 link:{URLPlanningGuide}/ref-network-ports-protocols_planning[Network ports and protocols] for architectural changes. +==== ++ +[IMPORTANT] +==== +When upgrading from {PlatformNameShort} 2.4 to 2.5, the API endpoints for the {ControllerName}, {HubName}, and {EDAcontroller} are all available for use. These APIs are being deprecated and will be disabled in an upcoming release. This grace period is to allow for migration to the new APIs put in place with the {Gateway}. +==== ++ +* Verify that you have a valid subscription before upgrading from a previous version of {PlatformNameShort}. Existing subscriptions are carried over during the upgrade process. +* Ensure you have a backup of an {PlatformNameShort} 2.4 environment before upgrading in case any issues occur. 
See link:{URLControllerAdminGuide}/controller-backup-and-restore[Backup and restore] and link:{LinkOperatorBackup} for the specific topology of the environment. +* Ensure you capture your inventory or instance group details before upgrading. +* Ensure you have upgraded to the latest version of {PlatformNameShort} 2.4 before upgrading your {PlatformName}. +* Upgrades of {EDAName} version 2.4 to 2.5 are not supported. Database migrations between {EDAName} 2.4 and {EDAName} 2.5 are not compatible. For more information, see xref:upgrade-controller-hub-eda-unified-ui_aap-upgrading-platform[{ControllerName} and {HubName} 2.4 and {EDAName} 2.5 with unified UI upgrades]. ++ +If you are currently running {EDAcontroller} 2.5, it is recommended that you disable all {EDAName} activations before upgrading to ensure that only new activations run after the upgrade process is complete. +* {ControllerNameStart} OAuth applications on the platform UI are not supported for 2.4 to 2.5 migration. See this link:https://access.redhat.com/solutions/7091987[Knowledgebase article] for more information. To learn how to recreate your OAuth applications, see link:{URLCentralAuth}/gw-token-based-authentication#assembly-controller-applications[Applications] in the {TitleCentralAuth} guide. +* During the upgrade process, user accounts from the individual services are migrated. If there are accounts from multiple services, they must be linked to access the unified platform. See xref:account-linking_aap-post-upgrade[Account linking] for details. +* {PlatformNameShort} 2.5 offers a centralized Redis instance in both link:{URLPlanningGuide}/ha-redis_planning#gw-single-node-redis_planning[standalone] and link:{URLPlanningGuide}/ha-redis_planning#gw-clustered-redis_planning[clustered] topologies. For information on how to configure Redis, see link:{URLInstallationGuide}/assembly-platform-install-scenario#redis-config-enterprise-topology_platform-install-scenario[Configuring Redis] in the {TitleInstallationGuide} guide. +* When upgrading from {PlatformNameShort} 2.4 to {PlatformVers}, connections to the {Gateway} URL might fail on the {Gateway} UI if you are using the {ControllerName} behind a load balancer. The following error message is displayed: `Error connecting to Controller API` ++ +To resolve this issue, add the {Gateway} URL as a trusted source in the `CSRF_TRUSTED_ORIGINS` setting in the *settings.py* file for each controller host. You must then restart each controller host so that the URL changes are implemented. For more information, see _Upgrading_ in link:{LinkTroubleshootingAAP}. -[discrete] == {HubNameStart} - -* When upgrading to {PlatformNameShort} {PlatformVers}, you can either add an existing {HubName} API token or generate a new one and invalidate any existing tokens. [role="_additional-resources"] .Additional resources -* <> - -[discrete] == {EDAcontroller} //ATTENTION: Remove this section for EDA 1.0.4; customers will no longer need to perform deactivation because services will be automatically restored after upgrade and migration.
+* link:{URLCentralAuth}/assembly-gateway-licensing#proc-attaching-subscriptions[Attaching a subscription] +* xref:con-backup-aap_aap-upgrading-platform[Backup and restore] +* link:{URLControllerAdminGuide}/controller-clustering[Clustering] +* link:{LinkPlanningGuide} -* If you are currently running {EDAcontroller} and plan to deploy it when you upgrade to {PlatformNameShort} {PlatformVers}, it is recommended that you disable all {EDAName} activations before upgrading to ensure that only new activations run after the upgrade process has completed. This prevents possibilities of orphaned containers running activations from the previous version. \ No newline at end of file diff --git a/downstream/modules/platform/con-aap-upgrades.adoc b/downstream/modules/platform/con-aap-upgrades.adoc index d2be8be3bb..c909f69167 100644 --- a/downstream/modules/platform/con-aap-upgrades.adoc +++ b/downstream/modules/platform/con-aap-upgrades.adoc @@ -1,14 +1,49 @@ +:_mod-docs-content-type: CONCEPT + [id="aap-upgrades_{context}"] = {PlatformNameShort} upgrades -Upgrading to version {PlatformVers} from {PlatformNameShort} 2.1 or later involves downloading the installation package and then performing the following steps: +Currently, it is possible to perform {PlatformNameShort} upgrades using one of the following supported upgrade paths. + +[IMPORTANT] +==== +Upgrading from {EDAName} 2.4 is not supported. If you’re using {EDAName} 2.4 in production, contact Red Hat before you upgrade. +==== + +Before beginning your upgrade, be sure to review the prerequisites and upgrade planning sections of this guide. + +[cols="a,a"] |=== h|Supported upgrade path h| Steps to upgrade |{PlatformNameShort} 2.4 to 2.5 | xref:proc-choosing-obtaining-installer_aap-upgrading-platform[Download the installation package]. + +xref:editing-inventory-file-for-updates_aap-upgrading-platform[Set up your inventory file] to match your installation environment. See link:{LinkTopologies} for a list of example inventory files. + +link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html-single/red_hat_ansible_automation_platform_upgrade_and_migration_guide/index#con-backup-aap_upgrading-to-ees[Back up your {PlatformNameShort} instance]. + +xref:proc-running-setup-script-for-updates[Run the 2.5 installation program] over your current {PlatformNameShort} instance. + +xref:account-linking_aap-post-upgrade[Link your existing service-level accounts] to a single unified platform account. + +|{PlatformNameShort} 2.5 to 2.5.x | xref:proc-choosing-obtaining-installer_aap-upgrading-platform[Download the installation package]. + +xref:editing-inventory-file-for-updates_aap-upgrading-platform[Set up your inventory file] to match your installation environment. See link:{LinkTopologies} for a list of example inventory files. + +xref:con-backup-aap_aap-upgrading-platform[Back up your {PlatformNameShort} instance]. + +xref:proc-running-setup-script-for-updates[Run the 2.5 installation program] over your current {PlatformNameShort} instance. + +|xref:upgrade-controller-hub-eda-unified-ui_aap-upgrading-platform[{ControllerNameStart} and {HubName} 2.4 and {EDAName} 2.5 with unified UI upgrades] | Upgrade the 2.4 services (using an inventory file that specifies only the {ControllerName} and {HubName} VMs) to bring them to the initial version of {PlatformNameShort} 2.5. -* Set up your inventory to match your installation environment. -* Run the {PlatformVers} installation program over your current {PlatformNameShort} installation.
+After all the services are at the same version, run a 2.5 upgrade on all the services +|=== + -[role="_additional-resources"] -.Additional resources -* <> +// [hherbly]: not sure we need the addt'l resources block? the xref goes to the next section of the document. +// [ddacosta]: agree, it's not needed. +//[role="_additional-resources"] +//.Additional resources +//* xref:aap-upgrading-platform[Upgrading to {PlatformName} {PlatformVers}] diff --git a/downstream/modules/platform/con-about-automation-hub.adoc b/downstream/modules/platform/con-about-automation-hub.adoc index 14bf50f00b..38087a00c3 100644 --- a/downstream/modules/platform/con-about-automation-hub.adoc +++ b/downstream/modules/platform/con-about-automation-hub.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-about-automation-hub_{context}"] = {HubNameMain} diff --git a/downstream/modules/platform/con-about-automation-mesh.adoc b/downstream/modules/platform/con-about-automation-mesh.adoc index d4987af011..756c9bf4da 100644 --- a/downstream/modules/platform/con-about-automation-mesh.adoc +++ b/downstream/modules/platform/con-about-automation-mesh.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-automation-mesh"] @@ -6,8 +8,8 @@ [role="_abstract"] {AutomationMeshStart} is an overlay network intended to ease the distribution of work across a large and dispersed collection of workers through nodes that establish peer-to-peer connections with each other using existing networks. -{PlatformName} 2 replaces Ansible Tower and isolated nodes with {ControllerName} and {HubName}. -{ControllerNameStart} provides the control plane for automation through its UI, RESTful API, RBAC, workflows and CI/CD integration, while {AutomationMesh} can be used for setting up, discovering, changing or modifying the nodes that form the control and execution layers. +{PlatformName} 2 replaces Ansible Tower and isolated nodes with {PlatformNameShort} and {HubName}. +{PlatformNameShort} provides the control plane for automation through its UI, RESTful API, RBAC, workflows and CI/CD integration, while {AutomationMesh} can be used for setting up, discovering, changing or modifying the nodes that form the control and execution layers. ifdef::operator-mesh[] {AutomationMeshStart} is useful for: diff --git a/downstream/modules/platform/con-about-controller.adoc b/downstream/modules/platform/con-about-controller.adoc new file mode 100644 index 0000000000..4b74de7c07 --- /dev/null +++ b/downstream/modules/platform/con-about-controller.adoc @@ -0,0 +1,7 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-about-controller"] + += {ControllerNameStart} + +{ControllerNameStart} is an enterprise framework enabling users to define, operate, scale, and delegate Ansible automation across their enterprise. \ No newline at end of file diff --git a/downstream/modules/platform/con-about-eda-controller.adoc b/downstream/modules/platform/con-about-eda-controller.adoc index 95bdffbaf5..0fd21b5e2b 100644 --- a/downstream/modules/platform/con-about-eda-controller.adoc +++ b/downstream/modules/platform/con-about-eda-controller.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="about-event-driven-ansible-controller_{context}"] = {EDAcontroller} @@ -11,10 +13,5 @@ The {EDAcontroller} is the interface for event-driven automation and introduces [role="_additional-resources"] .Additional resources - -//// -The following link will not work until published. 
-//// - -* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_event-driven_ansible_guide/index[Getting Started with Event-Driven Ansible Guide]. +* link:{URLEDAUserGuide}[{TitleEDAUserGuide}] diff --git a/downstream/modules/platform/con-about-execution-env.adoc b/downstream/modules/platform/con-about-execution-env.adoc index 018920ee96..766fbede19 100644 --- a/downstream/modules/platform/con-about-execution-env.adoc +++ b/downstream/modules/platform/con-about-execution-env.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-about-execution-env_{context}"] = {ExecEnvNameStart} diff --git a/downstream/modules/platform/con-about-galaxy.adoc b/downstream/modules/platform/con-about-galaxy.adoc index 664ec9dd63..3ac9b0fcef 100644 --- a/downstream/modules/platform/con-about-galaxy.adoc +++ b/downstream/modules/platform/con-about-galaxy.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-about-galaxy_{context}"] = {Galaxy} diff --git a/downstream/modules/platform/con-about-ha-hub.adoc b/downstream/modules/platform/con-about-ha-hub.adoc index 4ff9b2b21f..056fb0f6e7 100644 --- a/downstream/modules/platform/con-about-ha-hub.adoc +++ b/downstream/modules/platform/con-about-ha-hub.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-about-ha-automation-hub_{context}"] = High availability {HubName} diff --git a/downstream/modules/platform/con-about-lightspeed-intelligent-assistant.adoc b/downstream/modules/platform/con-about-lightspeed-intelligent-assistant.adoc new file mode 100644 index 0000000000..b621effc2b --- /dev/null +++ b/downstream/modules/platform/con-about-lightspeed-intelligent-assistant.adoc @@ -0,0 +1,98 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-about-lightspeed-intelligent-assistant_{context}"] + += Overview + +[role="_abstract"] + +The {AAPchatbot} is available on {PlatformNameShort} {PlatformVers} on {OCPShort} as a Technology Preview release. It is an intuitive chat interface embedded within the {PlatformNameShort}, utilizing generative artificial intelligence (AI) to answer questions about the {PlatformNameShort}. + +The {AAPchatbot} interacts with users through natural language prompts in English, and uses Large Language Models (LLMs) to generate quick, accurate, and personalized responses. These responses empower Ansible users to work more efficiently, thereby improving productivity and the overall quality of their work. + +{AAPchatbot} requires the following configurations: + +* Installation of {PlatformNameShort} {PlatformVers} on {OCP} +* Deployment of an LLM served by either a Red Hat AI platform or a third-party AI platform. For the LLM providers that you can use, see xref:#LLMproviders[LLM providers]. + +[IMPORTANT] +==== +* Red Hat does not collect any telemetry data from your interactions with the {AAPchatbot}. +* {AAPchatbot} is available as a Technology Preview feature only. ++ +Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. ++ +For more information about the support scope of Red Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview[Technology Preview Features Support Scope].
+==== + +== Prerequisites + +=== {PlatformNameShort} {PlatformVers} + +* You have installed {PlatformNameShort} {PlatformVers} on your {OCPShort} environment. +* You have administrator privileges for the {PlatformNameShort}. +* You have provisioned an OpenShift cluster with Operator Lifecycle Management installed. + +[#LLMproviders] +=== Large Language Model (LLM) provider + +You must configure the LLM provider that you want to use before deploying the {AAPchatbot}. + +An LLM is a type of machine learning model that can interpret and generate human-like language. When an LLM is used with the {AAPchatbot}, the LLM can interpret questions accurately and provide helpful answers in a conversational manner. + +As part of the Technology Preview release, {AAPchatbot} can rely on the following Software as a Service (SaaS) LLM providers: + +*Red Hat LLM providers* + +* {RHELAI} ++ +{RHELAI} is OpenAI API-compatible and is configured in a similar manner to the OpenAI provider. You can configure {RHELAI} as the LLM provider. For more information, see the link:https://www.redhat.com/en/products/ai/enterprise-linux-ai[{RHELAI}] product page. + +* {OCPAI} ++ +{OCPAI} is OpenAI API-compatible and is configured in a similar manner to the {OpenAI} provider. You can configure {OCPAI} as the LLM provider. For more information, see the link:https://www.redhat.com/en/products/ai/openshift-ai[{OCPAI}] product page. + +[NOTE] +==== +For configurations with {RHELAI} or {OCPAI}, you must host your own LLM provider instead of using a SaaS LLM provider. +==== + +*Third-party LLM providers* + +* {IBMwatsonxai} ++ +To use IBM watsonx with the {AAPchatbot}, you need an account with link:https://www.ibm.com/products/watsonx-ai[{IBMwatsonxai}]. + +* {OpenAI} ++ +To use {OpenAI} with the {AAPchatbot}, you need access to the link:https://openai.com/api/[{OpenAI} API platform]. + +* {AzureOpenAI} ++ +To use Microsoft Azure with the {AAPchatbot}, you need access to link:https://azure.microsoft.com/en-us/products/ai-services/openai-service[{AzureOpenAI}]. ++ +[NOTE] +==== +Many self-hosted or self-managed model servers claim API compatibility with {OpenAI}. It is possible to configure the {AAPchatbot} {OpenAI} provider to point to an API-compatible model server. If the model server is truly API-compatible, especially with respect to authentication, then it might work. These configurations have not been tested by Red Hat, and issues related to their use are outside the scope of Technology Preview support. +==== + +== Process +Perform the following tasks to set up and use the {AAPchatbot} in your {PlatformNameShort} instance on the {OCPShort} environment: + +[%header,cols="35%,65%"] |==== | Task | Description |Deploy the {AAPchatbot} on {OCPShort} a|An {PlatformNameShort} administrator who wants to deploy the {AAPchatbot} for all Ansible users in the organization. Perform the following tasks: . Install and configure the {PlatformNameShort} operator . Create a chatbot configuration secret . Update the YAML file of the {PlatformNameShort} to use the chatbot connection secret | Access and use the {AAPchatbot} | All Ansible users who want to use the intelligent assistant to get answers to their questions about the {PlatformNameShort}.
+|==== diff --git a/downstream/modules/platform/con-about-navigator.adoc b/downstream/modules/platform/con-about-navigator.adoc index a10ed1bdbe..7957ba3895 100644 --- a/downstream/modules/platform/con-about-navigator.adoc +++ b/downstream/modules/platform/con-about-navigator.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-about-navigator_{context}"] = {NavigatorStart} diff --git a/downstream/modules/platform/con-about-operator.adoc b/downstream/modules/platform/con-about-operator.adoc index d38d9620a2..87b4067caa 100644 --- a/downstream/modules/platform/con-about-operator.adoc +++ b/downstream/modules/platform/con-about-operator.adoc @@ -1,13 +1,19 @@ +:_mod-docs-content-type: CONCEPT + [id="con-about-operator_{context}"] -= About {OperatorPlatform} += About {OperatorPlatformNameShort} [role="_abstract"] -The {OperatorPlatform} provides cloud-native, push-button deployment of new {PlatformNameShort} instances in your OpenShift environment. -The {OperatorPlatform} includes resource types to deploy and manage instances of {ControllerNameStart} and {PrivateHubName}. +The {OperatorPlatformNameShort} provides cloud-native, push-button deployment of new {PlatformNameShort} instances in your OpenShift environment. +The {OperatorPlatformNameShort} includes resource types to deploy and manage instances of {ControllerName} and {PrivateHubName}. It also includes {ControllerName} job resources for defining and launching jobs inside your {ControllerName} deployments. Deploying {PlatformNameShort} instances with a Kubernetes native operator offers several advantages over launching instances from a playbook deployed on {OCP}, including upgrades and full lifecycle support for your {PlatformName} deployments. -You can install the {OperatorPlatform} from the Red Hat Operators catalog in OperatorHub. +You can install the {OperatorPlatformNameShort} from the Red Hat Operators catalog in OperatorHub. + +For information about the {OperatorPlatformNameShort} system requirements and infrastructure topology, see +link:{URLTopologies}/ocp-topologies[Operator topologies] in _{TitleTopologies}_. + diff --git a/downstream/modules/platform/con-about-pa-hub.adoc b/downstream/modules/platform/con-about-pa-hub.adoc index fbbeb50929..7c8251f97a 100644 --- a/downstream/modules/platform/con-about-pa-hub.adoc +++ b/downstream/modules/platform/con-about-pa-hub.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-about-pa-hub_{context}"] = {PrivateHubNameStart} diff --git a/downstream/modules/platform/con-about-platform-gateway.adoc b/downstream/modules/platform/con-about-platform-gateway.adoc new file mode 100644 index 0000000000..484c1daac2 --- /dev/null +++ b/downstream/modules/platform/con-about-platform-gateway.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-about-platform-gateway_{context}"] + += {GatewayStart} + +[role="_abstract"] +// content taken from snippets/snip-gateway-component-description.adoc and con-gw-activity-stream.adoc +{GatewayStart} is the service that handles authentication and authorization for the {PlatformNameShort}. It provides a single entry point into the {PlatformNameShort} and serves the platform user interface so you can authenticate and access all of the {PlatformNameShort} services from a single location.
For more information about the services available in the {PlatformNameShort}, refer to link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/getting_started_with_ansible_automation_platform/index#assembly-gs-key-functionality[Key functionality and concepts] in _{TitleGettingStarted}_. + +The {Gateway} includes an activity stream that captures changes to gateway resources, such as the creation or modification of organizations, users, and service clusters, among others. For each change, the activity stream collects information about the time of the change, the user that initiated the change, the action performed, and the actual changes made to the object, when possible. The information gathered varies depending on the type of change. + +You can access the details captured by the activity stream from the API: + +----- +/api/gateway/v1/activitystream/ +----- diff --git a/downstream/modules/platform/con-about-postgres.adoc b/downstream/modules/platform/con-about-postgres.adoc new file mode 100644 index 0000000000..9135fade63 --- /dev/null +++ b/downstream/modules/platform/con-about-postgres.adoc @@ -0,0 +1,7 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-about-postgres"] + += PostgreSQL + +PostgreSQL (Postgres) is an open-source relational database management system. For {PlatformNameShort}, Postgres serves as the backend database to store automation data such as job templates, inventory, credentials, and execution history. \ No newline at end of file diff --git a/downstream/modules/platform/con-adding-subscription-manifest.adoc b/downstream/modules/platform/con-adding-subscription-manifest.adoc new file mode 100644 index 0000000000..f9d7e22448 --- /dev/null +++ b/downstream/modules/platform/con-adding-subscription-manifest.adoc @@ -0,0 +1,9 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-adding-subscription-manifest"] + += Adding a subscription manifest to {PlatformNameShort} + +[role="_abstract"] + +Before you first log in, you must add your subscription information to the platform. To add a subscription to {PlatformNameShort}, see link:{URLCentralAuth}/assembly-gateway-licensing#assembly-aap-obtain-manifest-files[Obtaining a manifest file] in the link:{LinkCentralAuth} guide. diff --git a/downstream/modules/platform/con-alternative-capacity-limits.adoc b/downstream/modules/platform/con-alternative-capacity-limits.adoc index ffd322fb5d..377adee6f2 100644 --- a/downstream/modules/platform/con-alternative-capacity-limits.adoc +++ b/downstream/modules/platform/con-alternative-capacity-limits.adoc @@ -1,4 +1,6 @@ -[id="con-alternative-capacity-limits"] +:_mod-docs-content-type: CONCEPT + +[id="con-alternative-capacity-limits_{context}"] = Alternative capacity limiting with {ControllerName} settings @@ -6,7 +8,7 @@ The capacity of a control node in OpenShift is determined by the memory and CPU However, if these are not set then the capacity is determined by the memory and CPU detected by the pod on the filesystem, which are actually the CPU and memory of the underlying Kubernetes node. This can lead to issues with overwhelming the underlying Kubernetes pod if the {ControllerName} pod is not the only pod on that node. 
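For illustration, such limits are typically set on the task container through the custom resource that deploys the controller. The following YAML is a minimal sketch only; the `task_resource_requirements` field and the API version shown are assumptions based on the operator's resource types, not values taken from this document:

----
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: controller
spec:
  # Requests and limits on the task container determine the
  # control node capacity described above
  task_resource_requirements:
    requests:
      cpu: 500m
      memory: 2Gi
    limits:
      cpu: "1"
      memory: 4Gi
----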
-If you do not want to set limits directly on the task container, you can use `extra_settings`, see _Extra Settings_ in xref:proc-set-custom-pod-timeout[Custom pod timeouts] section for how to configure the following +If you do not want to set limits directly on the task container, you can use `extra_settings`. See _Extra Settings_ in the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/performance_considerations_for_operator_environments/index#proc-specify-nodes-job-execution[Custom pod timeouts] section for how to configure the following: [options="nowrap" subs="+quotes,attributes"] ---- diff --git a/downstream/modules/platform/con-automation-mesh-node-types.adoc b/downstream/modules/platform/con-automation-mesh-node-types.adoc index 4948f1ef18..aab50af8e4 100644 --- a/downstream/modules/platform/con-automation-mesh-node-types.adoc +++ b/downstream/modules/platform/con-automation-mesh-node-types.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-automation-mesh-node-types"] @@ -24,7 +26,7 @@ The *control plane* consists of hybrid and control nodes. Instances in the contr * *Control nodes* - control nodes run project and inventory updates and system jobs, but not regular jobs. Execution capabilities are disabled on these nodes. endif::mesh-VM[] ifdef::operator-mesh[] -Instances in the control plane run persistent {ControllerName} services such as the web server and task dispatcher, in addition to project updates, and management jobs. +Instances in the control plane run persistent {PlatformNameShort} services such as the web server and task dispatcher, in addition to project updates and management jobs. However, in the operator-based model, there are no hybrid or control nodes. There are container groups, which make up containers running on the Kubernetes cluster. That comprises the control plane. @@ -43,4 +45,12 @@ ifdef::mesh-VM[] == Peers Peer relationships define node-to-node connections. You can define peers within the `[automationcontroller]` and `[execution_nodes]` groups or using the `[automationcontroller:vars]` or `[execution_nodes:vars]` groups -endif::mesh-VM[] \ No newline at end of file +endif::mesh-VM[] +ifdef::operator-mesh[] + +== Peers + +Peer relationships define node-to-node connections. +Peers are defined through the UI for individual instances. +For further information, see link:{URLOperatorMesh}/assembly-automation-mesh-operator-aap#proc-define-mesh-node-types[Defining {AutomationMesh} node types]. +endif::operator-mesh[] \ No newline at end of file diff --git a/downstream/modules/platform/con-backup-aap.adoc b/downstream/modules/platform/con-backup-aap.adoc index d0984d3350..66e9f18f9f 100644 --- a/downstream/modules/platform/con-backup-aap.adoc +++ b/downstream/modules/platform/con-backup-aap.adoc @@ -1,17 +1,48 @@ +:_mod-docs-content-type: CONCEPT + [id="con-backup-aap_{context}"] -= Back up your {PlatformNameShort} instance += Backing up your {PlatformNameShort} instance + +Back up an existing {PlatformNameShort} instance by running the `./setup.sh` script with the `backup_dir` flag, which saves the content and configuration of your current environment. Use the compression flags `use_archive_compression` and `use_db_compression` to compress the backup artifacts before they are sent to the host running the backup operation. -Back up an existing {PlatformNameShort} instance by running the `.setup.sh` script with the `backup_dir` flag, which saves the content and configuration of your current environment: .Procedure -.
Navigate to your `ansible-tower-setup-latest` directory. +. Navigate to your {PlatformNameShort} installation directory. . Run the `./setup.sh` script following the example below: + ---- -$ ./setup.sh -e ‘backup_dir=/ansible/mybackup’ -e ‘use_compression=True’ @credentials.yml -b <1><2> +$ ./setup.sh -e 'backup_dir=/ansible/mybackup' \ +-e 'use_archive_compression=true' -e 'use_db_compression=true' @credentials.yml -b ---- -<1> `backup_dir` specifies a directory to save your backup to. -<2> `@credentials.yml` passes the password variables and their values encrypted via `ansible-vault`. +Where: +* `backup_dir`: Specifies a directory to save your backup to. + +* `use_archive_compression=true` and `use_db_compression=true`: Compresses the backup artifacts before they are sent to the host running the backup operation. ++ +You can use the following variables to customize the compression: + +** For global control of compression for filesystem-related backup files: `use_archive_compression=true` + +** For component-level control of compression for filesystem-related backup files: `<component>_use_archive_compression=true` ++ +For example: + +*** `automationgateway_use_archive_compression=true` +*** `automationcontroller_use_archive_compression=true` +*** `automationhub_use_archive_compression=true` +*** `automationedacontroller_use_archive_compression=true` + +** For global control of compression for database-related backup files: `use_db_compression=true` + +** For component-level control of compression for database-related backup files: `<component>_use_db_compression=true` ++ +For example: + +*** `automationgateway_use_db_compression=true` +*** `automationcontroller_use_db_compression=true` +*** `automationhub_use_db_compression=true` +*** `automationedacontroller_use_db_compression=true` -With a successful backup, a backup file is created at `/ansible/mybackup/tower-backup-latest.tar.gz` . +After a successful backup, a backup file is created at `/ansible/mybackup/automation-platform-backup-<timestamp>.tar.gz`. diff --git a/downstream/modules/platform/con-building-an-execution-environment-in-a-disconnected-environment.adoc b/downstream/modules/platform/con-building-an-execution-environment-in-a-disconnected-environment.adoc index f702eb5c9b..b40546bde7 100644 --- a/downstream/modules/platform/con-building-an-execution-environment-in-a-disconnected-environment.adoc +++ b/downstream/modules/platform/con-building-an-execution-environment-in-a-disconnected-environment.adoc @@ -1,26 +1,30 @@ -//Used in downstream/titles/aap-installation-guide/platform/assembly-disconnected-installation.adoc +:_mod-docs-content-type: CONCEPT +//Used in downstream/titles/builder/builder/assembly-using-builder.adoc -[id="building-an-execution-environment-in-a-disconnected-environment_{context}"] +[id="building-an-execution-environment-in-a-disconnected-environment"] -= Building an {ExecEnvShort} in a disconnected environment += Build an {ExecEnvShort} in a disconnected environment - -link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/creating_and_consuming_execution_environments/index[Creating execution environments] for {PlatformNameShort} is a common task which works differently in disconnected environments. When building a custom {ExecEnvShort}, the ansible-builder tool defaults to downloading content from the following locations on the internet: +Creating execution environments for {PlatformNameShort} is a common task that works differently in disconnected environments.
When building a custom {ExecEnvShort}, the ansible-builder tool defaults to downloading content from the following locations on the internet: * Red Hat {HubNameStart} ({Console}) or {Galaxy} (galaxy.ansible.com) for any Ansible content collections added to the {ExecEnvShort} image. * PyPI (pypi.org) for any python packages required as collection dependencies. -* RPM repositories such as the RHEL or UBI repositories (cdn.redhat.com) for adding or updating RPMs to the execution environment image, if needed. +* RPM repositories such as the RHEL or UBI repositories (cdn.redhat.com) for adding or updating RPMs to the {ExecEnvShort} image, if needed. + +* `registry.redhat.io` for access to the base container images. -* registry.redhat.io for access to the base container images. +Building an {ExecEnvShort} image in a disconnected environment requires mirroring content from these locations. +For information about importing collections from {Galaxy} or {HubName} into a {PrivateHubName}, see link:{URLHubManagingContent}/managing-collections-hub#proc-import-collection[Importing an automation content collection in {HubName}]. -Building an {ExecEnvShort} image in a disconnected environment requires mirroring content from these locations. See link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_installation_guide/index#importing-collections-into-private-automation-hub_disconnected-installation[Importing Collections into private automation hub] for information on importing collections from Ansible {Galaxy} or {HubName} into a {PrivateHubName}. +Mirrored PyPI content, once transferred into the disconnected network, can be made available by using a web server or an artifact repository such as Nexus. The RHEL and UBI repository content can be exported from an internet-facing Red Hat Satellite Server, copied into the disconnected environment, then imported into a disconnected Satellite so it is available for building custom {ExecEnvShort}s. See link:{BaseURL}/red_hat_satellite/{SatelliteVers}/html-single/installing_satellite_server_in_a_disconnected_network_environment/index#iss_export_sync_in_an_air_gapped_scenario[ISS Export Sync in an Air-Gapped Scenario] for details. -Mirrored PyPI content once transferred into the disconnected network can be made available using a web server or an artifact repository like Nexus. The RHEL and UBI repository content can be exported from an internet-facing Red Hat Satellite server, copied into the disconnected environment, then imported into a disconnected Satellite so it is available for building custom {ExecEnvShort}s. See link:{BaseURL}/red_hat_satellite/{SatelliteVers}/html-single/installing_satellite_server_in_a_disconnected_network_environment/index#iss_export_sync_in_an_air_gapped_scenario[ISS Export Sync in an Air-Gapped Scenario] for details. +The default base container image, `ee-minimal-rhel8`, is used to create custom {ExecEnvShort} images and is included with the bundled installer. +This image is added to the {PrivateHubName} at install time. -The default base container image, ee-minimal-rhel8, is used to create custom {ExecEnvShort} images and is included with the bundled installer. This image is added to the {PrivateHubName} at install time. If a different base container image such as ee-minimal-rhel9 is required, it must be imported to the disconnected network and added to the {PrivateHubName} container registry.
+If a different base container image such as `ee-minimal-rhel9` is required, it must be imported to the disconnected network and added to the {PrivateHubName} container registry. Once all of the prerequisites are available on the disconnected network, the ansible-builder command can be used to create custom {ExecEnvShort} images. diff --git a/downstream/modules/platform/con-certs-per-service-considerations.adoc b/downstream/modules/platform/con-certs-per-service-considerations.adoc new file mode 100644 index 0000000000..cb9634540c --- /dev/null +++ b/downstream/modules/platform/con-certs-per-service-considerations.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: CONCEPT + +[id="certs-per-service-considerations"] += Considerations for certificates provided per service + +When providing custom TLS certificates for each individual service, consider the following: + +* It is possible to provide unique certificates per host. This requires defining the specific `<service>_tls_cert` and `<service>_tls_key` variables in your inventory file as shown in the earlier inventory file example. +* For services deployed across many nodes (for example, when following the enterprise topology), the provided certificate for that service must include the FQDN of all associated nodes in its Subject Alternative Name (SAN) field. +* If an external-facing service (such as {ControllerName} or {Gateway}) is deployed behind a load balancer that performs SSL/TLS offloading, the service's certificate must include the load balancer's FQDN in its SAN field, in addition to the FQDNs of the individual service nodes. + +[role="_additional-resources"] +.Additional resources +* link:{BaseURL}/red_hat_enterprise_linux/9/html/securing_networks/index[Securing networks] diff --git a/downstream/modules/platform/con-configuring-the-metrics-utility.adoc b/downstream/modules/platform/con-configuring-the-metrics-utility.adoc new file mode 100644 index 0000000000..4a1b780c56 --- /dev/null +++ b/downstream/modules/platform/con-configuring-the-metrics-utility.adoc @@ -0,0 +1,9 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2024-07-15 +:_mod-docs-content-type: CONCEPT + +[id="configuring-the-metrics-utility"] + += Configuring metrics-utility + +Configure the `metrics-utility` tool to gather and report usage data for your {PlatformNameShort}, on both {RHEL} and {OCPShort}.
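For example, a minimal `metrics-utility` run on {RHEL} might look like the following sketch. It assumes the `METRICS_UTILITY_SHIP_TARGET`, `METRICS_UTILITY_SHIP_PATH`, and `METRICS_UTILITY_REPORT_TYPE` environment variables and the `gather_automation_controller_billing_data` and `build_report` subcommands described in the metrics-utility documentation; the path and report type shown are illustrative placeholders, not confirmed defaults.

----
# Illustrative sketch of a metrics-utility run on RHEL (values are placeholders).
export METRICS_UTILITY_SHIP_TARGET=directory
export METRICS_UTILITY_SHIP_PATH=/opt/metrics-utility/data
export METRICS_UTILITY_REPORT_TYPE=CCSP

# Gather the usage data collected so far, then build the report from it.
metrics-utility gather_automation_controller_billing_data --ship --until=10m
metrics-utility build_report
----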
diff --git a/downstream/modules/platform/con-controller-access-GCE-in-a-playbook.adoc b/downstream/modules/platform/con-controller-access-GCE-in-a-playbook.adoc new file mode 100644 index 0000000000..ea6f3f1f5b --- /dev/null +++ b/downstream/modules/platform/con-controller-access-GCE-in-a-playbook.adoc @@ -0,0 +1,16 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-access-GCE-in-a-playbook"] + += Access Google Compute Engine credentials in an Ansible Playbook + +You can get GCE credential parameters from a job runtime environment: + +[literal, options="nowrap" subs="+attributes"] +---- +vars: + gce: + email: '{{ lookup("env", "GCE_EMAIL") }}' + project: '{{ lookup("env", "GCE_PROJECT") }}' + pem_file_path: '{{ lookup("env", "GCE_PEM_FILE_PATH") }}' +---- diff --git a/downstream/modules/platform/con-controller-access-machine-credentials-playbook.adoc b/downstream/modules/platform/con-controller-access-machine-credentials-playbook.adoc new file mode 100644 index 0000000000..4c5622194a --- /dev/null +++ b/downstream/modules/platform/con-controller-access-machine-credentials-playbook.adoc @@ -0,0 +1,15 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-access-machine-credentials-playbook"] + += Access machine credentials in an Ansible Playbook + +You can get the username and password from Ansible facts: + +[literal, options="nowrap" subs="+attributes"] +---- +vars: + machine: + username: '{{ ansible_user }}' + password: '{{ ansible_password }}' +---- \ No newline at end of file diff --git a/downstream/modules/platform/con-controller-access-organizations.adoc b/downstream/modules/platform/con-controller-access-organizations.adoc index d916d64d7b..a89326e8f7 100644 --- a/downstream/modules/platform/con-controller-access-organizations.adoc +++ b/downstream/modules/platform/con-controller-access-organizations.adoc @@ -1,24 +1,8 @@ -[id="con-controller-access-organizations"] - -= Access to organizations - -* Select btn:[Access] when viewing your organization to display the users associated with this organization, and their -roles. +:_mod-docs-content-type: CONCEPT -image:organizations-show-users-permissions-organization.png[Organization access] - -Use this page to complete the following tasks: - -* Manage the user membership for this organization. -Click btn:[Users] on the navigation panel to manage user membership on a per-user basis from the *Users* page. -* Assign specific users certain levels of permissions within your organization. -* Enable them to act as an administrator for a particular resource. -For more information, see link:https://docs.ansible.com/automation-controller/latest/html/userguide/security.html#rbac-ug[Role-Based Access Controls]. +[id="con-controller-access-organizations"] -Click a user to display that user's details. -You can review, grant, edit, and remove associated permissions for that user. -For more information, see xref:assembly-controller-users[Users]. += Access to organizations +You can manage access to an organization by selecting an organization from the *Organizations* list view and selecting the associated tabs for providing access to xref:proc-controller-add-organization-user[Users], xref:proc-gw-add-admin-organization[Administrators], or xref:proc-gw-add-team-organization[Teams].
-include::proc-controller-add-organization-user.adoc[leveloffset=+1] -include::ref-controller-organization-notifications.adoc[leveloffset=+1] diff --git a/downstream/modules/platform/con-controller-additional-settings.adoc b/downstream/modules/platform/con-controller-additional-settings.adoc index e2386940a0..ade906853f 100644 --- a/downstream/modules/platform/con-controller-additional-settings.adoc +++ b/downstream/modules/platform/con-controller-additional-settings.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-controller-additional-settings"] = Additional settings for {ControllerName} @@ -6,4 +8,7 @@ There are additional advanced settings that can affect {ControllerName} behavior For traditional virtual machine based deployments, these settings can be provided to {ControllerName} by creating a file in `/etc/tower/conf.d/custom.py`. When settings are provided to {ControllerName} through file-based settings, the settings file must be present on all control plane nodes. These include all of the hybrid or control type nodes in the `automationcontroller` group in the installer inventory. -For these settings to be effective, restart the service with `automation-controller-service` restart on each node with the settings file. If the settings provided in this file are also visible in the {ControllerName} UI, then they are marked as "Read only" in the UI. \ No newline at end of file +For these settings to be effective, restart the service with `automation-controller-service restart` on each node with the settings file. If the settings provided in this file are also visible in the {ControllerName} UI, then they are marked as "Read only" in the UI. + +For container-based installations, use `controller_extra_settings`, as described in link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/containerized_installation/appendix-inventory-files-vars#controller-variables[{ControllerNameStart} variables]. +The containerized version does not support `custom.py`. diff --git a/downstream/modules/platform/con-controller-administration.adoc b/downstream/modules/platform/con-controller-administration.adoc index 5af49ae1b7..7f584e255c 100644 --- a/downstream/modules/platform/con-controller-administration.adoc +++ b/downstream/modules/platform/con-controller-administration.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-controller-administration"] = Administration @@ -5,16 +7,11 @@ The *Administration* menu provides access to the administrative options of {ControllerName}. From here, you can create, view, and edit: -* xref:assembly-controller-custom-credentials[Credential types] -* xref:controller-notifications[Notifications] -* Management_jobs -* xref:controller-instance-groups[Instance groups] -* Instances -* xref:assembly-controller-applications[Applications] -* xref:assembly-controller-execution-environments[Execution environments] -//Topology View is in the Admin Guide -* Topology view -//Next version includes -//* Instance Groups -//* Instances -//* Execution Environments +//activity stream is an unconnected procedure. It needs a home.
+* link:{URLControllerUserGuide}/assembly-controller-activity-stream[Activity Stream] +* link:{URLControllerUserGuide}/controller-workflow-job-templates#controller-approval-nodes[Workflow Approvals] +* link:{URLControllerUserGuide}/controller-notifications[Notifiers] +* link:{URLControllerAdminGuide}/assembly-controller-management-jobs[Management Jobs] + + + diff --git a/downstream/modules/platform/con-controller-api-basic-auth.adoc b/downstream/modules/platform/con-controller-api-basic-auth.adoc index fe9f6ad3a7..decc008a37 100644 --- a/downstream/modules/platform/con-controller-api-basic-auth.adoc +++ b/downstream/modules/platform/con-controller-api-basic-auth.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-api-basic-auth"] = Basic authentication @@ -11,7 +13,7 @@ The following is an example of Basic authentication with curl: [literal, options="nowrap" subs="+attributes"] ---- # the --user flag adds this Authorization header for us -curl -X GET --user 'user:password' https://<controller-host>/api/v2/credentials -k -L +curl -X GET --user 'user:password' https://<gateway-host>/api/gateway/v1/tokens/ -k -L ---- .Additional resources @@ -24,6 +26,7 @@ You can disable Basic authentication for security purposes. .Procedure -. From the navigation panel, select {MenuAEAdminSettings}. -. Select *Miscellaneous Authentication settings* from the list of *System* options. -. Disable the option to *Enable HTTP Basic Auth*. +. From the navigation panel, select {MenuSetGateway}. +. Click btn:[Edit {Gateway} settings]. +. Disable the option *Gateway basic auth enabled*. +. Click btn:[Save platform gateway settings]. diff --git a/downstream/modules/platform/con-controller-api-identifier-format-protocol.adoc b/downstream/modules/platform/con-controller-api-identifier-format-protocol.adoc index a13870922d..e577d6b45d 100644 --- a/downstream/modules/platform/con-controller-api-identifier-format-protocol.adoc +++ b/downstream/modules/platform/con-controller-api-identifier-format-protocol.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-api-identifier-format-protocol"] = Identifier format protocol diff --git a/downstream/modules/platform/con-controller-api-oauth2-token.adoc b/downstream/modules/platform/con-controller-api-oauth2-token.adoc index 20cebfcb10..e5756239a7 100644 --- a/downstream/modules/platform/con-controller-api-oauth2-token.adoc +++ b/downstream/modules/platform/con-controller-api-oauth2-token.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-api-oauth2-token"] = OAuth 2 token authentication @@ -7,7 +9,7 @@ OAuth 2 authentication is commonly used when interacting with the {ControllerNam Similar to Basic authentication, you are given an OAuth 2 token with each API request through the Authorization header. Unlike Basic authentication, OAuth 2 tokens have a configurable timeout and are scopable. Tokens have a configurable expiration time and can be easily revoked for one user or for the entire {ControllerName} system by an administrator if needed. -You can do this with the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#ref-controller-revoke-oauth2-token[revoke_oauth2_tokens] management command, or by using the API as explained in link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#ref-controller-revoke-access-token[Revoke an access token].
+You can do this with the link:{URLCentralAuth}/gw-token-based-authentication#ref-controller-revoke-oauth2-token[revoke_oauth2_tokens] management command, or by using the API as explained in link:{URLCentralAuth}/gw-token-based-authentication#ref-controller-revoke-access-token[Revoke an access token]. The different methods for obtaining OAuth2 access tokens in {ControllerName} include the following: @@ -16,8 +18,8 @@ The different methods for obtaining OAuth2 access tokens in {ControllerName} inc * Application token: Implicit grant type * Application token: Authorization Code grant type -A user needs to create an OAuth 2 token in the API or in the *Users* > *Tokens* tab of the {ControllerName} UI. -For more information about creating tokens through the UI, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#proc-controller-user-tokens[Users - Tokens]. +A user needs to create an OAuth 2 token in the API or in the {MenuAMAdminOauthApps} tab of the {Gateway} UI. +For more information about creating tokens through the UI, see link:{URLCentralAuth}/gw-token-based-authentication#proc-controller-apps-create-tokens[Adding tokens]. For the purpose of this example, use the PAT method for creating a token in the API. After you create it, you can set the scope. @@ -25,7 +27,7 @@ After you create it, you can set the scope. [NOTE] ==== You can configure the expiration time of the token system-wide. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#ref-controller-use-oauth2-token-system[Using OAuth 2 Token System for Personal Access Tokens]. +For more information, see link:{URLCentralAuth}/gw-token-based-authentication[Configuring access to external applications with token-based authentication]. ==== Token authentication is best used for any programmatic use of the {ControllerName} API, such as Python scripts or tools such as curl. @@ -88,7 +90,7 @@ print(json.dumps(response.json(), indent=4, sort_keys=True)) .Additional resources -For more information about obtaining OAuth2 access tokens and how to use OAuth 2 in the context of external applications, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#assembly-controller-token-based-authentication[Token-Based Authentication] in the _{ControllerAG}_. +For more information about obtaining OAuth2 access tokens and how to use OAuth 2 in the context of external applications, see link:{URLCentralAuth}/gw-token-based-authentication[Configuring access to external applications with token-based authentication]. [discrete] == Enabling external users to create OAuth 2 tokens @@ -97,6 +99,6 @@ By default, external users such as those created by single sign-on are not able .Procedure -. From the navigation panel, select {MenuAEAdminSettings}. -. Select *Miscellaneous Authentication settings* from the list of *System* options. -. Enable the option to *Allow External Users to Create OAuth2 Tokens*. +. From the navigation panel, select {MenuSetGateway}. +. Click btn:[Edit {Gateway} settings]. +. Enable the option to *Allow external users to create OAuth2 tokens*.
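For example, a hedged sketch of creating a PAT with curl, reusing the `/api/gateway/v1/tokens/` endpoint from the Basic authentication example above; the `<gateway-host>` placeholder and the `description` and `scope` request fields are assumptions rather than a confirmed API contract:

----
# Illustrative only: create a personal access token (PAT) with curl.
curl -X POST --user 'user:password' \
  -H "Content-Type: application/json" \
  -d '{"description": "my-pat", "scope": "write"}' \
  https://<gateway-host>/api/gateway/v1/tokens/
----

If the request succeeds, the token value returned in the JSON response can then be supplied on later requests in an `Authorization: Bearer <token>` header.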
diff --git a/downstream/modules/platform/con-controller-api-readonly-fields.adoc b/downstream/modules/platform/con-controller-api-readonly-fields.adoc index 49c00ce09d..b8ff1cd99c 100644 --- a/downstream/modules/platform/con-controller-api-readonly-fields.adoc +++ b/downstream/modules/platform/con-controller-api-readonly-fields.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-api-readonly"] These usually include the URL of a resource, the ID, and occasionally some internal fields. diff --git a/downstream/modules/platform/con-controller-api-sso-auth.adoc b/downstream/modules/platform/con-controller-api-sso-auth.adoc index f12e5ec2e4..4b64a799aa 100644 --- a/downstream/modules/platform/con-controller-api-sso-auth.adoc +++ b/downstream/modules/platform/con-controller-api-sso-auth.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-api-sso-auth"] = Single sign-on authentication @@ -11,5 +13,4 @@ If you click that option, it redirects you to the Identity Provider, in this cas .Additional resources -For the various types of supported SSO authentication methods, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#assembly-controller-set-up-social-authentication[Setting up social authentication] and link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#controller-set-up-enterprise-authentication[Setting up enterprise authentication] in the _{ControllerAG}_. - +For the various types of supported SSO authentication methods, see link:{URLCentralAuth}/gw-configure-authentication#gw-config-authentication-type[Configuring an authentication type]. diff --git a/downstream/modules/platform/con-controller-api-tools.adoc b/downstream/modules/platform/con-controller-api-tools.adoc index c0d0adca67..b8af09d32a 100644 --- a/downstream/modules/platform/con-controller-api-tools.adoc +++ b/downstream/modules/platform/con-controller-api-tools.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="ref-controller-api-tools"] Representational State Transfer (REST) relies on a stateless, client-server, and cacheable communications protocol, usually the HTTP protocol. 
@@ -13,6 +15,7 @@ Further options include the following: * link:http://www.telerik.com/fiddler[Fiddler] * link:https://mitmproxy.org/[mitmproxy] -* link:https://addons.mozilla.org/en-US/firefox/addon/live-http-headers/[Live HTTP headers FireFox extension] -* link:http://sourceforge.net/projects/paros/[Paros] +// * [emcwhinn] Link deprecated +// link:https://addons.mozilla.org/en-US/firefox/addon/live-http-headers/[Live HTTP headers FireFox extension] +* link:https://sourceforge.net/projects/paros/[Paros] diff --git a/downstream/modules/platform/con-controller-backup-restore-playbooks.adoc b/downstream/modules/platform/con-controller-backup-restore-playbooks.adoc index 94c8afe42b..7fc64747d6 100644 --- a/downstream/modules/platform/con-controller-backup-restore-playbooks.adoc +++ b/downstream/modules/platform/con-controller-backup-restore-playbooks.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-backup-restore-playbooks"] = Backup and restore playbooks diff --git a/downstream/modules/platform/con-controller-benefits-fact-caching.adoc b/downstream/modules/platform/con-controller-benefits-fact-caching.adoc index 903a5f158f..641f9cb769 100644 --- a/downstream/modules/platform/con-controller-benefits-fact-caching.adoc +++ b/downstream/modules/platform/con-controller-benefits-fact-caching.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-benefits-of-fact-caching"] = Benefits of fact caching @@ -14,9 +16,9 @@ Custom fact caching could conflict with the controller's fact caching feature. You must use the fact caching module that includes {ControllerName}. ==== -You can select to use cached facts in your job by checking the *Enable Fact Storage* option when you create or edit a job template. +You can select to use cached facts in your job by checking the *Enable fact storage* option when you create or edit a job template. -image::ug-job-templates-options-use-factcache.png[Cached facts] +//image::ug-job-templates-options-use-factcache.png[Cached facts] To clear facts, run the Ansible `clear_facts` link:https://docs.ansible.com/ansible/latest/collections/ansible/builtin/meta_module.html#examples[meta task]. The following is an example playbook that uses the Ansible `clear_facts` meta task. diff --git a/downstream/modules/platform/con-controller-capacity-determination.adoc b/downstream/modules/platform/con-controller-capacity-determination.adoc index ba9ad91962..475da9e123 100644 --- a/downstream/modules/platform/con-controller-capacity-determination.adoc +++ b/downstream/modules/platform/con-controller-capacity-determination.adoc @@ -1,8 +1,10 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-capacity-determination"] = {ControllerNameStart} capacity determination and job impact -The {ControllerNameStart} capacity system determines how many jobs can run on an instance given the amount of resources available to the instance and the size of the jobs that are running (referred to as Impact). +The {ControllerName} capacity system determines how many jobs can run on an instance given the amount of resources available to the instance and the size of the jobs that are running (referred to as Impact). The algorithm used to determine this is based on the following two things: * How much memory is available to the system (`mem_capacity`) @@ -13,7 +15,7 @@ Since groups are made up of instances, instances can also be assigned to multipl This means that impact to one instance can affect the overall capacity of other groups. 
Instance groups, not instances themselves, can be assigned to be used by jobs at various levels. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-clustering[Clustering] in the _{ControllerAG}_. +For more information, see link:{URLControllerAdminGuide}/controller-clustering[Clustering] in _{ControllerAG}_. When the Task Manager prepares its graph to determine which group a job runs on, it commits the capacity of an instance group to a job that is not ready to start yet. @@ -22,5 +24,5 @@ This guarantees that jobs do not get stuck as a result of an under-provisioned s .Additional resources -* For information on container groups, see link:{BaseURL}red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-instance-and-container-groups#controller-container-capacity[Container capacity limits] in the _{ControllerAG}_. -* For information on sliced jobs and their impact to capacity, see xref:controller-job-slice-execution-behavior[Job slice execution behavior]. +* For information about container groups, see link:{URLControllerAdminGuide}/assembly-controller-improving-performance#ref-controller-settings-control-execution-nodes[Capacity settings for instance group and container group] in _{ControllerAG}_. +* For information about sliced jobs and their impact to capacity, see xref:controller-job-slice-execution-behavior[Job slice execution behavior]. diff --git a/downstream/modules/platform/con-controller-capacity-job-impacts.adoc b/downstream/modules/platform/con-controller-capacity-job-impacts.adoc index 6783149907..f512967399 100644 --- a/downstream/modules/platform/con-controller-capacity-job-impacts.adoc +++ b/downstream/modules/platform/con-controller-capacity-job-impacts.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-capacity-job-impacts"] = Capacity job impacts diff --git a/downstream/modules/platform/con-controller-cleanup-expired-sessions.adoc b/downstream/modules/platform/con-controller-cleanup-expired-sessions.adoc index 379a0d363a..8e26b420ea 100644 --- a/downstream/modules/platform/con-controller-cleanup-expired-sessions.adoc +++ b/downstream/modules/platform/con-controller-cleanup-expired-sessions.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-controller-cleanup-expired-sessions"] = Cleanup Expired Sessions @@ -9,4 +11,4 @@ For more information, see xref:proc-controller-scheduling-deletion[Scheduling de You can also set or review notifications associated with this management job the same way as described in xref:proc-controller-management-notifications[Notifications] for activity stream management jobs. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-notifications[Notifications] in the _{ControllerUG}_. +For more information, see link:{URLControllerUserGuide}/controller-notifications[Notifiers] in _{ControllerUG}_. 
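As a companion to the scheduled cleanup job, sessions can also be expired immediately from the command line; a minimal sketch, assuming the `awx-manage expire_sessions` command described in the {ControllerName} administration documentation is available on a control node:

----
# Illustrative only: force current sessions to expire immediately so that the
# Cleanup Expired Sessions management job removes them on its next run.
$ awx-manage expire_sessions
----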
diff --git a/downstream/modules/platform/con-controller-cloud-credentials.adoc b/downstream/modules/platform/con-controller-cloud-credentials.adoc index 8fecd67fdf..a262fa3ed0 100644 --- a/downstream/modules/platform/con-controller-cloud-credentials.adoc +++ b/downstream/modules/platform/con-controller-cloud-credentials.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-cloud-credentials"] = Use Cloud Credentials with a cloud inventory diff --git a/downstream/modules/platform/con-controller-cluster-job-runs.adoc b/downstream/modules/platform/con-controller-cluster-job-runs.adoc index b1be82a2c1..b0049e1bfe 100644 --- a/downstream/modules/platform/con-controller-cluster-job-runs.adoc +++ b/downstream/modules/platform/con-controller-cluster-job-runs.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-cluster-job-runs"] = Job runs @@ -9,7 +11,7 @@ To support taking an instance offline temporarily, there is a property enabled d When this property is disabled, no jobs are assigned to that instance. Existing jobs finish, but no new work is assigned. -.Troubleshooting +*Troubleshooting* When you issue a `cancel` request on a running {ControllerName} job, {ControllerName} issues a `SIGINT` to the ansible-playbook process. While this causes Ansible to stop dispatching new tasks and exit, in many cases, module tasks that were already dispatched to remote hosts will run to completion. diff --git a/downstream/modules/platform/con-controller-configure-hostname-notifications.adoc b/downstream/modules/platform/con-controller-configure-hostname-notifications.adoc index 02a8fed400..d5428e5fb4 100644 --- a/downstream/modules/platform/con-controller-configure-hostname-notifications.adoc +++ b/downstream/modules/platform/con-controller-configure-hostname-notifications.adoc @@ -1,8 +1,10 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-configure-hostname-notifications"] = Configure the host hostname for notifications -In link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-config#controller-configure-system[System settings], you can replace the default value in the *Base URL of the service* field with your preferred hostname to change the notification hostname. +In link:{URLControllerAdminGuide}/controller-config#controller-configure-system[System settings], you can replace the default value in the *Base URL of the service* field with your preferred hostname to change the notification hostname. //image::ug-system-misc-baseurl.png[System Base URL] diff --git a/downstream/modules/platform/con-controller-configure-instance-groups.adoc b/downstream/modules/platform/con-controller-configure-instance-groups.adoc index 1951e19fb1..06f033d1d6 100644 --- a/downstream/modules/platform/con-controller-configure-instance-groups.adoc +++ b/downstream/modules/platform/con-controller-configure-instance-groups.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-configure-instance-groups"] = Configure instance groups from the API @@ -12,4 +14,4 @@ HTTP POST /api/v2/instance_groups/x/instances/ {'id': y}` ---- An instance that is added to an instance group automatically reconfigures itself to listen on the group's work queue. -For more information, see the following section _Instance group policies_. +For more information, see link:{URLControllerUserGuide}/controller-instance-and-container-groups#controller-instance-group-policies[Instance group policies]. 
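An equivalent curl invocation makes the `HTTP POST` shown in the module above concrete; the endpoint comes from the module itself, while `<controller-host>`, the credentials, and the instance and group IDs are placeholders:

----
# Illustrative only: add the instance with id 5 to instance group 2.
curl -X POST --user 'admin:password' \
  -H "Content-Type: application/json" \
  -d '{"id": 5}' \
  https://<controller-host>/api/v2/instance_groups/2/instances/
----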
diff --git a/downstream/modules/platform/con-controller-container-groups.adoc b/downstream/modules/platform/con-controller-container-groups.adoc index 33729a878d..d95719999c 100644 --- a/downstream/modules/platform/con-controller-container-groups.adoc +++ b/downstream/modules/platform/con-controller-container-groups.adoc @@ -1,9 +1,13 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-container-groups"] = Container groups -{PlatformNameShort} supports container groups, which enable you to execute jobs in {ControllerName} regardless of whether {ControllerName} is installed as a standalone, in a virtual environment, or in a container. +{PlatformNameShort} supports container groups, which enable you to run jobs in {ControllerName} regardless of whether {ControllerName} is installed as a standalone, in a virtual environment, or in a container. + Container groups act as a pool of resources within a virtual environment. + You can create instance groups to point to an OpenShift container. These are job environments that are provisioned on-demand as a pod that exists only for the duration of the playbook run. This is known as the ephemeral execution model and ensures a clean environment for every job run. @@ -16,4 +20,4 @@ Container groups upgraded from versions before {ControllerName} 4.0 revert back ==== Container groups are different from {ExecEnvShort}s in that {ExecEnvShort}s are container images and do not use a virtual environment. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#assembly-controller-execution-environments[Execution environments] in the _{ControllerUG}_. +For more information, see link:{URLControllerUserGuide}/assembly-controller-execution-environments[Execution environments]. diff --git a/downstream/modules/platform/con-controller-control-job-run.adoc b/downstream/modules/platform/con-controller-control-job-run.adoc index 0a41d62fc7..bb1276fb4a 100644 --- a/downstream/modules/platform/con-controller-control-job-run.adoc +++ b/downstream/modules/platform/con-controller-control-job-run.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-control-job-run"] = Control where a job runs @@ -17,17 +19,16 @@ Jobs must execute in those groups in preferential order as resources are availab You can still associate the global `default` group with a resource, such as any of the custom instance groups defined in the playbook. You can use this to specify a preferred instance group on the job template or inventory, but still enable the job to be submitted to any instance if those are out of capacity. -.Examples - * If you associate `group_a` with a job template and also associate the `default` group with its inventory, you enable the `default` group to be used as a fallback in case `group_a` gets out of capacity. * In addition, it is possible to not associate an instance group with one resource but choose another resource as the fallback. For example, not associating an instance group with a job template and having it fall back to the inventory or the organization's instance group. -This presents the following two examples: +This presents the following possibilities: . Associating instance groups with an inventory (omitting assigning the job template to an instance group) ensures that any playbook run against a specific inventory runs only on the group associated with it. This is useful in the situation where only those instances have a direct link to the managed nodes. .
An administrator can assign instance groups to organizations. ++ This enables the administrator to segment out the entire infrastructure and guarantee that each organization has capacity to run jobs without interfering with any other organization's ability to run jobs. + An administrator can assign multiple groups to each organization, similar to the following scenario: diff --git a/downstream/modules/platform/con-controller-create-insights-inventory.adoc b/downstream/modules/platform/con-controller-create-insights-inventory.adoc index 2e4307545b..5ef71e03f9 100644 --- a/downstream/modules/platform/con-controller-create-insights-inventory.adoc +++ b/downstream/modules/platform/con-controller-create-insights-inventory.adoc @@ -1,7 +1,10 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-create-insights-inventory"] = Create an Insights inventory The Insights playbook contains a `hosts:` line where the value is the host name supplied to red Hat insights, which can be different from the host name supplied to {ControllerName} -To create a new inventory to use with Red Hat Insights, see xref:proc-controller-inv-source-insights[Creating Insights credentials]. +// This looks like a circular reference +//To create a new inventory to use with Red Hat Insights, see xref:proc-controller-inv-source-insights[Red Hat Insights]. diff --git a/downstream/modules/platform/con-controller-ee-mount-options.adoc b/downstream/modules/platform/con-controller-ee-mount-options.adoc index 1b28b8f079..afb2e682f5 100644 --- a/downstream/modules/platform/con-controller-ee-mount-options.adoc +++ b/downstream/modules/platform/con-controller-ee-mount-options.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-controller-ee-mount-options"] = Execution environment mount options diff --git a/downstream/modules/platform/con-controller-enable-notifications.adoc b/downstream/modules/platform/con-controller-enable-notifications.adoc index 934fe8c0d0..c42a5a7b9d 100644 --- a/downstream/modules/platform/con-controller-enable-notifications.adoc +++ b/downstream/modules/platform/con-controller-enable-notifications.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-enable-disable-notifications"] = Enable and disable notifications @@ -24,4 +26,5 @@ For workflow templates that have approval nodes, in addition to *Start*, *Succes //image::ug-completed-notifications-view.png[Completed notifications view] .Additional resources -For more information on working with these types of nodes, see xref:controller-approval-nodes[Approval nodes]. 
+ +* xref:controller-approval-nodes[Approval nodes] diff --git a/downstream/modules/platform/con-controller-enforce-separation-duties.adoc b/downstream/modules/platform/con-controller-enforce-separation-duties.adoc index 8ea75cb9dd..dc9f440250 100644 --- a/downstream/modules/platform/con-controller-enforce-separation-duties.adoc +++ b/downstream/modules/platform/con-controller-enforce-separation-duties.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-enforce-separation-duties"] = Enforce separation of duties diff --git a/downstream/modules/platform/con-controller-fact-caching.adoc b/downstream/modules/platform/con-controller-fact-caching.adoc index 321e806214..309014b7ce 100644 --- a/downstream/modules/platform/con-controller-fact-caching.adoc +++ b/downstream/modules/platform/con-controller-fact-caching.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-fact-caching"] = Fact caching diff --git a/downstream/modules/platform/con-controller-fips-support.adoc b/downstream/modules/platform/con-controller-fips-support.adoc new file mode 100644 index 0000000000..b1b0f29b7d --- /dev/null +++ b/downstream/modules/platform/con-controller-fips-support.adoc @@ -0,0 +1,6 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-fips-support_{context}"] + += Support for deployment in a FIPS-enabled environment +{ControllerNameStart} deploys and runs in restricted modes such as FIPS. \ No newline at end of file diff --git a/downstream/modules/platform/con-controller-how-credentials-work.adoc b/downstream/modules/platform/con-controller-how-credentials-work.adoc new file mode 100644 index 0000000000..693e63ce10 --- /dev/null +++ b/downstream/modules/platform/con-controller-how-credentials-work.adoc @@ -0,0 +1,30 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-how-credentials-work"] + += How credentials work + +{ControllerNameStart} uses SSH to connect to remote hosts. +To pass the key from {ControllerName} to SSH, the key must be decrypted before it can be written to a named pipe. +{ControllerNameStart} uses that pipe to send the key to SSH, so that the key is never written to disk. +If passwords are used, {ControllerName} handles them by responding directly to the password prompt and decrypting the password before writing it to the prompt. + +The *Credentials* page shows credentials that are currently available. +The default view is collapsed (Compact), showing the credential name and credential type. + +From this screen, you can edit image:leftpencil.png[Edit,15,15], duplicate image:copy.png[Copy,15,15], or delete {MoreActionsIcon} a credential. + +[NOTE] +==== +It is possible to create duplicate credentials with the same name and without an organization. +However, it is not possible to create two duplicate credentials in the same organization. + +.Example + +. Create two machine credentials with the same name but without an organization. +. Use the module `ansible.controller.export` to export the credentials. +. Use the module `ansible.controller.import` in a different automation execution node. +. Check the imported credentials. + +When you export two duplicate credentials and then import them in a different node, only one credential is imported.
==== \ No newline at end of file diff --git a/downstream/modules/platform/con-controller-impact-of-job-types.adoc b/downstream/modules/platform/con-controller-impact-of-job-types.adoc index fb3c9889f2..0572cc80b8 100644 --- a/downstream/modules/platform/con-controller-impact-of-job-types.adoc +++ b/downstream/modules/platform/con-controller-impact-of-job-types.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-impact-of-job-types"] = Impact of job types in {ControllerName} diff --git a/downstream/modules/platform/con-controller-infrastructure.adoc b/downstream/modules/platform/con-controller-infrastructure.adoc new file mode 100644 index 0000000000..05f4565ac4 --- /dev/null +++ b/downstream/modules/platform/con-controller-infrastructure.adoc @@ -0,0 +1,16 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-infrastructure"] + += Infrastructure menu + +The *Infrastructure* menu provides quick access to the following {ControllerName} resources: + +* link:{URLControllerUserGuide}/assembly-controller-topology-viewer[Topology View] +* link:{URLControllerUserGuide}/controller-inventories[Inventories] +* link:{URLControllerUserGuide}/assembly-controller-hosts[Hosts] +* link:{URLControllerUserGuide}/controller-instance-groups[Instance Groups] +* link:{URLControllerUserGuide}/assembly-controller-instances[Instances] +* link:{URLControllerUserGuide}/assembly-controller-execution-environments[Execution Environments] +* link:{URLControllerUserGuide}/controller-credentials[Credentials] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-types[Credential Types] diff --git a/downstream/modules/platform/con-controller-instance-groups.adoc b/downstream/modules/platform/con-controller-instance-groups.adoc index 0215512211..4d8f4a709f 100644 --- a/downstream/modules/platform/con-controller-instance-groups.adoc +++ b/downstream/modules/platform/con-controller-instance-groups.adoc @@ -1,4 +1,6 @@ -[id="controller-instance-groups"] +:_mod-docs-content-type: CONCEPT + +[id="con-controller-instance-groups"] = Instance groups @@ -22,7 +24,7 @@ Consider the following when working with instance groups: These groups must be prefixed with `instance_group_`. Instances are required to be in the `automationcontroller` or `execution_nodes` group alongside other `instance_group_` groups. In a clustered setup, at least one instance must be present in the `automationcontroller` group, which appears as `controlplane` in the API instance groups. -For more information and example scenarios, see xref:controller-group-policies-automationcontroller[Group policies for `automationcontroller`]. +For more information and example scenarios, see link:{URLControllerUserGuide}/controller-instance-and-container-groups#controller-group-policies-automationcontroller[Group policies for `automationcontroller`]. * You cannot modify the `controlplane` instance group, and attempting to do so results in a permission denied error for any user.
+ diff --git a/downstream/modules/platform/con-controller-inventory-sync-jobs.adoc b/downstream/modules/platform/con-controller-inventory-sync-jobs.adoc index 78b0fcd149..88077a0564 100644 --- a/downstream/modules/platform/con-controller-inventory-sync-jobs.adoc +++ b/downstream/modules/platform/con-controller-inventory-sync-jobs.adoc @@ -1,22 +1,25 @@ -[id="controller-inventory-sync-jobs"] +:_mod-docs-content-type: CONCEPT + +[id="controller-inventory-sync-jobs_{context}"] = Inventory sync jobs When an inventory synchronization is executed, the results display in the *Output* tab. -For more information on inventory synchronization, see xref:ref-controller-constructed-inventories[Constructed inventories]. +For more information about inventory synchronization, see link:{URLControllerUserGuide}/controller-inventories#ref-controller-constructed-inventories[Constructed inventories]. If used, the Ansible CLI displays the same information. This can be useful for debugging. The `ANSIBLE_DISPLAY_ARGS_TO_STDOUT` parameter is set to `False` for all playbook runs. -This parameter matches Ansible's default behavior and does not display task arguments in task headers in the *Job Detail* interface to avoid leaking certain sensitive module parameters to `stdout`. -To restore the previous behavior, set `ANSIBLE_DISPLAY_ARGS_TO_STDOUT` to `True` through the `AWX_TASK_ENV` configuration setting. +This parameter matches Ansible's default behavior and does not display task arguments in task headers in the Job *Details* interface to avoid leaking certain sensitive module parameters to `stdout`. +To restore the earlier behavior, set `ANSIBLE_DISPLAY_ARGS_TO_STDOUT` to `True` through the `AWX_TASK_ENV` configuration setting. -For more information, see link:http://docs.ansible.com/ansible/latest/reference_appendices/config.html#envvar-ANSIBLE_DISPLAY_ARGS_TO_STDOUT[ANSIBLE_DISPLAY_ARGS_TO_STDOUT] in the ansible documentation. +For more information, see link:http://docs.ansible.com/ansible/latest/reference_appendices/config.html#envvar-ANSIBLE_DISPLAY_ARGS_TO_STDOUT[ANSIBLE_DISPLAY_ARGS_TO_STDOUT] in the Ansible Configuration Settings. -Use the icons to relaunch image:rightrocket.png[Launch,15,15], download image:download.png[Download,15,15] the job output, or delete image:delete-button.png[Delete,15,15] the job. +// For AAP-45084, I need to confirm if the latest env shows the following options: +You can btn:[Relaunch job], btn:[Cancel job], download image:download.png[Download,15,15] the job output, or delete image:delete-button.png[Delete,15,15] the job. -image::ug-show-job-results-for-inv-sync.png[Job results inventory sync] +//image::ug-show-job-results-for-inv-sync.png[Job results inventory sync] [NOTE] ==== diff --git a/downstream/modules/platform/con-controller-job-branch-overriding.adoc b/downstream/modules/platform/con-controller-job-branch-overriding.adoc index 5a542ea4bc..f564738c49 100644 --- a/downstream/modules/platform/con-controller-job-branch-overriding.adoc +++ b/downstream/modules/platform/con-controller-job-branch-overriding.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-job-branch-overriding"] = Job branch overriding @@ -7,5 +9,5 @@ These are represented by the values specified in the *Type Details* fields: image::ug-scm-project-branching-emphasized.png[Project branching emphasized] -When creating or editing a job you have the option to *Allow Branch Override*. +When creating or editing a job you have the option to *Allow branch override*. 
When this option is checked, project administrators can delegate branch selection to the job templates that use that project, requiring only project `use_role`. diff --git a/downstream/modules/platform/con-controller-job-slice-considerations.adoc b/downstream/modules/platform/con-controller-job-slice-considerations.adoc index 7710e8457a..6b6cdc7379 100644 --- a/downstream/modules/platform/con-controller-job-slice-considerations.adoc +++ b/downstream/modules/platform/con-controller-job-slice-considerations.adoc @@ -1,4 +1,6 @@ -[id="controller-job-slice-considerations"] +:_mod-docs-content-type: CONCEPT + +[id="con-controller-job-slice-considerations"] = Job slice considerations diff --git a/downstream/modules/platform/con-controller-keep-subscription-in-compliance.adoc b/downstream/modules/platform/con-controller-keep-subscription-in-compliance.adoc index 2e53bb851c..ded76428dd 100644 --- a/downstream/modules/platform/con-controller-keep-subscription-in-compliance.adoc +++ b/downstream/modules/platform/con-controller-keep-subscription-in-compliance.adoc @@ -1,6 +1,8 @@ -[id="controller-keep-subscription-in-compliance"] +:_mod-docs-content-type: CONCEPT -= Troubleshooting: Keep your subscription in compliance +[id="controller-keep-subscription-in-compliance_{context}"] + += Keeping your subscription in compliance Your subscription has two possible statuses: @@ -8,7 +10,6 @@ Your subscription has two possible statuses: * *Out of compliance*: Indicates that you have exceeded the number of hosts in your subscription. //image::gs-controller-license-non-compliant.png[Subscription out of compliance] -ifdef::controller-UG,controller-AG[] Compliance is computed as follows: [literal, options="nowrap" subs="+attributes"] @@ -23,11 +24,11 @@ Where: Other important information displayed are: -* *Hosts automated*: Host count automated by the job, which consumes the license count. -* *Hosts imported*: Host count considering unique host names across all inventory sources. This number does not impact hosts remaining. -* *Hosts remaining*: Total host count minus hosts automated. -* *Hosts deleted*: Hosts that were deleted, freeing the license capacity. -* *Active hosts previously deleted*: Number of hosts now active that were previously deleted. +* *Hosts automated*: The number of hosts automated by the job, which consumes the license count. +* *Hosts imported*: The number of hosts considering unique host names across all inventory sources. This number does not impact hosts remaining. +* *Hosts remaining*: The number of hosts minus the number of hosts automated. +* *Hosts deleted*: The number of hosts that were deleted, freeing the license capacity. +* *Active hosts previously deleted*: The number of hosts now active that were previously deleted. For example, if you have a subscription capacity of 10 hosts: @@ -35,27 +36,3 @@ For example, if you have a subscription capacity of 10 hosts: * 3 hosts were automated again, now you have 11 hosts, which puts you over the subscription limit of 10 (non-compliant). * If you delete hosts, refresh the subscription details to see the change in count and status. -= Viewing the host activity - -.Procedure -//[ddacosta] I don't see a Host Metrics menu selection off the standalone navigation panel. Should it be Resources > Hosts? If so, add replace with {MenuInfrastructureHosts} -//[ddacosta] For 2.5 Host Metrics is off the Analytics menu. Use {MenuAAHostMetrics} -. 
In the navigation panel, select menu:Host Metrics[] to view the activity associated with hosts, such as those that have been automated and deleted. -+ -Each unique hostname is listed and sorted by the user's preference. -+ -image::ug-host-metrics.png[Host metrics] -+ -[NOTE] -==== -A scheduled task automatically updates these values on a weekly basis and deletes jobs with hosts that were last automated more than a year ago. -==== - -. Delete unnecessary hosts directly from the Host Metrics view by selecting the desired hosts and clicking btn:[Delete]. -+ -These are soft-deleted, meaning their records are not removed, but are not being used and thereby not counted towards your subscription. -endif::controller-UG,controller-AG[] - -ifdef::controller-GS,controller-AG[] -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#controller-keep-subscription-in-compliance[Troubleshooting: Keeping your subscription in compliance] in the _{ControllerUG}_. -endif::controller-GS,controller-AG[] diff --git a/downstream/modules/platform/con-controller-metrics-monitor-controller.adoc b/downstream/modules/platform/con-controller-metrics-monitor-controller.adoc index 41fe7dca03..20ee90f1b0 100644 --- a/downstream/modules/platform/con-controller-metrics-monitor-controller.adoc +++ b/downstream/modules/platform/con-controller-metrics-monitor-controller.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-controller-metrics-monitor-controller"] = Metrics to monitor {ControllerName} @@ -21,9 +23,10 @@ Application level metrics provide data that the application knows about the syst Using system and application metrics can help you identify what was happening in the application when a service degradation occurred. Information about {ControllerName}'s performance over time helps when diagnosing problems or doing capacity planning for future growth. include::ref-controller-metrics-monitoring.adoc[leveloffset=+1] + include::con-controller-system-level-monitoring.adoc[leveloffset=+1] .Additional resources -* For more information about configuring monitoring, see xref:assembly-controller-metrics[Metrics]. -* Additional insights into automation usage are available when you enable data collection for automation analytics. For more information, see link:https://www.ansible.com/products/insights-for-ansible[Automation analytics and Red Hat Insights for Red Hat Ansible Automation Platform]. 
+* xref:assembly-controller-metrics[Metrics] +* link:https://www.ansible.com/products/insights-for-ansible[Automation analytics and Red Hat Insights for Red Hat Ansible Automation Platform] diff --git a/downstream/modules/platform/con-controller-minimize-administrative-accounts.adoc b/downstream/modules/platform/con-controller-minimize-administrative-accounts.adoc index c8e28195af..d310fb761c 100644 --- a/downstream/modules/platform/con-controller-minimize-administrative-accounts.adoc +++ b/downstream/modules/platform/con-controller-minimize-administrative-accounts.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-minimize-administrative-accounts"] = Minimize administrative accounts diff --git a/downstream/modules/platform/con-controller-minimize-system-access.adoc b/downstream/modules/platform/con-controller-minimize-system-access.adoc index f4d6228fad..a7847fa664 100644 --- a/downstream/modules/platform/con-controller-minimize-system-access.adoc +++ b/downstream/modules/platform/con-controller-minimize-system-access.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-minimize-system-access"] = Minimize local system access diff --git a/downstream/modules/platform/con-controller-notification-hierarchy.adoc b/downstream/modules/platform/con-controller-notification-hierarchy.adoc index 15ecbea193..56ad17575a 100644 --- a/downstream/modules/platform/con-controller-notification-hierarchy.adoc +++ b/downstream/modules/platform/con-controller-notification-hierarchy.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-notification-hierarchy"] = Notification hierarchy diff --git a/downstream/modules/platform/con-controller-notification-types.adoc b/downstream/modules/platform/con-controller-notification-types.adoc index 961f12e70a..bd0cf3ece1 100644 --- a/downstream/modules/platform/con-controller-notification-types.adoc +++ b/downstream/modules/platform/con-controller-notification-types.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-notification-types"] = Notification types @@ -20,5 +22,6 @@ You might need to test them in different ways. Additionally, you can customize each type of notification down to a specific detail or a set of criteria to trigger a notification. .Additional resources -For more information on configuring custom notifications, see xref:controller-create-custom-notifications[Create custom notifications]. -The following sections give further details on each type of notification. + +* xref:controller-create-custom-notifications[Create custom notifications] + diff --git a/downstream/modules/platform/con-controller-notification-workflow.adoc b/downstream/modules/platform/con-controller-notification-workflow.adoc index 8bd7d221a4..ffbceb2be8 100644 --- a/downstream/modules/platform/con-controller-notification-workflow.adoc +++ b/downstream/modules/platform/con-controller-notification-workflow.adoc @@ -1,8 +1,10 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-notification-workflow"] = Notification workflow -When a job succeeds or fails, the error or success handler pulls a list of relevant notification templates using the procedure defined in the xref:controller-notifications[Notifications] section. +When a job succeeds or fails, the error or success handler pulls a list of relevant notification templates using the procedure defined in the xref:controller-notifications[Notifiers] section. 
It then creates a notification object for each one, containing relevant details about the job and sends it to the destination. These include email addresses, slack channels, and SMS numbers. diff --git a/downstream/modules/platform/con-controller-overview-api.adoc b/downstream/modules/platform/con-controller-overview-api.adoc new file mode 100644 index 0000000000..937d0566a1 --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-api.adoc @@ -0,0 +1,6 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-api_{context}"] + += The ideal RESTful API +The {ControllerName} REST API is the ideal RESTful API for a systems management application, with all resources fully discoverable, paginated, searchable, and well modeled. A styled API browser enables API exploration from the API root at `\http:///api/`, showing off every resource and relation. Everything that can be done in the user interface can be done in the API. \ No newline at end of file diff --git a/downstream/modules/platform/con-controller-overview-auth-enhance.adoc b/downstream/modules/platform/con-controller-overview-auth-enhance.adoc new file mode 100644 index 0000000000..01ddb6591c --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-auth-enhance.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-auth-enhance_{context}"] + += Authentication enhancements +{ControllerNameStart} supports: + +* LDAP +* SAML +* token-based authentication + +With LDAP and SAML support, you can integrate your enterprise account information in a more flexible manner. + +Token-based authentication permits authentication of third-party tools and services with {ControllerName} through integrated OAuth 2 token support. diff --git a/downstream/modules/platform/con-controller-overview-automation.adoc b/downstream/modules/platform/con-controller-overview-automation.adoc new file mode 100644 index 0000000000..5000808269 --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-automation.adoc @@ -0,0 +1,7 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-automation_{context}"] + += "Push Button" automation +Use {ControllerName} to access your favorite projects and re-trigger execution from the web interface. +{ControllerNameStart} asks for input variables, prompts for your credentials, starts and monitors jobs, and displays results and host history. diff --git a/downstream/modules/platform/con-controller-overview-backup-restore.adoc b/downstream/modules/platform/con-controller-overview-backup-restore.adoc new file mode 100644 index 0000000000..ccc3ab6ad0 --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-backup-restore.adoc @@ -0,0 +1,6 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-backup-restore_{context}"] + += Backup and restore +{PlatformNameShort} can back up and restore your system or systems, making it easy for you to back up and replicate your instance as required.
\ No newline at end of file diff --git a/downstream/modules/platform/con-controller-overview-cloud-autoscaling.adoc b/downstream/modules/platform/con-controller-overview-cloud-autoscaling.adoc new file mode 100644 index 0000000000..72fb28c3c7 --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-cloud-autoscaling.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-cloud-autoscaling_{context}"] + + += Cloud and autoscaling flexibility +{ControllerNameStart} includes a powerful optional provisioning callback feature that enables nodes to request configuration on-demand. +This is an ideal solution for a cloud auto-scaling scenario and includes the following features: + +* It integrates with provisioning servers such as Cobbler and deals with managed systems with unpredictable uptimes. +* It requires no management software to be installed on remote nodes. +* The callback solution can be triggered by a call to `curl` or `wget`, and can be embedded in `init` scripts, kickstarts, or preseeds. +* You can control access so that only machines listed in the inventory can request configuration. \ No newline at end of file diff --git a/downstream/modules/platform/con-controller-overview-cluster-manage.adoc b/downstream/modules/platform/con-controller-overview-cluster-manage.adoc new file mode 100644 index 0000000000..1a568d1070 --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-cluster-manage.adoc @@ -0,0 +1,6 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-cluster-manage_{context}"] + += Cluster management +Runtime management of cluster groups enables configurable scaling. \ No newline at end of file diff --git a/downstream/modules/platform/con-controller-overview-details.adoc b/downstream/modules/platform/con-controller-overview-details.adoc index 7d165bbfbe..2740d299d2 100644 --- a/downstream/modules/platform/con-controller-overview-details.adoc +++ b/downstream/modules/platform/con-controller-overview-details.adoc @@ -1,162 +1,6 @@ -[id="con-controller-overview-details"] +:_mod-docs-content-type: CONCEPT -= Real-time playbook output and exploration -With {ControllerName} you can watch playbooks run in real time, seeing each host as they check in. -You can go back and explore the results for specific tasks and hosts in great detail, search for specific plays or hosts and see just those results, or locate errors that need to be corrected. +[id="con-controller-overview-details_{context}"] -= "Push Button" automation -Use {ControllerName} to access your favorite projects and re-trigger execution from the web interface. -{ControllerNameStart} asks for input variables, prompts for your credentials, starts and monitors jobs, and displays results and host history. -= Simplified role-based access control and auditing -With {ControllerName} you can: - -* Grant permissions to perform a specific task to different teams or explicit users through _role-based access control_ (RBAC). -Example tasks include viewing, creating, or modifying a file. -* Keep some projects private, while enabling some users to edit inventories, and others to run playbooks against certain systems, either in check (dry run) or live mode. -* Enable certain users to use credentials without exposing the credentials to them. - -{ControllerNameStart} records the history of operations and who made them, including objects edited and jobs launched.
- -If you want to give any user or team permissions to use a job template, you can assign permissions directly on the job template. Credentials are full objects in the {ControllerName} RBAC system, and can be assigned to many users or teams for use. - -{ControllerNameStart} includes an _auditor_ type. A system-level auditor can see all aspects of the systems automation, but does not have permission to run or change automation. -An auditor is useful for a service account that scrapes automation information from the REST API. - -.Additional resources -* For more information about user roles, see xref:con-controller-rbac[Role-Based Access Controls]. - -= Cloud and autoscaling flexibility -{ControllerNameStart} includes a powerful optional provisioning callback feature that enables nodes to request configuration on-demand. -This is an ideal solution for a cloud auto-scaling scenario and includes the following features: - -* It integrates with provisioning servers such as Cobbler and deals with managed systems with unpredictable uptimes. -* It requires no management software to be installed on remote nodes. -* The callback solution can be triggered by a call to `curl` or `wget`, and can be embedded in `init` scripts, kickstarts, or preseeds. -* You can control access so that only machines listed in the inventory can request configuration. - -= The ideal RESTful API -The {ControllerName} REST API is the ideal RESTful API for a systems management application, with all resources fully discoverable, paginated, searchable, and well modeled. A styled API browser enables API exploration from the API root at `\http:///api/`, showing off every resource and relation. Everything that can be done in the user interface can be done in the API. - -= Backup and restore -{PlatformNameShort} can backup and restore your systems or systems, making it easy for you to backup and replicate your instance as required. - -= Ansible Galaxy integration -By including an {Galaxy} `requirements.yml` file in your project directory, {ControllerName} automatically fetches the roles your playbook needs from Galaxy, GitHub, or your local source control. -For more information, see xref:ref-projects-galaxy-support[Ansible Galaxy Support]. - -= Inventory support for OpenStack -Dynamic inventory support is available for OpenStack. With this you can target any of the virtual machines or images running in your OpenStack cloud. - -For more information, see the xref:ref-controller-credential-openstack[OpenStack credential type] section. - -= Remote command execution -Use remote command execution to perform a simple tasks, such as adding a single user, updating a single security vulnerability, or restarting a failing service. -Any task that you can describe as a single Ansible play can be run on a host or group of hosts in your inventory. -You can manage your systems quickly and easily. -Because of an RBAC engine and detailed audit logging, you know which user has completed a specific task. - -= System tracking -You can collect facts using the fact caching feature. For more information, see xref:controller-fact-caching[Fact Caching]. - -= Integrated notifications -Keep track of the status of your automation. 
- -You can configure the following notifications: - -* stackable notifications for job templates, projects, or entire organizations -* different notifications for job start, job success, job failure, and job approval (for workflow nodes) - -The following notification sources are supported: - -* xref:controller-notification-email[Email] -* xref:controller-notification-grafana[Grafana] -* xref:controller-notification-irc[IRC] -* xref:controller-notification-mattermost[Mattermost] -* xref:controller-notification-pagerduty[PagerDuty] -* xref:controller-notification-rocketchat[Rocket.Chat] -* xref:controller-notification-slack[Slack] -* xref:controller-notification-twilio[Twilio] -* xref:controller-notification-webhook[Webhook] (post to an arbitrary webhook, for integration into other tools) - -You can also customize notification messages for each of the preceding notification types. - -= Integrations - -{ControllerNameStart} supports the following integrations: - -* Dynamic inventory sources for Red Hat Satellite 6. - -For more information, see xref:proc-controller-inv-source-satellite[Red Hat Satellite 6]. - -* Red Hat Insights integration, enabling Insights playbooks to be used as an {PlatformNameShort} project. - -For more information, see xref:controller-setting-up-insights[Setting up Insights Remediations]. - -* {HubNameStart} acts as a content provider for {ControllerName}, requiring both an {ControllerName} deployment and an {HubName} deployment running alongside each other. - - -= Custom Virtual Environments -With Custom Ansible environment support you can have different Ansible environments and specify custom paths for different teams and jobs. - -= Authentication enhancements -Automation controller supports: - -* LDAP -* SAML -* token-based authentication - -With LDAP and SAML support you can integrate your enterprise account information in a more flexible manner. - -Token-based authentication permits authentication of third-party tools and services with {ControllerName} through integrated OAuth 2 token support. - -= Cluster management -Run time management of cluster groups enables configurable scaling. - -= Workflow enhancements -To model your complex provisioning, deployment, and orchestration workflows, you can use {ControllerName} expanded workflows in several ways: - -* *Inventory overrides for Workflows* You can override an inventory across a workflow at workflow definition time, or at launch time. -Use {ControllerName} to define your application deployment workflows, and then re-use them in many environments. -* *Convergence nodes for Workflows* When modeling complex processes, you must sometimes wait for many steps to finish before proceeding. -{ControllerNameStart} workflows can replicate this; workflow steps can wait for any number of earlier workflow steps to complete properly before proceeding. -* *Workflow Nesting* You can re-use individual workflows as components of a larger workflow. -Examples include combining provisioning and application deployment workflows into a single workflow. -* *Workflow Pause and Approval* You can build workflows containing approval nodes that require user intervention. -This makes it possible to pause workflows in between playbooks so that a user can give approval (or denial) for continuing on to the next step in the workflow. - -For more information, see xref:controller-workflows[Workflows in {ControllerName}]. 
- -= Job distribution - -Take a fact gathering or configuration job running across thousands of machines and divide it into slices that can be distributed across your {ControllerName} cluster. -This increases reliability, offers faster job completion, and improved cluster use. - -For example, you can change a parameter across 15,000 switches at scale, or gather information across your multi-thousand-node RHEL estate. - -For more information, see xref:controller-job-slicing[Job Slicing]. - -= Support for deployment in a FIPS-enabled environment -{ControllerNameStart} deploys and runs in restricted modes such as FIPS. - -= Limit the number of hosts per organization -Many large organizations have instances shared among many organizations. -To ensure that one organization cannot use all the licensed hosts, this feature enables superusers to set a specified upper limit on how many licensed hosts can that you can allocate to each organization. -The {ControllerName} algorithm factors changes in the limit for an organization and the number of total hosts across all organizations. -Inventory updates fail if an inventory synchronization brings an organization out of compliance with the policy. -Additionally, superusers are able to over-allocate their licenses, with a warning. - -= Inventory plugins -The following inventory plugins are used from upstream collections: - -* `amazon.aws.aws_ec2` -* `community.vmware.vmware_vm_inventory` -* `azure.azcollection.azure_rm` -* `google.cloud.gcp_compute` -* `theforeman.foreman.foreman` -* `openstack.cloud.openstack` -* `ovirt.ovirt.ovirt` -* `awx.awx.tower` - -= Secret management system -With a secret management system, external credentials are stored and supplied for use in {ControllerName} so you need not provide them directly. +{PlatformNameShort} includes {ControllerName}, which enables users to define, operate, scale, and delegate automation across their enterprise. diff --git a/downstream/modules/platform/con-controller-overview-exploration.adoc b/downstream/modules/platform/con-controller-overview-exploration.adoc new file mode 100644 index 0000000000..7cabc5ef0a --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-exploration.adoc @@ -0,0 +1,7 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-exploration_{context}"] + += Real-time playbook output and exploration +With {ControllerName} you can watch playbooks run in real time, seeing each host as they check in. +You can go back and explore the results for specific tasks and hosts in great detail, search for specific plays or hosts and see just those results, or locate errors that need to be corrected. \ No newline at end of file diff --git a/downstream/modules/platform/con-controller-overview-galaxy.adoc b/downstream/modules/platform/con-controller-overview-galaxy.adoc new file mode 100644 index 0000000000..40bd64a9f8 --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-galaxy.adoc @@ -0,0 +1,7 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-galaxy_{context}"] + += Ansible Galaxy integration +By including an {Galaxy} `requirements.yml` file in your project directory, {ControllerName} automatically fetches the roles your playbook needs from Galaxy, GitHub, or your local source control. +For more information, see xref:ref-projects-galaxy-support[Ansible Galaxy Support]. 
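+
+To make this concrete, the following is a minimal sketch of a `requirements.yml` file; the role name, collection names, version values, and Git URL are illustrative placeholders, and in {ControllerName} projects such files conventionally live at `roles/requirements.yml` or `collections/requirements.yml`:
+
+[source,yaml]
+----
+# Illustrative requirements.yml; entries are examples, not defaults
+roles:
+  - src: geerlingguy.apache          # a role fetched from Galaxy
+    version: "3.1.4"
+collections:
+  - name: community.general          # a collection fetched from Galaxy
+  - name: https://github.com/my_org/my_collection.git
+    type: git                        # fetched from source control instead
+    version: main
+----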
\ No newline at end of file diff --git a/downstream/modules/platform/con-controller-overview-host-limits.adoc b/downstream/modules/platform/con-controller-overview-host-limits.adoc new file mode 100644 index 0000000000..dd24dbc9fa --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-host-limits.adoc @@ -0,0 +1,10 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-host-limits_{context}"] + += Limit the number of hosts per organization +Many large organizations have instances shared among many organizations. +To ensure that one organization cannot use all the licensed hosts, this feature enables superusers to set a specified upper limit on how many licensed hosts can that you can allocate to each organization. +The {ControllerName} algorithm factors changes in the limit for an organization and the number of total hosts across all organizations. +Inventory updates fail if an inventory synchronization brings an organization out of compliance with the policy. +Additionally, superusers are able to over-allocate their licenses, with a warning. \ No newline at end of file diff --git a/downstream/modules/platform/con-controller-overview-integrations.adoc b/downstream/modules/platform/con-controller-overview-integrations.adoc new file mode 100644 index 0000000000..01d4a02b28 --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-integrations.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-integrations_{context}"] + += Integrations + +{ControllerNameStart} supports the following integrations: + +* Dynamic inventory sources for Red Hat Satellite 6. + +For more information, see xref:proc-controller-inv-source-satellite[Red Hat Satellite 6]. + +* Red Hat Insights integration, enabling Insights playbooks to be used as an {PlatformNameShort} project. + +For more information, see xref:controller-setting-up-insights[Setting up Red Hat Insights for {PlatformName} Remediations]. + +* {HubNameStart} acts as a content provider for {ControllerName}, requiring both an {ControllerName} deployment and an {HubName} deployment running alongside each other. diff --git a/downstream/modules/platform/con-controller-overview-inventory-plugins.adoc b/downstream/modules/platform/con-controller-overview-inventory-plugins.adoc new file mode 100644 index 0000000000..dbb7c5c1a8 --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-inventory-plugins.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-inventory-plugins_{context}"] + += Inventory plugins +The following inventory plugins are used from upstream collections: + +* `amazon.aws.aws_ec2` +* `community.vmware.vmware_vm_inventory` +// May be one for vmware-exsi? +* `azure.azcollection.azure_rm` +* `google.cloud.gcp_compute` +* `theforeman.foreman.foreman` +* `openstack.cloud.openstack` +* `ovirt.ovirt.ovirt` +* `awx.awx.tower` +//Possible 3 new plugins. 
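+
+As a sketch of how one of these plugins is typically configured, an `amazon.aws.aws_ec2` inventory source file might look like the following; the region, tag filter, and group prefix are illustrative assumptions, not defaults:
+
+[source,yaml]
+----
+# aws_ec2.yml — illustrative plugin configuration
+plugin: amazon.aws.aws_ec2
+regions:
+  - us-east-1
+filters:
+  tag:environment: production     # only instances tagged environment=production
+keyed_groups:
+  - key: tags.role                # group hosts by their "role" tag
+    prefix: role
+hostnames:
+  - private-ip-address
+----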
diff --git a/downstream/modules/platform/con-controller-overview-job-distribution.adoc b/downstream/modules/platform/con-controller-overview-job-distribution.adoc new file mode 100644 index 0000000000..7d0232140b --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-job-distribution.adoc @@ -0,0 +1,12 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-job-distribution_{context}"] + += Job distribution + +Take a fact gathering or configuration job running across thousands of machines and divide it into slices that can be distributed across your {ControllerName} cluster. +This increases reliability, offers faster job completion, and improves cluster use. + +For example, you can change a parameter across 15,000 switches at scale, or gather information across your multi-thousand-node RHEL estate. + +For more information, see link:{URLControllerUserGuide}/controller-job-slicing[Job slicing]. diff --git a/downstream/modules/platform/con-controller-overview-notifiers.adoc b/downstream/modules/platform/con-controller-overview-notifiers.adoc new file mode 100644 index 0000000000..efc796449c --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-notifiers.adoc @@ -0,0 +1,25 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-notifiers_{context}"] + += Integrated notifications +Keep track of the status of your automation. + +You can configure the following notifications: + +* stackable notifications for job templates, projects, or entire organizations +* different notifications for job start, job success, job failure, and job approval (for workflow nodes) + +The following notification sources are supported: + +* xref:controller-notification-email[Email] +* xref:controller-notification-grafana[Grafana] +* xref:controller-notification-irc[IRC] +* xref:controller-notification-mattermost[Mattermost] +* xref:controller-notification-pagerduty[PagerDuty] +* xref:controller-notification-rocketchat[Rocket.Chat] +* xref:controller-notification-slack[Slack] +* xref:controller-notification-twilio[Twilio] +* xref:controller-notification-webhook[Webhook] (post to an arbitrary webhook, for integration into other tools) + +You can also customize notification messages for each of the preceding notification types. \ No newline at end of file diff --git a/downstream/modules/platform/con-controller-overview-openstack.adoc b/downstream/modules/platform/con-controller-overview-openstack.adoc new file mode 100644 index 0000000000..53a060e166 --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-openstack.adoc @@ -0,0 +1,8 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-openstack_{context}"] + += Inventory support for OpenStack +Dynamic inventory support is available for OpenStack. With this, you can target any of the virtual machines or images running in your OpenStack cloud. + +For more information, see xref:ref-controller-credential-openstack[OpenStack credential type].
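+
+For illustration, a minimal `openstack.cloud.openstack` inventory file might look like the following sketch, assuming authentication details are already defined in a `clouds.yaml` file:
+
+[source,yaml]
+----
+# openstack.yml — illustrative plugin configuration
+plugin: openstack.cloud.openstack
+expand_hostvars: true     # fetch extra facts for each server
+fail_on_errors: true      # fail rather than return a partial inventory
+all_projects: false       # restrict to the project named in clouds.yaml
+----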
\ No newline at end of file diff --git a/downstream/modules/platform/con-controller-overview-rbac.adoc b/downstream/modules/platform/con-controller-overview-rbac.adoc new file mode 100644 index 0000000000..0d9b878446 --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-rbac.adoc @@ -0,0 +1,21 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-rbac_{context}"] + += Simplified role-based access control and auditing +With {ControllerName} you can: + +* Grant permissions to perform a specific task to different teams or explicit users through _role-based access control_ (RBAC). +Example tasks include viewing, creating, or modifying a file. +* Keep some projects private, while enabling some users to edit inventories, and others to run playbooks against certain systems, either in check (dry run) or live mode. +* Enable certain users to use credentials without exposing the credentials to them. + +{ControllerNameStart} records the history of operations and who made them, including objects edited and jobs launched. + +If you want to give any user or team permissions to use a job template, you can assign permissions directly on the job template. Credentials are full objects in the {ControllerName} RBAC system, and can be assigned to many users or teams for use. + +{ControllerNameStart} includes an _auditor_ type. A system-level auditor can see all aspects of the system's automation, but does not have permission to run or change automation. +An auditor is useful for a service account that scrapes automation information from the REST API. + +.Additional resources
+* For more information about user roles, see link:{URLCentralAuth}/gw-managing-access[Managing access with role based access control]. diff --git a/downstream/modules/platform/con-controller-overview-remote-exec.adoc b/downstream/modules/platform/con-controller-overview-remote-exec.adoc new file mode 100644 index 0000000000..b18ca7cb6c --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-remote-exec.adoc @@ -0,0 +1,9 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-remote-exec_{context}"] + += Remote command execution +Use remote command execution to perform a simple task, such as adding a single user, updating a single security vulnerability, or restarting a failing service. +Any task that you can describe as a single Ansible play can be run on a host or group of hosts in your inventory. +You can manage your systems quickly and easily. +Because of an RBAC engine and detailed audit logging, you know which user has completed a specific task. diff --git a/downstream/modules/platform/con-controller-overview-secret-management.adoc b/downstream/modules/platform/con-controller-overview-secret-management.adoc new file mode 100644 index 0000000000..8beb452ba6 --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-secret-management.adoc @@ -0,0 +1,6 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-secret-management_{context}"] + += Secret management system +With a secret management system, external credentials are stored and supplied for use in {ControllerName} so you need not provide them directly.
diff --git a/downstream/modules/platform/con-controller-overview-tracking.adoc b/downstream/modules/platform/con-controller-overview-tracking.adoc new file mode 100644 index 0000000000..ea00ae49b1 --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-tracking.adoc @@ -0,0 +1,7 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-tracking_{context}"] + += System tracking +You can collect facts by using the fact caching feature. +For more information, see xref:controller-fact-caching[Fact Caching]. \ No newline at end of file diff --git a/downstream/modules/platform/con-controller-overview-virtual-envs.adoc b/downstream/modules/platform/con-controller-overview-virtual-envs.adoc new file mode 100644 index 0000000000..56b452fa80 --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-virtual-envs.adoc @@ -0,0 +1,6 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-virtual-envs_{context}"] + += Custom virtual environments +With custom Ansible environment support, you can have different Ansible environments and specify custom paths for different teams and jobs. \ No newline at end of file diff --git a/downstream/modules/platform/con-controller-overview-workflow-enhancements.adoc b/downstream/modules/platform/con-controller-overview-workflow-enhancements.adoc new file mode 100644 index 0000000000..13a58e91bd --- /dev/null +++ b/downstream/modules/platform/con-controller-overview-workflow-enhancements.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-controller-overview-workflow-enhancements_{context}"] + += Workflow enhancements +To model your complex provisioning, deployment, and orchestration workflows, you can use {ControllerName} expanded workflows in several ways: + +* *Inventory overrides for Workflows* You can override an inventory across a workflow at workflow definition time, or at launch time. +Use {ControllerName} to define your application deployment workflows, and then re-use them in many environments. +* *Convergence nodes for Workflows* When modeling complex processes, you must sometimes wait for many steps to finish before proceeding. +{ControllerNameStart} workflows can replicate this; workflow steps can wait for any number of earlier workflow steps to complete properly before proceeding. +* *Workflow Nesting* You can re-use individual workflows as components of a larger workflow. +Examples include combining provisioning and application deployment workflows into a single workflow. +* *Workflow Pause and Approval* You can build workflows containing approval nodes that require user intervention. +This makes it possible to pause workflows in between playbooks so that a user can give approval (or denial) for continuing on to the next step in the workflow. + +For more information, see xref:controller-workflows[Workflows in {ControllerName}].
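+
+As a hedged sketch of what such a workflow can look like when defined outside the UI, the following uses the `awx.awx.workflow_job_template` module from the awx.awx collection; the organization, template names, and node identifiers are hypothetical, and both job templates are assumed to already exist:
+
+[source,yaml]
+----
+# Illustrative only: chains two assumed job templates so that
+# "deploy-app" runs only when "provision-servers" succeeds.
+- hosts: localhost
+  gather_facts: false
+  tasks:
+    - name: Create a provision-and-deploy workflow
+      awx.awx.workflow_job_template:
+        name: provision-and-deploy
+        organization: Default
+        workflow_nodes:
+          - identifier: provision
+            unified_job_template:
+              name: provision-servers
+              type: job_template
+            related:
+              success_nodes:
+                - identifier: deploy
+          - identifier: deploy
+            unified_job_template:
+              name: deploy-app
+              type: job_template
+----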
\ No newline at end of file diff --git a/downstream/modules/platform/con-controller-playbook-access-info-sharing.adoc b/downstream/modules/platform/con-controller-playbook-access-info-sharing.adoc index 599dd9be7b..c5f98793fd 100644 --- a/downstream/modules/platform/con-controller-playbook-access-info-sharing.adoc +++ b/downstream/modules/platform/con-controller-playbook-access-info-sharing.adoc @@ -1,6 +1,8 @@ -[id="con-controller-playbook-access-info-sharing"] +:_mod-docs-content-type: CONCEPT -= Playbook Access and Information Sharing +[id="con-controller-playbook-access-info-sharing_{context}"] + += Playbook access and information sharing {ControllerNameStart}'s use of automation {ExecEnvShort}s and Linux containers prevents playbooks from reading files outside of their project directory. diff --git a/downstream/modules/platform/con-controller-playbook-run-jobs.adoc b/downstream/modules/platform/con-controller-playbook-run-jobs.adoc index 08fda30a12..05dfcfd56e 100644 --- a/downstream/modules/platform/con-controller-playbook-run-jobs.adoc +++ b/downstream/modules/platform/con-controller-playbook-run-jobs.adoc @@ -1,11 +1,13 @@ -[id="controller-playbook-run-jobs"] +:_mod-docs-content-type: CONCEPT + +[id="controller-playbook-run-jobs_{context}"] = Playbook run jobs When a playbook is executed, the results display in the *Output* tab. If used, the Ansible CLI displays the same information. This can be useful for debugging. -image::ug-results-for-example-job.png[Results for example job] +//image::ug-results-for-example-job.png[Results for example job] The events summary displays the following events that are run as part of this playbook: @@ -16,9 +18,9 @@ The events summary displays the following events that are run as part of this pl image::ug-jobs-events-summary.png[Job events summary] -Use the icons next to the events to relaunch (image:rightrocket.png[Rightrocket,15,15]), download (image:download.png[Download,15,15]) the job output, or delete (image:delete-button.png[Delete,15,15]) the job. +You can select btn:[Relaunch job] or btn:[Cancel job], download image:download.png[Download,15,15] the job output, or delete image:delete-button.png[Delete,15,15] the job. Hover over a section of the host status bar in the *Output* view and the number of hosts associated with that status displays. The output for a playbook job is also available after launching a job from the *Jobs* tab of its *Jobs Templates* page. -View its host details by clicking on the line item tasks in the output. +View its host details by clicking the line item tasks in the output. diff --git a/downstream/modules/platform/con-controller-project-revision-behavior.adoc b/downstream/modules/platform/con-controller-project-revision-behavior.adoc index 334f8adc30..ea23a4a0d0 100644 --- a/downstream/modules/platform/con-controller-project-revision-behavior.adoc +++ b/downstream/modules/platform/con-controller-project-revision-behavior.adoc @@ -1,16 +1,18 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-project-revision-behavior"] = Project revision behavior -During a project update, the revision of the default branch (specified in the *SCM Branch* field of the project) is stored when updated. -If providing a non-default *SCM Branch* (not a commit hash or tag) in a job, the newest revision is pulled from the source control remote immediately before the job starts. -This revision is shown in the *Source Control Revision* field of the job and its project update.
+During a project update, the revision of the default branch (specified in the *Source control branch* field of the project) is stored. +If you provide a non-default *Source control branch* (not a commit hash or tag) in a job, the newest revision is pulled from the source control remote immediately before the job starts. +This revision is shown in the *Source control revision* field of the job and its project update. -image::ug-output-branch-override.png[Jobs output override example] +//image::ug-output-branch-override.png[Jobs output override example] As a result, offline job runs are impossible for non-default branches. To ensure that a job is running a static version from source control, use tags or commit hashes. Project updates do not save all branches, only the project default branch. -The *SCM Branch* field is not validated, so the project must update to assure it is valid. -If this field is provided or prompted for, the *Playbook* field of job templates is not validated, and you have to launch the job template in order to verify presence of the expected playbook. +The *Source control branch* field is not validated, so the project must update to ensure that it is valid. +If this field is provided or prompted for, the *Playbook* field of job templates is not validated, and you must launch the job template to verify the presence of the expected playbook. diff --git a/downstream/modules/platform/con-controller-provisioning-callbacks.adoc b/downstream/modules/platform/con-controller-provisioning-callbacks.adoc index 9bacaeefa0..73c7670402 100644 --- a/downstream/modules/platform/con-controller-provisioning-callbacks.adoc +++ b/downstream/modules/platform/con-controller-provisioning-callbacks.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-provisioning-callbacks"] = Provisioning Callbacks diff --git a/downstream/modules/platform/con-controller-relaunch-job-template.adoc b/downstream/modules/platform/con-controller-relaunch-job-template.adoc index cc5d0bee5a..8534ca19a8 100644 --- a/downstream/modules/platform/con-controller-relaunch-job-template.adoc +++ b/downstream/modules/platform/con-controller-relaunch-job-template.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-relaunch-job-template"] = Relaunch a job template diff --git a/downstream/modules/platform/con-controller-remove-access-credentials.adoc b/downstream/modules/platform/con-controller-remove-access-credentials.adoc index 24edeb0ec3..ebb29def78 100644 --- a/downstream/modules/platform/con-controller-remove-access-credentials.adoc +++ b/downstream/modules/platform/con-controller-remove-access-credentials.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-remove-access-credentials"] = Remove user access to credentials diff --git a/downstream/modules/platform/con-controller-resource-determination-capacity.adoc b/downstream/modules/platform/con-controller-resource-determination-capacity.adoc index 9888b2ddec..f6e9512b32 100644 --- a/downstream/modules/platform/con-controller-resource-determination-capacity.adoc +++ b/downstream/modules/platform/con-controller-resource-determination-capacity.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-resource-determination-for-capacity-algorithm"] = Resource determination for capacity algorithm diff --git a/downstream/modules/platform/con-controller-restore-different-cluster.adoc b/downstream/modules/platform/con-controller-restore-different-cluster.adoc index 65668f8620..59b072b552 100644 ---
a/downstream/modules/platform/con-controller-restore-different-cluster.adoc +++ b/downstream/modules/platform/con-controller-restore-different-cluster.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-restore-different-cluster"] = Restore to a different cluster diff --git a/downstream/modules/platform/con-controller-role-based-access-controls.adoc b/downstream/modules/platform/con-controller-role-based-access-controls.adoc index c4d713e8f8..e7433da428 100644 --- a/downstream/modules/platform/con-controller-role-based-access-controls.adoc +++ b/downstream/modules/platform/con-controller-role-based-access-controls.adoc @@ -1,16 +1,19 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-role-based-access-controls"] = Role-based access controls +//Not sure whether this is still true. To edit and delete a workflow job template, you must have the administrator role. To create a workflow job template, you must be an organization administrator or a system administrator. -However, you can run a workflow job template that contains job templates that you do not have permissions for. +However, you can run a workflow job template that includes job templates for which you do not have permissions. System administrators can create a blank workflow and then grant an `admin_role` to a low-level user, after which they can delegate more access and build the graph. You must have `execute` access to a job template to add it to a workflow job template. You can also perform other tasks, such as making a duplicate copy or re-launching a workflow, depending on which permissions are granted to a user. You must have permissions to all the resources used in a workflow, such as job templates, before relaunching or making a copy. -For more information, see xref:con-controller-rbac[Role-based access controls]. +For more information, see link:{URLCentralAuth}/gw-managing-access[Managing access with role based access control]. -For more information on performing the tasks described in this section, see the link:http://docs.ansible.com/automation-controller/4.4/html/administration/index.html#ag-start[Administration Guide]. +For more information about performing the tasks described, see xref:controller-workflow-job-templates[Workflow job templates]. diff --git a/downstream/modules/platform/con-controller-scm-inventory-jobs.adoc b/downstream/modules/platform/con-controller-scm-inventory-jobs.adoc index 3e14b09a01..67fa5affe7 100644 --- a/downstream/modules/platform/con-controller-scm-inventory-jobs.adoc +++ b/downstream/modules/platform/con-controller-scm-inventory-jobs.adoc @@ -1,9 +1,13 @@ -[id="controller-scm-inventory-jobs"] +:_mod-docs-content-type: CONCEPT + +[id="controller-scm-inventory-jobs_{context}"] = SCM inventory jobs When an inventory sourced from an SCM, for example git, is executed, the results are displayed in the *Output* tab. If used, the Ansible CLI displays the same information. This can be useful for debugging. -Use the icons in the navigation menu to relaunch (image:rightrocket.png[Rightrocket,15,15]), download (image:download.png[Download,15,15]) the job output, or delete (image:delete-button.png[Delete,15,15]) the job. -image::ug-results-for-scm-job.png[Results for SCM job] +// For AAP-45084, I need to confirm if the latest env shows the following options: +Use the navigation menu to select btn:[Relaunch job] or btn:[Cancel job], download image:download.png[Download,15,15] the job output, or delete image:delete-button.png[Delete,15,15] the job.
+ +//image::ug-results-for-scm-job.png[Results for SCM job] diff --git a/downstream/modules/platform/con-controller-secret-handling.adoc b/downstream/modules/platform/con-controller-secret-handling.adoc index f1f72783f8..e5ce56ab1e 100644 --- a/downstream/modules/platform/con-controller-secret-handling.adoc +++ b/downstream/modules/platform/con-controller-secret-handling.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-secret-handling"] = Secret handling diff --git a/downstream/modules/platform/con-controller-settings.adoc b/downstream/modules/platform/con-controller-settings.adoc index abe9eaa377..6198e2d017 100644 --- a/downstream/modules/platform/con-controller-settings.adoc +++ b/downstream/modules/platform/con-controller-settings.adoc @@ -1,16 +1,19 @@ +:_mod-docs-content-type: CONCEPT + [id="con-controller-settings"] = The Settings menu -Configure global and system-level settings using the *Settings* menu. -The *Settings* menu provides access to {ControllerName} configuration settings. - -The *Settings* page enables administrators to configure the following: +You can configure some {ControllerName} options by using the *Settings* menu of the user interface. -* Authentication -* Jobs -* System-level attributes -* Customize the UI -* Product license information +The *Settings* page enables an administrator to configure the following: -//include::settings-menu.adoc[] \ No newline at end of file +* link:{URLCentralAuth}/assembly-gw-settings#proc-controller-configure-subscriptions[Configuring subscriptions] +* link:{URLCentralAuth}/assembly-gw-settings#proc-settings-platform-gateway[{GatewayStart}] +* link:{URLCentralAuth}/assembly-gw-settings#proc-settings-user-preferences[User preferences] +//* link:{BaseURL}/documentation/red_hat_ansible_automation_platform/{PlatformVers}/html/configuring_automation_execution/index#proc-controller-configure-subscriptions[System] +* link:{URLControllerAdminGuide}/controller-config#controller-configure-jobs[Configuring jobs] +* link:{URLControllerAdminGuide}/assembly-controller-logging-aggregation#proc-controller-set-up-logging[Setting up logging] +* link:{URLCentralAuth}/assembly-gw-settings#proc-settings-troubleshooting[Troubleshooting options] +// [emcwhinn] Analytics has its own section in 2.5 UI +//* link:{BaseURL}/documentation/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/assembly-ag-controller-config#proc-controller-configure-analytics[{Analytics}] diff --git a/downstream/modules/platform/con-controller-signing-your-project.adoc b/downstream/modules/platform/con-controller-signing-your-project.adoc index f64564892b..7f0bd51ee2 100644 --- a/downstream/modules/platform/con-controller-signing-your-project.adoc +++ b/downstream/modules/platform/con-controller-signing-your-project.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-controller-signing-your-project"] = Sign a project diff --git a/downstream/modules/platform/con-controller-source-tree-copy.adoc b/downstream/modules/platform/con-controller-source-tree-copy.adoc index a614edf069..33af8274b4 100644 --- a/downstream/modules/platform/con-controller-source-tree-copy.adoc +++ b/downstream/modules/platform/con-controller-source-tree-copy.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-source-tree-copy"] = Source tree copy behavior diff --git a/downstream/modules/platform/con-controller-start-stop.adoc b/downstream/modules/platform/con-controller-start-stop.adoc index
42a723a35d..645e8f0133 100644 --- a/downstream/modules/platform/con-controller-start-stop.adoc +++ b/downstream/modules/platform/con-controller-start-stop.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-start-stop"] External databases must be explicitly managed by the administrator. diff --git a/downstream/modules/platform/con-controller-surveys.adoc b/downstream/modules/platform/con-controller-surveys.adoc index f41113f413..65c5afe4b4 100644 --- a/downstream/modules/platform/con-controller-surveys.adoc +++ b/downstream/modules/platform/con-controller-surveys.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-surveys-in-job-templates"] = Surveys in job templates @@ -7,9 +9,10 @@ Surveys set extra variables for the playbook similar to *Prompt for Extra Variab Surveys also permit for validation of user input. Select the *Survey* tab to create a survey. -.Example +*Example* + You can use surveys for several situations. For example, operations want to give developers a "push to stage" button that they can run without advance knowledge of Ansible. When launched, this task could prompt for answers to questions such as "What tag should we release?". -You can ask many types of questions, including multiple-choice questions. +You can ask many types of questions, including multiple-choice questions. \ No newline at end of file diff --git a/downstream/modules/platform/con-controller-system-level-monitoring.adoc b/downstream/modules/platform/con-controller-system-level-monitoring.adoc index 4f3d949b2e..d318cf851a 100644 --- a/downstream/modules/platform/con-controller-system-level-monitoring.adoc +++ b/downstream/modules/platform/con-controller-system-level-monitoring.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-controller-system-level-monitoring"] = System level monitoring diff --git a/downstream/modules/platform/con-controller-tuning.adoc b/downstream/modules/platform/con-controller-tuning.adoc index d4148b1fa3..f8a47e2aae 100644 --- a/downstream/modules/platform/con-controller-tuning.adoc +++ b/downstream/modules/platform/con-controller-tuning.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-tuning"] = {ControllerNameStart} tuning diff --git a/downstream/modules/platform/con-controller-understand-architecture.adoc b/downstream/modules/platform/con-controller-understand-architecture.adoc index 7319c94cc7..a54f6f12d9 100644 --- a/downstream/modules/platform/con-controller-understand-architecture.adoc +++ b/downstream/modules/platform/con-controller-understand-architecture.adoc @@ -1,9 +1,11 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-understand-architecture"] = Understand the architecture of {PlatformNameShort} and {ControllerName} {PlatformNameShort} and {ControllerName} comprise a general-purpose, declarative automation platform. -That means that once an Ansible playbook is launched (by {ControllerName}, or directly on the command line), the playbook, inventory, and credentials provided to Ansible are considered to be the source of truth. +That means that when an Ansible Playbook is launched (by {ControllerName}, or directly on the command line), the playbook, inventory, and credentials provided to Ansible are considered to be the source of truth. 
If you want policies around external verification of specific playbook content, job definition, or inventory contents, you must complete these processes before the automation is launched, either by the {ControllerName} web UI, or the {ControllerName} API. The use of source control, branching, and mandatory code review is best practice for Ansible automation. diff --git a/downstream/modules/platform/con-controller-view-completed-jobs.adoc b/downstream/modules/platform/con-controller-view-completed-jobs.adoc index 654af1518e..cb8dd66d6d 100644 --- a/downstream/modules/platform/con-controller-view-completed-jobs.adoc +++ b/downstream/modules/platform/con-controller-view-completed-jobs.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-view-completed-jobs"] = View completed jobs @@ -5,9 +7,10 @@ The *Jobs* tab provides the list of job templates that have run. Click the expand icon next to each job to view the following details: -* Status -* ID and name +* ID and name +* Status * Type of job +* Duration of run * Time started and completed * Who started the job and which template, inventory, project, and credential were used. diff --git a/downstream/modules/platform/con-controller-view-completed-workflow-jobs.adoc b/downstream/modules/platform/con-controller-view-completed-workflow-jobs.adoc index 7df184f792..d6ab8ec11f 100644 --- a/downstream/modules/platform/con-controller-view-completed-workflow-jobs.adoc +++ b/downstream/modules/platform/con-controller-view-completed-workflow-jobs.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-view-completed-workflow-jobs"] = View completed workflow jobs diff --git a/downstream/modules/platform/con-controller-work-with-notifications.adoc b/downstream/modules/platform/con-controller-work-with-notifications.adoc index 88f92c846d..fc7a87d846 100644 --- a/downstream/modules/platform/con-controller-work-with-notifications.adoc +++ b/downstream/modules/platform/con-controller-work-with-notifications.adoc @@ -1,14 +1,16 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-work-with-notifications"] = Work with notifications From the navigation panel, select {MenuAEAdminJobNotifications}. -This enables you to review any notification integrations you have set up and their statuses, if they have run. +You can review any notification integrations you have set up and their statuses, if they have run. -image::ug-job-template-completed-notifications-view.png[Job template completed notifications] +//image::ug-job-template-completed-notifications-view.png[Job template completed notifications] Use the toggles to enable or disable the notifications to use with your particular template. -For more information, see xref:controller-enable-disable-notifications[Enable and Disable Notifications]. +For more information, see xref:controller-enable-disable-notifications[Enable and disable notifications]. If no notifications have been set up, click btn:[Add notifier] to create a new notification. -For more information about configuring various notification types and extended messaging, see xref:controller-notification-types[Notification Types]. +For more information about configuring various notification types and extended messaging, see xref:controller-notification-types[Notification types]. 
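+
+For illustration, a notifier can also be created outside the UI with the `awx.awx.notification_template` module from the awx.awx collection; the notifier name, channel, and token variable below are placeholders, not defaults:
+
+[source,yaml]
+----
+# Illustrative only: creates a Slack notifier that a template can then enable
+- hosts: localhost
+  gather_facts: false
+  tasks:
+    - name: Create a Slack notifier
+      awx.awx.notification_template:
+        name: slack-job-status
+        organization: Default
+        notification_type: slack
+        notification_configuration:
+          channels:
+            - "#automation-status"
+          token: "{{ slack_token }}"   # assumed to be supplied securely
+        state: present
+----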
diff --git a/downstream/modules/platform/con-controller-work-with-permissions.adoc b/downstream/modules/platform/con-controller-work-with-permissions.adoc index 8ed4a03f9d..27c7224317 100644 --- a/downstream/modules/platform/con-controller-work-with-permissions.adoc +++ b/downstream/modules/platform/con-controller-work-with-permissions.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-work-with-permissions"] = Work with permissions diff --git a/downstream/modules/platform/con-controller-workflow-job-surveys.adoc b/downstream/modules/platform/con-controller-workflow-job-surveys.adoc index 6a1e4dbee5..3f2a0ca8d8 100644 --- a/downstream/modules/platform/con-controller-workflow-job-surveys.adoc +++ b/downstream/modules/platform/con-controller-workflow-job-surveys.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-surveys-in-workflow-job-templates"] = Surveys in workflow job templates diff --git a/downstream/modules/platform/con-controller-workflow-notifications.adoc b/downstream/modules/platform/con-controller-workflow-notifications.adoc index 5d34e0b172..8b87908da4 100644 --- a/downstream/modules/platform/con-controller-workflow-notifications.adoc +++ b/downstream/modules/platform/con-controller-workflow-notifications.adoc @@ -1,5 +1,7 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-workflow-notifications"] = Work with notifications -For information on working with notifications in workflow job templates, see xref:controller-work-with-notifications[Work with notifications]. +For information about working with notifications in workflow job templates, see xref:controller-work-with-notifications[Work with notifications]. diff --git a/downstream/modules/platform/con-controller-workflow-scenarios.adoc b/downstream/modules/platform/con-controller-workflow-scenarios.adoc index 4f978f6270..7f20fd6fb0 100644 --- a/downstream/modules/platform/con-controller-workflow-scenarios.adoc +++ b/downstream/modules/platform/con-controller-workflow-scenarios.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-workflow-scenarios"] = Workflow scenarios and considerations diff --git a/downstream/modules/platform/con-controller-workflow-states.adoc b/downstream/modules/platform/con-controller-workflow-states.adoc index d378e4db1a..699da79cbd 100644 --- a/downstream/modules/platform/con-controller-workflow-states.adoc +++ b/downstream/modules/platform/con-controller-workflow-states.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-workflow-states"] = Workflow states diff --git a/downstream/modules/platform/con-controller-workflow-visualizer.adoc b/downstream/modules/platform/con-controller-workflow-visualizer.adoc index 941faea527..4e41f93ffa 100644 --- a/downstream/modules/platform/con-controller-workflow-visualizer.adoc +++ b/downstream/modules/platform/con-controller-workflow-visualizer.adoc @@ -1,6 +1,8 @@ +:_mod-docs-content-type: CONCEPT + [id="controller-workflow-visualizer"] = Workflow visualizer The Workflow Visualizer provides a graphical way of linking together job templates, workflow templates, project syncs, and inventory syncs to build a workflow template. -Before you build a workflow template, see the xref:controller-workflows[Workflows] section for considerations associated with various scenarios on parent, child, and sibling nodes. 
+Before you build a workflow template, see the xref:controller-workflows[Workflows in {ControllerName}] section for considerations associated with various scenarios on parent, child, and sibling nodes. diff --git a/downstream/modules/platform/con-declaring-variables.adoc b/downstream/modules/platform/con-declaring-variables.adoc index f2ce09e0f0..a8f1d69114 100644 --- a/downstream/modules/platform/con-declaring-variables.adoc +++ b/downstream/modules/platform/con-declaring-variables.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-declaring_variables"] = Rules for declaring variables in inventory files @@ -31,4 +33,6 @@ The YAML inventory plugin processes variable values consistently and correctly. If a parameter value in the Ansible inventory file contains special characters, such as #, {, or }, you must double-escape the value (that is, enclose the value in both single and double quotation marks). -For example, to use `mypasswordwith#hashsigns` as a value for the variable `pg_password`, declare it as `pg_password='"mypasswordwith#hashsigns"'` in the Ansible host inventory file. \ No newline at end of file +For example, to use `mypasswordwith#hashsigns` as a value for the variable `pg_password`, declare it as `pg_password='"mypasswordwith#hashsigns"'` in the Ansible host inventory file. + +include::../aap-common/external-site-disclaimer.adoc[] \ No newline at end of file diff --git a/downstream/modules/platform/con-edge-manager-agent-service.adoc b/downstream/modules/platform/con-edge-manager-agent-service.adoc new file mode 100644 index 0000000000..68be88a8b7 --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-agent-service.adoc @@ -0,0 +1,40 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-agent-service"] + += {RedHatEdge} agent and service + +The {RedHatEdge} agent is a process running on each managed device that periodically communicates with the {RedHatEdge} service. +The agent is responsible for the following tasks: + +* Enrolling devices into the service +* Periodically checking with the service for changes in the device specification, such as changes to the operating system, configuration, and applications +* Applying any updates independently from the service +* Reporting status of the device and the applications + +The {RedHatEdge} service is responsible for the following tasks: + +* Authenticating and authorizing users and agents +* Enrolling devices +* Managing device inventory +* Reporting status from individual devices or fleets + +The service also communicates with a database that stores the device inventory and the target device configuration. +When communicating with the service, the agent polls the service for changes in the configuration. +If the agent detects that the current configuration deviates from the target configuration, the agent attempts to apply the changes to the device. + +When the agent receives a new target configuration from the service, the agent does the following tasks: + +. To avoid depending on network connectivity during the update, the agent downloads all required resources, such as the operating system image and application container images, over the network to disk. +. The agent updates the operating system image by delegating to `bootc`. +. The agent updates configuration files on the file system of the device by overlaying a set of files that the service sends to the device. +. If necessary, the agent reboots into the new operating system.
Otherwise, the agent signals system services and applications to reload the updated configuration. +. The agent updates applications running on Podman. + +If the update fails or the system does not come back online after rebooting, the agent automatically rolls back to the earlier operating system image and configuration. + +[NOTE] +==== +You can keep fleet definitions in Git. +The {RedHatEdge} service periodically syncs the fleet definitions in Git with the database. +==== diff --git a/downstream/modules/platform/con-edge-manager-api-server.adoc b/downstream/modules/platform/con-edge-manager-api-server.adoc new file mode 100644 index 0000000000..d00c81e57a --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-api-server.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-api-server"] + += {RedHatEdge} API server + +The API server is a core part of the {RedHatEdge} service that enables users and agents to communicate with the service. + +The API server exposes the following endpoints: + +User-facing API endpoint:: Users can connect to the user-facing API endpoint from the CLI or the web console. +Users must authenticate on the {Gateway} to obtain a JSON Web Token (JWT) to make HTTPS requests. + +Agent-facing API endpoint:: Agents connect to the agent-facing endpoint, which is mTLS-protected. +The service authenticates devices by using X.509 client certificates. + +The {RedHatEdge} service also communicates with various external systems to authenticate and authorize users, get mTLS certificates signed, or query configuration for managed devices. diff --git a/downstream/modules/platform/con-edge-manager-build-bootc.adoc b/downstream/modules/platform/con-edge-manager-build-bootc.adoc new file mode 100644 index 0000000000..fa47360ffc --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-build-bootc.adoc @@ -0,0 +1,24 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-build-bootc"] + += Building a _bootc_ operating system image for the {RedHatEdge} + +To prepare your device to be managed by the {RedHatEdge}, build a `bootc` operating system image that has the {RedHatEdge} agent. +Then build an operating system disk image for your devices.
+ +For more information, see the following sections: + +* link:{URLEdgeManager}/assembly-edge-manager-images#edge-manager-install-CLI[Installing the Red Hat Edge Manager CLI] + +* link:{URLEdgeManager}/assembly-edge-manager-images#edge-manager-request-cert[Optional: Requesting an enrollment certificate for early binding] + +* link:{URLEdgeManager}/assembly-edge-manager-images#edge-manager-image-pullsecrets[Optional: Using image pull secrets] + +* link:{URLEdgeManager}/assembly-edge-manager-images#edge-manager-build-bootc-image[Building the operating system image with _bootc_] + +* link:{URLEdgeManager}/assembly-edge-manager-images#edge-manager-build-sign-image[Signing and publishing the _bootc_ operating system image by using Sigstore] + +* link:{URLEdgeManager}/assembly-edge-manager-images#edge-manager-build-disk-image[Building the operating system disk image] + +* link:{URLEdgeManager}/assembly-edge-manager-images#edge-manager-sign-disk-image[Optional: Signing and publishing the operating system disk image to an Open Container Initiative registry] diff --git a/downstream/modules/platform/con-edge-manager-build-prereq.adoc b/downstream/modules/platform/con-edge-manager-build-prereq.adoc new file mode 100644 index 0000000000..92cbbb9890 --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-build-prereq.adoc @@ -0,0 +1,10 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-build-prereq"] + += Prerequisites + +See the following prerequisites for building a `bootc` operating system image: + +* Install `podman` version 5.0 or later and `skopeo` version 1.14 or later. See link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/building_running_and_managing_containers/index#proc_getting-container-tools_assembly_starting-with-containers[Getting container tools]. +* Install `bootc-image-builder`. See link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/using_image_mode_for_rhel_to_build_deploy_and_manage_operating_systems/index#installing-bootc-image-builder_creating-bootc-compatible-base-disk-images-with-bootc-image-builder[Installing bootc-image-builder]. diff --git a/downstream/modules/platform/con-edge-manager-buildtime-runtime.adoc b/downstream/modules/platform/con-edge-manager-buildtime-runtime.adoc new file mode 100644 index 0000000000..a3e683eeed --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-buildtime-runtime.adoc @@ -0,0 +1,15 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-buildtime-runtime"] + += Build-time configuration over dynamic runtime configuration + +Add the configuration to the operating system image at build time. +Adding the configuration at build time ensures that the configurations are tested, distributed, and updated together. +In cases where build-time configuration is not feasible or desirable, you can instead configure devices dynamically at runtime with the {RedHatEdge}. + +Dynamic runtime configuration is preferable in the following cases: + +* You have a configuration that is deployment or site-specific, such as a hostname or a site-specific network credential. +* You have secrets that are not secure to distribute with the image. +* You have application workloads that need to be added, updated, or deleted without a reboot, or that change on a faster cadence than the operating system.
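+
+To make the build-time versus runtime distinction concrete, the following is a minimal sketch of a device manifest that applies a site-specific hostname at runtime through an inline configuration provider. The sketch is illustrative only: the `apiVersion`, `inline`, `path`, and `content` field names are assumptions, not a confirmed schema.
+
+[source,yaml]
+----
+# Illustrative sketch only; field names are assumptions, not a confirmed schema.
+apiVersion: flightctl.io/v1alpha1
+kind: Device
+metadata:
+  name: factory-berlin-01
+spec:
+  config:
+    # Write a site-specific file at runtime, without rebuilding
+    # the operating system image.
+    - name: site-hostname
+      inline:
+        - path: /etc/hostname
+          content: factory-berlin-01
+----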
diff --git a/downstream/modules/platform/con-edge-manager-config-providers.adoc b/downstream/modules/platform/con-edge-manager-config-providers.adoc new file mode 100644 index 0000000000..0278aab8fc --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-config-providers.adoc @@ -0,0 +1,20 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-config-providers"] + += Configuration providers + +You can provide configuration from many sources, called configuration providers, in the {RedHatEdge}. +The {RedHatEdge} currently supports the following configuration providers: + +Git Config Provider:: Fetches device configuration files from a Git repository. +Kubernetes Secret Provider:: Fetches a secret from a Kubernetes cluster and writes the content to the file system of the device. +HTTP Config Provider:: Fetches device configuration files from an HTTP(S) endpoint. +Inline Config Provider:: Allows specifying device configuration files inline in the device manifest without querying external systems. + +Read more about the configuration providers in the following sections: + +* xref:edge-manager-config-git-repo[Configuration from a Git repository] +* xref:edge-manager-k8s-cluster[Secrets from a Kubernetes cluster] +* xref:edge-manager-config-http[Configuration from an HTTP server] +* xref:edge-manager-config-inline[Configuration inline in the device specification] diff --git a/downstream/modules/platform/con-edge-manager-device-enroll.adoc b/downstream/modules/platform/con-edge-manager-device-enroll.adoc new file mode 100644 index 0000000000..d5c2a4a13b --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-device-enroll.adoc @@ -0,0 +1,21 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-device-enroll"] + += Device enrollment + +You must enroll devices to a {RedHatEdge} service before you can start managing them. +The {RedHatEdge} agent that runs on a device handles the device enrollment. + +When the agent starts on a device, the agent searches for the configuration in the `/etc/flightctl/config.yaml` file. +The file defines the following configurations: + +* The enrollment endpoint, which is the {RedHatEdge} service that the agent connects to for enrollment. +* The enrollment certificate, which is the X.509 client certificate and key that the agent only uses to securely request enrollment from the {RedHatEdge} service. +* *Optional*: Any additional agent configuration. + +The agent starts the enrollment process by searching for the enrollment endpoint, the {RedHatEdge} service, defined in the configuration file. +After establishing a secure, mTLS-protected network connection with the service, the agent submits an enrollment request to the service. + +The request includes a description of the hardware and operating system of the device, an X.509 certificate signing request, and the cryptographic identity of the device. The enrollment request must be approved by an authorized user. +After the request is approved, the device becomes trusted and managed by the {RedHatEdge} service.
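+
+As an illustration of the configuration described above, an `/etc/flightctl/config.yaml` file might look like the following minimal sketch. The key names are assumptions for illustration only; the agent that ships with your {RedHatEdge} version defines the authoritative schema.
+
+[source,yaml]
+----
+# Illustrative sketch of /etc/flightctl/config.yaml; key names are assumptions.
+enrollment-service:
+  service:
+    # The enrollment endpoint: the service the agent contacts to request enrollment.
+    server: https://agent-api.edge-manager.example.com:7443
+  authentication:
+    # The X.509 enrollment certificate and key, used only to secure the
+    # network connection for the enrollment request.
+    client-certificate: /etc/flightctl/certs/enrollment.crt
+    client-key: /etc/flightctl/certs/enrollment.key
+----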
diff --git a/downstream/modules/platform/con-edge-manager-device-selection-strat.adoc b/downstream/modules/platform/con-edge-manager-device-selection-strat.adoc new file mode 100644 index 0000000000..e7a97bcdd4 --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-device-selection-strat.adoc @@ -0,0 +1,20 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-device-selection-strat"] + += Device selection strategy + +The {RedHatEdge} supports only the `BatchSequence` strategy for device selection. +This strategy defines a stepwise rollout process where devices are added in batches based on specific criteria. +Batches are executed sequentially. +After each batch completes, execution proceeds to the next batch only if the success rate of the previous batch meets or exceeds the configured success threshold. + +The success rate is determined as: + +[literal, options="nowrap" subs="+attributes"] +---- +# of successful rollouts in the batch / # of devices in the batch >= success threshold +---- + +In a batch sequence, the final batch is an implicit batch that is not specified in the batch sequence. +It selects all devices in a fleet that have not been selected by the explicit batches in the sequence. diff --git a/downstream/modules/platform/con-edge-manager-device-targeting.adoc b/downstream/modules/platform/con-edge-manager-device-targeting.adoc new file mode 100644 index 0000000000..57d5c25045 --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-device-targeting.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-device-targeting"] + += Device targeting + +A rollout applies only to devices that belong to a fleet. +Each device can belong to only a single fleet. +Because rollouts are defined at the fleet level, the selection process determines which devices within a fleet participate in a batch rollout based on label criteria. +After processing all batches, all fleet devices are rolled out. + +* *Labels*: Devices with specific metadata labels can be targeted for rollouts. +* *Fleet membership*: Rollouts apply only to devices within the specified fleet. diff --git a/downstream/modules/platform/con-edge-manager-drop-dir.adoc b/downstream/modules/platform/con-edge-manager-drop-dir.adoc new file mode 100644 index 0000000000..7758fd8d88 --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-drop-dir.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-drop-dir"] + += Drop-in directories + +Use drop-in directories to add, replace, or remove configuration files that the service aggregates. +Do not directly edit your configuration files because doing so can cause deviations from the target configuration. + +[NOTE] +==== +You can identify drop-in directories by the `.d/` at the end of the directory name. +For example, `/etc/containers/certs.d`, `/etc/cron.d`, and `/etc/NetworkManager/conf.d`. +==== diff --git a/downstream/modules/platform/con-edge-manager-enroll-meth.adoc b/downstream/modules/platform/con-edge-manager-enroll-meth.adoc new file mode 100644 index 0000000000..03544224c5 --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-enroll-meth.adoc @@ -0,0 +1,24 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-enroll-meth"] + += Enrollment methods + +You can provision the enrollment endpoint and certificate to the device in the following ways: + +Early binding:: You can build an operating system image that includes the enrollment endpoint and certificate.
+Devices using an early binding image can automatically connect to the defined service to request enrollment, without depending on any provisioning infrastructure. +The devices share the same long-lived X.509 client certificate. +However, in this case, the devices are bound to a specific service and owner. + +Late binding:: +You can define the enrollment endpoint and certificate at provisioning time instead of including them in the operating system image. +Devices using a late binding image are not bound to a single owner or service and can have device-specific, short-lived X.509 client certificates. +However, late binding requires virtualization or bare-metal provisioning infrastructure that can request device-specific enrollment endpoints and certificates from the {RedHatEdge} service and inject them into the provisioned system by using mechanisms such as link:https://cloud-init.io/[cloud-init], link:https://coreos.github.io/ignition/supported-platforms/[Ignition], or link:https://anaconda-installer.readthedocs.io/en/latest/kickstart.html[kickstart]. + +[NOTE] +==== +The enrollment certificate is only used to secure the network connection for submitting an enrollment request. +The enrollment certificate is not involved in the actual verification or approval of the enrollment request. +The enrollment certificate is no longer used with enrolled devices, as the devices rely on device-specific management certificates instead. +==== diff --git a/downstream/modules/platform/con-edge-manager-enroll.adoc b/downstream/modules/platform/con-edge-manager-enroll.adoc new file mode 100644 index 0000000000..359e9d25c1 --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-enroll.adoc @@ -0,0 +1,28 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-enroll"] + += Enroll devices + +To manage your devices with the {RedHatEdge}, you must enroll the devices to the {RedHatEdge} service. + +The first time the {RedHatEdge} agent runs on a device, the agent prepares for the enrollment process by generating a cryptographic key pair. +The cryptographic key pair serves as the unique cryptographic identity of the device. +The key pair consists of a public and a private key. +The private key never leaves the device, so that the device cannot be duplicated or impersonated. + +When the device is not yet enrolled, the agent performs service discovery to find its {RedHatEdge} service instance. +Then, the device establishes a secure, mTLS-protected network connection to the service. +The device uses its X.509 enrollment certificate that the device acquired during image building or device provisioning. +The device submits an enrollment request to the service that includes the following: + +* a description of the device hardware and operating system +* an X.509 Certificate Signing Request which includes the cryptographic identity of the device to obtain the initial management certificate + +The device is not considered trusted and remains quarantined in a device lobby until an authorized user approves or denies the request. 
+ +For more information, see the following sections: + +* xref:edge-manager-device-enroll[Device enrollment] +* xref:edge-manager-request-cert[Optional: Requesting an enrollment certificate for early binding] + diff --git a/downstream/modules/platform/con-edge-manager-labels.adoc b/downstream/modules/platform/con-edge-manager-labels.adoc new file mode 100644 index 0000000000..30556fd5c3 --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-labels.adoc @@ -0,0 +1,42 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-labels"] + += Labels and label selectors + +You can organize your resources by assigning them labels, for example, to record their location, hardware type, or purpose. +The {RedHatEdge} labels follow the same syntax, principles, and operators as Kubernetes labels and label selectors. +You can select devices with labels when viewing the device inventory or applying operations to the devices. + +Labels follow the `key=value` format. +You can use the key to group devices. +For example, if your labels follow the `site=` naming convention, you can group your devices by site. +You can also use labels that only consist of keys. + +Labels must adhere to the following rules to be valid: + +* Keys and values must each be 63 characters or less. +* Keys and values can consist of alphanumeric characters (`a-z`, `A-Z`, `0-9`). +* Keys and values can also contain dashes (`-`), underscores (`_`), and dots (`.`), but not as the first or last character. +* The value can be omitted. + +You can apply labels to devices in the following ways: + +* Define a set of default labels during image building that are automatically applied to all devices during deployment. +* Assign initial labels during enrollment. +* Assign labels post-enrollment. + +When resources are labeled, you can select a subset of devices by creating a label selector. +A label selector is a comma-separated list of labels for selecting devices that have the same set of labels. + +See the following examples: + +|==== +|Example label selector |Selected devices + +|`site=factory-berlin`|All devices with a `site` label key and a `factory-berlin` label value. +|`site!=factory-berlin`|All devices with a `site` label key but where the label value is not `factory-berlin`. +|`site in (factory-berlin,factory-madrid)`|All devices with a `site` label key and where the label value is either `factory-berlin` or `factory-madrid`. +|==== + +For more information about labels and selectors, see link:https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/[Labels and Selectors] in the Kubernetes documentation. diff --git a/downstream/modules/platform/con-edge-manager-limit-device.adoc b/downstream/modules/platform/con-edge-manager-limit-device.adoc new file mode 100644 index 0000000000..25b0ad39da --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-limit-device.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-limit-device"] + += Limit in device selection + +Each batch in the `BatchSequence` strategy might use an optional `limit` parameter to define how many devices should be included in the batch. +You can specify the limit in two ways: + +* *Absolute number*: A fixed number of devices to be selected. +* *Percentage*: The percentage of the total matching device population to be selected. + +** If you provide a `selector` with labels, the percentage is calculated based on the number of devices that match the label criteria within the fleet.
+** If you do not provide a `selector`, the percentage is applied to all devices in the fleet. diff --git a/downstream/modules/platform/con-edge-manager-manage-os-config.adoc b/downstream/modules/platform/con-edge-manager-manage-os-config.adoc new file mode 100644 index 0000000000..00db01873c --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-manage-os-config.adoc @@ -0,0 +1,30 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-manage-os-config"] + += Operating system configuration for edge devices + +You can include an operating system-level host configuration in the image to give maximum consistency and repeatability. +To update the configuration, create a new operating system image and update devices with the new image. + +However, updating devices with a new image can be impractical in the following cases: + +* The configuration is missing in the image. +* The configuration needs to be specific to a device. +* The configuration needs to be updateable at runtime without updating the operating system image and rebooting. + +For these cases, you can declare a set of configuration files that are present on the file system of the device. +The {RedHatEdge} agent applies updates to the configuration files while ensuring that either all files are successfully updated in the file system, or all files are rolled back to their pre-update state. +If you update both the operating system and the configuration set of a device at the same time, the {RedHatEdge} agent updates the operating system first. +It then applies the specified set of configuration files. + +You can also specify a list of configuration sets that the {RedHatEdge} agent applies in sequence. +In case of a conflict, the last applied configuration set is valid. + +[IMPORTANT] +==== +After the {RedHatEdge} agent updates the configuration on the disk, the running applications need to reload the new configuration into memory for the configuration to become effective. +If the update involves a reboot, `systemd` automatically restarts the applications with the new configuration and in the correct order. +If the update does not involve a reboot, many applications can detect changes to their configuration files and automatically reload the files. +When an application does not support change detection, you can use device lifecycle hooks to run scripts or commands if certain conditions are met. +==== diff --git a/downstream/modules/platform/con-edge-manager-os-img-script.adoc b/downstream/modules/platform/con-edge-manager-os-img-script.adoc new file mode 100644 index 0000000000..6c68ed9d9a --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-os-img-script.adoc @@ -0,0 +1,15 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-os-img-script"] + += Operating system images with scripts + +Avoid executing scripts or commands that change the file system. +Either `bootc` or the {RedHatEdge} can overwrite the changed files, which can cause deviations or failed integrity checks. + +Instead, run such scripts or commands during image building so changes are part of the image. +You can also use the configuration management mechanisms of the {RedHatEdge}.
+ +.Additional resources + +* link:https://bootc-dev.github.io/bootc/building/guidance.html[Generic guidance for building images] diff --git a/downstream/modules/platform/con-edge-manager-provisioning-openshift-virt.adoc b/downstream/modules/platform/con-edge-manager-provisioning-openshift-virt.adoc new file mode 100644 index 0000000000..eaeb6e6e46 --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-provisioning-openshift-virt.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-provisioning-openshift-virt"] + += Provision devices with {OCPVShort} + +You can provision a virtual machine on {OCPVShort} by using a QCoW2 container disk image that is hosted on an OCI container registry. + +If your operating system image does not already contain the {RedHatEdge} agent enrollment configuration, you can inject the configuration through the `cloud-init` user data at provisioning. + +.Prerequisites + +* You installed the `flightctl` CLI and logged in to your {RedHatEdge} service instance. +* You installed the `oc` CLI, used it to log in to your OpenShift cluster instance, and changed to the project in which you want to create your virtual machine. diff --git a/downstream/modules/platform/con-edge-manager-provisioning-physical.adoc b/downstream/modules/platform/con-edge-manager-provisioning-physical.adoc new file mode 100644 index 0000000000..5954bd478f --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-provisioning-physical.adoc @@ -0,0 +1,12 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-provisioning-physical"] + += Provision physical devices + +When you build an International Organization for Standardization (ISO) disk image from an operating system image by using the `bootc-image-builder` tool, the image is similar to the RHEL ISOs available for download. +However, your operating system image content is embedded in the ISO disk image. + +To install the ISO disk image to a bare metal system without having access to the network, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_image_mode_for_rhel_to_build_deploy_and_manage_operating_systems/deploying-the-rhel-bootc-images_using-image-mode-for-rhel-to-build-deploy-and-manage-operating-systems#deploying-an-custom-iso-container-image_deploying-the-rhel-bootc-images[Deploying a custom ISO container image] in the {RHEL} documentation. + +To install the ISO disk image through the network, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_image_mode_for_rhel_to_build_deploy_and_manage_operating_systems/deploying-the-rhel-bootc-images_using-image-mode-for-rhel-to-build-deploy-and-manage-operating-systems#deploying-an-iso-bootc-container-over-pxe-boot_deploying-the-rhel-bootc-images[Deploying an ISO _bootc_ image over PXE boot] in the {RHEL} documentation. diff --git a/downstream/modules/platform/con-edge-manager-rollout-device-selection.adoc b/downstream/modules/platform/con-edge-manager-rollout-device-selection.adoc new file mode 100644 index 0000000000..7bb0262f57 --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-rollout-device-selection.adoc @@ -0,0 +1,10 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-device-rollout-device-selection"] + += Rollout device selection + +When performing a rollout by using `flightctl`, you must manage which devices participate in the rollout and how much disruption is acceptable. 
+The device selection process and the rollout disruption budget concept ensure controlled and predictable rollouts. + +The process and configuration for selecting devices during a rollout includes targeting strategies, batch sequencing, and success criteria for controlled software deployment. diff --git a/downstream/modules/platform/con-edge-manager-rollout-disruption.adoc b/downstream/modules/platform/con-edge-manager-rollout-disruption.adoc new file mode 100644 index 0000000000..30602976df --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-rollout-disruption.adoc @@ -0,0 +1,8 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-rollout-disruption"] + += Rollout disruption budget + +A rollout disruption budget defines the acceptable level of service impact during a rollout. +This ensures that a deployment does not take down too many devices at once, maintaining overall system stability. diff --git a/downstream/modules/platform/con-edge-manager-set-up-oauth.adoc b/downstream/modules/platform/con-edge-manager-set-up-oauth.adoc new file mode 100644 index 0000000000..1247d56083 --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-set-up-oauth.adoc @@ -0,0 +1,7 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-set-up-oauth"] + += Set up the OAuth application for {PlatformNameShort} + +You have two options for setting up the OAuth application in {PlatformNameShort}, either manually or automatically in the {PlatformNameShort} UI. diff --git a/downstream/modules/platform/con-edge-manager-update-os.adoc b/downstream/modules/platform/con-edge-manager-update-os.adoc new file mode 100644 index 0000000000..5d34572a60 --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-update-os.adoc @@ -0,0 +1,26 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-update-os"] + += Update the operating system + +You can update the operating system of a device by updating the target operating system image name or version in the device specification. +When the agent communicates with the server, the agent detects the requested update. +Then, the agent automatically starts downloading and verifying the new operating system version in the background. +The {RedHatEdge} agent schedules the actual system update that is performed according to the update policy. +At the scheduled update time, the agent installs the new version without disrupting the currently running operating system. +Finally, the device reboots into the new version. + +The {RedHatEdge} currently supports the following image type and image reference format: + +[width="100%",cols="40%,60%",options="header",] +|=== +|Image Type |Image Reference +|bootc|An OCI image reference to a container registry. Example: `quay.io/flightctl-example/rhel:9.5` +|=== + +During the process, the agent sends status updates to the service. +You can check the update process by viewing the device status. + +For more information, see xref:edge-manager-view-devices[View devices]. + diff --git a/downstream/modules/platform/con-edge-manager-usr-dir.adoc b/downstream/modules/platform/con-edge-manager-usr-dir.adoc new file mode 100644 index 0000000000..4bf15084c9 --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-usr-dir.adoc @@ -0,0 +1,15 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-usr-dir"] + += Configuration in the `/usr` directory + +Place configuration files in the `/usr` directory if the configuration is static and the application or service supports that configuration. 
+By placing the configuration in the `/usr` directory, the configuration remains read-only and fully defined by the image. + +Do not place the configuration in the `/usr` directory in the following cases: + +* The configuration is deployment or site-specific. +* The application or service only supports reading configuration from the `/etc` directory. +* The configuration might need to be changed at runtime. + diff --git a/downstream/modules/platform/con-edge-manager-view-devices.adoc b/downstream/modules/platform/con-edge-manager-view-devices.adoc new file mode 100644 index 0000000000..cd1469fc85 --- /dev/null +++ b/downstream/modules/platform/con-edge-manager-view-devices.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: CONCEPT + +[id="edge-manager-view-devices"] + += View devices + +To get more information about the devices in your inventory, you can use the {RedHatEdge} CLI. + +.Prerequisites + +* You must install the {RedHatEdge} CLI. +See xref:edge-manager-install-CLI[Installing the Red Hat Edge Manager CLI]. +* You must enroll at least one device. diff --git a/downstream/modules/platform/con-editing-inventory-files.adoc b/downstream/modules/platform/con-editing-inventory-files.adoc index da7ca97038..d397bdd7f1 100644 --- a/downstream/modules/platform/con-editing-inventory-files.adoc +++ b/downstream/modules/platform/con-editing-inventory-files.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-editing_inventory_files"] = Additional inventory file variables @@ -6,4 +8,4 @@ You can further configure your {PlatformName} installation by including addition These configurations add optional features for managing your {PlatformName}. Add these variables by editing the inventory file using a text editor. -A table of predefined values for inventory file variables can be found in link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/appendix-inventory-files-vars[Inventory file variables] in the _{PlatformName} Installation Guide_. \ No newline at end of file +A table of predefined values for inventory file variables can be found in link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/rpm_installation/appendix-inventory-files-vars[Inventory file variables] in the _{PlatformName} Installation Guide_. \ No newline at end of file diff --git a/downstream/modules/platform/con-grpc-settings-py.adoc b/downstream/modules/platform/con-grpc-settings-py.adoc new file mode 100644 index 0000000000..92ac8ca651 --- /dev/null +++ b/downstream/modules/platform/con-grpc-settings-py.adoc @@ -0,0 +1,19 @@ +:_mod-docs-content-type: CONCEPT + +[id="grpc-settings-py_{context}"] + += `grpc_settings.py` file + +[role="_abstract"] +Platform administrators can use the `grpc_settings.py` file to define special or custom parameters for the gRPC server. + +There are two gRPC settings files: the default `grpc_default.py` that is part of the codebase and must not be edited, and an override file that can be used to override the default values. The `grpc_default.py` file includes database keepalive OPTIONS to help maintain a healthy gRPC connection and prevent interruptions. If you need to change these defaults, use the `grpc_settings.py` file to override values from the `grpc_default.py` file. + +The location and management of the override `grpc_settings.py` file can differ based on your deployment (RPM-based, {ContainerBase}, or {OperatorBase}).
+ +== RPM deployments + +The override `grpc_settings.py` file in an RPM-based setup can be edited directly, and changes take effect after restarting the gateway systemd service. If you choose to edit the file, be sure to use the proper syntax and values. The override `grpc_settings.py` file is located in the following directory: +---- +/etc/ansible-automation-platform/gateway/grpc_settings.py +---- diff --git a/downstream/modules/platform/con-gs-about-builder.adoc b/downstream/modules/platform/con-gs-about-builder.adoc new file mode 100644 index 0000000000..7cc2ccd8a0 --- /dev/null +++ b/downstream/modules/platform/con-gs-about-builder.adoc @@ -0,0 +1,19 @@ +:_newdoc-version: 2.18.4 +:_template-generated: 2025-06-25 +:_mod-docs-content-type: CONCEPT + +[id="gs-about-builder_{context}"] +== About {Builder} + +You also have the option of creating an entirely new {ExecEnvShort} with {Builder}, also referred to as {ExecEnvShort} builder. +{Builder} is a command line tool you can use to create an {ExecEnvShort} for Ansible. +You can only create {ExecEnvShort}s with {Builder}. + +To build your own {ExecEnvShort}, you must: + +* Download {Builder} +* Create a definition file that defines your {ExecEnvShort} +* Create an {ExecEnvShort} image based on the definition file + +For more information about building an {ExecEnvShort}, see link:{LinkBuilder}. + diff --git a/downstream/modules/platform/con-gs-ansible-content.adoc b/downstream/modules/platform/con-gs-ansible-content.adoc new file mode 100644 index 0000000000..e28c6e1c34 --- /dev/null +++ b/downstream/modules/platform/con-gs-ansible-content.adoc @@ -0,0 +1,8 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-ansible-content_{context}"] + += About automation content + +Use the following Ansible concepts to create successful Ansible Playbooks and {ExecEnvName} before beginning your Ansible development project. + diff --git a/downstream/modules/platform/con-gs-ansible-lightspeed.adoc b/downstream/modules/platform/con-gs-ansible-lightspeed.adoc new file mode 100644 index 0000000000..5a98909cd4 --- /dev/null +++ b/downstream/modules/platform/con-gs-ansible-lightspeed.adoc @@ -0,0 +1,10 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-ansible-lightspeed"] + += {LightspeedShortName} + +{LightspeedFullName} is a generative AI service designed by and for Ansible platform engineers and developers. +It accepts natural-language prompts entered by a user and then interacts with IBM watsonx foundation models to produce code recommendations built on Ansible best practices. + +{LightspeedFullName} helps automation teams learn, create, and maintain {PlatformName} content more efficiently. diff --git a/downstream/modules/platform/con-gs-ansible-roles.adoc b/downstream/modules/platform/con-gs-ansible-roles.adoc new file mode 100644 index 0000000000..bcdaa5c424 --- /dev/null +++ b/downstream/modules/platform/con-gs-ansible-roles.adoc @@ -0,0 +1,11 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-ansible-roles_{context}"] + += Bundle content with Ansible roles + +A role is like a customized piece of automation content that bundles together relevant bits from playbooks to fit your system's specific needs. Roles are self-contained and portable, and can include groupings of tasks, variables, configuration templates, handlers, and other supporting files to orchestrate complicated automation flows. + +Instead of creating huge playbooks with hundreds of tasks, you can use roles to break the tasks apart into smaller, more discrete units of work. 
+ +To learn more about roles, see link:https://www.redhat.com/en/topics/automation/what-is-an-ansible-role[What is an Ansible Role-and how is it used?]. diff --git a/downstream/modules/platform/con-gs-auto-dev-about-inv.adoc b/downstream/modules/platform/con-gs-auto-dev-about-inv.adoc new file mode 100644 index 0000000000..93c8a9ce41 --- /dev/null +++ b/downstream/modules/platform/con-gs-auto-dev-about-inv.adoc @@ -0,0 +1,41 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-auto-dev-about-inv"] + += About inventories + +An inventory is a file listing the collection of hosts managed by {PlatformNameShort}. +Organizations are assigned to inventories, while permissions to launch playbooks against inventories are controlled at the user or team level. + +== Browsing and creating inventories + +You can find inventories in the UI by navigating to {MenuInfrastructureInventories}. The Inventories window displays a list of the inventories that are currently available. You can sort the inventory list by name and search by inventory type, organization, description, inventory creators or modifiers, or additional criteria. +Use the following procedure to create a new inventory. + +.Procedure + +. From the navigation panel, select {MenuInfrastructureInventories}. The *Inventories* view displays a list of the inventories currently available. +. Click btn:[Create inventory], and from the list menu select the type of inventory you want to create. +. Enter the appropriate details into the following fields: +* *Name*: Enter a name for the inventory. +* Optional: *Description*: Enter a description. +* *Organization*: Choose among the available organizations. +* Only applicable to Smart Inventories: *Smart Host Filter*: Filters are similar to tags in that tags are used to filter certain hosts that contain those names. Therefore, to populate this field, specify a tag that contains the hosts you want, not the hosts themselves. Filters are case-sensitive. For more information, see link:{URLControllerUserGuide}/controller-inventories#ref-controller-smart-host-filter[Smart host filters] in the {TitleControllerUserGuide} guide. +* *Instance groups*: Select the instance group or groups for this inventory to run on. If the list is extensive, use the search to narrow the options. You can select multiple instance groups and sort them in the order that you want them to run. +* Optional: *Labels*: Add labels that describe this inventory, so they can be used to group and filter inventories and jobs. +* Only applicable to constructed inventories: *Input inventories*: Specify the source inventories to include in this constructed inventory. Empty groups from input inventories are copied into the constructed inventory. +* Optional and only applicable to constructed inventories: *Cache timeout (seconds)*: Set the length of time you want the cache plugin data to time out. +* Only applicable to constructed inventories: *Verbosity*: Control the level of output that Ansible produces as the playbook executes related to inventory sources associated with constructed inventories. Select the verbosity from Normal to various Verbose or Debug settings. This only appears in the "details" report view. +** Verbose logging includes the output of all commands. +** Debug logging is exceedingly verbose and includes information on SSH operations that can be useful in certain support instances. Most users do not need to see debug mode output.
+* Only applicable to constructed inventories: *Limit*: Restricts the number of returned hosts for the inventory source associated with the constructed inventory. You can paste a group name into the limit field to only include hosts in that group. For more information, see the *Source variables* setting. +* Only applicable to standard inventories: *Options*: Check the box next to *Prevent instance group fallback* to enable only the instance groups listed in the *Instance groups* field to execute the job. If unchecked, all available instances in the execution pool are used based on the hierarchy described in link:{URLControllerAdminGuide}/controller-clustering#controller-cluster-job-runs[Control where a job runs] in the {TitleControllerAdminGuide} guide. Click the tooltip for more information. ++ NOTE: Set the `prevent_instance_group_fallback` option for smart inventories through the API. ++ +* *Variables* (*Source variables* for constructed inventories): +** *Variables*: Variable definitions and values to apply to all hosts in this inventory. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. +** *Source variables* for constructed inventories are used to configure the constructed inventory plugin. Source variables create groups under the `groups` data key. The variable accepts Jinja2 template syntax, renders it for every host, makes a `true` or `false` evaluation, and includes the host in the group (from the key of the entry) if the result is `true`. +. Click btn:[Create inventory]. + +After creating the new inventory, you can proceed with configuring permissions, groups, hosts, sources, and viewing completed jobs, if applicable to the type of inventory. \ No newline at end of file diff --git a/downstream/modules/platform/con-gs-auto-dev-job-templates.adoc b/downstream/modules/platform/con-gs-auto-dev-job-templates.adoc new file mode 100644 index 0000000000..cf957cef09 --- /dev/null +++ b/downstream/modules/platform/con-gs-auto-dev-job-templates.adoc @@ -0,0 +1,9 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-auto-dev-job-templates"] + += Work with job templates + +A job template is a definition and set of parameters for running an Ansible job. + +A job template combines an Ansible playbook from a project and the settings required to launch it, including information about the target host against which the playbook runs, authentication information to access the host, and any other relevant variables. Job templates are useful for running the same job many times. Job templates also encourage the reuse of Ansible playbook content and collaboration between teams. For more information, see link:{URLControllerUserGuide}/controller-job-templates[Job Templates] in the {TitleControllerUserGuide} guide. diff --git a/downstream/modules/platform/con-gs-auto-op-about-inv.adoc b/downstream/modules/platform/con-gs-auto-op-about-inv.adoc new file mode 100644 index 0000000000..39c9c0cd98 --- /dev/null +++ b/downstream/modules/platform/con-gs-auto-op-about-inv.adoc @@ -0,0 +1,11 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-auto-op-about-inv"] + += About inventories + +An inventory is a file listing the collection of hosts managed by {PlatformNameShort}. +Organizations are assigned to inventories, while permissions to launch playbooks against inventories are controlled at the user or team level. + +Platform administrators and automation developers have the permissions to create inventories. +As an automation operator you can view inventories and their details.
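+
+For illustration, an inventory file of the kind described in this module might look like the following minimal YAML sketch; the host and group names are hypothetical.
+
+[source,yaml]
+----
+# Hypothetical inventory: two groups of managed hosts, with one
+# group-level variable applied to all web servers.
+all:
+  children:
+    webservers:
+      hosts:
+        web01.example.com:
+        web02.example.com:
+      vars:
+        http_port: 8080
+    dbservers:
+      hosts:
+        db01.example.com:
+----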
diff --git a/downstream/modules/platform/con-gs-auto-op-execute-inv.adoc b/downstream/modules/platform/con-gs-auto-op-execute-inv.adoc new file mode 100644 index 0000000000..395c34e5a0 --- /dev/null +++ b/downstream/modules/platform/con-gs-auto-op-execute-inv.adoc @@ -0,0 +1,21 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-auto-op-execute-inv"] + += Executing an inventory + +.Procedure + +. From the navigation panel, select {MenuInfrastructureInventories}. +The *Inventories* window displays a list of inventories that are currently available, along with the following information: +* *Name*: The inventory name. +* *Status*: The statuses are: +** *Success*: The inventory sync completed successfully. +** *Disabled*: No inventory source added to the inventory. +** *Error*: The inventory source completed with error. +* *Type*: Identifies whether the inventory is a standard inventory, a smart inventory, or a constructed inventory. +* *Organization*: The organization to which the inventory belongs. +. Select an inventory name to display the *Details* page for the inventory, including the inventory's groups and hosts. + +For more information about inventories, see the link:{URLControllerUserGuide}/controller-inventories[Inventories] section of the {TitleControllerUserGuide} guide. + diff --git a/downstream/modules/platform/con-gs-auto-op-job-templates.adoc b/downstream/modules/platform/con-gs-auto-op-job-templates.adoc new file mode 100644 index 0000000000..98085f4a7e --- /dev/null +++ b/downstream/modules/platform/con-gs-auto-op-job-templates.adoc @@ -0,0 +1,11 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-auto-op-job-templates"] + += Work with job templates + +A job template is a definition and set of parameters for running an Ansible job. + +A job template combines an Ansible Playbook from a project with the settings required to launch the job. Job templates are useful for running the same job many times. Job templates also encourage the reuse of Ansible Playbook content and collaboration between teams. For more information, see link:{URLControllerUserGuide}/controller-job-templates[Job Templates] in the {TitleControllerUserGuide} guide. + +Platform administrators and automation developers have the permissions to create job templates. As an automation operator you can launch job templates and view their details. diff --git a/downstream/modules/platform/con-gs-automation-content.adoc b/downstream/modules/platform/con-gs-automation-content.adoc new file mode 100644 index 0000000000..4b9e6796b9 --- /dev/null +++ b/downstream/modules/platform/con-gs-automation-content.adoc @@ -0,0 +1,59 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-automation-content"] + += Automation content + +{HubNameStart} is the central location for your {PlatformNameShort} content. +In {HubName} you can also find content collections that you can download and integrate into your automation environment. You can also create and upload your own content to distribute to your users. + +An Ansible Content Collection is a ready-to-use toolkit for automation and can include multiple types of content, including roles, modules, playbooks, and plugins all in one place. + +You can access {HubName} in one of two ways: + +* On the Red Hat-hosted link:https://console.redhat.com/[Hybrid Cloud Console], where you can find Red Hat validated or certified content that you can sync to your platform environment. 
+* On a self-hosted, on-premise {PrivateHubName}, where you can curate content for your automation users and manage access to collections and {ExecEnvShort}s. + +Depending on the way you access {HubName}, you may have access to different types of content collections. + +There are two types of Red Hat Ansible content: + +* {CertifiedName}, which Red Hat builds, supports, and maintains. +Certified collections are included in your subscription to {PlatformName} and can be found in {HubName}. +* {Valid} collections, which are customizable and therefore do not have a support guarantee, but have been tested in the {PlatformNameShort} environment. + +For more information about Ansible content, see xref:con-gs-create-automation-content[Create automation content] in xref:assembly-gs-auto-dev[Getting started as an automation developer]. + +== Ansible roles + +Ansible roles allow you to create reusable automation content that helps teams to work more efficiently and avoid duplicating efforts. +With roles, you can group together a broader range of existing automation content, like playbooks, configuration files, templates, tasks, and handlers to create customized automation content that can be reused and shared with others. + +You can also make roles configurable by exposing variables that users can set when calling the role, allowing them to configure their system according to their organization's requirements. + +Roles are generally included in Ansible content collections. + +.Additional resources + +For more information, see xref:con-gs-ansible-roles_assembly-gs-auto-dev[Bundle content with Ansible roles]. + +== Ansible playbooks + +Playbooks are YAML files that contain specific sets of human-readable instructions, or “plays,” that you send to run on a single target or groups of targets. +Ansible playbooks are repeatable and reusable configuration management tools designed to deploy complex applications. + +You can use playbooks to manage configurations of and deployments to remote machines to sequence multitiered rollouts involving rolling updates. Use playbooks to delegate actions to other hosts, interacting with monitoring servers and load balancers along the way. + +Once written, you can use and re-use playbooks for automation across your enterprise. +For example, if you need to run a task more than once, write a playbook and put it under source control. +Then, you can use the playbook to push out new configuration or confirm the configuration of remote systems. + +Ansible playbooks can declare configurations, orchestrate steps of any manually ordered process on many machines in a defined order, or start tasks synchronously or asynchronously. + +You can also use {LightspeedShortName}, Ansible's generative AI service, to create and develop playbooks to fit your needs. See the link:https://docs.redhat.com/en/documentation/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_user_guide/index[Ansible Lightspeed documentation] for more information. 
+ +.Additional resources + +* link:{LinkPlaybooksGettingStarted} +* link:https://docs.redhat.com/en/documentation/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_user_guide/index[{LightspeedFullName} user guide] + diff --git a/downstream/modules/platform/con-gs-automation-decisions.adoc b/downstream/modules/platform/con-gs-automation-decisions.adoc new file mode 100644 index 0000000000..5389e2c4af --- /dev/null +++ b/downstream/modules/platform/con-gs-automation-decisions.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-automation-decisions"] + += Automation decisions + +{PlatformName} includes {EDAName}, an automation engine that listens to your system's event stream and reacts to events that you have specified with targeted automation tasks. +In this way, {EDAName} manages routine automation tasks and responses, freeing you up to work on more complex tasks. + +Managed through {EDAcontroller}, Ansible rulebooks are the framework for automation decisions. Ansible rulebooks are collections of rulesets, which in turn consist of one or more sources, rules, and conditions. Rulebooks tell the system what events to flag and how to respond to them. From the Automation Decisions section of the platform user interface, you can use rulebooks to connect and listen to event sources, and define actions that are triggered in response to certain events. + +.Additional resources +For more information about rulebooks, events, and sources, see xref:con-gs-define-events-rulebooks[Rulebook actions]. diff --git a/downstream/modules/platform/con-gs-automation-execution-jobs.adoc b/downstream/modules/platform/con-gs-automation-execution-jobs.adoc new file mode 100644 index 0000000000..89f39ccc1d --- /dev/null +++ b/downstream/modules/platform/con-gs-automation-execution-jobs.adoc @@ -0,0 +1,7 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-automation-execution-jobs"] + += Automation execution jobs + +A job is an instance of {PlatformNameShort} launching an Ansible Playbook against an inventory of hosts. diff --git a/downstream/modules/platform/con-gs-automation-execution.adoc b/downstream/modules/platform/con-gs-automation-execution.adoc new file mode 100644 index 0000000000..b820c15f6a --- /dev/null +++ b/downstream/modules/platform/con-gs-automation-execution.adoc @@ -0,0 +1,16 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-automation-execution"] + += Automation execution + +The centerpiece of {PlatformNameShort} is its automation execution command and control center, where you can deploy, define, operate, scale, and delegate automation across your enterprise. +With this functionality, you can perform a variety of tasks from a single location, such as running playbooks from a simple, straightforward web UI, monitoring dashboard activity, and using centralized logging to manage and track job execution. + +In the automation execution environment, you can use {ControllerName} tasks to build job templates, which standardize how automation is deployed, initiated, and delegated, making it more reusable and consistent. + +== Inventories + +An inventory is a single file, usually in INI or YAML format, containing a list of hosts and groups that can be acted upon using Ansible commands and playbooks. +You can use an inventory file to specify your installation scenario and describe host deployments to Ansible.
+You can also use an inventory file to organize managed nodes in centralized files that provide Ansible with system information and network locations. diff --git a/downstream/modules/platform/con-gs-automation-mesh.adoc b/downstream/modules/platform/con-gs-automation-mesh.adoc new file mode 100644 index 0000000000..67785517b1 --- /dev/null +++ b/downstream/modules/platform/con-gs-automation-mesh.adoc @@ -0,0 +1,11 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-automation-mesh"] + += {AutomationMeshStart} + +{AutomationMeshStart} is an overlay network intended to ease the distribution of automation across a collection of execution nodes using existing connectivity. +Execution nodes are where link:https://www.redhat.com/en/topics/automation/what-is-an-ansible-playbook[Ansible Playbooks] are actually executed. +A node runs an {ExecEnvNameSing} which, in turn, runs the Ansible Playbook. +{AutomationMeshStart} creates peer-to-peer connections between these execution nodes, increasing the resiliency of your automation workloads to network latency and connection disruptions. +This also permits more flexible architectures and provides rapid, independent scaling of control and execution capacity. diff --git a/downstream/modules/platform/con-gs-build-decision-env.adoc b/downstream/modules/platform/con-gs-build-decision-env.adoc new file mode 100644 index 0000000000..5e775613fd --- /dev/null +++ b/downstream/modules/platform/con-gs-build-decision-env.adoc @@ -0,0 +1,20 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-build-decision-env"] + += Build and use a decision environment + +{EDAName} includes an ansible.eda collection, which contains sample sources, event filters, and rulebooks. +All the collections, Ansible rulebooks, and their dependencies use a decision environment, which is an image that can be run on either Podman or Kubernetes. + +In decision environments, sources, which are typically Python code, are distributed through ansible-collections. +They inject external events into a rulebook for processing. +The decision environment consists of the following: + +* The Python interpreter +* Java Runtime Environment for the Drools rule engine +* ansible-rulebook Python package +* ansible.eda collection + +You can use the base decision environment and build your own customized decision environments with additional collections and collection dependencies. +You can build a decision environment by using a Dockerfile, and you can optionally deploy your CA certificate into the image. diff --git a/downstream/modules/platform/con-gs-config-authentication.adoc b/downstream/modules/platform/con-gs-config-authentication.adoc new file mode 100644 index 0000000000..691ce9d8f5 --- /dev/null +++ b/downstream/modules/platform/con-gs-config-authentication.adoc @@ -0,0 +1,11 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-gs-config-authentication"] + += Configure authentication + +After your first login as an administrator, you must configure authentication for your users. +Depending on your organization's needs and resources, you can either: + +* Set up authentication by creating users, teams, and organizations manually. +* Use an external source such as GitHub to configure authentication for your system.
diff --git a/downstream/modules/platform/con-gs-create-automation-content.adoc b/downstream/modules/platform/con-gs-create-automation-content.adoc
new file mode 100644
index 0000000000..e4aabebd8e
--- /dev/null
+++ b/downstream/modules/platform/con-gs-create-automation-content.adoc
@@ -0,0 +1,45 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-gs-create-automation-content"]
+
+= Create automation content with playbooks
+
+Ansible playbooks are blueprints that tell {PlatformNameShort} what tasks to perform with which devices.
+You can use a playbook to define the automation tasks that you want the platform to run.
+
+== Create a playbook
+
+A playbook contains one or more plays. A basic play contains the following parameters:
+
+* *Name*: a brief description of the overall function of the playbook, which assists in keeping it readable and organized for all users.
+* *Hosts*: identifies the target or targets for Ansible to run against.
+* *Become statements*: this optional statement can be set to `true` or `yes` to enable privilege escalation using a become plugin (such as `sudo`, `su`, `pfexec`, `doas`, `pbrun`, `dzdo`, `ksu`).
+* *Tasks*: this is the list of actions that get executed against each host in the play.
+
+Here is an example of a play in a playbook. You can see the name of the play, the host, and the list of tasks included in the play.
+
+[source,yaml]
+----
+- name: Set Up a Project and Job Template
+  hosts: host.name.ip
+  become: true
+
+  tasks:
+    - name: Create a Project
+      ansible.controller.project:
+        name: Job Template Test Project
+        state: present
+        scm_type: git
+        scm_url: https://github.com/ansible/ansible-tower-samples.git
+
+    - name: Create a Job Template
+      ansible.controller.job_template:
+        name: my-job-1
+        project: Job Template Test Project
+        inventory: Demo Inventory
+        playbook: hello_world.yml
+        job_type: run
+        state: present
+----
+
+For more detailed instructions on authoring playbooks, see link:{LinkDevelopAutomationContent}, or consult our documentation on link:{LinkLightspeedUserGuide} to learn how to generate a playbook with AI assistance.
\ No newline at end of file
diff --git a/downstream/modules/platform/con-gs-dashboard-components.adoc b/downstream/modules/platform/con-gs-dashboard-components.adoc
new file mode 100644
index 0000000000..9130cdc938
--- /dev/null
+++ b/downstream/modules/platform/con-gs-dashboard-components.adoc
@@ -0,0 +1,41 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-gs-dashboard-components"]
+
+= Dashboard components
+
+image::AAP_dashboard_2.5.png[Dashboard]
+
+After you install {PlatformNameShort} on your system and log in for the first time, familiarize yourself with the platform dashboard.
+
+Quick starts::
+You can learn about Ansible automation functions with guided tutorials called quick starts.
+In the dashboard, you can access quick starts by selecting a quick start card.
+From the panel displayed, click btn:[Start] and complete the onscreen instructions.
+You can also filter quick starts by keyword and status.
+
+Resource status::
+Indicates the status of your hosts, projects, and inventories.
+The status indicator links to your configured hosts, projects, and inventories, where you can search, filter, add, and change these resources.
+
+Job Activity::
+You can view a summary of your current job status.
+Filter the job status within a period of time or by job type, or click btn:[Go to jobs] to view a complete list of jobs that are currently available.
+
+Jobs::
+You can view recent jobs that have run, or click btn:[View all Jobs] to view a complete list of jobs that are currently available, or create a new job.
+
+Projects::
+You can view recently updated projects or click btn:[View all Projects] to view a complete list of the projects that are currently available, or create a new project.
+
+Inventories::
+You can view recently updated inventories or click btn:[View all Inventories] to view a complete list of available inventories, or create a new inventory.
+
+Rulebook Activations::
+You can view the list of recent rulebook activations and their status, display the complete list of rulebook activations that are currently available, or create a new rulebook activation.
+
+Rule Audit::
+You can view recently fired rule audits, view rule audit records, and view rule audit data based on corresponding rulebook activation runs.
+
+Decision Environments::
+You can view recently updated decision environments, or click btn:[View all Decision Environments] to view a complete list of available decision environments, or create a new decision environment.
diff --git a/downstream/modules/platform/con-gs-define-events-rulebooks.adoc b/downstream/modules/platform/con-gs-define-events-rulebooks.adoc
new file mode 100644
index 0000000000..39bedfb935
--- /dev/null
+++ b/downstream/modules/platform/con-gs-define-events-rulebooks.adoc
@@ -0,0 +1,27 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-gs-define-events-rulebooks"]
+
+= Define events with rulebooks
+
+An Ansible rulebook is a collection of rulesets that references one or more sources, rules, and conditions.
+
+Rulebooks are to {EDAName} what playbooks are to {PlatformNameShort} as a whole.
+Like a playbook, a rulebook defines automation tasks for the platform, along with when they should be triggered.
+
+== Rulebook actions
+
+Rulebooks use an "if-this-then-that" logic that tells {EDAName} what actions to activate when a rule is triggered. {EDAName} listens to the controller event stream and, when an event triggers a rule, activates an automation action in response.
+
+Rulebooks can trigger the following actions:
+
+* `run_job_template`
+* `run_playbook` (only supported with ansible-rulebook CLI)
+* `debug`
+* `print_event`
+* `set_fact`
+* `post_event`
+* `retract_fact`
+* `shutdown`
+
+To read more about rulebook actions, see link:https://ansible.readthedocs.io/projects/rulebook/en/latest/actions.html[Actions] in the Ansible Rulebook documentation.
\ No newline at end of file
diff --git a/downstream/modules/platform/con-gs-developer-tools.adoc b/downstream/modules/platform/con-gs-developer-tools.adoc
new file mode 100644
index 0000000000..5222fd5814
--- /dev/null
+++ b/downstream/modules/platform/con-gs-developer-tools.adoc
@@ -0,0 +1,8 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-gs-developer-tools"]
+
+= {ToolsName}
+
+{ToolsName} are an integrated and supported suite of capabilities that help IT practitioners at any skill level generate automation content faster than they might with manual coding.
+{ToolsName} can help you create, test, and deploy automation content like playbooks, {ExecEnvShort}s, and collections quickly and accurately using recommended practices. For more information on how {ToolsName} can help you create automation content, see our documentation on link:{LinkDevelopAutomationContent}.
\ No newline at end of file
diff --git a/downstream/modules/platform/con-gs-execution-env.adoc b/downstream/modules/platform/con-gs-execution-env.adoc
new file mode 100644
index 0000000000..d9b135d93c
--- /dev/null
+++ b/downstream/modules/platform/con-gs-execution-env.adoc
@@ -0,0 +1,22 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-gs-execution-env_{context}"]
+
+= Build and use an {ExecEnvShort}
+
+All automation in {PlatformName} runs on container images called {ExecEnvName}.
+
+{ExecEnvNameStart} are consistent and shareable container images that serve as Ansible control nodes.
+{ExecEnvNameStart} reduce the challenge of sharing Ansible content that has external dependencies.
+If automation content is like a script that a developer has written, an automation {ExecEnvShort} is like a replica of that developer's environment, thereby enabling you to reproduce and scale the automation content that the developer has written. In this way, {ExecEnvShort}s make it easier for you to implement automation in a range of environments.
+
+{ExecEnvNameStart} contain:
+
+* Ansible Core
+* {Runner}
+* Ansible Collections
+* Python libraries
+* System dependencies
+* Custom user needs
+
+You can either use the default base {ExecEnvShort} included in your {PlatformNameShort} subscription, or you can define and create an {ExecEnvNameSing} using {Builder}.
\ No newline at end of file
diff --git a/downstream/modules/platform/con-gs-final-set-up.adoc b/downstream/modules/platform/con-gs-final-set-up.adoc
new file mode 100644
index 0000000000..2ccf74ac5f
--- /dev/null
+++ b/downstream/modules/platform/con-gs-final-set-up.adoc
@@ -0,0 +1,15 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-gs-final-set-up"]
+
+= Using this guide
+
+After you have installed {PlatformNameShort} {PlatformVers} and have become familiar with the dashboard, use this document to explore further options for setup and daily use.
+This guide is structured so that you can select the path that is most appropriate to you and your role within your organization.
+We also encourage you to explore the other paths outlined in this guide to learn how Ansible empowers users with various roles and objectives to build and customize automation tasks.
+
+Select one of the following paths to continue getting started:
+
+* If you are a systems administrator configuring authentication and setting up teams and organizations, see xref:assembly-gs-platform-admin[Getting started as a platform administrator].
+* If you are a developer setting up development environments, creating playbooks, rulebooks, roles, or projects, see xref:assembly-gs-auto-dev[Getting started as an automation developer].
+* If you are an operator using playbooks, publishing custom content, creating projects, and creating and using inventories, see xref:assembly-gs-auto-op[Getting started as an automation operator].
diff --git a/downstream/modules/platform/con-gs-learn-about-collections.adoc b/downstream/modules/platform/con-gs-learn-about-collections.adoc
new file mode 100644
index 0000000000..b449375ce9
--- /dev/null
+++ b/downstream/modules/platform/con-gs-learn-about-collections.adoc
@@ -0,0 +1,14 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-gs-learn-about-collections_{context}"]
+
+= About content collections
+
+Ansible content collections are assemblages of automation content.
There are two types of Ansible collections:
+
+* *{CertifiedName}*, which contain fully supported roles and modules that are enterprise- and production-ready for use in your environments.
+* *{Valid} collections*, which provide you with a trusted, expert-guided approach for performing foundational operations and tasks in your product.
+
+Both types of content collections can be found in {HubName} through the link:https://console.redhat.com/ansible/automation-hub/[Hybrid Cloud Console].
+
+
diff --git a/downstream/modules/platform/con-gs-manage-RBAC.adoc b/downstream/modules/platform/con-gs-manage-RBAC.adoc
new file mode 100644
index 0000000000..6ade274aea
--- /dev/null
+++ b/downstream/modules/platform/con-gs-manage-RBAC.adoc
@@ -0,0 +1,15 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-gs-manage-RBAC"]
+
+= Managing user access with role-based access control
+
+Role-based access control (RBAC) restricts user access based on their role within an organization.
+The roles in RBAC refer to the levels of access that users have to the network.
+
+You can control what users can do with the components of {PlatformNameShort} at a broad or granular level depending on your RBAC policy.
+You can select whether the user is a system administrator or a normal user and align roles and access permissions with their positions within the organization.
+
+You can define roles with many permissions that can then be assigned to resources, teams, and users.
+The permissions that make up a role dictate what the assigned role allows.
+Permissions are allocated with only the access needed for a user to perform the tasks appropriate for their role.
diff --git a/downstream/modules/platform/con-gs-manage-collections.adoc b/downstream/modules/platform/con-gs-manage-collections.adoc
new file mode 100644
index 0000000000..67c1f4f6f8
--- /dev/null
+++ b/downstream/modules/platform/con-gs-manage-collections.adoc
@@ -0,0 +1,14 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-gs-manage-collections"]
+
+= Manage collections in {HubName}
+
+As a platform operator, you can use namespaces in {HubName} to curate and manage collections for the following purposes:
+
+* Create groups with permissions to curate namespaces and upload collections to {PrivateHubName}.
+* Add information and resources to the namespace to help end users of the collection in their automation tasks.
+* Upload collections to the namespace.
+* Review the namespace import logs to determine the success or failure of uploading the collection and its current approval status.
+
+For more information about collections, see _link:{LinkHubManagingContent}_.
diff --git a/downstream/modules/platform/con-gs-playbooks.adoc b/downstream/modules/platform/con-gs-playbooks.adoc
new file mode 100644
index 0000000000..e1b2365ce1
--- /dev/null
+++ b/downstream/modules/platform/con-gs-playbooks.adoc
@@ -0,0 +1,14 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-gs-playbooks"]
+
+= Get started with playbooks
+
+A playbook runs its plays in order from top to bottom. Within each play, tasks also run in order from top to bottom.
+
+== Learn about playbooks
+
+Playbooks with multiple “plays” can orchestrate multi-machine deployments, running one play on your web servers, another play on your database servers, and a third play on your network infrastructure.
+
+For more information, see link:{LinkPlaybooksGettingStarted}.
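+As a simple sketch of that pattern, the following playbook runs two plays against two host groups in turn; the group names (`webservers`, `databases`) and the package and service names (`httpd`, `postgresql`) are illustrative placeholders:
+
+[source,yaml]
+----
+- name: Update web servers
+  hosts: webservers
+  become: true
+
+  tasks:
+    - name: Ensure Apache is at the latest version
+      ansible.builtin.dnf:
+        name: httpd
+        state: latest
+
+- name: Update database servers
+  hosts: databases
+  become: true
+
+  tasks:
+    - name: Ensure PostgreSQL is started
+      ansible.builtin.service:
+        name: postgresql
+        state: started
+----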
+
diff --git a/downstream/modules/platform/con-gs-rulebook-activations.adoc b/downstream/modules/platform/con-gs-rulebook-activations.adoc
new file mode 100644
index 0000000000..14a1490077
--- /dev/null
+++ b/downstream/modules/platform/con-gs-rulebook-activations.adoc
@@ -0,0 +1,7 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-gs-rulebook-activations"]
+
+= Create and run a rulebook activation
+
+In {EDAName}, a rulebook activation is a background process in which a decision environment executes a specific rulebook.
diff --git a/downstream/modules/platform/con-gs-setting-up-dev-env.adoc b/downstream/modules/platform/con-gs-setting-up-dev-env.adoc
new file mode 100644
index 0000000000..42ed47e033
--- /dev/null
+++ b/downstream/modules/platform/con-gs-setting-up-dev-env.adoc
@@ -0,0 +1,9 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2024-09-19
+
+:_mod-docs-content-type: CONCEPT
+
+[id="setting-up-dev-env_{context}"]
+= Setting up your development environment
+
+Before you begin to create content, consult our guide to link:{LinkDevelopAutomationContent}. There you can find information on {ToolsName}, which you can integrate into your environment, and learn how to scaffold a playbook project.
diff --git a/downstream/modules/platform/con-gw-activity-stream.adoc b/downstream/modules/platform/con-gw-activity-stream.adoc
new file mode 100644
index 0000000000..2245e67a6d
--- /dev/null
+++ b/downstream/modules/platform/con-gw-activity-stream.adoc
@@ -0,0 +1,12 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-gw-activity-stream"]
+
+= Activity stream
+The {Gateway} includes an activity stream that captures changes to {Gateway} resources, such as the creation or modification of organizations, users, and service clusters, among others. For each change, the activity stream collects information about the time of the change, the user that initiated the change, the action performed, and the actual changes made to the object, when possible. The information gathered varies depending on the type of change.
+
+You can access the details captured by the activity stream from the API:
+
+-----
+/api/gateway/v1/activitystream/
+-----
diff --git a/downstream/modules/platform/con-gw-authenticator-map-examples.adoc b/downstream/modules/platform/con-gw-authenticator-map-examples.adoc
new file mode 100644
index 0000000000..9d36914cfd
--- /dev/null
+++ b/downstream/modules/platform/con-gw-authenticator-map-examples.adoc
@@ -0,0 +1,61 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="gw-authenticator-map-examples"]
+
+= Authenticator map examples
+
+Use the following examples to explore the different conditions, such as groups and attribute values, that you can implement to control user access to the platform.
+
+.Add users to an organization based on an attribute
+In this example, you will add a user to the *Networking* organization if they have an `Organization` attribute with the value of `Networking`:
+
+image::am-org-mapping-full-annotation.png[Add users to an organization mapping example fully annotated with callout numbers that correlate with the following list that describes the function of each field]
+
+. The *Organization* title of the page indicates that you are configuring settings permissions on an organization.
+. `Network Organization` is entered in this field and is the unique, descriptive name for this map configuration.
+. *Attributes* is selected from the *Trigger* list to configure authentication based on an attribute from the source system, which in this example is `Organization`.
+. The operation is defined as `or`, meaning that at least one condition must be true for authentication to succeed.
+. The *Attribute* coming from the source system is `Organization`.
+. The *Comparison* value is set to `matches`, which means that when a user has the attribute *Value* of `Networking`, they are added to the *Networking* organization.
+. The attribute *Value* coming from the source system is `Networking`.
+. The name of the *Organization* to which you are adding members is `Networking`.
+. Users are added to the *Networking* organization with the `Organization Member` role.
+
+.Add users to a team based on the user's group
+In this example, you will add a user to the `Apple` team if they have either of the following groups:
+
+-----
+cn=Administrators,ou=AAP,ou=example,o=com
+-----
+
+or
+
+-----
+cn=Operators,ou=AAP,ou=example,o=com
+-----
+
+image::am-apple-team-map-example.png[Add user to a team mapping example]
+
+.Do not escalate privileges
+
+In this example, you never escalate users to a superuser. Note, however, that this rule does not revoke a user’s superuser permission, because the revoke option is not set.
+
+image::am-do-not-escalate-privileges.png[Do not escalate privileges mapping example]
+
+.Escalate privileges based on a user having a group
+
+In this example, you escalate user privileges to superuser if they belong to the following group:
+
+-----
+cn=Administrators,ou=AAP
+-----
+
+image::am-escalate-privileges.png[Escalate privileges mapping example]
+
+.Using mapping order to create exceptions
+
+Since maps are executed in order, it is possible to create exceptions. Expanding on the previous example for __Do not escalate privileges__, you can add another rule with a higher order, such as __Escalate privileges__.
+
+The first rule (__Do not escalate privileges__) prevents any user from being escalated to a superuser, but the second rule (__Escalate privileges__) alters that decision to grant superuser privileges to a user if they are in the `Administrators` group.
+
+image::am-mapping-order.png[Mapping order example]
\ No newline at end of file
diff --git a/downstream/modules/platform/con-gw-authenticator-map-triggers.adoc b/downstream/modules/platform/con-gw-authenticator-map-triggers.adoc
new file mode 100644
index 0000000000..96ae787345
--- /dev/null
+++ b/downstream/modules/platform/con-gw-authenticator-map-triggers.adoc
@@ -0,0 +1,55 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="gw-authenticator-map-triggers"]
+
+= Authenticator map triggers
+
+Each map has a trigger that defines when the map should be evaluated as true. Trigger types include the following:
+
+Always:: The trigger should always be fired.
+Never:: The trigger should never be fired.
+Group:: The map is true or false based on a user having, not having, or having multiple groups in the source system. See link:{URLCentralAuth}/gw-configure-authentication#gw-authenticator-map-examples[Authenticator map examples] for information on using *Group* triggers.
++
+When defining a group trigger, the authentication mapping expands to include the following selections:
++
+* *Operation:* This field includes conditional settings that trigger the handling of the rule based on the specified *Groups* criteria. The choices include *and* and *or*. For example, if you select *and*, the user logging in must be a member of all of the groups specified in the *Groups* field for this trigger to be true.
Alternatively, if you select *or*, the user logging in must be a member of any of the specified *Groups* in order for the trigger to fire.
++
+[NOTE]
+====
+If you specify only one group, it does not matter whether you select *and* or *or*.
+====
++
+* *Groups:* This is a list of one or more groups coming from the authentication system that the user must be a member of. The first time you create a *Groups* entry, you must manually enter the values. Once entered, that selection will be available from the *Groups* list.
++
+See the *Operation* field to determine the behavior of the trigger if more than one group is specified in the trigger.
++
+[NOTE]
+====
+Group identifiers must be entered in lowercase. For example, `cn=johnsmith,dc=example,dc=com` instead of `CN=johnsmith,DC=example,DC=com`.
+====
++
+Attribute:: The map is true or false based on a user's attributes coming from the source system. See link:{URLCentralAuth}/gw-configure-authentication#gw-authenticator-map-examples[Authenticator map examples] for information on using *Attribute* triggers.
++
+When defining an attribute trigger, the authentication mapping expands to include the following selections:
++
+* *Operation:* This field includes conditional settings that trigger the handling of the rule based on the specified *Attribute* criteria. In version {PlatformVers}, this field indicates what happens if the source system returns a list of attributes instead of a single value. For example, if the source system returns multiple emails for a user and *Operation* is set to *and*, all of the given emails must match the *Comparison* for the trigger to be _True_. If *Operation* is set to *or*, any of the returned emails sets the trigger to _True_ if it matches the *Comparison* in the trigger.
++
+[NOTE]
+====
+If you would like to experiment with multiple attribute maps, you can do that through the API, but the UI form removes multi-attribute maps if the authenticator is saved through the UI. When adding multiple attributes to a map, the *Operation* also applies to the attributes.
+====
++
+* *Attribute:* The name of the attribute coming from the source system that this trigger is evaluated against. For example, if you wanted the trigger to fire based on the user's last name, and the last name field in the source system was called `users_last_name`, you would enter the value `users_last_name` in this field.
+* *Comparison:* Tells the trigger how to evaluate the value of the user's *Attribute* in the source system compared to the *Value* specified on the trigger. Available options are: *contains*, *matches*, *ends with*, *in*, or *equals*. Below is a breakdown of each *Comparison* type:
++
+** *contains*: The specified character sequence in *Value* is contained within the attribute's value returned from the source. For example, given an attribute value of ‘John’ from the source, the *contains* *Comparison* would set the trigger to _True_ if the trigger *Value* was set to ‘Jo’ and _False_ if the trigger *Value* was ‘Joy’.
+** *matches*: The *Value* on the trigger is treated as a Python regular expression and does a link:https://docs.python.org/3/library/re.html#re.match[regular expression match (re.match)] (with case ignore on) between the specified *Value* and the value returned from the source system. For example, if the trigger's *Value* was ‘Jo’, the trigger would return _True_ if the value from the source was ‘John’ or ‘Joanne’ or any other value that matched the regular expression ‘Jo’.
The trigger would return _False_ if the source's value for the attribute was ‘Dan’, because ‘Dan’ does not match the regular expression ‘Jo’.
+** *ends with*: The trigger checks whether the value provided by the source ends with the specified *Value* of the trigger. For example, if the source provided a value of ‘John’, the trigger would be _True_ if its *Value* was set to ‘n’ or ‘on’. The trigger would be _False_ if its *Value* was set to ‘z’, because the value ‘John’ coming from the source does not end with the value ‘z’ specified by the trigger.
+** *equals*: The trigger checks whether the value provided by the source is equal to (in its entirety) the specified *Value* of the trigger. For example, if the source returned the value ‘John’, the trigger would be _True_ if its *Value* was set to ‘John’. Any value other than ‘John’ returned from the source would set this trigger to _False_.
+** *in*: The *in* condition checks whether the value matches one of several values. When *in* is specified as the *Comparison*, the *Value* field can be a comma-separated list. For example, if a trigger had a *Value* of ‘John,Donna’, the trigger would be _True_ if the attribute coming from the source had either the value ‘John’ or ‘Donna’. Otherwise, the trigger would be _False_.
+* *Value*: The value that a user's attribute is matched against based on the *Comparison* field. See examples in the *Comparison* definition in this section.
++
+[NOTE]
+====
+If the *Comparison* type is *in*, this field can be a comma-separated list (without spaces).
+====
diff --git a/downstream/modules/platform/con-gw-authenticator-map-types.adoc b/downstream/modules/platform/con-gw-authenticator-map-types.adoc
new file mode 100644
index 0000000000..20645d38e6
--- /dev/null
+++ b/downstream/modules/platform/con-gw-authenticator-map-types.adoc
@@ -0,0 +1,15 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="gw-authenticator-map-types"]
+
+= Authenticator map types
+
+{PlatformNameShort} supports the following rule types:
+
+Allow:: Determine if the user is allowed to log into the system.
+Organization:: Determine if a user should be put into an organization.
+Team:: Determine if the user should be a member of a team.
+Role:: Determine if the user is a member of a role (for example, _System Auditor_).
+Is Superuser:: Determine if the user is a superuser in the system.
+
+These authentication map types can be used with any type of authenticator.
diff --git a/downstream/modules/platform/con-gw-centralized-redis.adoc b/downstream/modules/platform/con-gw-centralized-redis.adoc
new file mode 100644
index 0000000000..bc45bce00e
--- /dev/null
+++ b/downstream/modules/platform/con-gw-centralized-redis.adoc
@@ -0,0 +1,7 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="gw-centralized-redis_{context}"]
+
+= Centralized Redis
+
+{PlatformNameShort} offers a centralized Redis instance in both xref:gw-single-node-redis_planning[standalone] and xref:gw-clustered-redis_planning[clustered] topologies. This enables resiliency by providing consistent performance and reliability.
\ No newline at end of file
diff --git a/downstream/modules/platform/con-gw-clustered-redis.adoc b/downstream/modules/platform/con-gw-clustered-redis.adoc
new file mode 100644
index 0000000000..f56db6b267
--- /dev/null
+++ b/downstream/modules/platform/con-gw-clustered-redis.adoc
@@ -0,0 +1,30 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="gw-clustered-redis_{context}"]
+
+= Clustered Redis
+
+With clustered Redis, data is automatically partitioned over multiple nodes to provide performance stability, and nodes are assigned as replicas to provide reliability. Clustered Redis, shared between the {Gateway} and {EDAName}, is provided by default when installing {PlatformNameShort} in containerized and operator-based deployments.
+
+[NOTE]
+====
+Six VMs are required for a Redis high availability (HA) compatible deployment. In RPM deployments, Redis can be colocated on each {PlatformNameShort} component VM except for {ControllerName}, execution nodes, or the PostgreSQL database. In containerized deployments, Redis can be colocated on any {PlatformNameShort} component VMs of your choice except for execution nodes or the PostgreSQL database. See link:{LinkTopologies} for the opinionated deployment options available.
+====
+
+A cluster contains three primary nodes, and each primary node has a replica node.
+
+If a primary instance becomes unavailable due to failures, the other primary nodes initiate a failover to promote a replica node to a primary node.
+
+image::gw-clustered-redis.png[Clustered Redis deployment]
+
+The benefits of deploying clustered Redis over standalone Redis include the following:
+
+* Data is automatically split across multiple nodes.
+* Data can be dynamically adjusted.
+* Automatic failover of the primary nodes is initiated during system failures.
+
+Therefore, if you need data scalability and automatic failover, deploy {PlatformNameShort} with clustered Redis. For more information about scalability with Redis, refer to link:https://redis.io/docs/latest/operate/oss_and_stack/management/scaling/[Scale with Redis Cluster] in the Redis product documentation.
+
+For information on deploying {PlatformNameShort} with clustered Redis, refer to the link:{LinkInstallationGuide}, link:{LinkContainerizedInstall}, and link:{LinkOperatorInstallation} guides.
+
+include::../aap-common/external-site-disclaimer.adoc[]
\ No newline at end of file
diff --git a/downstream/modules/platform/con-gw-create-authentication.adoc b/downstream/modules/platform/con-gw-create-authentication.adoc
new file mode 100644
index 0000000000..3ffaa1e018
--- /dev/null
+++ b/downstream/modules/platform/con-gw-create-authentication.adoc
@@ -0,0 +1,15 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="gw-create-authentication"]
+
+= Creating an authentication method
+
+Creating an authenticator involves the following procedures:
+
+* xref:gw-select-auth-type[Selecting an authentication type], where you select the type of authenticator plugin you want to configure, including the authentication details for the authentication type selected.
+* xref:gw-define-rules-triggers[Mapping], where you define mapping rule types and triggers to control access to the system, and xref:gw-adjust-mapping-order[Mapping order], where you can define the mapping precedence.
++
+[NOTE]
+====
+Mapping order is only available if you have defined one or more authenticator maps.
+====
\ No newline at end of file
diff --git a/downstream/modules/platform/con-gw-overview-access-auth.adoc b/downstream/modules/platform/con-gw-overview-access-auth.adoc
new file mode 100644
index 0000000000..d0cd4c934b
--- /dev/null
+++ b/downstream/modules/platform/con-gw-overview-access-auth.adoc
@@ -0,0 +1,19 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="gw-overview-access-auth"]
+
+= Overview of access management and authentication
+
+{PlatformNameShort} features a platform interface where you can set up centralized authentication, configure access management, and configure global and system level settings from a single location.
+
+The first time you log in to the {PlatformNameShort}, you must enter your subscription information to activate the platform. For more information about licensing and subscriptions, refer to xref:assembly-gateway-licensing[Managing {PlatformNameShort} licensing, updates and support].
+
+A system administrator can configure access, permissions, and system settings through the following tasks:
+
+* xref:gw-configure-authentication[Configuring authentication in the {PlatformNameShort}], where you set up simplified login for users by selecting from several available authentication methods, and define permissions and assign them to users with authenticator maps.
+
+* xref:gw-token-based-authentication[Configuring access to external applications with token-based authentication], where you can configure authentication of third-party tools and services with the platform through integrated OAuth 2 token support.
+
+* xref:gw-managing-access[Managing access with role based access control], where you configure user access based on their role within a platform organization.
+
+* xref:assembly-gw-settings[Configuring {PlatformNameShort}], where you can configure global and system level settings for the platform and services.
\ No newline at end of file
diff --git a/downstream/modules/platform/con-gw-pluggable-authentication.adoc b/downstream/modules/platform/con-gw-pluggable-authentication.adoc
new file mode 100644
index 0000000000..75f68f8961
--- /dev/null
+++ b/downstream/modules/platform/con-gw-pluggable-authentication.adoc
@@ -0,0 +1,17 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="gw-pluggable-authentication"]
+
+= Pluggable authentication
+
+Authentication is the process of verifying a user's identity to the {PlatformNameShort} (that is, to establish that a user is who they say they are). This can be done in a number of ways, but is traditionally associated with a `username` and `password`.
+
+{PlatformNameShort} {PlatformVers} uses a pluggable authentication system with a configuration wizard that provides a common, simplified method of configuring different types of authenticators such as LDAP and SAML. The pluggable system also allows you to configure multiple authenticators of the same type.
+
+The pluggable system involves the following concepts:
+
+Authenticator Plugin:: A plugin allows {PlatformNameShort} to connect to a source system, such as LDAP or SAML. {PlatformNameShort} includes a variety of authenticator plugins. Authenticator plugins are similar to Ansible collections, in that all of the required code is in a package and can be versioned independently if needed.
+
+Authenticator:: An authenticator is an instantiation of an authenticator plugin and allows users from the specified source to log in. For example, the LDAP authenticator plugin defines a required LDAP server setting.
When you instantiate an authenticator from the LDAP authentication plugin, you must provide the authenticator with the LDAP server URL that it needs to connect to.
+
+Authenticator Map:: Authenticator maps are applied to authenticators and tell {PlatformNameShort} what permissions to give a user logging into the system.
\ No newline at end of file
diff --git a/downstream/modules/platform/con-gw-review-mapping-results.adoc b/downstream/modules/platform/con-gw-review-mapping-results.adoc
new file mode 100644
index 0000000000..778194a2f6
--- /dev/null
+++ b/downstream/modules/platform/con-gw-review-mapping-results.adoc
@@ -0,0 +1,8 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="gw-review-mapping-results"]
+
+= Reviewing authenticator map results
+
+As a platform administrator, you can review the authenticator map results through the users page in the API, `api/gateway/v1/users/X`, to see how the maps were evaluated when the user logged in to the platform.
+
diff --git a/downstream/modules/platform/con-gw-single-node-redis.adoc b/downstream/modules/platform/con-gw-single-node-redis.adoc
new file mode 100644
index 0000000000..8c9fb5f964
--- /dev/null
+++ b/downstream/modules/platform/con-gw-single-node-redis.adoc
@@ -0,0 +1,13 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="gw-single-node-redis_{context}"]
+
+//[ddacosta] - changed from single-node to standalone to align with "in" product terminology
+
+= Standalone Redis
+
+Standalone Redis consists of a simple architecture that is easy to deploy and configure.
+
+image::gw-single-node-redis.png[Standalone Redis deployment]
+
+If a resilient solution is not a requirement, deploy {PlatformNameShort} with a standalone Redis.
diff --git a/downstream/modules/platform/con-gw-understanding-authenticator-mapping.adoc b/downstream/modules/platform/con-gw-understanding-authenticator-mapping.adoc
new file mode 100644
index 0000000000..532bff1b2f
--- /dev/null
+++ b/downstream/modules/platform/con-gw-understanding-authenticator-mapping.adoc
@@ -0,0 +1,93 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="gw-understanding-authenticator-mapping"]
+
+= Understanding authenticator mapping
+
+Authentication:: Validates a user’s identity, typically through a username and password or a trust system.
+Authorization:: Determines what a user can do after they are authenticated.
+
+In {PlatformNameShort}, authenticators manage authentication, validating users and returning details such as their username, first name, email, and group memberships (for example, LDAP groups). Authorization comes from the authenticator’s associated maps.
+
+During the authentication process, after a user has authenticated, the authorization system starts with a default set of permissions in memory. The authenticator maps are then processed sequentially, adjusting permissions based on their trigger conditions. When all the authenticator's maps are processed, the in-memory representation of the user's permissions is reconciled with their existing permissions.
+
+For example, here is a simplified in-memory representation of the default permissions:
+
+-----
+Access allowed = True
+Superuser permission = Undefined
+Admin of teams = None
+-----
+
+You might then have maps that are processed in the following order:
+
+. *Allow* rule set to never
+. *Allow* rule based on group
+. *Superuser* rule based on user attributes
+. *Team* admin rule based on user group
+
+The first *Allow* map, set to never, denies access to the system, and the in-memory representation looks like this:
+
+[subs=+quotes]
+-----
+Access allowed = *False*
+Superuser permission = Undefined
+Admin of teams = None
+-----
+
+However, if the user matches the second *Allow* map (the group-based allow), the permissions change to the following:
+
+[subs=+quotes]
+-----
+Access allowed = *True*
+Superuser permission = Undefined
+Admin of teams = None
+-----
+
+The user is subsequently granted access to {PlatformNameShort} because they have the required groups.
+
+Next, the *Superuser* map checks user attributes. If no match is found, it does not revoke existing permissions by default. Therefore, the permissions remain the same as the results from the previous map:
+
+[subs=+quotes]
+-----
+Access allowed = True
+Superuser permission = *Skipped*
+Admin of teams = None
+-----
+
+To revoke superuser access, you can select the *Revoke* option on the *Superuser* map. That way, when the user does not meet the attribute criteria, the permissions update to False, as in the following:
+
+[subs=+quotes]
+-----
+Access allowed = True
+Superuser permission = *False*
+Admin of teams = None
+-----
+
+The final *Team* map checks the user’s groups coming from the authenticator for admin access on the team “My Team”. If the user has the required group, the permissions update to the following:
+
+[subs=+quotes]
+-----
+Access allowed = True
+Superuser permission = False
+Admin of teams = “*My Team*”
+-----
+
+If the user lacks the required group, permissions remain unchanged unless the *Revoke* option has been selected on the map, in which case permissions update to the following:
+
+[subs=+quotes]
+-----
+Access allowed = True
+Superuser permission = False
+Admin of teams = *Revoke admin of “My Team”*
+-----
+
+After processing all maps in the order defined, the final permissions are reconciled, updating the user’s access based on the map rules.
+
+In summary, authenticators validate users and delegate system authorization to the authenticator maps. Authenticator maps are executed in order, creating an in-memory representation of the user's permissions, which is reconciled with their actual permissions after all maps are executed.
+
+By default, authenticator maps return either *ALLOW* or *SKIPPED*.
+
+ALLOW:: Means that a match is detected and the platform should grant the user access to the corresponding role or permission (such as superuser or team member).
+SKIPPED:: Means that the user did not match the trigger in the map, and the platform skips processing this map and continues to check the remaining maps. This is useful if you want to grant users additional permissions in the system without having to change the authenticator maps.
+
+However, when the *Revoke* option is selected, *SKIPPED* becomes *DENY*, and users who do not meet the required trigger criteria are denied access to the corresponding role or permission. This ensures that only users with matching trigger conditions are granted access.
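+To summarize the walkthrough, the following sketch lists the four example maps in their evaluation order. This is purely illustrative pseudo-configuration, not the actual {Gateway} schema for authenticator maps; the field names, group names, and team name are hypothetical:
+
+[source,yaml]
+----
+# Illustrative only: maps are evaluated top to bottom, and later maps
+# refine the in-memory permissions produced by earlier ones.
+- map_type: allow
+  trigger: never                        # by itself, denies access
+- map_type: allow
+  trigger:
+    groups: ["cn=users,ou=aap"]         # grants access if the user has this group
+- map_type: is_superuser
+  trigger:
+    attributes: {department: "it"}
+  revoke: false                         # no match leaves superuser unchanged
+- map_type: team
+  team: My Team
+  trigger:
+    groups: ["cn=teamadmins,ou=aap"]
+  revoke: true                          # no match revokes admin of "My Team"
+----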
+ diff --git a/downstream/modules/platform/con-ha-hub-installation.adoc b/downstream/modules/platform/con-ha-hub-installation.adoc index 6ef4574657..51805e43ff 100644 --- a/downstream/modules/platform/con-ha-hub-installation.adoc +++ b/downstream/modules/platform/con-ha-hub-installation.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="ha-hub-installation"] = High availability {HubName} @@ -5,7 +7,7 @@ Use the following examples to populate the inventory file to install a highly available {HubName}. This inventory file includes a highly available {HubName} with a clustered setup. //dcdacosta - include a link to the RHSSO content once it's added. -You can configure your HA deployment further to implement {RHSSO} and enable a xref:proc-install-ha-hub-selinux[high availability deployment of {HubName} on SELinux]. +You can configure your HA deployment further to enable a xref:proc-install-ha-hub-selinux[high availability deployment of {HubName} on SELinux]. .Specify database host IP @@ -26,9 +28,9 @@ automationhub_pg_port=5432 * If installing a clustered setup, replace `localhost ansible_connection=local` in the [automationhub] section with the hostname or IP of all instances. For example: ----- [automationhub] -automationhub1.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.18 -automationhub2.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.20 -automationhub3.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.22 +automationhub1.testing.ansible.com ansible_user=cloud-user +automationhub2.testing.ansible.com ansible_user=cloud-user +automationhub3.testing.ansible.com ansible_user=cloud-user ----- [role="_additional-resources"] diff --git a/downstream/modules/platform/con-host-metrics-dashboard.adoc b/downstream/modules/platform/con-host-metrics-dashboard.adoc new file mode 100644 index 0000000000..fad20cd9f6 --- /dev/null +++ b/downstream/modules/platform/con-host-metrics-dashboard.adoc @@ -0,0 +1,18 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-host-metrics-dashboard_{context}"] + += Host Metrics dashboard + +To view your host metrics, in the navigation pane, select {MenuAAHostMetrics}. + +The Host Metrics dashboard provides high level automation run details per managed host, including: + +* The first and last time a host was automated. +* The number of times automation has been run or attempted to be run against a host. +* The number of times a managed host has been deleted. + +The ability to view the number of times automation has been run on hosts enables you to: + +* View your most commonly automated hosts. +* More accurately reflect the scope of your automation landscape. diff --git a/downstream/modules/platform/con-host-metrics-subscriptions.adoc b/downstream/modules/platform/con-host-metrics-subscriptions.adoc new file mode 100644 index 0000000000..91bfc80b7a --- /dev/null +++ b/downstream/modules/platform/con-host-metrics-subscriptions.adoc @@ -0,0 +1,7 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-host-metrics-subscriptions_{context}"] + += Host metrics and subscription + +Host metrics can be used to accurately count node usage and ensure subscription compliance. For example, if a host is no longer in use or otherwise should not be counted towards the subscription total, it can be soft-deleted. 
diff --git a/downstream/modules/platform/con-hs-eda-controller.adoc b/downstream/modules/platform/con-hs-eda-controller.adoc
new file mode 100644
index 0000000000..b2fcba20d9
--- /dev/null
+++ b/downstream/modules/platform/con-hs-eda-controller.adoc
@@ -0,0 +1,40 @@
+:_mod-docs-content-type: CONCEPT
+[id="con-hs-eda-controller"]
+
+= Horizontal scaling in {EDAcontroller}
+
+With {EDAcontroller}, you can set up horizontal scaling for your event-driven automation. This multi-node deployment enables you to define as many nodes as you prefer during the installation process. You can also increase or decrease the number of nodes at any time according to your organizational needs.
+
+The following node types are used in this deployment:
+
+API node type:: Responds to the HTTP REST API of {EDAcontroller}.
+Worker node type:: Runs an {EDAName} worker, which is the component of {EDAName} that not only manages projects and activations, but also executes the activations themselves.
+Hybrid node type:: Is a combination of the API node and the worker node.
+
+// This content is used in RPM installation
ifdef::aap-install[]
+The following example shows how you can set up an inventory file for horizontal scaling of {EDAcontroller} on {RHEL} VMs using the host group name `[automationedacontroller]` and the node type variable `eda_node_type`:
+
+-----
+[automationedacontroller]
+
+3.88.116.111 routable_hostname=automationedacontroller-api.example.com eda_node_type=api
+
+# worker node
+3.88.116.112 routable_hostname=automationedacontroller-worker.example.com eda_node_type=worker
+-----
+endif::aap-install[]
+
+// This content is used in Containerized installation
ifdef::container-install[]
+The following example shows how you can set up an inventory file for horizontal scaling of {EDAcontroller} on {RHEL} VMs using the host group name `[automationeda]` and the node type variable `eda_type`:
+
+-----
+[automationeda]
+
+3.88.116.111 routable_hostname=automationeda-api.example.com eda_type=api
+
+# worker node
+3.88.116.112 routable_hostname=automationeda-worker.example.com eda_type=worker
+-----
+endif::container-install[]
diff --git a/downstream/modules/platform/con-hs-eda-sizing-scaling.adoc b/downstream/modules/platform/con-hs-eda-sizing-scaling.adoc
new file mode 100644
index 0000000000..1bdfc0308e
--- /dev/null
+++ b/downstream/modules/platform/con-hs-eda-sizing-scaling.adoc
@@ -0,0 +1,11 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="con-hs-eda-sizing-scaling"]
+
+= Sizing and scaling guidelines
+
+API nodes process user requests (interactions with the UI or API), while worker nodes process the activations and other background tasks required for {EDAName} to function properly. The number of API nodes you require correlates to the required number of users of the application, and the number of worker nodes correlates to the required number of activations you want to run.
+
+Since activations are variable and controlled by worker nodes, the supported approach for scaling is to use separate API and worker nodes instead of hybrid nodes due to the efficient allocation of hardware resources by worker nodes. By separating the nodes, you can scale each type independently based on specific needs, leading to better resource utilization and cost efficiency.
+
+For example, you might consider scaling up your node deployment when you want to deploy {EDAName} for a small group of users who will run a large number of activations.
In this case, one API node is adequate, but if you require more, you can scale up to three additional worker nodes.
diff --git a/downstream/modules/platform/con-install-mesh.adoc b/downstream/modules/platform/con-install-mesh.adoc
index e8b17a463d..ac1260a665 100644
--- a/downstream/modules/platform/con-install-mesh.adoc
+++ b/downstream/modules/platform/con-install-mesh.adoc
@@ -1,11 +1,13 @@
+:_mod-docs-content-type: CONCEPT
+
 [id="install-mesh_{context}"]
 
 = {AutomationMesh} Installation
 
-You use the {PlatformNameShort} installation program to set up {AutomationMesh} or to upgrade to {AutomationMesh}.
-To provide {PlatformNameShort} with details about the nodes, groups, and peer relationships in your mesh network, you define them in an the `inventory` file in the installer bundle.
+For a VM-based install of {PlatformNameShort}, you use the installation program to set up {AutomationMesh} or to upgrade to {AutomationMesh}.
+To provide {PlatformNameShort} with details about the nodes, groups, and peer relationships in your mesh network, you define them in the `inventory` file in the installer bundle. For managed cloud, OpenShift, or operator environments, see link:{URLOperatorMesh}/index[{TitleOperatorMesh}].
 
 [role="_additional-resources"]
 .Additional Resources
-* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/index[{PlatformName} Installation Guide]
-* <>
+* link:{URLInstallationGuide}/index[{TitleInstallationGuide}]
+* link:{URLAutomationMesh}/design-patterns[{AutomationMeshStart} design patterns]
diff --git a/downstream/modules/platform/con-install-scenario-examples.adoc b/downstream/modules/platform/con-install-scenario-examples.adoc
index 66ad30c970..fa508fa60d 100644
--- a/downstream/modules/platform/con-install-scenario-examples.adoc
+++ b/downstream/modules/platform/con-install-scenario-examples.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: CONCEPT
+
 [id="con-install-scenario-examples"]
 
 = Inventory file examples based on installation scenarios
diff --git a/downstream/modules/platform/con-install-scenario-recommendations.adoc b/downstream/modules/platform/con-install-scenario-recommendations.adoc
index 578daf6b11..df86d3cbf3 100644
--- a/downstream/modules/platform/con-install-scenario-recommendations.adoc
+++ b/downstream/modules/platform/con-install-scenario-recommendations.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: CONCEPT
+
 [id="con-install-scenario-recommendations"]
 
 = Inventory file recommendations based on installation scenarios
 
@@ -5,14 +7,11 @@
 [role="_abstract"]
 Before selecting your installation method for {PlatformNameShort}, review the following recommendations. Familiarity with these recommendations will streamline the installation process.
 
-* For {PlatformName} or {HubName}: Add an {HubName} host in the `[automationhub]` group.
 // Removed for AAP-20847 and until such time as a decision is made regarding database support.
 //* Internal databases `[database]` are not supported. See the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/containerized_ansible_automation_platform_installation_guide/index[Containerized {PlatformName} Installation Guide] for further information on using the containerized installer for environments requiring a monolithc deployment.
-* Do not install {ControllerName} and {HubName} on the same node for versions of {PlatformNameShort} in a production or customer environment.
-This can cause contention issues and heavy resource use.
-* Provide a reachable IP address or fully qualified domain name (FQDN) for the `[automationhub]` and `[automationcontroller]` hosts to ensure users can sync and install content from {HubName} from a different node. +* Provide a reachable IP address or fully qualified domain name (FQDN) for hosts to ensure users can sync and install content from {HubName} from a different node. + -The FQDN must not contain either the `-` or the `_` symbols, as it will not be processed correctly. +The FQDN must not contain either the `-` or the `_` symbols, as it will not be processed correctly. + Do not use `localhost`. * `admin` is the default user ID for the initial log in to {PlatformNameShort} and cannot be changed in the inventory file. diff --git a/downstream/modules/platform/con-installer-generated-certs.adoc b/downstream/modules/platform/con-installer-generated-certs.adoc new file mode 100644 index 0000000000..5713f0ef99 --- /dev/null +++ b/downstream/modules/platform/con-installer-generated-certs.adoc @@ -0,0 +1,8 @@ +:_mod-docs-content-type: CONCEPT + +[id="installer-generated-certificates"] += {PlatformNameShort} generated certificates + +By default, the installation program creates a self-signed Certificate Authority (CA) and uses it to generate self-signed TLS certificates for all {PlatformNameShort} services. The self-signed CA certificate and key are generated on one node under the `~/aap/tls/` directory and copied to the same location on all other nodes. This CA is valid for 10 years after the initial creation date. + +Self-signed certificates are not part of any public chain of trust. The installation program creates a certificate truststore that includes the self-signed CA certificate under `~/aap/tls/extracted/` and bind-mounts that directory to each {PlatformNameShort} service container under `/etc/pki/ca-trust/extracted/`. This allows each {PlatformNameShort} component to validate the self-signed certificates of the other {PlatformNameShort} services. The CA certificate can also be added to the truststore of other systems or browsers as needed. diff --git a/downstream/modules/platform/con-inventory-variables-intro.adoc b/downstream/modules/platform/con-inventory-variables-intro.adoc index be855a2a97..b1e4f42625 100644 --- a/downstream/modules/platform/con-inventory-variables-intro.adoc +++ b/downstream/modules/platform/con-inventory-variables-intro.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-inventory-variables-intro_{context}"] = Inventory variables diff --git a/downstream/modules/platform/con-known-proxies.adoc b/downstream/modules/platform/con-known-proxies.adoc index 052f2c8722..a0aa0ecf94 100644 --- a/downstream/modules/platform/con-known-proxies.adoc +++ b/downstream/modules/platform/con-known-proxies.adoc @@ -1,17 +1,26 @@ +:_mod-docs-content-type: CONCEPT [id="con-known-proxies_{context}"] = Known proxies - [role="_abstract"] When {ControllerName} is configured with `REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST']`, it assumes that the value of `X-Forwarded-For` has originated from the proxy/load balancer sitting in front of {ControllerName}. If {ControllerName} is reachable without use of the proxy/load balancer, or if the proxy does not validate the header, the value of `X-Forwarded-For` can be falsified to fake the originating IP addresses. + Using `HTTP_X_FORWARDED_FOR` in the `REMOTE_HOST_HEADERS` setting poses a vulnerability. 
-To avoid this, you can configure a list of known proxies that are allowed using the *PROXY_IP_ALLOWED_LIST* field in the settings menu on your {ControllerName}.
-Load balancers and hosts that are not on the known proxies list will result in a rejected request.
+To avoid this, you can configure a list of known proxies that are allowed.
+
+.Procedure
+. From the navigation panel, select {MenuSetSystem}.
+. In the *Proxy IP Allowed List* field, enter a list of proxy IP addresses from which the service should trust custom remote header values.
++
+[NOTE]
+====
+Load balancers and hosts that are not on the known proxies list result in a rejected request.
+====
 
//.Example vulnerabilities:
//
diff --git a/downstream/modules/platform/con-loading-impacts-grpc-settings.adoc b/downstream/modules/platform/con-loading-impacts-grpc-settings.adoc
new file mode 100644
index 0000000000..96d891f9ab
--- /dev/null
+++ b/downstream/modules/platform/con-loading-impacts-grpc-settings.adoc
@@ -0,0 +1,11 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="loading-impacts-grpc-settings-py_{context}"]
+
+= Impacts of modifying the `grpc_settings.py` file
+
+[role="_abstract"]
+The gRPC server is responsible for helping with authentication between the different platform services. Altering the settings in the `grpc_settings.py` file can significantly impact the behavior and performance of the gRPC connection, especially in terms of connection stability.
+
+It is important to thoroughly test any changes made to the gRPC settings before deploying them to production to ensure that the gRPC server functions as expected.
+
diff --git a/downstream/modules/platform/con-loading-order-grpc-settings.adoc b/downstream/modules/platform/con-loading-order-grpc-settings.adoc
new file mode 100644
index 0000000000..af64140212
--- /dev/null
+++ b/downstream/modules/platform/con-loading-order-grpc-settings.adoc
@@ -0,0 +1,16 @@
+:_mod-docs-content-type: CONCEPT
+
+[id="loading-order-grpc-settings-py_{context}"]
+
+= Loading order of settings
+
+[role="_abstract"]
+The {Gateway} uses the Dynaconf library for managing its application settings. Dynaconf follows a layered configuration approach, where settings are loaded from multiple sources in a defined order, with later sources overriding earlier ones. {PlatformNameShort} loads settings in the following sequence:
+
+. *Application settings.py:* This file is in the application itself and defines the loading order and location of additional settings files.
+. *Application default settings:* The platform loads default settings from a `defaults.py` file, which is part of the application itself. This file includes general configurations for both the API Server and the gRPC server.
+. *Customer override file:* The `/etc/ansible-automation-platform/gateway/settings.py` file is automatically installed and can be used to override any configuration in `defaults.py`. Changes to this file affect both the API and gRPC servers.
+. *Application gRPC default settings:* After the customer override file, the application loads additional default settings for the gRPC server only from the `grpc_default.py` file. Specifically, this file includes database OPTIONS for the gRPC server, such as the keepalive parameters.
+. *Customer gRPC override file:* The file `/etc/ansible-automation-platform/gateway/grpc_settings.py`, if present, is loaded next, and any settings contained in this file are applied only to the gRPC server.
*Platform override settings file:* Any settings in the `/etc/ansible-automation-platform/settings.py` file are applied to both the gRPC server and the API server. If there are multiple {PlatformNameShort} services on a single node, items in this file are applied to all services. +. *ENV vars:* Environment variables, where you can configure certain {PlatformNameShort} settings outside of the configuration files, are loaded last. They override any previously loaded settings. diff --git a/downstream/modules/platform/con-ocp-supported-install.adoc b/downstream/modules/platform/con-ocp-supported-install.adoc index 53c4b4d571..07396dcb36 100644 --- a/downstream/modules/platform/con-ocp-supported-install.adoc +++ b/downstream/modules/platform/con-ocp-supported-install.adoc @@ -1,15 +1,41 @@ -[id="con-ocp-supported-install_{context}"] +:_mod-docs-content-type: CONCEPT + +[id="ocp-supported-install_{context}"] = Supported installation scenarios for {OCP} +You can use the OperatorHub on the {OCP} web console to install {OperatorPlatformNameShort}. + +Alternatively, you can install {OperatorPlatformNameShort} from the {OCPShort} command-line interface (CLI), `oc`. See link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#installing-aap-operator-cli_operator-platform-doc[Installing {OperatorPlatformName} from the {OCPShort} CLI] for help with this. + +After you have installed {OperatorPlatformNameShort}, you must create an *{PlatformNameShort}* custom resource (CR). This enables you to manage {PlatformNameShort} components from a single unified interface known as the {Gateway}. As of version 2.5, you must create an {PlatformNameShort} CR, even if you have existing {ControllerName}, {HubName}, or {EDAName} components. + +If existing components have already been deployed, you must specify these components on the {PlatformNameShort} CR. You must create the custom resource in the same namespace as the existing components. + +[cols=2*a,options="header"] +|=== +| *Supported scenarios* | *Supported scenarios with existing components* +| +* {PlatformNameShort} CR for a blank slate install with {ControllerName}, {HubName}, and {EDAName} enabled + +* {PlatformNameShort} CR with just {ControllerName} enabled + +* {PlatformNameShort} CR with just {ControllerName} and {HubName} enabled + +* {PlatformNameShort} CR with just {ControllerName} and {EDAName} enabled + | + * {PlatformNameShort} CR created in the same namespace as an existing {ControllerName} CR with the {ControllerName} name specified on the {PlatformNameShort} CR spec + +* Same with {ControllerName} and {HubName} -You can use the OperatorHub on the {OCP} web console to install {OperatorPlatform}. +* Same with {ControllerName}, {HubName}, and {EDAName} -Alternatively, you can install {OperatorPlatform} from the {OCPShort} command-line interface (CLI), `oc`. +* Same with {ControllerName} and {EDAName} +|=== -Follow one of the workflows below to install the {OperatorPlatform} and use it to install the components of {PlatformNameShort} that you require. -* {ControllerNameStart} custom resources first, then {HubName} custom resources; -* {HubNameStart} custom resources first, then {ControllerName} custom resources; -* {ControllerNameStart} custom resources; -* {HubNameStart} custom resources. +//Commenting out as upgrade is not included in EA [gmurray] +//[NOTE] +//==== +//The stand-alone EDA user interface will not work upon upgrade. 
After you configure {PlatformNameShort}, other stand-alone user interfaces will not work. +//==== diff --git a/downstream/modules/platform/con-operator-additional-resources.adoc b/downstream/modules/platform/con-operator-additional-resources.adoc index c1dbdcf3c5..4872c914e8 100644 --- a/downstream/modules/platform/con-operator-additional-resources.adoc +++ b/downstream/modules/platform/con-operator-additional-resources.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-operator-additional-resources_{context}"] = Additional resources diff --git a/downstream/modules/platform/con-operator-channel-upgrade.adoc b/downstream/modules/platform/con-operator-channel-upgrade.adoc new file mode 100644 index 0000000000..0a8461922c --- /dev/null +++ b/downstream/modules/platform/con-operator-channel-upgrade.adoc @@ -0,0 +1,52 @@ +:_mod-docs-content-type: CONCEPT + +[id="operator-channel-upgrade_{context}"] + += Channel upgrades + +Upgrading to version 2.5 from {PlatformNameShort} 2.4 involves retrieving updates from a "channel". +A channel is the location from which you access your updates; you can view and change channels from the OpenShift console UI. + +image:change_subscription.png[Update channel] + +== In-channel upgrades + +Most upgrades occur within a channel as follows: + +. A new update becomes available in the marketplace, through the `redhat-operators` CatalogSource. +. The system automatically creates a new InstallPlan for your {PlatformNameShort} subscription. +* If set to *Manual*, the InstallPlan needs manual approval in the OpenShift UI. +* If set to *Automatic*, it upgrades as soon as the new version is available. ++ +[NOTE] +==== +Set a manual install strategy on your {OperatorPlatformNameShort} subscription during installation or upgrade. You will be prompted to approve upgrades when available in your chosen update channel. Stable channels, like stable-2.5, are available for each X.Y release. +==== ++ +. A new subscription, CSV, and operator containers are created alongside the old ones. +The old resources are cleaned up after a successful install. + +== Cross-channel upgrades + +Upgrading between X.Y channels is always manual and intentional. +Stable channels for major and minor versions are in the Operator Catalog. +Currently, only version 2.x is available, so there are few channels. +It is recommended to stay on the latest minor version channel for the latest patches. + +If the subscription is set for manual upgrades, you must approve the upgrade in the UI. Then, the system upgrades the Operator to the latest version in that channel. + +[NOTE] +==== +It is recommended to set a manual install strategy on your {OperatorPlatformNameShort} subscription during installation or upgrade. +You will be prompted to approve upgrades when they become available in your chosen update channel. +Stable channels, such as stable-2.5, are available for each X.Y release. +==== + +The containers provided in the latest channel are updated regularly for OS upgrades and critical fixes. This allows customers to receive critical patches and CVE fixes faster. Larger changes and new features are saved for minor and major releases. + +For each major or minor version channel, there is a corresponding "cluster-scoped" channel available. Cluster-scoped channels deploy operators that can manage all namespaces, while non-cluster-scoped channels can only manage resources in their own namespace. + +[IMPORTANT] +==== +Cluster-scoped bundles are not compatible with namespace-scoped bundles. 
Do not try to switch between normal (stable-2.4 for example) channels and cluster-scoped (stable-2.4-cluster-scoped) channels, as this is not supported. +==== \ No newline at end of file diff --git a/downstream/modules/platform/con-operator-csrf-management.adoc b/downstream/modules/platform/con-operator-csrf-management.adoc new file mode 100644 index 0000000000..06e7c51e99 --- /dev/null +++ b/downstream/modules/platform/con-operator-csrf-management.adoc @@ -0,0 +1,47 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-operator-csrf-management_{context}"] + += {OperatorPlatformNameShort} CSRF management + +In {PlatformNameShort} version {PlatformVers}, the {OperatorPlatformNameShort} on {OCPShort} creates OpenShift Routes and configures your cross-site request forgery (CSRF) settings automatically. When using external ingress, you must configure CSRF on the ingress. For help with this, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#proc-operator-config-csrf-gateway_operator-configure-gateway[Configuring your CSRF settings for your {Gateway} operator ingress]. + +[IMPORTANT] +==== +In previous versions, CSRF was configurable through the {ControllerName} user interface. In version {PlatformVers}, {ControllerName} settings are still present but have no impact on CSRF settings for the {Gateway}. +==== + +The following table clarifies which settings apply to which component. + +[cols=2*a,options="header"] +|=== +| *UI setting* | *Applicable for* +| +Subscription +| +{ControllerName} +| +{Gateway} +| +{Gateway} +| +User Preferences +| +User interface +| +System +| +{ControllerNameStart} +| +Job +| +{ControllerNameStart} +| +Logging +| +{ControllerNameStart} +| +Troubleshooting +| +{ControllerNameStart} +|=== \ No newline at end of file diff --git a/downstream/modules/platform/con-operator-custom-resources.adoc b/downstream/modules/platform/con-operator-custom-resources.adoc index a4c3a41dac..43f09d9fd1 100644 --- a/downstream/modules/platform/con-operator-custom-resources.adoc +++ b/downstream/modules/platform/con-operator-custom-resources.adoc @@ -1,5 +1,17 @@ +:_mod-docs-content-type: CONCEPT + [id="con-operator-custom-resources_{context}"] = Custom resources You can define custom resources for each primary installation workflow. + +//[Jameria] Moved this topic from supported installation section to custom resources since that's what the cross-referenced topic links to in the appendix (Custom resources appendix) +== Modifying the number of simultaneous rulebook activations during or after {EDAcontroller} installation + +* If you plan to install {EDAName} on {OCPShort} and modify the number of simultaneous rulebook activations, add the required `EDA_MAX_RUNNING_ACTIVATIONS` parameter to your custom resources. By default, {EDAcontroller} allows 12 activations per node to run simultaneously. For an example, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#eda_max_running_activations_yml[eda-max-running-activations.yml] in the appendix section. + +[NOTE] +==== +`EDA_MAX_RUNNING_ACTIVATIONS` for {OCPShort} is a global value since there is no concept of worker nodes when installing {EDAName} on {OCPShort}. 
+==== diff --git a/downstream/modules/platform/con-operator-upgrade-considerations.adoc b/downstream/modules/platform/con-operator-upgrade-considerations.adoc index 646d66df6f..88acbd9dad 100644 --- a/downstream/modules/platform/con-operator-upgrade-considerations.adoc +++ b/downstream/modules/platform/con-operator-upgrade-considerations.adoc @@ -1,12 +1,12 @@ +:_mod-docs-content-type: CONCEPT + [id="operator-upgrade-considerations"] = Upgrade considerations +If you are upgrading from version 2.4, continue to link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#upgrading-operator_operator-upgrade[Upgrading the {OperatorPlatformNameShort}]. -[role="_abstract"] -{PlatformName} version 2.0 was the first release of the {OperatorPlatform}. If you are upgrading from version 2.0, continue to the xref:upgrading-operator_operator-upgrade[Upgrading the {OperatorPlatform}] procedure. - -If you are using a version of {OCPShort} that is not supported by the version of {PlatformName} to which you are upgrading, you must upgrade your {OCPShort} cluster to a supported version prior to upgrading. +If your {OCPShort} version is not supported by the {PlatformName} version you are upgrading to, you must upgrade your {OCPShort} cluster to a supported version first. Refer to the link:https://access.redhat.com/support/policy/updates/ansible-automation-platform[Red Hat Ansible Automation Platform Life Cycle] to determine the {OCPShort} version needed. diff --git a/downstream/modules/platform/con-operator-upgrade-overview.adoc b/downstream/modules/platform/con-operator-upgrade-overview.adoc new file mode 100644 index 0000000000..e98a64c0f0 --- /dev/null +++ b/downstream/modules/platform/con-operator-upgrade-overview.adoc @@ -0,0 +1,23 @@ +:_mod-docs-content-type: CONCEPT + +[id="operator-upgrade-overview"] + += Overview + +You can use this document for help with upgrading {PlatformNameShort} 2.4 to 2.5 on {OCP}. +This document also applies to upgrades of {PlatformNameShort} 2.5 to later versions of 2.5. + +The {OperatorPlatformNameShort} manages deployments, upgrades, backups, and restores of {ControllerName} and {HubName}. +It also handles deployments of AnsibleJob and JobTemplate resources from the {PlatformNameShort} Resource Operator. + +Each operator version has default {ControllerName} and {HubName} versions. +When the operator is upgraded, it also upgrades the {ControllerName} and {HubName} deployments it manages, unless overridden in the spec. + +OpenShift deployments of {PlatformNameShort} use the built-in Operator Lifecycle Manager (OLM) functionality. +For more information, see link:https://docs.openshift.com/container-platform/4.16/operators/understanding/olm/olm-understanding-olm.html[Operator Lifecycle Manager concepts and resources]. +OpenShift does this by using Subscription, CSV, InstallPlan, and OperatorGroup objects. +Most users will not have to interact directly with these resources. +They are created when the {OperatorPlatformNameShort} is installed from *OperatorHub* and managed through the *Subscriptions* tab in the OpenShift console UI. +For more information, refer to link:https://docs.openshift.com/container-platform/4.16/web_console/web-console.html[Accessing the web console].
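+ +For reference, the Subscription object behind that tab might look like the following minimal sketch. This example is illustrative only: the namespace, channel, and approval strategy shown are assumptions for a typical OperatorHub installation, not values taken from your cluster. + +---- +apiVersion: operators.coreos.com/v1alpha1 +kind: Subscription +metadata: +  name: ansible-automation-platform-operator +  namespace: aap # assumed install namespace +spec: +  channel: stable-2.5 # update channel to track +  name: ansible-automation-platform-operator # operator package name +  source: redhat-operators # CatalogSource that provides the package +  sourceNamespace: openshift-marketplace +  installPlanApproval: Manual # require approval before an upgrade is applied +----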
+ +image:Subscription_tab.png[Subscription tab] \ No newline at end of file diff --git a/downstream/modules/platform/con-operator-upgrade-prereq.adoc b/downstream/modules/platform/con-operator-upgrade-prereq.adoc index 60e7b3b0a4..e9bf049c39 100644 --- a/downstream/modules/platform/con-operator-upgrade-prereq.adoc +++ b/downstream/modules/platform/con-operator-upgrade-prereq.adoc @@ -1,11 +1,15 @@ +:_mod-docs-content-type: CONCEPT + [id="operator-upgrade-prereq_{context}"] = Prerequisites -[role="_abstract"] -To upgrade to a newer version of {OperatorPlatform}, it is recommended that you do the following: +To upgrade to a newer version of {OperatorPlatformNameShort}, you must: -* Create AutomationControllerBackup and AutomationHubBackup objects. For help with this see link:{BaseURL}red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_operator_backup_and_recovery_guide/index#aap-backup-recommendations[Creating Red Hat Ansible Automation Platform backup resources] //See (Backup and Restore) for information on creating backup objects. [add link to new backup and restore doc when complete] -* Review the release notes for the new {PlatformNameShort} version to which you are upgrading and any intermediate versions. +* Ensure your system meets the system requirements detailed in the link:{URLTopologies}/ocp-topologies[Operator topologies] section of the _{TitleTopologies}_ guide. +* Create AutomationControllerBackup and AutomationHubBackup objects. +For help with this, see link:{URLOperatorBackup}[{TitleOperatorBackup}]. +* Review the link:{URLReleaseNotes}[{TitleReleaseNotes}] for the new {PlatformNameShort} version to which you are upgrading and any intermediate versions. +* Determine the type of upgrade you want to perform. +See the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#operator-channel-upgrade_operator-upgrade[Channel upgrades] section for more information. diff --git a/downstream/modules/platform/con-overview-automation-mesh.adoc b/downstream/modules/platform/con-overview-automation-mesh.adoc index 5b95b22cc3..fe1e92f191 100644 --- a/downstream/modules/platform/con-overview-automation-mesh.adoc +++ b/downstream/modules/platform/con-overview-automation-mesh.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-overview-automation-mesh_{context}"] = {AutomationMeshStart} diff --git a/downstream/modules/platform/con-pac-policies-rules.adoc b/downstream/modules/platform/con-pac-policies-rules.adoc new file mode 100644 index 0000000000..85078a1c97 --- /dev/null +++ b/downstream/modules/platform/con-pac-policies-rules.adoc @@ -0,0 +1,28 @@ +:_newdoc-version: 2.18.4 +:_template-generated: 2025-05-09 +:_mod-docs-content-type: CONCEPT + +[id="pac-policies-rules_{context}"] += Understanding OPA packages and rules + +An OPA policy is organized in packages, which are namespaced collections of rules. The basic structure of an OPA policy looks like this: + +[source,rego] +---- +package aap_policy_examples # Package name + +import rego.v1 # Import required for Rego v1 syntax + +# Rules define the policy logic +allowed := { + "allowed": true, + "violations": [] +} +---- + +The key components of the rule's structure are: + +Package declaration:: This defines the namespace for your policy. +Rules:: This defines the policy's logic and the decision that it returns. + +These components together comprise the OPA policy name, which is formatted as `[package]/[rule]`. 
You enter the OPA policy name when you configure enforcement points. For the example above, the policy name is `aap_policy_examples/allowed`. diff --git a/downstream/modules/platform/con-pod-specification-mods.adoc b/downstream/modules/platform/con-pod-specification-mods.adoc index fc463828b1..9aa252c450 100644 --- a/downstream/modules/platform/con-pod-specification-mods.adoc +++ b/downstream/modules/platform/con-pod-specification-mods.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-pod-specification-mods_{context}"] = Introduction @@ -103,4 +105,4 @@ In this case, it provides an ephemeral volume for the registry storage and a sec You can change the pod used to run jobs in a Kubernetes-based cluster by using {ControllerName} and editing the pod specification in the {ControllerName} UI. The pod specification that is used to create the pod that runs the job is in YAML format. -For further information about editing the pod specifications, see xref:proc-customizing-pod-specs[Customizing the pod specification]. +For further information about editing the pod specifications, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/performance_considerations_for_operator_environments/index#proc-customizing-pod-specs[Customizing the pod specification]. diff --git a/downstream/modules/platform/con-post-migration-cleanup.adoc b/downstream/modules/platform/con-post-migration-cleanup.adoc new file mode 100644 index 0000000000..f02053580c --- /dev/null +++ b/downstream/modules/platform/con-post-migration-cleanup.adoc @@ -0,0 +1,11 @@ +:_mod-docs-content-type: CONCEPT + +[id="post-migration-cleanup_{context}"] + += Post-migration cleanup + +[role=_abstract] + +After data migration, delete unnecessary instance groups and unlink the old database configuration secret from the {ControllerName} resource definition. + + diff --git a/downstream/modules/platform/con-receptor-cert-considerations.adoc b/downstream/modules/platform/con-receptor-cert-considerations.adoc new file mode 100644 index 0000000000..328172b6e1 --- /dev/null +++ b/downstream/modules/platform/con-receptor-cert-considerations.adoc @@ -0,0 +1,8 @@ +:_mod-docs-content-type: CONCEPT + +[id="receptor-certificate-considerations"] += Receptor certificate considerations + +When using a custom certificate for Receptor nodes, the certificate requires the `otherName` field specified in the Subject Alternative Name (SAN) of the certificate with the value `1.3.6.1.4.1.2312.19.1`. For more information, see link:https://ansible.readthedocs.io/projects/receptor/en/latest/user_guide/tls.html#above-the-mesh-tls[Above the mesh TLS]. + +Receptor does not support wildcard certificates. Additionally, each Receptor certificate must have the host FQDN specified in its SAN for TLS hostname validation to be correctly performed. diff --git a/downstream/modules/platform/con-resource-operator-overview.adoc b/downstream/modules/platform/con-resource-operator-overview.adoc index f3c0312fe6..3767c3e04b 100644 --- a/downstream/modules/platform/con-resource-operator-overview.adoc +++ b/downstream/modules/platform/con-resource-operator-overview.adoc @@ -1,13 +1,24 @@ +:_mod-docs-content-type: CONCEPT + [id="con-controller-resource-operator_{context}"] = {OperatorResourceShort} overview -{OperatorResourceShort} is a custom resource (CR) that you can deploy after you have created your {ControllerName} deployment. - With {OperatorResourceShort} you can define projects, job templates, and inventories through the use of YAML files. 
- These YAML files are then used by {ControllerName} to create these resources. - You can create the YAML through the *Form view* that prompts you for keys and values for your YAML code. - Alternatively, to work with YAML directly, you can select *YAML view*. -There are currently two custom resources provided by the {OperatorResourceShort}: +{OperatorResourceShort} is a custom resource (CR) that you can deploy after you have created your {Gateway} deployment. +With {OperatorResourceShort} you can define resources such as projects, job templates, and inventories in YAML files. +{ControllerName} then uses the YAML files to create these resources. +You can create the YAML through the *Form view* that prompts you for keys and values for your YAML code. +Alternatively, to work with YAML directly, you can select *YAML view*. + +The {OperatorResourceShort} provides the following CRs: + +* AnsibleJob +* JobTemplate +* {ControllerNameStart} project +* {ControllerNameStart} schedule +* {ControllerNameStart} workflow +* {ControllerNameStart} workflow template +* {ControllerNameStart} inventory +* {ControllerNameStart} credential -* AnsibleJob: launches a job in the {ControllerName} instance specified in the Kubernetes secret ({ControllerName} host URL, token). -* JobTemplate: creates a job template in the {ControllerName} instance specified. +For more information on any of the above custom resources, see link:{URLControllerUserGuide}[{TitleControllerUserGuide}]. diff --git a/downstream/modules/platform/con-settings-py.adoc b/downstream/modules/platform/con-settings-py.adoc new file mode 100644 index 0000000000..08ae12afb7 --- /dev/null +++ b/downstream/modules/platform/con-settings-py.adoc @@ -0,0 +1,31 @@ +:_mod-docs-content-type: CONCEPT + +[id="settings-py_{context}"] + += `settings.py` file + +[role="_abstract"] +As a platform administrator, you can modify the `settings.py` file to configure various aspects of {PlatformNameShort}, such as database connections, logging configurations, caching, and more. + +There are two `settings.py` files: the default `settings.py` file, which is part of the codebase and must not be edited, and an override file that you can use to override the default values. + +The location and management of the override `settings.py` file can differ based on your deployment (RPM-based, {ContainerBase}, or {OperatorBase}). + +== RPM deployments + +The override `settings.py` file in an RPM-based setup can be edited directly, and changes take effect after restarting the {Gateway} service. If you choose to edit the file, be sure to use the proper syntax and values. The override `settings.py` file is located in the following directory: + +---- +/etc/ansible-automation-platform/gateway/settings.py +---- + +== Container-based deployments + +For {ContainerBase} deployments, {PlatformNameShort} runs within containers and the `settings.py` file is included inside the container. However, directly editing the `settings.py` file in {ContainerBase} deployments is not recommended because the `settings.py` file is overwritten during upgrades or new installs. + +To customize settings in {ContainerBase} deployments, you can use the `extra_settings` parameter to ensure that customizations persist through installer updates. For more information, see link:{URLContainerizedInstall}/appendix-inventory-files-vars[Inventory file variables] in the {TitleContainerizedInstall} guide. 
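+ +For illustration only, such a customization might look like the following minimal sketch. This is not a definitive reference: it assumes a YAML-format inventory and the `setting`/`value` pair form used by the operator-based `extra_settings` field, and `SESSION_COOKIE_AGE` is just a placeholder setting name. Confirm the exact form in the linked variable reference. + +---- +all: +  vars: +    # Hypothetical override: shorten the session cookie age (in seconds). +    extra_settings: +      - setting: SESSION_COOKIE_AGE +        value: "1800" +----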
+ +== Operator-based deployments + +For {OperatorBase} deployments, the `settings.py` file is typically located inside the container. However, you cannot modify the `settings.py` file directly in the container because containers in {OCP} are read-only. + +Instead, for operator-based deployments, you can modify the settings for the {Gateway} by using the `spec.extra_settings` parameter on the {PlatformNameShort} custom resource. diff --git a/downstream/modules/platform/con-soft-deletion.adoc b/downstream/modules/platform/con-soft-deletion.adoc new file mode 100644 index 0000000000..876eaddf1c --- /dev/null +++ b/downstream/modules/platform/con-soft-deletion.adoc @@ -0,0 +1,16 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-soft-deletion_{context}"] + += Soft deletion + +Soft deletion enables the removal of decommissioned hosts from the Host Metrics view and excludes them from any potential managed node counting. +Additionally, the following host types can also be deleted: + +* Ephemeral, uniquely provisioned hosts, such as those used for CI/CD or testing only. +* Bench provisioning or temporary hosts. +* Older hosts that have been decommissioned and are never automated against. + +Soft deletion is intended for legitimate use case scenarios only. +It must not be used to violate subscription counting, for example, for the purposes of node recycling. +For more information, see link:https://access.redhat.com/articles/3331481[How are "managed nodes" defined as part of the {PlatformName} offering]? diff --git a/downstream/modules/platform/con-sticky-sessions.adoc b/downstream/modules/platform/con-sticky-sessions.adoc index da473b1216..10526f9f37 100644 --- a/downstream/modules/platform/con-sticky-sessions.adoc +++ b/downstream/modules/platform/con-sticky-sessions.adoc @@ -1,6 +1,11 @@ +:_mod-docs-content-type: CONCEPT + [id="con-sticky-sessions_{context}"] = Enable sticky sessions [role="_abstract"] -By default, an Application Load Balancer routes each request independently to a registered target based on the chosen load-balancing algorithm. To avoid authentication errors when running multiple instances of {HubName} behind a load balancer, you must enable sticky sessions. Enabling sticky sessions sets a custom application cookie that matches the cookie configured on the load balancer to enable stickiness. This custom cookie can include any of the cookie attributes required by the application. +By default, an application load balancer routes each request independently to a registered target based on the chosen load-balancing algorithm. +To avoid authentication errors when running multiple instances of {HubName} behind a load balancer, you must enable sticky sessions. +Enabling sticky sessions sets a custom application cookie that matches the cookie configured on the load balancer to enable stickiness. +This custom cookie can include any of the cookie attributes required by the application. 
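+ +For example, if the load balancer is managed through the AWS Load Balancer Controller, application-cookie stickiness can be enabled with target group attributes. The following Ingress metadata is a sketch only: the cookie name `gateway_sessionid` is an assumption and must match the session cookie that your deployment actually sets. + +---- +metadata: +  annotations: +    # Hypothetical sketch: enable app-cookie stickiness on the ALB target group. +    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.type=app_cookie,stickiness.app_cookie.cookie_name=gateway_sessionid,stickiness.app_cookie.duration_seconds=3600 +----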
diff --git a/downstream/modules/platform/con-storage-options-for-operator-installation-on-ocp.adoc b/downstream/modules/platform/con-storage-options-for-operator-installation-on-ocp.adoc index 09bee5434a..4069a11a8c 100644 --- a/downstream/modules/platform/con-storage-options-for-operator-installation-on-ocp.adoc +++ b/downstream/modules/platform/con-storage-options-for-operator-installation-on-ocp.adoc @@ -1,11 +1,13 @@ +:_mod-docs-content-type: CONCEPT + [id="con-storage-options-for-operator-installation-on-ocp_{context}"] -= Storage options for {OperatorPlatform} installation on {OCP} += Storage options for {OperatorPlatformNameShort} installation on {OCP} {HubNameStart} requires `ReadWriteMany` file-based storage, Azure Blob storage, or Amazon S3-compliant storage for operation so that multiple pods can access shared content, such as collections. The process for configuring object storage on the `AutomationHub` CR is similar for Amazon S3 and Azure Blob Storage. -If you are using file-based storage and your installation scenario includes {HubName}, ensure that the storage option for {OperatorPlatform} is set to `ReadWriteMany`. +If you are using file-based storage and your installation scenario includes {HubName}, ensure that the storage option for {OperatorPlatformNameShort} is set to `ReadWriteMany`. `ReadWriteMany` is the default storage option. In addition, {ODFShort} provides a `ReadWriteMany` or S3-compliant implementation. Also, you can set up NFS storage configuration to support `ReadWriteMany`. This, however, introduces the NFS server as a potential single point of failure. @@ -14,5 +16,5 @@ In addition, {ODFShort} provides a `ReadWriteMany` or S3-compliant implementatio [role="_additional-resources"] .Additional resources -* link:https://docs.openshift.com/container-platform/{OCPLatest}/storage/persistent_storage/persistent-storage-nfs.html[Persistent storage using NFS] in the {OCPShort} _Storage_ guide -* IBM's link:https://www.ibm.com/support/pages/how-do-i-create-storage-class-nfs-dynamic-storage-provisioning-openshift-environment[How do I create a storage class for NFS dynamic storage provisioning in an OpenShift environment?] +* link:https://docs.openshift.com/container-platform/{OCPLatest}/storage/persistent_storage/persistent-storage-nfs.html[Persistent storage using NFS] +* link:https://www.ibm.com/support/pages/how-do-i-create-storage-class-nfs-dynamic-storage-provisioning-openshift-environment[How do I create a storage class for NFS dynamic storage provisioning in an OpenShift environment?] diff --git a/downstream/modules/platform/con-update-planning.adoc b/downstream/modules/platform/con-update-planning.adoc new file mode 100644 index 0000000000..5f8108b531 --- /dev/null +++ b/downstream/modules/platform/con-update-planning.adoc @@ -0,0 +1,12 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-update-planning"] += Update planning + +Before you begin the update process, review the following considerations to plan and prepare your {PlatformNameShort} deployment: + +* Even if you have a valid license from an earlier version, you must provide your credentials or a subscription manifest upon upgrading to the latest version of {PlatformNameShort}. For more information, see link:{URLCentralAuth}/assembly-gateway-licensing#proc-attaching-subscriptions[Attaching your {PlatformName} subscription] in _{TitleCentralAuth}_. + +* Clustered upgrades require special attention to instances and instance groups before upgrading. 
Ensure you capture your inventory or instance group details before upgrading. For more information, see link:{URLControllerAdminGuide}/controller-clustering[Clustering] in _{TitleControllerAdminGuide}_. + +* If you are currently running {EDAcontroller}, disable all rulebook activations before upgrading to ensure that only new activations run after the upgrade process has completed. This prevents the possibility of orphaned containers running activations from the earlier version. For more information, see link:{URLEDAUserGuide}/eda-rulebook-activations#eda-enable-rulebook-activations[Enabling and disabling rulebook activations] in _{TitleEDAUserGuide}_. \ No newline at end of file diff --git a/downstream/modules/platform/con-user-data-tracking.adoc b/downstream/modules/platform/con-user-data-tracking.adoc index f5ddc54fca..75fd829c6e 100644 --- a/downstream/modules/platform/con-user-data-tracking.adoc +++ b/downstream/modules/platform/con-user-data-tracking.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + [id="con-usability-analytics_{context}"] = Usability analytics and data collection diff --git a/downstream/modules/platform/con-using-chatbot.adoc b/downstream/modules/platform/con-using-chatbot.adoc new file mode 100644 index 0000000000..c7b381e71a --- /dev/null +++ b/downstream/modules/platform/con-using-chatbot.adoc @@ -0,0 +1,32 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-using-chatbot"] + += Using the {AAPchatbot} + +After you deploy the {AAPchatbot}, all Ansible users within the organization can access and use the chat interface to ask questions and receive information about the {PlatformNameShort}. + +.Accessing the {AAPchatbot} +. Log in to the {PlatformNameShort}. +. Click the {AAPchatbot} icon image:chatbot-icon.png[{AAPchatbot} icon] that is displayed at the top right corner of the taskbar. ++ +The {AAPchatbot} window opens with a welcome message, as shown in the following image: ++ +[.thumb] +image:aap-ansible-lightspeed-intelligent-assistant.png[{AAPchatbot}] + +.Using the {AAPchatbot} + +You can perform the following tasks: + +* Ask questions in the prompt field and get answers about the {PlatformNameShort} +* View the chat history of all conversations in a chat session +* Search the chat history using a user prompt or answer ++ +The chat history is deleted when you close an existing chat session or log out from the {PlatformNameShort}. +* Restore a previous chat by clicking the relevant entry from the chat history +* Provide feedback on the quality of the chat answers by clicking the *Thumbs up* or *Thumbs down* icon +* Copy and record the answers by clicking the *Copy* icon +* Change the mode of the virtual assistant to dark or light mode by clicking the *Sun* icon image:sun-icon.png[Sun icon] from the top right corner of the toolbar +* Clear the context of an existing chat by using the *New chat* button in the chat history +* Close the chat interface while working on the {PlatformNameShort} diff --git a/downstream/modules/platform/con-view-hosts-in-CLI.adoc b/downstream/modules/platform/con-view-hosts-in-CLI.adoc new file mode 100644 index 0000000000..917684ad77 --- /dev/null +++ b/downstream/modules/platform/con-view-hosts-in-CLI.adoc @@ -0,0 +1,8 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-view-hosts-in-CLI_{context}"] + += Viewing hosts automated in the CLI + +{ControllerNameStart} provides a way to generate a CSV output of the host metric data and host metric summary through the _Command Line Interface_ (CLI). 
+You can also soft delete hosts in bulk through the API. diff --git a/downstream/modules/platform/con-view-hosts-in-ui.adoc b/downstream/modules/platform/con-view-hosts-in-ui.adoc new file mode 100644 index 0000000000..84e9045df0 --- /dev/null +++ b/downstream/modules/platform/con-view-hosts-in-ui.adoc @@ -0,0 +1,25 @@ +:_mod-docs-content-type: CONCEPT + +[id="con-view-hosts-in-ui_{context}"] + += Viewing hosts automated in the user interface + +.Procedure +//[ddacosta] I don't see a Host Metrics menu selection off the standalone navigation panel. Should it be Resources > Hosts? If so, add replace with {MenuInfrastructureHosts} +//[ddacosta] For 2.5 Host Metrics is off the Analytics menu. Use {MenuAAHostMetrics} +. In the navigation panel, select {MenuAAHostMetrics} to view the activity associated with hosts, such as those that have been automated and deleted. ++ +Each unique hostname is listed and sorted by the user's preference. +//+ image::ug-host-metrics.png[Host metrics] ++ +[NOTE] +==== +A scheduled task automatically updates these values on a weekly basis and deletes jobs with hosts that were last automated more than a year ago. +==== + +. Delete unnecessary hosts directly from the Host Metrics view by selecting the desired hosts and clicking btn:[Delete]. ++ +These hosts are soft-deleted, meaning that their records are not removed but are no longer used or counted toward your subscription. + +For more information, see link:{LinkControllerAdminGuide}/index#controller-keep-subscription-in-compliance[Keeping your subscription in compliance]. diff --git a/downstream/modules/platform/con-websocket-setup.adoc b/downstream/modules/platform/con-websocket-setup.adoc index 854eff4ba1..04346f0ec0 100644 --- a/downstream/modules/platform/con-websocket-setup.adoc +++ b/downstream/modules/platform/con-websocket-setup.adoc @@ -1,15 +1,22 @@ -[id="con-websocket-setup_{context}"] +:_mod-docs-content-type: CONCEPT + +[id="con-websocket-setup"] = Websocket configuration for {ControllerName} -[role="_abstract"] -{ControllerNameStart} nodes are interconnected through websockets to distribute all websocket-emitted messages throughout your system. This configuration setup enables any browser client websocket to subscribe to any job that might be running on any {ControllerName} node. Websocket clients are not routed to specific {ControllerName} nodes. Instead, any {ControllerName} node can handle any websocket request and each {ControllerName} node must know about all websocket messages destined for all clients. +You can configure {ControllerName} to align the websocket configuration with your nginx or load balancer configuration. + +{ControllerNameStart} nodes are interconnected through websockets to distribute all websocket-emitted messages throughout your system. +This configuration setup enables any browser client websocket to subscribe to any job that might be running on any {ControllerName} node. +Websocket clients are not routed to specific {ControllerName} nodes. +Instead, any {ControllerName} node can handle any websocket request and each {ControllerName} node must know about all websocket messages destined for all clients. -You can configure websockets at `/etc/tower/conf.d/websocket_config.py` in all of your {ControllerName} nodes and the changes will be effective after the service restarts. +You can configure websockets at `/etc/tower/conf.d/websocket_config.py` in all of your {ControllerName} nodes and the changes become effective after the service restarts. 
{ControllerNameStart} automatically handles discovery of other {ControllerName} nodes through the Instance record in the database. [IMPORTANT] ==== -Your {ControllerName} nodes are designed to broadcast websocket traffic across a private, trusted subnet (and not the open Internet). Therefore, if you turn off HTTPS for websocket broadcasting, the websocket traffic, composed mostly of Ansible playbook stdout, is sent unencrypted between {ControllerName} nodes. +Your {ControllerName} nodes are designed to broadcast websocket traffic across a private, trusted subnet (and not the open Internet). +Therefore, if you turn off HTTPS for websocket broadcasting, the websocket traffic, composed mostly of Ansible Playbook stdout, is sent unencrypted between {ControllerName} nodes. ==== diff --git a/downstream/modules/platform/con-why-automation-mesh.adoc b/downstream/modules/platform/con-why-automation-mesh.adoc deleted file mode 100644 index 4be04db817..0000000000 --- a/downstream/modules/platform/con-why-automation-mesh.adoc +++ /dev/null @@ -1,19 +0,0 @@ -[id="con-why-automation-mesh"] - -= Benefits of {AutomationMesh} - -The {AutomationMesh} component of the {PlatformName} simplifies the process of distributing automation across multi-site deployments. For enterprises with multiple isolated IT environments, {AutomationMesh} provides a consistent and reliable way to deploy and scale up automation across your execution nodes using a peer-to-peer mesh communication network. - -When upgrading from version 1.x to the latest version of {PlatformNameShort}, you must migrate the data from your legacy isolated nodes into execution nodes necessary for {AutomationMesh}. You can implement {AutomationMesh} by planning out a network of hybrid and control nodes, then editing the inventory file found in the {PlatformNameShort} installer to assign mesh-related values to each of your execution nodes. - - -[role="_additional-resources"] -.Additional resources - -* For instructions on how to migrate from isolated nodes to execution nodes, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_upgrade_and_migration_guide/index[Red Hat Ansible Automation Platform Upgrade and Migration Guide]. - -* For information about automation mesh and the various ways to design your automation mesh for your environment: - -** For a VM-based installation, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_automation_mesh_guide_for_vm-based_installations/index[{PlatformName} {AutomationMesh} guide for VM-based installations]. - -** For an operator-based installation, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_automation_mesh_for_operator-based_installations/index[{PlatformName} {AutomationMesh} for operator-based installations]. diff --git a/downstream/modules/platform/con-why-ee.adoc b/downstream/modules/platform/con-why-ee.adoc index 93effbbea8..50da8f83b9 100644 --- a/downstream/modules/platform/con-why-ee.adoc +++ b/downstream/modules/platform/con-why-ee.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: CONCEPT + // [id="con-why-ee_{context}"] = Why upgrade to {ExecEnvName}? 
diff --git a/downstream/modules/platform/ini/clustered-nodes.ini b/downstream/modules/platform/ini/clustered-nodes.ini index 738ef3cdab..bd235611ae 100644 --- a/downstream/modules/platform/ini/clustered-nodes.ini +++ b/downstream/modules/platform/ini/clustered-nodes.ini @@ -1,4 +1,4 @@ -[controller] +[automationcontroller] clusternode1.example.com clusternode2.example.com clusternode3.example.com @@ -6,8 +6,7 @@ clusternode3.example.com [all:vars] admin_password='password' -pg_host='' -pg_port='' +pg_host='' pg_database='' pg_username='' diff --git a/downstream/modules/platform/proc-aap-activate-with-credentials.adoc b/downstream/modules/platform/proc-aap-activate-with-credentials.adoc index ca6325214b..544edbb7e0 100644 --- a/downstream/modules/platform/proc-aap-activate-with-credentials.adoc +++ b/downstream/modules/platform/proc-aap-activate-with-credentials.adoc @@ -1,13 +1,28 @@ +:_mod-docs-content-type: PROCEDURE + + +[id="proc-aap-activate-with-credentials"] = Activate with credentials -When {PlatformNameShort} launches for the first time, the {PlatformNameShort} Subscription screen automatically displays. You can use your Red Hat credentials to retrieve and import your subscription directly into {PlatformNameShort}. +When {PlatformNameShort} launches for the first time, the {PlatformNameShort} Subscription screen automatically displays. If you are an organization administrator, you can use your Red Hat service account to retrieve and import your subscription directly into {PlatformNameShort}. + +If you do not have administrative access, you can enter your Red Hat username and password in the Client ID and Client secret fields, respectively, to locate and add your subscription to your {PlatformNameShort} instance. + +[NOTE] +==== +You are opted in for {Analytics} by default when you activate the platform on first login. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out, after activating {PlatformNameShort}, by doing the following: -.Procedures -. Enter your Red Hat username and password. -. Click btn:[Get Subscriptions]. +. From the navigation panel, select {MenuSetSystem}. +. Clear the *Gather data for {Analytics}* option. +. Click btn:[Save]. +==== + +.Procedure +. Log in to {PlatformName}. +. Select *Service Account / Red Hat Satellite*. +. Enter your *Client ID / Satellite username* and *Client secret / Satellite password*. +. Select your subscription from the *Subscription* list. + [NOTE] ==== @@ -15,6 +30,13 @@ You can also use your Satellite username and password if your cluster nodes are ==== + . Review the End User License Agreement and select *I agree to the End User License Agreement*. -. The Tracking and Analytics options are checked by default. These selections help Red Hat improve the product by delivering you a much better user experience. You can opt out by deselecting the options. -. Click btn:[Submit]. -. Once your subscription has been accepted, the license screen displays and navigates you to the Dashboard of the {PlatformNameShort} interface. You can return to the license screen by clicking the btn:[Settings] icon *⚙* and selecting the *License* tab from the Settings screen. +. Click btn:[Finish]. + +.Verification +After your subscription has been accepted, subscription details are displayed. A status of _Compliant_ indicates your subscription is in compliance with the number of hosts you have automated within your subscription count. 
Otherwise, your status will show as _Out of Compliance_, indicating you have exceeded the number of hosts in your subscription. +Other important information displayed includes the following: + +Hosts automated:: Host count automated by the job, which consumes the license count +Hosts imported:: Host count considering all inventory sources (does not impact hosts remaining) +Hosts remaining:: Total host count minus hosts automated + \ No newline at end of file diff --git a/downstream/modules/platform/proc-aap-activate-with-manifest.adoc b/downstream/modules/platform/proc-aap-activate-with-manifest.adoc index 0f6b004635..d86044d973 100644 --- a/downstream/modules/platform/proc-aap-activate-with-manifest.adoc +++ b/downstream/modules/platform/proc-aap-activate-with-manifest.adoc @@ -1,34 +1,49 @@ +:_mod-docs-content-type: PROCEDURE + + +[id="proc-aap-activate-with-manifest"] = Activate with a manifest file -If you have a subscriptions manifest, you can upload the manifest file either using the {PlatformName} interface or manually in an Ansible playbook. +If you have a subscriptions manifest, you can upload the manifest file by using the {PlatformName} interface. + +[NOTE] +==== +You are opted in for {Analytics} by default when you activate the platform on first login. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out, after activating {PlatformNameShort}, by doing the following: +. From the navigation panel, select {MenuSetSystem}. +. Clear the *Gather data for {Analytics}* option. +. Click btn:[Save]. +==== + +ifndef::controller-AG[] .Prerequisites You must have a Red Hat Subscription Manifest file exported from the Red Hat Customer Portal. For more information, see xref:assembly-aap-obtain-manifest-files[Obtaining a manifest file]. -.Uploading with the interface +.Procedure -. Complete steps to generate and download the manifest file . Log in to {PlatformName}. -//[ddacosta] There is no license setting in the test environment for 2.4? Need to verify this selection. In 2.5, I think it will be Settings[Subscription]... -. If you are not immediately prompted for a manifest file, go to menu:Settings[License]. -. Make sure the *Username* and *Password* fields are empty. +. If you are not immediately prompted for a manifest file, go to {MenuSetSubscription}. +. Select *Subscription manifest*. . Click btn:[Browse] and select the manifest file. -. Click btn:[Next]. +. Review the End User License Agreement and select *I agree to the End User License Agreement*. +. Click btn:[Finish]. [NOTE] ==== If the btn:[BROWSE] button is disabled on the License page, clear the *USERNAME* and *PASSWORD* fields. ==== -.Uploading manually +.Verification +After your subscription has been accepted, subscription details are displayed. A status of _Compliant_ indicates your subscription is in compliance with the number of hosts you have automated within your subscription count. 
+Other important information displayed includes the following: + +Hosts automated:: Host count automated by the job, which consumes the license count +Hosts imported:: Host count considering all inventory sources (does not impact hosts remaining) +Hosts remaining:: Total host count minus hosts automated -If you are unable to apply or update the subscription info using the {PlatformName} interface, you can upload the subscriptions manifest manually in an Ansible playbook using the `license` module in the `ansible.controller` collection. +[role="_additional-resources"] +.Next steps +* You can return to the license screen by selecting {MenuSetSubscription} from the navigation panel and clicking btn:[Edit subscription]. ------ -- name: Set the license using a file - license: - manifest: "/tmp/my_manifest.zip" ------ +endif::controller-AG[] \ No newline at end of file diff --git a/downstream/modules/platform/proc-aap-add-allowed-registries.adoc b/downstream/modules/platform/proc-aap-add-allowed-registries.adoc new file mode 100644 index 0000000000..fb1b6c5d3f --- /dev/null +++ b/downstream/modules/platform/proc-aap-add-allowed-registries.adoc @@ -0,0 +1,31 @@ +:_mod-docs-content-type: PROCEDURE + +[id="aap-add-allowed-registries_{context}"] + += Adding allowed registries to the {ControllerName} image configuration + +[role=_abstract] + +Before you can deploy a container image in {HubName}, you must add the registry to the `allowedRegistries` in the {ControllerName} image configuration. To do this, you can copy and paste the following code into your {ControllerName} image YAML. + +.Procedure + +. Log in to *{OCP}*. +. Navigate to menu:Home[Search]. +. Select the *Resources* drop-down list and type "Image". +. Select *Image (config.openshift.io/v1)*. +. Click btn:[Cluster] under the *Name* heading. +. Select the btn:[YAML] tab. +. Paste in the following under the `spec` value: ++ +---- +spec: + registrySources: + allowedRegistries: + - quay.io + - registry.redhat.io + - image-registry.openshift-image-registry.svc:5000 + - +---- ++ +. Click btn:[Save]. \ No newline at end of file diff --git a/downstream/modules/platform/proc-aap-add-merge-subscriptions.adoc b/downstream/modules/platform/proc-aap-add-merge-subscriptions.adoc index aa6c72a2eb..efc667ab51 100644 --- a/downstream/modules/platform/proc-aap-add-merge-subscriptions.adoc +++ b/downstream/modules/platform/proc-aap-add-merge-subscriptions.adoc @@ -1,5 +1,7 @@ +:_mod-docs-content-type: PROCEDURE + + +[id="proc-add-merge-subscriptions"] = Adding subscriptions to a subscription allocation @@ -12,11 +14,6 @@ Once an allocation is created, you can add the subscriptions you need for {Platf . Enter the number of {PlatformNameShort} Entitlement(s) you plan to add. . Click btn:[Submit]. -.Verification -After your subscription has been accepted, subscription details are displayed. A status of _Compliant_ indicates your subscription is in compliance with the number of hosts you have automated within your subscription count. 
- -Other important information displayed include the following: - -Hosts automated:: Host count automated by the job, which consumes the license count -Hosts imported:: Host count considering all inventory sources (does not impact hosts remaining) -Hosts remaining:: Total host count minus hosts automated +[role="_additional-resources"] +.Next steps +* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#proc-aap-generate-manifest-file[Download the manifest file]. \ No newline at end of file diff --git a/downstream/modules/platform/proc-aap-create-aap-object.adoc b/downstream/modules/platform/proc-aap-create-aap-object.adoc new file mode 100644 index 0000000000..e92a0c4c67 --- /dev/null +++ b/downstream/modules/platform/proc-aap-create-aap-object.adoc @@ -0,0 +1,40 @@ +:_mod-docs-content-type: PROCEDURE + +[id="aap-create_controller"] + += Creating an {PlatformNameShort} object + +Use the following steps to create an *AnsibleAutomationPlatform* custom resource object. + +.Procedure +. Log in to *{OCP}*. +. Navigate to menu:Operators[Installed Operators]. +. Select the {OperatorPlatformNameShort} installed on your project namespace. +. Select the *Ansible Automation Platform* tab. +. Click btn:[Create AnsibleAutomationPlatform]. +. Select *YAML view* and paste in the following, modified accordingly: ++ +---- + +--- +apiVersion: aap.ansible.com/v1alpha1 +kind: AnsibleAutomationPlatform +metadata: + name: myaap +spec: + postgres_configuration_secret: external-postgres-configuration + + controller: + disabled: false + postgres_configuration_secret: external-controller-postgres-configuration + secret_key_secret: controller-secret-key + + hub: + disabled: false + postgres_configuration_secret: external-hub-postgres-configuration + db_fields_encryption_secret: hub-db-fields-encryption-secret +---- ++ +. Click btn:[Create]. + +[role=_abstract] \ No newline at end of file diff --git a/downstream/modules/platform/proc-aap-create-subscription-allocation.adoc b/downstream/modules/platform/proc-aap-create-subscription-allocation.adoc index 6f3798b198..cedd231a3b 100644 --- a/downstream/modules/platform/proc-aap-create-subscription-allocation.adoc +++ b/downstream/modules/platform/proc-aap-create-subscription-allocation.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-create-subscription-allocation_{context}"] @@ -8,5 +10,9 @@ Creating a new subscription allocation allows you to set aside subscriptions and .Procedure . From the link:https://access.redhat.com/management/subscription_allocations/[Subscription Allocations] page, click btn:[New Subscription Allocation]. . Enter a name for the allocation so that you can find it later. -. Select *Type: Satellite 6.8* as the management application. +. Select *Type: Satellite {SatelliteVers}* as the management application. . Click btn:[Create]. + +[role="_additional-resources"] +.Next steps +* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#proc-add-merge-subscriptions[Add the subscriptions]. 
\ No newline at end of file diff --git a/downstream/modules/platform/proc-aap-create_controller.adoc b/downstream/modules/platform/proc-aap-create_controller.adoc deleted file mode 100644 index 4ca79bcf78..0000000000 --- a/downstream/modules/platform/proc-aap-create_controller.adoc +++ /dev/null @@ -1,19 +0,0 @@ -[id="aap-create_controller"] - -= Creating an AutomationController object - -[role=_abstract] - -Use the following steps to create an AutomationController custom resource object. - -.Procedure -. Log in to *{OCP}*. -. Navigate to menu:Operators[Installed Operators]. -. Select the {OperatorPlatform} installed on your project namespace. -. Select the *Automation Controller* tab. -. Click btn:[Create AutomationController]. -. Enter a name for the new deployment. -. In *Advanced configurations*, do the following: -.. From the *Admin Password Secret* list, select your xref:create-secret-key-secret_aap-migration[secret key secret]. -.. From the *Database Configuration Secret* list, select the xref:create-postresql-secret_aap-migration[postgres configuration secret]. -. Click btn:[Create]. diff --git a/downstream/modules/platform/proc-aap-create_hub.adoc b/downstream/modules/platform/proc-aap-create_hub.adoc deleted file mode 100644 index 987590e5f1..0000000000 --- a/downstream/modules/platform/proc-aap-create_hub.adoc +++ /dev/null @@ -1,17 +0,0 @@ -[id="aap-create_hub"] - -= Creating an AutomationHub object - -[role=_abstract] - -Use the following steps to create an AutomationHub custom resource object. - -.Procedure -. Log in to *{OCP}*. -. Navigate to menu:Operators[Installed Operators]. -. Select the {OperatorPlatform} installed on your project namespace. -. Select the *Automation Hub* tab. -. Click btn:[Create AutomationHub]. -. Enter a name for the new deployment. -. In *Advanced configurations*, select your xref:create-secret-key-secret_aap-migration[secret key secret] and xref:create-postresql-secret_aap-migration[postgres configuration secret]. -. Click btn:[Create]. diff --git a/downstream/modules/platform/proc-aap-enable-disable-auth.adoc b/downstream/modules/platform/proc-aap-enable-disable-auth.adoc new file mode 100644 index 0000000000..c520e894a3 --- /dev/null +++ b/downstream/modules/platform/proc-aap-enable-disable-auth.adoc @@ -0,0 +1,46 @@ +:_mod-docs-content-type: PROCEDURE + +[id="aap-enable-disable-auth_{context}"] + += Enabling and disabling the local authenticator + +As a platform administrator, you can enable or disable authenticators. However, disabling your local authenticator can have significant impacts and should only be done under specific circumstances. Before you disable your local authenticator, you must consider the following: + +Local account inaccessibility:: Disabling the local authenticator prevents all local accounts, including the default `admin` account, from logging in. +Potential inaccessibility:: Disabling the local authenticator without having at least one other configured authenticator can render the {PlatformNameShort} environment completely inaccessible. +Dependency on enterprise authentication provider:: If the local authenticator is disabled and an issue occurs with the configured enterprise authentication provider, the platform will become inaccessible until the enterprise authentication provider issue is resolved. + +.Prerequisites + +* You have at least one other authentication method configured. +* You have at least one administrator account that can authenticate using your alternate authenticator. 
+ +.Procedure + +[CAUTION] +==== +Disabling the local authenticator without an alternative authentication method in place can result in a locked environment. +==== + +. From the navigation panel, select {MenuAMAuthentication}. +. Ensure that at least one other authenticator type is configured and enabled. +. Select your *Local Authenticator*. +. Toggle the *Enabled* switch to the off position to disable the local authenticator. + +.Troubleshooting + +If the local authenticator is disabled without another authentication method configured, or if an issue with your configured enterprise authentication provider makes {PlatformNameShort} inaccessible, you can re-enable the local authenticator from the command line: + +. List the available authenticators and retrieve the ID of your local authenticator by running the following command: ++ +---- +aap-gateway-manage authenticators --list +---- ++ +. Enable the local authenticator using its ID: ++ +---- +aap-gateway-manage authenticators --enable :id +---- ++ +where `:id` is the ID of the local authenticator obtained in the previous step. diff --git a/downstream/modules/platform/proc-aap-generate-manifest-file.adoc b/downstream/modules/platform/proc-aap-generate-manifest-file.adoc index 5d96ca2ab4..7871ffdb7f 100644 --- a/downstream/modules/platform/proc-aap-generate-manifest-file.adoc +++ b/downstream/modules/platform/proc-aap-generate-manifest-file.adoc @@ -1,5 +1,7 @@ +:_mod-docs-content-type: PROCEDURE -[id="proc-generate-manifest-file_{context}"] + +[id="proc-aap-generate-manifest-file"] = Downloading a manifest file @@ -10,8 +12,9 @@ After an allocation is created and has the appropriate subscriptions on it, you . From the link:https://access.redhat.com/management/subscription_allocations/[Subscription Allocations] page, click on the name of the *Subscription Allocation* to which you would like to generate a manifest. . Click the *Subscriptions* tab. . Click btn:[Export Manifest] to download the manifest file. ++ +This downloads a file _manifest__.zip_ to your default downloads folder. -[NOTE] -==== -The file is saved to your default downloads folder and can now be uploaded to xref:proc-aap-activate-with-manifest_activate-aap[activate {PlatformName}]. -==== +[role="_additional-resources"] +.Next steps +* link:{URLCentralAuth}/assembly-gateway-licensing#proc-aap-activate-with-manifest[Upload the manifest file]. diff --git a/downstream/modules/platform/proc-aap-migrate-LDAP-users.adoc b/downstream/modules/platform/proc-aap-migrate-LDAP-users.adoc new file mode 100644 index 0000000000..bf78f0cd2d --- /dev/null +++ b/downstream/modules/platform/proc-aap-migrate-LDAP-users.adoc @@ -0,0 +1,35 @@ +:_mod-docs-content-type: PROCEDURE + + + +[id="proc-migrate-LDAP-users"] + += Migrating LDAP users without account linking + + +[role="_abstract"] + +If a user is unable to link their accounts because there is no linking option for their {HubName} account, you must immediately configure the auto-migrate feature on all legacy password authenticators to target the new gateway LDAP authenticator. + +Then, when a user logs in, the {Gateway} automatically migrates the user to the LDAP authenticator if a matching UID is found. + +.Prerequisites + +* Verify that all legacy accounts are properly linked and merged before proceeding with auto-migration. + +* Verify that there are no UID collisions or ensure they are manually migrated before proceeding with auto-migration. + +.Procedure + +. Log in to the {PlatformNameShort} UI. +.
Set up a new LDAP authentication method in the {Gateway} by following the steps in link:{URLCentralAuth}/gw-configure-authentication#controller-set-up-LDAP[Configuring LDAP authentication]. This is the configuration that you migrate your previous LDAP users to. ++ +[NOTE] +==== +{PlatformNameShort} 2.4 LDAP configurations are renamed during the upgrade process and are displayed in the *Authentication Methods* list view with a prefix to indicate that it is a legacy configuration, for example, `: legacy_password`. The *Authentication type* is listed as *Legacy password*. These configurations cannot be modified. +==== ++ +. Select the legacy LDAP authenticator from the *Auto migrate users from* list. This is the legacy authenticator you want to use for migrating users to your {Gateway} LDAP authenticator. + +After you set up the auto-migrate functionality, you can log in with LDAP in the {Gateway}, and any matching accounts from the legacy 2.4 LDAP authenticator are automatically linked. + diff --git a/downstream/modules/platform/proc-aap-migrate-admin-users.adoc b/downstream/modules/platform/proc-aap-migrate-admin-users.adoc new file mode 100644 index 0000000000..a05e2c7fb9 --- /dev/null +++ b/downstream/modules/platform/proc-aap-migrate-admin-users.adoc @@ -0,0 +1,22 @@ +:_mod-docs-content-type: PROCEDURE + +[id="aap-migrate-admin-users_{context}"] + += Migrating admin users + +[role="_abstract"] +Follow this procedure to migrate admin users. + +.Prerequisites + +* Review current admin roles for the individual services in your current deployment. +* Confirm the users who will require {Gateway} admin rights post-upgrade. + +.Procedure + +. From the navigation panel of the {Gateway}, select {MenuAMUsers}. +. Select the check box for the user that you want to modify. +. Click the Pencil icon and select *Edit user*. +. The Edit user page is displayed, where you can see the service-level administrator privileges assigned by the *User type* checkboxes. See link:{URLCentralAuth}/gw-managing-access#gw-editing-a-user[Editing a user] for more information on these user types. + + diff --git a/downstream/modules/platform/proc-aap-migration-backup.adoc b/downstream/modules/platform/proc-aap-migration-backup.adoc index 19fba83f60..059806fc5e 100644 --- a/downstream/modules/platform/proc-aap-migration-backup.adoc +++ b/downstream/modules/platform/proc-aap-migration-backup.adoc @@ -1,11 +1,13 @@ +:_mod-docs-content-type: PROCEDURE + [id="aap-migration-backup"] [role="_abstract"] -= Migrating to Ansible Automation Platform Operator += Migrating to {OperatorPlatformNameShort} .Prerequisites -To migrate {PlatformNameShort} deployment to {OperatorPlatform}, you must have the following: +To migrate your {PlatformNameShort} deployment to {OperatorPlatformNameShort}, you must have the following: * Secret key secret * Postgresql configuration @@ -18,21 +20,9 @@ You can store the secret key information in the inventory file before the initia If you are unable to remember your secret key or have trouble locating your inventory file, contact link:https://access.redhat.com/[Ansible support] through the Red Hat Customer portal. ==== -Before migrating your data from {PlatformNameShort} 2.x or earlier, you must back up your data for loss prevention. To backup your data, do the following: +Before migrating your data from {PlatformNameShort} 2.4, you must back up your data for loss prevention. .Procedure + . Log in to your current deployment project. -.
Run `setup.sh` to create a backup of your current data or deployment: -+ -For on-prem deployments of version 2.x or earlier: -+ ------ -$ ./setup.sh -b ------ -+ -For OpenShift deployments before version 2.0 (non-operator deployments): -+ ------ -./setup_openshift.sh -b ------ -//reminder - add a cross reference statement to new Backup and Restore doc once published. "For Openshift Operator installations for version 2.0 and later, refer to" +. Run `./setup.sh -b` to create a backup of your current data or deployment. \ No newline at end of file diff --git a/downstream/modules/platform/proc-aap-migration.adoc b/downstream/modules/platform/proc-aap-migration.adoc index 37b64d6cc4..8b3387d12a 100644 --- a/downstream/modules/platform/proc-aap-migration.adoc +++ b/downstream/modules/platform/proc-aap-migration.adoc @@ -1,7 +1,23 @@ +:_mod-docs-content-type: PROCEDURE + [id="aap-data-migration_{context}"] -= Migrating data to the {PlatformNameShort} Operator += Migrating data to the {OperatorPlatformNameShort} [role=_abstract] -After you have set your secret key, postgresql credentials, verified network connectivity and installed the {OperatorPlatform}, you must create a custom resource controller object before you can migrate your data. +When migrating a {PlatformVers} containerized or RPM-installed deployment to {OCPShort}, you must create a secret with credentials to access the PostgreSQL database from the original deployment, then specify it when creating the {PlatformNameShort} object. + +[IMPORTANT] +==== +The operator does not support {EDAName} migration at this time. +==== + +.Prerequisites + +You have completed the following procedures: + +* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#install-aap-operator_operator-platform-doc[Installing the Red Hat Ansible Automation Platform Operator on Red Hat OpenShift] +* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#create-secret-key-secret_aap-migration[Creating a secret key] +* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#create-postresql-secret_aap-migration[Creating a postgresql configuration secret] +* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#verify-network-connectivity_aap-migration[Verifying network connectivity] \ No newline at end of file diff --git a/downstream/modules/platform/proc-aap-platform-gateway-backup.adoc b/downstream/modules/platform/proc-aap-platform-gateway-backup.adoc index 6274e4e6e7..eae1edbe28 100644 --- a/downstream/modules/platform/proc-aap-platform-gateway-backup.adoc +++ b/downstream/modules/platform/proc-aap-platform-gateway-backup.adoc @@ -1,3 +1,64 @@ +:_mod-docs-content-type: PROCEDURE + [id="aap-platform-gateway-backup_{context}"] -= Backing up your AnsibleAutomationPlatform resource += Backing up your {PlatformNameShort} deployment +Regularly backing up your *{PlatformNameShort}* deployment is vital to protect against unexpected data loss and application errors. *{PlatformNameShort}* hosts any enabled components (such as {ControllerName}, {HubName}, and {EDAName}); when you back up *{PlatformNameShort}*, the operator also backs up these components. + +.Prerequisites +* You must be authenticated with an OpenShift cluster.
+* You have installed {OperatorPlatformNameShort} on the cluster. +* You have deployed a *{PlatformNameShort}* instance using the {OperatorPlatformNameShort}. + +.Procedure +. Log in to {OCP}. +. Navigate to menu:Operators[Installed Operators]. +. Select your {OperatorPlatformNameShort} deployment. +. Go to your *All Instances* tab, and click btn:[Create New]. +. Select *{PlatformNameShort} Backup* from the list. ++ +[NOTE] +==== +When you create the *{PlatformNameShort} Backup* resource, it also creates backup resources for each of the nested components that are enabled. +==== ++ +. In the *Name* field, enter a name for the backup. +. In the *Deployment name* field, enter the name of the deployed {PlatformNameShort} instance being backed up. For example, if the deployment name of the {PlatformNameShort} instance you are backing up is `aap`, enter `aap` in the *Deployment name* field. +. Click btn:[Create]. + +This results in an *AnsibleAutomationPlatformBackup* resource. The resource YAML is similar to the following: + +---- +apiVersion: aap.ansible.com/v1alpha1 +kind: AnsibleAutomationPlatformBackup +metadata: + name: backup + namespace: aap +spec: + no_log: true + deployment_name: aap +---- + +[NOTE] +==== +{OperatorPlatformNameShort} creates a PersistentVolumeClaim (PVC) for your {PlatformNameShort} Backup automatically. +You can use your own pre-created PVC by using the `backup_pvc` spec and specifying your PVC. +==== + +.Verification +To verify that your backup was successful: + +. Log in to {OCP}. +. Navigate to menu:Operators[Installed Operators]. +. Select your {OperatorPlatformNameShort} deployment. +. Click *All Instances*. + +The *All Instances* page displays the main backup and the backups for each component with the name you specified when creating your backup resource. +The status for the following instances must be either *Running* or *Successful*: + +* AnsibleAutomationPlatformBackup +* AutomationControllerBackup +* EDABackup +* AutomationHubBackup + + diff --git a/downstream/modules/platform/proc-aap-platform-gateway-restore.adoc b/downstream/modules/platform/proc-aap-platform-gateway-restore.adoc index 8f0d5b9ce8..1f1eb1ad9f 100644 --- a/downstream/modules/platform/proc-aap-platform-gateway-restore.adoc +++ b/downstream/modules/platform/proc-aap-platform-gateway-restore.adoc @@ -1,3 +1,43 @@ -[id="aap-platform-gateway-restore"] +:_mod-docs-content-type: PROCEDURE -= Recovering your AnsibleAutomationPlatform resource +[id="aap-platform-gateway-restore_{context}"] + += Recovering your {PlatformNameShort} deployment +*{PlatformNameShort}* manages any enabled components (such as {ControllerName}, {HubName}, and {EDAName}); when you recover *{PlatformNameShort}*, you also restore these components. + +In previous versions of the {OperatorPlatformNameShort}, it was necessary to create a restore object for each component of the platform. +Now, you create a single *AnsibleAutomationPlatformRestore* resource, which creates and manages the other restore objects: + +* AutomationControllerRestore +* AutomationHubRestore +* EDARestore + +.Prerequisites +* You must be authenticated with an OpenShift cluster. +* You have installed the {OperatorPlatformNameShort} on the cluster. +* The *AnsibleAutomationPlatformBackups* deployment is available in your cluster. + +.Procedure +. Log in to {OCP}. +. Navigate to menu:Operators[Installed Operators]. +. Select your {OperatorPlatformNameShort} deployment. +. Go to your *All Instances* tab, and click btn:[Create New]. +.
Select *{PlatformNameShort} Restore* from the list. +. For *Name*, enter the name for the recovery deployment. +. For *New {PlatformNameShort} Name*, enter the new name for your {PlatformNameShort} instance. +. *Backup Source* defaults to *CR*. +. For *Backup name*, enter the name you chose when creating the backup. +. Click btn:[Create]. + +Your backup starts restoring under the *AnsibleAutomationPlatformRestores* tab. + +[NOTE] +==== +The recovery is not complete until all the resources are successfully restored. Depending on the size of your database, this can take some time. +==== + +.Verification +To verify that your recovery was successful: + +. Go to menu:Workloads[Pods]. +. Confirm that all pods are in a *Running* or *Completed* state. diff --git a/downstream/modules/platform/proc-access-hub-operator-ui.adoc b/downstream/modules/platform/proc-access-hub-operator-ui.adoc index 351fb0fd6a..8708414e34 100644 --- a/downstream/modules/platform/proc-access-hub-operator-ui.adoc +++ b/downstream/modules/platform/proc-access-hub-operator-ui.adoc @@ -1,13 +1,17 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-access-hub-operator-ui_{context}"] -= Accessing the {HubName} user interface += Finding the {HubName} route -You can access the {HubName} interface once all pods have successfully launched. +You can access the {HubName} through the {Gateway} or through the following procedure. .Procedure +. Log in to {OCP}. . Navigate to menu:Networking[Routes]. . Under *Location*, click on the URL for your {HubName} instance. +.Verification The {HubName} user interface launches where you can sign in with the administrator credentials specified during the operator configuration process. [NOTE] diff --git a/downstream/modules/platform/proc-accessing-rpm-repositories-for-locally-mounted-dvd.adoc b/downstream/modules/platform/proc-accessing-rpm-repositories-for-locally-mounted-dvd.adoc index 040dcdf01d..76f2752bb1 100644 --- a/downstream/modules/platform/proc-accessing-rpm-repositories-for-locally-mounted-dvd.adoc +++ b/downstream/modules/platform/proc-accessing-rpm-repositories-for-locally-mounted-dvd.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="accessing-rpm-repositories-for-locally-mounted-dvd_{context}"] = Accessing RPM repositories from a locally mounted DVD diff --git a/downstream/modules/platform/proc-account-linking.adoc b/downstream/modules/platform/proc-account-linking.adoc new file mode 100644 index 0000000000..3eae70e984 --- /dev/null +++ b/downstream/modules/platform/proc-account-linking.adoc @@ -0,0 +1,53 @@ +:_newdoc-version: 2.18.3 +:_template-generated: 2024-10-08 +:_mod-docs-content-type: PROCEDURE + +[id="account-linking_{context}"] += Linking your account + +{PlatformNameShort} 2.5 provides a centralized location for users, teams, and organizations to access the platform's services and features. +//[ddacosta] Moved this statement to the assembly intro +//When you upgrade from a previous version of {PlatformNameShort}, your existing account is automatically migrated to a single platform account. However, if you have multiple component accounts (such as, {ControllerName}, {HubName}, and {EDAName}), your accounts must be linked to use the centralized features of the platform. + +The first time you log in to {PlatformNameShort} 2.5, the platform searches through the existing services to locate a user account with the credentials you entered. When there is a match to an existing account, that account is registered and becomes centrally managed by the platform.
Any subsequent component accounts in the system are orphaned and cannot be used to log in to the platform. + +To address this problem, use the account linking procedure to authenticate from any of your existing component accounts and still be recognized by the platform. Linking accounts associates existing component accounts with the same user profile. + +If you have completed the upgrade process and have a legacy {PlatformNameShort} subscription, follow the account linking procedure below to migrate your account to {PlatformNameShort} 2.5. + +.Prerequisites + +* You have completed the upgrade process and have a legacy {PlatformNameShort} account and credentials. + +.Procedure + +. Navigate to the login page for {PlatformNameShort}. +. In the login modal, select either *I have an {ControllerName} account* or *I have an {HubName} account* based on the credentials you have. +. On the next screen, enter the legacy credentials for the component account you selected and click btn:[Log in]. ++ +[NOTE] +==== +If you are logging in using OIDC credentials, see link:https://access.redhat.com/solutions/7092980[How to fix broken OIDC redirect after upgrading to AAP 2.5]. +==== ++ +. If you have successfully linked your account, the next screen shows your username with a green checkmark beside it. If you have other legacy accounts that you want to link, enter those account credentials and click btn:[Link] to link them to your centralized {Gateway} account. +. Click btn:[Submit] to complete linking your legacy accounts. +. After your accounts are linked, depending on your authentication method, you might be prompted to create a new username and password. These credentials replace your legacy credentials for each component account. +* You can also link your legacy account manually by taking the following steps: +. Select your user icon at the top right of your screen, and select *User details*. +. Select the btn:[More Actions] icon *{MoreActionsIcon}* > *Link user accounts*. +. Enter the credentials for the account that you want to link. + +.Troubleshooting + +If you encounter an error message telling you that your account could not be authenticated, contact your platform administrator. + +[NOTE] +==== +If you log in to {PlatformNameShort} for the first time and are prompted to change your username, this is an indication that another user has already logged in to {PlatformNameShort} with the same username. To proceed with account migration, follow the prompts to change your username. {PlatformNameShort} uses your password to authenticate which account or accounts belong to you. +==== + +*A diagram of the account linking flow* +image:account-linking-flow.png[Account linking flow] + +After you have migrated your user account, you can manage your account from the *Access Management* menu. See link:{URLCentralAuth}/gw-managing-access[Managing access with role based access control].
diff --git a/downstream/modules/platform/proc-add-controller-access-token.adoc b/downstream/modules/platform/proc-add-controller-access-token.adoc index 6c11a89fc3..a92c21ff8a 100644 --- a/downstream/modules/platform/proc-add-controller-access-token.adoc +++ b/downstream/modules/platform/proc-add-controller-access-token.adoc @@ -1,26 +1,31 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-add-controller-access-token_{context}"] -= Connecting {OperatorResourceShort} to {ControllerName} += Connecting {OperatorResourceShort} to {Gateway} -To connect {OperatorResourceShort} with {ControllerName} you need to create a k8s secret with the connection information for your {ControllerName} instance. +To connect {OperatorResourceShort} with {Gateway}, you must create a Kubernetes secret with the connection information for your {ControllerName} instance. -.Procedure -To create an OAuth2 token for your user in the {ControllerName} UI: +Use the following procedure to create an OAuth2 token for your user in the {Gateway} UI. -. In the navigation panel, select menu:Access[Users]. -. Select the username you want to create a token for. -. Click on btn:[Tokens], then click btn:[Add]. -. You can leave *Applications* empty. Add a description and select *Read* or *Write* for the *Scope*. +[NOTE] +==== +You can only create OAuth2 tokens for your own user through the API or UI, which means you can only configure or view tokens from your own user profile. +==== -Alternatively, you can create a OAuth2 token at the command-line by using the `create_oauth2_token` manage command: +.Procedure ----- -$ controller-manage create_oauth2_token --user example_user -New OAuth2 token for example_user: j89ia8OO79te6IAZ97L7E8bMgXCON2 ----- +. Log in to the {PlatformNameShort} UI. +. In the navigation panel, select menu:Access Management[Users]. +. Select the username you want to create a token for. +. Select menu:Tokens[Automation Execution]. +. Click btn:[Create Token]. +. You can leave *Applications* empty. Add a description and select *Read* or *Write* for the *Scope*. ++ [NOTE] ==== Make sure you provide a valid user when creating tokens. -Otherwise, you will get an error message that you tried to issue the command without specifying a user, or supplying a username that does not exist. -==== \ No newline at end of file +Otherwise, you get an error message stating that you tried to issue the command without specifying a user, or that you supplied a username that does not exist. +==== ++ \ No newline at end of file diff --git a/downstream/modules/platform/proc-add-eda-safe-plugin-var.adoc b/downstream/modules/platform/proc-add-eda-safe-plugin-var.adoc new file mode 100644 index 0000000000..0ccb5fce07 --- /dev/null +++ b/downstream/modules/platform/proc-add-eda-safe-plugin-var.adoc @@ -0,0 +1,45 @@ +:_mod-docs-content-type: PROCEDURE + + +[id="proc-add-eda-safe-plugin-var"] + += Adding a safe plugin variable to {EDAcontroller} + +When using `redhat.insights_eda` or similar plugins to run rulebook activations in {EDAcontroller}, you must add a safe plugin variable to a directory in {PlatformNameShort}. This ensures a connection between {EDAcontroller} and the source plugin, and displays port mappings correctly. + +.Procedure +// Procedure for RPM installer +ifdef::aap-install[] +. Create a directory for the safe plugin variable: `mkdir -p ./group_vars/automationedacontroller` +. Create a file within that directory for your new setting (for example, `touch ./group_vars/automationedacontroller/custom.yml`). +.
Add the variable `automationedacontroller_additional_settings` to extend the default `settings.yaml` template for {EDAcontroller} and add the `SAFE_PLUGINS` field with a list of plugins to enable. For example: ++ +---- +automationedacontroller_additional_settings: + SAFE_PLUGINS: + - ansible.eda.webhook + - ansible.eda.alertmanager +---- ++ +[NOTE] +==== +You can also extend the `automationedacontroller_additional_settings` variable beyond `SAFE_PLUGINS` in the Django configuration file `/etc/ansible-automation-platform/eda/settings.yaml`. +==== +endif::aap-install[] + + +// Procedure for Containerized installer +ifdef::container-install[] +. Create a directory for the safe plugin variable: ++ +---- +mkdir -p ./group_vars/automationeda +---- ++ +. Create a file within that directory for your new setting (for example, `touch ./group_vars/automationeda/custom.yml`). +. Add the variable `eda_safe_plugins` with a list of plugins to enable. For example: ++ +---- +eda_safe_plugins: ['ansible.eda.webhook', 'ansible.eda.alertmanager'] +---- +endif::container-install[] diff --git a/downstream/modules/platform/proc-add-operator-execution-nodes.adoc b/downstream/modules/platform/proc-add-operator-execution-nodes.adoc index 0c649ec0ed..881adbadfd 100644 --- a/downstream/modules/platform/proc-add-operator-execution-nodes.adoc +++ b/downstream/modules/platform/proc-add-operator-execution-nodes.adoc @@ -1,15 +1,17 @@ +:_mod-docs-content-type: PROCEDURE + [id="add-operator-execution-nodes_{context}"] .Prerequisites -* An {ControllerName} instance -* The receptor collection package is installed -* The `ansible-runner` package is installed +* An {ControllerName} instance. +* The receptor collection package is installed. +* The {PlatformNameShort} repository `ansible-automation-platform-{PlatformVers}-for-rhel-{RHEL-RELEASE-NUMBER}-x86_64-rpms` is enabled. .Procedure . Log in to {PlatformName}. . In the navigation panel, select {MenuInfrastructureInstances}. . Click btn:[Add]. -. Input the VM name in the *Host Name* field. +. Input the execution node domain name or IP address in the *Host Name* field. . Optional: Input the port number in the *Listener Port* field. . Click btn:[Save]. . Click the download icon image:download.png[download,15,15] next to *Install Bundle*. This starts a download; take note of where you save the file. @@ -17,8 +19,8 @@ + [NOTE] ==== -To run the `install_receptor.yml` playbook you need to install the receptor collection from {Galaxy}: -`Ansible-galaxy collection install -r requirements.txt` +To run the `install_receptor.yml` playbook, you must install the receptor collection from {Galaxy}: +`ansible-galaxy collection install -r requirements.yml` ==== . Update the playbook with your user name and SSH private key file. Note that `ansible_host` pre-populates with the hostname you input earlier. + @@ -26,7 +28,7 @@ To run the `install_receptor.yml` playbook you must install the receptor col all: hosts: remote-execution: - ansible_host: example_host_name + ansible_host: example_host_name # Same as configured in the AAP web UI ansible_user: # user provided ansible_ssh_private_key_file: ~/.ssh/id_example ---- @@ -34,15 +36,21 @@ all: . To install the bundle, run: + ---- -ansible-playbook install_receptor.yml -i inventory +ansible-playbook install_receptor.yml -i inventory.yml ---- . When installed, you can upgrade your execution node by downloading and re-running the playbook for the instance you created.
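If you prefer not to hard-code the connection details in the inventory, a minimal sketch of an alternative invocation follows (the user name `your_ssh_user` and the key path are assumptions; substitute your own values):

----
# Run the install bundle playbook, overriding the remote user and SSH key
# on the command line instead of editing the inventory file
ansible-playbook install_receptor.yml -i inventory.yml \
  -u your_ssh_user --private-key ~/.ssh/id_example
----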
.Verification +To verify the receptor service status, run the following command: +---- +sudo systemctl status receptor.service +---- +Make sure the service is in the `active (running)` state. + To verify that your playbook runs correctly on your new node, run the following command: ---- watch podman ps ---- .Additional resources -* For more information about managing instance groups see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-instance-groups[Managing Instance Groups] section of the Automation Controller User Guide. +* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/using_automation_execution/index#controller-instance-groups[Managing Instance Groups] \ No newline at end of file diff --git a/downstream/modules/platform/proc-adding-a-subscription.adoc b/downstream/modules/platform/proc-adding-a-subscription.adoc new file mode 100644 index 0000000000..f839f83fd1 --- /dev/null +++ b/downstream/modules/platform/proc-adding-a-subscription.adoc @@ -0,0 +1,48 @@ +:_newdoc-version: 2.18.4 +:_template-generated: 2025-05-29 +:_mod-docs-content-type: PROCEDURE + +[id="adding-a-subscription"] += Adding your subscription + +To add your subscription information, you can either upload your subscription manifest, or use your service account credentials to find the subscription associated with your account. + +.Prerequisites + +To add your subscription by uploading a subscription manifest, you must first: + +* Obtain your manifest file. See link:{URLCentralAuth}/assembly-gateway-licensing#assembly-aap-obtain-manifest-files[Obtaining a manifest file] in the {TitleCentralAuth} guide for steps on how to do this. + +To add your subscription using your service account credentials, you must first: + +* Have link:https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/creating_and_managing_service_accounts/proc-ciam-svc-acct-overview-creating-service-acct#proc-ciam-svc-acct-create-creating-service-acct[created a service account] and saved the client ID and client secret. +* Have added your service account to the Subscription viewer user group to give it the ability to see your subscriptions. See the "Updates to subscription management" section in the Knowledgebase article link:https://access.redhat.com/articles/7112649[Configure {PlatformNameShort} to authenticate through service account credentials] for instructions on how to do so. + +.Procedure + +To add your subscription by uploading a subscription manifest: + +* Drag the file to the field beneath *Red Hat subscription manifest* or browse for the file on your local machine. + +To add your subscription with your service account credentials: + +. Click the *Service Account / Red Hat Satellite* tab. +. Enter the *client ID* you received when you created your service account in the *Client ID / Satellite username* field. +. Enter the *client secret* you received when you created your service account in the *Client secret / Satellite password* field. +Your subscription appears in the *Subscription* list. +. Select your subscription. +. After you have added your subscription, click btn:[Next]. +. Check the box indicating that you agree to the *End User License Agreement*. +. Review your information and click btn:[Finish]. + +[NOTE] +==== +If you enter your client ID and client secret but cannot locate your subscription, you might not have the correct permissions set on your service account.
For more information and troubleshooting guidance for service accounts, see link:https://access.redhat.com/articles/7112649[Configure Ansible Automation Platform to authenticate through service account credentials]. +==== + +[TIP] +==== +After logging in, review the quick starts section in the navigation panel for useful guidance. +==== diff --git a/downstream/modules/platform/proc-apply-selinux-context.adoc b/downstream/modules/platform/proc-apply-selinux-context.adoc index a1a8b43e37..598a5677b0 100644 --- a/downstream/modules/platform/proc-apply-selinux-context.adoc +++ b/downstream/modules/platform/proc-apply-selinux-context.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-apply-selinux-context"] = Applying the SELinux context diff --git a/downstream/modules/platform/proc-attaching-subscriptions.adoc b/downstream/modules/platform/proc-attaching-subscriptions.adoc new file mode 100644 index 0000000000..a49989b8c0 --- /dev/null +++ b/downstream/modules/platform/proc-attaching-subscriptions.adoc @@ -0,0 +1,75 @@ +:_mod-docs-content-type: PROCEDURE + +// emurtoug removed this assembly from the Planning guide to avoid duplication of subscription content added to Access management and authentication + +[id="proc-attaching-subscriptions"] + += Attaching your {PlatformName} subscription + +[role="_abstract"] +You *must* have valid subscriptions attached on all nodes before installing {PlatformName}. Attaching your {PlatformNameShort} subscription provides access to subscription-only resources necessary to proceed with the installation. + +//[ddacosta] Removing this note until it can be verified that SCA is available with AAP +// [NOTE] +// ==== +// Attaching a subscription is unnecessary if you have enabled Simple Content Access Mode on your Red Hat account. Once enabled, you will need to register your systems to either Red Hat Subscription Management (RHSM) or Satellite before installing the {PlatformNameShort}. For more information, see link:https://access.redhat.com/articles/simple-content-access[Simple Content Access]. +// ==== + +.Procedure + +. Make sure your system is registered: ++ +----- +$ sudo subscription-manager register --username <$INSERT_USERNAME_HERE> --password <$INSERT_PASSWORD_HERE> +----- ++ +. Obtain the `pool_id` for your {PlatformName} subscription: ++ +----- +$ sudo subscription-manager list --available --all | grep "Ansible Automation Platform" -B 3 -A 6 +----- ++ +[NOTE] +==== +Do not use MCT4022 as a `pool_id` for your subscription because it can cause {PlatformNameShort} subscription attachment to fail. +==== ++ +. Attach the subscription: ++ +----- +$ sudo subscription-manager attach --pool=<pool_id> +----- ++ +You have now attached your {PlatformName} subscriptions to all nodes. ++ +.
To remove this subscription, enter the following command: ++ +----- +$ sudo subscription-manager remove --pool=<pool_id> +----- + +.Verification + +* Verify the subscription was successfully attached: + +----- +$ sudo subscription-manager list --consumed +----- + +.Troubleshooting + +* If you are unable to locate certain packages that came bundled with the {PlatformNameShort} installer, or if you are seeing a `_Repositories disabled by configuration_` message, try enabling the repository by using the command: ++ +{PlatformName} {PlatformVers} for RHEL 8 ++ +[literal, options="nowrap" subs="+attributes"] +---- +$ sudo subscription-manager repos --enable ansible-automation-platform-{PlatformVers}-for-rhel-8-x86_64-rpms +---- ++ +{PlatformName} {PlatformVers} for RHEL 9 ++ +[literal, options="nowrap" subs="+attributes"] +---- +$ sudo subscription-manager repos --enable ansible-automation-platform-{PlatformVers}-for-rhel-9-x86_64-rpms +---- diff --git a/downstream/modules/platform/proc-backup-aap-container.adoc b/downstream/modules/platform/proc-backup-aap-container.adoc new file mode 100644 index 0000000000..3da00566f9 --- /dev/null +++ b/downstream/modules/platform/proc-backup-aap-container.adoc @@ -0,0 +1,60 @@ +:_mod-docs-content-type: PROCEDURE + +[id="backing-up-containerized-ansible-automation-platform"] + += Backing up containerized {PlatformNameShort} + +Perform a backup of your {ContainerBase} of {PlatformNameShort}. + +.Prerequisites + +* You have logged in to the {RHEL} host as your dedicated non-root user. + +.Procedure + +. Go to the {PlatformName} installation directory on your {RHEL} host. + +. To control compression of the backup artifacts before they are sent to the host running the backup operation, you can use the following variables in your inventory file: +.. For control of compression for filesystem-related backup files: ++ +---- +# For global control of compression for filesystem related backup files +use_archive_compression=true + +# For component-level control of compression for filesystem related backup files +#controller_use_archive_compression=true +#eda_use_archive_compression=true +#gateway_use_archive_compression=true +#hub_use_archive_compression=true +#pcp_use_archive_compression=true +#postgresql_use_archive_compression=true +#receptor_use_archive_compression=true +#redis_use_archive_compression=true +---- ++ +.. For control of compression for database-related backup files: ++ +---- +# For global control of compression for database related backup files +use_db_compression=true

# For component-level control of compression for database related backup files +#controller_use_db_compression=true +#eda_use_db_compression=true +#hub_use_db_compression=true +#gateway_use_db_compression=true +---- + +. Run the `backup` playbook: ++ +---- +$ ansible-playbook -i <path_to_inventory> ansible.containerized_installer.backup +---- ++ +This backs up the important data deployed by the containerized installer, such as: ++ +* PostgreSQL databases +* Configuration files +* Data files + +. By default, the backup directory is set to `./backups`. You can change this by using the `backup_dir` variable in your `inventory` file.
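+For example, a minimal sketch of an inventory override (the `/opt/aap-backups` path is an assumption; choose any directory with enough free space for your data):
+
+----
+[all:vars]
+# Write backup artifacts to a custom location instead of the default ./backups
+backup_dir=/opt/aap-backups
+----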
diff --git a/downstream/modules/platform/proc-backup-aap-rpm.adoc b/downstream/modules/platform/proc-backup-aap-rpm.adoc new file mode 100644 index 0000000000..a2c4ec3e7d --- /dev/null +++ b/downstream/modules/platform/proc-backup-aap-rpm.adoc @@ -0,0 +1,25 @@ +:_mod-docs-content-type: PROCEDURE + +[id="proc-backup-aap-rpm"] + += Backing up RPM-based {PlatformNameShort} + +Back up an existing {PlatformNameShort} instance by running the `setup.sh` script with the `backup_dir` variable, which saves the content and configuration of your current environment: + +. Go to your {PlatformNameShort} installation directory. + +. Run the `setup.sh` script following the example below: ++ +---- +$ ./setup.sh -e 'backup_dir=/ansible/mybackup' -e 'use_compression=True' -e @credentials.yml -b +---- ++ +* `backup_dir` specifies a directory to save your backup to. ++ +* `-e @credentials.yml` passes the password variables and their values that are encrypted by `ansible-vault`. + +With a successful backup, a backup file is created at `/ansible/mybackup/.tar.gz`. + +.Additional resources + +* For more information about backing up and restoring, see link:{URLControllerAdminGuide}/controller-backup-and-restore[Backup and restore] in _{TitleControllerAdminGuide}_. diff --git a/downstream/modules/platform/proc-benchmark-postgresql.adoc b/downstream/modules/platform/proc-benchmark-postgresql.adoc index facfe94a8e..34bbb03f04 100644 --- a/downstream/modules/platform/proc-benchmark-postgresql.adoc +++ b/downstream/modules/platform/proc-benchmark-postgresql.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="benchmark-postgresql"] = Benchmarking storage performance for the {PlatformNameShort} PostgreSQL database @@ -5,7 +7,7 @@ Check whether the minimum {PlatformNameShort} PostgreSQL database requirements are met by using the Flexible I/O Tester (FIO) tool. FIO is a tool used to benchmark read and write IOPS performance of the storage system. .Prerequisites - * You have installed the Flexible I/O Tester (`fio`) storage performance benchmarking tool. +* You have installed the Flexible I/O Tester (`fio`) storage performance benchmarking tool. + To install `fio`, run the following command as the root user: + @@ -64,5 +66,8 @@ read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-40 […] ---- + -You must review, monitor, and revisit the log files according to your own business requirements, application workloads, and new demands. +[NOTE] +==== +This baseline helps you evaluate best-case performance on your systems. Performance can vary depending on what else is happening on your systems, storage, or network at the time of testing. You must review, monitor, and revisit the log files according to your own business requirements, application workloads, and new demands.
+==== diff --git a/downstream/modules/platform/proc-change-ssl-controller-ocp.adoc b/downstream/modules/platform/proc-change-ssl-controller-ocp.adoc index 65ac3afb44..5907a23e43 100644 --- a/downstream/modules/platform/proc-change-ssl-controller-ocp.adoc +++ b/downstream/modules/platform/proc-change-ssl-controller-ocp.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="change-ssl-controller-ocp_{context}"] = Changing the SSL certificate and key on {ControllerName} on {OCPShort} diff --git a/downstream/modules/platform/proc-change-ssl-controller.adoc b/downstream/modules/platform/proc-change-ssl-controller.adoc index 9365caf910..fc98d7d931 100644 --- a/downstream/modules/platform/proc-change-ssl-controller.adoc +++ b/downstream/modules/platform/proc-change-ssl-controller.adoc @@ -1,9 +1,11 @@ +:_mod-docs-content-type: PROCEDURE + [id="change-ssl-controller_{context}"] = Changing the SSL certificate and key manually on {ControllerName} [role="_abstract"] -The following procedure describes how to change the SSL certificate and key manually on Automation Controller. +The following procedure describes how to change the SSL certificate and key manually on {ControllerName}. .Procedure diff --git a/downstream/modules/platform/proc-change-ssl-eda-controller.adoc b/downstream/modules/platform/proc-change-ssl-eda-controller.adoc index 8608c2a306..a73be4e2bb 100644 --- a/downstream/modules/platform/proc-change-ssl-eda-controller.adoc +++ b/downstream/modules/platform/proc-change-ssl-eda-controller.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="change-ssl-eda-controller_{context}"] = Changing the SSL certificate and key on {EDAcontroller} diff --git a/downstream/modules/platform/proc-change-ssl-hub-ocp.adoc b/downstream/modules/platform/proc-change-ssl-hub-ocp.adoc index 5aee793698..d6e14c682f 100644 --- a/downstream/modules/platform/proc-change-ssl-hub-ocp.adoc +++ b/downstream/modules/platform/proc-change-ssl-hub-ocp.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="change-ssl-hub-ocp_{context}"] = Changing the SSL certificate and key for {HubName} on {OCPShort} diff --git a/downstream/modules/platform/proc-change-ssl-hub.adoc b/downstream/modules/platform/proc-change-ssl-hub.adoc index c7314db1f2..bbc7e9f60c 100644 --- a/downstream/modules/platform/proc-change-ssl-hub.adoc +++ b/downstream/modules/platform/proc-change-ssl-hub.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="change-ssl-hub_{context}"] = Changing the SSL certificate and key manually on {HubName} diff --git a/downstream/modules/platform/proc-change-ssl-installer.adoc b/downstream/modules/platform/proc-change-ssl-installer.adoc index 4945182417..6d440fa98d 100644 --- a/downstream/modules/platform/proc-change-ssl-installer.adoc +++ b/downstream/modules/platform/proc-change-ssl-installer.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="change-ssl-installer_{context}"] = Changing the SSL certificate and key using the installer @@ -9,10 +11,7 @@ The following procedure describes how to change the SSL certificate and key in t . Copy the new SSL certificates and keys to a path relative to the {PlatformNameShort} installer. . Add the absolute paths of the SSL certificates and keys to the inventory file. 
-Refer to the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/index#ref-hub-variables[{ControllerNameStart} variables], -link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/index#ref-controller-variables[{HubNameStart} variables], and link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/appendix-inventory-files-vars#event-driven-ansible-controller[{EDAcontroller} variables] sections of the -link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/index[{PlatformName} Installation Guide] -for guidance on setting these variables. +Refer to the link:{URLInstallationGuide}/appendix-inventory-files-vars#controller-variables[{ControllerNameStart} variables], link:{URLInstallationGuide}/appendix-inventory-files-vars#hub-variables[{HubNameStart} variables], and link:{URLInstallationGuide}/appendix-inventory-files-vars#event-driven-ansible-variables[{EDAcontroller} variables] sections of link:{LinkInstallationGuide} for guidance on setting these variables. + -- ** {ControllerNameStart}: `web_server_ssl_cert`, `web_server_ssl_key`, `custom_ca_cert` @@ -25,5 +24,5 @@ for guidance on setting these variables. The `custom_ca_cert` must be the root certificate authority that signed the intermediate certificate authority. This file is installed in `/etc/pki/ca-trust/source/anchors`. ==== -. Run the installer. +. Run the installation program. diff --git a/downstream/modules/platform/proc-choosing-obtaining-installer-no-internet.adoc b/downstream/modules/platform/proc-choosing-obtaining-installer-no-internet.adoc new file mode 100644 index 0000000000..301106a5cb --- /dev/null +++ b/downstream/modules/platform/proc-choosing-obtaining-installer-no-internet.adoc @@ -0,0 +1,18 @@ +:_mod-docs-content-type: PROCEDURE + + +[id="proc-choosing-obtaining-installer-no-internet_{context}"] + += Installing without internet access + +Use the {PlatformName} *Bundle* installer if you are unable to access the internet, or would prefer not to install separate components and dependencies from online repositories. Access to {RHEL} repositories is still needed. All other dependencies are included in the tar archive. + +.Procedure + +. Navigate to the link:{PlatformDownloadUrl}[{PlatformName} download] page. +. In the *Product software* tab, click btn:[Download Now] for the *Ansible Automation Platform Setup Bundle*. +. Extract the files: ++ +----- +$ tar xvzf ansible-automation-platform-setup-bundle-.tar.gz +----- diff --git a/downstream/modules/platform/proc-choosing-obtaining-installer.adoc b/downstream/modules/platform/proc-choosing-obtaining-installer.adoc index 9050b7395b..2da6500fcc 100644 --- a/downstream/modules/platform/proc-choosing-obtaining-installer.adoc +++ b/downstream/modules/platform/proc-choosing-obtaining-installer.adoc @@ -1,6 +1,8 @@ +:_mod-docs-content-type: PROCEDURE -// [id="proc-choosing-obtaining-installer_{context}"] + +[id="proc-choosing-obtaining-installer_{context}"] = Choosing and obtaining a {PlatformName} installer @@ -13,14 +15,16 @@ Choose the {PlatformName} installer you need based on your {RHEL} environment in A valid Red Hat customer account is required to access {PlatformName} installer downloads on the Red Hat Customer Portal. 
==== -.Installing with internet access +== Installing with internet access Choose the {PlatformName} installer if your {RHEL} environment is connected to the internet. Installing with internet access retrieves the latest required repositories, packages, and dependencies. Choose one of the following ways to set up your {PlatformNameShort} installer. *Tarball install* +.Procedure + . Navigate to the link:{PlatformDownloadUrl}[{PlatformName} download] page. -. Click btn:[Download Now] for the *Ansible Automation Platform Setup*. +. In the *Product software* tab, click btn:[Download Now] for the *Ansible Automation Platform Setup*. . Extract the files: + ----- @@ -29,33 +33,24 @@ $ tar xvzf ansible-automation-platform-setup-.tar.gz *RPM install* -. Install {PlatformNameShort} Installer Package +.Procedure + +. Install the {PlatformNameShort} Installer Package. + -v.{PlatformVers} for RHEL 8 for x86_64 +v.{PlatformVers} for RHEL 8 for x86_64: + ---- -$ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-automation-platform-installer +$ sudo dnf install --enablerepo=ansible-automation-platform-2.5-for-rhel-8-x86_64-rpms ansible-automation-platform-installer ---- + -v.{PlatformVers} for RHEL 9 for x86-64 +v.{PlatformVers} for RHEL 9 for x86-64: + ---- -$ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-automation-platform-installer +$ sudo dnf install --enablerepo=ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms ansible-automation-platform-installer ---- - [NOTE] +==== The `dnf install` command enables the repository because it is disabled by default. +==== -When you use the RPM installer, the files are placed under the `/opt/ansible-automation-platform/installer` directory. - -.Installing without internet access - -Use the {PlatformName} *Bundle* installer if you are unable to access the internet, or would prefer not to install separate components and dependencies from online repositories. Access to {RHEL} repositories is still needed. All other dependencies are included in the tar archive. - -. Navigate to the link:{PlatformDownloadUrl}[{PlatformName} download] page. -. Click btn:[Download Now] for the *Ansible Automation Platform Setup Bundle*. -. Extract the files: -+ ------ -$ tar xvzf ansible-automation-platform-setup-bundle-.tar.gz ------ +When you use the RPM installer, the files are placed under the `/opt/ansible-automation-platform/installer` directory. \ No newline at end of file diff --git a/downstream/modules/platform/proc-cli-get-controller-address.adoc b/downstream/modules/platform/proc-cli-get-controller-address.adoc new file mode 100644 index 0000000000..5182058794 --- /dev/null +++ b/downstream/modules/platform/proc-cli-get-controller-address.adoc @@ -0,0 +1,33 @@ +:_mod-docs-content-type: PROCEDURE + + +[id="proc-cli-get-controller-address{context}"] + += Fetching the {Gateway} web address + +A {OCP} route exposes a service at a host name, so that external clients can reach it by name. +When you created the {Gateway} instance, a route was created for it. +The route inherits the name that you assigned to the {Gateway} object in the YAML file. + +.Procedure + +* Use the following command to fetch the routes: ++ +[subs="+quotes"] +----- +oc get routes -n ____ +----- ++ + +.Verification + +In the following example, you can see that the `_example_` {Gateway} is running in the `_ansible-automation-platform_` namespace.
+ +----- +$ oc get routes -n ansible-automation-platform + +NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD +example example-ansible-automation-platform.apps-crc.testing example-service http edge/Redirect None +----- + +The address for the {Gateway} instance is `example-ansible-automation-platform.apps-crc.testing`. diff --git a/downstream/modules/platform/proc-cli-get-controller-pwd-decode.adoc b/downstream/modules/platform/proc-cli-get-controller-pwd-decode.adoc new file mode 100644 index 0000000000..0730e888fd --- /dev/null +++ b/downstream/modules/platform/proc-cli-get-controller-pwd-decode.adoc @@ -0,0 +1,14 @@ +:_mod-docs-content-type: PROCEDURE + + +[id="proc-cli-get-controller-pwd-decode{context}"] + += Decoding the {Gateway} password + +After you have fetched your {Gateway} password, you must decode it from base64. + +* Run the following command to decode your password from base64: ++ +---- +oc get secret/example-admin-password -o jsonpath={.data.password} | base64 --decode +---- + diff --git a/downstream/modules/platform/proc-cli-get-controller-pwd.adoc b/downstream/modules/platform/proc-cli-get-controller-pwd.adoc index 5426d2a5a2..29d0d2589e 100644 --- a/downstream/modules/platform/proc-cli-get-controller-pwd.adoc +++ b/downstream/modules/platform/proc-cli-get-controller-pwd.adoc @@ -1,56 +1,31 @@ +:_mod-docs-content-type: PROCEDURE + // Used in // assemblies/platform/assembly-installing-aap-operator-cli.adoc // titles/aap-operator-installation/ [id="proc-cli-get-controller-pwd{context}"] -= Fetching {ControllerNameStart} login details from the {OCPShort} CLI - -To login to the {ControllerNameStart}, you need the web address and the password. - -== Fetching the {ControllerName} web address - -A {OCP} route exposes a service at a host name, so that external clients can reach it by name. -When you created the {ControllerName} instance, a route was created for it. -The route inherits the name that you assigned to the {ControllerName} object in the YAML file. - -Use the following command to fetch the routes: - -[subs="+quotes"] ------ -oc get routes -n ____ ------ - -In the following example, the `_example_` {ControllerName} is running in the `_ansible-automation-platform_` namespace. - ------ -$ oc get routes -n ansible-automation-platform - -NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD -example example-ansible-automation-platform.apps-crc.testing example-service http edge/Redirect None ------ - -The address for the {ControllerName} instance is `example-ansible-automation-platform.apps-crc.testing`. += Fetching the {Gateway} password -== Fetching the {ControllerName} password - -The YAML block for the {ControllerName} instance in [filename]`sub.yaml` assigns values to the _name_ and _admin_user_ keys. -Use these values in the following command to fetch the password for the {ControllerName} instance. +The YAML block for the {Gateway} instance in the `AnsibleAutomationPlatform` object assigns values to the _name_ and _admin_user_ keys. +. Use these values in the following command to fetch the password for the {Gateway} instance. ++ ----- -oc get secret/--password -o yaml +oc get secret/<name>-<admin_user>-password -o yaml ----- ++ +. The default value for _admin_user_ is `_admin_`.
Modify the command if you changed the admin username in the `AnsibleAutomationPlatform` object. ++ +The following example retrieves the password for a {Gateway} object called `_example_`: ++ ----- oc get secret/example-admin-password -o yaml ----- - -The password for the {ControllerName} instance is listed in the `metadata` field in the output: - ++ +The base64-encoded password for the {Gateway} instance is listed in the `data` field in the output: ++ ----- $ oc get secret/example-admin-password -o yaml @@ -59,20 +34,12 @@ data: password: ODzLODzLODzLODzLODzLODzLODzLODzLODzLODzLODzL kind: Secret metadata: - annotations: - kubectl.kubernetes.io/last-applied-configuration: '{"apiVersion":"v1","kind":"Secret","metadata":{"labels":{"app.kubernetes.io/component":"automationcontroller","app.kubernetes.io/managed-by":"automationcontroller-operator","app.kubernetes.io/name":"example","app.kubernetes.io/operator-version":"","app.kubernetes.io/part-of":"example"},"name":"example-admin-password","namespace":"ansible-automation-platform"},"stringData":{"password":"88TG88TG88TG88TG88TG88TG88TG88TG"}}' - creationTimestamp: "2021-11-03T00:02:24Z" labels: - app.kubernetes.io/component: automationcontroller - app.kubernetes.io/managed-by: automationcontroller-operator + app.kubernetes.io/component: aap app.kubernetes.io/name: example app.kubernetes.io/operator-version: "" app.kubernetes.io/part-of: example name: example-admin-password namespace: ansible-automation-platform - resourceVersion: "185185" - uid: 39393939-5252-4242-b929-665f665f665f - ------ -For this example, the password is `88TG88TG88TG88TG88TG88TG88TG88TG`. +----- diff --git a/downstream/modules/platform/proc-completing-post-installation-tasks.adoc b/downstream/modules/platform/proc-completing-post-installation-tasks.adoc index 12dc9cf221..44cd4a1238 100644 --- a/downstream/modules/platform/proc-completing-post-installation-tasks.adoc +++ b/downstream/modules/platform/proc-completing-post-installation-tasks.adoc @@ -1,77 +1,21 @@ +:_mod-docs-content-type: PROCEDURE + [id="completing-post-installation-tasks_{context}"] = Completing post installation tasks [role="_abstract"] -After you have completed the installation of {PlatformNameShort}, ensure that {HubName} and {ControllerName} deploy properly. - - -== Adding a controller subscription - - - -.Procedure - -. Navigate to the FQDN of the {ControllerNameStart}. Log in with the username admin and the password you specified as `admin_password` in your inventory file. - -. Click btn:[Browse] and select the __manifest.zip__ you created earlier. - -. Click btn:[Next]. - -. Uncheck btn:[User analytics] and btn:[Automation analytics]. These rely on an internet connection and must be turned off. - -. Click btn:[Next]. - -. Read the End User License Agreement and click btn:[Submit] if you agree. -== Updating the CA trust store -As part of your post-installation tasks, you must update the software's certificates. -By default, {PlatformNameShort} {HubName} and {ControllerName} are installed using self-signed certificates. Because of this, the controller does not trust the hub’s certificate and will not download the {ExecEnvShort} from the hub. -To ensure that automation controller downloads the execution environment from automation hub, you must import the hub’s Certificate Authority (CA) certificate as a trusted certificate on the controller. You can do this in one of two ways, depending on whether SSH is available as root user between {ControllerName} and {PrivateHubName}.
-
-=== Using secure copy (SCP) as a root user
-
-If SSH is available as the root user between the controller and {PrivateHubName}, use SCP to copy the root certificate on the {PrivateHubName} to the controller.
-
-
-.Procedure
-
- . Run `update-ca-trust` on the controller to update the CA trust store:
-
-----
-$ sudo -i
-# scp :/etc/pulp/certs/root.crt
-/etc/pki/ca-trust/source/anchors/automationhub-root.crt
-# update-ca-trust
-----
-
-=== Copying and pasting as a non root user
-
-If SSH is unavailable as root between the {PrivateHubName} and the controller, copy the contents of the file __/etc/pulp/certs/root.crt__ on the {PrivateHubName} and paste it into a new file on the controller called __/etc/pki/ca-trust/source/anchors/automationhub-root.crt__.
-
-.Procedure
-
-. Run `update-ca-trust` to update the CA trust store with the new certificate. On the {PrivateHubName}, run:
+After you have completed the installation of {PlatformNameShort}, ensure that {HubName} and {ControllerName} deploy properly.

-----
-$ sudo -i
-# cat /etc/pulp/certs/root.crt
-(copy the contents of the file, including the lines with 'BEGIN CERTIFICATE' and
-'END CERTIFICATE')
-----
+Before your first login, you must add your subscription information to the platform. To obtain your subscription information in uploadable form, see link:{URLCentralAuth}/assembly-gateway-licensing#assembly-aap-obtain-manifest-files[Obtaining a manifest file] in _{TitleCentralAuth}_.

-. On the {ControllerName}:
+Once you have obtained your subscription manifest, see link:{LinkGettingStarted} for instructions on how to upload your subscription information.

-----
-$ sudo -i
-# vi /etc/pki/ca-trust/source/anchors/automationhub-root.crt
-(paste the contents of the root.crt file from the private automation hub into the new file and write to disk)
-# update-ca-trust
-----
+Now that you have successfully installed Ansible Automation Platform, to begin using its features, see the following guides for your next steps:
+link:{LinkGettingStarted}.

-.Additional Resources
+link:{LinkHubManagingContent}.

-* For further information on unknown certificate authority, see link:https://access.redhat.com/solutions/6707451[Project sync fails with unknown certificate authority error in {PlatformNameShort} 2.1].
+link:{LinkBuilder}.
diff --git a/downstream/modules/platform/proc-configure-a-config-map.adoc b/downstream/modules/platform/proc-configure-a-config-map.adoc
new file mode 100644
index 0000000000..add3d5833a
--- /dev/null
+++ b/downstream/modules/platform/proc-configure-a-config-map.adoc
@@ -0,0 +1,49 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-configure-a-config-map"]
+
+= Creating a ConfigMap in the OpenShift UI YAML view
+
+To inject the `metrics-utility` cronjobs with configuration data, use the following procedure to create a ConfigMap in the OpenShift UI YAML view:
+
+.Prerequisites
+* A running OpenShift cluster.
+* An operator-based installation of {PlatformNameShort} on {OCPShort}.
+
+[NOTE]
+====
+Metrics-utility runs as indicated by the parameters you set in the configuration file.
+You cannot run the utility manually on {OCPShort}.
+====
+
+.Procedure
+. From the navigation panel, select menu:ConfigMaps[].
+. Click btn:[Create ConfigMap].
+. On the next screen, select the *YAML view* tab.
+. In the YAML field, enter the following parameters with the appropriate variables set:
++
+----
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: automationcontroller-metrics-utility-configmap
+data:
+  METRICS_UTILITY_SHIP_TARGET: directory
+  METRICS_UTILITY_SHIP_PATH: /metrics-utility
+  METRICS_UTILITY_REPORT_TYPE: CCSP
+  METRICS_UTILITY_PRICE_PER_NODE: '11' # in USD
+  METRICS_UTILITY_REPORT_SKU: MCT3752MO
+  METRICS_UTILITY_REPORT_SKU_DESCRIPTION: "EX: Red Hat Ansible Automation Platform, Full Support (1 Managed Node, Dedicated, Monthly)"
+  METRICS_UTILITY_REPORT_H1_HEADING: "CCSP Reporting : ANSIBLE Consumption"
+  METRICS_UTILITY_REPORT_COMPANY_NAME: "Company Name"
+  METRICS_UTILITY_REPORT_EMAIL: "email@email.com"
+  METRICS_UTILITY_REPORT_RHN_LOGIN: "test_login"
+  METRICS_UTILITY_REPORT_COMPANY_BUSINESS_LEADER: "BUSINESS LEADER"
+  METRICS_UTILITY_REPORT_COMPANY_PROCUREMENT_LEADER: "PROCUREMENT LEADER"
+----
++
+. Click btn:[Create].
+
+.Verification
+
+* To verify that you created the ConfigMap and that the metrics utility is installed, select *ConfigMaps* from the navigation panel and look for your ConfigMap in the list.
diff --git a/downstream/modules/platform/proc-configure-controller-OCP.adoc b/downstream/modules/platform/proc-configure-controller-OCP.adoc
index b7cd1b2d6d..5ccf86e96b 100644
--- a/downstream/modules/platform/proc-configure-controller-OCP.adoc
+++ b/downstream/modules/platform/proc-configure-controller-OCP.adoc
@@ -1,4 +1,6 @@
-[id="configure-controller-OCP"]
+:_mod-docs-content-type: PROCEDURE
+
+[id="configure-controller-OCP_{context}"]

= Minimizing downtime during {OCPShort} upgrade

@@ -6,12 +8,12 @@ Make the following configuration changes in {ControllerName} to minimize downtim

.Prerequisites

-* {PlatformNameShort} 2.4
-* Ansible {ControllerName} 4.4
-* {OCPShort}
-** > 4.10.42
-** > 4.11.16
-** > 4.12.0
+* {PlatformNameShort} 2.4 or later
+* Ansible {ControllerName} 4.4 or later
+* {OCPShort}:
+** Later than 4.10.42
+** Later than 4.11.16
+** Later than 4.12.0
* High availability (HA) deployment of Postgres
* Multiple worker nodes that {ControllerName} pods can be scheduled on
diff --git a/downstream/modules/platform/proc-configure-ext-db-mtls.adoc b/downstream/modules/platform/proc-configure-ext-db-mtls.adoc
new file mode 100644
index 0000000000..665ff18b3a
--- /dev/null
+++ b/downstream/modules/platform/proc-configure-ext-db-mtls.adoc
@@ -0,0 +1,37 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="configure-ext-db-mtls"]
+= Optional: Configuring mutual TLS (mTLS) authentication for an external database
+
+mTLS authentication is disabled by default.
To configure mTLS authentication for each component's database, ensure that each component uses a different TLS certificate and key, and then complete the following step.
+
+.Procedure
+
+* Add the following variables to your inventory file under the `[all:vars]` group:
++
+[source,yaml,subs="+attributes"]
+----
+# {GatewayStart}
+gateway_pg_cert_auth=true
+gateway_pg_tls_cert=/path/to/gateway.cert
+gateway_pg_tls_key=/path/to/gateway.key
+gateway_pg_sslmode=verify-full
+
+# {ControllerNameStart}
+controller_pg_cert_auth=true
+controller_pg_tls_cert=/path/to/awx.cert
+controller_pg_tls_key=/path/to/awx.key
+controller_pg_sslmode=verify-full
+
+# {HubNameStart}
+hub_pg_cert_auth=true
+hub_pg_tls_cert=/path/to/pulp.cert
+hub_pg_tls_key=/path/to/pulp.key
+hub_pg_sslmode=verify-full
+
+# {EDAName}
+eda_pg_cert_auth=true
+eda_pg_tls_cert=/path/to/eda.cert
+eda_pg_tls_key=/path/to/eda.key
+eda_pg_sslmode=verify-full
+----
diff --git a/downstream/modules/platform/proc-configure-haproxy-load-balancer.adoc b/downstream/modules/platform/proc-configure-haproxy-load-balancer.adoc
new file mode 100644
index 0000000000..6325bb3167
--- /dev/null
+++ b/downstream/modules/platform/proc-configure-haproxy-load-balancer.adoc
@@ -0,0 +1,16 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="configuring-haproxy-load-balancer"]
+= Configuring an HAProxy load balancer
+
+To configure an HAProxy load balancer in front of {Gateway} with a custom CA cert, set the following inventory file variables under the `[all:vars]` group:
+
+----
+custom_ca_cert=<path_to_custom_ca_cert>
+gateway_main_url=<gateway_main_url>
+----
+
+[NOTE]
+====
+HAProxy SSL passthrough mode is not supported with {Gateway}.
+====
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-configure-hub-azure-storage.adoc b/downstream/modules/platform/proc-configure-hub-azure-storage.adoc
new file mode 100644
index 0000000000..2d59dcd46a
--- /dev/null
+++ b/downstream/modules/platform/proc-configure-hub-azure-storage.adoc
@@ -0,0 +1,30 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="configure-hub-azure-storage"]
+= Configuring Azure Blob Storage for {HubName}
+
+Azure Blob storage is a type of object storage that is supported in containerized installations. When using an Azure Blob storage backend, set `hub_storage_backend` to `azure`. The Azure container needs to exist before running the installation program.
+
+.Procedure
+
+. Ensure your Azure container exists before proceeding with the installation.
+. Add the following variables to your inventory file under the `[all:vars]` group to configure Azure Blob storage:
++
+* `hub_azure_account_key`
+* `hub_azure_account_name`
+* `hub_azure_container`
+* `hub_azure_extra_settings`
++
+You can pass extra parameters through an Ansible `hub_azure_extra_settings` dictionary.
For example: ++ +[source,yaml,subs="+attributes"] +---- +hub_azure_extra_settings: + AZURE_LOCATION: foo + AZURE_SSL: True + AZURE_URL_EXPIRATION_SECS: 60 +---- + +[role="_additional-resources"] +.Additional resources +* link:https://django-storages.readthedocs.io/en/latest/backends/azure.html#settings[django-storages documentation - Azure Storage] diff --git a/downstream/modules/platform/proc-configure-hub-nfs-storage.adoc b/downstream/modules/platform/proc-configure-hub-nfs-storage.adoc new file mode 100644 index 0000000000..556972b456 --- /dev/null +++ b/downstream/modules/platform/proc-configure-hub-nfs-storage.adoc @@ -0,0 +1,18 @@ +:_mod-docs-content-type: PROCEDURE + +[id="configure-hub-nfs-storage"] += Configuring Network File System (NFS) storage for {HubName} + +NFS is a type of shared storage that is supported in containerized installations. Shared storage is required when installing more than one instance of {HubName} with a `file` storage backend. When installing a single instance of the {HubName}, shared storage is optional. + +.Procedure + +. To configure shared storage for {HubName}, set the `hub_shared_data_path` variable in your inventory file: ++ +[source,yaml,subs="+attributes"] +---- +hub_shared_data_path= +---- ++ +The value must match the format `host:dir`, for example `nfs-server.example.com:/exports/hub`. +. (Optional) To change the mount options for your NFS share, use the `hub_shared_data_mount_opts` variable. The default value is `rw,sync,hard`. diff --git a/downstream/modules/platform/proc-configure-hub-s3-storage.adoc b/downstream/modules/platform/proc-configure-hub-s3-storage.adoc new file mode 100644 index 0000000000..e8cde7ead0 --- /dev/null +++ b/downstream/modules/platform/proc-configure-hub-s3-storage.adoc @@ -0,0 +1,30 @@ +:_mod-docs-content-type: PROCEDURE + +[id="configure-hub-s3-storage"] += Configuring Amazon S3 storage for {HubName} + +Amazon S3 storage is a type of object storage that is supported in containerized installations. When using an AWS S3 storage backend, set `hub_storage_backend` to `s3`. The AWS S3 bucket needs to exist before running the installation program. + +.Procedure + +. Ensure your AWS S3 bucket exists before proceeding with the installation. +. Add the following variables to your inventory file under the `[all:vars]` group to configure S3 storage: ++ +* `hub_s3_access_key` +* `hub_s3_secret_key` +* `hub_s3_bucket_name` +* `hub_s3_extra_settings` ++ +You can pass extra parameters through an Ansible `hub_s3_extra_settings` dictionary. For example: ++ +[source,yaml,subs="+attributes"] +---- +hub_s3_extra_settings: + AWS_S3_MAX_MEMORY_SIZE: 4096 + AWS_S3_REGION_NAME: eu-central-1 + AWS_S3_USE_SSL: True +---- + +[role="_additional-resources"] +.Additional resources +* link:https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#settings[django-storages documentation - Amazon S3] diff --git a/downstream/modules/platform/proc-configure-known-proxies.adoc b/downstream/modules/platform/proc-configure-known-proxies.adoc index 00625167cb..c63a48776c 100644 --- a/downstream/modules/platform/proc-configure-known-proxies.adoc +++ b/downstream/modules/platform/proc-configure-known-proxies.adoc @@ -1,3 +1,4 @@ +:_mod-docs-content-type: PROCEDURE [id="proc-configuring-known-proxies_{context}"] @@ -5,26 +6,28 @@ [role="_abstract"] -To configure a list of known proxies for your {ControllerName}, add the proxy IP addresses to the *PROXY_IP_ALLOWED_LIST* field in the settings page for your {ControllerName}. 
+To configure a list of known proxies for your {ControllerName}, add the proxy IP addresses to the *Proxy IP Allowed List* field in the System Settings page.

.Procedure

-//[ddacosta] Need to verify that in 2.5 this is Settings[System]...
-. On your {ControllerName}, navigate to {MenuAEAdminSettings} and select *Miscellaneous System settings* from the list of *System* options.
-. In the *PROXY_IP_ALLOWED_LIST* field, enter IP addresses that are allowed to connect to your {ControllerName}, following the syntax in the example below:
+//[ddacosta] The Settings > System configurations are for controller only, so don't change ControllerName to PlatformName.
+. From the navigation panel, select {MenuSetSystem}.
+. In the *Proxy IP Allowed List* field, enter IP addresses that are permitted to connect to your {ControllerName}, using the syntax in the following example:
+
-.Example *PROXY_IP_ALLOWED_LIST* entry
+.Example *Proxy IP Allowed List* entry
----
[
  "example1.proxy.com:8080",
  "example2.proxy.com:8080"
]
----
-
++
[IMPORTANT]
====
-* `PROXY_IP_ALLOWED_LIST` requires proxies in the list are properly sanitizing header input and correctly setting an ``X-Forwarded-For`` value equal to the real source IP of the client. {ControllerNameStart} can rely on the IP addresses and hostnames in `PROXY_IP_ALLOWED_LIST` to provide non-spoofed values for the `X-Forwarded-For` field.
-* Do not configure `HTTP_X_FORWARDED_FOR` as an item in `REMOTE_HOST_HEADERS`unless *all* of the following conditions are satisfied:
+* *Proxy IP Allowed List* requires that proxies in the list properly sanitize header input and correctly set an `X-Forwarded-For` value equal to the real source IP of the client. {ControllerNameStart} can rely on the IP addresses and hostnames in *Proxy IP Allowed List* to provide non-spoofed values for `X-Forwarded-For`.
+* Do not configure `HTTP_X_FORWARDED_FOR` as an item in *Remote Host Headers* unless *all* of the following conditions are satisfied:
** You are using a proxied environment with SSL termination;
** The proxy provides sanitization or validation of the `X-Forwarded-For` header to prevent client spoofing;
** `/etc/tower/conf.d/remote_host_headers.py` defines `PROXY_IP_ALLOWED_LIST` that contains only the originating IP addresses of trusted proxies or load balancers.
====
++
+. Click btn:[Save] to save the settings.
diff --git a/downstream/modules/platform/proc-configure-pac-enforcement.adoc b/downstream/modules/platform/proc-configure-pac-enforcement.adoc
new file mode 100644
index 0000000000..33c07089d0
--- /dev/null
+++ b/downstream/modules/platform/proc-configure-pac-enforcement.adoc
@@ -0,0 +1,22 @@
+:_newdoc-version: 2.18.4
+:_template-generated: 2025-05-08
+:_mod-docs-content-type: PROCEDURE
+
+[id="configure-enforcement-points_{context}"]
+= Configuring enforcement points
+
+After you have set up your {PlatformNameShort} instance to communicate with the OPA server, you can set up enforcement points where you want the policy to be applied.
+
+You can associate a policy with a job template, an inventory, or an organization. Enforcement then occurs in the following ways:
+
+Organization:: Jobs launched from a template owned by an organization fail if the policy is violated. This configuration provides broad control over automation within organizational boundaries.
+Inventory:: Jobs using an inventory associated with a policy fail if the policy is violated. This configuration allows you to control access to specific infrastructure resources.
+Job template:: Jobs launched from a template associated with a policy fail if the job violates the associated policy. This configuration provides granular control over specific automation tasks.
+
+[NOTE]
+====
+If you do not associate a policy with a resource, policy evaluation does not occur when you run the related job.
+====
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-configure-pac-settings.adoc b/downstream/modules/platform/proc-configure-pac-settings.adoc
new file mode 100644
index 0000000000..29359fb294
--- /dev/null
+++ b/downstream/modules/platform/proc-configure-pac-settings.adoc
@@ -0,0 +1,46 @@
+:_newdoc-version: 2.18.4
+:_template-generated: 2025-05-08
+:_mod-docs-content-type: PROCEDURE
+
+[id="configure-pac-settings_{context}"]
+= Configuring policy enforcement settings
+
+You can specify how your {PlatformNameShort} instance interacts with OPA by modifying your global settings.
+
+.Prerequisites
+* To configure policy enforcement, you must have administrative privileges.
+
+[NOTE]
+====
+If you do not configure the OPA server in your policy settings, policy evaluation does not occur when you run the job.
+====
+
+.Procedure
+. From the navigation panel, select {MenuSetPolicy}.
+. Click *Edit policy settings*.
+. On the *Policy Settings* page, fill out the following fields:
++
+OPA Server hostname:: Enter the name of the host that connects to the OPA service.
+OPA server port:: Enter the port that connects to the OPA service.
+OPA authentication type:: Select the OPA authentication type.
+OPA custom authentication header:: Enter a custom header to append to request headers for OPA authentication.
+OPA request timeout:: Enter the number of seconds until the connection times out.
+OPA request retry count:: Enter the number of times a request can attempt to connect to the OPA service before failing.
++
+. Depending on your authentication type, you might need to fill out the following fields.
+.. If you selected *Token* as your authentication type:
++
+OPA authentication token:: Enter the OPA authentication token.
++
+.. If you selected *Certificate* as your authentication type:
++
+OPA client certificate content:: Enter the content of the client certificate for mTLS authentication.
+OPA client key content:: Enter the client key for mTLS authentication.
+OPA CA certificate content:: Enter the content of the CA certificate for mTLS authentication.
++
+. Beneath the heading labeled *Options*:
++
+Use SSL for OPA connection:: Check this box to enable an SSL connection to the OPA service.
+. Click btn:[Save policy settings].
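+
+.Verification
+To confirm that the OPA service is reachable from your environment, you can query its health API. The following is a minimal sketch, assuming a default OPA deployment that exposes its health endpoint on port 8181; substitute your own hostname and port:
++
+----
+# <opa-server-hostname> is a placeholder; 8181 is OPA's default listening port.
+$ curl -k https://<opa-server-hostname>:8181/health
+{}
+----
++
+An empty JSON object in the response indicates that the OPA server is healthy.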
diff --git a/downstream/modules/platform/proc-configure-pulpcore-service.adoc b/downstream/modules/platform/proc-configure-pulpcore-service.adoc
index 00a79b2190..c8e0762202 100644
--- a/downstream/modules/platform/proc-configure-pulpcore-service.adoc
+++ b/downstream/modules/platform/proc-configure-pulpcore-service.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-configure-pulpcore-service"]

= Configuring pulpcore.service
diff --git a/downstream/modules/platform/proc-configuring-controller-image-pull-policy.adoc b/downstream/modules/platform/proc-configuring-controller-image-pull-policy.adoc
index 0fea68049b..7061b1654e 100644
--- a/downstream/modules/platform/proc-configuring-controller-image-pull-policy.adoc
+++ b/downstream/modules/platform/proc-configuring-controller-image-pull-policy.adoc
@@ -1,11 +1,21 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-configuring-controller-image-pull-policy_{context}"]

= Configuring your controller image pull policy
+
Use this procedure to configure the image pull policy on your {ControllerName}.

.Procedure

-. Under *Image Pull Policy*, click on the radio button to select
+. Log in to {OCP}.
+. Go to menu:Operators[Installed Operators].
+. Select your {OperatorPlatformNameShort} deployment.
+. Select the *Automation Controller* tab.
+. For new instances, click btn:[Create AutomationController].
+.. For existing instances, you can edit the YAML view by clicking the {MoreActionsIcon} icon and then btn:[Edit AutomationController].
+. Click btn:[Advanced configuration].
+. Under *Image Pull Policy*, click the radio button to select
* *Always*
* *Never*
* *IfNotPresent*
diff --git a/downstream/modules/platform/proc-configuring-controller-ldap-security.adoc b/downstream/modules/platform/proc-configuring-controller-ldap-security.adoc
index 94bdcdaf0e..96d696d9a1 100644
--- a/downstream/modules/platform/proc-configuring-controller-ldap-security.adoc
+++ b/downstream/modules/platform/proc-configuring-controller-ldap-security.adoc
@@ -1,37 +1,37 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc_configuring-controller-ldap-security_{context}"]

= Configuring your controller LDAP security
-Use this procedure to configure LDAP security for your {ControllerName}.
+
+You can configure LDAP SSL for {ControllerName} through any of the following options:
+
+* The {ControllerName} user interface.
+* The {Gateway} user interface. See the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/access_management_and_authentication/index#controller-set-up-LDAP[Configuring LDAP authentication] section of the _Access management and authentication_ guide for additional steps.
+* The following procedure.

.Procedure

-. If you do not have a `ldap_cacert_secret`, you can create one with the following command:
+. Create a secret in your {PlatformNameShort} namespace for the `bundle-ca.crt` file (the filename must be `bundle-ca.crt`):
+
----
-$ oc create secret generic -custom-certs \
-    --from-file=ldap-ca.crt= \ <1>
+$ oc create secret -n aap-namespace generic bundle-ca-secret --from-file=bundle-ca.crt
----
-<1> Modify this to point to where your CA cert is stored.
+
-This will create a secret that looks like this:
+. Add the `bundle_cacert_secret` to the {PlatformNameShort} custom resource:
+
----
-$ oc get secret/mycerts -o yaml
-apiVersion: v1
-data:
-  ldap-ca.crt: <1>
-kind: Secret
-metadata:
-  name: mycerts
-  namespace: awx
-type: Opaque
+...
+spec:
+  bundle_cacert_secret: bundle-ca-secret
+...
----
+
+.Verification
+
+You can verify the expected certificate by running:
+
+----
+oc exec -it deployment.apps/aap-gateway -- openssl x509 -in /etc/pki/tls/certs/bundle-ca.crt -noout -text
+----
+
diff --git a/downstream/modules/platform/proc-configuring-controller-route-options.adoc b/downstream/modules/platform/proc-configuring-controller-route-options.adoc
index 260c3f95b7..350adcd83f 100644
--- a/downstream/modules/platform/proc-configuring-controller-route-options.adoc
+++ b/downstream/modules/platform/proc-configuring-controller-route-options.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-configuring-controller-route-options_{context}"]

= Configuring your {ControllerName} operator route options
@@ -5,6 +7,12 @@
The {PlatformName} operator installation form allows you to further configure your {ControllerName} operator route options under *Advanced configuration*.

.Procedure
+. Log in to {OCP}.
+. Navigate to menu:Operators[Installed Operators].
+. Select your {OperatorPlatformNameShort} deployment.
+. Select the *Automation Controller* tab.
+. For new instances, click btn:[Create AutomationController].
+.. For existing instances, you can edit the YAML view by clicking the {MoreActionsIcon} icon and then btn:[Edit AutomationController].
. Click btn:[Advanced configuration].
. Under *Ingress type*, click the drop-down menu and select *Route*.
. Under *Route DNS host*, enter a common host name that the route answers to.
diff --git a/downstream/modules/platform/proc-configuring-discovery.adoc b/downstream/modules/platform/proc-configuring-discovery.adoc
index a93ebf3c2c..66ba3b74d5 100644
--- a/downstream/modules/platform/proc-configuring-discovery.adoc
+++ b/downstream/modules/platform/proc-configuring-discovery.adoc
@@ -1,4 +1,6 @@
-[id="proc-configuring-discovery_{context}"]
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-configuring-discovery"]

= Configuring automatic discovery of other {ControllerName} nodes
diff --git a/downstream/modules/platform/proc-configuring-reverse-proxy.adoc b/downstream/modules/platform/proc-configuring-reverse-proxy.adoc
index 3a854d38c8..ae2636172c 100644
--- a/downstream/modules/platform/proc-configuring-reverse-proxy.adoc
+++ b/downstream/modules/platform/proc-configuring-reverse-proxy.adoc
@@ -1,19 +1,18 @@
-
+:_mod-docs-content-type: PROCEDURE

[id="proc-configuring-reverse-proxy_{context}"]
-
-
-= Configuring a reverse proxy
+= Configuring a reverse proxy through a load balancer

[role="_abstract"]
-You can support a reverse proxy server configuration by adding `HTTP_X_FORWARDED_FOR` to the *REMOTE_HOST_HEADERS* field in your {ControllerName} settings. The ``X-Forwarded-For`` (XFF) HTTP header field identifies the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer.
+A reverse proxy manages external requests to servers, offering load balancing and concealing server identities for added security.
+You can support a reverse proxy server configuration by adding `HTTP_X_FORWARDED_FOR` to the *Remote Host Headers* field in the System Settings page. The ``X-Forwarded-For`` (XFF) HTTP header field identifies the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer.

.Procedure

-//[ddacosta] Need to verify that in 2.5 this is Settings[System]...
-. On your {ControllerName}, navigate to {MenuAEAdminSettings} and select *Miscellaneous System settings* from the list of *System* options.
-. In the *REMOTE_HOST_HEADERS* field, enter the following values:
+//[ddacosta] Settings > System are controller specific for 2.5EA so don't change ControllerName to PlatformName.
+. From the navigation panel, select {MenuSetSystem}.
+. In the *Remote Host Headers* field, enter the following values:
+
----
[
@@ -22,9 +21,11 @@ You can support a reverse proxy server configuration by adding `HTTP_X_FORWARDED
  "REMOTE_HOST"
]
----
-. Add the lines below to ``/etc/tower/conf.d/custom.py`` to ensure the application uses the correct headers:
-
++
+. Add the following lines to `/etc/tower/conf.d/custom.py` to ensure the application uses the correct headers:
++
----
USE_X_FORWARDED_PORT = True
USE_X_FORWARDED_HOST = True
----
+. Click btn:[Save] to save the settings.
diff --git a/downstream/modules/platform/proc-connecting-nodes-through-mesh-ingress.adoc b/downstream/modules/platform/proc-connecting-nodes-through-mesh-ingress.adoc
index de67cef383..ef48346def 100644
--- a/downstream/modules/platform/proc-connecting-nodes-through-mesh-ingress.adoc
+++ b/downstream/modules/platform/proc-connecting-nodes-through-mesh-ingress.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-connecting-nodes-through-mesh-ingress"]

= Connecting nodes through mesh ingress
@@ -19,53 +21,36 @@ Use the following procedure to set up mesh nodes.

.Procedure

-. Create a YAML file to set up the mesh ingress node.
+. Create a YAML file (in this case named `oc_meshingress.yml`) to set up the mesh ingress node.
+
-The file resembles the following:
+Your file should resemble the following:
+
----
-apiVersion:
+apiVersion: automationcontroller.ansible.com/v1alpha1
kind: AutomationControllerMeshIngress
metadata:
  name:
  namespace:
spec:
-  deployment name:
+  deployment_name: aap-controller
----
+
Where:

* *apiVersion*: defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and might reject unrecognized values.
+This value is static.
* *kind*: Is the type of node to create.
-Set the value to `AutomationControllerMeshIngress`.
-`AutomationControllerMeshIngress` controls the deployment and configuration of mesh ingress on {ControllerName}.
-* *name*: is the name of the mesh ingress node.
-* *namespace*: is which Kubernetes namespace to deploy the mesh ingress into.
-This must be in the same namespace as the {ControllerName} that the mesh is connected to
-* *deployment_name*: You can find the deployment name by using:
+
-----
-oc get ansible-automation-platform.
-----
+Use the value `AutomationControllerMeshIngress`.
+
-//Additionally you can use:
+`AutomationControllerMeshIngress` controls the deployment and configuration of mesh ingress on {ControllerName}.
+* *name*: enter a name for the mesh ingress node.
+* *namespace*: enter the name of the Kubernetes namespace to deploy the mesh ingress into.
+
-//* *external_hostname*: an optional field used for specifying the external hostname defined in an user managed ingress.
-//* *external_ipaddress*: an optional field used for specifying the external IP address defined in an user managed ingress
-//* *ingress_type*: Ingress type for ingress managed by the operator.
-//Where options are:
-//** none (default)
-//** Ingress
-//** IngressRouteTCP
-//** Route (default when deployed on OpenShift)
-//* *ingress_api_version*: the API Version for ingress managed by the operator.
-//This parameter is ignored when `ingress_type=Route`.
-//* *ingress_annotations*: annotation on the ingress managed by the operator
-//* *ingress_class_name*: the name of ingress class to use instead of the cluster default.
-//This parameter is ignored when `ingress_type=Route`.
-//* *ingress_controller*: special configuration for specific Ingress Controllers.
-//This parameter is ignored when `ingress_type=Route`.
+This must be in the same namespace as the {ControllerName} that the mesh is connected to.
+* *deployment_name*: is the {ControllerName} instance that this mesh ingress is attached to.
+Provide the name of your {ControllerName} instance.

. Apply this YAML file using:
+
@@ -73,8 +58,27 @@
oc apply -f oc_meshingress.yml
----
+
-This runs the playbook associated with `AutomationControllerMeshIngress`, and creates the hop node called ``.
+Applying the file creates the `AutomationControllerMeshIngress` resource.
+The operator creates a hop node in {ControllerName} with the `name` you supplied.

. When the MeshIngress instance has been created, it appears in the Instances page.
-
-
++
+[IMPORTANT]
+====
+Any instance that is to function as a remote execution node in "pull" mode must be created after this procedure and must be configured as follows:
+----
+instance type: Execution
+listener port: keep empty
+options:
+   Enable instance: checked
+   Managed by Policy: as needed
+   Peers from control nodes: unchecked (this one is important)
+----
+====
+. Associate this new instance with the hop node you created in this procedure.
+. Download the tarball.
++
+[NOTE]
+====
+You must associate the instance with the hop node before creating the tarball.
+====
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-containerized-troubleshoot-gathering-logs.adoc b/downstream/modules/platform/proc-containerized-troubleshoot-gathering-logs.adoc
new file mode 100644
index 0000000000..caa829f82d
--- /dev/null
+++ b/downstream/modules/platform/proc-containerized-troubleshoot-gathering-logs.adoc
@@ -0,0 +1,57 @@
+:_mod-docs-content-type: PROCEDURE
+[id="gathering-ansible-automation-platform-logs_{context}"]
+
+= Gathering {PlatformNameShort} logs
+
+With the `sos` utility, you can collect configuration, diagnostic, and troubleshooting data, and give those files to Red Hat Technical Support. An `sos` report is a common starting point for Red Hat technical support engineers when performing analysis of a service request for {PlatformNameShort}.
+
+You can collect an `sos` report for each host in your containerized {PlatformNameShort} deployment by running the `log_gathering` playbook with the appropriate parameters.
+
+.Procedure
+
+. Go to the {PlatformNameShort} installation directory.
+
+. Run the `log_gathering` playbook.
This playbook connects to each host in the inventory file, installs the `sos` tool, and generates the `sos` report.
++
+----
+$ ansible-playbook -i <inventory_file> ansible.containerized_installer.log_gathering
+----
++
+. Optional: To define additional parameters, specify them with the `-e` option. For example:
++
+----
+$ ansible-playbook -i <inventory_file> ansible.containerized_installer.log_gathering -e 'target_sos_directory=<directory_path>' -e 'case_number=0000000' -e 'clean=true' -e 'upload=true' --step
+----
++
+.. You can use the `--step` option to step through each task in the playbook and confirm its execution. This is optional but can be helpful for debugging.
+
+.. The following is a list of the parameters you can use with the `log_gathering` playbook:
++
+.Parameter reference
+[options="header"]
+|====
+| Parameter name | Description | Default
+
+| `target_sos_directory`
+| Used to change the default location for the `sos` report files.
+| `/tmp` directory of the current server.
+
+| `case_number`
+| Specifies the support case number if relevant to the log gathering.
+|
+
+| `clean`
+| Obfuscates sensitive data that might be present on the `sos` report.
+| `false`
+
+| `upload`
+| Automatically uploads the `sos` report data to Red Hat.
+| `false`
+|====
++
+. Gather the `sos` report files described in the playbook output and share them with the support engineer, or upload the `sos` report directly to Red Hat by using the `upload=true` parameter.
+
+[role="_additional-resources"]
+.Additional resources
+
+* link:https://access.redhat.com/solutions/3592[What is an sos report and how to create one in {RHEL}?]
diff --git a/downstream/modules/platform/proc-control-data-collection.adoc b/downstream/modules/platform/proc-control-data-collection.adoc
index 12f09d5952..39d48c02e1 100644
--- a/downstream/modules/platform/proc-control-data-collection.adoc
+++ b/downstream/modules/platform/proc-control-data-collection.adoc
@@ -1,17 +1,16 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controlling-data-collection_{context}"]

= Controlling data collection from {ControllerName}

[role="_abstract"]
-You can control how {ControllerName} collects data by setting your participation level in the *User Interface* tab in the *Settings* menu.
+You can control how {ControllerName} collects data by using the {MenuSetSystem} menu.

.Procedure

. Log in to your {ControllerName}.
-//[ddacosta]I don't see an equivalent in 2.5, need to verify where it gets added
-. Navigate to {MenuAEAdminSettings} and select *User Interface settings* from the *User Interface* option.
-. Select the desired level of data collection from the *User Analytics Tracking State* drop-down list:
-** *Off*: Prevents any data collection.
-** *Anonymous*: Enables data collection without your specific user data.
-** *Detailed*: Enables data collection including your specific user data.
-. Click btn:[Save] to apply the settings or btn:[Cancel] to discard the changes.
+. From the navigation panel, select {MenuSetSystem}.
+. Select *Gather data for Automation Analytics* to enable {ControllerName} to gather data on automation and send it to Automation Analytics.
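+
+.Verification
+As a quick check, you can confirm the resulting value through the {ControllerName} API. This is a sketch only; it assumes an admin OAuth token and that the setting is exposed by the `/api/v2/settings/system/` endpoint as `INSIGHTS_TRACKING_STATE`:
++
+----
+# <token> and <controller-host> are placeholders; the setting name is an assumption.
+$ curl -s -H "Authorization: Bearer <token>" \
+  https://<controller-host>/api/v2/settings/system/ | grep INSIGHTS_TRACKING_STATE
+----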
+ + \ No newline at end of file diff --git a/downstream/modules/platform/proc-controller-access-topology-viewer.adoc b/downstream/modules/platform/proc-controller-access-topology-viewer.adoc index e88f204bb1..2a3e794957 100644 --- a/downstream/modules/platform/proc-controller-access-topology-viewer.adoc +++ b/downstream/modules/platform/proc-controller-access-topology-viewer.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-access-topology-viewer"] = Accessing the topology viewer @@ -22,9 +24,9 @@ To reset the view to its default view, click the *Reset view* (image:reset.png[R . Refer to the *Legend* to identify the type of nodes that are represented. + -For VM-based installations, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_automation_mesh_guide_for_vm-based_installations/assembly-planning-mesh#con-automation-mesh-node-types[Control and execution planes] +For VM-based installations, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_mesh_for_vm_environments/assembly-planning-mesh#con-automation-mesh-node-types[Control and execution planes]. + -For operator-based installations, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_automation_mesh_for_operator-based_installations/assembly-planning-mesh#con-automation-mesh-node-types[Control and execution planes] for more information about each type of node. +For operator-based installations, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_mesh_for_managed_cloud_or_operator_environments/assembly-planning-mesh#con-automation-mesh-node-types[Control and execution planes] for more information about each type of node. + //Not relevant in the 2.5 UI: //[NOTE] @@ -55,4 +57,4 @@ You can use the *Details* page to remove the instance, run a health check on the However, you can disable it to exclude the node from having any jobs running on it. . Additional resources -For more information about creating new nodes and scaling the mesh, see xref:assembly-controller-instances[Managing Capacity with Instances]. +For more information about creating new nodes and scaling the mesh, see link:{URLControllerUserGuide}/assembly-controller-instances[Managing capacity with Instances]. diff --git a/downstream/modules/platform/proc-controller-add-groups-to-groups.adoc b/downstream/modules/platform/proc-controller-add-groups-to-groups.adoc index 5399ac7e54..95a7d161e2 100644 --- a/downstream/modules/platform/proc-controller-add-groups-to-groups.adoc +++ b/downstream/modules/platform/proc-controller-add-groups-to-groups.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-add-groups-to-groups"] = Adding groups within groups @@ -7,24 +9,20 @@ When you have added a group to a template, the Group *Details* page is displayed .Procedure . Select the *Related Groups* tab. -. Click btn:[Existing group] to add a group that already exists in your configuration or btn:[New group] to create a new group. +. Click btn:[Add existing group] to add a group that already exists in your configuration or btn:[Create group] to create a new group. . If creating a new group, enter the appropriate details into the required and optional fields: * *Name* (required): * Optional: *Description*: Enter a description as appropriate. -* *Variables*: Enter definitions and values to be applied to all hosts in this group. 
+* Optional: *Variables*: Enter definitions and values to be applied to all hosts in this group.
Enter variables using either JSON or YAML syntax.
Use the radio button to toggle between the two.
-. Click btn:[Save].
-. The *Create Group* window closes and the newly created group is displayed as an entry in the list of groups associated with the group that it was
+. Click btn:[Create group].
+. The *Create group* window closes and the newly created group is displayed as an entry in the list of groups associated with the group that it was
created for.
-//+
-//image:inventories-add-group-subgroup-added.png[Inventories add group subgroup]
+.Next steps
If you choose to add an existing group, available groups appear in a separate selection window.
-//+
-//image:inventories-add-group-existing-subgroup.png[Inventories add group existing subgroup]
-
When you select a group, it is displayed in the list of groups associated with the group.
-* To configure additional groups and hosts under the subgroup, click the name of the subgroup from the list of groups and repeat the steps listed in this section.
+* To configure additional groups and hosts under the subgroup, click the name of the subgroup from the list of groups and repeat the preceding steps.
diff --git a/downstream/modules/platform/proc-controller-add-groups.adoc b/downstream/modules/platform/proc-controller-add-groups.adoc
index 55a9150d9e..f5ca5a2492 100644
--- a/downstream/modules/platform/proc-controller-add-groups.adoc
+++ b/downstream/modules/platform/proc-controller-add-groups.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-add-groups"]

= Adding groups to inventories
@@ -29,7 +31,7 @@ All of these spawned groups can have hosts.
. From the navigation panel, select {MenuInfrastructureInventories}.
. Select the Inventory name you want to add groups to.
. In the Inventory *Details* page, select the *Groups* tab.
-. Click btn:[Create group] to open the *Create new group* window.
+. Click btn:[Create group].
//+
//image:inventories-add-group-new.png[Inventories_manage_group_add]

@@ -37,8 +39,10 @@
* *Name*: Required
* Optional: *Description*: Enter a description as appropriate.
-* *Variables*: Enter definitions and values to be applied to all hosts in this group.
+* Optional: *Variables*: Enter definitions and values to be applied to all hosts in this group.
Enter variables by using either JSON or YAML syntax.
Use the radio button to toggle between the two.
-. Click btn:[Save].
-. When you have added a group to a template, the Group *Details* page is displayed.
+. Click btn:[Create group].
+
+.Result
+When you have added a group to an inventory, the Group *Details* page is displayed.
diff --git a/downstream/modules/platform/proc-controller-add-hosts.adoc b/downstream/modules/platform/proc-controller-add-hosts.adoc
index ba6ab169a0..b54462a06d 100644
--- a/downstream/modules/platform/proc-controller-add-hosts.adoc
+++ b/downstream/modules/platform/proc-controller-add-hosts.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-add-hosts"]

= Adding hosts to an inventory
@@ -15,7 +17,7 @@ You can configure hosts for the inventory and for groups and groups within group

* *Name* (required):
* Optional: *Description*: Enter a description as appropriate.
-* *Variables*: Enter definitions and values to be applied to all hosts in this group, as in the following example: +* Optional: *Variables*: Enter definitions and values to be applied to all hosts in this group, as in the following example: + [literal, options="nowrap" subs="+attributes"] ---- @@ -29,7 +31,7 @@ You can configure hosts for the inventory and for groups and groups within group Enter variables by using either JSON or YAML syntax. Use the radio button to toggle between the two. . Click btn:[Create host]. -. The *Create Host* window closes and the newly created host is displayed in the list of hosts associated with the group that it was created for. +. The *Create host* window closes and the newly created host is displayed in the list of hosts associated with the group that it was created for. + //image:inventories-add-group-host-added.png[Inventories add group host] + @@ -78,5 +80,5 @@ These hosts must be unique within the inventory. Either all hosts are added, or an error is returned indicating why the operation was not able to complete. Use the *OPTIONS* request to return the relevant schema. -For more information, see https://docs.ansible.com/automation-controller/latest/html/controllerapi/api_ref.html#/Bulk[Bulk endpoints] in the _Automation Controller API Guide_. +//For more information, see https://docs.ansible.com/automation-controller/latest/html/controllerapi/api_ref.html#/Bulk[Bulk endpoints] in the _Automation Controller API Guide_. ==== diff --git a/downstream/modules/platform/proc-controller-add-new-schedule-from-resource.adoc b/downstream/modules/platform/proc-controller-add-new-schedule-from-resource.adoc new file mode 100644 index 0000000000..5e7a1db817 --- /dev/null +++ b/downstream/modules/platform/proc-controller-add-new-schedule-from-resource.adoc @@ -0,0 +1,34 @@ +:_mod-docs-content-type: PROCEDURE + +[id="controller-adding-new-schedule-from-resource"] + += Adding a new schedule from a resource page + +To create a new schedule from a resource page: + +.Procedure +. Click the *Schedules* tab of the resource that you are configuring. +This can be a template, project, or inventory source. +. Click btn:[Create schedule]. +This opens the *Create schedule* window. + +. Enter the appropriate details into the following fields: + +* *Schedule name*: Enter the name. +* Optional: *Description*: Enter a description. +* *Start date/time*: Enter the date and time to start the schedule. +* *Time zone*: Select the time zone. The *Start date/time* that you enter must be in this time zone. +//* *Repeat frequency*: Appropriate scheduling options display depending on the frequency you select. ++ +The *Schedule Details* display when you establish a schedule, enabling you to review the schedule settings and a list of the scheduled occurrences in the selected *Local Time Zone*. ++ +[IMPORTANT] +==== +Jobs are scheduled in UTC. +Repeating jobs that run at a specific time of day can move relative to a local time zone when Daylight Saving Time shifts occur. +The system resolves the local time zone based time to UTC when the schedule is saved. +To ensure your schedules are correctly created, set your schedules in UTC time. +==== ++ +. Click btn:[Next]. +The *Define rules* page is displayed. 
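+
+For illustration, the rules you build on the *Define rules* page are stored as iCalendar (RFC 5545) recurrence rule strings. A daily schedule pinned to UTC might be represented by a rule such as the following sketch; the exact string is generated for you when you complete the form, so treat the values shown here as an example only:
+
+----
+DTSTART;TZID=Etc/UTC:20250101T120000 RRULE:FREQ=DAILY;INTERVAL=1
+----
+
+Because the `TZID` in this sketch is pinned to UTC, the occurrence time does not drift when Daylight Saving Time shifts occur.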
diff --git a/downstream/modules/platform/proc-controller-add-organization-user.adoc b/downstream/modules/platform/proc-controller-add-organization-user.adoc
index 652f53b73d..8724776a51 100644
--- a/downstream/modules/platform/proc-controller-add-organization-user.adoc
+++ b/downstream/modules/platform/proc-controller-add-organization-user.adoc
@@ -1,37 +1,42 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-add-organization-user"]

-= Add a User or Team
+= Adding a user to an organization
+
+You can provide a user with access to an organization by adding them to the organization and managing the roles associated with the user. To add a user to an organization, the user must already exist. For more information, see xref:proc-controller-creating-a-user[Creating a user].
+To add roles for a user, the role must already exist. See xref:proc-gw-create-roles[Creating a role] for more information.

-To add a user or team to an organization, the user or team must already exist.
+The following tab selections are available when adding users to an organization. When user accounts from the {ControllerName} organization have been migrated to {PlatformNameShort} 2.5 during the upgrade process, the *Automation Execution* tab shows content based on whether the users were added to the organization prior to migration.

-For more information, see xref:proc-controller-creating-a-user[Creating a User] and xref:proc-controller-creating-a-team[Creating a Team].
+{PlatformNameShort}:: Reflects all users added to the organization at the platform level. From this tab, you can add users as organization members and, optionally, provide specific organization-level roles.

-To add existing users or team to the Organization:
+Automation Execution:: Reflects users that were added directly to the {ControllerName} organization prior to an upgrade and migration. From this tab, you can view and remove existing memberships in {ControllerName}, but you cannot add new memberships.
+
+New user memberships to an organization must be added at the platform level.

.Procedure

-. In the *Access tab* of the *Organization* page, click btn:[Add].
-. Select a user or team to add.
-. Click btn:[Next].
-. Select one or more users or teams from the list by clicking the checkbox next to the name to add them as members.
+. From the navigation panel, select {MenuAMOrganizations}.
+. From the *Organizations* list view, select the organization to which you want to add a user.
+. Click the *Users* tab to add users.
+. Select the *{PlatformNameShort}* tab and click btn:[Add users] to add user access to the organization, or select the *Automation Execution* tab to view or remove user access from the organization.
+. Select one or more users from the list by clicking the checkbox next to the name to add them as members.
. Click btn:[Next].
+. Select the roles you want the selected user to have. Scroll down for a complete list of roles.
+
-image:organizations-add-users-for-example-organization.png[Add roles]
+include::snippets/snip-gw-roles-note-multiple-components.adoc[]
+
-In this example, two users have been selected.
-. Select the role you want the selected user or team to have.
-Scroll down for a complete list of roles.
-Different resources have different options available.
-+
-image:organizations-add-users-roles.png[Add user roles]
-. Click btn:[Save] to apply the roles to the selected user or team, and to add them as members.
-The *Add Users* or *Add Teams* window displays the updated roles assigned for each user and team.
+.
Click btn:[Next] to review the roles settings. +. Click btn:[Finish] to apply the roles to the selected users, and to add them as members. The *Add roles* dialog displays the updated roles assigned for each user. + [NOTE] ==== -A user or team with associated roles retains them if they are reassigned to another organization. +A user with associated roles retains them if they are reassigned to another organization. ==== -. To remove roles for a particular user, click the disassociate image:disassociate.png[Disassociate,10,10] icon next to its resource. -This launches a confirmation dialog, asking you to confirm the disassociation. ++ +. To remove a particular user from the organization, select *Remove user* from the *More actions {MoreActionsIcon}* list next to the user. This launches a confirmation dialog, asking you to confirm the removal. +. To manage roles for users in an organization, click the *{SettingsIcon}* icon next to the user and select *Manage roles*. + diff --git a/downstream/modules/platform/proc-controller-add-source.adoc b/downstream/modules/platform/proc-controller-add-source.adoc index d5f722144a..e7a1f56dc4 100644 --- a/downstream/modules/platform/proc-controller-add-source.adoc +++ b/downstream/modules/platform/proc-controller-add-source.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-add-source"] = Adding a source @@ -12,7 +14,7 @@ Adding a source to an inventory only applies to standard inventories. . From the navigation panel, select {MenuInfrastructureInventories}. . Select the inventory name you want to add a source to. . In the Inventory *Details* page, select the *Sources* tab. -. Click btn:[Add source]. This opens the *Add new source* window. +. Click btn:[Create source]. + //image:inventories-create-source.png[Inventories create source] @@ -27,10 +29,13 @@ For more information about sources, and supplying the appropriate information, s . When the information for your chosen xref:ref-controller-inventory-sources[Inventory sources] is complete, you can optionally specify other common parameters, such as verbosity, host filters, and variables. . Use the *Verbosity* menu to select the level of output on any inventory source's update jobs. -. Use the *Host Filter* field to specify only matching host names to be imported into {ControllerName}. -. In the *Enabled Variable* field, specify that {ControllerName} retrieves the enabled state from the dictionary of host variables. +. Use the *Host filter* field to specify only matching host names to be imported into {ControllerName}. +. In the *Enabled variable* field, specify that {ControllerName} retrieves the enabled state from the dictionary of host variables. You can specify the enabled variable by using dot notation as 'foo.bar', in which case the lookup searches nested dictionaries, equivalent to: `from_dict.get('foo', {}).get('bar', default)`. -. If you specified a dictionary of host variables in the *Enabled Variable* field, you can provide a value to enable on import. ++ +The *Enabled value* field is ignored unless you set the *Enabled variable* field. If the enabled variable matches this value, the host is enabled on import. + +. If you specified a dictionary of host variables in the *Enabled variable* field, you can give a value to enable on import. 
For example, for `enabled_var='status.power_state'` and `enabled_value='powered_on'` in the following host variables, the host is marked `enabled`:
+
[literal, options="nowrap" subs="+attributes"]
----
@@ -56,57 +61,30 @@ the {ControllerName} inventory.
Hosts and groups that were not managed by the inventory source are promoted to the next manually created group, or, if there is no manually created group to promote them into, they are left in the "all" default group for the inventory.
+
When not checked, local child hosts and groups not found on the external source remain untouched by the inventory update process.
-* *Overwrite Variables*: If checked, all variables for child groups and hosts are removed and replaced by those found on the external source.
+* *Overwrite variables*: If checked, all variables for child groups and hosts are removed and replaced by those found on the external source.
+
When not checked, a merge is performed, combining local variables with those found on the external source.
-* *Update on Launch*: Each time a job runs using this inventory, refresh the inventory from the selected source before executing job tasks.
+* *Update on launch*: Each time a job runs using this inventory, refresh the inventory from the selected source before executing job tasks.
+
To avoid job overflows if jobs are spawned faster than the inventory can synchronize, selecting this enables you to configure a *Cache Timeout* to previous cache inventory synchronizations for a certain number of seconds.
+
-The *Update on Launch* setting refers to a dependency system for projects and inventory, and does not specifically exclude two jobs from running at the same time.
+The *Update on launch* setting refers to a dependency system for projects and inventory, and does not specifically exclude two jobs from running at the same time.
+
If a cache timeout is specified, then the dependencies for the second job are created, and it uses the project and inventory update that the first job spawned.
+
Both jobs then wait for that project or inventory update to finish before proceeding.
If they are different job templates, they can then both start and run at the same time, if the system has the capacity to do so.
-If you intend to use {ControllerName}'s provisioning callback feature with a dynamic inventory source, *Update on Launch* must be set for the inventory
+If you intend to use {ControllerName}'s provisioning callback feature with a dynamic inventory source, *Update on launch* must be set for the inventory
group.
+
-If you synchronize an inventory source that uses a project that has *Update On Launch* set, then the project might automatically update (according to
+If you synchronize an inventory source that uses a project that has *Update on launch* set, then the project might automatically update (according to
cache timeout rules) before the inventory update starts.
+
You can create a job template that uses an inventory that sources from the same project that the template uses.
In such a case, the project updates and then the inventory updates (if updates are not already in progress, or if the cache timeout has not already expired).
+
. Review your entries and selections.
This enables you to configure additional details, such as schedules and notifications.
. To configure schedules associated with this inventory source, click the *Schedules* tab:

* If schedules are already set up, then review, edit, enable or disable your schedule preferences.
* If schedules have not been set up, see xref:controller-schedules[Schedules] for more information about setting them up.
-
-= Configuring notifications for the source
-
-Use the following procedure to configure notifications for the source:
-
-.Procedure
-
-. From the navigation panel, select {MenuInfrastructureInventories}.
-. Select the inventory name you want to configure notifications for.
-. In the inventory *Details* page, select the *Notifications* tab.
-+
-[NOTE]
-====
-The *Notifications* tab is only present when you have saved the newly-created source.
-
-//image:inventories-create-source-with-notifications-tab.png[Notification tab]
-====
-. If notifications are already set up, use the toggles to enable or disable the notifications to use with your particular source.
-For more information, see xref:controller-enable-disable-notifications[Enable and Disable Notifications].
-. If you have not set up notifications, see xref:controller-notifications[Notifications] for more information.
-. Review your entries and selections.
-. Click btn:[Save].
-
-When you define a source, it is displayed in the list of sources associated with the inventory.
-From the *Sources* tab you can perform a sync on a single source, or sync all of them at once.
-You can also perform additional actions such as scheduling a sync process, and edit or delete the source.
-
-//image:inventories-view-sources.png[Inventories view sources]
diff --git a/downstream/modules/platform/proc-controller-add-users-job-templates.adoc b/downstream/modules/platform/proc-controller-add-users-job-templates.adoc
index 2315685b6d..e22cd62db8 100644
--- a/downstream/modules/platform/proc-controller-add-users-job-templates.adoc
+++ b/downstream/modules/platform/proc-controller-add-users-job-templates.adoc
@@ -1,16 +1,30 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-credential-add-users-job-templates"]

= Adding new users and job templates to existing credentials

.Procedure

-. From the navigation panel, select {MenuAMCredentials}.
+. From the navigation panel, select {MenuAECredentials}.
. Select the credential that you want to assign to additional users.
-. Click the *Access* tab.
+. Click the *User Access* tab.
You can see users and teams associated with this credential and their roles.
-. Choose a user and click btn:[Add]. If no users exist, add them from the *Users* menu.
-For more information, see xref:assembly-controller-users[Users].
-. Select *Job Templates* to display the job templates associated with this credential, and which jobs have run recently by using this credential.
-. Choose a job template and click btn:[Add] to assign the credential to additional job templates.
-For more information about creating new job templates, see the xref:controller-job-templates[Job templates] section.
+For more information, see link:{URLCentralAuth}/gw-managing-access#assembly-controller-users_gw-manage-rbac[Users].
+. Click btn:[Add roles].
+. Select the users that you want to give access to the credential and click btn:[Next].
+. From the *Select roles to apply* page, select the roles you want to add to the user.
+. Click btn:[Next].
+. Review your selections and click btn:[Finish] to add the roles, or click btn:[Back] to make changes.
++
+The *Add roles* window displays a message stating whether the action was successful.
++
+If the action is not successful, a warning displays.
++
+. Click btn:[Close].
+. The *User Access* page displays the summary information.
+.
Select the *Job templates* tab to select a job template to which you want to assign this credential. +. Choose a job template or select *Create job template* from the *Create template* list to assign the credential to additional job templates. ++ +For more information about creating new job templates, see link:{URLControllerUserGuide}/controller-job-templates[Job templates]. diff --git a/downstream/modules/platform/proc-controller-adding-a-project.adoc b/downstream/modules/platform/proc-controller-adding-a-project.adoc index ef05947440..a04cc49969 100644 --- a/downstream/modules/platform/proc-controller-adding-a-project.adoc +++ b/downstream/modules/platform/proc-controller-adding-a-project.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-adding-a-project"] = Adding a new project @@ -15,12 +17,12 @@ You can create a logical collection of playbooks, called projects in {Controller * *Name* (required) * Optional: *Description* * *Organization* (required): A project must have at least one organization. Select one organization now to create the project. When the project is created you can add additional organizations. -* Optional: *Execution Environment*: Enter the name of the {ExecEnvShort} or search from a list of existing ones to run this project. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_upgrade_and_migration_guide/upgrading-to-ees[Migrating to automation execution environments] in the _Red Hat Ansible Automation Platform Upgrade and Migration Guide_. -* *Source Control Type* (required): Select an SCM type associated with this project from the menu. +* Optional: *Execution environment*: Enter the name of the {ExecEnvShort} or search from a list of existing ones to run this project. +For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/creating_and_using_execution_environments/index[Creating and using execution environments]. +* *Source control type* (required): Select an SCM type associated with this project from the menu. Options in the following sections become available depending on the type chosen. For more information, see xref:proc-projects-manage-playbooks-manually[Managing playbooks manually] or xref:ref-projects-manage-playbooks-with-source-control[Managing playbooks using source control]. -* Optional: *Content Signature Validation Credential*: Use this field to enable content verification. +* Optional: *Content signature validation credential*: Use this field to enable content verification. Specify the GPG key to use for validating content signature during project synchronization. If the content has been tampered with, the job will not run. For more information, see xref:assembly-controller-project-signing[Project signing and verification].
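+For example, a project maintainer can sign content with the `ansible-sign` utility before pushing it to source control, so that the GPG credential configured here can verify it during project synchronization. The following sketch is illustrative only; it assumes that `ansible-sign` is installed, that a `MANIFEST.in` file in the project root lists the files to protect, and that a GPG secret key is available in the local keyring:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+$ ansible-sign project gpg-sign /path/to/project
+----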
@@ -34,6 +36,3 @@ The following describe the ways projects are sourced: ** xref:proc-scm-git-subversion[SCM Types - Configuring playbooks to use Git and Subversion] ** xref:proc-scm-insights[SCM Type - Configuring playbooks to use Red Hat Insights] ** xref:proc-scm-remote-archive[SCM Type - Configuring playbooks to use a remote archive] - -include::proc-projects-manage-playbooks-manually.adoc[leveloffset=+1] -include::ref-projects-manage-playbooks-with-source-control.adoc[leveloffset=+1] diff --git a/downstream/modules/platform/proc-controller-adding-gpg-key.adoc b/downstream/modules/platform/proc-controller-adding-gpg-key.adoc index b221a0cd47..09f16ce7fa 100644 --- a/downstream/modules/platform/proc-controller-adding-gpg-key.adoc +++ b/downstream/modules/platform/proc-controller-adding-gpg-key.adoc @@ -1,4 +1,6 @@ -[id="ref-controller-adding-gpg-key"] +:_mod-docs-content-type: PROCEDURE + +[id="proc-controller-adding-gpg-key"] = Adding a GPG key to {ControllerName} @@ -10,8 +12,7 @@ $ gpg --list-keys $ gpg --export --armour > my_public_key.asc ---- -[arabic] -. From the navigation panel, select {MenuAMCredentials}. +. From the navigation panel, select {MenuAECredentials}. . Click btn:[Create credential]. . Give a meaningful name for the new credential, for example, "Infrastructure team public GPG key". . In the *Credential type* field, select *GPG Public Key*. diff --git a/downstream/modules/platform/proc-controller-adding-inv-permissions.adoc b/downstream/modules/platform/proc-controller-adding-inv-permissions.adoc index e75389c403..d279312a1b 100644 --- a/downstream/modules/platform/proc-controller-adding-inv-permissions.adoc +++ b/downstream/modules/platform/proc-controller-adding-inv-permissions.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-adding-inv-permissions"] = Adding permissions to inventories @@ -22,14 +24,8 @@ Different resources have different options available. //image:organizations-add-users-roles.png[Add user roles] . Click btn:[Finish] to apply the roles to the selected users or teams and to add them as members. - The updated roles assigned for each user and team are displayed. //image:permissions-tab-roles-assigned.png[Permissions tab with Role Assignments] -.Removing a permission -* To remove roles for a particular user, click the image:disassociate.png[Disassociate,10,10] icon next to its resource. - -This launches a confirmation window, asking you to confirm the disassociation. -//image:permissions-disassociate-confirm.png[image] diff --git a/downstream/modules/platform/proc-controller-adding-new-inventory.adoc b/downstream/modules/platform/proc-controller-adding-new-inventory.adoc index 1655ce0bf3..1c725b4319 100644 --- a/downstream/modules/platform/proc-controller-adding-new-inventory.adoc +++ b/downstream/modules/platform/proc-controller-adding-new-inventory.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-adding-new-inventory"] = Add a new inventory @@ -21,40 +23,54 @@ The *Inventories* window displays a list of the inventories that are currently a * *Name*: Enter a name appropriate for this inventory. * Optional: *Description*: Enter an arbitrary description as appropriate. * *Organization*: Required. Choose among the available organizations. -//* Only applicable to Smart Inventories: *Smart Host Filter*: Click the image:search.png[Search,15,15] icon to open a separate window to filter hosts for this inventory. -//These options are based on the organization you chose. 
-//+ -//Filters are similar to tags in that tags are used to filter certain hosts that contain those names. -//Therefore, to populate the *Smart Host Filter* field, specify a tag that contains the hosts you want, not the hosts themselves. -//Enter the tag in the *Search* field and click btn:[Enter]. -//Filters are case-sensitive. -//For more information, see xref:ref-controller-smart-host-filter[Smart host filters]. -* *Instance Groups*: Click the image:search.png[Search,15,15] icon to open a separate window. -Select the instance group or groups for this inventory to run on. -If the list is extensive, use the search to narrow the options. -You can select multiple instance groups and sort them in the order that you want them run. +* Only applicable to Smart Inventories: *Smart host filter*: Populate the hosts for this inventory by using a search filter. ++ +For example, `name__icontains=RedHat`. ++ +These options are based on the organization you chose. ++ +Filters are similar to tags in that tags are used to filter certain hosts that contain those names. +Therefore, to populate the *Smart host filter* field, specify a tag that has the hosts you want, not the hosts themselves. ++ +Filters are case-sensitive. +* *Instance groups*: Select the instance group or groups for this inventory to run on. ++ +You can select many instance groups and sort them in the order in which you want them to run. + //image:select-instance-groups-modal.png[image] * Optional: *Labels*: Supply labels that describe this inventory, so they can be used to group and filter inventories and jobs. * Only applicable to constructed inventories: *Input inventories*: Specify the source inventories to include in this constructed inventory. //Click the image:search.png[Search,15,15] icon to select from available inventories. Empty groups from input inventories are copied into the constructed inventory. -* Optional:(Only applicable to constructed inventories): *Cached timeout (seconds)*: Set the length of time you want the cache plugin -data to timeout. +* Optional:(Only applicable to constructed inventories): *Cached timeout (seconds)*: Set the length of time you want the cache plugin data to time out. * Only applicable to constructed inventories: *Verbosity*: Control the level of output that Ansible produces as the playbook executes related to inventory sources associated with constructed inventories. -Select the verbosity from Normal to various Verbose or Debug settings. -This only appears in the "details" report view. -** Verbose logging includes the output of all commands. -** Debug logging is exceedingly verbose and includes information about SSH operations that can be useful in certain -support instances. Most users do not need to see debug mode output. ++ +Select the verbosity from: + +* *Normal* +* *Verbose* +* *More verbose* +* *Debug* +* *Connection Debug* +* *WinRM Debug* + +** *Verbose* logging includes the output of all commands. +** *More verbose* provides more detail than *Verbose*. +** *Debug* logging is exceedingly verbose and includes information about SSH operations that can be useful in certain support instances. Most users do not need to see debug mode output. +//Not sure of this +** *Connection Debug* enables you to run SSH in verbose mode, providing debugging information about the SSH connection progress. +//Not sure of this. 
+** *WinRM Debug*: provides verbosity specific to Windows Remote Management (WinRM). ++ +Click the image:arrow.png[Expand,15,15] icon for information on *How to use the constructed inventory plugin*. * Only applicable to constructed inventories: *Limit*: Restricts the number of returned hosts for the inventory source associated with the constructed inventory. You can paste a group name into the limit field to only include hosts in that group. For more information, see the *Source vars* setting. * Only applicable to standard inventories: *Options*: Check the *Prevent Instance Group Fallback* option to enable only the instance groups listed in the *Instance Groups* field to execute the job. If unchecked, all available instances in the execution pool are used based on the hierarchy described in -link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#controller-control-job-run[Control where a job runs] in the _{ControllerAG}_. -Click the image:question_circle.png[Help,15,15] icon for additional information. +xref:controller-control-job-run[Control where a job runs]. +//Click the image:question_circle.png[Help,15,15] icon for additional information. + //[NOTE] //==== @@ -72,5 +88,5 @@ This is particularly useful because you can paste that group name into the limit //See Example 1 in xref:ref-controller-smart-host-filter[Smart host filters]. . Click btn:[Create inventory]. -After saving the new inventory, you can proceed with configuring permissions, groups, hosts, sources, and view completed jobs, if -applicable to the type of inventory. +.Next steps +After saving the new inventory, you can proceed with configuring permissions, groups, hosts, and sources, and with viewing completed jobs, if applicable to the type of inventory. diff --git a/downstream/modules/platform/proc-controller-adding-new-schedule.adoc b/downstream/modules/platform/proc-controller-adding-new-schedule.adoc index d7841d1c99..44cea2b420 100644 --- a/downstream/modules/platform/proc-controller-adding-new-schedule.adoc +++ b/downstream/modules/platform/proc-controller-adding-new-schedule.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-adding-new-schedule"] = Adding a new schedule @@ -9,20 +11,23 @@ To create a new schedule on the *Schedules* page: .Procedure . From the navigation panel, select {MenuAESchedules}. -. Click btn:[Create schedule]. This opens the *Create Schedule* window. -+ -image::ug-generic-create-schedule.png[Create schedule] +. Click btn:[Create schedule]. This opens the *Create schedule* window. +. Select a *Resource type* onto which this schedule is applied. + +Select from: -To create a new schedule from a resource page: +* *Job template* +** For *Job template* select a *Job template* from the menu. +* *Workflow job template* +** For *Workflow job template* select a *Workflow job template* from the menu. +* *Inventory source* +** For *Inventory source* select an *Inventory* and an *Inventory source* from the appropriate menu. +* *Project sync* +** For *Project sync* select a *Project* from the menu. +* *Management job template* +** For *Management job template* select a *Workflow job template* from the menu. -.Procedure -. Click the *Schedules* tab of the resource that you are configuring. -This can be a template, project, or inventory source. -. Click btn:[Create schedule]. This opens the *Create Schedule* window. - -.For both procedures -. Enter the appropriate details into the following fields: +. 
For *Job template* and *Project sync* enter the appropriate details into the following fields: * *Schedule name*: Enter the name. * Optional: *Description*: Enter a description. @@ -35,12 +40,10 @@ The *Schedule Details* display when you establish a schedule, enabling you to re [IMPORTANT] ==== Jobs are scheduled in UTC. -Repeating jobs that run at a specific time of day can move relative to a local time zone when Daylight Savings Time shifts occur. +Repeating jobs that run at a specific time of day can move relative to a local time zone when Daylight Saving Time shifts occur. The system resolves the local time zone based time to UTC when the schedule is saved. To ensure your schedules are correctly created, set your schedules in UTC time. ==== + . Click btn:[Next]. The *Define rules* page is displayed. - -//Use the *On* or *Off* toggle to stop an active schedule or activate a stopped schedule. diff --git a/downstream/modules/platform/proc-controller-adding-permissions.adoc b/downstream/modules/platform/proc-controller-adding-permissions.adoc index 8b2593d56a..fdfe6bbe00 100644 --- a/downstream/modules/platform/proc-controller-adding-permissions.adoc +++ b/downstream/modules/platform/proc-controller-adding-permissions.adoc @@ -1,4 +1,6 @@ -[id="controller-adding-permissions_{context}"] +:_mod-docs-content-type: PROCEDURE + +[id="proc-controller-adding-permissions"] = Adding permissions to templates diff --git a/downstream/modules/platform/proc-controller-amazon-ec2.adoc b/downstream/modules/platform/proc-controller-amazon-ec2.adoc index bf6eb9ed40..66b8da31aa 100644 --- a/downstream/modules/platform/proc-controller-amazon-ec2.adoc +++ b/downstream/modules/platform/proc-controller-amazon-ec2.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-amazon-ec2"] = {AWS} EC2 @@ -8,20 +10,21 @@ Use the following procedure to configure an AWS EC2-sourced inventory. //[ddacosta] Rewrote this according to IBM style: Refer to a drop-down list by its label, followed by list. . From the navigation panel, select {MenuInfrastructureInventories}. . Select the inventory name you want to add a source to and click the *Sources* tab. -. Click btn:[Add source]. -. In the *Add new source* page, select *Amazon EC2* from the *Source* list. -. The *Add new source* window expands with additional fields. +. Click btn:[Create source]. +. In the *Create source* page, select *Amazon EC2* from the *Source* list. +. The *Create source* window expands with additional fields. Enter the following details: -* Optional: *Credential*: Choose from an existing AWS credential (for more information, see xref:controller-credentials[Credentials]). +* Optional: *Credential*: Choose from an existing AWS credential. +For more information, see xref:controller-credentials[Managing user credentials]. + If {ControllerName} is running on an EC2 instance with an assigned IAM Role, the credential can be omitted, and the security credentials from the instance metadata are used instead. -For more information about using IAM Roles, see link:http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-%20roles-for-amazon-ec2.html[IAM_Roles_for_Amazon_EC2_documentation_at_Amazon]. +For more information about using IAM Roles, see the link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html[IAM roles for Amazon EC2] documentation at Amazon. . 
Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in xref:proc-controller-add-source[Adding a source]. -. Use the *Source Variables* field to override variables used by the `aws_ec2` inventory plugin. +. Use the *Source variables* field to override variables used by the `aws_ec2` inventory plugin. Enter variables by using either JSON or YAML syntax. Use the radio button to toggle between the two. For more information about these variables, see the @@ -29,9 +32,8 @@ link:https://console.redhat.com/ansible/automation-hub/repo/published/amazon/aws //+ //image:inventories-create-source-AWS-example.png[Inventories- create source - AWS EC2 example] -[NOTE] -==== +.Troubleshooting + If you only use `include_filters`, the AWS plugin always returns all the hosts. To use this correctly, the first condition of the `or` must be on `filters`; then build the rest of the `OR` conditions on a list of `include_filters`. -==== diff --git a/downstream/modules/platform/proc-controller-api-4xx-error-config.adoc b/downstream/modules/platform/proc-controller-api-4xx-error-config.adoc index e7068ede11..3f9162c3cd 100644 --- a/downstream/modules/platform/proc-controller-api-4xx-error-config.adoc +++ b/downstream/modules/platform/proc-controller-api-4xx-error-config.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-api-4xx-error-config"] = API 4XX Error Configuration diff --git a/downstream/modules/platform/proc-controller-api-browsing-api.adoc b/downstream/modules/platform/proc-controller-api-browsing-api.adoc index 4a728a4405..d886c471ca 100644 --- a/downstream/modules/platform/proc-controller-api-browsing-api.adoc +++ b/downstream/modules/platform/proc-controller-api-browsing-api.adoc @@ -1,10 +1,12 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-api-browsing"] .Procedure . Go to the {ControllerName} REST API in a web browser at: + -\https://<server name>/api/ +\https://<server name>/api/controller/v2 + . Click the **"v2"** link next to **"current versions"** or **"available versions"**. {ControllerNameStart} supports version 2 of the API. diff --git a/downstream/modules/platform/proc-controller-api-filtering.adoc b/downstream/modules/platform/proc-controller-api-filtering.adoc index 0a268d9c6c..328567fde2 100644 --- a/downstream/modules/platform/proc-controller-api-filtering.adoc +++ b/downstream/modules/platform/proc-controller-api-filtering.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-api-filtering-in-api"] .Procedure diff --git a/downstream/modules/platform/proc-controller-api-session-auth.adoc b/downstream/modules/platform/proc-controller-api-session-auth.adoc index a21dc8a29a..e59166e273 100644 --- a/downstream/modules/platform/proc-controller-api-session-auth.adoc +++ b/downstream/modules/platform/proc-controller-api-session-auth.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-api-session-auth"] = Using session authentication @@ -18,10 +20,9 @@ Use the curl tool to see the activity that occurs when you log in to {Controller + [literal, options="nowrap" subs="+attributes"] ---- -curl -k -c - https://<server name>/api/login/ +$ curl -k -c - https://$YOUR_AAP_URL/api/gateway/v1/login/ -localhost FALSE / FALSE 0 csrftoken -AswSFn5p1qQvaX4KoRZN6A5yer0Pq0VG2cXMTzZnzuhaY0L4tiidYqwf5PXZckuj +$YOUR_AAP_URL FALSE / TRUE 1780539778 csrftoken GODXonA5LyV1uAs8zvcD2k12DQJC74oB ---- + . 
`POST` to the `/api/gateway/v1/login/` endpoint with username, password, and `X-CSRFToken=<token-value>`: @@ -29,17 +30,31 @@ [literal, options="nowrap" subs="+attributes"] ---- curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' \ ---referer https://<server name>/api/login/ \ --H 'X-CSRFToken: K580zVVm0rWX8pmNylz5ygTPamgUJxifrdJY0UDtMMoOis5Q1UOxRmV9918BUBIN' \ ---data 'username=root&password=reverse' \ ---cookie 'csrftoken=K580zVVm0rWX8pmNylz5ygTPamgUJxifrdJY0UDtMMoOis5Q1UOxRmV9918BUBIN' \ -https://<server name>/api/login/ -k -D - -o /dev/null +--referer https://$YOUR_AAP_URL/api/gateway/v1/login/ \ +-H 'X-CSRFToken: GODXonA5LyV1uAs8zvcD2k12DQJC74oB' \ +--data 'username=admin&password=$YOUR_ADMIN_PASSWORD' \ +--cookie 'csrftoken=GODXonA5LyV1uAs8zvcD2k12DQJC74oB' \ +https://$YOUR_AAP_URL/api/gateway/v1/login/ -k -D - -o /dev/null +---- + +. Access and test the APIs that need authentication, for example `/api/controller/v2/settings/all/`: + +[literal, options="nowrap" subs="+attributes"] +---- +$ curl -X GET -H 'Cookie: awx_session_id=$YOUR_SESSION_ID;' https://$YOUR_AAP_URL/api/controller/v2/settings/all/ -k ---- All of this is done by {ControllerName} when you log in to the UI or API in the browser, and you must only use it when authenticating in the browser. For programmatic integration with {ControllerName}, see xref:controller-api-oauth2-token[OAuth2 token authentication]. -The following is an example of a typical response: +.Verification + +The following shows a typical response: [literal, options="nowrap" subs="+attributes"] ---- @@ -71,5 +86,5 @@ The default value is `awx_session_id` which you can see later in the `Set-Cookie [NOTE] ==== You can change the session expiration time by specifying it in the `SESSION_COOKIE_AGE` parameter. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#controller-work-with-session-limits[Working with session limits]. +//For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#controller-work-with-session-limits[Working with session limits]. 
==== diff --git a/downstream/modules/platform/proc-controller-api-sorting.adoc b/downstream/modules/platform/proc-controller-api-sorting.adoc index ec599f9ec8..326be47620 100644 --- a/downstream/modules/platform/proc-controller-api-sorting.adoc +++ b/downstream/modules/platform/proc-controller-api-sorting.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-api-sorting-in-api"] .Procedure @@ -6,20 +8,20 @@ + [literal, options="nowrap" subs="+attributes"] ---- -https:///api/v2/model_verbose_name_plural?order_by={{ order_field }} +https:///api/v2/model_verbose_name_plural?order_by={{ order_field }} ---- + ** Prefix the field name with a dash (`-`) to sort in reverse: + [literal, options="nowrap" subs="+attributes"] ---- -https:///api/v2/model_verbose_name_plural?order_by=-{{ order_field }} +https:///api/v2/model_verbose_name_plural?order_by=-{{ order_field }} ---- + ** You can specify the sorting fields by separating the field names with a comma (`,`): + [literal, options="nowrap" subs="+attributes"] ---- -https:///api/v2/model_verbose_name_plural?order_by={{ order_field }},some_other_field +https:///api/v2/model_verbose_name_plural?order_by={{ order_field }},some_other_field ---- diff --git a/downstream/modules/platform/proc-controller-api-using-pagination.adoc b/downstream/modules/platform/proc-controller-api-using-pagination.adoc index 7c5f2affa4..23616c29e6 100644 --- a/downstream/modules/platform/proc-controller-api-using-pagination.adoc +++ b/downstream/modules/platform/proc-controller-api-using-pagination.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-api-using-pagination"] When you receive the result for a collection, something similar to the following appears: @@ -17,7 +19,7 @@ However, you can change this limit by setting the value in `/etc/tower/conf.d//api/v2/model_verbose_name?page_size=100&page=2 +http:///api/v2/model_verbose_name?page_size=100&page=2 ---- The preceding and following links returned with the results set these query string parameters automatically. diff --git a/downstream/modules/platform/proc-controller-apps-create-tokens.adoc b/downstream/modules/platform/proc-controller-apps-create-tokens.adoc index 0af8467f8a..7faf8f324b 100644 --- a/downstream/modules/platform/proc-controller-apps-create-tokens.adoc +++ b/downstream/modules/platform/proc-controller-apps-create-tokens.adoc @@ -1,50 +1,54 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-apps-create-tokens"] = Adding tokens -You can view a list of users that have tokens to access an application by selecting the *Tokens* tab in the applications *Details* page. - -Configure authentication tokens for your users. -You can select the application to which the token is associated and the level of access that the token has. +You can view a list of users that have tokens to access an application by selecting the *Tokens* tab in the *OAuth Applications* details page. -[IMPORTANT] +[NOTE] ==== -You can only create OAuth 2 Tokens for your user through the API or UI, which means you can only access your own user profile to configure or view your tokens. +You can only create OAuth 2 Tokens for your own user, which means you can only configure or view tokens from your own user profile. ==== +When authentication tokens have been configured, you can select the application to which the token is associated and the level of access that the token has. + + .Procedure . From the navigation panel, select {MenuControllerUsers}. -. 
Select the user for which you want to configure the OAuth 2 tokens. -. Select the *Tokens* tab on the user's profile. +. Select the username for your user profile to configure OAuth 2 tokens. +. Select the *Tokens* tab. + When no tokens are present, the *Tokens* screen prompts you to add them. . Click btn:[Create token] to open the *Create Token* window. . Enter the following details: - -* *Application*: enter the name of the application with which you want to associate your token. -You can also search for it by clicking the image:search.png[Search,15,15] icon. -This opens a separate window that enables you to choose from the available options. -Use the Search bar to filter by name if the list is extensive. -Leave this field blank if you want to create a Personal Access Token (PAT) that is not linked to any application. -* Optional: *Description*: give a short description for your token. -* *Scope* (required): specify the level of access you want this token to have. - -. Click btn:[Create token], or click btn:[Cancel] to abandon your changes. + -After you save the token, the newly created token for the user is displayed with the token information and when it expires. +Application:: Enter the name of the application with which you want to associate your token. Alternatively, you can search for it by clicking btn:[Browse]. This opens a separate window that enables you to choose from the available options. Select *Name* from the filter list to filter by name if the list is extensive. + -//image:users-token-information-example.png[Token information] - -. To view the application to which the token is associated and the token expiration date, go to the token list view. +[NOTE] +==== +To create a Personal Access Token (PAT) that is not linked to any application, leave the Application field blank. +==== +Description:: (optional) Provide a short description for your token. +Scope:: (required) Specify the level of access you want this token to have. The scope of an OAuth 2 token can be set as one of the following: ++ +* *Write*: Allows requests sent with this token to add, edit, and delete resources in the system. +* *Read*: Limits actions to read only. Note that the write scope includes read scope. ++ +. Click btn:[Create token], or click btn:[Cancel] to abandon your changes. ++ +The Token information is displayed with *Token* and *Refresh Token* information, and the expiration date of the token. This is the only time the token and refresh token are shown. You can view the token association and token information from the list view. + -//image:users-token-assignment-example.png[Token assignment] +. Click the copy icon and save the token and refresh token for future use. .Verification -To verify that the application now shows the user with the appropriate token, open the *Tokens* tab of the Applications window. +You can verify that the application now shows the user with the appropriate token by using the *Tokens* tab on the Applications details page. -//image:apps-tokens-list-view-example2.png[image] +. From the navigation panel, select {MenuAMAdminOauthApps}. +. Select the application you want to verify from the *Applications* list view. +. Select the *Tokens* tab. ++ +Your token should be displayed in the list of tokens associated with the application you chose. 
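+After you store the token, you can use it to authenticate API requests. For example, assuming a token with read scope, and using placeholder values for the token and the hostname, the following illustrative request passes the token in an `Authorization` header:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+$ curl -k -H 'Authorization: Bearer <your_token>' https://<gateway_hostname>/api/controller/v2/me/
+----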
.Additional resources - -If you are a system administrator and have to create or remove tokens for other users, see the revoke and create commands in the -link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/assembly-controller-awx-manage-utility#ref-controller-token-session-management[Token and session management] section of the _{ControllerAG}_. +If you are a system administrator and have to create or remove tokens for other users, see the revoke and create commands in xref:ref-controller-token-session-management[Token and session management]. diff --git a/downstream/modules/platform/proc-controller-associate-instances-to-instance-group.adoc b/downstream/modules/platform/proc-controller-associate-instances-to-instance-group.adoc index f5063657ae..cdca49b77b 100644 --- a/downstream/modules/platform/proc-controller-associate-instances-to-instance-group.adoc +++ b/downstream/modules/platform/proc-controller-associate-instances-to-instance-group.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-associate-instances-to-instance-group"] = Associating instances to an instance group @@ -5,7 +7,7 @@ .Procedure . Select the *Instances* tab on the *Details* page of an Instance Group. -. Click btn:[Associate]. +. Click btn:[Associate instance]. . Click the checkbox next to one or more available instances from the list to select the instances you want to associate with the instance group and click btn:[Confirm] //+ //image::instance-group-assoc-instances.png[Associate instances] diff --git a/downstream/modules/platform/proc-controller-awx-manage-utility.adoc b/downstream/modules/platform/proc-controller-awx-manage-utility.adoc index 8198129865..79a511a141 100644 --- a/downstream/modules/platform/proc-controller-awx-manage-utility.adoc +++ b/downstream/modules/platform/proc-controller-awx-manage-utility.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-awx-manage-utility"] = awx-manage utility @@ -24,8 +26,10 @@ To specify the number of rows (`<n>`) to output to each file: awx-manage host_metric --tarball --rows_per_file <n> ---- -The following is an example of a configuration file: + +//The following is an example of a configuration file: + +//image:ug-host-metrics-awx-manage-config.png[Configuration file] -image:ug-host-metrics-awx-manage-config.png[Configuration file] +{Analytics} receives and uses the JSON file. -{Analytics} receives and uses the JSON file. \ No newline at end of file +For more information on using the `metrics-utility` CLI, see link:{LinkControllerAdminGuide}/assembly-controller-metrics[Usage reporting with metrics-utility]. \ No newline at end of file diff --git a/downstream/modules/platform/proc-controller-azure-resource-manager.adoc b/downstream/modules/platform/proc-controller-azure-resource-manager.adoc index 1d3db67ec8..fbca919b22 100644 --- a/downstream/modules/platform/proc-controller-azure-resource-manager.adoc +++ b/downstream/modules/platform/proc-controller-azure-resource-manager.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-azure-resource-manager"] = {Azure} resource manager @@ -8,14 +10,14 @@ Use the following procedure to configure an {Azure} Resource Manager-sourced inv //[ddacosta] Rewrote this according to style for drop-down lists; see Usage and highlighting for interface elements in the IBM Style Guide . From the navigation panel, select {MenuInfrastructureInventories}. . 
Select the inventory name you want to add a source to and click the *Sources* tab. -. Click btn:[Add source]. -. In the *Add new source* page, select *Microsoft Azure Resource Manager* from the *Source* list. -. The *Add new source* window expands with the required *Credential* field. -Choose from an existing Azure Credential. -For more information, see xref:controller-credentials[Credentials]. +. Click btn:[Create source]. +. In the *Create source* page, select *Microsoft Azure Resource Manager* from the *Source* list. +. Enter the following details in the additional fields: +. Optional: *Credential*: Choose from an existing Azure Credential. +For more information, see xref:controller-credentials[Managing user credentials]. . Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in xref:proc-controller-add-source[Adding a source]. -. Use the *Source Variables* field to override variables used by the `azure_rm` inventory plugin. +. Use the *Source variables* field to override variables used by the `azure_rm` inventory plugin. Enter variables by using either JSON or YAML syntax. Use the radio button to toggle between the two. For more information about these variables, see the diff --git a/downstream/modules/platform/proc-controller-build-workflow.adoc b/downstream/modules/platform/proc-controller-build-workflow.adoc index 5ca022f9e1..73fa454ea9 100644 --- a/downstream/modules/platform/proc-controller-build-workflow.adoc +++ b/downstream/modules/platform/proc-controller-build-workflow.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-build-workflow"] = Building a workflow @@ -9,23 +11,23 @@ You can set up any combination of two or more of the following node types to bui * Inventory Sync * Approval -Each node is represented by a rectangle while the relationships and their associated edge types are represented by a line (or link) that connects them. +// Not in 2.5 UI +//Each node is represented by a rectangle while the relationships and their associated edge types are represented by a line (or link) that connects them.
For subsequent nodes, you can select one of the following scenarios (edge type) to apply to each: -* *Always*: Continue to execute regardless of success or failure. -* *On Success*: After successful completion, execute the next template. -* *On Failure*: After failure, execute a different template. +* *Always run*: Continue to execute regardless of success or failure. +* *Run on success*: After successful completion, execute the next template. +* *Run on fail*: After failure, execute a different template. . Select the behavior of the node if it is a convergent node from the *Convergence* field: * *Any* is the default behavior, allowing any of the nodes to complete as specified, before triggering the next converging node. If the status of one parent meets one of those run conditions, an *any* child node will run. @@ -72,7 +74,7 @@ Use the wizard to change the values in each of the tabs and click btn:[Confirm] If a workflow template used in the workflow has *Prompt on launch* selected for the inventory option, use the wizard to supply the inventory at the prompt. If the parent workflow has its own inventory, it overrides any inventory that is supplied here. + -image::ug-wf-prompt-button-inventory-wizard.png[Prompt button inventory] +//image::ug-wf-prompt-button-inventory-wizard.png[Prompt button inventory] + [NOTE] ==== @@ -93,17 +95,23 @@ Otherwise, any changes you make revert back to the values set in the job templat + When the node is created, it is labeled with its job type. A template that is associated with each workflow node runs based on the selected run scenario as it proceeds. -Click the compass (image:compass.png[Compass, 15,15]) icon to display the legend for each run scenario and their job types. +Click btn:[Legend] to display the legend for each run scenario and their job types. ++ +image::ug-wf-dropdown-list.png[Workflow dropdown list] + -image::ug-wf-dropdown-list.png[Worfklow dropdown list] +. Hover over a node to edit the node, add step and link, or delete the selected node: + -. Hover over a node to add another node, view info about the node, edit the node details, edit an existing link, or delete the selected node: +[NOTE] +==== +If you hover over a step when adding a link and a red border appears, this means that you cannot connect those two steps together. +This is a preventive measure to avoid users creating "circular dependencies", which can result in a workflow that ends up in an infinite loop and never finishes. +==== + image::ug-wf-add-template.png[Node options] + -. When you have added or edited a node, click btn:[SELECT] to save any modifications and render it on the graphical view. +. When you have added or edited a node, click btn:[Finish] to save any modifications and render it on the graphical view. For possible ways to build your workflow, see xref:controller-building-nodes-scenarios[Building nodes scenarios]. -. When you have built your workflow job template, click btn:[Create workflow job template] to save your entire workflow template and return to the new workflow job template details page. +. When you have built your workflow job template, click btn:[Save] to save your entire workflow template and return to the new workflow job template details page. 
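+If you inspect a saved workflow through the API, each edge type corresponds to a relationship on the workflow node: *Run on success* adds the child node to `success_nodes`, *Run on fail* adds it to `failure_nodes`, and *Always run* adds it to `always_nodes`. The following abbreviated node representation is illustrative only, with made-up IDs:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+{
+    "id": 101,
+    "unified_job_template": 42,
+    "success_nodes": [102],
+    "failure_nodes": [103],
+    "always_nodes": []
+}
+----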
[IMPORTANT] ==== diff --git a/downstream/modules/platform/proc-controller-building-nodes-scenarios.adoc b/downstream/modules/platform/proc-controller-building-nodes-scenarios.adoc index 873df66f6b..aa67e30bcd 100644 --- a/downstream/modules/platform/proc-controller-building-nodes-scenarios.adoc +++ b/downstream/modules/platform/proc-controller-building-nodes-scenarios.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-building-nodes-scenarios"] = Building nodes scenarios @@ -6,45 +8,45 @@ Learn how to manage nodes in the following scenarios. .Procedure -* Click the (image:plus_icon_dark.png[Plus icon,15,15]) icon on the parent node to add a sibling node: - +* Click the (image:options_menu.png[Plus icon,15,15]) icon on the parent node and select *Add step and link* to add a sibling node: ++ image::ug-wf-create-sibling-node.png[Create sibling node] - -* Hover over the line that connects two nodes and click the plus (image:plus_icon_dark.png[Plus icon,15,15]), to insert another node in between nodes. -Clicking the plus (image:plus_icon_dark.png[Plus icon,15,15]) icon automatically inserts the node between the two nodes: - -image::ug-wf-editor-insert-node-template.png[Insert node template] - -* Click btn:[START] again, to add a root node to depict a split scenario: - -image::ug-wf-create-new-add-template-split.png[Node split scenario] - -* At any node where you want to create a split scenario, hover over the node from which the split scenario begins and click the plus (image:plus_icon_dark.png[Plus icon,15,15]) icon. -This adds multiple nodes from the same parent node, creating sibling nodes: - -image::ug-wf-create-siblings.png[Node create siblings] - -[NOTE] -==== -When adding a new node, the btn:[PROMPT] option also applies to workflow templates. -Workflow templates prompt for inventory and surveys. -==== - -* You can undo the last inserted node by using one of these methods: -** Click on another node without making a selection. -** Click btn:[Cancel]. - -The following example workflow contains all three types of jobs initiated by a job template. -If it fails to run, you must protect the sync job. -Regardless of whether it fails or succeeds, proceed to the inventory sync job: - -image::ug-wf-add-template-example.png[Workflow template example] - -Refer to the key by clicking the compass (image:compass.png[Compass, 15,15]) icon to identify the meaning of the symbols and colors associated with the graphical depiction. ++ +//. Hover over the line that connects two nodes and click the plus (image:plus_icon_dark.png[Plus icon,15,15]), to insert another node in between nodes. +//Clicking the plus (image:plus_icon_dark.png[Plus icon,15,15]) icon automatically inserts the node between the two nodes: ++ +//image::ug-wf-editor-insert-node-template.png[Insert node template] ++ +* Click btn:[Add step] or btn:[Start] (image:options_menu.png[Plus icon,15,15]) and select *Add step* to add a root node to depict a split scenario. ++ +//image::ug-wf-create-new-add-template-split.png[Node split scenario] ++ +* At any node where you want to create a split scenario, hover over the node from which the split scenario begins, click the plus (image:options_menu.png[Plus icon,15,15]) icon on the parent node, and select *Add step and link*. +This adds multiple nodes from the same parent node, creating sibling nodes. ++ +//image::ug-wf-create-siblings.png[Node create siblings] ++ +//[NOTE] +//==== +//When adding a new node, the btn:[PROMPT] option also applies to workflow templates. 
+//Workflow templates prompt for inventory and surveys. +//==== + +//* You can undo the last inserted node by using one of these methods: +//** Click on another node without making a selection. +//** Click btn:[Cancel]. + +//The following example workflow contains all three types of jobs initiated by a job template. +//If it fails to run, you must protect the sync job. +//Regardless of whether it fails or succeeds, proceed to the inventory sync job: + +//image::ug-wf-add-template-example.png[Workflow template example] + +Refer to the key by clicking btn:[Legend] to identify the meaning of the symbols and colors associated with the graphical depiction. [NOTE] ==== If you remove a node that has a follow-on node attached to it in a workflow with a set of sibling nodes that has varying edge types, the attached node automatically joins the set of sibling nodes and retains its edge type: -image::ug-wf-node-delete-scenario.png[Node delete scenario] +//image::ug-wf-node-delete-scenario.png[Node delete scenario] ==== diff --git a/downstream/modules/platform/proc-controller-cluster-deprovision-instances.adoc b/downstream/modules/platform/proc-controller-cluster-deprovision-instances.adoc index 18b6603bf4..d46f57dc46 100644 --- a/downstream/modules/platform/proc-controller-cluster-deprovision-instances.adoc +++ b/downstream/modules/platform/proc-controller-cluster-deprovision-instances.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-deprovision-instances"] = Deprovisioning instances @@ -15,11 +17,12 @@ Instead, shut down all services on the {ControllerName} instance and then run the $ awx-manage deprovision_instance --hostname=<hostname> ---- -.Example +The following is an example deprovision command: + [literal, options="nowrap" subs="+attributes"] ---- $ awx-manage deprovision_instance --hostname=hostB ---- Deprovisioning instances in {ControllerName} does not automatically deprovision or remove instance groups. -For more information, see the xref:controller-deprovision-instance-group[Deprovisioning instance groups] section. +For more information, see the link:{URLControllerUserGuide}/controller-instance-and-container-groups#controller-deprovision-instance-group[Deprovisioning instance groups] section in _{ControllerUG}_. diff --git a/downstream/modules/platform/proc-controller-config-notifications-source.adoc b/downstream/modules/platform/proc-controller-config-notifications-source.adoc new file mode 100644 index 0000000000..998ad0180d --- /dev/null +++ b/downstream/modules/platform/proc-controller-config-notifications-source.adoc @@ -0,0 +1,32 @@ +:_mod-docs-content-type: PROCEDURE + +[id="controller-config-notifications-source"] + += Configuring notifications for the source + +Use the following procedure to configure notifications for the source: + +.Procedure + +. From the navigation panel, select {MenuInfrastructureInventories}. +. Select the inventory name you want to configure notifications for. +. In the inventory *Details* page, select the *Notifications* tab. ++ +[NOTE] ==== +The *Notifications* tab is only present when you have saved the newly-created source. + +//image:inventories-create-source-with-notifications-tab.png[Notification tab] ==== +. If notifications are already set up, use the toggles to enable or disable the notifications to use with your particular source. +For more information, see xref:controller-enable-disable-notifications[Enable and Disable Notifications]. +. 
If you have not set up notifications, see xref:controller-notifications[Notifiers] for more information. +. Review your entries and selections. +. Click btn:[Save]. + +.Next steps +When you define a source, it is displayed in the list of sources associated with the inventory. +From the *Sources* tab you can perform a sync on a single source, or sync all of them at once. +You can also perform additional actions such as scheduling a sync process, and edit or delete the source. + +//image:inventories-view-sources.png[Inventories view sources] diff --git a/downstream/modules/platform/proc-controller-configure-analytics.adoc b/downstream/modules/platform/proc-controller-configure-analytics.adoc new file mode 100644 index 0000000000..33b9a2ac34 --- /dev/null +++ b/downstream/modules/platform/proc-controller-configure-analytics.adoc @@ -0,0 +1,28 @@ +:_mod-docs-content-type: PROCEDURE + +[id="proc-controller-configure-analytics"] + += Configuring {Analytics} + +When you imported your license for the first time, you were automatically opted in for the collection of data that powers {Analytics}, a cloud service that is part of the {PlatformNameShort} subscription. + +.Prerequisites + +* A service account created with the *Automation Analytics Viewer* role in console.redhat.com. +For more information, see link:https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/creating_and_managing_service_accounts/proc-ciam-svc-acct-overview-creating-service-acct#proc-ciam-svc-acct-create-creating-service-acct[Creating a service account]. + +.Procedure + +. From the navigation panel, select {MenuSetSystem}. +. Click btn:[Edit]. +. In the *Red Hat Client ID for Analytics* field, enter the client ID that you received when you created your service account. This ID is used to retrieve subscription and content information. +. In the *Red Hat Client Secret for Analytics* field, enter the client secret that you received when you created your service account. This secret is used to send data to {Analytics}. +. In the *Options* list, select the *Gather data for Automation Analytics* checkbox. +. Click btn:[Save]. + +.Verification + +After configuring the service account, run a test job to ensure everything is set up correctly. + +. From the navigation panel, select menu:{MenuTopAE}[Jobs] to launch a job. +. Check link:https://console.redhat.com/ansible/automation-analytics/reports[analytics at console.redhat.com] to confirm that the data is being posted. diff --git a/downstream/modules/platform/proc-controller-configure-jobs.adoc b/downstream/modules/platform/proc-controller-configure-jobs.adoc index 35259aa1e1..0603b6f83d 100644 --- a/downstream/modules/platform/proc-controller-configure-jobs.adoc +++ b/downstream/modules/platform/proc-controller-configure-jobs.adoc @@ -1,18 +1,101 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-configure-jobs"] = Configuring jobs -The *Jobs* tab enables you to configure the types of modules that can be used by the {ControllerName}'s Ad Hoc Commands feature, set limits on the number of jobs that can be scheduled, define their output size, and other details pertaining to working with jobs in {ControllerName}. +You can use the *Job* option to define the operation of jobs in {ControllerName}. .Procedure -. From the navigation panel, select {MenuAEAdminSettings}. -. Select *Jobs settings* in the *Jobs* option. -. Click btn:[Edit]. -. Set the configurable options from the fields provided. +. From the navigation panel, select {MenuSetJob}. +. 
On the *Job Settings* page, click btn:[Edit]. ++ +image::job-settings-full.png[Jobs settings options] ++ +. You can configure the following options: + +* *Ansible Modules Allowed For Ad Hoc Jobs*: List of modules allowed to be used by ad hoc jobs. +* *When can extra variables contain Jinja templates?*: Ansible allows variable substitution through the Jinja2 templating language for `--extra-vars`. ++ +This poses a potential security risk where users with the ability to specify extra vars at job launch time can use Jinja2 templates to run arbitrary Python. ++ +Set this value to either `template` or `never`. ++ +* *Paths to expose to isolated jobs*: List of paths that would otherwise be hidden to expose to isolated jobs. ++ +Enter one path per line. If a path to a specific file is entered, then the entire directory containing that file will be mounted inside the {ExecEnvShort}. ++ +Volumes are mounted from the execution node to the container. ++ +The supported format is `HOST-DIR[:CONTAINER-DIR[:OPTIONS]]`. ++ +* *Extra Environment Variables*: Additional environment variables set for playbook runs, inventory updates, project updates, and notification sending. +* *K8S Ansible Runner Keep-Alive Message Interval*: Only applies to jobs running in a Container Group. ++ +If not 0, send a message every specified number of seconds to keep the connection open. ++ +* *Environment Variables for Galaxy Commands*: Additional environment variables set for invocations of ansible-galaxy within project updates. +Useful if you must use a proxy server for ansible-galaxy but not git. +* *Standard Output Maximum Display Size*: Maximum Size of Standard Output in bytes to display before requiring the output be downloaded. +* *Job Event Standard Output Maximum Display Size*: Maximum Size of Standard Output in bytes to display for a single job or ad hoc command event. stdout ends with `…` when truncated. +* *Job Event Maximum Websocket Messages Per Second*: The maximum number of messages to update the UI live job output with per second. ++ +A value of 0 means no limit. +* *Maximum Scheduled Jobs*: Maximum number of the same job template that can be waiting to run when launching from a schedule before no more are created. +* *Ansible Callback Plugins*: List of paths to search for extra callback plugins to be used when running jobs. ++ +Enter one path per line. +* *Default Job Timeout*: Maximum time in seconds to allow jobs to run. ++ +Use a value of 0 to indicate that no timeout should be imposed. ++ +A timeout set on an individual job template will override this. +* *Default Job Idle Timeout*: If no output is detected from ansible in this number of seconds the execution will be terminated. ++ +Use a value of 0 to indicate that no idle timeout should be imposed. +* *Default Inventory Update Timeout*: Maximum time in seconds to allow inventory updates to run. ++ +Use a value of 0 to indicate that no timeout should be imposed. ++ +A timeout set on an individual inventory source will override this. +* *Default Project Update Timeout*: Maximum time in seconds to allow project updates to run. ++ +Use a value of 0 to indicate that no timeout should be imposed. ++ +A timeout set on an individual project will override this. +* *Per-Host Ansible Fact Cache Timeout*: Maximum time, in seconds, that stored Ansible facts are considered valid since the last time they were modified. 
++ +Only valid, non-stale, facts are accessible by a playbook. ++ +This does not influence the deletion of `ansible_facts` from the database. ++ +Use a value of 0 to indicate that no timeout should be imposed. +* *Maximum number of forks per job*: Saving a Job Template with more than this number of forks results in an error. ++ +When set to 0, no limit is applied. +* *Job execution path*: The directory in which the service creates new temporary directories for job execution and isolation (such as credential files). Only available in operator-based installations. +* *Container Run Options*: Only available in operator-based installations. ++ +List of options to pass to `podman run`. For example: `['--network', 'slirp4netns:enable_ipv6=true', '--log-level', 'debug']`. ++ +You can set the following options: ++ +* *Run Project Updates With Higher Verbosity*: Select to add the CLI `-vvv` flag to playbook runs of `project_update.yml` used for project updates. +* *Enable Role Download*: Select to allow roles to be dynamically downloaded from a `requirements.yml` file for SCM projects. +* *Enable Collection(s) Download*: Select to allow collections to be dynamically downloaded from a `requirements.yml` file for SCM projects. +* *Follow symlinks*: Select to follow symbolic links when scanning for playbooks. ++ +Be aware that setting this to `True` can lead to infinite recursion if a link points to a parent directory of itself. +* *Expose host paths for Container Groups*: Select to expose paths through hostPath for the Pods created by a Container Group. ++ +HostPath volumes present many security risks, and it is best practice to avoid the use of HostPaths when possible. ++ +* *Ignore Ansible Galaxy SSL Certificate Verification*: If set to `true`, certificate validation is not done when installing content from any Galaxy server. ++ Click the tooltip image:question_circle.png[Tool tip,15,15] icon next to the field that you need additional information about. + -For more information about configuring Galaxy settings, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#ref-projects-galaxy-support[Ansible Galaxy Support] section of the _{ControllerUG}_. +For more information about configuring Galaxy settings, see the link:{URLControllerUserGuide}/controller-projects#ref-projects-galaxy-support[Ansible Galaxy Support] section of _{ControllerUG}_. + [NOTE] ==== diff --git a/downstream/modules/platform/proc-controller-configure-secret-lookups.adoc b/downstream/modules/platform/proc-controller-configure-secret-lookups.adoc index 3e3f215c65..87b479aa11 100644 --- a/downstream/modules/platform/proc-controller-configure-secret-lookups.adoc +++ b/downstream/modules/platform/proc-controller-configure-secret-lookups.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-configure-secret-lookups"] = Configuring and linking secret lookups @@ -11,7 +13,7 @@ The metadata input fields are part of the external credential type definition of Use the following procedure to configure {ControllerName} to use each of the supported third-party secret management systems. .Procedure -. Create an external credential for authenticating with the secret management system. At minimum, give a name for the external credential and select one of the following for the *Credential Type* field: +. Create an external credential for authenticating with the secret management system. 
At minimum, give a name for the external credential and select one of the following for the *Credential type* field: + * xref:ref-aws-secrets-manager-lookup[AWS Secrets Manager Lookup] * xref:ref-centrify-vault-lookup[Centrify Vault Credential Provider Lookup] @@ -22,6 +24,7 @@ Use the following procedure to use {ControllerName} to configure and use each of * xref:ref-azure-key-vault-lookup[{Azure} Key Vault] * xref:ref-thycotic-devops-vault[Thycotic DevOps Secrets Vault] * xref:ref-thycotic-secret-server[Thycotic Secret Server] +* xref:controller-github-app-token[GitHub app token lookup] + In this example, the _Demo Credential_ is the target credential. @@ -52,7 +55,7 @@ You return to the *Details* screen of your target credential. . Repeat these steps, starting with Step 3 to complete the remaining input fields for the target credential. By linking the information in this manner, {ControllerName} retrieves sensitive information, such as username, password, keys, certificates, and tokens from the third-party management systems and populates the remaining fields of the target credential form with that data. . If necessary, supply any information manually for those fields that do not use linking as a way of retrieving sensitive information. -For more information about each of the fields, see the appropriate xref:ref-controller-credential-types[Credential Types]. +For more information about each of the fields, see the appropriate link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/using_automation_execution/controller-credentials#ref-controller-credential-types[Credential types]. . Click btn:[Save]. .Additional resources diff --git a/downstream/modules/platform/proc-controller-configure-subscriptions.adoc b/downstream/modules/platform/proc-controller-configure-subscriptions.adoc new file mode 100644 index 0000000000..9d086c7a40 --- /dev/null +++ b/downstream/modules/platform/proc-controller-configure-subscriptions.adoc @@ -0,0 +1,28 @@ +:_mod-docs-content-type: PROCEDURE + +[id="proc-controller-configure-subscriptions"] + += Configuring subscriptions + +You can use the *Subscription* menu to view the details of your subscription, such as compliance, host-related statistics, or expiry, or you can apply a new subscription. + +Ansible subscriptions require a service account from {Console}. You must link:{BaseURL}/red_hat_hybrid_cloud_console/1-latest/html/creating_and_managing_service_accounts/proc-ciam-svc-acct-overview-creating-service-acct#proc-ciam-svc-acct-create-creating-service-acct[create a service account] and use the client ID and client secret to activate your subscription. + +[NOTE] +==== +If you enter your client ID and client secret but cannot locate your subscription, you might not have the correct permissions set on your service account. For more information and troubleshooting guidance for service accounts, see link:https://access.redhat.com/articles/7112649[Configure Ansible Automation Platform to authenticate through service account credentials]. +==== + +For Red Hat Satellite, enter your Satellite username and Satellite password in the corresponding fields. + +.Procedure +. From the navigation panel, select {MenuSetSubscription}. The *Subscription* page is displayed. +//[ddacosta] - Removing images but they can be added back if requested. +//image::settings_subscription_page.png[Initial subscriptions page] +. Click btn:[Edit subscription]. +. 
You can either enter your service account or Satellite credentials, or attach a current Subscription Manifest in the *Welcome* page. +//[ddacosta] - Removing images but they can be added back if requested. +//image::subscriptions_first-page.png[Suscriptions page for password or manifest] +. Click btn:[Next] and agree to the terms of the license agreement. +. Click btn:[Next] to review the subscription settings. +. Click btn:[Finish] to complete the configuration. diff --git a/downstream/modules/platform/proc-controller-configure-system.adoc b/downstream/modules/platform/proc-controller-configure-system.adoc index 1d61859966..508f4b594a 100644 --- a/downstream/modules/platform/proc-controller-configure-system.adoc +++ b/downstream/modules/platform/proc-controller-configure-system.adoc @@ -1,33 +1,60 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-configure-system"] = Configuring system settings -The *System* tab enables you to complete the following actions: - -* Define the base URL for the {ControllerName} host -//* Configure alerts -* Enable activity capturing -* Control visibility of users -* Set {ControllerName} analytics settings -//* Enable certain {ControllerName} features and functionality through a license file -//* Configure logging aggregation options +You can use the *System* menu to define automation controller system settings. .Procedure . From the navigation panel, select {MenuSetSystem}. -. Click btn:[Edit]. -//. Choose from the following *System* options: -//* *Miscellaneous System settings*: Enable activity streams, specify the default {ExecEnvShort}, define the base URL for the {ControllerName} host, enable {ControllerName} administration alerts, set user visibility, define analytics, specify usernames and passwords, and configure proxies. -//* *Miscellaneous Authentication settings*: Configure options associated with authentication methods (built-in or SSO), sessions (timeout, number of sessions logged in, tokens), and social authentication mapping. -//* *Logging settings*: Configure logging options based on the type you choose: -//+ -//image::ag-configure-aap-system-logging-types.png[Logging settings] +The *System Settings* page is displayed. //+ -//For more information about each of the logging aggregation types, see the xref:assembly-controller-logging-aggregation[Logging and Aggregation] section. -. Set the configurable options from the fields provided. -Click the tooltip image:question_circle.png[Tool tip,15,15] icon next to the field that you need additional information about. -//+ -//The following is an example of the *Miscellaneous System* settings: -//+ -//image::ag-configure-aap-system.png[Misc. system settings] -. Click btn:[Save] to apply the settings. +//image::system-settings-page.png[System settings page - unedited] +. Click btn:[Edit]. +//+ +//image::system-settings-full.png[System settings - configurable fields] +. You can configure the following options: ++ +* *Base URL of the service*: This setting is used by services such as notifications to render a valid URL to the service. +* *Proxy IP allowed list*: If the service is behind a reverse proxy or load balancer, use this setting to configure the proxy IP addresses from which the service should trust custom `REMOTE_HOST_HEADERS` header values. ++ +If this setting is an empty list (the default), the headers specified by `REMOTE_HOST_HEADERS` are trusted unconditionally. 
+* *CSRF Trusted Origins List*: If the service is behind a reverse proxy or load balancer, use this setting to configure the `schema://addresses` from which the service should trust Origin header values. +* *Red Hat customer username*: This username is used to send data to Automation Analytics. +* *Red Hat customer password*: This password is used to send data to Automation Analytics. +* *Red Hat or Satellite username*: This username is used to send data to Automation Analytics. +* *Red Hat or Satellite password*: This password is used to send data to Automation Analytics. +* *Global default {ExecEnvShort}*: The {ExecEnvShort} to be used when one has not been configured for a job template. +* *Custom virtual environment paths*: Paths where {ControllerName} looks for custom virtual environments. ++ +Enter one path per line. ++ +* *Last gather date for Automation Analytics*: Set the date and time. +//This field has been removed by https://github.com/ansible/awx/pull/15497 +//* *Last gathered entries from the data collection service of {Analytics}*: Do not enter anything in this field. +* *{Analytics} Gather Interval*: Interval (in seconds) between data gathering. ++ +If *Gather data for {Analytics}* is set to false, this value is ignored. ++ +* *Last cleanup date for HostMetrics*: Set the date and time. +* *Last computing date of HostMetricSummaryMonthly*: Set the date and time. +* *Remote Host Headers*: HTTP headers and meta keys to search to decide remote hostname or IP. +Add additional items to this list, such as `HTTP_X_FORWARDED_FOR`, if behind a reverse proxy. +For more information, see link:{URLAAPOperationsGuide}/assembly-configuring-proxy-support[Configuring proxy support for {PlatformName}]. +* *Automation Analytics upload URL*: This value has been set manually in a settings file. +This setting is used to configure the upload URL for data collection for Automation Analytics. +* *Defines subscription usage model and shows Host Metrics*: ++ +You can select the following options: ++ +* *Enable Activity Stream*: Set to enable capturing activity for the activity stream. +* *Enable Activity Stream for Inventory Sync*: Set to enable capturing activity for the activity stream when running inventory sync. +* *All Users Visible to Organization Admins*: Set to control whether any organization administrator can view all users and teams, even those not associated with their organization. +* *Organization Admins Can Manage Users and Teams*: Set to control whether any organization administrator has the privileges to create and manage users and teams. ++ +You might want to disable this ability if you are using an LDAP or SAML integration. +* *Gather data for Automation Analytics*: Set to enable the service to gather data on automation and send it to {Analytics}. + +. Click btn:[Save] diff --git a/downstream/modules/platform/proc-controller-configure-transparent-SAML.adoc b/downstream/modules/platform/proc-controller-configure-transparent-SAML.adoc index d8a91abaea..7a721f9ecf 100644 --- a/downstream/modules/platform/proc-controller-configure-transparent-SAML.adoc +++ b/downstream/modules/platform/proc-controller-configure-transparent-SAML.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-configure-transparent-SAML"] = Configuring transparent SAML logins @@ -6,17 +8,4 @@ For transparent logins to work, you must first get IdP-initiated logins to work. .Procedure -. 
Set the `RelayState` on the IdP to the key of the IdP definition in the *SAML Enabled Identity Providers* field. -. When this is working, specify the redirect URL for non-logged-in users to somewhere other than the default {ControllerName} login page by using the *Login redirect override URL* field in the *Miscellaneous Authentication* settings window of the {MenuAEAdminSettings} menu. -You must set this to `/sso/login/saml/?idp=` for transparent SAML login, as shown in the following example: -+ -image::ag-configure-system-login-redirect-url.png[Configure SAML login] -+ -[NOTE] -==== -This example shows a typical IdP format, but might not be the correct format for your particular case. -You might need to reach out to your IdP for the correct transparent redirect URL as that URL is not the same for all IdPs. -==== -+ -. After you configure transparent SAML login, to log in using local credentials or a different SSO, go directly to `https:///login`. -This provides the standard {ControllerName} login page, including SSO authentication options, enabling you to log in with any configured method. +* Set the `RelayState` on the IdP to "IdP". diff --git a/downstream/modules/platform/proc-controller-copy-a-job-template.adoc b/downstream/modules/platform/proc-controller-copy-a-job-template.adoc index b3c9117ce9..11c8d7e019 100644 --- a/downstream/modules/platform/proc-controller-copy-a-job-template.adoc +++ b/downstream/modules/platform/proc-controller-copy-a-job-template.adoc @@ -1,16 +1,18 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-copy-a-job-template"] -= Copying a job template += Duplicating a job template -If you copy a job template, it does not copy any associated schedule, notifications, or permissions. -Schedules and notifications must be recreated by the user or administrator creating the copy of the job template. -The user copying the Job Template is be granted administrator permission, but no permissions are assigned (copied) to the job template. +If you duplicate a job template, it does not duplicate any associated schedule, notifications, or permissions. +Schedules and notifications must be recreated by the user or administrator creating the duplicate of the job template. +The user duplicating the Job Template is granted administrator permission, but no permissions are assigned (duplicated) to the job template. .Procedure . From the navigation panel, select {MenuAETemplates}. -. Click image:options_menu.png[options menu,15,15] and the copy image:copy.png[Copy,15,15] icon associated with the template that you want to copy. -* The new template with the name of the template from which you copied and a timestamp displays in the list of templates. +. Click the {MoreActionIcon} icon associated with the template that you want to duplicate and select the image:copy.png[Duplicate Template,15,15] Duplicate Template icon. +* The new template with the name of the template from which you duplicated and a timestamp displays in the list of templates. . Click to open the new template and click btn:[Edit template]. -. Replace the contents of the *Name* field with a new name, and provide or modify the entries in the other fields to complete this page. +. Replace the contents of the *Name* field with a new name, and give or change the entries in the other fields to complete this page. . Click btn:[Save job template]. 
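+
+If you need to duplicate job templates outside the UI, a copy endpoint is also exposed through the controller REST API. The following sketch is illustrative only: the hostname, job template ID, token, and new name are placeholders:
+
+----
+# Duplicate job template 42, giving the copy a new name (illustrative values).
+curl -s -X POST https://controller.example.com/api/v2/job_templates/42/copy/ \
+  -H "Authorization: Bearer $TOKEN" \
+  -H "Content-Type: application/json" \
+  --data '{"name": "Demo Job Template (duplicate)"}'
+----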
diff --git a/downstream/modules/platform/proc-controller-copy-workflow-job-template.adoc b/downstream/modules/platform/proc-controller-copy-workflow-job-template.adoc
index c6af09852d..9484ca4647 100644
--- a/downstream/modules/platform/proc-controller-copy-workflow-job-template.adoc
+++ b/downstream/modules/platform/proc-controller-copy-workflow-job-template.adoc
@@ -1,30 +1,27 @@
+:_mod-docs-content-type: PROCEDURE
+
 [id="controller-copy-workflow-job-template"]
 
-= Copying a workflow job template
+= Duplicating a workflow job template
 
-With {ControllerName} you can copy a workflow job template.
-When you copy a workflow job template, it does not copy any associated schedule, notifications, or permissions.
-Schedules and notifications must be recreated by the user or system administrator creating the copy of the workflow template.
-The user copying the workflow template is granted the administrator permission, but no permissions are assigned (copied) to the workflow template.
+With {ControllerName} you can duplicate a workflow job template.
+When you duplicate a workflow job template, it does not duplicate any associated schedule, notifications, or permissions.
+Schedules and notifications must be recreated by the user or system administrator creating the duplicate of the workflow template.
+The user duplicating the workflow template is granted the administrator permission, but no permissions are assigned (duplicated) to the workflow template.
 
 .Procedure
-. Open the workflow job template that you want to copy by using one of these methods:
-** From the navigation panel, select {MenuAETemplates}.
-** In the workflow job template *Details* view, click image:options_menu.png[15,15] next to the desired template.
-*** Click the copy (image:copy.png[Copy icon,15,15]) icon.
-+
-The new template with the name of the template from which you copied and a timestamp displays in the list of templates.
-+
-//image::ug-wf-list-view-copy-example.png[Workflow template copy list view]
-+
-. Select the copied template and click btn:[Edit template].
-. Replace the contents of the *Name* field with a new name, and give or change the entries in the other fields to complete this template.
+
+. From the navigation panel, select {MenuAETemplates}.
+. Click the image:options_menu.png[More options,15,15] icon associated with the template that you want to duplicate and select the image:copy.png[Duplicate Template,15,15] Duplicate template icon.
+* The new template with the name of the template from which you duplicated and a timestamp displays in the list of templates.
+. Click to open the new template and click btn:[Edit template].
+. Replace the contents of the *Name* field with a new name, and give or change the entries in the other fields to complete this page.
 . Click btn:[Save job template].
 
 [NOTE]
 ====
-If a resource has a related resource that you do not have the right level of permission to, you cannot copy the resource. For example, in the case where a project uses a credential that a current user only has Read access.
-However, for a workflow job template, if any of its nodes use an unauthorized job template, inventory, or credential, the workflow template can still be copied.
-But in the copied workflow job template, the corresponding fields in the workflow template node are absent.
+If a resource has a related resource to which you do not have the right level of permission, you cannot duplicate the resource.
+For example, where a project uses a credential to which the current user has only Read access.
+However, for a workflow job template, if any of its nodes use an unauthorized job template, inventory, or credential, the workflow template can still be duplicated.
+But in the duplicated workflow job template, the corresponding fields in the workflow template node are absent.
 ====
diff --git a/downstream/modules/platform/proc-controller-create-application.adoc b/downstream/modules/platform/proc-controller-create-application.adoc
index 7bed63c376..a0c4a66cf1 100644
--- a/downstream/modules/platform/proc-controller-create-application.adoc
+++ b/downstream/modules/platform/proc-controller-create-application.adoc
@@ -1,8 +1,11 @@
+:_mod-docs-content-type: PROCEDURE
+
 [id="proc-controller-create-application"]
 
 = Creating a new application
 
-When integrating an external web application with {ControllerName} the web application might need to create OAuth2 Tokens on behalf of users of the web application.
+When integrating an external web application with {PlatformNameShort}, the web application might need to create OAuth2 tokens on behalf of users of the web application.
+
 Creating an application with the Authorization Code grant type is the preferred way to do this for the following reasons:
 
 * External applications can obtain a token for users, using their credentials.
@@ -11,25 +14,33 @@ For example, revoking all tokens associated with that application.
 
 .Procedure
 
 . From the navigation panel, select {MenuAMAdminOauthApps}.
-. Click btn:[Create application].
+. Click btn:[Create OAuth application].
 The *Create Application* page opens.
 +
 //image:apps-create-new.png[Create application]
 . Enter the following details:
-
-* *Name* (required): give a name for the application you want to create
-* Optional: *Description*: give a short description for your application
-* *Organization* (required): give an organization with which this application is associated
-* *Authorization grant type* (required): select one of the grant types to use for the user to get tokens for this application.
-For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#ref-controller-password-grant-type[Application using password grant type] section of the _{ControllerAG}_.
-* *Client Type* (required): select the level of security of the client device.
-* *Redirect URIS*: give a list of allowed URIs, separated by spaces.
++
+Name:: (required) Enter a name for the application you want to create.
+URL:: (optional) Enter the URL of the external application. This link is added to the navigation panel for easy access. This setting is currently offered as a Technology Preview only.
+Description:: (optional) Include a short description for your application.
+Organization:: (required) Select an organization with which this application is associated.
+Authorization grant type:: (required) Select one of the grant types to use for the user to get tokens for this application.
+For more information about grant types, see xref:ref-gw-application-functions[Application functions].
+Client Type:: (required) Select the level of security of the client device.
+Redirect URIS:: Provide a list of allowed URIs, separated by spaces.
 You need this if you specified the grant type to be *Authorization code*.
-
-. Click btn:[Create application], or click btn:[Cancel] to abandon your changes.
 +
-The client ID displays in a window.
+. 
Click btn:[Create OAuth application], or click btn:[Cancel] to abandon your changes. ++ +The *Client ID* and *Client Secret* display in a window. This will be the only time the client secret will be shown. ++ +[NOTE] +==== +The *Client Secret* is only created when the *Client type* is set to *Confidential*. +==== ++ +. Click the copy icon and save the client ID and client secret to integrate an external application with {PlatformNameShort}. //image:apps-client-id-popup.png[Client ID] diff --git a/downstream/modules/platform/proc-controller-create-container-group.adoc b/downstream/modules/platform/proc-controller-create-container-group.adoc index da02064c9e..d9bdbb4114 100644 --- a/downstream/modules/platform/proc-controller-create-container-group.adoc +++ b/downstream/modules/platform/proc-controller-create-container-group.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-create-container-group"] = Creating a container group @@ -8,7 +10,8 @@ A `ContainerGroup` is a type of `InstanceGroup` that has an associated credentia * A namespace that you can launch into. Every cluster has a "default" namespace, but you can use a specific namespace. -* A service account that has the roles that enable it to launch and manage pods in this namespace. +* A service account that has the roles that enable it to launch and manage pods in this namespace. +For more information, see link:{URLControllerUserGuide}/controller-instance-and-container-groups#controller-create-service-account[Creating a service account in {OCPShort} or Kubernetes]. * If you are using {ExecEnvShort}s in a private registry, and have a container registry credential associated with them in {ControllerName}, the service account also needs the roles to get, create, and delete secrets in the namespace. If you do not want to give these roles to the service account, you can pre-create the `ImagePullSecrets` and specify them on the pod spec for the `ContainerGroup`. In this case, the {ExecEnvShort} must not have a container registry credential associated, or {ControllerName} attempts to create the secret for you in the namespace. @@ -16,48 +19,10 @@ In this case, the {ExecEnvShort} must not have a container registry credential a An OpenShift or Kubernetes Bearer Token. * A CA certificate associated with the cluster. -The following procedure explains how to create a service account in an OpenShift cluster or Kubernetes, to be used to run jobs in a container group through {ControllerName}. -After the service account is created, its credentials are provided to {ControllerName} in the form of an OpenShift or Kubernetes API Bearer Token credential. - -.Procedure - -. To create a service account, download and use the sample service account, `containergroup sa` and modify it as needed to obtain the credentials. -. Apply the configuration from `containergroup-sa.yml`: -+ -[literal, options="nowrap" subs="+attributes"] ----- -oc apply -f containergroup-sa.yml ----- -+ -. Get the secret name associated with the service account: -+ -[literal, options="nowrap" subs="+attributes"] ----- -export SA_SECRET=$(oc get sa containergroup-service-account -o json | jq '.secrets[0].name' | tr -d '"') ----- -+ -. Get the token from the secret: -+ -[literal, options="nowrap" subs="+attributes"] ----- -oc get secret $(echo ${SA_SECRET}) -o json | jq '.data.token' | xargs | base64 --decode > containergroup-sa.token ----- -+ -. 
Get the CA certificate:
-+
-[literal, options="nowrap" subs="+attributes"]
-----
-oc get secret $SA_SECRET -o json | jq '.data["ca.crt"]' | xargs | base64 --decode > containergroup-ca.crt
-----
-+
-. Use the contents of `containergroup-sa.token` and `containergroup-ca.crt` to provide the information for the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#ref-controller-credential-openShift[OpenShift or Kubernetes API Bearer Token] required for the container group.
-
-To create a container group, create an link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#ref-controller-credential-openShift[OpenShift or Kubernetes API Bearer Token] credential to use with your container group.
-For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#controller-getting-started-create-credential[Creating a credential] in the _{ControllerUG}_.
-
 .Procedure
 
 . From the navigation panel, select {MenuInfrastructureInstanceGroups}.
 . Click btn:[Create group] and select *Create container group*.
-. Enter a name for your new container group and select the credential previously created to associate it to the container group.
-. Click btn:[Create Container Group].
+. Enter a name for your new container group and select the credential you created previously to associate it with the container group.
+. Click btn:[Create container group].
+. Check the *Customize pod spec* box and edit the *Pod spec override* to include the namespace and service account name that you used in the previous steps.
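++
+For reference, a minimal *Pod spec override* might look like the following sketch. The namespace is a placeholder, the service account name matches the `containergroup-service-account` example from the service account procedure, and the {ExecEnvShort} image and runner arguments are deployment-specific assumptions:
++
+----
+apiVersion: v1
+kind: Pod
+metadata:
+  namespace: containergroup-namespace
+spec:
+  serviceAccountName: containergroup-service-account
+  automountServiceAccountToken: false
+  containers:
+    - name: worker
+      image: quay.io/ansible/awx-ee:latest
+      args:
+        - ansible-runner
+        - worker
+        - --private-data-dir=/runner
+----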
diff --git a/downstream/modules/platform/proc-controller-create-credential-type.adoc b/downstream/modules/platform/proc-controller-create-credential-type.adoc
index c7766dd7eb..00eec6625b 100644
--- a/downstream/modules/platform/proc-controller-create-credential-type.adoc
+++ b/downstream/modules/platform/proc-controller-create-credential-type.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
 [id="proc-controller-create-credential-type"]
 
 = Creating a new credential type
 
@@ -5,9 +7,10 @@ To create a new credential type:
 
 .Procedure
 
-. In the *Credential Types* view, click btn:[Add].
+. From the navigation panel, select {MenuAECredentials}.
+. In the *Credential Types* view, click btn:[Create credential type].
 +
-image:credential-types-create-new.png[Create new credential type]
+//image:credential-types-create-new.png[Create new credential type]
 
 . Enter the appropriate details in the *Name* and *Description* field.
 +
@@ -16,7 +19,7 @@
 When creating a new credential type, do not use reserved variable names that start with `ANSIBLE_` for the *INPUT* and *INJECTOR* names and IDs, as they are invalid for custom credential types.
 ====
 
-. In the *Input Configuration* field, specify an input schema that defines a set of ordered fields for that type.
+. In the *Input configuration* field, specify an input schema that defines a set of ordered fields for that type.
 The format can be in YAML or JSON:
 +
 *YAML*
@@ -109,7 +112,7 @@ When `type=string`, fields can optionally specify multiple choice options:
 },
 ----
 
-. In the *Injector Configuration* field, enter environment variables or extra variables that specify the values a credential type can inject.
+. In the *Injector configuration* field, enter environment variables or extra variables that specify the values a credential type can inject.
 The format can be in YAML or JSON (see examples in the previous step).
 +
 The following configuration in JSON format shows each field and how they are used:
 
@@ -187,7 +190,7 @@ The following is an example of referencing many files in a custom credential tem
 }
 ----
 
-. Click btn:[Save].
+. Click btn:[Create credential type].
 +
 Your newly created credential type is displayed on the list of credential types:
 +
@@ -209,4 +212,4 @@
 
 .Additional resources
 
-For information about how to create a new credential, see xref:controller-getting-started-create-credential[Creating a credential].
+For information about how to create a new credential, see xref:controller-create-credential[Creating a credential].
diff --git a/downstream/modules/platform/proc-controller-create-credential.adoc b/downstream/modules/platform/proc-controller-create-credential.adoc
index 9e1f902dbc..fc16644cec 100644
--- a/downstream/modules/platform/proc-controller-create-credential.adoc
+++ b/downstream/modules/platform/proc-controller-create-credential.adoc
@@ -1,13 +1,9 @@
-[id="controller-getting-started-create-credential"]
+:_mod-docs-content-type: PROCEDURE
+
+[id="controller-create-credential"]
 
 = Creating new credentials
 
-ifdef::controller-GS[]
-As part of the initial setup, a demonstration credential and a Galaxy credential have been created for your use. Use the Galaxy credential as a template.
-It can be copied, but not edited.
-You can add more credentials as necessary.
-endif::controller-GS[]
-ifdef::controller-UG[]
 Credentials added to a team are made available to all members of the team.
 You can also add credentials to individual users.
 
@@ -15,42 +11,21 @@ As part of the initial setup, two credentials are available for your use: Demo C
 Use the Ansible Galaxy credential as a template.
 You can copy this credential, but not edit it.
 Add more credentials as needed.
-endif::controller-UG[]
 
 .Procedure
 
-. From the navigation panel, select {MenuAMCredentials}.
-ifdef::controller-GS[]
-. To add a new credential, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#controller-getting-started-create-credential[Creating a credential] in the _{ControllerUG}_.
-+
-[NOTE]
-====
-When you set up additional credentials, the user you assign must have root access or be able to use SSH to connect to the host machine.
-====
-+
-. Click btn:[Demo Credential] to view its details.
-
-image::controller-credentials-demo-details.png[Demo Credential]
-endif::controller-GS[]
-ifdef::controller-UG[]
-. Click btn:[Add].
-+
+. From the navigation panel, select {MenuAECredentials}.
+. On the *Credentials* page, click btn:[Create credential].
//+
 //image:credentials-create-credential.png[Credentials-create]
 . Enter the following information:
-* The name for your new credential.
-* Optional: a description for the new credential.
-* Optional: The name of the organization with which the credential is associated.
-+
-[NOTE]
-====
-A credential with a set of permissions associated with one organization persists if the credential is reassigned to another
-organization.
-====
-. In the *Credential Type* field, enter or select the credential type you want to create.
-+
-//image:credential-types-drop-down-menu.png[Credential types]
+* *Name*: The name for your new credential.
+* Optional: *Description*: A description for the new credential.
+* Optional: *Organization*: The name of the organization with which the credential is associated. The default is *Default*.
+* *Credential type*: Enter or select the credential type you want to create.
 
-. Enter the appropriate details depending on the type of credential selected, as described in xref:ref-controller-credential-types[Credential types].
-. Click btn:[Save].
+. Enter the appropriate details depending on the type of credential selected, as described in link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-types[Credential types].
++
+image:credential-types-drop-down-menu.png[Credential types drop down list]
+. Click btn:[Create credential].
 
-endif::controller-UG[]
diff --git a/downstream/modules/platform/proc-controller-create-custom-notifications.adoc b/downstream/modules/platform/proc-controller-create-custom-notifications.adoc
index 609a1f368d..8c525cc919 100644
--- a/downstream/modules/platform/proc-controller-create-custom-notifications.adoc
+++ b/downstream/modules/platform/proc-controller-create-custom-notifications.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
 [id="controller-create-custom-notifications"]
 
 = Creating custom notifications
 
@@ -7,9 +9,9 @@ You can xref:controller-attributes-custom-notifications[customize the text conte
 
 .Procedure
 
 . From the navigation panel, select {MenuAEAdminJobNotifications}.
-. Click btn:[Add notifier].
+. Click btn:[Create notifier].
 . Choose a notification type from the *Type* list.
-. Enable *Customize messages* using the toggle.
+. Enable *Customize messages* by using the toggle.
 +
 image::ug-notification-template-customize.png[Customize notification]
 +
@@ -22,20 +24,18 @@
 * *Workflow denied message body*
 * *Workflow pending message body*
 * *Workflow timed out message body*
-
++
 The message forms vary depending on the type of notification that you are configuring.
 For example, messages for Email and PagerDuty notifications appear to be a typical email, with a body and a subject, in which case, {ControllerName} displays the fields as *Message* and *Message Body*.
 Other notification types only expect a *Message* for each type of event.
-
++
 The *Message* fields are pre-populated with a template containing a top-level variable, `job` coupled with an attribute, such as `id` or `name`.
 Templates are enclosed in curly brackets and can draw from a fixed set of fields provided by {ControllerName}, shown in the pre-populated message fields:
-
-//image::ug-notification-template-customize-simple-syntax.png[Customize notification syntax]
-
++
 This pre-populated field suggests commonly displayed messages to a recipient who is notified of an event.
 You can customize these messages with different criteria by adding your own attributes for the job as needed.
 Custom notification messages are rendered using Jinja; the same templating engine used by Ansible playbooks.
-
++
 Messages and message bodies have different types of content, as the following points outline:
 
 * Messages are always just strings, one-liners only.
@@ -64,11 +64,8 @@ In all cases, `{{ job_metadata }}` includes the following fields:
 *** `status`
 *** `traceback`
 +
-[NOTE]
-====
 You cannot query individual fields within `{{ job_metadata }}`.
 When you use `{{ job_metadata }}` in a notification template, all data is returned.
-====
 +
 The resulting dictionary looks like the following:
 +
@@ -151,7 +148,5 @@ If you save the notifications template without editing the custom message (or ed
 
 .Additional resources
 
-* For more information, see link:https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#using-variables-with-jinja2[Using variables with Jinja2] in the Ansible documentation.
-* {ControllerNameStart} requires valid syntax to retrieve the correct data to display the messages.
-
-For a list of supported attributes and the proper syntax construction, see the xref:controller-attributes-custom-notifications[Supported Attributes for Custom Notifications] section.
+* link:https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#using-variables-with-jinja2[Using variables with Jinja2]
+* xref:controller-attributes-custom-notifications[Supported attributes for custom notifications]
diff --git a/downstream/modules/platform/proc-controller-create-host.adoc b/downstream/modules/platform/proc-controller-create-host.adoc
new file mode 100644
index 0000000000..8d4780c714
--- /dev/null
+++ b/downstream/modules/platform/proc-controller-create-host.adoc
@@ -0,0 +1,20 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-controller-create-host"]
+
+= Creating a host
+
+
+To create a new host, use the following procedure.
+
+.Procedure
+. From the navigation panel, select {MenuInfrastructureHosts}.
+. Click btn:[Create host].
+. On the *Create Host* page, enter the following information:
+
+* *Name*: Enter a name for your host.
+* Optional: *Description*: Enter a description for your host.
+* *Inventory*: Select the inventory that this host belongs to.
+* *Variables*: Enter the inventory file variables associated with your host.
+
+. Click btn:[Create host] to save your changes.
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-controller-create-insights-credential.adoc b/downstream/modules/platform/proc-controller-create-insights-credential.adoc
index 2e6a769418..214537e074 100644
--- a/downstream/modules/platform/proc-controller-create-insights-credential.adoc
+++ b/downstream/modules/platform/proc-controller-create-insights-credential.adoc
@@ -1,23 +1,82 @@
+:_mod-docs-content-type: PROCEDURE
+
 [id="controller-create-insights-credential"]
 
 = Creating Red Hat Insights credentials
 
-Use the following procedure to create a new credential for use with Red Hat Insights:
+To create a Red Hat Insights credential, use the following procedure:
+
+//.Prerequisites
+
+//[emcwhinn - commenting out the following Insights content until it has been confirmed [AAP-36066]]
+//* To use token-based authentication, you must link:https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/creating_and_managing_service_accounts/proc-ciam-svc-acct-overview-creating-service-acct#proc-ciam-svc-acct-create-creating-service-acct[create a Red Hat service account] to generate a *Client ID* and *Client secret*.
+//* Assign this service account to the appropriate *User Access* group with necessary permissions.
+ +//To enable integration between {PlatformNameShort} and {InsightsShort}, assign the service account with the following permissions: + +//* *inventory:hosts:read* (included in the Inventory Hosts viewer role) +//* *patch::read* (included in the Patch viewer role) +//* *remediations:remediation:read* and *playbook-dispatcher:run:read* (included in the Remediations user role) + +//You might consider associating your service account to an existing user access group with required permissions, or creating a new one. + +//[NOTE] +//==== +//In adherence to security guidelines, service accounts are not automatically included in the default access group. +//To grant access, you must manually add them to the appropriate user access groups. + +//If you are not an Organization Administrator, you can create a service account and then ask your administrator to add your account to the appropriate user access ///groups. +//==== + +//Use the following procedure to create a new credential for use with {InsightsShort}: .Procedure -. From the navigation panel, select {MenuAMCredentials}. +. From the navigation panel, select {MenuAECredentials}. . Click btn:[Create credential]. . Enter the appropriate details in the following fields: * *Name*: Enter the name of the credential. * Optional: *Description*: Enter a description for the credential. * Optional: *Organization*: Enter the name of the organization with which the credential is associated, or click the search image:search.png[Search,15,15] icon and select it from the *Select organization* window. -* *Credential Type*: Enter *Insights* or select it from the list. +* *Credential type*: Enter *Insights* or select it from the list. + image::ug-credential-types-popup-window-insights.png[Credentials insights pop up] + -* *Username*: Enter a valid Red Hat Insights credential. +* *Username*: Enter a valid Red Hat Insights credential. * *Password*: Enter a valid Red Hat Insights credential. -The Red Hat Insights credentials are the user's link:https://access.redhat.com/[Red Hat Customer Portal] account username and password. ++ +The {InsightsShort} credentials are the user's link:https://access.redhat.com/[Red Hat Customer Portal] account username and password. +//+ +//[NOTE] +//==== +//Use the *Username* and *Password* fields for Basic authentication. +//You can leave it blank if using *Client ID* and *Client secret*. +//==== +//+ +//* *Client ID*: Enter the client ID you received when you created your service account. +//* *Client secret*: Enter the client secret you received when you created your service account. + . Click btn:[Create credential]. +//+ +//You can now use this credential in an xref:proc-controller-inv-source-insights[{InsightsShort}-sourced inventory] and xref:controller-create-insights-project[{InsightsShort} project]. + +//.Troubleshooting + +//* If you receive a project sync failure, see the steps in xref:controller-remediate-insights-inventory[Remediating a Red Hat Insights inventory] and check your analytics logs. + +//[IMPORTANT] +//==== +//* You must recreate existing credentials and reassociate them with existing projects and inventory sources to support token-based authentication. +//Note: The following is true for now, but there is a plan to fix this come Q3 or Q4. +//* Only remediations you create using the service account are visible in {PlatformNameShort} for that account. +//This aligns with the current policy, which does not allow a user to view remediations created by other users. 
+//* For more information about the Insights inventory source plugin, see link:https://console.redhat.com/ansible/automation-hub/repo/published/redhat/insights/content/inventory/insights?extIdCarryOver=true&intcmp=701f2000001OEGhAAO&percmp=7013a000002ppOOAAY&sc_cid=7013a000002q6eLAAQ[inventory > insights] in {HubName}. +//==== + +//.Additional resources + +//For more information about service accounts, see the following resources: + +//* link:https://docs.redhat.com/en/documentation/red_hat_customer_portal/1/html/creating_and_managing_service_accounts/index[Creating and Managing Service Accounts] +//* link:https://www.youtube.com/watch?v=UvNcmJsbg1w[How to use Service Accounts on the Hybrid Cloud Console] diff --git a/downstream/modules/platform/proc-controller-create-insights-project.adoc b/downstream/modules/platform/proc-controller-create-insights-project.adoc index 22d5122708..8c3c3d5850 100644 --- a/downstream/modules/platform/proc-controller-create-insights-project.adoc +++ b/downstream/modules/platform/proc-controller-create-insights-project.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-create-insights-project"] = Creating a Red Hat Insights project @@ -14,10 +16,10 @@ Note that the following fields require specific Red Hat Insights related entries * *Name*: Enter the name for your Red Hat Insights project. * Optional: *Description*: Enter a description for the project. * *Organization*: Enter the name of the organization with which the credential is associated, or click the search image:search.png[Search,15,15] icon and select it from the *Select organization* window. -* Optional: *Execution Environment*: The {ExecEnvShort} that is used for jobs that use this project. -* *Source Control Type*: Select *Red Hat Insights*. -* Optional: *Content Signature Validation Credential*: Enable content signing to verify that the content has remained secure when a project is synced. -* *Insights Credential*: This is pre-populated with the Red Hat Insights credential you previously created. +* Optional: *Execution environment*: The {ExecEnvShort} that is used for jobs that use this project. +* *Source control type*: Select *Red Hat Insights*. +* Optional: *Content signature validation credential*: Enable content signing to verify that the content has remained secure when a project is synced. +* *Insights credential*: This is pre-populated with the Red Hat Insights credential you created before. If not, enter the credential, or click the search image:search.png[Search,15,15] icon and select it from the *Select Insights Credential* window. . Select the update options for this project from the *Options* field and provide any additional values, if applicable. For more information about each option click the tooltip image:question_circle.png[Tooltip,15,15] icon next to each one. diff --git a/downstream/modules/platform/proc-controller-create-instance-group.adoc b/downstream/modules/platform/proc-controller-create-instance-group.adoc index 2238052418..3dc9bad581 100644 --- a/downstream/modules/platform/proc-controller-create-instance-group.adoc +++ b/downstream/modules/platform/proc-controller-create-instance-group.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-create-instance-group"] = Creating an instance group @@ -26,11 +28,17 @@ If you do not specify values, then the *Policy instance minimum* and *Policy ins [NOTE] ==== The default value of 0 for *Max concurrent jobs* and *Max forks* denotes no limit. 
-For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-instance-and-container-groups#controller-instance-group-capacity[Instance group capacity limits] in the _{ControllerAG}_.
+ifdef::controller-UG[]
+For more information, see xref:controller-instance-group-capacity[Instance group capacity limits].
+endif::controller-UG[]
+ifdef::operator-mesh[]
+For more information, see link:{URLControllerUserGuide}/index#controller-instance-group-capacity[Instance group capacity limits].
+endif::operator-mesh[]
 ====
 
-. Click btn:[Create Instance Group], or, if you have edited an existing Instance Group click btn:[Save Instance Group]
+. Click btn:[Create instance group] or, if you have edited an existing instance group, click btn:[Save instance group].
 
+.Next steps
 When you have successfully created the instance group the *Details* tab of the newly created instance group enables you to review and edit your instance group information.
 You can also edit *Instances* and review *Jobs* associated with this instance group:
diff --git a/downstream/modules/platform/proc-controller-create-inventory.adoc b/downstream/modules/platform/proc-controller-create-inventory.adoc
deleted file mode 100644
index 2db8b447ce..0000000000
--- a/downstream/modules/platform/proc-controller-create-inventory.adoc
+++ /dev/null
@@ -1,21 +0,0 @@
-[id="controller-creating-inventory"]
-
-= Creating a new Inventory
-
-The Inventories window displays a list of the inventories that are currently available.
-You can sort the inventory list by name and searched type, organization, description, owners and modifiers of the inventory, or additional criteria.
-
-.Procedure
-. To view existing inventories, select {MenuInfrastructureInventories} from the navigation panel.
-** {ControllerNameStart} provides a demonstration inventory for you to use as you learn how the controller works.
-You can use it as it is or edit it later.
-You can create another inventory, if necessary.
-. To add another inventory, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#proc-controller-adding-new-inventory[Add a new inventory] in the _{ControllerUG}_ for more information.
-. Click btn:[Demo Inventory] to view its details.
-
-image::controller-inventories-demo-details.png[Demo inventory]
-
-As with organizations, inventories also have associated users and teams that you can view through the *Access* tab.
-For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#controller-inventories[Inventories] in the _{ControllerUG}_.
-
-A user with the role of *System Administrator* has been automatically populated for this.
diff --git a/downstream/modules/platform/proc-controller-create-job-template.adoc b/downstream/modules/platform/proc-controller-create-job-template.adoc
index c960e56225..1cd35133e5 100644
--- a/downstream/modules/platform/proc-controller-create-job-template.adoc
+++ b/downstream/modules/platform/proc-controller-create-job-template.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
 [id="controller-create-job-template"]
 
 = Creating a job template
 
@@ -5,22 +7,24 @@
 
 .Procedure
 
 . From the navigation panel, select {MenuAETemplates}.
-. On the *Templates* list view, select *Create job template* from the *Create template* list.
+. 
On the *Automation Templates* page, select *Create job template* from the *Create template* list. . Enter the appropriate details in the following fields: + [NOTE] ==== If a field has the *Prompt on launch* checkbox selected, launching the job prompts you for the value for that field when launching. + Most prompted values override any values set in the job template. + Exceptions are noted in the following table. ==== + [cols="33%,33%,33%",options="header"] |=== | *Field* | *Options* | *Prompt on Launch* -| Name | Enter a name for the job.| N/A -| Description| Enter an arbitrary description as appropriate (optional). | N/A -| Job Type a| Choose a job type: +| *Name* | Enter a name for the job.| N/A +| *Description* | Enter an arbitrary description as appropriate (optional). | N/A +| *Job type* a| Choose a job type: - Run: Start the playbook when launched, running Ansible tasks on the selected hosts. @@ -28,26 +32,27 @@ Exceptions are noted in the following table. Tasks that do not support check mode are missed and do not report potential changes. For more information about job types see the link:https://docs.ansible.com/ansible/latest/playbook_guide/index.html[Playbooks] section of the Ansible documentation.| Yes -| Inventory | Choose the inventory to use with this job template from the inventories available to the logged in user. +| *Inventory* | Choose the inventory to use with this job template from the inventories available to the logged in user. A System Administrator must grant you or your team permissions to be able to use certain inventories in a job template. | Yes. Inventory prompts show up as its own step in a later prompt window. -| Project | Select the project to use with this job template from the projects available to the user that is logged in. | N/A -| SCM branch | This field is only present if you chose a project that allows branch override. +| *Project* | Select the project to use with this job template from the projects available to the user that is logged in. | N/A +| *Source control branch* | This field is only present if you chose a project that allows branch override. Specify the overriding branch to use in your job run. If left blank, the specified SCM branch (or commit hash or tag) from the project is used. For more information, see xref:controller-job-branch-overriding[Job branch overriding]. | Yes -| Execution Environment | Select the container image to be used to run this job. -You must select a project before you can select an {ExecEnvShort}. | Yes. - -Execution environment prompts show up as its own step in a later prompt window. -| Playbook | Choose the playbook to be launched with this job template from the available playbooks. +| *Playbook* | Choose the playbook to be launched with this job template from the available playbooks. This field automatically populates with the names of the playbooks found in the project base path for the selected project. Alternatively, you can enter the name of the playbook if it is not listed, such as the name of a file (such as foo.yml) you want to use to run with that playbook. If you enter a filename that is not valid, the template displays an error, or causes the job to fail. | N/A -| Credentials | Select the image:examine.png[examine,15,15] icon to open a separate window. +| *Execution Environment* | Select the container image to be used to run this job. +You must select a project before you can select an {ExecEnvShort}. | Yes. + +Execution environment prompts show up as its own step in a later prompt window. 
+ +| *Credentials* | Select the image:examine.png[examine,15,15] icon to open a separate window. Choose the credential from the available options to use with this job template. @@ -63,7 +68,7 @@ for the following types in order to proceed: Machine.` - You can add more credentials as you see fit. - Credential prompts show up as its own step in a later prompt window. -| Labels a| - Optionally supply labels that describe this job template, such as `dev` or `test`. +| *Labels* a| - Optionally supply labels that describe this job template, such as `dev` or `test`. - Use labels to group and filter job templates and completed jobs in the display. @@ -79,18 +84,9 @@ When a label is removed, it is no longer associated with that particular Job or - Jobs inherit labels from the Job Template at the time of launch. If you delete a label from a Job Template, it is also deleted from the Job. a| - If selected, even if a default value is supplied, you are prompted when launching to supply additional labels, if needed. - You cannot delete existing labels, selecting image:disassociate.png[Disassociate,10,10] only removes the newly added labels, not existing default labels. -| Variables a| - Pass extra command line variables to the playbook. -This is the "-e" or "-extra-vars" command line parameter for ansible-playbook that is documented in the Ansible documentation at link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#defining-variables-at-runtime[Defining variables at runtime]. -- Provide key or value pairs by using either YAML or JSON. -These variables have a maximum value of precedence and overrides other variables specified elsewhere. -The following is an example value: -`git_branch: production -release_version: 1.5` | Yes. - -If you want to be able to specify `extra_vars` on a schedule, you must select *Prompt on launch* for Variables on the job template, or enable a survey on the job template. Those answered survey questions become `extra_vars`. -| Forks | The number of parallel or simultaneous processes to use while executing the playbook. +| *Forks* | The number of parallel or simultaneous processes to use while executing the playbook. A value of zero uses the Ansible default setting, which is five parallel processes unless overridden in `/etc/ansible/ansible.cfg`. | Yes -| Limit a| A host pattern to further constrain the list of hosts managed or affected by the playbook. You can separate many patterns by colons (:). +| *Limit* a| A host pattern to further constrain the list of hosts managed or affected by the playbook. You can separate many patterns by colons (:). As with core Ansible: * a:b means "in group a or b" @@ -101,26 +97,26 @@ For more information, see link:https://docs.ansible.com/ansible/latest/inventory If not selected, the job template executes against all nodes in the inventory or only the nodes predefined on the *Limit* field. When running as part of a workflow, the workflow job template limit is used instead. -| Verbosity | Control the level of output Ansible produces as the playbook executes. +| *Verbosity* | Control the level of output Ansible produces as the playbook executes. Choose the verbosity from Normal to various Verbose or Debug settings. This only appears in the *details* report view. Verbose logging includes the output of all commands. Debug logging is exceedingly verbose and includes information about SSH operations that can be useful in certain support instances. 
Verbosity `5` causes {ControllerName} to block heavily when jobs are running, which could delay reporting that the job has finished (even though it has) and can cause the browser tab to lock up.| Yes -| Job Slicing | Specify the number of slices you want this job template to run. +| *Job slicing* | Specify the number of slices you want this job template to run. Each slice runs the same tasks against a part of the inventory. For more information about job slices, see xref:controller-job-slicing[Job Slicing]. | Yes -| Timeout a| This enables you to specify the length of time (in seconds) that the job can run before it is canceled. Consider the following for setting the timeout value: +| *Timeout* a| This enables you to specify the length of time (in seconds) that the job can run before it is canceled. Consider the following for setting the timeout value: - There is a global timeout defined in the settings which defaults to 0, indicating no timeout. - A negative timeout (<0) on a job template is a true "no timeout" on the job. - A timeout of 0 on a job template defaults the job to the global timeout (which is no timeout by default). - A positive timeout sets the timeout for that job template. | Yes -| Show Changes | Enables you to see the changes made by Ansible tasks. | Yes -| Instance Groups | Choose link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-instance-and-container-groups[Instance and Container Groups] to associate with this job template. +| *Show changes* | Enables you to see the changes made by Ansible tasks. | Yes +| *Instance groups* | Choose xref:controller-instance-and-container-groups[Instance and Container Groups] to associate with this job template. If the list is extensive, use the image:examine.png[examine,15,15] icon to narrow the options. -Job template instance groups contribute to the job scheduling criteria, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-instance-and-container-groups#controller-job-runtime-behavior[Job Runtime Behavior] and link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-instance-and-container-groups#controller-control-job-run[Control where a job runs] for rules. +Job template instance groups contribute to the job scheduling criteria, see link:{URLControllerAdminGuide}/controller-clustering#controller-cluster-job-runtime[Job Runtime Behavior] and xref:controller-control-job-run[Control where a job runs] for rules. A System Administrator must grant you or your team permissions to be able to use an instance group in a job template. Use of a container group requires admin rights. a| - Yes. @@ -129,13 +125,22 @@ If selected, you are providing the jobs preferred instance groups in order of pr - If you prompt for an instance group, what you enter replaces the normal instance group hierarchy and overrides all of the organizations' and inventories' instance groups. - The Instance Groups prompt shows up as its own step in a later prompt window. -| Job Tags | Type and select the *Create* menu to specify which parts of the playbook should be executed. +| *Job tags* | Type and select the *Create* menu to specify which parts of the playbook should be executed. For more information and examples see link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html[Tags] in the Ansible documentation. 
| Yes
-| Skip Tags | Type and select the *Create* menu to specify certain tasks or parts of the playbook to skip.
+| *Skip tags* | Type and select the *Create* menu to specify certain tasks or parts of the playbook to skip.
 For more information and examples see link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html[Tags] in the Ansible documentation. | Yes
+| *Extra variables* a| - Pass extra command line variables to the playbook.
+This is the "-e" or "-extra-vars" command line parameter for ansible-playbook that is documented in the Ansible documentation at link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#defining-variables-at-runtime[Defining variables at runtime].
+- Give key-value pairs by using either YAML or JSON.
+These variables take maximum precedence and override other variables specified elsewhere.
+The following is an example value:
+`git_branch: production
+release_version: 1.5` | Yes.
+
+If you want to be able to specify `extra_vars` on a schedule, you must select *Prompt on launch* for Variables on the job template, or enable a survey on the job template. Those answered survey questions become `extra_vars`.
 |===
 +
-. Specify the following options for launching this template, if necessary:
+. You can set the following options for launching this template, if necessary:
 * *Privilege escalation*: If checked, you enable this playbook to run as an administrator.
 This is the equal of passing the `--become` option to the `ansible-playbook` command.
 * *Provisioning callback*: If checked, you enable a host to call back to {ControllerName} through the REST API and start a job from this job template.
@@ -151,13 +156,15 @@ GitHub and GitLab are the supported SCM systems.
 ** *Webhook key*: Generated shared secret to be used by the webhook service to sign payloads sent to {ControllerName}.
 You must configure this in the settings on the webhook service in order for {ControllerName} to accept webhooks from this service.
 ** *Webhook credential*: Optionally, give a GitHub or GitLab personal access token (PAT) as a credential to use to send status updates back to the webhook service.
++
 Before you can select it, the credential must exist.
-See xref:ref-controller-credential-types[Credential Types] to create one.
++
+See xref:ref-controller-credential-types[Credential types] to create one.
 ** For additional information about setting up webhooks, see xref:controller-work-with-webhooks[Working with Webhooks].
 * *Concurrent jobs*: If checked, you are allowing jobs in the queue to run simultaneously if not dependent on one another. Check this box if you want to run job slices simultaneously.
 For more information, see xref:controller-capacity-determination[{ControllerNameStart} capacity determination and job impact].
 * *Enable fact storage*: If checked, {ControllerName} stores gathered facts for all hosts in an inventory related to the job running.
 * *Prevent instance group fallback*: Check this option to allow only the instance groups listed in the *Instance Groups* field to run the job.
-If clear, all available instances in the execution pool are used based on the hierarchy described in link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index[Control where a job runs].
+If clear, all available instances in the execution pool are used based on the hierarchy described in xref:controller-control-job-run[Control where a job runs].
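++
+To illustrate the *Provisioning callback* option above: after you enable it, a newly provisioned host can request its own configuration with a single authenticated POST to the job template's callback URL. The hostname, job template ID, and host config key in this sketch are placeholders:
++
+----
+curl -s -H 'Content-Type: application/json' \
+  --data '{"host_config_key": "examplekey1234567890"}' \
+  https://controller.example.com/api/v2/job_templates/7/callback/
+----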
Click btn:[Create job template] when you have completed configuring the details of the job template. Creating the template does not exit the job template page but advances to the Job Template *Details* tab. @@ -167,7 +174,7 @@ You must first save the template before launching, otherwise, btn:[Launch templa //image::ug-job-template-details.png[Job template details] -.Verification +*Verification* . From the navigation panel, select {MenuAETemplates}. -. Verify that the newly created template appears on the *Templates* list view. +. Verify that the newly created template appears on the *Templates* page. diff --git a/downstream/modules/platform/proc-controller-create-notification-template.adoc b/downstream/modules/platform/proc-controller-create-notification-template.adoc index 13a84b60a0..79ef01c73f 100644 --- a/downstream/modules/platform/proc-controller-create-notification-template.adoc +++ b/downstream/modules/platform/proc-controller-create-notification-template.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-create-notification-template"] = Creating a notification template @@ -15,4 +17,4 @@ Use the following procedure to create a notification template. * *Organization*: Specify the organization that the notification belongs to. * *Type*: Choose a type of notification from the drop-down menu. For more information, see the xref:controller-notification-types[Notification types] section. -. Click btn:[Save]. +. Click btn:[Save notifier]. diff --git a/downstream/modules/platform/proc-controller-create-organization.adoc b/downstream/modules/platform/proc-controller-create-organization.adoc index 10a2575f19..21eec87066 100644 --- a/downstream/modules/platform/proc-controller-create-organization.adoc +++ b/downstream/modules/platform/proc-controller-create-organization.adoc @@ -1,51 +1,40 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-create-organization"] = Creating an organization -[NOTE] -==== -{ControllerNameStart} automatically creates a default organization. -If you have a Self-support level license, you have only the default organization available and must not delete it. +{PlatformNameShort} automatically creates a default organization. If you have a self-support level license, you have only the default organization available and cannot delete it. -You can use the default organization as it is initially set up and edit it later. -==== +//[ddacosta] Editing has been disabled but there are ongoing conversations about adding it back later: +// You can use the default organization as it is initially set up and edit it later. -. Click btn:[Add] to create a new organization. +.Procedure +. From the navigation panel, select {MenuAMOrganizations}. +. Click btn:[Create organization]. +. Enter the *Name* and optionally provide a *Description* for your organization. + -image:organizations-new-organization-form.png[Organizations- new organization form] - -. You can configure several attributes of an organization: - -* Enter the *Name* for your organization (required). -* Enter a *Description* for the organization. -* *Max Hosts* is only editable by a superuser to set an upper limit on the number of license hosts that an organization can have. -Setting this value to *0* signifies no limit. -If you try to add a host to an organization that has reached or exceeded its cap on hosts, an error message displays: +[NOTE] +==== +If {ControllerName} is enabled on the platform, continue with Step 4. Otherwise, proceed to Step 6. 
+==== + -The inventory sync output view also shows the host limit error. +. Select the name of the *Execution environment*, or search for an existing one, that members of this team can use to run automation. +. Enter the name of the *Instance Groups* on which to run this organization. +. Optional: Enter the *Galaxy credentials* or search from a list of existing ones. +. Select the *Max hosts* for this organization. The default is 0. When this value is 0, it signifies no limit. If you try to add a host to an organization that has reached or exceeded its cap on hosts, an error message displays: + -image:organizations-max-hosts-error-output-view.png[Error] +---- +You have already reached the maximum number of 1 hosts allowed for your organization. Contact your System Administrator for assistance. +---- + -Click btn:[Details] for additional information about the error. +. Click btn:[Next]. +. If you selected more than one instance group, you can manage the order by dragging and dropping the instance group up or down in the list and clicking btn:[Confirm]. + -* Enter the name of the *Instance Groups* on which to run this organization. -* Enter the name of the {ExecEnvShort} or search for one that exists on which to run this organization. -For more information, see link:https://docs.ansible.com/automation-controller/4.4/html/upgrade-migration-guide/upgrade_to_ees.html#upgrade-venv[Upgrading to Execution Environments]. -* Optional: Enter the *Galaxy Credentials* or search from a list of existing ones. -. Click btn:[Save] to finish creating the organization. - -When the organization is created, {ControllerName} displays the Organization details, and enables you to manage access and {ExecEnvShort}s for the organization. - -image:organizations-show-record-for-example-organization.png[Organization details] - -From the *Details* tab, you can edit or delete the organization. - [NOTE] ==== -If you attempt to delete items that are used by other work items, a message lists the items that are affected by the deletion and prompts you to confirm the deletion. -Some screens contain items that are invalid or have been deleted previously, and will fail to run. +The execution precedence is determined by the order in which the instance groups are listed. ==== - -The following is an example of such a message: - -image:warning-deletion-dependencies.png[Warning] \ No newline at end of file ++ +. Click btn:[Next] and verify the organization settings. +. Click btn:[Finish]. diff --git a/downstream/modules/platform/proc-controller-create-service-account.adoc b/downstream/modules/platform/proc-controller-create-service-account.adoc new file mode 100644 index 0000000000..8def908366 --- /dev/null +++ b/downstream/modules/platform/proc-controller-create-service-account.adoc @@ -0,0 +1,101 @@ +:_mod-docs-content-type: PROCEDURE + +[id="controller-create-service-account"] + += Creating a service account in {OCPShort} or Kubernetes + +Use service accounts in an OpenShift cluster or Kubernetes to run jobs in a container group through {ControllerName}. +After the service account is created, its credentials are provided to {ControllerName} in the form of an OpenShift or Kubernetes API Bearer Token credential. + +.Procedure + +. 
To create a service account, download and use the following sample service account example, `containergroup sa`, and change it as required to obtain the credentials: ++ +[literal, options="nowrap" subs="+attributes"] +---- +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: containergroup-service-account + namespace: containergroup-namespace +--- +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: role-containergroup-service-account + namespace: containergroup-namespace +rules: + - verbs: + - get + - list + - watch + - create + - update + - patch + - delete + apiGroups: + - '' + resources: + - pods + - verbs: + - get + apiGroups: + - '' + resources: + - pods/log + - verbs: + - create + apiGroups: + - '' + resources: + - pods/attach + - verbs: + - get + - create + - delete + apiGroups: + - '' + resources: + - secrets +--- +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: role-containergroup-service-account-binding + namespace: containergroup-namespace +subjects: +- kind: ServiceAccount + name: containergroup-service-account + namespace: containergroup-namespace +roleRef: + kind: Role + name: role-containergroup-service-account + apiGroup: rbac.authorization.k8s.io +---- ++ +. Apply the configuration from `containergroup-sa.yml`: ++ +[literal, options="nowrap" subs="+attributes"] +---- +oc apply -f containergroup-sa.yml +---- ++ +. Get an API token by generating a service account token: ++ +[literal, options="nowrap" subs="+attributes"] +---- +oc create token containergroup-service-account --duration=$((365*24))h > containergroup-sa.token +---- ++ +. Get the CA certificate: ++ +[literal, options="nowrap" subs="+attributes"] +---- +oc get secret -n openshift-ingress wildcard-tls -o jsonpath='{.data.ca\.crt}' | base64 -d > containergroup-ca.crt +---- ++ +. Use the contents of `containergroup-sa.token` and `containergroup-ca.crt` to provide the information for the link:{URLControllerUserGuide}/controller-instance-and-container-groups#controller-create-service-account[OpenShift or Kubernetes API Bearer Token] required for the container group. + +To create a container group, create a link:{URLControllerUserGuide}/controller-instance-and-container-groups#controller-create-service-account[OpenShift or Kubernetes API Bearer Token] credential to use with your container group. +For more information, see link:{URLControllerUserGuide}/controller-credentials#controller-create-credential[Creating new credentials]. 
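+
+As an optional sanity check (a sketch, not part of this procedure), you can exercise the token and CA certificate directly against the cluster API before supplying them to {ControllerName}; the API server URL is a placeholder that differs for every cluster:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+# Hypothetical check: a valid token and CA certificate return the pod list
+# for the namespace that the role above grants access to. Replace
+# <api-server-url> with the URL reported by `oc whoami --show-server`.
+curl --cacert containergroup-ca.crt \
+  -H "Authorization: Bearer $(cat containergroup-sa.token)" \
+  <api-server-url>/api/v1/namespaces/containergroup-namespace/pods
+----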
diff --git a/downstream/modules/platform/proc-controller-create-survey.adoc b/downstream/modules/platform/proc-controller-create-survey.adoc index 8c808ca603..b2530d8021 100644 --- a/downstream/modules/platform/proc-controller-create-survey.adoc +++ b/downstream/modules/platform/proc-controller-create-survey.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-create-survey"] = Creating a survey diff --git a/downstream/modules/platform/proc-controller-create-workflow-template.adoc b/downstream/modules/platform/proc-controller-create-workflow-template.adoc index e950e236e8..617ee57cf1 100644 --- a/downstream/modules/platform/proc-controller-create-workflow-template.adoc +++ b/downstream/modules/platform/proc-controller-create-workflow-template.adoc @@ -1,6 +1,10 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-create-workflow-template"] -= Creating a workflow template += Creating a workflow job template + +:context: create-workflow-job-templates To create a new workflow job template, complete the following steps: @@ -13,7 +17,7 @@ This can lead to playbook failures if the limit is mandatory for the playbook th .Procedure . From the navigation panel, select {MenuAETemplates}. -. On the *Templates* list view, select *Create workflow job template* from the *Create template* list. +. On the *Automation Templates* page, select *Create workflow job template* from the *Create template* list. + //image::ug-create-new-wf-template.png[Create workflow template] + @@ -61,6 +65,10 @@ a| Yes If selected, even if a default value is supplied, you are prompted when launching to supply additional labels, if needed. - You cannot delete existing labels, selecting image:disassociate.png[Disassociate,10,10] only removes the newly added labels, not existing default labels. +| Job tags | Type and select the *Create* drop-down to specify which parts of the playbook should run. +For more information and examples see link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html[Tags] in the Ansible documentation. | Yes +| Skip tags | Type and select the *Create* drop-down to specify certain tasks or parts of the playbook to skip. +For more information and examples see link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html[Tags] in the Ansible documentation. | Yes | Extra variables a| - Pass extra command line variables to the playbook. This is the "-e" or "--extra-vars" command line parameter for ansible-playbook that is documented in the Ansible documentation at link:https://docs.ansible.com/ansible/latest/reference_appendices/general_precedence.html[Controlling how Ansible behaves: precedence rules]. @@ -70,27 +78,24 @@ release_version: 1.5` | Yes If you want to be able to specify `extra_vars` on a schedule, you must select *Prompt on launch* for *Extra variables* on the workflow job template, or enable a survey on the job template. Those answered survey questions become `extra_vars`. For more information about extra variables, see xref:controller-extra-variables[Extra Variables]. -| Job tags | Type and select the *Create* drop-down to specify which parts of the playbook should run. -For more information and examples see link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html[Tags] in the Ansible documentation. | Yes -| Skip Tags | Type and select the *Create* drop-down to specify certain tasks or parts of the playbook to skip. 
-For more information and examples see link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html[Tags] in the Ansible documentation. | Yes |=== + . Specify the following *Options* for launching this template, if necessary: + * Check *Enable webhook* to turn on the ability to interface with a predefined SCM system web service that is used to launch a workflow job template. GitHub and GitLab are the supported SCM systems. ** If you enable webhooks, other fields display, prompting for additional information: *** *Webhook service*: Select which service to listen for webhooks from. -*** *Webhook credential*: Optionally, provide a GitHub or GitLab personal access token (PAT) as a credential to use to send status updates back to the webhook service. -For more information, see xref:ref-controller-credential-types[Credential Types] to create one. -+ -** When you click btn:[Create workflow job template], additional fields populate and the workflow visualizer automatically opens. *** *Webhook URL*: Automatically populated with the URL for the webhook service to POST requests to. +//*** *Webhook credential*: Optionally, provide a GitHub or GitLab personal access token (PAT) as a credential to use to send status updates back to the webhook service. +//For more information, see TBD[Credential Types] to create one. *** *Webhook key*: Generated shared secret to be used by the webhook service to sign payloads sent to {ControllerName}. You must configure this in the settings on the webhook service so that webhooks from this service are accepted in {ControllerName}. For additional information about setting up webhooks, see xref:controller-work-with-webhooks[Working with Webhooks]. + -Check *Enable concurrent jobs* to allow simultaneous runs of this workflow. +//** When you click btn:[Create workflow job template], the workflow visualizer automatically opens. +//*** *Webhook URL*: Automatically populated with the URL for the webhook service to POST requests to. +* Check *Enable concurrent jobs* to allow simultaneous runs of this workflow. For more information, see xref:controller-capacity-determination[{ControllerNameStart} capacity determination and job impact]. + . When you have completed configuring the workflow template, click btn:[Create workflow job template]. @@ -110,7 +115,7 @@ There you can complete the following tasks: + [NOTE] ==== -Save the template before launching, or btn:[Launch template] remains disabled. +Save the template before launching, or btn:[Launch template] remains disabled. The *Notifications* tab is only present after you save the template. ==== diff --git a/downstream/modules/platform/proc-controller-creating-a-team.adoc b/downstream/modules/platform/proc-controller-creating-a-team.adoc index 5cabe5ad96..a94cd9e4ef 100644 --- a/downstream/modules/platform/proc-controller-creating-a-team.adoc +++ b/downstream/modules/platform/proc-controller-creating-a-team.adoc @@ -1,50 +1,27 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-creating-a-team"] = Creating a team -You can create as many teams of users as you need for your organization. -You can assign permissions to each team, just as with users. -Teams can also assign ownership for credentials, minimizing the steps to assign the same credentials to the same user. - -.Procedure -. On the *Teams* page, click btn:[Add]. -+ -//image:teams-create-new-team.png[Teams -create new team] -. 
Enter the appropriate details into the following fields: - -* *Name* -* Optional: *Description* -* *Organization*: You must select an existing organization -. Click *Save*. -The *Details* dialog opens. -. Review and edit your team information. -+ -image:teams-example-team-successfully-created.png[Teams- Details dialog] - -== Adding or removing a user to a team - -To add a user to a team, the user must already have been created. -For more information, see xref:proc-controller-creating-a-user[Creating a user]. -Adding a user to a team adds them as a member only. -Use the *Access* tab to specify a role for the user on different resources. +You can create new teams, assign an organization to the team, and manage the users and administrators associated with each team. +Users associated with a team inherit the permissions of the team, and of any organization in which the team has membership. -.Procedure -. In the *Access* tab of the *Details* page click btn:[Add]. -. Follow the prompts to add a user and assign them to roles. -. Click btn:[Save]. - -== Removing roles for a user +To add a user or administrator to a team, the user must have already been created. .Procedure -* To remove roles for a particular user, click the image:disassociate.png[Disassociate,10,10] icon next to its resource. - -//image:permissions-disassociate.png[image] - -This launches a confirmation dialog, asking you to confirm the disassociation. - -//image:permissions-disassociate-confirm.png[image] +. From the navigation panel, select {MenuAMTeams}. +. Click btn:[Create team]. +. Enter a *Name* and optionally give a *Description* for the team. +. Select an *Organization* to be associated with this team. ++ +[NOTE] +==== +Each team can only be assigned to one organization. +==== ++ +. Click btn:[Create team]. ++ +The *Details* page opens, where you can review and edit your team information. -include::ref-controller-team-access.adoc[leveloffset=+1] -include::ref-controller-team-roles.adoc[leveloffset=+1] -include::proc-controller-team-add-permissions.adoc[leveloffset=+1] diff --git a/downstream/modules/platform/proc-controller-creating-a-user.adoc b/downstream/modules/platform/proc-controller-creating-a-user.adoc index 4ab27f7a18..3871b2ccd4 100644 --- a/downstream/modules/platform/proc-controller-creating-a-user.adoc +++ b/downstream/modules/platform/proc-controller-creating-a-user.adoc @@ -1,57 +1,34 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-creating-a-user"] = Creating a user -To create new users in {ControllerName} and assign them a role. +There are three types of users in {PlatformNameShort}: + +Normal user:: Normal users have read and write access limited to the resources (such as inventory, projects, and job templates) for which that user has been granted the appropriate roles and privileges. Normal users are the default type of user when no other *User type* is specified. +{PlatformNameShort} Administrator:: An administrator (also known as a Superuser) has full system administration privileges, with read and write access over the entire installation. An administrator is typically responsible for managing all aspects of and delegating responsibilities for day-to-day work to various users. +{PlatformNameShort} Auditor:: Auditors have read-only capability for all objects within the environment. .Procedure -. On the *Users* page, click btn:[Add]. -+ -The *Create User* dialog opens. -. Enter the appropriate details about your new user. 
-Fields marked with an asterisk (*) are required. +. From the navigation panel, select {MenuAMUsers}. +. Click btn:[Create user]. +. Enter the details about your new user in the fields on the *Create user* page. Fields marked with an asterisk (*) are required. +. Normal users are the default when no *User type* is specified. To define a user as an administrator or auditor, select a *User type* checkbox. + [NOTE] ==== If you are modifying your own password, log out and log back in again for it to take effect. ==== + -You can assign three types of users: - -* *Normal User*: Normal Users have read and write access limited to the resources (such as inventory, projects, and job templates) for which that user has been granted the appropriate roles and privileges. -* *System Auditor*: Auditors inherit the read-only capability for all objects within the environment. -* *System Administrator*: A System Administrator (also known as a Superuser) has full system administration privileges -- with full read and write privileges over the entire installation. -A System Administrator is typically responsible for managing all aspects of and delegating responsibilities for day-to-day work to various users. -+ -image:users-create-user-form-types.png[User Types] -+ -[NOTE] -==== -A default administrator with the role of *System Administrator* is automatically created during the installation process and is available to all users of {ControllerName}. -One *System Administrator* must always exist. -To delete the *System Administrator* account, you must first create another *System Administrator* account. -==== - -. Click btn:[Save]. -+ -When the user is successfully created, the *User* dialog opens. -+ -image:users-edit-user-form.png[Edit User Form] +. Select the *Organization* to be assigned for this user. For information about creating a new organization, refer to xref:proc-controller-create-organization[Creating an organization]. +. Click btn:[Create user]. -. Click btn:[Delete] to delete the user, or you can delete users from a list of current users. -For more information, see xref:proc-controller-deleting-a-user[Deleting a user]. -+ -The same window opens whether you click the user's name, or the Edit image:leftpencil.png[Edit, 15,15] icon beside the user. You can use this window to review and modify the User's *Organizations*, *Teams*, *Roles*, and other user membership details. +When the user is successfully created, the *User* dialog opens. From here, you can review and modify the user’s Teams, Roles, Tokens, and other membership details. [NOTE] ==== If the user is not newly-created, the details screen displays the last login activity of that user. - -//image:users-last-login-info.png[image] ==== -If you log in as yourself, and view the details of your user profile, you can manage tokens from your user profile. - -For more information, see xref:proc-controller-user-tokens[Adding a user token]. - -//image:user-with-token-button.png[image] +If you log in as yourself, and view the details of your user profile, you can manage tokens from your user profile by selecting the *Tokens* tab. For more information, see xref:proc-controller-apps-create-tokens[Adding a token]. 
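+
+As an illustrative aside (a sketch, not part of the official procedure), user records can also be created programmatically. This example assumes the {ControllerName} REST API endpoint `/api/v2/users/`; the host, credentials, and field values are placeholders:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+# Hypothetical example: create a normal user through the API. A normal user
+# is the default when is_superuser and is_system_auditor are false.
+curl -k -u admin:<password> -H 'Content-Type: application/json' \
+  -X POST -d '{"username": "jdoe", "password": "changeme", "is_superuser": false}' \
+  https://<controller-host>/api/v2/users/
+----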
diff --git a/downstream/modules/platform/proc-controller-credential-create-openshift-account.adoc b/downstream/modules/platform/proc-controller-credential-create-openshift-account.adoc index 52a6d006dc..ba55bbbfbc 100644 --- a/downstream/modules/platform/proc-controller-credential-create-openshift-account.adoc +++ b/downstream/modules/platform/proc-controller-credential-create-openshift-account.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-credential-create-openshift-account"] = Creating a service account in an Openshift cluster @@ -8,8 +10,49 @@ After you create the service account, its credentials are provided to {Controlle After you create a service account, use the information in the new service account to configure {ControllerName}. .Procedure -. To create a service account, download and use the link:https://docs.ansible.com/automation-controller/latest/html/userguide/_downloads/7a0708e6c2113e9601bf252270fa6c50/containergroup-sa.yml[sample service account] and change it as required to obtain the previous credentials. -. Apply the configuration from the link:https://docs.ansible.com/automation-controller/latest/html/userguide/_downloads/7a0708e6c2113e9601bf252270fa6c50/containergroup-sa.yml[sample service account]: +. To create a service account, download and use the sample service account, `containergroup sa`, and change it as needed to obtain the credentials: ++ +[literal, options="nowrap" subs="+attributes"] +---- +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: containergroup-service-account + namespace: containergroup-namespace +--- +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: role-containergroup-service-account + namespace: containergroup-namespace +rules: +- apiGroups: [""] + resources: ["pods"] + verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] +- apiGroups: [""] + resources: ["pods/log"] + verbs: ["get"] +- apiGroups: [""] + resources: ["pods/attach"] + verbs: ["get", "list", "watch", "create"] +--- +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: role-containergroup-service-account-binding + namespace: containergroup-namespace +subjects: +- kind: ServiceAccount + name: containergroup-service-account + namespace: containergroup-namespace +roleRef: + kind: Role + name: role-containergroup-service-account + apiGroup: rbac.authorization.k8s.io +---- ++ +. Apply the configuration from `containergroup-sa.yml`: + [literal, options="nowrap" subs="+attributes"] ---- diff --git a/downstream/modules/platform/proc-controller-customize-pod-spec.adoc b/downstream/modules/platform/proc-controller-customize-pod-spec.adoc index be2c7ac232..92f7d7849e 100644 --- a/downstream/modules/platform/proc-controller-customize-pod-spec.adoc +++ b/downstream/modules/platform/proc-controller-customize-pod-spec.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-customize-pod-spec"] = Customizing the pod specification @@ -10,12 +12,12 @@ A full list of options can be found in the link:https://docs.openshift.com/onlin . From the navigation panel, select {MenuInfrastructureInstanceGroups}. . Click btn:[Create group] and select *Create container group*. -. Check the option for *Customize pod spec* -. Enter a custom Kubernetes or OpenShift Pod specification in the *Pod Spec Override* field. +. Check the option for *Customize pod spec*. +. Enter a custom Kubernetes or OpenShift Pod specification in the *Pod spec override* field. 
+ image::ag-instance-group-customize-cg-pod.png[Customize pod specification] + -. Click btn:[Create Container Group]. +. Click btn:[Create container group]. //You can give additional customizations, if needed. Click btn:[Expand] to view the entire customization window: diff --git a/downstream/modules/platform/proc-controller-define-schedule-exceptions.adoc b/downstream/modules/platform/proc-controller-define-schedule-exceptions.adoc index 7709b2b928..838628d049 100644 --- a/downstream/modules/platform/proc-controller-define-schedule-exceptions.adoc +++ b/downstream/modules/platform/proc-controller-define-schedule-exceptions.adoc @@ -1,9 +1,13 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-define-schedule-exceptions"] = Setting exceptions to the schedule -On the *Create Schedule* page, click btn:[Create exception]. +.Procedure +. On the *Schedule Exceptions* page, click btn:[Create exception]. ++ Use the same format as for the schedule rules to create a schedule exception. -Click btn:[Next] to save and review both the schedule and the exception. \ No newline at end of file +. Click btn:[Next] to save and review both the schedule and the exception. diff --git a/downstream/modules/platform/proc-controller-define-schedule-rules.adoc b/downstream/modules/platform/proc-controller-define-schedule-rules.adoc index 8e47292635..21f9cca21e 100644 --- a/downstream/modules/platform/proc-controller-define-schedule-rules.adoc +++ b/downstream/modules/platform/proc-controller-define-schedule-rules.adoc @@ -1,29 +1,34 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-define-schedule-rules"] = Defining rules for the schedule -Enter the following information: +.Procedure + +. Enter the following information: -* *Frequency*: Enter how frequently the schedule runs. -//* *Interval*: I don't know what this indicates. -* *Week Start*: Select the day of the week that you want the week to begin. -* *Weekdays*: Select the days of the week on which to run the schedule. -* *Months*: Select the months of the year on which to run the schedule -* *Annual week(s) number*: This field is used to declare numbered weeks of the year that the schedule should run. -* *Minute(s) of hour*: This field is used to declare minute(s) of the hour that the schedule should run. -* *Hour of day*: This field is used to declare the hours of day that the schedule should run. -* *Monthly day(s) number*: This field is used to declare ordinal days number of the month that the schedule should run. -* *Annual day(s) number*: This field is used to declare ordinal number days of the year that the schedule should run. -* *Occurences*: Use this field to filter down indexed rules based on those declared using the form fields in the Rule section. +* *Frequency*: Enter how often the schedule runs. +* *Interval*: Select the interval at which the rule will repeat. +* *Week start*: Select the day of the week that you want the week to begin. +* *Minutes of the hour*: Use this field to declare minute(s) of the hour that the schedule should run. +* *Hours of day*: Use this field to declare the hours of day that the schedule should run. +* *Days of the week*: Select the days of the week on which to run the schedule. +* *Days of the month*: Select the days of the month on which to run the schedule. +* *Weeks of the year*: Use this field to declare numbered weeks of the year that the schedule should run. +* *Months of the year*: Select the months of the year in which to run the schedule. 
+* *Days of the year*: Use this field to declare the ordinal days of the year on which the schedule should run. +* *Occurrences*: Use this field to filter down indexed rules based on those declared using the form fields in the Rule section. +* *Schedule ending type*: Use this field to select when the schedule is set to end. + For more information, see the link:https://datatracker.ietf.org/doc/html/rfc5545[link] to the `iCalendar RFC for bysetpos` field in the iCalendar documentation when you have set the rules for the schedule. -* *Count*: The number of times this rule should be used. -* *Until*: Use this rule until the specified date and time +//* *Count*: The number of times this rule should be used. +//* *Until*: Use this rule until the specified date and time -Click btn:[Save rule] The *Schedule Rules* summary page is displayed. -Click btn:[Add rule] to add additional rules. -Click btn:[Next]. - -The *Schedule Exceptions* summary page is displayed. \ No newline at end of file +. Click btn:[Save rule]. The *Schedule Rules* summary page is displayed. +. Click btn:[Add rule] to add additional rules. +. Click btn:[Next]. ++ +The *Schedule Exceptions* page is displayed. diff --git a/downstream/modules/platform/proc-controller-delete-job-template.adoc b/downstream/modules/platform/proc-controller-delete-job-template.adoc index 796e831362..017be04831 100644 --- a/downstream/modules/platform/proc-controller-delete-job-template.adoc +++ b/downstream/modules/platform/proc-controller-delete-job-template.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-delete-job-template"] = Deleting a job template @@ -6,14 +8,15 @@ Before deleting a job template, ensure that it is not used in a workflow job tem .Procedure -. Delete a job template by using one of these methods: -* Select the checkbox next to one or more job templates. Click image:options_menu.png[options menu,15,15] and select btn:[Delete template]. -* Select the required job template, on the *Details* page click image:options_menu.png[options menu,15,15] and select btn:[Delete template]. +. Delete a job template by using one of these methods: +* Click the {MoreActionsIcon} icon and select the Delete Template image:delete-button.png[Delete Template,15,15] icon, or +* Select the required job template and, on the *Details* page, click the {MoreActionsIcon} icon and select image:delete-button.png[Delete template,15,15] btn:[Delete template]. [NOTE] ==== If deleting items that are used by other work items, a message opens listing the items that are affected by the deletion and prompts you to confirm the deletion. -Some screens contain items that are invalid or previously deleted, and will fail to run. The following is an example of that message: +Some screens contain items that are invalid or previously deleted, and will fail to run. +//The following is an example of that message: //image::ug-warning-deletion.png[Deletion warning] ==== diff --git a/downstream/modules/platform/proc-controller-deleting-a-user.adoc b/downstream/modules/platform/proc-controller-deleting-a-user.adoc index 8dc4831d99..2dd255887e 100644 --- a/downstream/modules/platform/proc-controller-deleting-a-user.adoc +++ b/downstream/modules/platform/proc-controller-deleting-a-user.adoc @@ -1,16 +1,18 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-deleting-a-user"] = Deleting a user -Before you can delete a user, you must have user permissions. -When you delete a user account, the name and email of the user are permanently removed from {ControllerName}. 
+Before you can delete a user, you must have normal user or system administrator permissions. When you delete a user account, the name and email of the user are permanently removed from {PlatformNameShort}. .Procedure -. From the navigation panel, select {MenuControllerUsers}. -. Click btn:[Users] to display a list of the current users. -. Select the check box for the user that you want to remove. -. Click btn:[Delete]. -//+ -//image:users-home-users-checked-delete.png[image] +. From the navigation panel, select {MenuAMUsers}. +. Select the checkbox for the user that you want to remove. +. Click the {MoreActionsIcon} icon next to the user you want removed and select *Delete user*. ++ +[NOTE] +==== +You can delete multiple users by selecting the checkbox next to each user you want to remove, and clicking *Delete users* from the *More actions {MoreActionsIcon}* list. +==== -. Click btn:[Delete] in the confirmation warning message to permanently delete the user. diff --git a/downstream/modules/platform/con-controller-deprovision-instance-groups.adoc b/downstream/modules/platform/proc-controller-deprovision-instance-groups.adoc similarity index 91% rename from downstream/modules/platform/con-controller-deprovision-instance-groups.adoc rename to downstream/modules/platform/proc-controller-deprovision-instance-groups.adoc index 3346b21928..774c22bd63 100644 --- a/downstream/modules/platform/con-controller-deprovision-instance-groups.adoc +++ b/downstream/modules/platform/proc-controller-deprovision-instance-groups.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-deprovision-instance-group"] = Deprovisioning instance groups @@ -21,7 +23,7 @@ automation-controller-service stop awx-manage deprovision_instance --hostname= ---- + -.Example +For example: [literal, options="nowrap" subs="+attributes"] ---- @@ -39,7 +41,7 @@ awx-manage unregister_queue --queuename= Removing an instance's membership from an instance group in the inventory file and re-running the setup playbook does not ensure that the instance is not added back to a group. To be sure that an instance is not added back to a group, remove it through the API and also remove it in your inventory file. You can also stop defining instance groups in the inventory file. You can manage instance group topology through the {ControllerName} UI. -For more information about managing instance groups in the UI, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#controller-instance-groups[Managing Instance Groups] in the _{ControllerUG}_. +For more information about managing instance groups in the UI, see xref:controller-instance-groups[Managing Instance Groups]. [NOTE] ==== diff --git a/downstream/modules/platform/proc-controller-disabling-live-events.adoc b/downstream/modules/platform/proc-controller-disabling-live-events.adoc index 01f665df4a..4a68cc00df 100644 --- a/downstream/modules/platform/proc-controller-disabling-live-events.adoc +++ b/downstream/modules/platform/proc-controller-disabling-live-events.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-disabling-live-events"] = Disabling live streaming events @@ -6,4 +8,4 @@ . Disable live streaming events by using one of the following methods: .. In the API, set `UI_LIVE_UPDATES_ENABLED` to *False*. -.. Navigate to your {ControllerName}. Open the *Miscellaneous System Settings* window. Set the *Enable Activity Stream* toggle to *Off*. +.. Go to your {ControllerName}. 
Open the *Miscellaneous System Settings* window. Set the *Enable Activity Stream* toggle to *Off*. diff --git a/downstream/modules/platform/proc-controller-edit-job-template.adoc b/downstream/modules/platform/proc-controller-edit-job-template.adoc index 2828831303..700fef2c62 100644 --- a/downstream/modules/platform/proc-controller-edit-job-template.adoc +++ b/downstream/modules/platform/proc-controller-edit-job-template.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-edit-job-templates"] = Editing a job template diff --git a/downstream/modules/platform/proc-controller-edit-nodes.adoc b/downstream/modules/platform/proc-controller-edit-nodes.adoc index 3e7adf661e..fd8ee8332b 100644 --- a/downstream/modules/platform/proc-controller-edit-nodes.adoc +++ b/downstream/modules/platform/proc-controller-edit-nodes.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-edit-nodes"] = Editing a node @@ -5,27 +7,25 @@ .Procedure * Edit a node by using one of these methods: -** If you want to edit a node, click on the node you want to edit. -The pane displays the current selections. -Make your changes and click btn:[Select] to apply them to the graphical view. -** To edit the edge type for an existing link, (*success*, *failure*, *always*), click the link. -The pane displays the current selection. -Make your changes and click btn:[Save] to apply them to the graphical view. -** Click the link (image:link-icon.png[Link icon,15,15]) icon that appears on each node, to add a new link from one node to another. -Doing this highlights the nodes that are possible to link to. -These options are indicated by the dotted lines. -Invalid options are indicated by disabled boxes (nodes) that would otherwise produce an invalid link. -The following example shows the *Demo Project* as a possible option for the *e2e-ec20de52-project* to link to, indicated by the arrows: -+ -image::ug-wf-node-link-scenario.png[Node link scenario] -+ - -** To remove a link, click the link and click btn:[UNLINK]. +** If you want to edit a node, click the icon of the node. +The pane displays the current selections. Click btn:[Edit] to change them. +Make your changes and click btn:[Finish] to apply them to the graphical view. +** To edit the edge type for an existing link (*Run on success*, *Run on fail*, *Run always*), click (image:options_menu.png[Plus icon,15,15]) on the existing status. +//** Click the link (image:link-icon.png[Link icon,15,15]) icon that appears on each node, to add a new link from one node to another. +//Doing this highlights the nodes that are possible to link to. +//These options are indicated by the dotted lines. +//Invalid options are indicated by disabled boxes (nodes) that would otherwise produce an invalid link. +//The following example shows the *Demo Project* as a possible option for the *e2e-ec20de52-project* to link to, indicated by the arrows: //+ //image::ug-wf-node-link-scenario.png[Node link scenario] //+ + +** To remove a link, click (image:options_menu.png[Plus icon,15,15]) for the link and click btn:[Remove link]. This option only appears in the pane if the target or child node has more than one parent. All nodes must be linked to at least one other node at all times, so you must create a new link before removing an old one. * Edit the view of the workflow diagram by using one of these methods: -** Click the settings icon to zoom, pan, or reposition the view. 
+** Click the examine icon (image:examine.png[Examine icon,15,15]) to zoom in, the reduce icon (image:reduce.png[Reduce icon,15,15]) to zoom out, the expand icon (image:expand.png[Expand icon,15,15]) to fit to screen, or the reset icon (image:reset.png[Reset icon,15,15]) to reposition the view. ** Drag the workflow diagram to reposition it on the screen or use the scroll on your mouse to zoom. diff --git a/downstream/modules/platform/proc-controller-edit-project.adoc b/downstream/modules/platform/proc-controller-edit-project.adoc index ba5118334f..059bf169fd 100644 --- a/downstream/modules/platform/proc-controller-edit-project.adoc +++ b/downstream/modules/platform/proc-controller-edit-project.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-edit-project"] = Editing a project diff --git a/downstream/modules/platform/proc-controller-ee-mount-execution-node.adoc b/downstream/modules/platform/proc-controller-ee-mount-execution-node.adoc index b7e74cb62e..daaedfc84c 100644 --- a/downstream/modules/platform/proc-controller-ee-mount-execution-node.adoc +++ b/downstream/modules/platform/proc-controller-ee-mount-execution-node.adoc @@ -1,10 +1,15 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-ee-mount-execution-node"] = Mounting the directory in the execution node to the {ExecEnvShort} container -With {PlatformNameShort} 2.1.2, only `O` and `z` options were available. -Since {PlatformNameShort} 2.2, further options such as `rw` are available. -This is useful when using NFS storage. +This procedure describes how to configure paths exposed to isolated jobs, allowing volumes to be mounted from execution or hybrid nodes to job containers. + +// Deleted since no longer supported +//With {PlatformNameShort} 2.1.2, only `O` and `z` options were available. +//Since {PlatformNameShort} 2.2, further options such as `rw` are available. +//This is useful when using NFS storage. .Procedure diff --git a/downstream/modules/platform/proc-controller-enable-provision-callbacks.adoc b/downstream/modules/platform/proc-controller-enable-provision-callbacks.adoc index 28f2f08f9a..33f757bc07 100644 --- a/downstream/modules/platform/proc-controller-enable-provision-callbacks.adoc +++ b/downstream/modules/platform/proc-controller-enable-provision-callbacks.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-enable-provision-callbacks"] = Enabling Provisioning Callbacks @@ -19,73 +21,3 @@ Give a custom value for the *Host config key*. The host key can be reused across many hosts to apply this job template against multiple hosts. If you want to control what hosts are able to request configuration, you can change the key at any time. -To callback manually using REST: - -.Procedure - -. Examine the callback URL in the UI, in the form: -\https:///api/v2/job_templates/7/callback/ -* The "7" in the sample URL is the job template ID in {ControllerName}. -. Ensure that the request from the host is a POST. -The following is an example using `curl` (all on a single line): -+ ----- -curl -k -f -i -H 'Content-Type:application/json' -XPOST -d '{"host_config_key": "redhat"}' \ - https:///api/v2/job_templates/7/callback/ ----- -+ -. Ensure that the requesting host is defined in your inventory for the callback to succeed. - -.Troubleshooting - -If {ControllerName} fails to locate the host either by name or IP address in one of your defined inventories, the request is denied. 
-When running a job template in this way, ensure that the host initiating the playbook run against itself is in the inventory. -If the host is missing from the inventory, the job template fails with a *No Hosts Matched* type error message. - -If your host is not in the inventory and *Update on Launch* is checked for the inventory group, {ControllerName} attempts to update cloud based inventory sources before running the callback. - -.Verification - -Successful requests result in an entry on the *Jobs* tab, where you can view the results and history. -You can access the callback by using REST, but the suggested method of using the callback is to use one of the example scripts that includes {ControllerName}: - -* `/usr/share/awx/request_tower_configuration.sh` (Linux/UNIX) -* `/usr/share/awx/request_tower_configuration.ps1` (Windows) - -Their usage is described in the source code of the file by passing the `-h` flag, as the following shows: ----- -./request_tower_configuration.sh -h -Usage: ./request_tower_configuration.sh - - -Request server configuration from Ansible Tower. - - -OPTIONS: - -h Show this message - -s Controller server (e.g. https://ac.example.com) (required) - -k Allow insecure SSL connections and transfers - -c Host config key (required) - -t Job template ID (required) - -e Extra variables ----- - -This script can retry commands and is therefore a more robust way to use callbacks than a simple `curl` request. -The script retries once per minute for up to ten minutes. - -[NOTE] -==== -This is an example script. -Edit this script if you need more dynamic behavior when detecting failure scenarios, as any non-200 error code may not be a transient error requiring retry. -==== - -You can use callbacks with dynamic inventory in {ControllerName}. -For example, when pulling cloud inventory from one of the supported cloud providers. -In these cases, along with setting *Update On Launch*, ensure that you configure an inventory cache timeout for the inventory source, to avoid hammering of your cloud's API endpoints. -Since the `request_tower_configuration.sh` script polls once per minute for up to ten minutes, a suggested cache invalidation time for inventory (configured on the inventory source itself) would be one or two minutes. - -Running the `request_tower_configuration.sh` script from a cron job is not recommended, however, a suggested cron interval is every 30 minutes. -Repeated configuration can be handled by scheduling {ControllerName} so that the primary use of callbacks by most users is to enable a base image that is bootstrapped into the latest configuration when coming online. -Running at first boot is best practice. -First boot scripts are init scripts that typically self-delete, so you set up an init script that calls a copy of the `request_tower_configuration.sh` script and make that into an auto scaling image. - diff --git a/downstream/modules/platform/proc-controller-find-subscription.adoc b/downstream/modules/platform/proc-controller-find-subscription.adoc new file mode 100644 index 0000000000..ca518edfb4 --- /dev/null +++ b/downstream/modules/platform/proc-controller-find-subscription.adoc @@ -0,0 +1,41 @@ +:_mod-docs-content-type: PROCEDURE + +[id="controller-find-subscription"] + += Finding your subscription with service account credentials + +When you log in to {PlatformNameShort} for the first time, you must add your subscription information. 
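+
+This procedure uses service account credentials; see the prerequisites below. As an aside (a sketch, not part of the procedure), you can check a client ID and client secret outside the platform by requesting a token from Red Hat SSO, using placeholder values here:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+# Hypothetical credential check: valid service account credentials
+# return a JSON document containing an access token.
+curl -s https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token \
+  -d grant_type=client_credentials \
+  -d client_id=<client-id> \
+  -d client_secret=<client-secret>
+----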
+ +If you have already added your subscription, you can update your subscription details in the platform by going to {MenuSetSubscription} → btn:[Edit subscription]. + +.Prerequisites + +* You are an organization administrator. +* You have link:https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/creating_and_managing_service_accounts/proc-ciam-svc-acct-overview-creating-service-acct#proc-ciam-svc-acct-create-creating-service-acct[created a service account] and saved the client ID and client secret. + +[NOTE] +==== +If you do not have administrative access, you can enter your Red Hat username and password in the Client ID and Client secret fields, respectively, to locate and add your subscription to your {PlatformNameShort} instance. +==== + +.Procedure + +. Enter your service account credentials to find the subscription associated with your profile: +.. To find your subscription, click the tab labeled *Service Account / Red Hat Satellite*. +.. In the *Client ID / Satellite username* field, enter the client ID you received when you created your service account. +.. In the *Client secret / Satellite password* field, enter the client secret you received when you created your service account. +Your subscription appears in the list menu labeled *Subscription*. +Select your subscription. + +. After you have added your subscription, click btn:[Next]. +. Check the box indicating that you agree to the *End User License Agreement*. +. Review your information and click btn:[Finish]. + +[NOTE] +==== +If you enter your client ID and client secret but cannot locate your subscription, you might not have the correct permissions set on your service account. +For more information and troubleshooting guidance for service accounts, see link:https://access.redhat.com/articles/7112649[Configure Ansible Automation Platform to authenticate through service account credentials]. +==== diff --git a/downstream/modules/platform/proc-controller-getting-started-with-job-templates.adoc b/downstream/modules/platform/proc-controller-getting-started-with-job-templates.adoc index c317777688..7506943519 100644 --- a/downstream/modules/platform/proc-controller-getting-started-with-job-templates.adoc +++ b/downstream/modules/platform/proc-controller-getting-started-with-job-templates.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-getting-started-with-job-templates"] = Getting started with job templates @@ -9,4 +11,4 @@ As part of the initial setup, a *Demo Job Template* is created for you. . To review existing templates, select {MenuAETemplates} from the navigation panel. . Click btn:[Demo Job Template] to view its details. -image::controller-job-template-demo-details.png[Job templates] +//image::controller-job-template-demo-details.png[Job templates] diff --git a/downstream/modules/platform/proc-controller-github-app-token.adoc b/downstream/modules/platform/proc-controller-github-app-token.adoc new file mode 100644 index 0000000000..11003f8cf8 --- /dev/null +++ b/downstream/modules/platform/proc-controller-github-app-token.adoc @@ -0,0 +1,73 @@ +:_mod-docs-content-type: PROCEDURE + +[id="controller-github-app-token"] + += Configuring a `GitHub App Installation Access Token Lookup` + +With this plugin, you can use a private GitHub App RSA key as a credential input source to pull access tokens from GitHub App installations. +{GatewayStart} uses existing GitHub authorization from organizations' GitHub repositories. 
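+
+As background (a sketch, not a product interface), the secret that this lookup retrieves is a standard GitHub installation access token. The same flow can be exercised manually with the documented GitHub REST API; the JWT, installation ID, organization, and repository values are placeholders:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+# Hypothetical manual equivalent of the lookup: exchange the App's signed JWT
+# for a short-lived installation access token, then use it as a password.
+curl -s -X POST \
+  -H "Authorization: Bearer <app-jwt>" \
+  -H "Accept: application/vnd.github+json" \
+  https://api.github.com/app/installations/<installation-id>/access_tokens
+
+git clone https://x-access-token:<installation-token>@github.com/<org>/<repo>.git
+----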
+ +For more information, see link:https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app/generating-an-installation-access-token-for-a-github-app[Generating an installation access token for a GitHub App]. + +.Procedure + +. Create a lookup credential that stores your secrets. +For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/using_automation_execution/controller-credentials#controller-create-credential[Creating new credentials]. + +. Select *GitHub App Installation Access Token Lookup* for *Credential type*, and enter the following attributes to properly configure your lookup: + +** *GitHub App ID*: Enter the App ID provided by your instance of GitHub. This ID is used to authenticate. +** *GitHub App Installation ID*: Enter the installation ID of the application in your target organization, where the access token is scoped. +You must set it up to have access to your target repository. +** *RSA Private Key*: Enter the private key that your GitHub instance generated. +You can get it from the GitHub App maintainer within GitHub. +For more information, see link:https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app/managing-private-keys-for-github-apps[Managing private keys for GitHub Apps]. + +. Click btn:[Create credential] to confirm and save the credential. ++ +The following is an example of a configured *GitHub App Installation Access Token Lookup* credential: ++ +image:credentials-create-github-app-lookup-credential.png[GitHub App token lookup credential] ++ +. Create a target credential that searches for the lookup credential. +To use your lookup in a private repository, select *Source Control* as your *Credential type*. +Enter the following attributes to properly configure your target credential: + +** *Username*: Enter the username `x-access-token`. +** *Password*: Click the image:leftkey.png[Link,15,15] icon for managing external credentials in the input field. +You are prompted to set the input source to use to retrieve your secret information. +This is the lookup credential that you have already created. ++ +image:credentials-github-app-target-secret-info.png[Target credential secret info] ++ +. Enter an optional description for the metadata requested and click btn:[Finish]. + +. Click btn:[Create credential] to confirm and save the credential. + +. Verify that both your lookup credential and your target credential are now available on the *Credentials* list view. +To use the target credential in a project, create a project and enter the following information: + +** *Name*: Enter the name for your project. +** *Organization*: Select the name of the organization from the drop-down menu. +** *Execution environment*: Optionally select an execution environment, if applicable. +** *Source control type*: If you are syncing with a private repository, select *Git* for your source control. ++ +The *Type Details* view opens for additional input. +Enter the following information: + +** *Source control URL*: Enter the URL of the private repository you want to access. +The other related fields pertaining to *branch/tag/commit* and *refspec* are not relevant for use with a lookup credential. +** *Source control credential*: Select the target credential that you have already created. ++ +The following is an example of a configured target credential in a project: ++ +image:project-create-git-github-app.png[GitHub App project] ++ +. 
Click btn:[Create project] and the project sync automatically starts. +The project *Details* tab displays the progress of the job: ++ +image:project-sync-github-app.png[Project sync GitHub App] + +.Troubleshooting + +If your project sync fails, you might have to manually re-enter `https://api.github.com` in the *GitHub API endpoint URL* field from Step 2 and re-run your project sync. diff --git a/downstream/modules/platform/proc-controller-github-enterprise-org-settings.adoc b/downstream/modules/platform/proc-controller-github-enterprise-org-settings.adoc index 4c42b48c6f..430343a00f 100644 --- a/downstream/modules/platform/proc-controller-github-enterprise-org-settings.adoc +++ b/downstream/modules/platform/proc-controller-github-enterprise-org-settings.adoc @@ -1,39 +1,42 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-github-enterprise-org-settings"] -= GitHub Enterprise Organization settings += Configuring GitHub enterprise organization authentication -To set up social authentication for a GitHub Enterprise Organization, you must obtain a GitHub Enterprise Organization URL, an Organization API URL, an Organization OAuth2 key and secret for a web application. +To set up social authentication for a GitHub enterprise organization, you must obtain a GitHub enterprise organization URL, an Organization API URL, and an Organization OAuth2 key and secret for a web application. -To obtain the URLs, refer to the GitHub documentation on link:https://docs.github.com/en/enterprise-server@3.1/rest/reference/enterprise-admin[GitHub Enterprise administration]. +To obtain the URLs, refer to the link:https://docs.github.com/en/enterprise-server@3.1/rest/reference/enterprise-admin[GitHub Enterprise administration documentation]. -To obtain the key and secret, you must first register your enterprise organization-owned application at `https://github.com/organizations//settings/applications` +The OAuth2 key (Client ID) and secret (Client Secret) are used to supply the required fields in the UI. To register the application, you must supply it with your webpage URL, which is the *Callback URL* shown in the Authenticator details for your authenticator configuration. See xref:gw-display-auth-details[Displaying authenticator details] for instructions on accessing this information. -To register the application, you must supply it with your Authorization callback URL, which is the *Callback URL* shown in the *Details* page. +Each key and secret must belong to a unique application and cannot be shared or reused between different authentication backends. -Because it is hosted on site and not `github.com`, you must specify which authentication adapter it communicates with. +.Procedure +. From the navigation panel, select {MenuAMAuthentication}. +. Click btn:[Create authentication]. +. Enter a *Name* for this authentication configuration. +. Select *GitHub enterprise organization* from the *Authentication type* list. The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type. -Each key and secret must belong to a unique application and cannot be shared or reused between different authentication backends. -The OAuth2 key (Client ID) and secret (Client Secret) are used to supply the required fields in the UI. +include::snippets/snip-gw-authentication-auto-migrate.adoc[] -.Procedure -. From the navigation panel, select {MenuAEAdminSettings}. -. 
On the *Settings* page, select *GitHub settings* from the list of *Authentication* options. -. Click the *GitHub Enterprise Organization* tab. +. When the application is registered, GitHub displays the *Client ID* and *Client Secret*: ++ +.. Copy and paste the GitHub Client ID into the GitHub OAuth2 Key field. +.. Copy and paste the GitHub Client Secret into the GitHub OAuth2 Secret field. ++ +. In the *Base URL* field, enter the hostname of the GitHub Enterprise instance, for example, `https://github.example.com`. +. In the *Github OAuth2 Enterprise API URL* field, enter the API URL of the GitHub Enterprise instance, for example, `https://github.example.com/api/v3`. +. Enter the name of your GitHub enterprise organization, as used in your organization’s URL, for example, `https://github.com//` in the *GitHub OAuth2 Enterprise Org Name* field. + -The *GitHub Enterprise Organization OAuth2 Callback URL* field is already pre-populated and non-editable. -When the application is registered, GitHub displays the Client ID and Client Secret. - -. Click btn:[Edit] to configure GitHub Enterprise Organization settings. -. In the *GitHub Enterprise Organization URL* field, enter the hostname of the GitHub Enterprise Organization instance, for example, https://github.orgexample.com. -. In the *GitHub Enterprise Organization API URL* field, enter the API URL of the GitHub Enterprise Organization instance, for example, https://github.orgexample.com/api/v3. -. Copy and paste GitHub's Client ID into the *GitHub Enterprise Organization OAuth2 Key* field. -. Copy and paste GitHub's Client Secret into the *GitHub Enterprise Organization OAuth2 Secret* field. -. Enter the name of your GitHub Enterprise organization, as used in your organization's URL, for example, https://github.com// in the *GitHub Enterprise Organization Name* field. -. For more information on completing the mapping fields, see xref:ref-controller-organization-mapping[Organization mapping] and xref:ref-controller-team-mapping[Team mapping]. -. Click btn:[Save]. - -.Verification -To verify that the authentication was configured correctly, logout of {ControllerName}. -The login screen displays the GitHub Enterprise Organization logo to enable logging in with those credentials. - -image:configure-controller-auth-github-ent-org-logo.png[image] +include::snippets/snip-gw-authentication-additional-auth-fields.adoc[] ++ +include::snippets/snip-gw-authentication-common-checkboxes.adoc[] ++ +. Click btn:[Create Authentication Method]. + +include::snippets/snip-gw-authentication-verification.adoc[] + +[role="_additional-resources"] +.Next steps +include::snippets/snip-gw-authentication-next-steps.adoc[] \ No newline at end of file diff --git a/downstream/modules/platform/proc-controller-github-enterprise-settings.adoc b/downstream/modules/platform/proc-controller-github-enterprise-settings.adoc index a78b59e8f8..b9f3667bc2 100644 --- a/downstream/modules/platform/proc-controller-github-enterprise-settings.adoc +++ b/downstream/modules/platform/proc-controller-github-enterprise-settings.adoc @@ -1,37 +1,41 @@ -[id="proc-controller-github-enterprise-settings"] +:_mod-docs-content-type: PROCEDURE -= GitHub Enterprise settings +[id="proc-controller-github-enterprise-settings"] -To set up social authentication for a GitHub Enterprise, you must obtain a GitHub Enterprise URL, an API URL, OAuth2 key and secret for a web application. 
+= Configuring GitHub enterprise authentication

-To obtain the URLs, refer to the link:https://docs.github.com/en/enterprise-server@3.1/rest/reference/enterprise-admin[GitHub Enterprise administration] documentation.
+To set up social authentication for a GitHub enterprise, you must obtain a GitHub Enterprise URL, an API URL, and an OAuth2 key and secret for a web application.

-To obtain the key and secret, you must first register your enterprise-owned application at \https://github.com/organizations//settings/applications.
+To obtain the URLs, refer to the link:https://docs.github.com/en/enterprise-server@3.1/rest/reference/enterprise-admin[GitHub Enterprise administration documentation].

-To register the application, you must supply it with your Authorization callback URL, which is the *Callback URL* shown in the *Details* page.
-Because it is hosted on site and not `github.com`, you must specify which authentication adapter it communicates with.
+The OAuth2 key (Client ID) and secret (Client Secret) are used to supply the required fields in the UI. To register the application, you must supply it with your webpage URL, which is the *Callback URL* shown in the Authenticator details for your authenticator configuration. See xref:gw-display-auth-details[Displaying authenticator details] for instructions on accessing this information.

-Each key and secret must belong to a unique application and cannot be shared or reused between different authentication backends.
-The OAuth2 key (Client ID) and secret (Client Secret) are used to supply the required fields in the UI.
+Each key and secret must belong to a unique application and cannot be shared or reused between different authentication backends.

.Procedure
-. From the navigation panel, select {MenuAEAdminSettings}.
-. On the *Settings* page, select *GitHub settings* from the list of *Authentication* options.
-. Click the *GitHub Enterprise* tab.
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Enter a *Name* for this authentication configuration.
+. Select *GitHub enterprise* from the *Authentication type* list. The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type.
+
+include::snippets/snip-gw-authentication-auto-migrate.adoc[]
+
+. When the application is registered, GitHub displays the *Client ID* and *Client Secret*:
++
+.. Copy and paste the GitHub Client ID into the GitHub OAuth2 Key field.
+.. Copy and paste the GitHub Client Secret into the GitHub OAuth2 Secret field.
+
-The *GitHub Enterprise OAuth2 Callback URL* field is already pre-populated and non-editable.
-When the application is registered, GitHub displays the Client ID and Client Secret.
-
-. Click btn:[Edit] to configure GitHub Enterprise settings.
-. In the *GitHub Enterprise URL* field, enter the hostname of the GitHub Enterprise instance, for example, https://github.example.com.
-. In the *GitHub Enterprise API URL* field, enter the API URL of the GitHub Enterprise instance, for example, https://github.example.com/api/v3.
-. Copy and paste GitHub's Client ID into the *GitHub Enterprise OAuth2 Key* field.
-. Copy and paste GitHub's Client Secret into the *GitHub Enterprise OAuth2 Secret* field.
-. 
For more information on completing the mapping fields, see xref:ref-controller-organization-mapping[Organization mapping] and xref:ref-controller-team-mapping[Team mapping].
-. Click btn:[Save].
-
-.Verification
-To verify that the authentication was configured correctly, logout of {ControllerName}.
-The login screen displays the GitHub Enterprise logo to enable logging in with those credentials.
-
-image:configure-controller-auth-github-ent-logo.png[image]
+. In the *Base URL* field, enter the hostname of the GitHub Enterprise instance, for example, `https://github.example.com`.
+. In the *Github OAuth2 Enterprise API URL* field, enter the API URL of the GitHub Enterprise instance, for example, `https://github.example.com/api/v3`.
++
+include::snippets/snip-gw-authentication-additional-auth-fields.adoc[]
++
+include::snippets/snip-gw-authentication-common-checkboxes.adoc[]
++
+. Click btn:[Create Authentication Method].
+
+include::snippets/snip-gw-authentication-verification.adoc[]
+
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-authentication-next-steps.adoc[]
diff --git a/downstream/modules/platform/proc-controller-github-enterprise-team-settings.adoc b/downstream/modules/platform/proc-controller-github-enterprise-team-settings.adoc
index 01af48b9b1..1a19a00c8c 100644
--- a/downstream/modules/platform/proc-controller-github-enterprise-team-settings.adoc
+++ b/downstream/modules/platform/proc-controller-github-enterprise-team-settings.adoc
@@ -1,41 +1,46 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-github-enterprise-team-settings"]

-= GitHub Enterprise Team settings
+= Configuring GitHub enterprise team authentication

-To set up social authentication for a GitHub Enterprise team, you must obtain a GitHub Enterprise Organization URL, an Organization API URL, an Organization OAuth2 key and secret for a web application.
+To set up social authentication for a GitHub enterprise team, you must obtain a GitHub Enterprise Organization URL, an Organization API URL, and an Organization OAuth2 key and secret for a web application.

-To obtain the URLs, refer to the GitHub documentation on link:https://docs.github.com/en/enterprise-server@3.1/rest/reference/enterprise-admin[GitHub Enterprise administration].
+To obtain the URLs, refer to the link:https://docs.github.com/en/enterprise-server@3.1/rest/reference/enterprise-admin[GitHub Enterprise administration documentation].

-To obtain the key and secret, you must first register your enterprise team-owned application at `https://github.com/organizations//settings/applications`.
+To obtain the key and secret, you must first register your enterprise organization-owned application at `https://github.com/organizations//settings/applications`.

-To register the application, you must supply it with your Authorization callback URL, which is the *Callback URL* shown in the *Details* page.
-Because it is hosted on site and not github.com, you must specify which authentication adapter it communicates with.
+The OAuth2 key (Client ID) and secret (Client Secret) are used to supply the required fields in the UI. To register the application, you must supply it with your webpage URL, which is the *Callback URL* shown in the Authenticator details for your authenticator configuration. See xref:gw-display-auth-details[Displaying authenticator details] for instructions on accessing this information.

Each key and secret must belong to a unique application and cannot be shared or reused between different authentication backends. 
The OAuth2 key (Client ID) and secret (Client Secret) are used to supply the required fields in the UI.

.Procedure
-. Find the numeric team ID using the link:https://fabian-kostadinov.github.io/2015/01/16/how-to-find-a-github-team-id/[GitHub API].
-The Team ID will be used to supply a required field in the UI.
-. From the navigation panel, select {MenuAEAdminSettings}.
-. On the *Settings* page, select *GitHub settings* from the list of *Authentication* options.
-. Click the *GitHub Enterprise Team* tab.
+
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Enter a *Name* for this authentication configuration.
+. Select *GitHub enterprise team* from the *Authentication type* list. The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type.
+
+include::snippets/snip-gw-authentication-auto-migrate.adoc[]
+
+. When the application is registered, GitHub displays the *Client ID* and *Client Secret*:
++
+.. Copy and paste the GitHub Client ID into the GitHub OAuth2 Key field.
+.. Copy and paste the GitHub Client Secret into the GitHub OAuth2 Secret field.
+
-The *GitHub Enterprise Team OAuth2 Callback URL* field is already pre-populated and non-editable.
-When the application is registered, GitHub displays the Client ID and Client Secret.
-
-. Click btn:[Edit] to configure GitHub Enterprise Team settings.
-. In the *GitHub Enterprise Team URL* field, enter the hostname of the GitHub Enterprise team instance, for example, https://github.teamexample.com.
-. In the *GitHub Enterprise Team API URL* field, enter the API URL of the GitHub Enterprise team instance, for example,
-https://github.teamexample.com/api/v3.
-. Copy and paste GitHub's Client ID into the *GitHub Enterprise Team OAuth2 Key* field.
-. Copy and paste GitHub's Client Secret into the *GitHub Enterprise Team OAuth2 Secret* field.
-. Copy and paste GitHub's team ID in the *GitHub Enterprise Team ID* field.
-. For more information on completing the mapping fields, see xref:ref-controller-organization-mapping[Organization mapping] and xref:ref-controller-team-mapping[Team mapping].
-. Click btn:[Save].
-
-.Verification
-To verify that the authentication was configured correctly, logout of {ControllerName}.
-The login screen displays the GitHub Enterprise Teams logo to enable logging in with those credentials.
-
-image:configure-controller-auth-github-ent-teams-logo.png[image]
+. In the *Base URL* field, enter the hostname of the GitHub Enterprise instance, for example, `https://github.example.com`.
+. In the *Github OAuth2 Enterprise API URL* field, enter the API URL of the GitHub Enterprise instance, for example, `https://github.example.com/api/v3`.
+. Copy and paste GitHub's team ID into the *GitHub OAuth2 Team ID* field. For one way to find the numeric team ID, see the example request after this procedure.
++
+include::snippets/snip-gw-authentication-additional-auth-fields.adoc[]
++
+include::snippets/snip-gw-authentication-common-checkboxes.adoc[]
++
+. Click btn:[Create Authentication Method]. 
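+
+For example, you can list the numeric team IDs for your organization through the GitHub Enterprise REST API (an illustrative request only; replace the host, organization, and token placeholders with your own values):
+
+----
+curl -H "Authorization: Bearer <TOKEN>" \
+  https://github.example.com/api/v3/orgs/<ORG>/teams
+----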
+
+include::snippets/snip-gw-authentication-verification.adoc[]
+
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-authentication-next-steps.adoc[]
diff --git a/downstream/modules/platform/proc-controller-github-organization-settings.adoc b/downstream/modules/platform/proc-controller-github-organization-settings.adoc
new file mode 100644
index 0000000000..43cf5f21ee
--- /dev/null
+++ b/downstream/modules/platform/proc-controller-github-organization-settings.adoc
@@ -0,0 +1,42 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-controller-github-organization-settings"]
+
+= Configuring GitHub organization authentication
+
+When defining account authentication with either an organization or a team within an organization, use the specific organization and team settings to limit login to users in that organization or on a team within it.
+You can also choose to permit all users by specifying non-organization or non-team based settings.
+
+To set up social authentication for a GitHub organization, you must obtain an OAuth2 key and secret for a web application using the instructions provided in link:https://docs.github.com/en/apps/using-github-apps/installing-your-own-github-app[registering the new application with GitHub].
+
+The OAuth2 key (Client ID) and secret (Client Secret) are used to supply the required fields in the UI. To register the application, you must supply it with your webpage URL, which is the Callback URL shown in the Authenticator details for your authenticator configuration. See xref:gw-display-auth-details[Displaying authenticator details] for instructions on accessing this information.
+
+.Procedure
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Enter a *Name* for this authentication configuration.
+. Select *GitHub organization* from the *Authentication type* list. The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type.
+
+include::snippets/snip-gw-authentication-auto-migrate.adoc[]
+
+. When the application is registered, GitHub displays the *Client ID* and *Client Secret*:
++
+.. Copy and paste the GitHub Client ID into the GitHub OAuth2 Key field.
+.. Copy and paste the GitHub Client Secret into the GitHub OAuth2 Secret field.
++
+. Enter the name of your GitHub organization, as used in your organization’s URL, for example, `https://github.com//` in the *GitHub OAuth Organization Name* field.
++
+include::snippets/snip-gw-authentication-additional-auth-fields.adoc[]
++
+. Enter the authorization scope for users in the *GitHub OAuth2 Scope* field. The default is `read:org`. For an illustration of what this scope grants, see the example after this procedure.
++
+include::snippets/snip-gw-authentication-common-checkboxes.adoc[]
++
+. Click btn:[Create Authentication Method]. 
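+
+The default `read:org` scope lets the authenticator read your organization membership, which the organization check relies on. As an illustrative sketch (placeholder token; not a required step), you can inspect what the scope exposes with:
+
+----
+curl -H "Authorization: Bearer <TOKEN>" https://api.github.com/user/orgs
+----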
+ +include::snippets/snip-gw-authentication-verification.adoc[] + +[role="_additional-resources"] +.Next steps +include::snippets/snip-gw-authentication-next-steps.adoc[] diff --git a/downstream/modules/platform/proc-controller-github-organization-setttings.adoc b/downstream/modules/platform/proc-controller-github-organization-setttings.adoc deleted file mode 100644 index 3800c32e70..0000000000 --- a/downstream/modules/platform/proc-controller-github-organization-setttings.adoc +++ /dev/null @@ -1,37 +0,0 @@ -[id="proc-controller-github-organization-setttings"] - -= GitHub Organization settings - -When defining account authentication with either an organization or a team within an organization, you should use the specific organization and team settings. -Account authentication can be limited by an organization and by a team within an organization. - -You can also choose to permit all by specifying non-organization or non-team based settings. - -You can limit users who can login to the controller by limiting only those in an organization or on a team within an organization. - -To set up social authentication for a GitHub Organization, you must obtain an OAuth2 key and secret for a web application. To do this, you must first register your organization-owned application at \https://github.com/organizations//settings/applications. - -To register the application, you must supply it with your Authorization callback URL, which is the *Callback URL* shown in the *Details* page. -Each key and secret must belong to a unique application and cannot be shared or reused between different authentication backends. -The OAuth2 key (Client ID) and secret (Client Secret) are used to supply the required fields in the UI. - -.Procedure -. From the navigation panel, select {MenuAEAdminSettings}. -. On the *Settings* page, select *GitHub settings* from the list of *Authentication* options. -. Select the *GitHub Organization* tab. -+ -The *GitHub Organization OAuth2 Callback URL* field is already pre-populated and non-editable. -+ -When the application is registered, GitHub displays the Client ID and Client Secret. - -. Click btn:[Edit] and copy and paste GitHub's Client ID into the *GitHub Organization OAuth2 Key* field. -. Copy and paste GitHub's Client Secret into the *GitHub Organization OAuth2 Secret* field. -. Enter the name of your GitHub organization, as used in your organization's URL, for example, \https://github.com// in the *GitHub Organization Name* field. -. For more information on completing the mapping fields, see xref:ref-controller-organization-mapping[Organization mapping] and xref:ref-controller-team-mapping[Team mapping]. -. Click btn:[Save]. - -.Verification -To verify that the authentication was configured correctly, logout of {ControllerName}. -The login screen displays the GitHub Organization logo to enable logging in with those credentials. - -image:configure-controller-auth-github-orgs-logo.png[image] diff --git a/downstream/modules/platform/proc-controller-github-settings.adoc b/downstream/modules/platform/proc-controller-github-settings.adoc index 06682efff7..ca396d4c5f 100644 --- a/downstream/modules/platform/proc-controller-github-settings.adoc +++ b/downstream/modules/platform/proc-controller-github-settings.adoc @@ -1,28 +1,35 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-github-settings"] -= GitHub settings += Configuring GitHub authentication -To set up social authentication for GitHub, you must obtain an OAuth2 key and secret for a web application. 
-To do this, you must first register the new application with GitHub at https://github.com/settings/developers.
+You can connect GitHub identities to {PlatformNameShort} using OAuth. To set up GitHub authentication, you need to obtain an OAuth2 key and secret by registering your organization-owned application with GitHub, using the instructions provided in link:https://docs.github.com/en/apps/using-github-apps/installing-your-own-github-app[registering the new application with GitHub].

-To register the application, you must supply it with your homepage URL, which is the *Callback URL* shown in the *Details* tab of the *GitHub default settings* page.
-The OAuth2 key (Client ID) and secret (Client Secret) are used to supply the required fields in the UI.
+The OAuth2 key (Client ID) and secret (Client Secret) are used to supply the required fields in the UI. To register the application, you must supply it with your webpage URL, which is the Callback URL shown in the Authenticator details for your authenticator configuration. See xref:gw-display-auth-details[Displaying authenticator details] for instructions on accessing this information.

.Procedure
-. From the navigation panel, select {MenuAEAdminSettings}.
-. On the *Settings* page, select *GitHub settings* from the list of *Authentication* options.
-. Select the *GitHub Default* tab if not already selected.
-+
-The *GitHub OAuth2 Callback URL* field is already pre-populated and non-editable.
-When the application is registered, GitHub displays the Client ID and Client Secret.
-. Click btn:[Edit] and copy and paste the GitHub Client ID into the *GitHub OAuth2 Key* field.
-. Copy and paste the GitHub Client Secret into the *GitHub OAuth2 Secret* field.
-. For more information on completing the mapping fields, see xref:ref-controller-organization-mapping[Organization mapping] and xref:ref-controller-team-mapping[Team mapping].
-. Click btn:[Save].
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Enter a *Name* for this authentication configuration.
+. Select *GitHub* from the *Authentication type* list. The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type.
+
+include::snippets/snip-gw-authentication-auto-migrate.adoc[]
+
+. When the application is registered, GitHub displays the *Client ID* and *Client Secret*:
++
+.. Copy and paste the GitHub Client ID into the GitHub OAuth2 Key field.
+.. Copy and paste the GitHub Client Secret into the GitHub OAuth2 Secret field.
++
+include::snippets/snip-gw-authentication-additional-auth-fields.adoc[]
++
+include::snippets/snip-gw-authentication-common-checkboxes.adoc[]
++
+. Click btn:[Create Authentication Method].

-.Verification
-To verify that the authentication was configured correctly, logout of {ControllerName}.
-The login screen now displays the GitHub logo to enable logging in with those credentials. 
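+
+The key and secret are used only to drive the standard GitHub OAuth2 web flow. As an illustration of that flow (placeholder values; this is not a step you need to run), the code-for-token exchange performed on your behalf looks like the following:
+
+----
+curl -X POST https://github.com/login/oauth/access_token \
+  -H "Accept: application/json" \
+  -d "client_id=<CLIENT_ID>" \
+  -d "client_secret=<CLIENT_SECRET>" \
+  -d "code=<AUTHORIZATION_CODE>"
+----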
+include::snippets/snip-gw-authentication-verification.adoc[]

-image:configure-controller-auth-github-logo.png[image]
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-authentication-next-steps.adoc[]
diff --git a/downstream/modules/platform/proc-controller-github-team-settings.adoc b/downstream/modules/platform/proc-controller-github-team-settings.adoc
index ce1dc5f830..7f54c419ac 100644
--- a/downstream/modules/platform/proc-controller-github-team-settings.adoc
+++ b/downstream/modules/platform/proc-controller-github-team-settings.adoc
@@ -1,32 +1,40 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-github-team-settings"]

-= GitHub Team settings
+= Configuring GitHub team authentication
+
+To set up social authentication for a GitHub team, you must obtain an OAuth2 key and secret for a web application using the instructions provided in link:https://docs.github.com/en/apps/using-github-apps/installing-your-own-github-app[registering the new application with GitHub].
+
+The OAuth2 key (Client ID) and secret (Client Secret) are used to supply the required fields in the UI. To register the application, you must supply it with your webpage URL, which is the *Callback URL* shown in the Authenticator details for your authenticator configuration. See xref:gw-display-auth-details[Displaying authenticator details] for instructions on accessing this information.

-To set up social authentication for a GitHub Team, you must obtain an OAuth2 key and secret for a web application.
-To do this, you must first register your team-owned application at `https://github.com/organizations//settings/applications`.
-To register the application, you must supply it with your Authorization callback URL, which is the *Callback URL* shown in the *Details* page.
-Each key and secret must belong to a unique application and cannot be shared or reused between different authentication
-backends.
-The OAuth2 key (Client ID) and secret (Client Secret) are used to supply the required fields in the UI.
+Each key and secret must belong to a unique application and cannot be shared or reused between different authentication backends.

.Procedure
-. Find the numeric team ID using the link:https://fabian-kostadinov.github.io/2015/01/16/how-to-find-a-github-team-id/[GitHub API].
-The Team ID is used to supply a required field in the UI.
-. From the navigation panel, select {MenuAEAdminSettings}.
-. On the *Settings* page, select *GitHub settings* from the list of *Authentication* options.
-. Click the *GitHub Team* tab.
-+
-The *GitHub Team OAuth2 Callback URL* field is already pre-populated and non-editable.
-When the application is registered, GitHub displays the Client ID and Client Secret.
-. Click btn:[Edit] and copy and paste GitHub's Client ID into the *GitHub Team OAuth2 Key* field.
-. Copy and paste GitHub's Client Secret into the *GitHub Team OAuth2 Secret* field.
-. Copy and paste GitHub's team ID in the *GitHub Team ID* field.
-. For more information on completing the mapping fields, see xref:ref-controller-organization-mapping[Organization mapping] and xref:ref-controller-team-mapping[Team mapping].
-. Click btn:[Save]
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Enter a *Name* for this authentication configuration.
+. Select *GitHub team* from the *Authentication type* list. 
The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type.
+
+include::snippets/snip-gw-authentication-auto-migrate.adoc[]
+
+. When the application is registered, GitHub displays the *Client ID* and *Client Secret*:
++
+.. Copy and paste the GitHub Client ID into the GitHub OAuth2 Key field.
+.. Copy and paste the GitHub Client Secret into the GitHub OAuth2 Secret field.
++
+. Copy and paste GitHub’s team ID into the *GitHub OAuth2 Team ID* field.
+. Enter the authorization scope for users in the *GitHub OAuth2 Scope* field. The default is `read:org`.
++
+include::snippets/snip-gw-authentication-additional-auth-fields.adoc[]
++
+include::snippets/snip-gw-authentication-common-checkboxes.adoc[]
++
+. Click btn:[Create Authentication Method].

-.Verification
-To verify that the authentication was configured correctly, logout of {ControllerName}.
-The login screen displays the GitHub Team logo to enable logging in with those credentials.
+include::snippets/snip-gw-authentication-verification.adoc[]

-image:configure-controller-auth-github-teams-logo.png[image]
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-authentication-next-steps.adoc[]
diff --git a/downstream/modules/platform/proc-controller-google-oauth2-settings.adoc b/downstream/modules/platform/proc-controller-google-oauth2-settings.adoc
index b6af819555..0ec1d756b0 100644
--- a/downstream/modules/platform/proc-controller-google-oauth2-settings.adoc
+++ b/downstream/modules/platform/proc-controller-google-oauth2-settings.adoc
@@ -1,36 +1,48 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-google-oauth2-settings"]

-= Google OAuth2 settings
+= Configuring Google OAuth2 authentication

-To set up social authentication for Google, you must obtain an OAuth2 key and secret for a web application.
-To do this, you must first create a project and set it up with Google.
+To set up social authentication for Google, you must obtain an OAuth2 key and secret for a web application. To do this, you must first create a project and set it up with Google. For instructions, see link:https://support.google.com/googleapi/answer/6158849[Setting up OAuth 2.0] in the Google API Console Help documentation.

-If you have already completed the setup process, you can access those credentials by going to the Credentials section of the
-link:https://console.developers.google.com/[Google API Manager Console].
-The OAuth2 key (Client ID) and secret (Client secret) are used to supply the required fields in the UI.
+If you have already completed the setup process, you can access those credentials by going to the Credentials section of the link:https://console.cloud.google.com/projectselector2/apis/dashboard?pli=1&supportedpurview=project[Google API Manager Console]. The OAuth2 key (Client ID) and secret (Client secret) are used to supply the required fields in the UI.

.Procedure
-. From the navigation panel, select {MenuAEAdminSettings}.
-. On the *Settings* page, select *Google OAuth 2 settings* from the list of *Authentication* options.
-+
-The *Google OAuth2 Callback URL* field is already pre-populated and non-editable.
-. The following fields are also pre-populated.
-If not, use the credentials Google supplied during the web application setup process, and look for the values with the same format as the ones shown in the example below:
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. 
Enter a *Name* for this authentication configuration.
+. Select *Google OAuth* from the *Authentication type* list. The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type.

-* Click *Edit* and copy and paste Google's Client ID into the *Google OAuth2 Key* field.
-* Copy and paste Google's Client secret into the *Google OAuth2 Secret* field.
-+
-image:configure-controller-auth-google.png[image]
+include::snippets/snip-gw-authentication-auto-migrate.adoc[]

-. To complete the remaining optional fields, refer to the tooltips in each of the fields for instructions and required format.
-. For more information on completing the mapping fields, see xref:ref-controller-organization-mapping[Organization mapping] and xref:ref-controller-team-mapping[Team mapping].
-. Click btn:[Save].
+. The *Google OAuth2 Key* and *Google OAuth2 Secret* fields are pre-populated.
++
+If not, use the credentials Google supplied during the web application setup process. Save these settings for use in the following steps.
++
+. Copy and paste Google’s Client ID into the *Google OAuth2 Key* field.
+. Copy and paste Google’s Client secret into the *Google OAuth2 Secret* field.
+. Optional: Enter information for the following fields using the tooltips provided for instructions and required format:
++
+* *Access Token URL*
+* *Access Token Method*
+* *Authorization URL*
+* *Revoke Token Method*
+* *Revoke Token URL*
+* *OIDC JWT Algorithm(s)*
+* *OIDC JWT*
++
+include::snippets/snip-gw-authentication-additional-auth-fields.adoc[]
++
+include::snippets/snip-gw-authentication-common-checkboxes.adoc[]
++
+. Click btn:[Create Authentication Method].

-.Verification
-To verify that the authentication was configured correctly, logout of {ControllerName}.
-The login screen displays the Google logo to indicate it as an alternate method of logging into {ControllerName}.
+include::snippets/snip-gw-authentication-verification.adoc[]

-image:configure-controller-auth-google-logo.png[image]
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-authentication-next-steps.adoc[]
diff --git a/downstream/modules/platform/proc-controller-ingress-options.adoc b/downstream/modules/platform/proc-controller-ingress-options.adoc
index ba83b5b6f8..c2a200759d 100644
--- a/downstream/modules/platform/proc-controller-ingress-options.adoc
+++ b/downstream/modules/platform/proc-controller-ingress-options.adoc
@@ -1,37 +1,52 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-ingress-options_{context}"]

-= Configuring the Ingress type for your {ControllerName} operator
+= Configuring the ingress type for your {ControllerName} operator

-The {PlatformName} operator installation form allows you to further configure your {ControllerName} operator Ingress under *Advanced configuration*.
+The {OperatorPlatformNameShort} installation form allows you to further configure your {ControllerName} operator ingress under *Advanced configuration*.

.Procedure
-. Click btn:[Advanced Configuration].
+. Log in to {OCP}.
+. Navigate to menu:Operators[Installed Operators].
+. Select your {OperatorPlatformNameShort} deployment.
+. Select the *Automation Controller* tab.
+. For new instances, click btn:[Create AutomationController].
+. For existing instances, you can edit the YAML view by clicking the {MoreActionsIcon} icon and then btn:[Edit AutomationController].
+. Click btn:[Advanced configuration].
. Under *Ingress type*, click the drop-down menu and select *Ingress*.
. 
Under *Ingress annotations*, enter any annotations to add to the ingress.
. Under *Ingress TLS secret*, click the drop-down menu and select a secret from the list.

-After you have configured your {ControllerName} operator, click btn:[Create] at the bottom of the form view. {OCP} will now create the pods. This may take a few minutes.
+.Verification

-You can view the progress by navigating to menu:Workloads[Pods] and locating the newly created instance.
+After you have configured your {ControllerName} operator, click btn:[Create] at the bottom of the form view. {OCP} creates the pods. This might take a few minutes.

-.Verification
+You can view the progress by navigating to menu:Workloads[Pods] and locating the newly created instance.

Verify that the following operator pods provided by the {PlatformNameShort} Operator installation from {ControllerName} are running:

-[cols="a,a,a"]
+[cols="a,a,a,a"]
|===
-| Operator manager controllers | {ControllerName} |{HubName}
+| Operator manager controllers | {ControllerNameStart} |{HubNameStart}
|{EDAName} (EDA)

-| The operator manager controllers for each of the 3 operators, include the following:
+| The operator manager controllers for each of the operators include the following:

* automation-controller-operator-controller-manager
* automation-hub-operator-controller-manager
* resource-operator-controller-manager

-| After deploying {ControllerName}, you will see the addition of these pods:
+* aap-gateway-operator-controller-manager
+* ansible-lightspeed-operator-controller-manager
+* eda-server-operator-controller-manager
+
+| After deploying {ControllerName}, you can see the addition of the following pods:

* controller
* controller-postgres

-| After deploying {HubName}, you will see the addition of these pods:
+* controller-web
+* controller-task
+
+| After deploying {HubName}, you can see the addition of the following pods:

* hub-api
* hub-content
@@ -39,6 +54,14 @@ Verify that the following operator pods provided by the {PlatformNameShort} Oper
* hub-redis
* hub-worker

+| After deploying EDA, you can see the addition of the following pods:
+
+* eda-activation-worker
+* eda-api
+* eda-default-worker
+* eda-event-stream
+* eda-scheduler
+
|===

[NOTE]
diff --git a/downstream/modules/platform/proc-controller-inv-source-aap.adoc b/downstream/modules/platform/proc-controller-inv-source-aap.adoc
index 29570a1240..18f38ef4d3 100644
--- a/downstream/modules/platform/proc-controller-inv-source-aap.adoc
+++ b/downstream/modules/platform/proc-controller-inv-source-aap.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-inv-source-aap"]

= {PlatformName}

@@ -7,11 +9,13 @@ Use the following procedure to configure an {ControllerName}-sourced inventory.
.Procedure
. From the navigation panel, select {MenuInfrastructureInventories}.
. Select the inventory name you want to add a source to and click the *Sources* tab.
-. Click btn:[Add source].
-. In the *Add new source* page, select *{PlatformName}* from the *Source* list.
-. The *Add new source* window expands with the required *Credential* field.
-Choose from an existing {PlatformName} Credential.
-For more information, see xref:controller-credentials[Credentials].
+. Click btn:[Create source].
+. In the *Create source* page, select *{PlatformName}* from the *Source* list.
+. The *Create source* window expands with additional fields.
+Enter the following details:
+
+* Optional: *Credential*: Choose from an existing {PlatformName} Credential. 
+For more information, see xref:controller-credentials[Managing user credentials].
. Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in xref:proc-controller-add-source[Adding a source].
. Use the *Source Variables* field to override variables used by the `controller` inventory plugin.
Enter variables by using either JSON or YAML syntax.
diff --git a/downstream/modules/platform/proc-controller-inv-source-gce.adoc b/downstream/modules/platform/proc-controller-inv-source-gce.adoc
index 0bdb877a69..ad6d5872da 100644
--- a/downstream/modules/platform/proc-controller-inv-source-gce.adoc
+++ b/downstream/modules/platform/proc-controller-inv-source-gce.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-inv-source-gce"]

= Google Compute Engine

@@ -7,11 +9,11 @@ Use the following procedure to configure a Google-sourced inventory:
.Procedure
. From the navigation panel, select {MenuInfrastructureInventories}.
. Select the inventory name you want to add a source to and click the *Sources* tab.
-. Click btn:[Add source]
+. Click btn:[Create source].
. In the *Create source* page, select *Google Compute Engine* from the *Source* list.
-. The *Add new source* window expands with the required *Credential* field.
+. The *Create source* window expands with the required *Credential* field.
Choose from an existing GCE Credential.
-For more information, see xref:controller-credentials[Credentials].
+For more information, see xref:controller-credentials[Managing user credentials].

//+
//image:inventories-create-source-GCE-example.png[Inventories- create source - GCE example]
diff --git a/downstream/modules/platform/proc-controller-inv-source-insights.adoc b/downstream/modules/platform/proc-controller-inv-source-insights.adoc
index b765ced2fa..8ffbf5dec0 100644
--- a/downstream/modules/platform/proc-controller-inv-source-insights.adoc
+++ b/downstream/modules/platform/proc-controller-inv-source-insights.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-inv-source-insights"]

= Red Hat Insights

@@ -7,11 +9,13 @@ Use the following procedure to configure a Red Hat Insights-sourced inventory.
.Procedure
. From the navigation panel, select {MenuInfrastructureInventories}.
. Select the inventory name you want to add a source to and click the *Sources* tab.
-. Click btn:[Add source].
-. In the *Add new source* page, select *Red Hat Insights* from the *Source* list.
-. The *Add new source* window expands with the required *Credential* field.
-Choose from an existing Red Hat Insights Credential.
-For more information, see xref:controller-credentials[Credentials].
+. Click btn:[Create source].
+. In the *Create source* page, select *Red Hat Insights* from the *Source* list.
+. The *Create source* window expands with additional fields.
+Enter the following details:
+
+* Optional: *Credential*: Choose from an existing Red Hat Insights Credential.
+For more information, see xref:controller-credentials[Managing user credentials].
. Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in xref:proc-controller-add-source[Adding a source].
. Use the *Source Variables* field to override variables used by the `insights` inventory plugin.
Enter variables by using either JSON or YAML syntax. 
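++
+For example, the same setting can be entered in either syntax. The `vars_prefix` key is shown purely as an illustration; check the `insights` inventory plugin documentation for the parameters it actually supports. In YAML:
++
+----
+---
+vars_prefix: insights_
+----
++
+The equivalent in JSON:
++
+----
+{
+  "vars_prefix": "insights_"
+}
+----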
diff --git a/downstream/modules/platform/proc-controller-inv-source-open-shift-virt.adoc b/downstream/modules/platform/proc-controller-inv-source-open-shift-virt.adoc
new file mode 100644
index 0000000000..fc6182b537
--- /dev/null
+++ b/downstream/modules/platform/proc-controller-inv-source-open-shift-virt.adoc
@@ -0,0 +1,36 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-controller-inv-source-open-shift-virt"]
+
+= OpenShift Virtualization
+
+This inventory source uses a cluster that is able to deploy Red Hat OpenShift Container Platform Virtualization.
+To configure a Red Hat OpenShift Container Platform Virtualization inventory source, you need a virtual machine deployed in a specific namespace and an OpenShift or Kubernetes API Bearer Token credential.
+
+.Procedure
+
+. From the navigation panel, select {MenuInfrastructureInventories}.
+. Select the inventory that you want to add a source to.
+. In the *Sources* tab, click btn:[Create source].
+. From the menu:Source[] menu, select *OpenShift Virtualization*.
+* The *Create source* window expands with the required *Credential* field.
++
+Choose from an existing Kubernetes API Bearer Token credential.
+For more information, see xref:ref-controller-credential-openShift[OpenShift or Kubernetes API Bearer Token credential type].
+In this example, the `cmv2.engineering.redhat.com` credential is used.
+. You can optionally specify the *Verbosity*, *Host Filter*, *Enabled Variable/Value*, and *Update options* as described in the xref:proc-controller-add-source[Adding a source] steps.
+. Use the *Source Variables* field to override variables used by the `kubernetes` inventory plugin.
+Enter variables by using either JSON or YAML syntax.
+Use the radio button to toggle between the two.
+For more information about these variables, see the link:https://kubevirt.io/kubevirt.core/main/plugins/kubevirt.html#parameters[kubevirt.core.kubevirt inventory source] documentation.
++
+In the following example, the connections variable is used to specify access to a particular namespace in a cluster:
++
+----
+---
+connections:
+- namespaces:
+  - hao-test
+----
++
+. Click btn:[Save] and then click btn:[Sync] to sync the inventory.
diff --git a/downstream/modules/platform/proc-controller-inv-source-openstack.adoc b/downstream/modules/platform/proc-controller-inv-source-openstack.adoc
index 5c3631a256..7344b2ee40 100644
--- a/downstream/modules/platform/proc-controller-inv-source-openstack.adoc
+++ b/downstream/modules/platform/proc-controller-inv-source-openstack.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-inv-source-openstack"]

= OpenStack

@@ -7,11 +9,13 @@ Use the following procedure to configure an OpenStack-sourced inventory.
.Procedure
. From the navigation panel, select {MenuInfrastructureInventories}.
. Select the inventory name you want to add a source to and click the *Sources* tab.
-. Click btn:[Add source].
-. In the *Add new source* page, select *OpenStack* from the *Source* list.
-. The *Add new Source* window expands with the required *Credential* field.
-Choose from an existing OpenStack Credential.
-For more information, see xref:controller-credentials[Credentials].
+. Click btn:[Create source].
+. In the *Create source* page, select *OpenStack* from the *Source* list.
+. The *Create source* window expands with additional fields.
+Enter the following details:
+
+* Optional: *Credential*: Choose from an existing OpenStack Credential.
+For more information, see xref:controller-credentials[Managing user credentials].
. 
Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in xref:proc-controller-add-source[Adding a source].
. Use the *Source Variables* field to override variables used by the `openstack` inventory plugin.
Enter variables by using either JSON or YAML syntax.
diff --git a/downstream/modules/platform/proc-controller-inv-source-rh-virt.adoc b/downstream/modules/platform/proc-controller-inv-source-rh-virt.adoc
index 64d52524c1..cdc58f8019 100644
--- a/downstream/modules/platform/proc-controller-inv-source-rh-virt.adoc
+++ b/downstream/modules/platform/proc-controller-inv-source-rh-virt.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-inv-source-rh-virt"]

= Red Hat Virtualization

@@ -7,11 +9,13 @@ Use the following procedure to configure a Red Hat virtualization-sourced invent
.Procedure
. From the navigation panel, select {MenuInfrastructureInventories}.
. Select the inventory name you want to add a source to and click the *Sources* tab.
-. Click btn:[Add source].
-. In the *Add new source* page, select *Red Hat Virtualization* from the *Source* list.
-. The *Add new source* window expands with the required *Credential* field.
-Choose from an existing Red Hat Virtualization Credential.
-For more information, see xref:controller-credentials[Credentials].
+. Click btn:[Create source].
+. In the *Create source* page, select *Red Hat Virtualization* from the *Source* list.
+. The *Create source* window expands with additional fields.
+Enter the following details:
+
+* Optional: *Credential*: Choose from an existing Red Hat Virtualization Credential.
+For more information, see xref:controller-credentials[Managing user credentials].
. Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in xref:proc-controller-add-source[Adding a source].
. Use the *Source Variables* field to override variables used by the `ovirt` inventory plugin.
Enter variables by using either JSON or YAML syntax.
diff --git a/downstream/modules/platform/proc-controller-inv-source-satellite.adoc b/downstream/modules/platform/proc-controller-inv-source-satellite.adoc
index 80f15beb13..8bac71c8c8 100644
--- a/downstream/modules/platform/proc-controller-inv-source-satellite.adoc
+++ b/downstream/modules/platform/proc-controller-inv-source-satellite.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-inv-source-satellite"]

= Red Hat Satellite 6

@@ -7,11 +9,13 @@ Use the following procedure to configure a Red Hat Satellite-sourced inventory.
.Procedure
. From the navigation panel, select {MenuInfrastructureInventories}.
. Select the inventory name you want to add a source to and click the *Sources* tab.
-. Click btn:[Add source].
-. In the *Add new source* page, select *Red Hat Satellite 6* from the *Source* list.
-. The *Add new source* window expands with the required *Credential* field.
-Choose from an existing Satellite Credential.
-For more information, see xref:controller-credentials[Credentials].
+. Click btn:[Create source].
+. In the *Create source* page, select *Red Hat Satellite 6* from the *Source* list.
+. The *Create source* window expands with additional fields.
+Enter the following details:
+
+* Optional: *Credential*: Choose from an existing Satellite Credential.
+For more information, see xref:controller-credentials[Managing user credentials].
. 
Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in xref:proc-controller-add-source[Adding a source].
. Use the *Source Variables* field to specify parameters used by the `foreman` inventory source.
Enter variables by using either JSON or YAML syntax.
@@ -20,6 +24,7 @@ For more information about these variables, see the link:https://docs.ansible.co
//+
//image:inventories-create-source-rhsat6-example.png[Inventories - create source - RH Satellite example]

+.Troubleshooting
If you encounter an issue with the {ControllerName} inventory not having the "related groups" from Satellite, you might need to define these variables in the inventory source.
For more information, see xref:controller-rh-satellite[Red Hat Satellite 6].
diff --git a/downstream/modules/platform/proc-controller-inv-source-terraform.adoc b/downstream/modules/platform/proc-controller-inv-source-terraform.adoc
index 547d423f15..98407d5533 100644
--- a/downstream/modules/platform/proc-controller-inv-source-terraform.adoc
+++ b/downstream/modules/platform/proc-controller-inv-source-terraform.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-inv-source-terraform"]

// This Terraform module is for AAP 2.5

@@ -10,17 +12,17 @@ The plugin parses a terraform state file and add hosts for AWS EC2, GCE, and {Az
.Procedure
. From the navigation panel, select {MenuAEProjects}.
-. On the *Projects* page, click btn:[Create project] to start the *Create Project* window.
+. On the *Projects* page, click btn:[Create project] to start the *Create project* window.
** Enter the appropriate details according to the steps in xref:proc-controller-adding-a-project[Adding a new project].
. From the navigation panel, select {MenuInfrastructureInventories}.
. Select the inventory that you want to add a source to.
-. In the *Sources* tab, click btn:[Add source].
+. In the *Sources* tab, click btn:[Create source].
. From the menu:Source[] menu, select *Terraform State*.
-* The *Add new source* window expands with the required *Credential* field.
+* The *Create source* window expands with the optional *Credential* field.
+
-Choose from an existing Terraform backend configuration credential. For more information, see xref:ref-controller-credential-terraform[Terraform backend configuration].
-. Enable the options to *Overwrite* and *Update on Launch*.
-. Use the *Source Variables* field to override variables used by the `terraform_state` inventory plugin.
+Choose an existing Terraform backend configuration credential. For more information, see xref:ref-controller-credential-terraform[Terraform backend configuration].
+. Enable the options to *Overwrite* and *Update on launch*.
+. Use the *Source variables* field to override variables used by the `terraform_state` inventory plugin.
Enter variables by using either JSON or YAML syntax.
Use the radio button to toggle between the two.
For more information about these variables, see the link:https://console.redhat.com/ansible/automation-hub/repo/published/cloud/terraform/content/inventory/terraform_state/[terraform_state] file.
@@ -33,16 +35,15 @@ The following is an example Amazon S3 backend:
backend_type: s3
----
+
-//The current 2.5 test environment does not have the following option yet:
-. Select an *Execution Environment* that has a Terraform binary.
+. Select an *Execution environment* that has a Terraform binary. 
This is required for the inventory plugin to run the Terraform commands that read inventory data from the Terraform state file.

-.Additional resources
-For more information, see the link:https://github.com/ansible-cloud/terraform_ee[Terraform EE] readme that has an example {ExecEnvShort} configuration with a Terraform binary.
-
-== Terraform provider for {PlatformNameShort}
+[IMPORTANT]
+====
+Inventories created with the Terraform provider for {PlatformNameShort} are managed by Terraform. Do not edit them in {PlatformNameShort}, as doing so can introduce drift to the Terraform deployment.
+====

-Inventories created this way are managed by Terraform and you must not edit them in {PlatformNameShort} as it can introduce drift to the Terraform deployment.
+.Additional resources

-You can create inventories and hosts within the Terraform configuration by using the Terraform provider for {PlatformNameShort}.
-For more information, see the link:https://registry.terraform.io/providers/ansible/aap/latest/docs[AAP Provider] section of the Terraform documentation.
+* link:https://github.com/ansible-cloud/terraform_ee[Terraform EE]
+* link:https://registry.terraform.io/providers/ansible/aap/latest/docs[Red Hat Ansible Automation Platform provider]
diff --git a/downstream/modules/platform/proc-controller-inv-source-vm-esxi.adoc b/downstream/modules/platform/proc-controller-inv-source-vm-esxi.adoc
new file mode 100644
index 0000000000..081da0b1ef
--- /dev/null
+++ b/downstream/modules/platform/proc-controller-inv-source-vm-esxi.adoc
@@ -0,0 +1,35 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-controller-inv-source-vm-esxi"]
+
+= VMware ESXi
+
+Use the following procedure to configure a VMware ESXi-sourced inventory.
+
+.Procedure
+. From the navigation panel, select {MenuInfrastructureInventories}.
+. Select the inventory name you want to add a source to, and click the *Sources* tab.
+. Click btn:[Create source].
+. Enter a *Name* for the source (required).
+. In the *Create source* page, select *VMware ESXi* from the *Source* list.
+. The *Create source* window expands with additional fields.
+Enter the following details:
+
+* *Credential*: Choose from an existing VMware credential.
+For more information, see xref:controller-credentials[Managing user credentials].
+
+. Use the *Verbosity* menu to select the level of output on any inventory source's update jobs.
+. Optional: You can specify the host filter, enabled variables or values, and update options as described in xref:proc-controller-add-source[Adding a source].
+. Use the *Source Variables* field to override variables used by the `vmware_inventory` inventory plugin.
+Enter variables by using either JSON or YAML syntax.
+Use the radio button to toggle between the two.
+
+.Troubleshooting
+
+VMware properties have changed from lower case to camel case.
+{ControllerNameStart} provides aliases for the top-level keys, but lower case keys in nested properties have been discontinued.
+For a list of valid and supported properties, see link:https://docs.ansible.com/ansible/4/scenario_guides/vmware_scenarios/vmware_inventory_vm_attributes.html[Using Virtual machine attributes in VMware dynamic inventory plugin]. 
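+
+As a minimal sketch of the *Source Variables* field discussed above (assuming the standard `hostnames` and `properties` options of the `vmware_inventory` plugin), camel case properties are referenced as follows:
+
+----
+---
+hostnames:
+- config.name
+properties:
+- name
+- summary.runtime.powerState
+----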
+
+.Additional resources
+
+* link:https://github.com/ansible-collections/vmware.vmware/blob/main/plugins/inventory/esxi_hosts.py[VMware ESXi plugin]
diff --git a/downstream/modules/platform/proc-controller-inv-source-vm-vcenter.adoc b/downstream/modules/platform/proc-controller-inv-source-vm-vcenter.adoc
index e78f29c5d7..be18348a23 100644
--- a/downstream/modules/platform/proc-controller-inv-source-vm-vcenter.adoc
+++ b/downstream/modules/platform/proc-controller-inv-source-vm-vcenter.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-inv-source-vm-vcenter"]

= VMware vCenter

@@ -7,11 +9,13 @@ Use the following procedure to configure a VMWare-sourced inventory.
.Procedure
. From the navigation panel, select {MenuInfrastructureInventories}.
. Select the inventory name you want to add a source to and click the *Sources* tab.
-. Click btn:[Add source].
-. In the *Add new source* page, select *VMware vCenter* from the *Source* list.
-. The *Add new source* window expands with the required *Credential* field.
-Choose from an existing VMware Credential.
-For more information, see xref:controller-credentials[Credentials].
+. Click btn:[Create source].
+. In the *Create source* page, select *VMware vCenter* from the *Source* list.
+. The *Create source* window expands with additional fields.
+Enter the following details:
+
+* Optional: *Credential*: Choose from an existing VMware credential.
+For more information, see xref:controller-credentials[Managing user credentials].
. Optional: You can specify the verbosity, host filter, enabled variables or values, and update options as described in xref:proc-controller-add-source[Adding a source].

. Use the *Source Variables* field to override variables used by the `vmware_inventory` inventory plugin.
@@ -19,11 +23,10 @@ Enter variables by using either JSON or YAML syntax.
Use the radio button to toggle between the two.
For more information about these variables, see the link:https://github.com/ansible-collections/community.vmware/blob/main/plugins/inventory/vmware_vm_inventory.py[vmware_inventory inventory plugin].

-[NOTE]
-====
+.Troubleshooting
+
VMware properties have changed from lower case to camel case.
{ControllerNameStart} provides aliases for the top-level keys, but lower case keys in nested properties have been discontinued.
-For a list of valid and supported properties, see link:https://docs.ansible.com/ansible/latest/collections/community/vmware/docsite/vmware_scenarios/vmware_inventory_vm_attributes.html[Using Virtual machine attributes in VMware dynamic inventory plugin].
-====
+For a list of valid and supported properties, see link:https://docs.ansible.com/ansible/4/scenario_guides/vmware_scenarios/vmware_inventory_vm_attributes.html[Using Virtual machine attributes in VMware dynamic inventory plugin]. 
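+
+For example, a `compose` entry (an illustrative sketch; `guest.ipAddress` is one of the camel case properties discussed above) can map a property to a host variable:
+
+----
+---
+compose:
+  ansible_host: guest.ipAddress
+----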
//image:inventories-create-source-vmware-example.png[Inventories- create source - VMWare example] diff --git a/downstream/modules/platform/proc-controller-launch-job-template.adoc b/downstream/modules/platform/proc-controller-launch-job-template.adoc index 41b9bb2615..2dcf3422a4 100644 --- a/downstream/modules/platform/proc-controller-launch-job-template.adoc +++ b/downstream/modules/platform/proc-controller-launch-job-template.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-launch-job-template"] = Launching a job template @@ -11,7 +13,7 @@ Easier deployments drive consistency, by running your playbooks the same way eac .Procedure * Launch a job template by using one of these methods: -** From the navigation panel, select {MenuAETemplates} and click *Launch template* image:rightrocket.png[Rightrocket,15,15] next to the job template. +** From the navigation panel, select {MenuAETemplates} and click *Launch template* image:rightrocket.png[Launch,15,15] on the job template card. + //image::ug-job-template-launch.png[Job template launch] + @@ -40,7 +42,7 @@ Ensure that you complete the tabs in the order that the prompts appear. When launching, {ControllerName} automatically redirects the web browser to the *Job Status* page for this job under the *Jobs* tab. You can re-launch the most recent job from the list view to re-run on all hosts or just failed hosts in the specified inventory. -For more information, see the xref:controller-jobs[Jobs] section. +For more information, see the xref:controller-jobs[Jobs in automation controller] section. When slice jobs are running, job lists display the workflow and job slices, and a link to view their details individually. @@ -51,5 +53,5 @@ This endpoint accepts JSON and you can specify a list of unified job templates ( The user must have the appropriate permission to launch all the jobs. If all jobs are not launched an error is returned indicating why the operation was not able to complete. Use the `OPTIONS` request to return relevant schema. -For more information, see the link:https://docs.ansible.com/automation-controller/latest/html/controllerapi/api_ref.html#/Bulk[Bulk endpoint] of the Reference section of the Automation Controller API Guide. +For more information, see the link:{LinkControllerAPIOverview}/api_ref.html#/Bulk[Bulk endpoint] of the Reference section of the {TitleControllerAPIOverview}. ==== diff --git a/downstream/modules/platform/proc-controller-launch-workflow-template.adoc b/downstream/modules/platform/proc-controller-launch-workflow-template.adoc index 534dcbdbe9..0c52d58f98 100644 --- a/downstream/modules/platform/proc-controller-launch-workflow-template.adoc +++ b/downstream/modules/platform/proc-controller-launch-workflow-template.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-launch-workflow-template"] = Launching a workflow job template diff --git a/downstream/modules/platform/proc-controller-logging-in.adoc b/downstream/modules/platform/proc-controller-logging-in.adoc index 81045371cc..05116e490d 100644 --- a/downstream/modules/platform/proc-controller-logging-in.adoc +++ b/downstream/modules/platform/proc-controller-logging-in.adoc @@ -1,13 +1,15 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-logging-in"] .Procedure -. With the login information provided after your installation completed, open a web browser and log in to the {ControllerName} by navigating to its server URL at: \https:/// +. 
diff --git a/downstream/modules/platform/proc-controller-launch-workflow-template.adoc b/downstream/modules/platform/proc-controller-launch-workflow-template.adoc
index 534dcbdbe9..0c52d58f98 100644
--- a/downstream/modules/platform/proc-controller-launch-workflow-template.adoc
+++ b/downstream/modules/platform/proc-controller-launch-workflow-template.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-launch-workflow-template"]
= Launching a workflow job template
diff --git a/downstream/modules/platform/proc-controller-logging-in.adoc b/downstream/modules/platform/proc-controller-logging-in.adoc
index 81045371cc..05116e490d 100644
--- a/downstream/modules/platform/proc-controller-logging-in.adoc
+++ b/downstream/modules/platform/proc-controller-logging-in.adoc
@@ -1,13 +1,15 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-logging-in"]
.Procedure
-. With the login information provided after your installation completed, open a web browser and log in to the {ControllerName} by navigating to its server URL at: \https:///
+. With the login information provided after your installation completed, open a web browser and log in to the {PlatformNameShort} by navigating to its server URL at: \https:///
. Use the credentials specified during the installation process to login:
* The default username is *admin*.
* The password for *admin* is the value specified.
-. Click the btn:[More Actions] icon *{MoreActionsIcon}* next to the desired user.
+. Click the btn:[More Actions] icon *{MoreActionsIcon}* next to the required user.
. Click btn:[Edit].
-. Edit the required details and click btn:[Save].
\ No newline at end of file
+. Edit the required details and click btn:[Save].
diff --git a/downstream/modules/platform/proc-controller-management-notifications.adoc b/downstream/modules/platform/proc-controller-management-notifications.adoc
index 77f5daf727..81d15d7124 100644
--- a/downstream/modules/platform/proc-controller-management-notifications.adoc
+++ b/downstream/modules/platform/proc-controller-management-notifications.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-management-notifications"]
= Setting notifications
@@ -9,7 +11,7 @@ Use the following procedure to review or set notifications associated with a man
//image:management-job-notifications.png[Notifications]
-If none exist, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-notifications#controller-create-notification-template[Creating a notification template] in the _{ControllerUG}_.
+If none exist, see link:{URLControllerUserGuide}/controller-notifications#controller-create-notification-template[Creating a notification template] in _{ControllerUG}_.
image:management-job-notifications-empty.png[No notifications set]
diff --git a/downstream/modules/platform/proc-controller-managing-live-events.adoc b/downstream/modules/platform/proc-controller-managing-live-events.adoc
index 8831aef419..d5b4ef3513 100644
--- a/downstream/modules/platform/proc-controller-managing-live-events.adoc
+++ b/downstream/modules/platform/proc-controller-managing-live-events.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-managing-live-events"]
= Managing live events in the {ControllerName} UI
diff --git a/downstream/modules/platform/proc-controller-metrics-utility-ocp.adoc b/downstream/modules/platform/proc-controller-metrics-utility-ocp.adoc
new file mode 100644
index 0000000000..297d6d6bb4
--- /dev/null
+++ b/downstream/modules/platform/proc-controller-metrics-utility-ocp.adoc
@@ -0,0 +1,10 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="controller-metrics-utility-ocp"]
+
+= Configuring metrics-utility on {OCPShort} from the {PlatformNameShort} operator
+
+`metrics-utility` is included in the {OCPShort} image beginning with version 4.12, 4.512, and 4.6.
+If your system does not have `metrics-utility` installed, update your OpenShift image to the latest version.
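Before changing the schedule, you can confirm that the binary is actually present in your running deployment. The following is a sketch only; the deployment name `automationcontroller-task` is an assumption, so substitute the name used in your cluster:

----
# Look for the metrics-utility command inside the controller task container.
oc exec deployment/automationcontroller-task -- bash -c 'command -v metrics-utility'
----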
+
+Complete the following steps to configure the run schedule for `metrics-utility` on {OCPShort} using the {PlatformNameShort} operator:
diff --git a/downstream/modules/platform/proc-controller-metrics-utility-rhel.adoc b/downstream/modules/platform/proc-controller-metrics-utility-rhel.adoc
new file mode 100644
index 0000000000..f05e35ce5a
--- /dev/null
+++ b/downstream/modules/platform/proc-controller-metrics-utility-rhel.adoc
@@ -0,0 +1,111 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="controller-metrics-utility-rhel"]
+
+= Configuring metrics-utility on {RHEL}
+
+.Prerequisites
+
+* An active {PlatformNameShort} subscription
+
+Metrics-utility is included with {PlatformNameShort}, so you do not need a separate installation.
+The following commands gather the relevant data and generate a link:https://connect.redhat.com/en/programs/certified-cloud-service-provider[CCSP] report containing your usage metrics.
+You can configure these commands as cronjobs to ensure they run at the beginning of every month.
+See link:https://www.redhat.com/sysadmin/linux-cron-command[How to schedule jobs using the Linux 'cron' utility] for more information about configuring jobs with cron syntax.
+
+.Procedure
+
+. Create two scripts in your user's home directory in order to set correct variables to ensure that `metrics-utility` gathers all relevant data.
+.. In `/home/my-user/cron-gather`:
++
+[source, ]
+----
+#!/bin/sh
+
+# Specify the following variables to indicate where the report is deposited in your file system
+export METRICS_UTILITY_SHIP_TARGET=directory
+export METRICS_UTILITY_SHIP_PATH=/awx_devel/awx-dev/metrics-utility/shipped_data/billing
+
+# Run the following command to gather and store the data in the provided SHIP_PATH directory:
+metrics-utility gather_automation_controller_billing_data --ship --until=10m
+----
++
+.. In `/home/my-user/cron-report`:
++
+[source, ]
+----
+#!/bin/sh
+
+# Specify the following variables to indicate where the report is deposited in your file system
+export METRICS_UTILITY_SHIP_TARGET=directory
+export METRICS_UTILITY_SHIP_PATH=/awx_devel/awx-dev/metrics-utility/shipped_data/billing
+
+# Set these variables to generate a report:
+export METRICS_UTILITY_REPORT_TYPE=CCSPv2
+export METRICS_UTILITY_PRICE_PER_NODE=11.55 # in USD
+export METRICS_UTILITY_REPORT_SKU=MCT3752MO
+export METRICS_UTILITY_REPORT_SKU_DESCRIPTION="EX: Red Hat Ansible Automation Platform, Full Support (1 Managed Node, Dedicated, Monthly)"
+export METRICS_UTILITY_REPORT_H1_HEADING="CCSP Reporting : ANSIBLE Consumption"
+export METRICS_UTILITY_REPORT_COMPANY_NAME="Company Name"
+export METRICS_UTILITY_REPORT_EMAIL="email@email.com"
+export METRICS_UTILITY_REPORT_RHN_LOGIN="test_login"
+export METRICS_UTILITY_REPORT_COMPANY_BUSINESS_LEADER="BUSINESS LEADER"
+export METRICS_UTILITY_REPORT_COMPANY_PROCUREMENT_LEADER="PROCUREMENT LEADER"
+
+# Build the report
+metrics-utility build_report
+----
++
+. To ensure that these files are executable, run:
++
+[source, ]
+----
+chmod a+x /home/my-user/cron-gather /home/my-user/cron-report
+----
++
+. To open the cron file for editing, run:
++
+[source, ]
+----
+crontab -e
+----
++
+. To configure the run schedule, add the following parameters to the end of the file and specify how often you want `metrics-utility` to gather information and build a report using link:https://www.redhat.com/sysadmin/linux-cron-command[cron syntax]. In the following example, the `gather` command is configured to run every hour at 00 minutes.
The `build_report` command is configured to run on the second day of each month at 4:00 AM.
++
+[source, ]
+----
+0 */1 * * * /home/my-user/cron-gather
+0 4 2 * * /home/my-user/cron-report
+----
++
+. Save and close the file.
+. To verify that you saved your changes, run:
++
+[source, ]
+----
+crontab -l
+----
++
+. To ensure that data is being collected, run:
++
+[source, ]
+----
+cat /var/log/cron
+----
++
+The following is an example of the output. Note that time and date might vary depending on how you configure the run schedule:
++
+[source, ]
+----
+May 8 09:45:03 ip-10-0-6-23 CROND[51623]: (root) CMDOUT (No billing data for month: 2024-04)
+May 8 09:45:03 ip-10-0-6-23 CROND[51623]: (root) CMDEND (metrics-utility build_report)
+May 8 09:45:19 ip-10-0-6-23 crontab[51619]: (root) END EDIT (root)
+May 8 09:45:34 ip-10-0-6-23 crontab[51659]: (root) BEGIN EDIT (root)
+May 8 09:46:01 ip-10-0-6-23 CROND[51688]: (root) CMD (metrics-utility gather_automation_controller_billing_data --ship --until=10m)
+May 8 09:46:03 ip-10-0-6-23 CROND[51669]: (root) CMDOUT (/tmp/9e3f86ee-c92e-4b05-8217-72c496e6ffd9-2024-05-08-093402+0000-2024-05-08-093602+0000-0.tar.gz)
+May 8 09:46:03 ip-10-0-6-23 CROND[51669]: (root) CMDEND (metrics-utility gather_automation_controller_billing_data --ship --until=10m)
+May 8 09:46:26 ip-10-0-6-23 crontab[51659]: (root) END EDIT (root)
+----
++
+
+The generated report has the default name CCSP--.xlsx and is deposited in the ship path that you specified in step 1a.
diff --git a/downstream/modules/platform/proc-controller-modify-run-schedule-OCP.adoc b/downstream/modules/platform/proc-controller-modify-run-schedule-OCP.adoc
new file mode 100644
index 0000000000..82109397e8
--- /dev/null
+++ b/downstream/modules/platform/proc-controller-modify-run-schedule-OCP.adoc
@@ -0,0 +1,32 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="modify-the-run-schedule-on-OCP"]
+
+= Modifying the run schedule on {OCPShort} from the {PlatformNameShort} operator
+
+Adjust the execution schedule of `metrics-utility` within your {PlatformNameShort} deployment running on {OCPShort}.
+
+.Procedure
+
+. From the navigation panel, select menu:Workloads[Deployments].
+. On the next screen, select *automation-controller-operator-controller-manager*.
+. Beneath the heading *Deployment Details*, click the down arrow button to change the number of pods to zero. This pauses the deployment so you can update the running schedule.
+. From the navigation panel, select *Installed Operators*.
+. From the list of installed operators, select {PlatformNameShort}.
+. On the next screen, select the {ControllerName} tab.
+. From the list that appears, select your {ControllerName} instance.
+. On the next screen, select the `YAML` tab.
+. In the `YAML` file, find the following parameters and enter a variable representing how often `metrics-utility` should gather data and how often it should produce a report:
++
+[source, ]
+----
+metrics_utility_cronjob_gather_schedule:
+metrics_utility_cronjob_report_schedule:
+----
++
+. Click btn:[Save].
+. From the navigation menu, select menu:Deployments[] and then select *automation-controller-operator-controller-manager*.
+. Increase the number of pods to 1.
+. To verify that you have changed the `metrics-utility` running schedule successfully, you can take one or both of the following steps:
+.. Return to the `YAML` file and ensure that the previously described parameters reflect the correct variables.
+..
From the navigation menu, select menu:Workloads[Cronjobs] and ensure that your cronjobs show the updated schedule.
diff --git a/downstream/modules/platform/proc-controller-pass-extra-variables-provisioning-callbacks.adoc b/downstream/modules/platform/proc-controller-pass-extra-variables-provisioning-callbacks.adoc
index a67a91385b..13d8802801 100644
--- a/downstream/modules/platform/proc-controller-pass-extra-variables-provisioning-callbacks.adoc
+++ b/downstream/modules/platform/proc-controller-pass-extra-variables-provisioning-callbacks.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-pass-extra-variables-provisioning-callbacks"]
= Passing extra variables to Provisioning Callbacks
@@ -22,4 +24,4 @@ root@localhost:~$ curl -f -H 'Content-Type: application/json' -XPOST \
https:///api/v2/job_templates/7/callback
----
-For more information, see link:https://docs.ansible.com/automation-controller/4.4/html/administration/tipsandtricks.html#launch-jobs-curl[Launching Jobs with Curl] in the _{ControllerAG}_.
\ No newline at end of file
+For more information, see link:{URLControllerAdminGuide}/controller-tips-and-tricks#ref-controller-launch-jobs-with-curl[Launching Jobs with Curl] in _{ControllerAG}_.
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-controller-pin-instances.adoc b/downstream/modules/platform/proc-controller-pin-instances.adoc
index 48fb54bb07..bc39c14e77 100644
--- a/downstream/modules/platform/proc-controller-pin-instances.adoc
+++ b/downstream/modules/platform/proc-controller-pin-instances.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-pin-instances"]
= Pinning instances manually to specific groups
diff --git a/downstream/modules/platform/proc-controller-project-add-permissions.adoc b/downstream/modules/platform/proc-controller-project-add-permissions.adoc
index ca43fbcfac..153e2e8707 100644
--- a/downstream/modules/platform/proc-controller-project-add-permissions.adoc
+++ b/downstream/modules/platform/proc-controller-project-add-permissions.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-project-add-permission"]
= Adding project permissions
diff --git a/downstream/modules/platform/proc-controller-project-remove-permissions.adoc b/downstream/modules/platform/proc-controller-project-remove-permissions.adoc
index c4eb9f95b8..08013bbb18 100644
--- a/downstream/modules/platform/proc-controller-project-remove-permissions.adoc
+++ b/downstream/modules/platform/proc-controller-project-remove-permissions.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-project-remove-permissions"]
= Removing permissions from a project
diff --git a/downstream/modules/platform/proc-controller-proxy-settings.adoc b/downstream/modules/platform/proc-controller-proxy-settings.adoc
new file mode 100644
index 0000000000..6a3d665605
--- /dev/null
+++ b/downstream/modules/platform/proc-controller-proxy-settings.adoc
@@ -0,0 +1,30 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-controller-proxy-settings"]
+
+= {ControllerNameStart} settings
+After using the RPM installation program, you must configure {ControllerName} to use an egress proxy.
+
+[NOTE]
+====
+This is not required for containerized installers because Podman uses the system-configured proxy and redirects all the container traffic to the proxy.
+====
+
+For {ControllerName}, set the `AWX_TASK_ENV` variable in `/api/v2/settings/`.
+To do this through the UI, use the following procedure:
+
+.Procedure
+
+.
From the navigation panel, select {MenuSetJob}.
+. Click btn:[Edit].
+. Add the variables to the *Extra Environment Variables* field and set:
++
+----
+"AWX_TASK_ENV": {
+  "http_proxy": "http://external-proxy_0:3128",
+  "https_proxy": "http://external-proxy_0:3128",
+  "no_proxy": "localhost,127.0.0.0/8"
+}
+----
\ No newline at end of file
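If you prefer to make the same change without the UI, the setting can also be updated through the API. The following is a sketch only: the host and credentials are placeholders, and the exact settings endpoint (`/api/v2/settings/jobs/` here) is an assumption to verify against your deployment:

----
# PATCH the job settings with the proxy environment for task containers.
curl -k -X PATCH https://controller.example.com/api/v2/settings/jobs/ \
  -u admin:password \
  -H 'Content-Type: application/json' \
  -d '{"AWX_TASK_ENV": {"http_proxy": "http://external-proxy_0:3128", "https_proxy": "http://external-proxy_0:3128", "no_proxy": "localhost,127.0.0.0/8"}}'
----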
diff --git a/downstream/modules/platform/proc-controller-pulling-secret.adoc b/downstream/modules/platform/proc-controller-pulling-secret.adoc
index e95b28dd11..51ffefb525 100644
--- a/downstream/modules/platform/proc-controller-pulling-secret.adoc
+++ b/downstream/modules/platform/proc-controller-pulling-secret.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-pulling-secret"]
= Pulling the secret
diff --git a/downstream/modules/platform/proc-controller-remediate-insights-inventory.adoc b/downstream/modules/platform/proc-controller-remediate-insights-inventory.adoc
index bb3b50abb3..b52689b5fa 100644
--- a/downstream/modules/platform/proc-controller-remediate-insights-inventory.adoc
+++ b/downstream/modules/platform/proc-controller-remediate-insights-inventory.adoc
@@ -1,8 +1,11 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-remediate-insights-inventory"]
= Remediating a Red Hat Insights inventory
Remediation of a Red Hat Insights inventory enables {ControllerName} to run Red Hat Insights playbooks with a single click.
+
You can do this by creating a job template to run the Red Hat Insights remediation.
.Procedure
@@ -24,7 +27,7 @@ The credential does not have to be a Red Hat Insights credential.
+
image::ug-insights-create-job-template.png[Insights job template]
+
-. Click btn:[Save].
+. Click btn:[Create job template].
. Click the launch image:rightrocket.png[Launch,15,15] icon to launch the job template.
When complete, the job results in the *Job Details* page.
diff --git a/downstream/modules/platform/proc-controller-remove-inv-permissions.adoc b/downstream/modules/platform/proc-controller-remove-inv-permissions.adoc
new file mode 100644
index 0000000000..285cf90b08
--- /dev/null
+++ b/downstream/modules/platform/proc-controller-remove-inv-permissions.adoc
@@ -0,0 +1,15 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-controller-removing-inv-permissions"]
+
+= Removing a permission
+
+Remove specific permissions from a user associated with a resource.
+Disassociating a role restricts a user's access to functionalities or data they no longer need.
+
+.Procedure
+
+* To remove roles for a particular user, click the image:disassociate.png[Disassociate,10,10] icon next to its resource.
+This launches a confirmation window, asking you to confirm the disassociation.
+
+//image:permissions-disassociate-confirm.png[image]
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-controller-remove-old-activity-stream.adoc b/downstream/modules/platform/proc-controller-remove-old-activity-stream.adoc
index d0ef19193b..02d0e813ac 100644
--- a/downstream/modules/platform/proc-controller-remove-old-activity-stream.adoc
+++ b/downstream/modules/platform/proc-controller-remove-old-activity-stream.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-remove-old-activity-stream"]
= Removing old activity stream data
diff --git a/downstream/modules/platform/proc-controller-reset-tower-base.adoc b/downstream/modules/platform/proc-controller-reset-tower-base.adoc
index 4f7d249732..8cf02502a4 100644
--- a/downstream/modules/platform/proc-controller-reset-tower-base.adoc
+++ b/downstream/modules/platform/proc-controller-reset-tower-base.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-reset-tower-base"]
= Resetting TOWER_URL_BASE
@@ -14,6 +16,6 @@ Use the following procedure to reset `TOWER_URL_BASE` if the wrong address has b
. From the navigation panel, select menu:{MenuAEAdminSettings}[System].
. Click btn:[Edit].
-. Enter the address in the *Base URL of the service* field for the DNS entry you wish to appear in notifications.
+. Enter the address in the *Base URL of the service* field for the DNS entry you want to appear in notifications.
//[ddacosta] Subscription is not an option from the Settings menu in the controller test environment. Need to verify where this lives and if it changes for 2.5
//. Re-add your license in menu:Settings[Subscription settings].
diff --git a/downstream/modules/platform/proc-controller-review-organizations.adoc b/downstream/modules/platform/proc-controller-review-organizations.adoc
index 06c5f47376..3d023bb6c9 100644
--- a/downstream/modules/platform/proc-controller-review-organizations.adoc
+++ b/downstream/modules/platform/proc-controller-review-organizations.adoc
@@ -1,23 +1,15 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-review-organizations"]
-= Reviewing the organization
+= Organizations list view
-The Organizations page displays the existing organizations for your installation.
+The *Organizations* page displays the existing organizations for your installation.
From here, you can search for a specific organization, filter the list of organizations, or change the sort order for the list.
.Procedure
-* From the navigation panel, select {MenuControllerOrganizations}.
-+
-[NOTE]
-====
-{ControllerNameStart} automatically creates a default organization.
-If you have a Self-support level license, you have only the default organization available and must not delete it.
-====
-You can use the default organization as it is initially set up and edit it later.
-+
-[NOTE]
-====
-Only Enterprise or Premium licenses can add new organizations.
-====
-
-Enterprise and Premium license users who want to add a new organization should see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#assembly-controller-organizations[Organizations] section in the _{ControllerUG}_.
+. From the navigation panel, select {MenuAMOrganizations}.
+. In the Search bar, enter an appropriate keyword for the organization you want to search for and click the arrow icon.
+. From the menu bar, you can sort the list of organizations by using the arrows for *Name* to toggle your sorting preference.
+.
You can also sort the list by selecting *Name*, *Created*, or *Last modified* from the *Sort* list.
+. You can view organization details by clicking an organization *Name* on the *Organizations* page.
diff --git a/downstream/modules/platform/proc-controller-run-ad-hoc-commands.adoc b/downstream/modules/platform/proc-controller-run-ad-hoc-commands.adoc
index 9c1f4bbb8a..1506e4b206 100644
--- a/downstream/modules/platform/proc-controller-run-ad-hoc-commands.adoc
+++ b/downstream/modules/platform/proc-controller-run-ad-hoc-commands.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-run-ad-hoc-commands"]
= Running Ad Hoc commands
@@ -7,8 +9,6 @@ An example of an ad hoc command might be rebooting 50 machines in your infrastru
Anything you can do ad hoc can be accomplished by writing a playbook.
Playbooks can also glue many other operations together.
-Use the following procedure to run an ad hoc command:
-
.Procedure
. From the navigation panel, select {MenuInfrastructureInventories}.
. Select the inventory name you want to run an ad hoc command with.
@@ -30,7 +30,7 @@ The Run command window opens.
|===
| command | apt_repository | mount | win_service
| shell | apt_rpm | ping | win_updates
-| yum | service | selinux | win_group
+| yum | service | selinux | win_group
| apt | group | setup | win_user
| apt_key | user | win_ping | win_user
|===
@@ -39,7 +39,7 @@ The Run command window opens.
* *Limit*: Enter the limit used to target hosts in the inventory.
To target all hosts in the inventory enter `all` or `*`, or leave the field blank.
This is automatically populated with whatever was selected in the previous view before clicking the launch button.
* *Machine Credential*: Select the credential to use when accessing the remote hosts to run the command.
-Choose the credential containing the username and SSH key or password that Ansible needs to log into the remote hosts.
+Choose the credential containing the username and SSH key or password that Ansible needs to log in to the remote hosts.
* *Verbosity*: Select a verbosity level for the standard output.
* *Forks*: If needed, select the number of parallel or simultaneous processes to use while executing the command.
* *Show Changes*: Select to enable the display of Ansible changes in the diff --git a/downstream/modules/platform/proc-controller-scheduling-deletion.adoc b/downstream/modules/platform/proc-controller-scheduling-deletion.adoc index 57849a8646..087913c5bd 100644 --- a/downstream/modules/platform/proc-controller-scheduling-deletion.adoc +++ b/downstream/modules/platform/proc-controller-scheduling-deletion.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-scheduling-deletion"] = Scheduling deletion diff --git a/downstream/modules/platform/proc-controller-scheduling-job-templates.adoc b/downstream/modules/platform/proc-controller-scheduling-job-templates.adoc index 8f589763f4..397d807db3 100644 --- a/downstream/modules/platform/proc-controller-scheduling-job-templates.adoc +++ b/downstream/modules/platform/proc-controller-scheduling-job-templates.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-scheduling-job-templates"] = Scheduling job templates diff --git a/downstream/modules/platform/proc-controller-scheduling-workflow-job-templates.adoc b/downstream/modules/platform/proc-controller-scheduling-workflow-job-templates.adoc index 408b8f9625..5859862162 100644 --- a/downstream/modules/platform/proc-controller-scheduling-workflow-job-templates.adoc +++ b/downstream/modules/platform/proc-controller-scheduling-workflow-job-templates.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-scheduling-workflow-job-templates"] = Scheduling a workflow job template diff --git a/downstream/modules/platform/proc-controller-search-job-slices.adoc b/downstream/modules/platform/proc-controller-search-job-slices.adoc index 1dc5f33c4f..61bc3a5df3 100644 --- a/downstream/modules/platform/proc-controller-search-job-slices.adoc +++ b/downstream/modules/platform/proc-controller-search-job-slices.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-search-job-slices"] = Searching job slices diff --git a/downstream/modules/platform/proc-controller-select-capacity.adoc b/downstream/modules/platform/proc-controller-select-capacity.adoc index ec32dbbb35..b80302e86f 100644 --- a/downstream/modules/platform/proc-controller-select-capacity.adoc +++ b/downstream/modules/platform/proc-controller-select-capacity.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-select-capacity"] = Selecting the correct capacity @@ -21,8 +23,9 @@ A value of 0.5 is a 50/50 balance between the two algorithms, which is 18: View or edit the capacity: -. From the *Instances Groups* list view, select the desired instance. -. Select the *Instances* tab and adjust the *Capacity Adjustment* slider. +. From the navigation panel, select {MenuInfrastructureInstanceGroups}. +. On the *Instance Groups* list view, select the required instance. +. Select the *Instances* tab and adjust the *Capacity adjustment* slider. + [NOTE] ==== diff --git a/downstream/modules/platform/proc-controller-set-up-LDAP.adoc b/downstream/modules/platform/proc-controller-set-up-LDAP.adoc index 079666dfbb..81870640cf 100644 --- a/downstream/modules/platform/proc-controller-set-up-LDAP.adoc +++ b/downstream/modules/platform/proc-controller-set-up-LDAP.adoc @@ -1,121 +1,110 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-set-up-LDAP"] -= Setting up LDAP authentication += Configuring LDAP authentication -When configured, a user who logs in with an LDAP username and password automatically has an {ControllerName} account created for them. 
-They can be automatically placed into organizations as either regular users or organization administrators.
+As a platform administrator, you can configure LDAP as the source for account authentication information for {PlatformNameShort} users.
-Users created in the user interface (Local) take precedence over those logging into {ControllerName} for their first time with an alternative authentication solution.
-You must delete the local user if you want to re-use with another authentication method, such as LDAP.
+[NOTE]
+====
+If the LDAP server you want to connect to has a certificate that is self-signed or signed by an internal certificate authority (CA), the CA certificate must be added to the system’s trusted CAs. Otherwise, connection to the LDAP server will result in an error that the certificate issuer is not recognized.
+====
-Users created through an LDAP login cannot change their username, given name, surname, or set a local password for themselves.
-You can also configure this to restrict editing of other field names.
+When LDAP is configured, an account is created for any user who logs in with an LDAP username and password and they can be automatically placed into organizations as either regular users or organization administrators.
+
+{PlatformNameShort} treats usernames as case-insensitive in LDAP. It sends the username that was entered without modification to the LDAP provider for authentication. After successful authentication, the platform converts the username to lowercase and stores it in the database. For example, if a user logs in as `JDOE`, their platform username will be `jdoe`. If the user logs in again as `JDoe`, their username will still be `jdoe`.
+
+However, if {PlatformNameShort} is configured with multiple LDAP authenticators, and the same user IDs exist across them, their usernames might differ. For instance, `JDOE` might have the username `jdoe`, while `jDOE` could be assigned `jdoe-`.
[NOTE]
====
-If the LDAP server you want to connect to has a certificate that is self-signed or signed by a corporate internal certificate authority (CA),
-you must add the CA certificate to the system's trusted CAs.
-Otherwise, connection to the LDAP server results in an error that the certificate issuer is not recognized.
-For more information, see xref:controller-import-CA-cert-LDAP[Importing a certificate authority in {ControllerName} for LDAPS integration].
-If prompted, use your Red Hat customer credentials to login.
+If a user previously logged in using different case variations of their username, {PlatformNameShort} maps all case variations to the lowercase username. Existing users with other case variations are not valid for interactive login. However, any existing OAuth tokens for the mixed case username still allow authentication. A system administrator can delete those case variation users if needed.
+====
+
+Users created through an LDAP login should not change their username, first name, last name, or set a local password for themselves. Any changes made to this information are overwritten the next time the user logs in to the platform.
+
+[IMPORTANT]
+====
+Migration of LDAP authentication settings is not supported for 2.4 to 2.5 in the platform UI. If you are upgrading from {PlatformNameShort} 2.4 to 2.5, be sure to save your authentication provider data before upgrading.
====
.Procedure
-. Create a user in LDAP that has access to read the entire LDAP structure.
-. Use the `ldapsearch` command to test if you can make successful queries to the LDAP server.
-You can install this tool from {ControllerName}'s system command line, and by using other Linux and OSX systems.
-+
-.Example
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Enter a *Name* for this authentication configuration.
+. Select *LDAP* from the *Authentication type* list. The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type.
+
+include::snippets/snip-gw-authentication-auto-migrate.adoc[]
+
+. In the *LDAP Server URI* field, enter or modify the list of LDAP servers to which you want to connect. This field supports multiple addresses.
+. In the *LDAP Bind DN* text field, enter the Distinguished Name (DN) to specify the user that the {PlatformNameShort} uses to connect to the LDAP server as shown in the following example:
+
-[literal, options="nowrap" subs="+attributes"]
----
-ldapsearch -x -H ldap://win -D "CN=josie,CN=Users,DC=website,DC=com" -b "dc=website,dc=com" -w Josie4Cloud
+CN=josie,CN=users,DC=website,DC=com
----
-In this example, `CN=josie,CN=users,DC=website,DC=com` is the distinguished name of the connecting user.
++
+. In the *LDAP Bind Password* text field, enter the password to use for the binding user.
+. Select a group type from the *LDAP Group Type* list.
++
+The group type defines the class name of the group, which manages the groups associated with users in your LDAP directory and is returned by the search specified in Step 14 of this procedure. The group type, along with group parameters and the group search, is used to find and assign groups to users during login, and can also be evaluated during the mapping process. The following table lists the available group types, along with their descriptions and the necessary parameters for each. By default, LDAP groups will be mapped to Django groups by taking the first value of the `cn` attribute. You can specify a different attribute with `name_attr`. For example, `name_attr='cn'`.
++
+.Available LDAP group types
+[cols="40%,40%,20%",options="header"]
+|===
+| *LDAP Group Type* | *Description* | *Initializer method (_init_)*
+| `PosixGroupType` | Handles the `posixGroup` object class. This checks for both primary group and group membership. | `name_attr='cn'`
+| `MemberDNGroupType` | Handles the grouping mechanisms wherein the group object contains a list of its member DNs. | `member_attr, name_attr='cn'`
+| `GroupOfNamesType` | Handles the `groupOfNames` object class. Equivalent to `MemberDNGroupType('member')`. | `name_attr='cn'`
+| `GroupOfUniqueNamesType` | Handles the `groupOfUniqueNames` object class. Equivalent to `MemberDNGroupType('uniqueMember')`. | `name_attr='cn'`
+| `ActiveDirectoryGroupType` | Handles the Active Directory groups. Equivalent to `MemberDNGroupType('member')`. | `name_attr='cn'`
+| `OrganizationalRoleGroupType` | Handles the `organizationalRole` object class. Equivalent to `MemberDNGroupType('roleOccupant')`. | `name_attr='cn'`
+| `NestedGroupOfNamesType` | Handles the `groupOfNames` object class. Equivalent to `NestedMemberDNGroupType('member')`. | `member_attr, name_attr='cn'`
+| `NestedGroupOfUniqueNamesType` | Handles the `groupOfUniqueNames` object class. Equivalent to `NestedMemberDNGroupType('uniqueMember')`. | `name_attr='cn'`
+| `NestedActiveDirectoryGroupType` | Handles the Active Directory groups. Equivalent to `NestedMemberDNGroupType('member')`. | `name_attr='cn'`
+| `NestedOrganizationalRoleGroupType` | Handles the `organizationalRole` object class.
Equivalent to `NestedMemberDNGroupType('roleOccupant')`. | `name_attr='cn'` +|=== + [NOTE] ==== -The `ldapsearch` utility is not automatically pre-installed with {ControllerName}. -However, you can install it from the `openldap-clients` package. +The group types that are supported by {PlatformNameShort} use the underlying link:https://django-auth-ldap.readthedocs.io/en/latest/reference.html#django_auth_ldap.config.LDAPGroupType[django-auth-ldap library]. To specify the parameters for the selected group type, see Step 14 of this procedure. ==== +. You can use *LDAP User DN Template* as an alternative to user search. This approach is more efficient for user lookups than searching if it is usable in your organizational environment. Enter the name of the template as shown in the following example: + -. From the navigation panel, select {MenuAEAdminSettings} in the {ControllerName} UI. -. Select *LDAP settings* in the list of *Authentication* options. -+ -You do not need multiple LDAP configurations per LDAP server, but you can configure many LDAP servers from this page, otherwise, leave the server at *Default*. +---- +uid=%(user)s,cn=users,cn=accounts,dc=example,dc=com +---- + -The equivalent API endpoints show `AUTH_LDAP_*` repeated: `AUTH_LDAP_1_*`, `AUTH_LDAP_2_*`, `AUTH_LDAP_5_*` to denote server designations. -. To enter or change the LDAP server address, click btn:[Edit] and enter in the *LDAP Server URI* field by using the same format as the one pre-populated in the text field. +where: `uid` is the user identifier, `cn` is the common name and `dc` is the domain component. + [NOTE] ==== -You can specify multiple LDAP servers by separating each with spaces or commas. Click the image:question_circle.png[Tooltip,12,12] icon to comply with the correct syntax and rules. +If this setting has a value it will be used instead of the *LDAP User Search* setting. ==== + -. Enter the password to use for the binding user in the *LDAP Bind Password* text field. -For more information about LDAP variables, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/appendix-inventory-files-vars#ref-hub-variables[Ansible automation hub variables]. -. Click to select a group type from the *LDAP Group Type* list. -+ -The LDAP group types that are supported by {ControllerName} use the underlying link:https://django-auth-ldap.readthedocs.io/en/latest/groups.html#types-of-groups[django-auth-ldap library]. -To specify the parameters for the selected group type, see Step 15. -. The *LDAP Start TLS* is disabled by default. -To enable TLS when the LDAP connection is not using SSL/TLS, set the toggle to *On*. -. Enter the distinguished name in the *LDAP Bind DN* text field to specify the user that {ControllerName} uses to connect (Bind) to the LDAP server. -* If that name is stored in key `sAMAccountName`, the *LDAP User DN Template* is populated from `(sAMAccountName=%(user)s)`. -Active Directory stores the username to `sAMAccountName`. -For OpenLDAP, the key is `uid` and the line becomes `(uid=%(user)s)`. -. Enter the distinguished group name to enable users within that group to access {ControllerName} in the *LDAP Require Group* field, using the same format as the one shown in the text field, `CN=controller Users,OU=Users,DC=website,DC=com`. -. Enter the distinguished group name to prevent users within that group from accessing {ControllerName} in the *LDAP Deny Group* field, using the same format as the one shown in the text field. -. 
Enter where to search for users while authenticating in the *LDAP User Search* field by using the same format as the one shown in the text field. -In this example, use: -+ -[literal, options="nowrap" subs="+attributes"] ----- -[ -"OU=Users,DC=website,DC=com", -"SCOPE_SUBTREE", -"(cn=%(user)s)" -] ----- -+ -The first line specifies where to search for users in the LDAP tree. -In the earlier example, the users are searched recursively starting from `DC=website,DC=com`. +. *LDAP Start TLS* is disabled by default. StartTLS allows your LDAP connection to be upgraded from an unencrypted connection to a secure connection using Transport Layer Security (TLS). To enable StartTLS when the LDAP connection is not using SSL, set the switch to *On*. + -The second line specifies the scope where the users should be searched: +include::snippets/snip-gw-authentication-additional-auth-fields.adoc[] + -* *SCOPE_BASE*: Use this value to indicate searching only the entry at the base DN, resulting in only that entry being returned. -* *SCOPE_ONELEVEL*: Use this value to indicate searching all entries one level under the base DN, but not including the base DN and not including any entries under that one level under the base DN. -* *SCOPE_SUBTREE*: Use this value to indicate searching of all entries at all levels under and including the specified base DN. +. Enter any *LDAP Connection Options* to set for the LDAP connection. LDAP referrals are disabled by default (to prevent certain LDAP queries from hanging with Active Directory). Option names should be strings as shown in the following example: + -The third line specifies the key name where the user name is stored. +---- +OPT_REFERRALS: 0 +OPT_NETWORK_TIMEOUT: 30 +---- +See the link:https://www.python-ldap.org/en/python-ldap-3.4.3/reference/ldap.html#options[python-LDAP Reference] for possible options and values that can be set. + -For many search queries, use the following correct syntax: +. Depending on the selected *LDAP Group Type*, different parameters are available in the *LDAP Group Type Parameters* field to account for this. `LDAP_GROUP_TYPE_PARAMS` is a dictionary, which is converted to `kwargs` and passed to the *LDAP Group Type* class selected. There are two common parameters used by group types: `name_attr` and `member_attr`. Where `name_attr` defaults to `cn` and `member_attr` defaults to `member`: + -[literal, options="nowrap" subs="+attributes"] ---- -[ - [ - "OU=Users,DC=northamerica,DC=acme,DC=com", - "SCOPE_SUBTREE", - "(sAMAccountName=%(user)s)" - ], - [ - "OU=Users,DC=apac,DC=corp,DC=com", - "SCOPE_SUBTREE", - "(sAMAccountName=%(user)s)" - ], - [ - "OU=Users,DC=emea,DC=corp,DC=com", - "SCOPE_SUBTREE", - "(sAMAccountName=%(user)s)" - ] -] +{"name_attr": "cn", "member_attr": "member"} ---- + -. In the *LDAP Group Search* text field, specify which groups to search and how to search them. In this example, use: +To determine the parameters that a specific *LDAP Group Type* requires, refer to the link:https://django-auth-ldap.readthedocs.io/en/latest/reference.html#django_auth_ldap.config.LDAPGroupType[django_auth_ldap documentation] on the classes `init` parameters. ++ +. In the *LDAP Group Search* field, specify which groups should be searched and how to search them as shown in the following example: + -[literal, options="nowrap" subs="+attributes"] ---- [ "dc=example,dc=com", @@ -124,14 +113,8 @@ For many search queries, use the following correct syntax: ] ---- + -* The first line specifies the BASE DN where the groups should be searched. 
-* The second line specifies the scope and is the same as that for the user directive. -* The third line specifies what the `objectClass` of a group object is in the LDAP that you are using. -+ -. Enter the user attributes in the *LDAP User Attribute Map* the text field. -In this example, use: +. In the *LDAP User Attribute Map* field, enter user attributes to map LDAP fields to your {PlatformNameShort} users, for example, `email` or `first_name` as shown in the following example: + -[literal, options="nowrap" subs="+attributes"] ---- { "first_name": "givenName", @@ -139,38 +122,45 @@ In this example, use: "email": "mail" } ---- +. In the *LDAP User Search* field, enter where to search for users during authentication as shown in the following example: + -The earlier example retrieves users by surname from the key `sn`. -You can use the same LDAP query for the user to decide what keys they are stored under. -+ -Depending on the selected *LDAP Group Type*, different parameters are available in the *LDAP Group Type Parameters* field to account for this. -`LDAP_GROUP_TYPE_PARAMS` is a dictionary that is converted by {ControllerName} to `kwargs` and passed to the *LDAP Group Type* class selected. -There are two common parameters used by any of the *LDAP Group Type*; `name_attr` and `member_attr`. -Where `name_attr defaults` to cn and `member_attr` defaults to member: -+ -[literal, options="nowrap" subs="+attributes"] ---- -{"name_attr": "cn", "member_attr": "member"} +[ +"OU=Users,DC=website,DC=com", +"SCOPE_SUBTREE", +"(cn=%(user)s)" +] ---- + -To find what parameters a specific *LDAP Group Type* expects, see the link:https://django-auth-ldap.readthedocs.io/en/latest/reference.html#django_auth_ldap.config.LDAPGroupType[django_auth_ldap] documentation around the classes `init` parameters. -+ -. Enter the user profile flags in the *LDAP User Flags by Group* text field. -The following example uses the syntax to set LDAP users as "Superusers" and "Auditors": +If the *LDAP User DN Template* is not set, the {PlatformNameShort} authenticates to LDAP using the *Bind DN Template* and *LDAP Bind Password*. After authentication, an LDAP search is performed to locate the user specified by this field. If the user is found, {PlatformNameShort} validates the provided password against the user found by the LDAP search. +Multiple search queries are supported for users with `LDAPUnion` by entering multiple search terms as shown in the following example: + -[literal, options="nowrap" subs="+attributes"] ---- -{ -"is_superuser": "cn=superusers,ou=groups,dc=website,dc=com", -"is_system_auditor": "cn=auditors,ou=groups,dc=website,dc=com" -} +[ + [ + "ou=users,dc=example,dc=com", + "SCOPE_SUBTREE", + "uid=%(user)s" + ], + [ + "ou=employees,dc=subdivision,dc=com", + "SCOPE_SUBTREE", + "uid=%(user)s" + ] +] ---- + -. For more information about completing the mapping fields, *LDAP Organization Map* and *LDAP Team Map*, see the xref:controller-LDAP-organization-team-mapping[LDAP Organization and team mapping] section. -. Click btn:[Save]. - +If non-unique users are found during multiple searches, those users will not be able to log in to {PlatformNameShort}. Based on the example provided, if a user with `uid=jdoe` was found in both the `ou=users,dc=example,dc=com` and `ou=employees,dc=subdivision,dc=com`, neither `jdoe` user would be able to log in. All other unique users that are found in either branch would still be able to log in. 
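To sanity-check a search path before saving the authenticator, you can query the directory directly with the `ldapsearch` utility (available from the `openldap-clients` package). The following sketch uses the placeholder DNs from the examples above; substitute your own server, bind DN, and search base:

----
# Search the Users OU recursively (-s sub) for a single user entry.
ldapsearch -x -H ldap://ldap.example.com \
  -D "CN=josie,CN=users,DC=website,DC=com" -W \
  -b "OU=Users,DC=website,DC=com" -s sub "(cn=jdoe)"
----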
++ [NOTE] ==== -{ControllerNameStart} does not actively synchronize users, but they are created during their initial login. -To improve performance associated with LDAP authentication, see xref:controller-prevent-LDAP-attributes[Preventing LDAP attributes from updating on each login]. +If the field *LDAP User DN Template* is populated, it takes precedence over the *LDAP User Search* field and only the template will be used to authenticate users. ==== ++ +include::snippets/snip-gw-authentication-common-checkboxes.adoc[] ++ +. Click btn:[Create Authentication Method]. + +[role="_additional-resources"] +.Next steps +include::snippets/snip-gw-authentication-next-steps.adoc[] diff --git a/downstream/modules/platform/proc-controller-set-up-SAML.adoc b/downstream/modules/platform/proc-controller-set-up-SAML.adoc index a20e34455d..bfdb5f81fa 100644 --- a/downstream/modules/platform/proc-controller-set-up-SAML.adoc +++ b/downstream/modules/platform/proc-controller-set-up-SAML.adoc @@ -1,79 +1,76 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-set-up-SAML"] -= SAML authentication += Configuring SAML authentication + +SAML allows the exchange of authentication and authorization data between an Identity Provider (IdP) and a Service Provider (SP). +{PlatformNameShort} is a SAML SP that you can configure to talk with one or more SAML IdPs to authenticate users. + +Based on groups and attributes optionally provided by the SAML IdP, users can be placed into teams and organizations in {PlatformNameShort} based on the authenticator maps tied to this authenticator. This mapping ensures that when a user logs in through SAML, {PlatformNameShort} can correctly identify the user and assign the proper attributes like first name, last name, email, and group membership. -SAML enables the exchange of authentication and authorization data between an Identity Provider (IdP - a system of servers that provide the Single Sign On service) and a service provider, in this case, {ControllerName}. +.Prerequisites -You can configure {ControllerName} to communicate with SAML to authenticate (create/login/logout) {ControllerName} users. -You can embed User, Team, and Organization membership in the SAML response to {ControllerName}. +Before you configure SAML authentication in {PlatformNameShort}, be sure you do the following: -image::ag-configure-auth-saml-topology.png[SAML topology] +* Configure a SAML Identity Provider (IdP). +* Pre-configure the SAML IdP with the settings required for integration with {PlatformNameShort}. For example, in Microsoft Entra ID you can configure the following: +** *Identifier (Entity ID):* This can be any value that you want, but it needs to match the one configured in your {PlatformNameShort}. +** *Reply URL (Assertion Consumer Service (ACS) URL):* This URL is auto generated when the SAML method is configured in {PlatformNameShort}. That value must be copied from {PlatformNameShort} and pasted in your IdP settings. +* Gather the user attributes for your SAML IdP application. Different IdPs might use different attribute names and formats. Refer to documentation for your specific IdP for the exact attribute names and the expected values. +* Generate a private key and public certificate using the following command: ++ +----- +$ openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 3650 -nodes +----- -The following instructions describe {ControllerName} as the service provider. 
-To authenticate users through RHSSO (keycloak), see link:https://www.ansible.com/blog/red-hat-single-sign-on-integration-with-ansible-tower[Red Hat Single Sign On Integration with the Automation Controller].
.Procedure
-. From the navigation panel, select {MenuAEAdminSettings}.
-. Select *SAML settings* from the list of *Authentication* options.
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Enter a *Name* for this SAML configuration.
+. Select *SAML* from the *Authentication type* list. The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type.
+
+include::snippets/snip-gw-authentication-auto-migrate.adoc[]
+
+. Enter the application-defined unique identifier used as the audience of the SAML service provider configuration in the *SAML Service Provider Entity ID* field. This is usually the base URL of your service provider, but the actual value depends on the Entity ID expected by your IdP.
+. Include the certificate content in the *SAML Service Provider Public Certificate* field. This information is contained in the `cert.pem` you created as a prerequisite and must include the `-----BEGIN CERTIFICATE-----` and `-----END CERTIFICATE-----`.
+. Include the private key content in the *SAML Service Provider Private Key* field. This information is contained in the `key.pem` you created as a prerequisite and must include the `-----BEGIN PRIVATE KEY-----` and `-----END PRIVATE KEY-----`.
+. Enter the URL to redirect the user to for login initiation in the *IdP Login URL* field. This is the login URL from your SAML IdP application.
+. Enter the public cert used for secrets coming from the IdP in the *IdP Public Cert* field. This is the SAML certificate available for download from the IdP.
+
[NOTE]
====
-The *SAML Assertion Consume Service (ACS) URL* and *SAML Service Provider Metadata URL* fields are pre-populated and are non-editable. Contact the IdP administrator and provide the information contained in these fields.
+The *IdP Public Cert* field should contain the entire certificate, including the `-----BEGIN CERTIFICATE-----` and `-----END CERTIFICATE-----`. You must manually enter the prefix and suffix if the IdP does not include it.
====
-. Click btn:[Edit] and set the *SAML Service Provider Entity ID* to be the same as the *Base URL* of the {ControllerName} host field, found in the *Miscellaneous System settings* screen.
-You can view it through the API in the `/api/v2/settings/system`, under the `CONTROLLER_BASE_URL` variable.
-You can set the *Entity ID* to any one of the individual {ControllerName} cluster nodes, but it is good practice to set it to the URL of the service provider.
-Ensure that the *Base URL* matches the FQDN of the load balancer, if used.
++
+. Enter the entity ID returned in the assertion in the *Entity ID*. This is the identifier from your IdP SAML application. You can find this value in the SAML metadata provided by your IdP.
+. Enter user details in the *Groups*, *User Email*, *Username*, *User Last Name*, and *User First Name*.
+. Enter a permanent ID for the user in the *User Permanent ID* field. This field is required.
+
[NOTE]
====
-The *Base URL* is different for each node in a cluster.
-A load balancer often sits in front of {ControllerName} cluster nodes to provide a single entry point, the {ControllerName} Cluster FQDN.
-The SAML service provider must be able establish an outbound connection and route to the {ControllerName} Cluster Node or the {ControllerName} Cluster FQDN that you set in the *SAML Service Provider Entity ID*. +Additional attributes might be available through your SAML IdP. Those values must be included in either the *Additional Authenticators Fields* or the *SAML IDP to extra_data attribute mapping* field. Refer to those steps for details. ==== + -In the following example, the service provider is the {ControllerName} cluster, and therefore, the ID is set to the {ControllerName} Cluster FQDN: +. The *SAML Assertion Consumer Service (ACS) URL* field registers the service as a service provider (SP) with each identity provider (IdP) you have configured. Leave this field blank. After you save this authentication method, it is auto generated. This field must match the *Reply URL* setting in your IdP. +. Optional: Enter any *Additional Authenticator Fields* that this authenticator can take. These fields are not validated and are passed directly back to the authenticator. +For example, to ensure all SAML IdP attributes other than Email, Username, Last Name, First Name are included for mapping, enter the following: + -image::configure-auth-saml-service-provider.png[SAML service provider] +----- +GET_ALL_EXTRA_DATA: true +----- + -. Create a server certificate for the Ansible cluster. -Typically when an Ansible cluster is configured, the {ControllerName} nodes are configured to handle HTTP traffic only and the load balancer is an SSL Termination Point. -In this case, an SSL certificate is required for the load balancer, and not for the individual {ControllerName} Cluster Nodes. -You can enable or disable SSL per individual {ControllerName} node, but you must disable it when using an SSL terminated load balancer. -Use a non-expiring self signed certificate to avoid periodically updating certificates. -This way, authentication does not fail in case someone forgets to update the certificate. +Alternatively, you can include a list of SAML IdP attributes in the *SAML IDP to extra_data attribute mapping* field. + [NOTE] ==== -The *SAML Service Provider Public Certificate* field must contain the entire certificate, including the `-----BEGIN CERTIFICATE-----` and `-----END CERTIFICATE-----`. +Values defined in this field override the dedicated fields provided in the UI. Any values not defined here are not provided to the authenticator. ==== + -If you are using a CA bundle with your certificate, include the entire bundle in this field. -+ -.Example +. In the *SAML Service Provider Organization Info* field, provide the URL, display name, and the name of your app. + -[literal, options="nowrap" subs="+attributes"] ----- ------BEGIN CERTIFICATE----- -... cert text ... ------END CERTIFICATE----- ----- -+ -. Create an optional private key for the controller to use as a service provider and enter it in the *SAML Service Provider Private Key* field. -+ -.Example -+ -[literal, options="nowrap" subs="+attributes"] ----- ------BEGIN PRIVATE KEY----- -... key text ... ------END PRIVATE KEY----- ----- -+ -. 
Provide the IdP with details about the {ControllerName} cluster during the SSO process in the *SAML Service Provider Organization Info* field: -+ -[literal, options="nowrap" subs="+attributes"] ---- { "en-US": { @@ -84,15 +81,8 @@ If you are using a CA bundle with your certificate, include the entire bundle in } ---- + -[IMPORTANT] -==== -You must complete these fields to configure SAML correctly within {ControllerName}. -==== -+ -. Provide the IdP with the technical contact information in the *SAML Service Provider Technical Contact* field. -Do not remove the contents of this field: +. In the *SAML Service Provider Technical Contact* field, give the name and email address of the technical contact for your service provider. + -[literal, options="nowrap" subs="+attributes"] ---- { "givenName": "Some User", @@ -100,285 +90,68 @@ Do not remove the contents of this field: } ---- + -. Provide the IdP with the support contact information in the *SAML Service Provider Support Contact* field. -Do not remove the contents of this field: +. In the *SAML Service Provider Support Contact* field, give the name and email address of the support contact for your service provider. + -[literal, options="nowrap" subs="+attributes"] ----- +---- { "givenName": "Some User", "emailAddress": "suser@example.com" } ---- + -. In the *SAML Enabled Identity Providers* field, provide information on how to connect to each IdP listed. -The following example shows what {ControllerName} expects SAML attributes to be: -+ -[literal, options="nowrap" subs="+attributes"] ----- -Username(urn:oid:0.9.2342.19200300.100.1.1) -Email(urn:oid:0.9.2342.19200300.100.1.3) -FirstName(urn:oid:2.5.4.42) -LastName(urn:oid:2.5.4.4) ----- -+ -If these attributes are not known, map existing SAML attributes to `Username`, `Email`, `FirstName`, and `LastName`. -+ -Configure the required keys for each IdP: -+ -* `attr_user_permanent_id` - The unique identifier for the user. -It can be configured to match any of the attributes sent from the IdP. -It is normally set to `name_id` if the `SAML:nameid` attribute is sent to the {ControllerName} node. -It can be the username attribute or a custom unique identifier. -* `entity_id` - The Entity ID provided by the IdP administrator. -The administrator creates a SAML profile for {ControllerName} and it generates a unique URL. -* `url`- The Single Sign On (SSO) URL that {ControllerName} redirects the user to, when SSO is activated. -* `x509_cert` - The certificate provided by the IdP administrator that is generated from the SAML profile created on the IdP. -Remove the `---BEGIN CERTIFICATE---` and `---END CERTIFICATE---` headers, then enter the certificate as one non-breaking string. -+ -Multiple SAML IdPs are supported. -Some IdPs might provide user data using attribute names that differ from the default OIDs. -The SAML NameID is a special attribute used by some IdPs to tell the service provider (the {ControllerName} cluster) what the unique user identifier is. -If it is used, set the `attr_user_permanent_id` to `name_id` as shown in the following example. 
-Other attribute names can be overridden for each IdP:
-+
-[literal, options="nowrap" subs="+attributes"]
----
-"myidp": {
- "entity_id": "https://idp.example.com",
- "url": "https://myidp.example.com/sso",
- "x509cert": ""
-},
-"onelogin": {
- "entity_id": "https://app.onelogin.com/saml/metadata/123456",
- "url": "https://example.onelogin.com/trust/saml2/http-post/sso/123456",
-"x509cert": "",
- "attr_user_permanent_id": "name_id",
- "attr_first_name": "User.FirstName",
- "attr_last_name": "User.LastName",
- "attr_username": "User.email",
- "attr_email": "User.email"
- }
-}
----
-+
-[WARNING]
====
-Do not create a SAML user that shares the same email with another user (including a non-SAML user).
-Doing so results in the accounts being merged.
-Note that this same behavior exists for system administrators.
-Therefore, a SAML login with the same email address as the system administrator can login with system administrator privileges.
-To avoid this, you can remove (or add) administrator privileges based on SAML mappings.
====
-+
-. Optional: Provide the *SAML Organization Map*.
-For more information, see xref:ref-controller-organization-mapping[Organization mapping] and xref:ref-controller-team-mapping[Team mapping].
-. You can configure {ControllerName} to look for particular attributes that contain Team and Organization membership to associate with users when they log into {ControllerName}.
-The attribute names are defined in the *SAML Organization Attribute Mapping* and the *SAML Team Attribute Mapping* fields.
-+
-.Example SAML Organization Attribute Mapping
+For example, you can choose to enable signing requests for added security:
+
-The following is an example SAML attribute that embeds user organization membership in the attribute `member-of`:
-+
-[literal, options="nowrap" subs="+attributes"]
-----
-<saml2:AttributeStatement>
-    <saml2:Attribute FriendlyName="member-of" Name="member-of"
-        NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
-        <saml2:AttributeValue>Engineering</saml2:AttributeValue>
-        <saml2:AttributeValue>IT</saml2:AttributeValue>
-        <saml2:AttributeValue>HR</saml2:AttributeValue>
-        <saml2:AttributeValue>Sales</saml2:AttributeValue>
-    </saml2:Attribute>
-    <saml2:Attribute FriendlyName="admin-of" Name="admin-of"
-        NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
-        <saml2:AttributeValue>Engineering</saml2:AttributeValue>
-    </saml2:Attribute>
-</saml2:AttributeStatement>
-----
-+
-The following is the corresponding {ControllerName} configuration:
-+
-[literal, options="nowrap" subs="+attributes"]
-----
+-----
{
- "saml_attr": "member-of",
- "saml_admin_attr": "admin-of",
- "remove": true,
- "remove_admins": false
+"sign_request": True,
}
-----
+-----
+
-* `saml_attr`: The SAML attribute name where the organization array can be found and `remove` is set to `true` to remove a user from all organizations before adding the user to the list of organizations.
-To keep the user in the organizations they are in while adding the user to the organizations in the SAML attribute, set `remove` to `false`.
-* `saml_admin_attr`: Similar to the `saml_attr` attribute, but instead of conveying organization membership, this attribute conveys administrator organization permissions.
+This field is the equivalent to the `SOCIAL_AUTH_SAML_SP_EXTRA` in the API. For more information, see link:https://github.com/SAML-Toolkits/python-saml#settings[OneLogin’s SAML Python Toolkit] to learn about the valid service provider extra (SP_EXTRA) parameters.
+. Optional: Provide security settings in the *SAML Security Config* field. This field is the equivalent to the `SOCIAL_AUTH_SAML_SECURITY_CONFIG` field in the API.
+
-The following example is another SAML attribute that contains a team membership in a list:
-+
-[literal, options="nowrap" subs="+attributes"]
----
-<saml:AttributeStatement>
-    <saml:Attribute FriendlyName="eduPersonAffiliation" Name="urn:oid:1.3.6.1.4.1.5923.1.1.1.1" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri">
-        <saml:AttributeValue>member</saml:AttributeValue>
-        <saml:AttributeValue>staff</saml:AttributeValue>
-    </saml:Attribute>
-</saml:AttributeStatement>
-
-{
-  "saml_attr": "eduPersonAffiliation",
-  "remove": true,
-  "team_org_map": [
-    {
-      "team": "member",
-      "organization": "Default1"
-    },
-    {
-      "team": "staff",
-      "organization": "Default2"
-    }
-  ]
-}
----
-+
-* `saml_attr`: The SAML attribute name where the team array can be found.
-* `remove`: Set `remove` to `true` to remove the user from all teams before adding the user to the list of teams.
-To keep the user in the teams they are in while adding the user to the teams in the SAML attribute, set `remove` to `false`.
-* `team_org_map`: An array of dictionaries of the form `{ "team": "", "organization": "" }` that defines mapping from controller Team -> {ControllerName} organization.
-You need this because the same named team can exist in multiple organizations in {ControllerName}.
-The organization to which a team listed in a SAML attribute belongs is ambiguous without this mapping.
-+
-You can create an alias to override both teams and organizations in the *SAML Team Attribute Mapping* field.
-This option is useful in cases when the SAML backend sends out complex group names, as shown in the following example:
-+
-[literal, options="nowrap" subs="+attributes"]
-----
-{
-  "remove": false,
-  "team_org_map": [
-    {
-      "team": "internal:unix:domain:admins",
-      "organization": "Default",
-      "team_alias": "Administrators"
-    },
-    {
-      "team": "Domain Users",
-      "organization_alias": "OrgAlias",
-      "organization": "Default"
-    }
-  ],
-  "saml_attr": "member-of"
-}
+// Indicates whether the <samlp:AuthnRequest> messages sent by this SP will be signed. [Metadata of the SP will offer this info]
+"authnRequestsSigned": false,
+
+// Indicates a requirement for the <samlp:Response>, <samlp:LogoutRequest> and <samlp:LogoutResponse> elements received by this SP to be signed.
+"wantMessagesSigned": false,
+
+// Indicates a requirement for the <saml:Assertion> elements received by this SP to be signed. [Metadata of the SP will offer this info]
+"wantAssertionsSigned": false,
+
+// Authentication context.
+// Set to false and no AuthContext will be sent in the AuthNRequest.
+// Set to true, or do not present this parameter, and you will get an AuthContext 'exact' 'urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport'.
+// Set an array with the possible auth context values: array ('urn:oasis:names:tc:SAML:2.0:ac:classes:Password', 'urn:oasis:names:tc:SAML:2.0:ac:classes:X509'),
+"requestedAuthnContext": true,
----
+For more information and additional options, see link:https://github.com/SAML-Toolkits/python-saml#settings[OneLogin's SAML Python Toolkit].
+
-Once the user authenticates, {ControllerName} creates organization and team aliases.
-+
-. Optional: Provide team membership mapping in the *SAML Team Map* field.
-For more information, see xref:ref-controller-organization-mapping[Organization mapping] and xref:ref-controller-team-mapping[Team Mapping].
-. Optional: Provide security settings in the *SAML Security Config* field.
-This field is the equivalent of the `SOCIAL_AUTH_SAML_SECURITY_CONFIG` setting in the API.
-For more information, see link:https://github.com/SAML-Toolkits/python-saml#settings[OneLogin's SAML Python Toolkit].
-+
-{ControllerNameStart} uses the `python-social-auth` library when users log in through SAML.
-This library relies on the `python-saml` library to make the settings available for the next two optional fields, *SAML Service Provider extra configuration data* and *SAML IDP to extra_data attribute mapping*.
+. Optional: In the *SAML IDP to extra_data attribute mapping* field, enter the IdP attributes that you want to map to `extra_data` attributes. Use this mapping to capture additional user information beyond standard attributes such as Email or Username. For example:
+
-* The *SAML Service Provider extra configuration data* field is equivalent to the `SOCIAL_AUTH_SAML_SP_EXTRA` in the API.
-For more information, see link:https://github.com/SAML-Toolkits/python-saml#settings[OneLogin's SAML Python Toolkit] to learn about the valid service provider extra (`SP_EXTRA`) parameters.
-* The *SAML IDP to extra_data attribute mapping* field is equivalent to the `SOCIAL_AUTH_SAML_EXTRA_DATA` in the API.
-For more information, see Python's SAML link:https://python-social-auth.readthedocs.io/en/latest/backends/saml.html#advanced-settings[Advanced Settings] documentation.
-* The *SAML User Flags Attribute Mapping* field enables you to map SAML roles and attributes to special user flags.
-The following attributes are valid in this field:
-** `is_superuser_role`: Specifies one or more SAML roles which grant a user the superuser flag.
-** `is_superuser_attr`: Specifies a SAML attribute which grants a user the superuser flag.
-** `is_superuser_value`: Specifies one or more values required for `is_superuser_attr` that is required for the user to be a superuser.
-** `remove_superusers`: Boolean indicating if the superuser flag should be removed for users or not.
-This defaults to `true`.
-** `is_system_auditor_role`: Specifies one or more SAML roles which will grant a user the system auditor flag.
-** `is_system_auditor_attr`: Specifies a SAML attribute which will grant a user the system auditor flag.
-** `is_system_auditor_value`: Specifies one or more values required for `is_system_auditor_attr` that is required for the user to be a system auditor.
-** `remove_system_auditors`: Boolean indicating if the `system_auditor` flag should be removed for users or not.
-This defaults to `true`.
+-----
+- Department
+- UserType
+- Organization
+-----
+
-The `role` and `value` fields are lists and use 'OR' logic.
-If you specify two roles: [ "Role 1", "Role 2" ] and the SAML user has either role, the logic considers them to have the required role for the flag.
-This is the same with the `value` field; if you specify: [ "Value 1", "Value 2" ] and the SAML user has either value for their attribute, the logic considers their attribute value to have matched.
+For more information on the values you can include, see link:https://python-social-auth.readthedocs.io/en/latest/backends/saml.html#advanced-settings[advanced SAML settings].
+
-If you specify `role` and `attr` for either `superuser` or `system_auditor`, the settings for `attr` take precedence over a role.
-System administrator and System auditor roles are evaluated at login for a SAML user.
-If you grant a SAML user one of these roles through the UI and not through the SAML settings, the roles are removed on the user's next login unless the `remove` flag is set to `false`.
-The `remove` flag, if `false`, never enables the SAML adapter to remove the corresponding flag from a user.
-The following table describes how the logic works: -+ -[cols="33%,33%,33%,33%,33%,33%",options="header"] -|=== -| *Has one or more roles* | *Has `attr`* | *Has one or more `attr Values`* | *Remove flag* | *Previous Flag* | *Is flagged* -| No | No | N/A | True | False | No -| No | No | N/A | False | False | No -| No | No | N/A | True | True | No -| No | No | N/A | False | True | Yes -| Yes | No | N/A | True | False | Yes -| Yes | No | N/A | False | False | Yes -| Yes | No | N/A | True | True | Yes -| Yes | No | N/A | False | False | Yes -| No | Yes | Yes | True | True | Yes -| No | Yes | Yes | True | False | Yes -| No | Yes | Yes | False | False | Yes -| No | Yes | Yes | True | True | Yes -| No | Yes | Yes | False | True | Yes -| No | Yes | No | True | False | No -| No | Yes | No | False | False | No -| No | Yes | No | True | True | No -| No | Yes | No | False | True | Yes -| No | Yes | Unset | True | False | Yes -| No | Yes | Unset | False | False | Yes -| No | Yes | Unset | True | True | Yes -| No | Yes | Unset | False | True | Yes -| Yes | Yes | Yes | True | False | Yes -| Yes | Yes | Yes | False | False | Yes -| Yes | Yes | Yes | True | True | Yes -| Yes | Yes | Yes | False | True | Yes -| Yes | Yes | No | True | False | No -| Yes | Yes | No | False | False | No -| Yes | Yes | No | True | True | No -| Yes | Yes | No | False | True | Yes -| Yes | Yes | Unset | True | False | Yes -| Yes | Yes | Unset | False | False | Yes -| Yes | Yes | Unset | True | True | Yes -| Yes | Yes | Unset | False | True | Yes -|=== -+ -Each time a SAML user authenticates to {ControllerName}, these checks are performed and the user flags are altered as needed. -If `System Administrator` or `System Auditor` is set for a SAML user within the UI, the SAML adapter overrides the UI setting based on the preceding rules. -If you prefer that the user flags for SAML users do not get removed when a SAML user logs in, you can set the `remove_` flag to `false`. -With the `remove` flag set to `false`, a user flag set to `true` through either the UI, API or SAML adapter is not removed. -However, if a user does not have the flag, and the preceding rules determine the flag should be added, it is added, even if the flag is `false`. +[IMPORTANT] +==== +Make sure you include all relevant values so that everything gets mapped correctly for your configuration. Alternatively, you can include the `GET_ALL_EXTRA_DATA: true` in the *Additional Authenticator Fields* to allow mapping of all available SAML IdP attributes. +==== + -.Example +include::snippets/snip-gw-authentication-common-checkboxes.adoc[] + -[literal, options="nowrap" subs="+attributes"] ----- -{ - "is_superuser_attr": "blueGroups", - "is_superuser_role": ["is_superuser"], - "is_superuser_value": ["cn=My-Sys-Admins,ou=memberlist,ou=mygroups,o=myco.com"], - "is_system_auditor_attr": "blueGroups", - "is_system_auditor_role": ["is_system_auditor"], - "is_system_auditor_value": ["cn=My-Auditors,ou=memberlist,ou=mygroups,o=myco.com"] -} ----- -. Click btn:[Save]. +. Click btn:[Create Authentication Method]. -.Verification -To verify that the authentication is configured correctly, load the auto-generated URL found in the *SAML Service Provider Metadata URL* into a browser. -If you do not get XML output, you have not configured it correctly. 
-
-Alternatively, logout of {ControllerName} and the login screen displays the SAML logo to indicate it as an alternate method of logging into {ControllerName}:
+[IMPORTANT]
+====
+You can configure an HTTPS redirect for SAML in operator-based deployments to simplify login for your users. For the steps to configure this setting, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#proc-operator-enable-https-redirect[Enabling single sign-on (SSO) for {Gateway} on {OCPShort}].
+====
-image::ag-configure-auth-saml-logo.png[SAML logo]
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-authentication-next-steps.adoc[]
diff --git a/downstream/modules/platform/proc-controller-set-up-azure.adoc b/downstream/modules/platform/proc-controller-set-up-azure.adoc
index 8a58dca61f..e4b99ceb57 100644
--- a/downstream/modules/platform/proc-controller-set-up-azure.adoc
+++ b/downstream/modules/platform/proc-controller-set-up-azure.adoc
@@ -1,29 +1,100 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-set-up-azure"]
+ifndef::controller-AG[]
+= Configuring {MSEntraID} authentication
+endif::[]
+ifdef::controller-AG[]
= {Azure} active directory authentication
+endif::controller-AG[]
+
+ifndef::controller-AG[]
+To set up enterprise authentication for {MSEntraID}, formerly known as {Azure} Active Directory (AD), follow these steps:
+
+. *Configure your {PlatformNameShort}* to use {MSEntraID} authentication using the steps in this procedure.
+. *Register {PlatformNameShort}* in {MSEntraID} by following the link:https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app[Quickstart: Register an application with the Microsoft identity platform]. This process provides you with an Application (client) ID and Application secret.
+. *Add the redirect URL in {MSEntraID}*. After completing the configuration wizard for {MSEntraID} authentication in your platform, copy the URL displayed in the *Azure AD OAuth2 Callback URL* field. Then, go to your registered enterprise application in Azure and add this URL as a *Redirect URL* (also referred to as a *Callback URL* in {PlatformNameShort}) as described in link:https://learn.microsoft.com/en-us/entra/identity-platform/how-to-add-redirect-uri[How to add a redirect URI to your application]. This step is required for the login flow to work correctly.

-To set up enterprise authentication for {Azure} Active Directory (AD), you need to obtain an OAuth2 key and secret by registering your organization-owned application from Azure at:
-https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app.
+The attributes provided by {MSEntraID} are not set in the {PlatformNameShort} configuration for this authentication type. Instead, the link:https://github.com/python-social-auth/social-core/blob/master/social_core/backends/azuread.py#L85-L98[social_core azuread backend] provides the translation of claims provided by {MSEntraID}. The user attributes that allow {PlatformNameShort} to correctly identify the user and assign the proper attributes, such as first name, last name, email, and username, include the following:

-Each key and secret must belong to a unique application and cannot be shared or reused between different authentication backends.
-To register the application, you must supply it with your webpage URL, which is the Callback URL shown in the *Authentication* tab of the *Settings* screen.
+[cols="2*",options="header"]
+|===
+| {PlatformNameShort} attribute | {MSEntraID} parameter
+| authenticator_uid | upn
+| Username | name
+| First Name | given_name
+| Last Name | family_name
+| Email | email (falling back to upn)
+|===
+
+endif::[]
+ifdef::controller-AG[]
+To set up enterprise authentication for {Azure} Active Directory (AD), you need to obtain an OAuth2 key and secret by registering your organization-owned application from Azure at: https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app[Quickstart: Register an application with the Microsoft identity platform].
+endif::[]
+
+Each key and secret must belong to a unique application and cannot be shared or reused between different authentication backends. To register the application, you must supply it with your webpage URL, which is the Callback URL shown in the *Authenticator details* for your authenticator configuration.
+ifndef::controller-AG[]
+See xref:gw-display-auth-details[Displaying authenticator details] for instructions on accessing this information.
+endif::[]

.Procedure
+ifndef::controller-AG[]
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Enter a *Name* for this authentication configuration.
+. Select *Azuread* from the *Authentication type* list. The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type.
+
+include::snippets/snip-gw-authentication-auto-migrate.adoc[]
+
+. Click btn:[Edit], then copy and paste Microsoft's *Application (Client) ID* into the *OIDC Key* field.
+. If your {MSEntraID} is configured to provide user group information within a groups claim, ensure that the platform is configured with a *Groups Claim* name that matches your {MSEntraID} configuration. This allows the platform to correctly identify and associate groups for users logging in through {MSEntraID}.
++
+[NOTE]
+====
+Groups coming from {MSEntraID} can be identified using either unique IDs or group names.
+When creating group mappings for a {MSEntraID} authenticator, you can use either the unique ID or the group name.
+
+By default, {MSEntraID} uses `groups` as the group claim name. Be sure to either keep the default value or set it to any custom override you have configured in your IdP. The current default is set to preserve the existing behavior unless explicitly changed.
+====
++
+. Following the instructions for link:https://docs.microsoft.com/en-us/azure/active-directory/develop/quickstart-register-app[registering your application with the Microsoft identity platform], supply the key (shown at one time only) to the client for authentication.
++
+. Copy and paste the secret key created for your {MSEntraID}/{Azure} AD application into the *OIDC Secret* field.
++
+include::snippets/snip-gw-authentication-additional-auth-fields.adoc[]
++
+include::snippets/snip-gw-authentication-common-checkboxes.adoc[]
++
+. Click btn:[Create Authentication Method].
+
+include::snippets/snip-gw-authentication-verification.adoc[]
+
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-authentication-next-steps.adoc[]
+
+[role="_additional-resources"]
+.Additional resources
+* link:https://learn.microsoft.com/en-us/entra/identity-platform/v2-overview[What is the Microsoft identity platform?]
+endif::[]
+
+ifdef::controller-AG[]
+. From the navigation panel, select menu:Settings[].
. Select *Azure AD settings* from the list of *Authentication* options.
+
[NOTE]
====
-The *Azure AD OAuth2 Callback URL* field is already pre-populated and non-editable.
-Once the application is registered, {Azure} displays the Application ID and Object ID.
+The *Azure AD OAuth2 Callback URL* field is already pre-populated and non-editable.
+Once the application is registered, Azure displays the Application ID and Object ID.
====
-. Click btn:[Edit], copy and paste {Azure}'s Application ID to the *Azure AD OAuth2 Key* field.
+. Click btn:[Edit], then copy and paste Azure's Application ID into the *Azure AD OAuth2 Key* field.
+
-Following {Azure} AD's documentation for connecting your application to {Azure} Active Directory, supply the key (shown at one time only) to the client for authentication.
+Following Azure AD's documentation for connecting your app to {Azure} Active Directory, supply the key (shown at one time only) to the client for authentication.
+
-. Copy and paste the secret key created for your {Azure} AD application to the *Azure AD OAuth2 Secret* field of the *Settings - Authentication* screen.
-. For more information on completing the {Azure} AD OAuth2 Organization Map and {Azure} AD OAuth2 Team Map fields, see xref:ref-controller-organization-mapping[Organization mapping] and xref:ref-controller-team-mapping[Team Mapping].
+. Copy and paste the secret key created for your Azure AD application into the *Azure AD OAuth2 Secret* field of the Settings - Authentication screen.
+. For more information on completing the Azure AD OAuth2 Organization Map and Azure AD OAuth2 Team Map fields, see xref:ref-controller-organization-mapping[Organization mapping] and xref:ref-controller-team-mapping[Team mapping].
. Click btn:[Save].

.Verification
@@ -32,4 +103,5 @@ To verify that the authentication is configured correctly, log out of {Controlle
image::ag-configure-auth-azure-logo.png[Azure AD logo]

.Additional resources
-For application registering basics in {Azure} AD, see the link:https://learn.microsoft.com/en-us/entra/identity-platform/v2-overview[What is the Microsoft identity platform?] overview.
+For application registration basics in Azure AD, see the link:https://learn.microsoft.com/en-us/entra/identity-platform/v2-overview[What is the Microsoft identity platform?] overview.
+endif::[]
diff --git a/downstream/modules/platform/proc-controller-set-up-generic-oidc.adoc b/downstream/modules/platform/proc-controller-set-up-generic-oidc.adoc
index 798c18b5d6..50616d77cf 100644
--- a/downstream/modules/platform/proc-controller-set-up-generic-oidc.adoc
+++ b/downstream/modules/platform/proc-controller-set-up-generic-oidc.adoc
@@ -1,34 +1,59 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-set-up-generic-oidc"]

-= Generic OIDC authentication
+= Configuring generic OIDC authentication

-OpenID Connect (OIDC) uses the OAuth 2.0 framework.
-It enables third-party applications to verify the identity and obtain basic end-user information.
-The main difference between OIDC and SAML is that SAML has a service provider (SP)-to-IdP trust relationship, whereas OIDC establishes the trust with the channel (HTTPS) that is used to obtain the security token.
-To obtain the credentials needed to set up OIDC with {ControllerName}, see the documentation from the IdP of your choice that has OIDC support.
+OpenID Connect (OIDC) uses the OAuth 2.0 framework. It enables third-party applications to verify the end user's identity and obtain basic user information. The main difference between OIDC and SAML is that SAML has a service provider (SP)-to-IdP trust relationship, whereas OIDC establishes the trust with the channel (HTTPS) that is used to obtain the security token. To obtain the credentials needed to set up OIDC with {PlatformNameShort}, see the documentation from the IdP of your choice that has OIDC support.
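+
+Many IdPs publish their OIDC configuration at a standard discovery endpoint derived from the provider URL. As a quick sanity check of the value you plan to enter in the *OIDC Provider URL* field in the following procedure (a sketch; `idp.example.com` is a placeholder for your own IdP):
+
+[literal, options="nowrap" subs="+attributes"]
+----
+# Fetch the standard OIDC discovery document; the issuer, authorization_endpoint,
+# and token_endpoint values it returns are what the OIDC settings rely on.
+curl https://idp.example.com/.well-known/openid-configuration
+----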
.Procedure
-. From the navigation panel, select {MenuAEAdminSettings}.
-. Select *Generic OIDC settings* from the list of *Authentication* options.
-. Click btn:[Edit] and enter the following information:
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Enter a *Name* for this authentication configuration.
+. Select *Generic OIDC* from the *Authentication type* list. The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type.
+
+include::snippets/snip-gw-authentication-auto-migrate.adoc[]
+
+. Enter the following information:
++
+* *OIDC Provider URL*: The URL for your OIDC provider.
* *OIDC Key*: The client ID from your third-party IdP.
* *OIDC Secret*: The client secret from your IdP.
-* *OIDC Provider URL*: The URL for your OIDC provider.
-* *Verify OIDC Provider Certificate*: Use the toggle to enable or disable the OIDC provider SSL certificate verification.
-. Click btn:[Save].
+
-[NOTE]
-====
-Team and organization mappings for OIDC are currently not supported.
-The OIDC adapter does authentication only and not authorization.
-It is only capable of authenticating whether this user is who they say they are.
-It does not authorize what this user is enabled to do.
-Configuring generic OIDC creates the UserID appended with an ID or key to differentiate the same user ID originating from two different sources and therefore, considered different users.
-So you get an ID of just the user name and the second is the username-.
-====
-
-.Verification
-To verify that the authentication is configured correctly, logout of {ControllerName} and the login screen displays the OIDC logo to indicate it as an alternative method of logging into {ControllerName}:
-
-image:ag-configure-auth-oidc-logo.png[OIDClogo]
+. Optional: Enter information for the following fields, using the tooltips for instructions and the required format:
++
+* *Access Token URL*
+* *Access Token Method* - The default method is *POST*.
+* *Authorization URL*
+* *ID Key*
+* *ID Token Issuer*
+* *JWKS URI*
+* *OIDC Public Key*
+* *Revoke Token Method* - The default method is *GET*.
+* *Revoke Token URL*
+* *Response Type*
+* *Token Endpoint Auth Method*
+* *Userinfo URL*
+* *Username Key*
++
+. Use the *Verify OIDC Provider Certificate* option to enable or disable the OIDC provider SSL certificate verification.
+. Use the *Redirect State* option to enable or disable the state parameter in the redirect URI. Enabling it is recommended to prevent cross-site request forgery (CSRF) attacks.
++
+include::snippets/snip-gw-authentication-additional-auth-fields.adoc[]
++
+include::snippets/snip-gw-authentication-common-checkboxes.adoc[]
++
+. Click btn:[Create Authentication Method].
+
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-authentication-next-steps.adoc[]
+
+// [ddacosta - removed as no longer true in 2.5.]
+// [NOTE]
+// ====
+// Team and organization mappings for OIDC are currently not supported. The OIDC adapter does authentication only and not authorization. It is only capable of authenticating whether this user is who they say they are. It does not authorize what this user is enabled to do. Configuring generic OIDC creates the UserID appended with an ID or key to differentiate the same user ID originating from two different sources and therefore, considered different users. So you get an ID of just the user name and the second is the username-.
+// ====
diff --git a/downstream/modules/platform/proc-controller-set-up-github-webhook.adoc b/downstream/modules/platform/proc-controller-set-up-github-webhook.adoc
index c98a1466b5..59e4d15ca4 100644
--- a/downstream/modules/platform/proc-controller-set-up-github-webhook.adoc
+++ b/downstream/modules/platform/proc-controller-set-up-github-webhook.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-set-up-github-webhook"]

= Setting up a GitHub webhook
@@ -12,7 +14,7 @@ If you do not need {ControllerName} to post job statuses back to the webhook ser
.. In the profile settings of your GitHub account, select *Settings*.
.. From the navigation panel, select menu:<> Developer Settings[].
.. On the *Developer Settings* page, select *Personal access tokens*.
-.. Select *Tokens(classic)*
+.. Select *Tokens (classic)*.
.. From the *Personal access tokens* screen, click btn:[Generate a personal access token].
.. When prompted, enter your GitHub account password to continue.
.. In the *Note* field, enter a brief description about what this PAT is used for.
@@ -28,7 +30,7 @@ You cannot access this token again in GitHub.
====
+
. Use the PAT to optionally create a GitHub credential:
-.. Go to your instance, and xref:ref-controller-credential-gitHub-pat[Create a new credential for the GitHub PAT] using the generated token.
+.. Go to your instance and create a new credential for the GitHub PAT, using the generated token.
.. Make note of the name of this credential, as you use it in the job template that posts back to GitHub.
+
image::ug-webhooks-github-PAT-token.png[GitHub PAT token]
@@ -41,7 +43,7 @@ image::ug-webhooks-webhook-credential.png[GitLab webhook credential]
. Go to a GitHub repository where you want to configure webhooks and select menu:Settings[].
. From the navigation panel, select menu:Webhooks[Add webhook].
. To complete the *Add webhook* page, you must check the *Enable Webhook* option in a job template or workflow job template.
-For more information, see step 3 in both xref:controller-create-job-template[Creating a job template] and xref:controller-create-workflow-template[Creating a workflow template].
+For more information, see step 3 in both xref:controller-create-job-template[Creating a job template] and xref:controller-create-workflow-template[Creating a workflow job template].
. Complete the following fields:
* *Payload URL*: Copy the contents of the *Webhook URL* from the job template and paste it here.
The results are sent to this address from GitHub.
@@ -56,8 +58,8 @@ image::ug-webhooks-github-repo-choose-events.png[Github repo choose events]
* *Active*: Leave this checked.
. Click btn:[Add webhook].
. When your webhook is configured, it is displayed in the list of webhooks active for your repository, along with the ability to edit or delete it.
-Click on a webhook, to go to the *Manage webhook* screen.
+Click a webhook to go to the *Manage webhook* screen.
. Scroll to view the delivery attempts made to your webhook and whether they succeeded or failed.

.Additional resources
-For more information, see the link:https://docs.github.com/en/webhooks[Webhooks documentation].
+* link:https://docs.github.com/en/webhooks[Webhooks documentation]
diff --git a/downstream/modules/platform/proc-controller-set-up-gitlab-webhook.adoc b/downstream/modules/platform/proc-controller-set-up-gitlab-webhook.adoc
index 036754e83f..8767bc60c3 100644
--- a/downstream/modules/platform/proc-controller-set-up-gitlab-webhook.adoc
+++ b/downstream/modules/platform/proc-controller-set-up-gitlab-webhook.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-set-up-gitlab-webhook"]

= Setting up a GitLab webhook
@@ -24,7 +26,7 @@ You cannot access this token again in GitLab.
====
+
. Use the PAT to optionally create a GitLab credential:
-.. Go to your instance, and xref:ref-controller-credential-gitLab-pat[create a new credential for the GitLab PAT] using the generated token.
+.. Go to your instance and create a new credential for the GitLab PAT, using the generated token.
.. Make note of the name of this credential, as you use it in the job template that posts back to GitLab.
+
image::ug-webhooks-create-credential-gitlab-PAT-token.png[GitLab PAT token]
@@ -37,7 +39,7 @@ image::ug-gitlab-webhook-credential.png[GitLab webhook credential]
. Go to a GitLab repository where you want to configure webhooks.
. From the navigation panel, select menu:Settings[Integrations].
. To complete the *Add webhook* page, you must check the *Enable Webhook* option in a job template or workflow job template.
-For more information, see step 3 in both xref:controller-create-job-template[Creating a job template] and xref:controller-create-workflow-template[Creating a workflow template].
+For more information, see step 3 in both xref:controller-create-job-template[Creating a job template] and xref:controller-create-workflow-template[Creating a workflow job template].
. Complete the following fields:
* *URL*: Copy the contents of the *Webhook URL* from the job template and paste it here.
The results are sent to this address from GitLab.
@@ -51,4 +53,4 @@ To have job status (pending, error, success) sent back to GitLab, you must selec
Testing a webhook event displays the results on each page whether it succeeded or failed.

.Additional resources
-For more information, see link:https://docs.gitlab.com/ee/user/project/integrations/webhooks.html[Webhooks].
+* link:https://docs.gitlab.com/ee/user/project/integrations/webhooks.html[Webhooks]
diff --git a/downstream/modules/platform/proc-controller-set-up-logging.adoc b/downstream/modules/platform/proc-controller-set-up-logging.adoc
index 62a05363fe..8555ae8991 100644
--- a/downstream/modules/platform/proc-controller-set-up-logging.adoc
+++ b/downstream/modules/platform/proc-controller-set-up-logging.adoc
@@ -1,13 +1,25 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-set-up-logging"]

-= Setting Up Logging
+= Setting up logging

+ifdef::controller-AG[]
Use the following procedure to set up logging to any of the aggregator types.
+endif::controller-AG[]
+ifdef::hardening[]
+To set up logging to any of the aggregator types for centralized logging, follow these steps:
+endif::hardening[]

.Procedure

. From the navigation panel, select {MenuSetLogging}.
. On the *Logging settings* page, click btn:[Edit].
-.
Set the following configurable options: +ifdef::controller-AG[] ++ +image::logging-settings.png[Logging settings page] ++ +endif::controller-AG[] +. You can configure the following options: * *Logging Aggregator*: Enter the hostname or IP address that you want to send logs to. * *Logging Aggregator Port*: Specify the port for the aggregator if it requires one. @@ -19,10 +31,12 @@ However, TCP and UDP connections are determined by the hostname and port number Therefore, in the case of a TCP or UDP connection, supply the port in the specified field. If a URL is entered in the *Logging Aggregator* field instead, its hostname portion is extracted as the hostname. ==== ++ * *Logging Aggregator Type*: Click to select the aggregator service from the list: +ifdef::controller-AG[] + image:configure-controller-system-logging-types.png[Logging types] - +endif::controller-AG[] * *Logging Aggregator Username*: Enter the username of the logging aggregator if required. * *Logging Aggregator Password/Token*: Enter the password of the logging aggregator if required. * *Loggers to Send Data to the Log Aggregator Form*: All four types of data are pre-populated by default. @@ -43,20 +57,22 @@ Equivalent to the `rsyslogd queue.maxdiskspace` setting on the action (e.g. `omh It stores files in the directory specified by `LOG_AGGREGATOR_MAX_DISK_USAGE_PATH`. * *File system location for rsyslogd disk persistence*: Location to persist logs that should be retried after an outage of the external log aggregator (defaults to `/var/lib/awx`). Equivalent to the `rsyslogd queue.spoolDirectory` setting. -* *Log Format For API 4XX Errors*: Configure a specific error message. For more information, see xref:proc-controller-api-4xx-error-config[API 4XX Error Configuration]. +* *Log Format For API 4XX Errors*: Configure a specific error message. For more information, see link:{URLControllerAdminGuide}/assembly-controller-logging-aggregation#proc-controller-api-4xx-error-config[API 4XX Error Configuration]. Set the following options: * *Log System Tracking Facts Individually*: Click the tooltip image:question_circle.png[Help,15,15] icon for additional information, such as whether or not you want to turn it on, or leave it off by default. . Review your entries for your chosen logging aggregation. +ifdef::controller-AG[] The following example is set up for Splunk: + image:configure-controller-system-logging-splunk-example.png[Splunk logging example] +endif::controller-AG[] * *Enable External Logging*: Select this checkbox if you want to send logs to an external log aggregator. * *Enable/disable HTTPS certificate verification*: Certificate verification is enabled by default for the HTTPS log protocol. -Select this checkbox if yoiu want the log handler to verify the HTTPS certificate sent by the external log aggregator before establishing a connection. +Select this checkbox if you want the log handler to verify the HTTPS certificate sent by the external log aggregator before establishing a connection. * *Enable rsyslogd debugging*: Select this checkbox to enable high verbosity debugging for `rsyslogd`. Useful for debugging connection issues for external log aggregation. 
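+
+For reference, the same options can also be set through the API. The following is a minimal sketch, assuming the `/api/v2/settings/logging/` endpoint and the `LOG_AGGREGATOR_*` setting names used by {ControllerName}; substitute values for your own aggregator:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+{
+  "LOG_AGGREGATOR_HOST": "https://logs.example.com",
+  "LOG_AGGREGATOR_PORT": 8088,
+  "LOG_AGGREGATOR_TYPE": "splunk",
+  "LOG_AGGREGATOR_PROTOCOL": "https",
+  "LOG_AGGREGATOR_ENABLED": true
+}
+----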
diff --git a/downstream/modules/platform/proc-controller-set-up-project.adoc b/downstream/modules/platform/proc-controller-set-up-project.adoc
index b3bec7e8eb..9753d14b74 100644
--- a/downstream/modules/platform/proc-controller-set-up-project.adoc
+++ b/downstream/modules/platform/proc-controller-set-up-project.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-set-up-project"]

= Setting up a project
diff --git a/downstream/modules/platform/proc-controller-set-up-prometheus.adoc b/downstream/modules/platform/proc-controller-set-up-prometheus.adoc
index 086ba85fde..46ca77e955 100644
--- a/downstream/modules/platform/proc-controller-set-up-prometheus.adoc
+++ b/downstream/modules/platform/proc-controller-set-up-prometheus.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-set-up-prometheus"]

= Setting up Prometheus
@@ -15,7 +17,7 @@ Alternatively, you can provide an OAuth2 token (which can be generated at `/api/
By default, the configuration assumes a user with username=`admin` and password=`password`.
====
+
-Using an OAuth2 Token, created at the `/api/v2/tokens` endpoint to authenticate Prometheus with {ControllerName}, the following example provides a valid scrape configuration if the URL for your {ControllerName}'s metrics endpoint is `https://controller_host:443/metrics`.
+Using an OAuth2 token, created at the `/api/v2/tokens` endpoint to authenticate Prometheus with {ControllerName}, the following example provides a valid scrape configuration if the URL for your {ControllerName}'s metrics endpoint is `\https://controller_host:443/metrics`.
+
[literal, options="nowrap" subs="+attributes"]
----
@@ -40,7 +42,7 @@ For help configuring other aspects of Prometheus, such as alerts and service dis
+
If Prometheus is already running, you must restart it to apply the configuration changes by making a *POST* to the reload endpoint, or by killing the Prometheus process or service.

-. Use a browser to navigate to your graph in the Prometheus UI at `http://:9090/graph` and test out some queries.
+. Use a browser to navigate to your graph in the Prometheus UI at `\http://:9090/graph` and test out some queries.
For example, you can query the current number of active {ControllerName} user sessions by executing: `awx_sessions_total{type="user"}`.
+
image:metrics-prometheus-ui-query-example.png[Prometheus queries]
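+
+For reference, a scrape configuration of the shape described above might look like the following sketch (assuming a bearer-token setup; the token value and controller host are placeholders for your own):
+
+[literal, options="nowrap" subs="+attributes"]
+----
+scrape_configs:
+  - job_name: 'controller'
+    metrics_path: /api/v2/metrics
+    scrape_interval: 5s
+    scheme: https
+    bearer_token: <token_value>
+    tls_config:
+      # Skip certificate verification only for testing with self-signed certificates.
+      insecure_skip_verify: true
+    static_configs:
+      - targets:
+          - controller_host:443
+----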
diff --git a/downstream/modules/platform/proc-controller-set-up-radius.adoc b/downstream/modules/platform/proc-controller-set-up-radius.adoc
index 2ebd8d2c57..35c5b26ef3 100644
--- a/downstream/modules/platform/proc-controller-set-up-radius.adoc
+++ b/downstream/modules/platform/proc-controller-set-up-radius.adoc
@@ -1,14 +1,29 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-set-up-radius"]

-= RADIUS authentication
+= Configuring RADIUS authentication

-You can configure {ControllerName} to centrally use RADIUS as a source for authentication information.
+You can configure {PlatformNameShort} to centrally use RADIUS as a source for authentication information.

.Procedure
-. From the navigation panel, select {MenuAEAdminSettings}.
-. Select *RADIUS settings* from the list of *Authentication* options.
-. Click btn:[Edit] and enter the host or IP of the RADIUS server in the *RADIUS Server* field.
-If you leave this field blank, RADIUS authentication is disabled.
-. Enter the port and secret information in the next two fields.
-. Click btn:[Save].
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Enter a *Name* for this authentication configuration.
+. Select *Radius* from the *Authentication type* list. The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type.
+
+include::snippets/snip-gw-authentication-auto-migrate.adoc[]
+
+. Enter the host or IP of the RADIUS server in the *RADIUS Server* field. If you leave this field blank, RADIUS authentication is disabled.
+. Enter the *Shared secret for authenticating to RADIUS server*.
++
+include::snippets/snip-gw-authentication-additional-auth-fields.adoc[]
++
+include::snippets/snip-gw-authentication-common-checkboxes.adoc[]
++
+. Click btn:[Create Authentication Method].
+
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-authentication-next-steps.adoc[]
diff --git a/downstream/modules/platform/proc-controller-set-up-tacacs+.adoc b/downstream/modules/platform/proc-controller-set-up-tacacs+.adoc
index e739aa513e..61f8f78718 100644
--- a/downstream/modules/platform/proc-controller-set-up-tacacs+.adoc
+++ b/downstream/modules/platform/proc-controller-set-up-tacacs+.adoc
@@ -1,9 +1,10 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-set-up-tacacs"]

-= TACACS Plus authentication
+= Configuring TACACS+ authentication

-Terminal Access Controller Access-Control System Plus (TACACS+) is a protocol that handles remote authentication and related services for networked access control through a centralized server.
-TACACS+ provides authentication, authorization and accounting (AAA) services, in which you can configure {ControllerName} to use as a source for authentication.
+Terminal Access Controller Access-Control System Plus (TACACS+) is a protocol that handles remote authentication and related services for networked access control through a centralized server. TACACS+ provides authentication, authorization, and accounting (AAA) services, which you can configure {PlatformNameShort} to use as a source for authentication.

[NOTE]
====
This feature is deprecated and will be removed in a future release.
====

.Procedure
-. From the navigation panel, select {MenuAEAdminSettings}.
-. Select *TACACs+ settings* from the list of *Authentication* options.
-. Click btn:[Edit] and enter the following information:
-* *TACACS+ Server*: Provide the hostname or IP address of the TACACS+ server with which to authenticate.
-If you leave this field blank, TACACS+ authentication is disabled.
-* *TACACS+ Port*: TACACS+ uses port 49 by default, which is already pre-populated.
-* *TACACS+ Secret*: The secret key for TACACS+ authentication server.
-* *TACACS+ Auth Session Timeout*: The session timeout value in seconds.
-The default is 5 seconds.
-* *TACACS+ Authentication Protocol*: The protocol used by the TACACS+ client.
-The options are *ascii* or *pap*.
-. Click btn:[Save].
+
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Enter a *Name* for this authentication configuration.
+. Select *TACACS+* from the *Authentication type* list. The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type.
+
+include::snippets/snip-gw-authentication-auto-migrate.adoc[]
+
+. Enter the following information:
++
+* *Hostname of TACACS+ Server*: Provide the hostname or IP address of the TACACS+ server with which to authenticate. If you leave this field blank, TACACS+ authentication is disabled.
+* *TACACS+ Authentication Protocol*: The protocol used by the TACACS+ client. The options are *ascii* or *pap*.
+* *Shared secret for authenticating to TACACS+ server*: The secret key for the TACACS+ authentication server.
+. The *TACACS+ client address sending enabled* option is disabled by default. To enable client address sending, select the checkbox.
++
+include::snippets/snip-gw-authentication-additional-auth-fields.adoc[]
++
+include::snippets/snip-gw-authentication-common-checkboxes.adoc[]
++
+. Click btn:[Create Authentication Method].
+
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-authentication-next-steps.adoc[]
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-controller-sourced-from-project.adoc b/downstream/modules/platform/proc-controller-sourced-from-project.adoc
index 20c0f7b1c9..aa01e9748b 100644
--- a/downstream/modules/platform/proc-controller-sourced-from-project.adoc
+++ b/downstream/modules/platform/proc-controller-sourced-from-project.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-sourced-from-project"]

= Sourcing from a Project
@@ -10,38 +12,38 @@ Use the following procedure to configure a project-sourced inventory:
.Procedure
. From the navigation panel, select {MenuInfrastructureInventories}.
. Select the inventory name you want to add a source to and click the *Sources* tab.
-. Click btn:[Add source].
-. In the *Add new source* page, select *Sourced from a Project* from the *Source* list.
+. Click btn:[Create source].
+. In the *Create source* page, select *Sourced from a Project* from the *Source* list.
. Enter the following details in the additional fields:
-* Optional: *Source Control Branch/Tag/Commit*: Enter the SCM branch, tags, commit hashes, arbitrary refs, or revision number (if applicable) from the source control (Git or Subversion) to checkout.
+* Optional: *Source control branch/tag/commit*: Enter the SCM branch, tags, commit hashes, arbitrary refs, or revision number (if applicable) from the source control (Git or Subversion) to checkout.
+
-This field only displays if the sourced project has the *Allow Branch Override* option checked. For further information, see xref:proc-scm-git-subversion[SCM Types - Git and Subversion].
+This field only displays if the sourced project has the *Allow branch override* option checked.
+For further information, see xref:proc-scm-git-subversion[SCM Types - Configuring playbooks to use Git and Subversion].
+
image:projects-create-scm-project-branch-override-checked.png[Allow branch override]
+
-Some commit hashes and refs might not be available unless you also provide a custom refspec in the next field.
+Some commit hashes and refs might not be available unless you also give a custom refspec in the next field.
If left blank, the default is HEAD which is the last checked out Branch/Tag/Commit for this project.
* Optional: *Credential*: Specify the credential to use for this source.
* *Project* (required): Pre-populates with a default project, otherwise, specify the project this inventory is using as its source.
Click the image:search.png[Search,15,15] icon to choose from a list of projects.
If the list is extensive, use the search to narrow the options.
-* *Inventory File* (required): Select an inventory file associated with the sourced project.
+* *Inventory file* (required): Select an inventory file associated with the sourced project.
If not already populated, you can type it into the text field within the menu to filter extraneous file types.
In addition to a flat file inventory, you can point to a directory or an inventory script. + image:inventories-create-source-sourced-from-project-filter.png[image] . Optional: You can specify the verbosity, host filter, enabled variable/value, and update options as described in xref:proc-controller-add-source[Adding a source]. -. Optional: To pass to the custom inventory script, you can set environment variables in the *Source Variables* field. +. Optional: To pass to the custom inventory script, you can set environment variables in the *Source variables* field. You can also place inventory scripts in source control and then run it from a project. -For more information, see link:{BaseURL}red_hat_ansible_automation_platform/2.4/html-single/automation_controller_administration_guide/index#assembly-inventory-file-importing[Inventory File Importing] in _{ControllerAG}_. +For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/configuring_automation_execution/assembly-inventory-file-importing[Inventory File Importing] in _{ControllerAG}_. //+ //image:inventories-create-source-sourced-from-project-example.png[Inventories - create source - sourced from project example] -[NOTE] -==== +.Troubleshooting + If you are executing a custom inventory script from SCM, ensure that you set the execution bit (`chmod +x`) for the script in your upstream source control. If you do not, {ControllerName} throws a `[Error 13] Permission denied` error on execution. -==== diff --git a/downstream/modules/platform/proc-controller-sync-project.adoc b/downstream/modules/platform/proc-controller-sync-project.adoc index fdbdf8d8c3..d5ba99fad3 100644 --- a/downstream/modules/platform/proc-controller-sync-project.adoc +++ b/downstream/modules/platform/proc-controller-sync-project.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-sync-project"] = Syncing a project diff --git a/downstream/modules/platform/proc-controller-troubleshoot-ee-mount.adoc b/downstream/modules/platform/proc-controller-troubleshoot-ee-mount.adoc index 930d39c821..eca53e4548 100644 --- a/downstream/modules/platform/proc-controller-troubleshoot-ee-mount.adoc +++ b/downstream/modules/platform/proc-controller-troubleshoot-ee-mount.adoc @@ -1,8 +1,11 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-controller-ee-troubleshoot-mount"] = Troubleshooting {ExecEnvShort} mount options In some cases where the `/etc/ssh/*` files were added to the {ExecEnvShort} image due to customization of an {ExecEnvShort}, an SSH error can occur. + For example, exposing the `/etc/ssh/ssh_config.d:/etc/ssh/ssh_config.d:O` path enables the container to be mounted, but the ownership permissions are not mapped correctly. 
Use the following procedure if you encounter this error, or if you have upgraded from an older version of {ControllerName}:
diff --git a/downstream/modules/platform/proc-controller-updating-a-project.adoc b/downstream/modules/platform/proc-controller-updating-a-project.adoc
index d5ddfcd544..13019dc1aa 100644
--- a/downstream/modules/platform/proc-controller-updating-a-project.adoc
+++ b/downstream/modules/platform/proc-controller-updating-a-project.adoc
@@ -1,8 +1,14 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-updating-a-project"]

= Updating projects from source control

+Regularly updating your projects ensures that your environment has access to the latest versions of your playbooks, roles, and collections, reflecting any changes made in your Git, Subversion, or other integrated SCM repositories.
+This process is important for maintaining synchronization between your SCM and {PlatformNameShort}.
+
.Procedure
+
. From the navigation panel, select {MenuAEProjects}.
. Click the sync image:sync.png[Sync,15,15] icon next to the project that you want to update.
+
diff --git a/downstream/modules/platform/proc-controller-use-REST-manually.adoc b/downstream/modules/platform/proc-controller-use-REST-manually.adoc
new file mode 100644
index 0000000000..7f0b60221f
--- /dev/null
+++ b/downstream/modules/platform/proc-controller-use-REST-manually.adoc
@@ -0,0 +1,76 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="controller-use-REST-manually"]
+
+= Using REST manually to callback
+
+To make a callback manually by using REST:
+
+.Procedure
+
+. Examine the callback URL in the UI, in the form:
+\https:///api/v2/job_templates/7/callback/
+* The "7" in the sample URL is the job template ID in {ControllerName}.
+. Ensure that the request from the host is a POST.
+The following is an example using `curl` (all on a single line):
++
+[literal, options="nowrap" subs="+attributes"]
+----
+curl -k -i -H 'Content-Type:application/json' -XPOST -d '{"host_config_key": "redhat"}' \
+    https:///api/v2/job_templates/7/callback/
+----
++
+. Ensure that the requesting host is defined in your inventory for the callback to succeed.
+
+.Verification
+
+Successful requests result in an entry on the *Jobs* tab, where you can view the results and history.
+You can access the callback by using REST, but the suggested method of using the callback is to use one of the example scripts included with {ControllerName}:
+
+* `/usr/share/awx/request_tower_configuration.sh` (Linux/UNIX)
+* `/usr/share/awx/request_tower_configuration.ps1` (Windows)
+
+Their usage is described in the source code of each file, and each script prints a usage summary when you pass the `-h` flag, as the following shows:
+----
+./request_tower_configuration.sh -h
+Usage: ./request_tower_configuration.sh
+
+Request server configuration from Ansible Tower.
+
+OPTIONS:
+ -h      Show this message
+ -s      Controller server (e.g. https://ac.example.com) (required)
+ -k      Allow insecure SSL connections and transfers
+ -c      Host config key (required)
+ -t      Job template ID (required)
+ -e      Extra variables
+----
+
+This script can retry commands and is therefore a more robust way to use callbacks than a simple `curl` request.
+The script retries once per minute for up to ten minutes.
+
+[NOTE]
+====
+This is an example script.
+Edit this script if you need more dynamic behavior when detecting failure scenarios, as any non-200 error code might not be a transient error requiring retry.
+====
+
+You can use callbacks with dynamic inventory in {ControllerName}, for example, when pulling cloud inventory from one of the supported cloud providers.
+In these cases, along with setting *Update On Launch*, ensure that you configure an inventory cache timeout for the inventory source, to avoid hammering your cloud provider's API endpoints.
+Because the `request_tower_configuration.sh` script polls once per minute for up to ten minutes, a suggested cache invalidation time for inventory (configured on the inventory source itself) is one or two minutes.
+
+Running the `request_tower_configuration.sh` script from a cron job is not recommended; however, if you do, a suggested cron interval is every 30 minutes.
+Repeated configuration is better handled by scheduling jobs in {ControllerName}, so the primary use of callbacks for most users is to enable a base image that is bootstrapped into the latest configuration when it comes online.
+Running the callback at first boot is best practice.
+First-boot scripts are init scripts that typically self-delete, so you can set up an init script that calls a copy of the `request_tower_configuration.sh` script and make that part of your autoscaling image.
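+
+A minimal sketch of such a first-boot script follows; the controller URL, host config key, and job template ID are placeholders for your own values:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+#!/bin/sh
+# First-boot script: request configuration from the controller, then remove
+# itself so the callback fires only the first time the instance comes online.
+/usr/share/awx/request_tower_configuration.sh \
+    -s https://controller.example.com \
+    -c redhat \
+    -t 7
+rm -f "$0"
+----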
+
+.Troubleshooting
+
+If {ControllerName} fails to locate the host either by name or IP address in one of your defined inventories, the request is denied.
+When running a job template in this way, ensure that the host initiating the playbook run against itself is in the inventory.
+If the host is missing from the inventory, the job template fails with a *No Hosts Matched* type error message.
+
+If your host is not in the inventory and *Update on Launch* is checked for the inventory group, {ControllerName} attempts to update cloud-based inventory sources before running the callback.
diff --git a/downstream/modules/platform/proc-controller-use-an-exec-env.adoc b/downstream/modules/platform/proc-controller-use-an-exec-env.adoc
index 16ecd31ed1..14687b3fb8 100644
--- a/downstream/modules/platform/proc-controller-use-an-exec-env.adoc
+++ b/downstream/modules/platform/proc-controller-use-an-exec-env.adoc
@@ -1,12 +1,14 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-use-an-exec-env"]

= Adding an {ExecEnvShort} to a job template

.Prerequisites

-* An {ExecEnvShort} must have been created using ansible-builder as described in xref:ref-controller-building-exec-env[Build an {ExecEnvShort}].
-When an {ExecEnvShort} has been created, you can use it to run jobs.
-Use the {ControllerName} UI to specify the {ExecEnvShort} to use in your job templates.
+* You must build an {ExecEnvShort} as described in link:{URLControllerUserGuide}/assembly-controller-execution-environments#ref-controller-build-exec-envs[Build an {ExecEnvShort}] before you can create it in {ControllerName}.
++
+After building it, you must push it to a registry (such as quay.io). Then, when creating an {ExecEnvShort} in the {ControllerName} UI, point to that repository so that you can use the {ExecEnvShort} in {PlatformNameShort}, for example, in a job template.
* Depending on whether an {ExecEnvShort} is made available for global use or tied to an organization, you must have the appropriate level of administrator privileges to use an {ExecEnvShort} in a job.
Execution environments tied to an organization require Organization administrators to be able to run jobs with those {ExecEnvShort}s.
* Before running a job or job template that uses an {ExecEnvShort} that has a credential assigned to it, ensure that the credential contains a username, host, and password.
@@ -27,7 +29,7 @@ The image name requires its full location (repository), the registry, image name
+
[NOTE]
====
-If you do not set a typing error for pull, the value defaults to *Only pull the image if not present before running*.
+If you do not set a pull option, the value defaults to *Only pull the image if not present before running*.
====
+
* Optional: *Description*:
diff --git a/downstream/modules/platform/proc-controller-use-ansible-sign.adoc b/downstream/modules/platform/proc-controller-use-ansible-sign.adoc
index 999bcf9f16..8818c4ab8b 100644
--- a/downstream/modules/platform/proc-controller-use-ansible-sign.adoc
+++ b/downstream/modules/platform/proc-controller-use-ansible-sign.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-controller-use-ansible-sign"]

= Installing the ansible-sign CLI utility
diff --git a/downstream/modules/platform/proc-controller-user-permissions.adoc b/downstream/modules/platform/proc-controller-user-permissions.adoc
index c07233f547..743388013d 100644
--- a/downstream/modules/platform/proc-controller-user-permissions.adoc
+++ b/downstream/modules/platform/proc-controller-user-permissions.adoc
@@ -1,38 +1,27 @@
-[id="proc-controller-user-permissions"]
+:_mod-docs-content-type: PROCEDURE

-= Adding and removing user permissions
+[id="proc-controller-user-permissions"]

-To add permissions to a particular user:
+= Adding roles to a team

-.Procedure
-. From the *Users* list view, click on the name of a user.
-. On the *Details* page, click btn:[Add].
-This opens the *Add user permissions* wizard.
-+
-image:users-add-permissions-form.png[Add Permissions Form]
-. Select the object to assign permissions, for which the user will have access.
-. Click btn:[Next].
-. Select the resource to assign team roles and click btn:[Next].
-+
-image:users-permissions-IG-select.png[image]
-
-. Select the resource you want to assign permissions to.
-Different resources have different options available.
-+
-image:users-permissions-IG-roles.png[image]
-
-. Click btn:[Save].
-. The *Roles* page displays the updated profile for the user with the permissions assigned for each selected resource.
+You can assign permissions to teams, such as the ability to edit and administer resources and other elements.
+You can set permissions through an inventory, project, job template, and other resources, or within the Organizations view.

[NOTE]
====
-You can also add teams, individual, or multiple users and assign them permissions at the object level.
-This includes templates, credentials, inventories, projects, organizations, or instance groups.
-This feature reduces the time for an organization to onboard many users at one time.
+Teams cannot be assigned to an organization by adding roles. Refer to the steps provided in link:{URLCentralAuth}/gw-managing-access#proc-gw-add-team-organization[Adding a team to an organization] for detailed instructions.
====

-.To remove permissions:
-* Click the image:disassociate.png[Disassociate,10,10] icon next to the resource.
-This launches a confirmation dialog asking you to confirm the disassociation.
-
-
+.Procedure
+. From the navigation panel, select {MenuAMTeams}.
+. Select the team *Name* to which you want to add roles.
+. Select the *Roles* tab and click btn:[Add roles].
++
+include::snippets/snip-gw-roles-note-multiple-components.adoc[]
++
+. Select a *Resource type* and click btn:[Next].
+. Select the resources to receive the new roles and click btn:[Next].
+. Select the roles to apply to the resources and click btn:[Next].
++
+The *Add roles* dialog displays, indicating whether the role assignments were applied successfully. Click btn:[Close] to close the dialog.
diff --git a/downstream/modules/platform/proc-controller-verify-container-group.adoc b/downstream/modules/platform/proc-controller-verify-container-group.adoc
index 08333ac469..5827895d49 100644
--- a/downstream/modules/platform/proc-controller-verify-container-group.adoc
+++ b/downstream/modules/platform/proc-controller-verify-container-group.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-verify-container-group"]

= Verifying container group functions

@@ -6,8 +8,8 @@
To verify the deployment and termination of your container:

.Procedure

-. Create a mock inventory and associate the container group to it by populating the name of the container group in the *Instance Group* field.
-For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#proc-controller-adding-new-inventory[Add a new inventory] in the _{ControllerUG}_.
+. Create a mock inventory and associate the container group to it by populating the name of the container group in the *Instance groups* field.
+For more information, see link:{URLControllerUserGuide}/controller-inventories#proc-controller-adding-new-inventory[Add a new inventory].
+
image::ag-inventories-create-new-test-inventory.png[Create test inventory]
+
diff --git a/downstream/modules/platform/proc-controller-view-container-group-jobs.adoc b/downstream/modules/platform/proc-controller-view-container-group-jobs.adoc
index d48ed1a946..df331c1545 100644
--- a/downstream/modules/platform/proc-controller-view-container-group-jobs.adoc
+++ b/downstream/modules/platform/proc-controller-view-container-group-jobs.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="controller-view-container-group-jobs"]

= Viewing container group jobs
diff --git a/downstream/modules/platform/proc-controller-view-host.adoc b/downstream/modules/platform/proc-controller-view-host.adoc
new file mode 100644
index 0000000000..9f3221273f
--- /dev/null
+++ b/downstream/modules/platform/proc-controller-view-host.adoc
@@ -0,0 +1,31 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-controller-view-host"]
+
+= Viewing the host details
+
+You can view the host details for a job run by using the following procedure.
+
+.Procedure
+
+. From the navigation panel, select {MenuInfrastructureHosts}.
+The *Hosts* page displays information about the host or hosts affected by recent job runs.
+
+. Select a particular host to display the *Details* page for that host, with the following information:
+
+* The *Name* of the host.
+* The *Inventory* associated with that host. Selecting this inventory displays details of the inventory.
+* When the host was *Created* and by whom. Selecting the creator displays details of the creator.
+* When the host was *Last modified*. Selecting the user who modified it displays details of that user.
+* *Variables* associated with the host. You can display the variables in YAML or JSON format.
+
+. Click btn:[Edit host] to edit details of the host.
+
+* Select the *Facts* tab to display facts associated with the host.
+* Select the *Groups* tab to display the groups associated with the host.
+** Click btn:[Associate groups] to associate a group with the host.
+* Select the *Jobs* tab to display the jobs that ran on the host.
+** Click the image:arrow.png[Expand,15,15] icon to display details of the job.
++ +image::hosts_jobs_details.png[Details of job associated with a host] + diff --git a/downstream/modules/platform/proc-controller-view-jobs-associated-with-instance-group.adoc b/downstream/modules/platform/proc-controller-view-jobs-associated-with-instance-group.adoc index 50afb6677c..ddef699ed7 100644 --- a/downstream/modules/platform/proc-controller-view-jobs-associated-with-instance-group.adoc +++ b/downstream/modules/platform/proc-controller-view-jobs-associated-with-instance-group.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-view-jobs-associated-with-instance-group"] = Viewing jobs associated with an instance group @@ -16,5 +18,4 @@ Each job displays the following details: * Who started the job and applicable resources associated with it, such as the template, inventory, project, and execution environment .Additional resources -The instances are run in accordance with instance group policies. -For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/controller-instance-and-container-groups#controller-instance-group-policies[Instance Group Policies] in the _{ControllerAG}_. +* xref:controller-instance-group-policies[Instance group policies]. diff --git a/downstream/modules/platform/proc-controller-view-payload-output.adoc b/downstream/modules/platform/proc-controller-view-payload-output.adoc index 03c4bc2cd3..45ba85dff1 100644 --- a/downstream/modules/platform/proc-controller-view-payload-output.adoc +++ b/downstream/modules/platform/proc-controller-view-payload-output.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="controller-view-payload-output"] = Viewing the payload output @@ -10,6 +12,6 @@ You can view the entire payload exposed as an extra variable. . Select the job template with the webhook enabled. . Select the *Details* tab. . In the *Extra Variables* field, view the payload output from the `awx_webhook_payload` variable, as shown in the following example: - ++ image::ug-webhooks-jobs-extra-vars-payload.png[Webhooks extra variables payload] image::ug-webhooks-jobs-extra-vars-payload-expanded.png[Webhook extra variables payload expanded] diff --git a/downstream/modules/platform/proc-create-a-connection-secret.adoc b/downstream/modules/platform/proc-create-a-connection-secret.adoc index 362852bb98..7b164a456b 100644 --- a/downstream/modules/platform/proc-create-a-connection-secret.adoc +++ b/downstream/modules/platform/proc-create-a-connection-secret.adoc @@ -1,6 +1,9 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-create-connection-secret_{context}"] = Creating a {ControllerName} connection secret for {OperatorResourceShort} + To make your connection information available to the {OperatorResourceShort}, create a k8s secret with the token and host value. .Procedure diff --git a/downstream/modules/platform/proc-create-a-jobtemplate.adoc b/downstream/modules/platform/proc-create-a-jobtemplate.adoc index 4a42400ddc..401681cdc2 100644 --- a/downstream/modules/platform/proc-create-a-jobtemplate.adoc +++ b/downstream/modules/platform/proc-create-a-jobtemplate.adoc @@ -1,9 +1,15 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-create-a-jobtemplate_{context}"] -= Creating a JobTemplate += Creating a JobTemplate custom resource + +A job template is a definition and set of parameters for running an Ansible job. 
For more information, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/using_automation_execution/index#controller-job-templates[Job Templates] section of the _{TitleControllerUserGuide}_ guide.
+
+.Procedure

-* Create a job template on {ControllerName} by creating a JobTemplate resource:
+* Create a job template on {ControllerName} by creating a JobTemplate custom resource:
+
----
apiVersion: tower.ansible.com/v1alpha1
diff --git a/downstream/modules/platform/proc-create-an-ansiblejob.adoc b/downstream/modules/platform/proc-create-an-ansiblejob.adoc
index cd6f7f0b3d..cdb866a4a0 100644
--- a/downstream/modules/platform/proc-create-an-ansiblejob.adoc
+++ b/downstream/modules/platform/proc-create-an-ansiblejob.adoc
@@ -1,7 +1,11 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-create-an-ansiblejob_{context}"]

-= Creating an AnsibleJob
-Launch an automation job on {ControllerName} by creating an AnsibleJob resource.
+= Creating an AnsibleJob custom resource
+
+An AnsibleJob custom resource launches a job in the {ControllerName} instance specified in the Kubernetes secret ({ControllerName} host URL, token).
+You can launch an automation job on {ControllerName} by creating an AnsibleJob resource.

.Procedure
. Specify the connection secret and job template you want to launch.
diff --git a/downstream/modules/platform/proc-create-chatbot-config-secret.adoc b/downstream/modules/platform/proc-create-chatbot-config-secret.adoc
new file mode 100644
index 0000000000..d2376f23ae
--- /dev/null
+++ b/downstream/modules/platform/proc-create-chatbot-config-secret.adoc
@@ -0,0 +1,90 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-create-chatbot-config-secret"]
+
+= Creating a chatbot configuration secret
+
+Create a configuration secret for the {AAPchatbot}, so that you can connect the intelligent assistant to the {PlatformNameShort} operator.
+
+.Procedure
+. Log in to {OCP} as an administrator.
+. Navigate to menu:Workloads[Secrets].
+. From the *Projects* list, select the namespace that you created when you installed the {PlatformNameShort} operator.
+. Click menu:Create[Key/value secret].
+. In the *Secret name* field, enter a unique name for the secret. For example, `chatbot-configuration-secret`.
+. Add the following keys and their associated values individually:
+
+[%header,cols="25%,75%"]
+|====
+| Key
+| Value
+
+2+| *Settings for all LLM setups*
+|`chatbot_model`
+|Enter the LLM model name that is configured on your LLM setup.
+
+|`chatbot_url`
+|Enter the inference API base URL on your LLM setup. For example, `\https://your_inference_api/v1`.
+
+|`chatbot_token`
+|Enter the API token or the API key. This token is sent along with the authorization header when an inference API is called.
+
+|`chatbot_llm_provider_type`
+a|_Optional_
+
+Enter the provider type of your LLM setup by using one of the following values:
+
+* {RHELAI}: `rhelai_vllm`
+
+* {OCPAI}: `rhoai_vllm` (Default value)
+
+* {IBMwatsonxai}: `watsonx`
+
+* {OpenAI}: `openai`
+
+* {AzureOpenAI}: `azure_openai`
+
+|`chatbot_context_window_size`
+a| _Optional_
+
+Enter a value to configure the context window length for your LLM setup.
+
+Default = `128000`
+
+|`chatbot_temperature_override`
+a| _Optional_
+
+A lower temperature generates predictable results, while a higher temperature allows more diverse or creative responses.
+
+Enter one of the following values:
+
+* `0`: Least creativity and randomness in the responses.
+* `1`: Maximum creativity and randomness in the responses.
+* `null`: Override or disable the default temperature setting.
++
+[NOTE]
+====
+Some {OpenAI} o-series models (o1, o3-mini, and o4-mini) do not support the temperature settings. Therefore, you must set the value to `null` to use these {OpenAI} models.
+====
+
+2+| *Additional setting for {IBMwatsonxai} only*
+
+|`chatbot_llm_provider_project_id`
+| Enter the project ID of your IBM watsonx setup.
+
+2+| *Additional settings for {AzureOpenAI} only*
+
+|`chatbot_azure_deployment_name`
+| Enter the deployment name of your {AzureOpenAI} setup.
+
+|`chatbot_azure_api_version`
+| _Optional_
+
+Enter the API version of your {AzureOpenAI} setup.
+
+|====
+
+. Click *Create*. The chatbot configuration secret is created.
+
+
+
diff --git a/downstream/modules/platform/proc-create-crs-resource-operator.adoc b/downstream/modules/platform/proc-create-crs-resource-operator.adoc
new file mode 100644
index 0000000000..a13a30beb2
--- /dev/null
+++ b/downstream/modules/platform/proc-create-crs-resource-operator.adoc
@@ -0,0 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-create-crs-resource-operator_{context}"]
+
+= Creating custom resources for {OperatorResourceShort}
diff --git a/downstream/modules/platform/proc-create-password-hashes.adoc b/downstream/modules/platform/proc-create-password-hashes.adoc
index e5854752d5..7ff7964e51 100644
--- a/downstream/modules/platform/proc-create-password-hashes.adoc
+++ b/downstream/modules/platform/proc-create-password-hashes.adoc
@@ -1,8 +1,13 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-create-password-hashes"]

= Creating PostgreSQL password hashes

+Supply the hash values that replace the plain text passwords within the {ControllerName} configuration files.
+
.Procedure
+
. On your {ControllerName} node, run the following:
+
[literal, options="nowrap" subs="+quotes,attributes"]
@@ -40,4 +45,3 @@ $encrypted$AESCBC$Z0FBQUFBQmNONU9BbGQ1VjJyNDJRVTRKaFRIR09Ib2U5TGdaYVRfcXFXRjlmdm
+
Note that the `$*_PASS` values are already in plain text in your inventory file.
-These steps supply the hash values that replace the plain text passwords within the {ControllerName} configuration files.
diff --git a/downstream/modules/platform/proc-create-postresql-secret.adoc b/downstream/modules/platform/proc-create-postresql-secret.adoc
index 5fa04cfbf7..bdba4e0a4f 100644
--- a/downstream/modules/platform/proc-create-postresql-secret.adoc
+++ b/downstream/modules/platform/proc-create-postresql-secret.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="create-postresql-secret_{context}"]

= Creating a postgresql configuration secret

@@ -8,7 +10,7 @@ For migration to be successful, you must provide access to the database for your

.Procedure

-. Create a yaml file for your postgresql configuration secret:
+. Create a YAML file for your postgresql configuration secret:
+
-----
apiVersion: v1
diff --git a/downstream/modules/platform/proc-create-secret-key-secret.adoc b/downstream/modules/platform/proc-create-secret-key-secret.adoc
index c52ff4b2ca..7170d93442 100644
--- a/downstream/modules/platform/proc-create-secret-key-secret.adoc
+++ b/downstream/modules/platform/proc-create-secret-key-secret.adoc
@@ -1,28 +1,70 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="create-secret-key-secret_{context}"]

= Creating a secret key secret

[role=_abstract]
-To migrate your data to {OperatorPlatform} on {OCPShort}, you must create a secret key that matches the secret key defined in the inventory file during your initial installation.
Otherwise, the migrated data will remain encrypted and unusable after migration.
+To migrate your data to {OperatorPlatformNameShort} on {OCPShort}, you must create a secret key.
+If you are migrating {ControllerName}, {HubName}, and {EDAName}, you must have a secret key for each that matches the secret key defined in the inventory file during your initial installation.
+Otherwise, the migrated data remains encrypted and unusable after migration.
+
+[NOTE]
+====
+When specifying the symmetric encryption secret key on the custom resources, note that for {ControllerName}, the field is called `secret_key_name`, but for {HubName} and {EDAName}, the field is called `db_fields_encryption_secret`.
+
+====
+
+[NOTE]
+====
+In the Kubernetes secrets, {ControllerName} and {EDAName} use the same stringData key (`secret_key`), but {HubName} uses a different key (`database_fields.symmetric.key`).
+====

.Procedure
-. Locate the old secret key in the inventory file you used to deploy {PlatformNameShort} in your previous installation.
-. Create a yaml file for your secret key:
+. Locate the old secret keys in the inventory file you used to deploy {PlatformNameShort} in your previous installation.
+. Create a YAML file for your secret keys:
+
-----
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: -secret-key
+  namespace: 
+stringData:
+  secret_key: 
+type: Opaque
+---
apiVersion: v1
kind: Secret
metadata:
-  name: -secret-key
+  name: -secret-key
  namespace: 
stringData:
-  secret_key: 
+  secret_key: 
type: Opaque
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: -secret-key
+  namespace: 
+stringData:
+  database_fields.symmetric.key: 
+type: Opaque
+
-----
-. Apply the secret key yaml to the cluster:
++
+[NOTE]
+====
+If `admin_password_secret` is not provided, the operator looks for a secret named `-admin-password` for the admin password.
+If it is not present, the operator generates a password and creates a secret from it named `-admin-password`.
+====
++
+. Apply the secret key YAML to the cluster:
+
-----
-oc apply -f 
+oc apply -f 
-----
diff --git a/downstream/modules/platform/proc-creating-a-new-web-server-to-host-repositories.adoc b/downstream/modules/platform/proc-creating-a-new-web-server-to-host-repositories.adoc
index fd2c10c520..ece31d63f8 100644
--- a/downstream/modules/platform/proc-creating-a-new-web-server-to-host-repositories.adoc
+++ b/downstream/modules/platform/proc-creating-a-new-web-server-to-host-repositories.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-creating-a-new-web-server-to-host-repositories_{context}"]

= Creating a new web server to host repositories

@@ -57,7 +59,7 @@ $ sudo firewall-cmd --zone=public --add-service=http --add-service=https --perm
$ sudo firewall-cmd --reload
----

-. On {ControllerName} and {HubName}, add a repo file at __/etc/yum.repos.d/local.repo__, and add the optional repos if needed:
+. On the nodes that host automation services, add a repo file at __/etc/yum.repos.d/local.repo__, and add the optional repos if needed:
+
----
[Local-BaseOS]
diff --git a/downstream/modules/platform/proc-creating-ansible-role.adoc b/downstream/modules/platform/proc-creating-ansible-role.adoc
new file mode 100644
index 0000000000..913348b1a4
--- /dev/null
+++ b/downstream/modules/platform/proc-creating-ansible-role.adoc
@@ -0,0 +1,59 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="creating-ansible-role_{context}"]
+
+= Creating a role
+
+You can create roles by using the {Galaxy} CLI tool, which is included with your {PlatformNameShort} bundle.
+Access role-specific commands from the `role` subcommand:
+
+[source,bash]
+----
+ansible-galaxy role init 
+----
+
+Standalone roles outside of collections are still supported, but create new roles inside a collection to take advantage of the features {PlatformNameShort} has to offer.
+
+.Procedure
+
+. In a terminal, navigate to the `roles` directory inside a collection.
+. Create a role called `my_role` inside the collection:
++
+----
+$ ansible-galaxy role init my_role
+----
++
+The collection now includes a role named `my_role` inside the `roles` directory, as you can see in this example:
++
+----
+~/.ansible/collections/ansible_collections//
+  ...
+  └── roles/
+      └── my_role/
+          ├── .travis.yml
+          ├── README.md
+          ├── defaults/
+          │   └── main.yml
+          ├── files/
+          ├── handlers/
+          │   └── main.yml
+          ├── meta/
+          │   └── main.yml
+          ├── tasks/
+          │   └── main.yml
+          ├── templates/
+          ├── tests/
+          │   ├── inventory
+          │   └── test.yml
+          └── vars/
+              └── main.yml
+----
+. You can supply a custom role skeleton directory by using the `--role-skeleton` argument.
+This enables organizations to create standardized templates for new roles to suit their needs.
++
+----
+$ ansible-galaxy role init my_role --role-skeleton ~/role_skeleton
+----
++
+This creates a role named `my_role` by copying the contents of `~/role_skeleton` into `my_role`.
+The contents of `role_skeleton` can be any files or folders that are valid inside a role directory.
diff --git a/downstream/modules/platform/proc-creating-the-custom-execution-environment-definition.adoc b/downstream/modules/platform/proc-creating-the-custom-execution-environment-definition.adoc
index 291739651d..5939a4d2e1 100644
--- a/downstream/modules/platform/proc-creating-the-custom-execution-environment-definition.adoc
+++ b/downstream/modules/platform/proc-creating-the-custom-execution-environment-definition.adoc
@@ -1,17 +1,20 @@
-//Used in downstream/titles/aap-installation-guide/platform/assembly-disconnected-installation.adoc
+//Used in downstream/titles/builder/builder/assembly-using builder.adoc

:_newdoc-version: 2.15.1
:_template-generated: 2024-02-05
:_mod-docs-content-type: PROCEDURE

-[id="creating-the-custom-execution-environment-definition_{context}"]
+[id="creating-the-custom-execution-environment-definition"]

= Creating the custom {ExecEnvShort} definition

[role="_abstract"]
-Once the {Builder} RPM is installed, use the following steps to create your custom {ExecEnvShort}.
+When you have installed {Builder}, use the following steps to create your custom {ExecEnvShort}.

-. Create a directory for the build artifacts used when creating your custom {ExecEnvShort}. Any new files created with the steps below will be created under this directory.
+.Procedure
+
+. Create a directory to store your custom {ExecEnvShort} build artifacts.
+Any new files created with the following steps are created under this directory.
+
----
$ mkdir $HOME/custom-ee $HOME/custom-ee/files
@@ -22,20 +25,17 @@ $ cd $HOME/custom-ee/
. Create an `execution-environment.yml` file that defines the requirements for your custom {ExecEnvShort}.
+
[NOTE]
-
====
-Version 3 of the execution environment definition format is required, so ensure the `execution-environment.yml` file contains `version: 3` explicitly before continuing.
+Version 3 of the {ExecEnvShort} definition format is required, so ensure the `execution-environment.yml` file contains `version: 3` explicitly before continuing.
====
+
-.. Override the base image to point to the minimal execution environment available in your {PrivateHubName}.
+.. Override the base image to point to the minimal {ExecEnvShort} available in your {PrivateHubName}.
.. Define the additional build files needed to point to any disconnected content sources that will be used in the build process.
Your custom `execution-environment.yml` file should look similar to the following example:
+
----
-$ cat execution-environment.yml
----
version: 3
images:
@@ -84,7 +84,7 @@ $ cat files/ansible.cfg
server_list = private_hub

[galaxy_server.private_hub]
-url = https://private-hub.example.com/api/galaxy/
+url = https://private-hub.example.com/api/galaxy/
----
+
. Create a `pip.conf` file under the `files/` subdirectory which points to the internal PyPI mirror (a web server or something like Nexus):
@@ -97,10 +97,13 @@
trusted-host = 
----
+
-. Optional: If a `bindep.txt` file is being used to add RPMs the custom {ExecEnvShort}, create a `custom.repo` file under the `files/` subdirectory that points to your disconnected Satellite or other location hosting the RPM repositories. If this step is necessary, uncomment the steps in the example `execution-environment.yml` file that correspond with the `custom.repo` file.
+. Optional: If you use a `bindep.txt` file to add RPMs to the custom {ExecEnvShort}, create a `custom.repo` file under the `files/` subdirectory that points to your disconnected Satellite or other location hosting the RPM repositories.
+If this step is necessary, uncomment the steps in the example `execution-environment.yml` file that correspond with the `custom.repo` file.
+
-The following example is for the UBI repos. Other local repos can be added to this file as well. The URL path may need to change depending on where the mirror content is located on the web server.
+The following example is for the UBI repositories.
+Other local repositories can be added to this file as well.
+The URL path might need to change depending on where the mirror content is located on the web server.
+
----
$ cat files/custom.repo
@@ -119,9 +122,9 @@ gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
gpgcheck = 1
----
+
-. Add the CA certificate used to sign the private automation hub web server certificate. If the private automation hub uses self-signed certificates provided by the installer:
+. Add the CA certificate used to sign the {PrivateHubName} web server certificate. If the {PrivateHubName} uses self-signed certificates provided by the installer:
+
-.. Copy the file `/etc/pulp/certs/pulp_webserver.crt` from your private automation hub and name it `hub-ca.crt`.
+.. Copy the file `/etc/pulp/certs/pulp_webserver.crt` from your {PrivateHubName} and name it `hub-ca.crt`.
.. Add the `hub-ca.crt` file to the `files/` subdirectory.
+
@@ -131,7 +134,7 @@
.. Make a copy of that CA certificate and name it `hub-ca.crt`.
.. Add the `hub-ca.crt` file to the `files/` subdirectory.
+
-. Once the preceeding steps have been completed, create your python `requirements.txt` and ansible collection `requirements.yml` files, with the content needed for your custom {ExecEnvShort} image.
+. When you have completed the preceding steps, create your Python `requirements.txt` and Ansible Collection `requirements.yml` files, with the content needed for your custom {ExecEnvShort} image, for example:
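+
+A minimal pair of requirements files (the package and collection names here are illustrative only) might look like the following:
+
+----
+$ cat requirements.txt
+netaddr
+
+$ cat requirements.yml
+---
+collections:
+  - name: ansible.posix
+  - name: community.general
+----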
+ [NOTE] diff --git a/downstream/modules/platform/proc-customizing-pod-specs.adoc b/downstream/modules/platform/proc-customizing-pod-specs.adoc index 19b870b5c9..b964ce0af1 100644 --- a/downstream/modules/platform/proc-customizing-pod-specs.adoc +++ b/downstream/modules/platform/proc-customizing-pod-specs.adoc @@ -1,11 +1,13 @@ -[id="proc-customizing-pod-specs"] +:_mod-docs-content-type: PROCEDURE -== Customizing the pod specification +[id="proc-customizing-pod-specs_{context}"] + += Customizing the pod specification You can use the following procedure to customize the pod. .Procedure -. In the {ControllerName} UI, go to {MenuInfrastructureInstanceGroups}. +. In the navigation panel, select {MenuInfrastructureInstanceGroups}. . Check btn:[Customize pod specification]. . In the *Pod Spec Override* field, specify the namespace by using the toggle to enable and expand the *Pod Spec Override* field. . Click btn:[Save]. diff --git a/downstream/modules/platform/proc-define-mesh-node-types.adoc b/downstream/modules/platform/proc-define-mesh-node-types.adoc index 2923f1692f..a825259059 100644 --- a/downstream/modules/platform/proc-define-mesh-node-types.adoc +++ b/downstream/modules/platform/proc-define-mesh-node-types.adoc @@ -1,8 +1,10 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-define-mesh-node-types"] -ifdef::controller-AG[] +ifdef::controller-UG[] = Managing instances -endif::controller-AG[] +endif::controller-UG[] ifdef::operator-mesh[] = Defining {AutomationMesh} node types endif::operator-mesh[] @@ -19,6 +21,13 @@ These hop nodes are not part of the Kubernetes cluster and are registered in {Co The following procedure demonstrates how to set the node type for the hosts. +ifdef::operator-mesh[] +[NOTE] +==== +By default, {SaaSonAWS} includes two hop nodes that you can peer execution nodes to. +==== +endif::operator-mesh[] + .Procedure //[ddacosta]Removed specified panel to simplify changes in the future. . From the navigation panel, select {MenuInfrastructureInstances}. @@ -50,40 +59,39 @@ Options: ** *Enable instance*: Check this box to make it available for jobs to run on an execution node. ** Check the *Managed by policy* box to enable policy to dictate how the instance is assigned. -** Check the *Peers from control nodes* box to enable control nodes to peer to this instance automatically. -For nodes connected to {ControllerName}, check the *Peers from Control* nodes box to create a direct communication link between that node and {ControllerName}. -For all other nodes: - -*** If you are not adding a hop node, make sure *Peers from Control* is checked. -*** If you are adding a hop node, make sure *Peers from Control* is not checked. -*** For execution nodes that communicate with hop nodes, do not check this box. -** To peer an execution node with a hop node, click the image:search.png[Search,15,15] icon next to the *Peers* field. -+ -The Select Peers window is displayed. -+ -Peer the execution node to the hop node. - -. Click btn:[Save]. -+ -image::instances_create_details.png[Create Instance details] - +** *Peers from control nodes*: +*** If you are configuring a hop node: +**** If the hop node needs to have requests pushed directly from {ControllerName}, then check the *Peers from Control* box. +// This creates a direct communication link between the hop node and {ControllerName}. +**** If the hop node is peered to another hop node, then make sure *Peers from Control* is not checked. 
+*** If you are configuring an execution node: +**** If the execution node needs to have requests pushed directly from {ControllerName}, then check the *Peers from Control* box. +// This creates a direct communication link between the execution node and {ControllerName}. +**** If the execution node is peered to a hop node, then make sure that *Peers from Control* is not checked. +. Click btn:[Associate peers]. +//+ +//image::instances_create_details.png[Create Instance details] ifdef::operator-mesh[] -. To view a graphical representation of your updated topology, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/assembly-controller-topology-viewer[Topology viewer]. +. To verify peering configuration and the direction of traffic, you can use the topology view +to view a graphical representation of your updated topology. +This can help to determine where your firewall rules might need to be updated. +For more information, see link:{URLControllerUserGuide}/assembly-controller-topology-viewer[Topology view]. endif::operator-mesh[] -ifdef::controller-AG[] -. To view a graphical representation of your updated topology, see xref:assembly-controller-topology-viewer[Topology viewer]. -endif::controller-AG[] +ifdef::controller-UG[] +. To view a graphical representation of your updated topology, see +xref:assembly-controller-topology-viewer[Topology view]. +endif::controller-UG[] + [NOTE] ==== -Execute the following steps from any computer that has SSH access to the newly created instance. +Complete the following steps from any computer that has SSH access to the newly created instance. ==== . Click the image:download.png[Download,15,15] icon next to *Download Bundle* to download the tar file that includes this new instance and the files necessary to install the created node into the {AutomationMesh}. //+ //image::instances_install_bundle.png[Install instance] + -The install bundle contains TLS certificates and keys, a certificate authority, and a proper Receptor configuration file. +The install bundle has TLS certificates and keys, a certificate authority, and a proper Receptor configuration file. + ---- receptor-ca.crt @@ -97,12 +105,6 @@ requirements.yml . Extract the downloaded `tar.gz` Install Bundle from the location where you downloaded it. To ensure that these files are in the correct location on the remote machine, the install bundle includes the `install_receptor.yml` playbook. -The playbook requires the Receptor collection. -Run the following command to download the collection: -+ ----- -ansible-galaxy collection install -r requirements.yml ----- . Before running the `ansible-playbook` command, edit the following fields in the `inventory.yml` file: + @@ -110,9 +112,9 @@ ansible-galaxy collection install -r requirements.yml all: hosts: remote-execution: - ansible_host: 10.0.0.6 - ansible_user: # user provided - ansible_ssh_private_key_file: ~/.ssh/ + ansible_host: localhost # change to the mesh node host name + ansible_user: # user provided + ansible_ssh_private_key_file: ~/.ssh/ ---- * Ensure `ansible_host` is set to the IP address or DNS of the node. @@ -150,26 +152,66 @@ Additionally, it retrieves any other collection dependencies that might be neede * Install the receptor collection on all nodes where your playbook will run, otherwise an error occurs. . If `receptor_listener_port` is defined, the machine also requires an available open port on which to establish inbound TCP connections, for example, 27199. 
-Run the following command to open port 27199 for receptor communication:
+Run the following command to open port 27199 for receptor communication, and make sure that port 27199 is also open on any external firewall between the nodes:
+
----
sudo firewall-cmd --permanent --zone=public --add-port=27199/tcp
----

. Run the following playbook on the machine where you want to update your automation mesh:
-+

----
ansible-playbook -i inventory.yml install_receptor.yml
----
+
-After this playbook runs, your automation mesh is configured.
+[NOTE]
+====
+OpenSSL is required for this playbook. To check whether OpenSSL is installed, run the following command:
+----
+openssl version
+----
+If this command returns a version, OpenSSL is installed. Otherwise, install OpenSSL by running the following command:
+----
+sudo dnf install -y openssl
+----
+====
+
+After this playbook runs, your automation mesh is configured.

image::instances_list_view2.png[Instances list view]

+[NOTE]
+====
+Some servers might not listen on the receptor port (the default is 27199).
+
+Suppose you have a control plane with nodes A, B, and C.
+
+The following is a peering setup for three controller nodes:
+
+Controller node A --> Controller node B
+
+Controller node A --> Controller node C
+
+Controller node B --> Controller node C
+
+You can force the listener by setting `receptor_listener=True`.
+
+However, a connection Controller B --> Controller A is likely to be rejected because that connection already exists.
+
+This means that nothing connects to Controller A, because Controller A creates the connections to the other nodes, and the following command does not return anything on Controller A:
+
+----
+[root@controller1 ~]# ss -ntlp | grep 27199
+[root@controller1 ~]#
+----
+
+The RPM installer creates a strongly connected peering between the control plane nodes with a least-privilege approach and opens the TCP listener only on those nodes where it is required. All the receptor connections are bidirectional, so after a connection is created, receptor can communicate in both directions.
+====
+
ifdef::operator-mesh[]
-To remove an instance from the mesh, see xref:ref-removing-instances[Removing instances].
+To remove an instance from the mesh, see link:{URLOperatorMesh}/assembly-automation-mesh-operator-aap#ref-removing-instances[Removing instances].
endif::operator-mesh[]
-ifdef::controller-AG[]
-To remove an instance from the mesh, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_automation_mesh_for_operator-based_installations/assembly-automation-mesh-operator-aap#ref-removing-instances[Removing instances].
-endif::controller-AG[]
+ifdef::controller-UG[]
+To remove an instance from the mesh, see link:{URLControllerUserGuide}/assembly-controller-instances#ref-removing-instances[Removing instances].
+endif::controller-UG[]
diff --git a/downstream/modules/platform/proc-defining-node-types.adoc b/downstream/modules/platform/proc-defining-node-types.adoc
index b65e1dd943..0de1a4e3c3 100644
--- a/downstream/modules/platform/proc-defining-node-types.adoc
+++ b/downstream/modules/platform/proc-defining-node-types.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="defining-node-types"]
diff --git a/downstream/modules/platform/proc-deploy-controller.adoc b/downstream/modules/platform/proc-deploy-controller.adoc
new file mode 100644
index 0000000000..eebf7e6b77
--- /dev/null
+++ b/downstream/modules/platform/proc-deploy-controller.adoc
@@ -0,0 +1,30 @@
+[id="proc-deploy-controller"]
+
+:_mod-docs-content-type: PROCEDURE
+
+= Deploying {ControllerName}
+
+To deploy {ControllerName} and specify variables for how often metrics-utility gathers usage information and generates a report, use the following procedure:
+
+.Procedure
+
+. From the navigation panel, select *Installed Operators*.
+. Select *{PlatformNameShort}*.
+. In the Operator details, select the {ControllerName} tab.
+. Click btn:[Create {ControllerName}].
+. Select the YAML view option.
+The YAML now shows the default parameters for {ControllerName}.
+The relevant parameters for `metrics-utility` are the following:
++
+[cols="50%,50%",options="header"]
+|====
+| *Parameter* | *Value*
+| *`metrics_utility_enabled`* | `true`.
+| *`metrics_utility_cronjob_gather_schedule`* | `@hourly` or `@daily`.
+| *`metrics_utility_cronjob_report_schedule`* | `@daily` or `@monthly`.
+|====
++
+. Find the `metrics_utility_enabled` parameter and change its value to `true`.
+. Find the `metrics_utility_cronjob_gather_schedule` parameter and enter a value for how often the utility gathers usage information (for example, `@hourly` or `@daily`).
+. Find the `metrics_utility_cronjob_report_schedule` parameter and enter a value for how often the utility generates a report (for example, `@daily` or `@monthly`).
+. Click btn:[Create].
diff --git a/downstream/modules/platform/proc-deprovision-group.adoc b/downstream/modules/platform/proc-deprovision-group.adoc
index 44753741ae..f90991337e 100644
--- a/downstream/modules/platform/proc-deprovision-group.adoc
+++ b/downstream/modules/platform/proc-deprovision-group.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-deprovisioning-groups"]

@@ -16,10 +18,11 @@ You can deprovision any hosts in your inventory except for the first host specif

.Procedure

-* Add `*node_state=deprovision*` to the [group:vars] associated with the group you wish to deprovision.
-
+* Add `*node_state=deprovision*` to the [group:vars] associated with the group you want to deprovision.

-.Example
+.Group deprovision
+[example]
+====
----
[execution_nodes]
execution-node-1.example.com peers=execution-node-2.example.com
@@ -33,17 +36,4 @@ execution-node-7.example.com

[execution_nodes:vars]
node_state=deprovision
----
-
-== Deprovisioning isolated instance groups
-You have the option to manually remove any isolated instance groups using the `awx-manage` deprovisioning utility.
-
-WARNING: Use the deprovisioning command to only remove isolated instance groups. To deprovision instance groups from your {AutomationMesh} architecture, use the <> instead.
-
-.Procedure
-
-* Run the following command, replacing `____` with the name of the instance group:
-+
-[subs="+quotes"]
-----
-$ awx-manage unregister_queue --queuename=____
-----
+====
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-deprovision-isolated-groups.adoc b/downstream/modules/platform/proc-deprovision-isolated-groups.adoc
new file mode 100644
index 0000000000..592d7fd30f
--- /dev/null
+++ b/downstream/modules/platform/proc-deprovision-isolated-groups.adoc
@@ -0,0 +1,21 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-deprovision-isolated-groups"]
+
+= Deprovisioning isolated instance groups
+You can manually remove any isolated instance groups by using the `awx-manage` deprovisioning utility.
+
+[WARNING]
+====
+Use the deprovisioning command to remove only isolated instance groups.
+To deprovision instance groups from your {AutomationMesh} architecture, use the xref:proc-deprovisioning-groups[Deprovisioning groups using the installer] method instead.
+====
+
+.Procedure
+
+* Run the following command, replacing `____` with the name of the instance group:
++
+[subs="+quotes"]
+----
+$ awx-manage unregister_queue --queuename=____
+----
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-deprovision-isolated-nodes.adoc b/downstream/modules/platform/proc-deprovision-isolated-nodes.adoc
new file mode 100644
index 0000000000..5dbccf4dd6
--- /dev/null
+++ b/downstream/modules/platform/proc-deprovision-isolated-nodes.adoc
@@ -0,0 +1,25 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-deprovision-isolated-nodes"]
+
+= Deprovisioning isolated nodes
+You can manually remove any isolated nodes by using the `awx-manage` deprovisioning utility.
+
+[WARNING]
+====
+Use the deprovisioning command to remove only isolated nodes that have not migrated to execution nodes. To deprovision execution nodes from your {AutomationMesh} architecture, use the xref:proc-deprovisioning-nodes[Deprovisioning individual nodes using the installer] method instead.
+====
+
+.Procedure
+
+. Shut down the instance:
++
+----
+$ automation-controller-service stop
+----
+. Run the deprovision command from another instance, replacing `__host_name__` with the name of the node as listed in the inventory file:
++
+[subs="+quotes"]
+----
+$ awx-manage deprovision_instance --hostname=____
+----
diff --git a/downstream/modules/platform/proc-deprovisioning-mesh-nodes.adoc b/downstream/modules/platform/proc-deprovisioning-mesh-nodes.adoc
index 065e78e3be..01ca6c448e 100644
--- a/downstream/modules/platform/proc-deprovisioning-mesh-nodes.adoc
+++ b/downstream/modules/platform/proc-deprovisioning-mesh-nodes.adoc
@@ -1,4 +1,4 @@
-
+:_mod-docs-content-type: PROCEDURE

[id="proc-deprovisioning-nodes"]

@@ -15,12 +15,12 @@ You can deprovision any of your inventory’s hosts except for the first host sp

.Procedure

* Append `*node_state=deprovision*` to nodes in the installer file you want to deprovision.
++
+The following example inventory file deprovisions two nodes from an {AutomationMesh} configuration.

-.Example
-
-This example inventory file deprovisions two nodes from an {AutomationMesh} configuration.
- - +.Deprovision nodes +[example] +==== ----- [automationcontroller] 126-addr.tatu.home ansible_host=192.168.111.126 node_type=control @@ -36,22 +36,4 @@ peers=connected_nodes 100-addr.tatu.home ansible_host=192.168.111.100 peers=110-addr.tatu.home node_type=hop ----- - -== Deprovisioning isolated nodes -You have the option to manually remove any isolated nodes using the `awx-manage` deprovisioning utility. - -WARNING: Use the deprovisioning command to remove only isolated nodes that have not migrated to execution nodes. To deprovision execution nodes from your {AutomationMesh} architecture, use the <> instead. - -.Procedure - -. Shut down the instance: -+ ----- -$ automation-controller-service stop ----- -. Run the deprovision command from another instance, replacing `__host_name__` with the name of the node as listed in the inventory file: -[subs="+quotes"] -+ ----- -$ awx-manage deprovision_instance --hostname=____ ----- +==== \ No newline at end of file diff --git a/downstream/modules/platform/proc-downloading-containerized-aap.adoc b/downstream/modules/platform/proc-downloading-containerized-aap.adoc index 9b8451c9fe..5c724c9ec0 100644 --- a/downstream/modules/platform/proc-downloading-containerized-aap.adoc +++ b/downstream/modules/platform/proc-downloading-containerized-aap.adoc @@ -1,30 +1,51 @@ :_mod-docs-content-type: PROCEDURE -[id="downloading-containerizzed-aap_{context}"] +[id="downloading-ansible-automation-platform"] = Downloading {PlatformNameShort} -[role="_abstract"] +Choose the installation program you need based on your {RHEL} environment internet connectivity and download the installation program to your {RHEL} host. + +.Prerequisites +* You have logged in to the {RHEL} host as your non-root user. .Procedure -. Download the latest installer tarball from link:https://access.redhat.com/downloads/content/480/ver=2.4/rhel---9/2.4/x86_64/product-software[access.redhat.com]. This can be done directly within the RHEL host, which saves time. +. Download the latest version of containerized {PlatformNameShort} from the link:{PlatformDownloadUrl}[{PlatformNameShort} download page]. +.. For online installations: *{PlatformNameShort} {PlatformVers} Containerized Setup* +.. For offline or bundled installations: *{PlatformNameShort} {PlatformVers} Containerized Setup Bundle* -. If you have downloaded the tarball and optional manifest zip file onto your laptop, copy them onto your RHEL host. +. Copy the installation program `.tar.gz` file and the optional manifest `.zip` file onto your {RHEL} host. +.. You can use the `scp` command to securely copy the files. The basic syntax for `scp` is: + -Decide where you would like the installer to reside on the filesystem. Installation related files will be created under this location and require at least 10Gb for the initial installation. +---- +scp [options] +---- + -. Unpack the installer tarball into your installation directory, and cd into the unpacked directory. +Use the following `scp` command to copy the installation program `.tar.gz` file to an AWS EC2 instance with a private key (replace the placeholder `<>` values with your actual information): + -.. online installer +---- +scp -i ansible-automation-platform-containerized-setup-.tar.gz ec2-user@: +---- ++ +. Decide where you want the installation program to reside on the file system. This is referred to as your installation directory. +.. Installation related files are created under this location and require at least 15 GB for the initial installation. + +. 
Unpack the installation program `.tar.gz` file into your installation directory, and go to the unpacked directory. ++ +.. To unpack the online installer: + ---- -$ tar xfvz ansible-automation-platform-containerized-setup-2.4-2.tar.gz +$ tar xfvz ansible-automation-platform-containerized-setup-.tar.gz ---- + -.. bundled installer +.. To unpack the offline or bundled installer: + ---- -$ tar xfvz ansible-automation-platform-containerized-setup-bundle-2.4-2-.tar.gz +$ tar xfvz ansible-automation-platform-containerized-setup-bundle--.tar.gz ---- +[role="_additional-resources"] +.Additional resources + +* link:https://man7.org/linux/man-pages/man1/scp.1.html[scp(1) - Linux manual page] diff --git a/downstream/modules/platform/proc-edge-manager-add-devices-cli.adoc b/downstream/modules/platform/proc-edge-manager-add-devices-cli.adoc new file mode 100644 index 0000000000..024cef1e36 --- /dev/null +++ b/downstream/modules/platform/proc-edge-manager-add-devices-cli.adoc @@ -0,0 +1,61 @@ +:_mod-docs-content-type: PROCEDURE + +[id="edge-manager-add-devices-cli"] + += Adding devices to a fleet on the CLI + +Define the label selector to add devices into a fleet. + +Complete the following tasks: + +.Procedure + +. Run the following command to verify that the label selector returns the devices that you want to add to the fleet: + ++ +[source,bash] +---- +flightctl get devices -l type=pos-terminal -l stage=development +---- + +. If running the command returns the expected list of devices, you can define a fleet that selects the devices by using the following YAML file: + ++ +[source,yaml] +---- +apiVersion: flightctl.io/v1alpha1 +kind: Fleet +metadata: + name: my_fleet +spec: + selector: + matchLabels: + type: pos-terminal + stage: development +[...] +---- + +. Apply the change by running the following command: + ++ +[source,bash] +---- +flightctl apply -f my_fleet.yaml +---- + +. Check for any overlaps with the selector of other fleets by running the following command: + ++ +[source,bash] +---- +flightctl get fleets/my_fleet -o json | jq -r '.status.conditions[] | select(.type=="OverlappingSelectors").status' +---- + ++ +See the following example output: + ++ +[source,bash] +---- +False +---- diff --git a/downstream/modules/platform/proc-edge-manager-add-devices-ui.adoc b/downstream/modules/platform/proc-edge-manager-add-devices-ui.adoc new file mode 100644 index 0000000000..6bc59f8d80 --- /dev/null +++ b/downstream/modules/platform/proc-edge-manager-add-devices-ui.adoc @@ -0,0 +1,21 @@ +:_mod-docs-content-type: PROCEDURE + +[id="edge-manager-add-devices-ui"] + += Adding devices to a fleet on the web UI + +Define the label selector to add devices into a fleet on the web UI. + +Complete the following tasks: + +.Procedure + +. From the navigation panel, select menu:Application Links[Edge Manager]. +This opens the external Edge Manager instance. +. From the navigation panel, select *Fleets*. +Select the fleet that you want to add devices to. +. Click btn:[Actions] and select *Edit fleet*. +. In the *General info* tab, click *Add label* under the *Device selector* option. +. Add the label to select devices for your fleet. +Any devices with that label are added to the fleet. 
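+
+To confirm that the label selector now matches the devices that you expect, you can rerun the label query from the CLI procedure, for example with the `type=pos-terminal` label used earlier:
+
+[source,bash]
+----
+flightctl get devices -l type=pos-terminal
+----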
+
diff --git a/downstream/modules/platform/proc-edge-manager-build-app-packages.adoc b/downstream/modules/platform/proc-edge-manager-build-app-packages.adoc
new file mode 100644
index 0000000000..254e3fda78
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-build-app-packages.adoc
@@ -0,0 +1,55 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-build-app-packages"]
+
+= Building an application package image
+
+The {RedHatEdge} can download application packages from an Open Container Initiative (OCI) compatible registry.
+You can build an OCI container image that includes your application package in the `podman-compose` format and push the image to your OCI registry.
+
+Complete the following steps:
+
+.Procedure
+
+. Define the functionality of the application in a file called `podman-compose.yaml` that follows the Podman Compose specification:
+
+** Create a file called `Containerfile` with the following content:
++
+[source,bash]
+----
+FROM scratch <1>
+COPY podman-compose.yaml /podman-compose.yaml
+LABEL appType="compose" <2>
+----
+<1> Embed the compose file in a `scratch` container.
+<2> Add the `appType=compose` label.
+
+. Build and push the container image to your OCI registry:
+
+.. Define the image repository that you have permissions to write to by running the following command:
++
+[source,bash]
+----
+OCI_IMAGE_REPO=quay.io//
+----
+
+.. Define the image tag by running the following command:
++
+[source,bash]
+----
+OCI_IMAGE_TAG=v1
+----
+
+.. Build the application container image by running the following command:
++
+[source,bash]
+----
+podman build -t ${OCI_IMAGE_REPO}:${OCI_IMAGE_TAG} .
+----
+
+.. Push the container image by running the following command:
++
+[source,bash]
+----
+podman push ${OCI_IMAGE_REPO}:${OCI_IMAGE_TAG}
+----
diff --git a/downstream/modules/platform/proc-edge-manager-build-bootc-image.adoc b/downstream/modules/platform/proc-edge-manager-build-bootc-image.adoc
new file mode 100644
index 0000000000..23cd241bb2
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-build-bootc-image.adoc
@@ -0,0 +1,93 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-build-bootc-image"]
+
+= Building the operating system image with _bootc_
+
+Build the operating system image with `bootc` that contains the {RedHatEdge} agent.
+You can optionally include the following items in your operating system image:
+
+* The agent configuration for early binding
+* Any drivers
+* Host configuration
+* Application workloads that you need
+
+Complete the following steps:
+
+.Procedure
+
+. Create a `Containerfile` file with the following content to build a RHEL 9-based operating system image that includes the {RedHatEdge} agent and configuration:
+//The following containerfile includes RHACM, confirm step for AAP.
++
+[source,bash]
+----
+FROM registry.redhat.io/rhel9/rhel-bootc: <1>
+RUN subscription-manager repos --enable rhacm-2.13-for-rhel-9-$(uname -m)-rpms && \
+    dnf -y install flightctl-agent && \
+    dnf -y clean all && \
+    systemctl enable flightctl-agent.service && \
+    systemctl mask bootc-fetch-apply-updates.timer <2>
+----
+<1> The base image that is referenced in `FROM` is a bootable container (`bootc`) image that already has a Linux kernel, which allows you to reuse existing standard container build tools and workflows.
+<2> Disables the default automatic updates. The updates are managed by the {RedHatEdge}.
+
++
+[IMPORTANT]
+====
+If your device relies on containers from a private repository, you must place the device pull secret in the `/etc/ostree/auth.json` path.
+The pull secret must exist on the device before the secret can be consumed.
+====
+
+** *Optional:* To enable `podman-compose` application support, add the following section to the `Containerfile` file:
+
++
+[source,bash]
+----
+RUN dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm && \
+    dnf -y install podman-compose && \
+    dnf -y clean all && \
+    systemctl enable podman.service
+----
+
+** *Optional:* If you created the `config.yaml` for early binding, add the following section to the `Containerfile`:
+
++
+[source,bash]
+----
+ADD config.yaml /etc/flightctl/
+----
+
++
+For more information, see xref:edge-manager-request-cert[Optional: Requesting an enrollment certificate for early binding].
+
+. Define the Open Container Initiative (OCI) registry by running the following command:
+
++
+[source,bash]
+----
+OCI_REGISTRY=registry.redhat.io
+----
+
+. Define the image repository that you have permissions to write to by running the following command:
+
++
+[source,bash]
+----
+OCI_IMAGE_REPO=${OCI_REGISTRY}//
+----
+
+. Define the image tag by running the following command:
+
++
+[source,bash]
+----
+OCI_IMAGE_TAG=v1
+----
+
+. Build the operating system image for your target platform:
+
++
+[source,bash]
+----
+sudo podman build -t ${OCI_IMAGE_REPO}:${OCI_IMAGE_TAG} .
+----
diff --git a/downstream/modules/platform/proc-edge-manager-build-disk-image.adoc b/downstream/modules/platform/proc-edge-manager-build-disk-image.adoc
new file mode 100644
index 0000000000..56337beb56
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-build-disk-image.adoc
@@ -0,0 +1,35 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-build-disk-image"]
+
+= Building the operating system disk image
+
+Build the operating system disk image that contains the file system for your devices.
+
+Complete the following steps:
+
+.Procedure
+
+. Create a directory called `output` by running the following command:
+
++
+[source,bash]
+----
+mkdir -p output
+----
+
+. Use `bootc-image-builder` to generate an operating system disk image of type `iso` from your operating system image by running the following command:
+
++
+[source,bash]
+----
+sudo podman run --rm -it --privileged --pull=newer \
+    --security-opt label=type:unconfined_t \
+    -v "${PWD}/output":/output \
+    -v /var/lib/containers/storage:/var/lib/containers/storage \
+    registry.redhat.io/rhel9/bootc-image-builder:latest \
+    --type iso \
+    ${OCI_IMAGE_REPO}:${OCI_IMAGE_TAG}
+----
+
+When the `bootc-image-builder` completes, you can find the ISO disk image at the `${PWD}/output/bootiso/install.iso` path.
diff --git a/downstream/modules/platform/proc-edge-manager-build-image-QCoW2.adoc b/downstream/modules/platform/proc-edge-manager-build-image-QCoW2.adoc
new file mode 100644
index 0000000000..15c9dff7ad
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-build-image-QCoW2.adoc
@@ -0,0 +1,36 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-build-image-QCoW2"]
+
+= Building the QCoW2 disk image
+
+{OCPV} can download disk images from an OCI registry, but it expects a container disk image instead of an OCI artifact.
+
+Complete the following steps to build, sign, and upload the QCoW2 disk image:
+
+.Procedure
+
+. Create a file called `Containerfile.qcow2` with the following content:
+
++
+[source,bash]
+----
+FROM registry.access.redhat.com/ubi9/ubi:latest AS builder
+ADD --chown=107:107 output/qcow2/disk.qcow2 /disk/ <1>
+RUN chmod 0440 /disk/* <2>
+FROM scratch
+COPY --from=builder /disk/* /disk/ <3>
+----
+<1> Adds the QCoW2 disk image to a builder container to set the required `107` file ownership, which is the QEMU user.
+<2> Sets the required `0440` file permissions.
+<3> Copies the file to a scratch image.
+
+. Build, sign, and publish your disk image by running the following command:
++
+[source,bash]
+----
+sudo chown -R $(whoami):$(whoami) "${PWD}/output"
+OCI_DISK_IMAGE_REPO=${OCI_IMAGE_REPO}/diskimage-qcow2
+sudo podman build -t ${OCI_DISK_IMAGE_REPO}:${OCI_IMAGE_TAG} -f Containerfile.qcow2 .
+sudo podman push --sign-by-sigstore-private-key ./signingkey.private ${OCI_DISK_IMAGE_REPO}:${OCI_IMAGE_TAG}
+----
diff --git a/downstream/modules/platform/proc-edge-manager-build-image-bootc.adoc b/downstream/modules/platform/proc-edge-manager-build-image-bootc.adoc
new file mode 100644
index 0000000000..102b92d8fe
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-build-image-bootc.adoc
@@ -0,0 +1,33 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-build-image-bootc"]
+
+= Building the bootc image
+
+Build, sign, and publish the `bootc` operating system image by following the generic image building process:
+
+.Procedure
+
+. Create a directory called `output` by running the following command:
+
++
+[source,bash]
+----
+mkdir -p output
+----
+
+. Generate an operating system disk image of type `qcow2` from your operating system image by running the following command:
+
++
+[source,bash]
+----
+sudo podman run --rm -it --privileged --pull=newer \
+    --security-opt label=type:unconfined_t \
+    -v "${PWD}/output":/output \
+    -v /var/lib/containers/storage:/var/lib/containers/storage \
+    registry.redhat.io/rhel9/bootc-image-builder:latest \
+    --type qcow2 \
+    ${OCI_IMAGE_REPO}:${OCI_IMAGE_TAG}
+----
+
+When the `bootc-image-builder` completes, you can find the disk image under `${PWD}/output/qcow2/disk.qcow2`.
diff --git a/downstream/modules/platform/proc-edge-manager-build-sign-image.adoc b/downstream/modules/platform/proc-edge-manager-build-sign-image.adoc
new file mode 100644
index 0000000000..fc86e67363
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-build-sign-image.adoc
@@ -0,0 +1,47 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-build-sign-image"]
+
+= Signing and publishing the _bootc_ operating system image by using Sigstore
+
+To sign the `bootc` operating system image by using Sigstore, complete the following steps:
+
+.Procedure
+
+. Generate a Sigstore key pair named `signingkey.pub` and `signingkey.private`:
+
++
+[source,bash]
+----
+skopeo generate-sigstore-key --output-prefix signingkey
+----
+
+. Configure container tools such as Podman and Skopeo to upload Sigstore signatures together with your signed image to your OCI registry:
+
++
+[source,bash]
+----
+sudo tee "/etc/containers/registries.d/${OCI_REGISTRY}.yaml" > /dev/null <<EOF
+docker:
+  ${OCI_REGISTRY}:
+    use-sigstore-attachments: true
+EOF
+----
+
+. Create the `Repository` resource by running the following command:
+
++
+[source,bash]
+----
+flightctl apply -f site-settings-repo.yaml
+----
+
+. Verify that the resource has been created correctly and is accessible by the {RedHatEdge} by running the following command:
+. Verify that the resource has been correctly created and is accessible by {RedHatEdge} by running the following command:
+
++
+[source,bash]
+----
+flightctl get repository/site-settings
+----
++
+See the following example output:
+
++
+[source,bash]
+----
+NAME            TYPE   REPOSITORY URL               ACCESSIBLE
+site-settings   git    https://github.com//.git     True
+----
+
+. Apply the `example-site` configuration to a device by updating the device specification:
+
++
+[source,yaml]
+----
+apiVersion: flightctl.io/v1alpha1
+kind: Device
+metadata:
+  name: 
+spec:
+[...]
+  config: <1>
+  - name: example-site
+    configType: GitConfigProviderSpec
+    gitRef:
+      repository: site-settings
+      targetRevision: production
+      path: /etc/example-site <2>
+[...]
+----
+<1> The example configuration takes all the files from the `/etc/example-site` directory in the `production` branch of the `site-settings` repository and places them at the same path on the device, relative to the root directory (`/`).
+<2> Ensure that the target path is writeable by creating your directory structure. The root directory (`/`) is not writeable in `bootc` systems.
diff --git a/downstream/modules/platform/proc-edge-manager-deploy-apps.adoc b/downstream/modules/platform/proc-edge-manager-deploy-apps.adoc
new file mode 100644
index 0000000000..774edb182b
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-deploy-apps.adoc
@@ -0,0 +1,67 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-deploy-apps"]
+
+= Deploying applications to a device using the CLI
+
+Deploy an application package to a device from an OCI registry by using the CLI.
+
+Complete the following steps:
+
+.Procedure
+
+. Specify the application package that you want to deploy in the `spec.applications` field in the `Device` resource:
++
+[source,yaml]
+----
+apiVersion: flightctl.io/v1alpha1
+kind: Device
+metadata:
+  name: 
+spec:
+[...]
+  applications:
+  - name: wordpress <1>
+    image: quay.io/rhem-demos/wordpress-app:latest <2>
+    envVars: <3>
+      WORDPRESS_DB_HOST: 
+      WORDPRESS_DB_USER: 
+      WORDPRESS_DB_PASSWORD: 
+[...]
+----
+<1> A user-defined name for the application that is used when the web console and the CLI list applications.
+<2> A reference to an application package in an OCI registry.
+<3> Optional. A list of key-value pairs that are passed to the deployment tool as environment variables or command line flags.
++
+[NOTE]
+====
+For each application in the `applications` section of the device specification, you can find the corresponding device status information.
+====
++
+. Verify the status of an application deployment on a device by inspecting the device status information. Run the following command:
++
+[source,bash]
+----
+flightctl get device/ -o yaml
+----
++
+See the following example output:
++
+[source,yaml]
+----
+[...]
+spec:
+  applications:
+  - name: example-app
+    image: quay.io/flightctl-demos/example-app:v1
+status:
+  applications:
+  - name: example-app
+    ready: 3/3
+    restarts: 0
+    status: Running
+  applicationsSummary:
+    info: All application workloads are healthy.
+    status: Healthy
+[...]
+---- diff --git a/downstream/modules/platform/proc-edge-manager-enroll-device-cli.adoc b/downstream/modules/platform/proc-edge-manager-enroll-device-cli.adoc new file mode 100644 index 0000000000..057d15abbf --- /dev/null +++ b/downstream/modules/platform/proc-edge-manager-enroll-device-cli.adoc @@ -0,0 +1,62 @@ +:_mod-docs-content-type: PROCEDURE + +[id="edge-manager-enroll-device-cli"] + += Enrolling devices on the CLI + +You must enroll devices into the {RedHatEdge} service before you can manage them. + +.Prerequisites + +* You must install the {RedHatEdge} CLI. +See xref:edge-manager-install-CLI[Installing the Red Hat Edge Manager CLI]. +* You must log in to the {RedHatEdge} service. + +.Procedure + +. List all devices that are currently waiting for approval by running the following command: + ++ +-- +[source,bash] +---- +flightctl get enrollmentrequests --field-selector="status.approval.approved != true" +---- + +See the following example: + +[source,bash] +---- +NAME APPROVAL APPROVER APPROVED LABELS + Pending +---- +-- ++ + +[NOTE] +==== +The unique device name is generated by the agent and you cannot change it. +The agent chooses a base32-encoded hash of its public key as the device name. +==== ++ + +. Approve an enrollment request by specifying the name of the enrollment request. Optionally, you can add labels to the device by using the `--label` or `-l` flags. See the following example: + ++ +-- +[source,bash] +---- +flightctl approve -l region=eu-west-1 -l site=factory-berlin enrollmentrequest/54shovu028bvj6stkovjcvovjgo0r48618khdd5huhdjfn6raskg +---- + +See the following example output: + +[source,bash] +---- +NAME APPROVAL APPROVER APPROVED LABELS + Approved user region=eu-west-1,site=factory-berlin +---- +-- + +After you approve the enrollment request, the service issues the management certificate for the device and registers the device in the device inventory. +You can then manage the device. diff --git a/downstream/modules/platform/proc-edge-manager-generate-device-log.adoc b/downstream/modules/platform/proc-edge-manager-generate-device-log.adoc new file mode 100644 index 0000000000..3b9a80a2b2 --- /dev/null +++ b/downstream/modules/platform/proc-edge-manager-generate-device-log.adoc @@ -0,0 +1,22 @@ +:_mod-docs-content-type: PROCEDURE + +[id="edge-manager-generate-device-log"] + += Generating a device log bundle + +The device includes a script that generates a bundle of logs necessary to debug the agent. + +.Procedure + +* Run the following command on the device and include the .tar file in the bug report. ++ +[NOTE] +==== +This depends on an SSH connection to extract the .tar file. +==== ++ +[literal, options="nowrap" subs="+attributes"] +---- +sudo flightctl-must-gather +---- + diff --git a/downstream/modules/platform/proc-edge-manager-image-build.adoc b/downstream/modules/platform/proc-edge-manager-image-build.adoc new file mode 100644 index 0000000000..f10543b33d --- /dev/null +++ b/downstream/modules/platform/proc-edge-manager-image-build.adoc @@ -0,0 +1,28 @@ +:_mod-docs-content-type: PROCEDURE + +[id="edge-manager-image-build"] + += The image building process + +. Choose a base `bootc` operating system image, such as a Fedora, CentOS, or RHEL image. +. Create a container file that layers the following items onto the base `bootc` image: ++ +* The {RedHatEdge} agent and configuration. +* Optional: Any drivers specific to your target deployment environment. 
+* Optional: Host configuration, for example, certificate authority bundles, and application workloads that are common to all deployments. ++ +. Build, publish, and sign a `bootc` operating system image using `podman` and `skopeo`. +. Create an operating system disk image by using `bootc-image-builder`. +. Build, publish, and sign an operating system disk image using `skopeo`. + +[NOTE] +==== +The operating system disk image has partitions, volumes, the file system, and the initial `bootc` image. +You only need to create the operating system disk image once, during provisioning. +For later device updates, you only need the `bootc` operating system image, which has the files in the file system. +==== + +.Additional resources + +* xref:edge-manager-build-bootc[Building a _bootc_ operating system image for the {RedHatEdge}] +* xref:edge-manager-images-special-considerations[Special considerations for building images] diff --git a/downstream/modules/platform/proc-edge-manager-image-pullsecrets.adoc b/downstream/modules/platform/proc-edge-manager-image-pullsecrets.adoc new file mode 100644 index 0000000000..8a08ba1d48 --- /dev/null +++ b/downstream/modules/platform/proc-edge-manager-image-pullsecrets.adoc @@ -0,0 +1,36 @@ +:_mod-docs-content-type: PROCEDURE + +[id="edge-manager-image-pullsecrets"] + += Optional: Using image pull secrets + +If your device relies on containers from a private repository, you must configure a pull secret for the registry. +Complete the following steps: + +.Procedure + +. Depending on the kind of container image you use, place the pull secret in one or both of the following system paths on the device: ++ +* Operating system images use the `/etc/ostree/auth.json` path. +* Application container images use the `/root/.config/containers/auth.json` path. ++ +[IMPORTANT] +===== +The pull secret must exist on the device before the secret can be consumed. +===== + +. Ensure that the pull secrets use the following format: + ++ +[source,json] +---- +{ + "auths": { + "registry.example.com": { + "auth": "base64-encoded-credentials" + } + } +} +---- + +For more information, see the xref:edge-manager-additional-resources-images[Additional resources] section. diff --git a/downstream/modules/platform/proc-edge-manager-install-CLI.adoc b/downstream/modules/platform/proc-edge-manager-install-CLI.adoc new file mode 100644 index 0000000000..fdec7f58ce --- /dev/null +++ b/downstream/modules/platform/proc-edge-manager-install-CLI.adoc @@ -0,0 +1,28 @@ +:_mod-docs-content-type: PROCEDURE + +[id="edge-manager-install-CLI"] + += Installing the {RedHatEdge} CLI + +To install the {RedHatEdge} CLI, complete the following steps: + +.Procedure + +. Enable the subscription manager for the repository appropriate for your system by running the following command: ++ +[source,bash] +---- +sudo subscription-manager repos --enable ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms +---- ++ +For a full list of available repositories for the {RedHatEdge}, see the link:{URLEdgeManager}/assembly-edge-manager-images#edge-manager-additional-resources-images[_Additional resources_] section. + +. 
Install the `flightctl` CLI with your package manager by running the following command:
+
++
+[source,bash]
+----
+sudo dnf install flightctl
+----
+
+If you link:{URLEdgeManager}/assembly-edge-manager-install#edge-manager-oauth-manually[set up the OAuth application manually], you also need to make sure that one of the utilities `xdg-open`, `x-www-browser`, or `www-browser` is available, for example, by installing `xdg-utils`.
diff --git a/downstream/modules/platform/proc-edge-manager-install-rpm-package.adoc b/downstream/modules/platform/proc-edge-manager-install-rpm-package.adoc
new file mode 100644
index 0000000000..2a24e1ad75
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-install-rpm-package.adoc
@@ -0,0 +1,110 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-install-rpm-package"]
+
+= Installing the {RedHatEdge} RPM package
+
+Prepare your {RHEL} host for the installation of the {RedHatEdge} by enabling the necessary repositories, installing the `flightctl-services` package, configuring the `baseDomain`, and then starting and verifying the running services.
+
+.Prerequisites
+
+* An active {PlatformNameShort} subscription with a running instance and the necessary API URLs and OAuth credentials.
+* A machine separate from {PlatformNameShort} on which to install the {RedHatEdge}.
+* Podman installed for managing containers.
+* A {RHEL} host with:
+
+** Minimal installation
+** 4 cores and 16GB RAM (recommended)
+** Administrative access (root or sudo-capable user)
+** SSH access
+
+.Procedure
+
+. SSH into your {RHEL} host.
+. Authenticate and log in to the Red Hat Container Registry:
++
+----
+sudo podman login registry.redhat.io
+----
++
+. Install the necessary repositories and packages:
+** Ensure that the {PlatformNameShort} repositories are enabled by running the following example command based on the version of {RHEL} and architecture of your host:
++
+[literal, options="nowrap" subs="+attributes"]
+----
+sudo subscription-manager repos --enable ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms
+----
++
+** Install the {RedHatEdge} service by running:
++
+[literal, options="nowrap" subs="+attributes"]
+----
+sudo dnf install -y flightctl-services
+----
++
+. Update the installed `/etc/flightctl/service-config.yaml` to set the `baseDomain`:
++
+[literal, options="nowrap" subs="+attributes"]
+----
+sudo vi /etc/flightctl/service-config.yaml
+----
++
+[IMPORTANT]
+====
+Ensure that you set the `baseDomain` in the service configuration correctly.
+By default, the installation process attempts to automatically set this value based on the IP address of your {RHEL} host.
+
+However, if your environment uses a specific domain name to access this host, for example `rhem-example.com`, it is recommended that you manually update the `baseDomain` in `/etc/flightctl/service-config.yaml` to this hostname.
+
+Setting the `baseDomain` correctly ensures that all generated URLs, certificates, and internal configurations within the {RedHatEdge} are accurate for your network setup.
+This is especially important for integration with {PlatformNameShort} and for ensuring that the UI is accessible through the intended domain name.
+
+You can check the currently configured `baseDomain` using:
+
+----
+grep baseDomain: /etc/flightctl/service-config.yaml
+----
+====
++
+. Enable and start the services:
++
+[literal, options="nowrap" subs="+attributes"]
+----
+sudo systemctl enable flightctl.target
+sudo systemctl start flightctl.target
+----
++
+. Verify that services are running:
++
+[literal, options="nowrap" subs="+attributes"]
+----
+sudo systemctl list-units flightctl-*.service
+----
++
+You should see the following seven services running:
++
+
+* flightctl-db
+* flightctl-kv
+* flightctl-api
+* flightctl-periodic
+* flightctl-worker
+* flightctl-ui
+* flightctl-cli-artifacts
+
++
+. Go to the UI at the `baseDomain` stored in the service configuration file:
++
+----
+grep baseDomain: /etc/flightctl/service-config.yaml
+----
++
+Visit the displayed `baseDomain` in your web browser to access the UI.
+
+.Troubleshooting
+
+If your services do not run correctly, use the following log command to troubleshoot further and remediate:
+
+----
+journalctl -u flightctl- -b --no-pager
+----
diff --git a/downstream/modules/platform/proc-edge-manager-integrate-aap.adoc b/downstream/modules/platform/proc-edge-manager-integrate-aap.adoc
new file mode 100644
index 0000000000..784344c3bb
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-integrate-aap.adoc
@@ -0,0 +1,65 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-integrate-aap"]
+
+= Integrating with {PlatformNameShort}
+
+Integrate the {RedHatEdge} with your {PlatformNameShort} instance by modifying the `service-config.yaml` file to include authentication type, API URLs, OAuth client ID, and an optional OAuth token, followed by restarting the services.
+
+.Procedure
+
+. Stop the flightctl services before editing your `service-config.yaml` file:
++
+[literal, options="nowrap" subs="+attributes"]
+----
+sudo systemctl stop flightctl.target
+----
++
+. Configure the integration settings by editing the configuration file:
++
+[literal, options="nowrap" subs="+attributes"]
+----
+sudo vi /etc/flightctl/service-config.yaml
+----
++
+. Update the configuration file to integrate with {PlatformNameShort}:
++
+[source,yaml]
+----
+global:
+  baseDomain: <1>
+  auth:
+    type: aap <2>
+    insecureSkipTlsVerify: false <3>
+    aap:
+      apiUrl: https://your-aap-instance.example.com <4>
+      externalApiUrl: https://your-aap-instance.example.com <5>
+      oAuthApplicationClientId: <6>
+      oAuthToken: <7>
+----
++
+<1> The domain name or IP address for the host. This is automatically set when the RPM is installed, but you can override it.
+It is the only field that is mandatory.
+<2> Set this to `aap` to enable {PlatformNameShort} authentication.
+<3> Set to `false`.
+Only set this to `true` to skip TLS certificate verification for the {PlatformNameShort} URLs.
+For production environments, consider configuring a CA certificate (see the Self-signed certificates section).
+<4> The internal-facing API URL of the running {PlatformNameShort} instance that the {RedHatEdge} makes requests against.
+You can configure this URL to be an internally accessible URL for the running {PlatformNameShort} instance.
+For example, if there are separate internal or external ingresses.
+<5> The externally accessible URL of your running {PlatformNameShort} instance.
+<6> If you are using the automatic method, this field is not necessary.
+This is the Client ID of the OAuth application configured in {PlatformNameShort} for the {RedHatEdge}.
+If you do not have one yet, you can leave this empty and give an `oAuthToken` to allow the setup to create it.
+<7> If you are using the manual method, this field is not necessary.
+This is an OAuth token with write permissions for the "Default" organization in your {PlatformNameShort} instance.
+This is only needed if you want the setup process to automatically create the OAuth application.
+Once created, this token is no longer necessary.
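+
++
+Optional: before restarting the services, you can sanity-check the edited `auth` section. This is a plain `grep` over the configuration file; adjust the number of context lines to your file's layout:
++
+[source,bash]
+----
+grep -A 12 'auth:' /etc/flightctl/service-config.yaml
+----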
+
++
+. Start the services:
++
+[literal, options="nowrap" subs="+attributes"]
+----
+sudo systemctl start flightctl.target
+----
diff --git a/downstream/modules/platform/proc-edge-manager-log-into-CLI.adoc b/downstream/modules/platform/proc-edge-manager-log-into-CLI.adoc
new file mode 100644
index 0000000000..81adf01b4b
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-log-into-CLI.adoc
@@ -0,0 +1,50 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-log-into-CLI"]
+
+= Logging into the {RedHatEdge} through the CLI
+
+How you log in to the {RedHatEdge} depends on whether you chose the link:{URLEdgeManager}/assembly-edge-manager-install#edge-manager-oauth-auto[automatic] or link:{URLEdgeManager}/assembly-edge-manager-install#edge-manager-oauth-manually[manual] method when you initially set up the application.
+
+.Procedure
+
+* If you use the automatic setup, you can create a personal access token, even with only the Read scope (under the profile icon in the top right corner of your {PlatformNameShort} UI > *User details* > *Tokens* tab). You can then use this token to log in directly through the CLI, with the following example syntax:
++
+[source,bash]
+----
+flightctl login https://:3443 --token= --insecure-skip-tls-verify
+----
+
+* If you use the manual setup, use the *Client ID* to log in through a web-based process, with the following example syntax:
++
+[source,bash]
+----
+flightctl login https://:3443 --web --client-id= --insecure-skip-tls-verify
+----
++
+** This opens a web browser and asks you to approve the login.
++
+The `--insecure-skip-tls-verify` parameter is used only if you have not generated your own valid certificates.
+
+.Next steps
+
+Use the following commands to help you with the CLI:
+
+* To output a list of available commands, use:
++
+[source,bash]
+----
+flightctl
+----
+* To output both the flightctl CLI version and the back-end {RedHatEdge} version, use:
++
+[source,bash]
+----
+flightctl version
+----
+
+[IMPORTANT]
+====
+To ensure supportability and proper functionality, the version of the flightctl CLI must match the version of the {RedHatEdge} in use.
+Mismatched versions are not supported.
+====
diff --git a/downstream/modules/platform/proc-edge-manager-monitor-device-resources-cli.adoc b/downstream/modules/platform/proc-edge-manager-monitor-device-resources-cli.adoc
new file mode 100644
index 0000000000..8c6e8c2817
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-monitor-device-resources-cli.adoc
@@ -0,0 +1,41 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-monitor-device-resources-cli"]
+
+= Monitoring device resources on the CLI
+
+Monitor the resources of your device through the CLI, providing you with the tools and commands to track performance and troubleshoot issues.
+
+.Procedure
+
+* Add resource monitors to the `resources:` section of the device's specification.
+
+For example, add the following monitor for your disk:
+
+[source,yaml]
+----
+apiVersion: flightctl.io/v1alpha1
+kind: Device
+metadata:
+  name: 
+spec:
+[...]
+  resources:
+  - monitorType: Disk
+    samplingInterval: 5s <1>
+    path: /application_data <2>
+    alertRules:
+    - severity: Warning <3>
+      duration: 30m
+      percentage: 75
+      description: Disk space for application data is >75% full for over 30m.
+    - severity: Critical <4>
+      duration: 10m
+      percentage: 90
+      description: Disk space for application data is >90% full over 10m.
+[...]
+----
+<1> Samples usage every 5 seconds.
+<2> Checks disk usage on the file system that is associated with the `/application_data` path.
+<3> Initiates a warning if the average usage exceeds 75% for more than 30 minutes.
+<4> Initiates a critical alert if the average usage exceeds 90% for over 10 minutes.
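+
+When an alert rule fires, the result is reflected in the device status.
+The exact rendering is not shown in this module; as an assumption based on the `status.resources` block in the device details example elsewhere in this guide, a triggered `Warning` rule on the disk monitor might surface as follows:
+
+[source,yaml]
+----
+status:
+  resources:
+    cpu: Healthy
+    disk: Warning
+    memory: Healthy
+----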
diff --git a/downstream/modules/platform/proc-edge-manager-oauth-auto.adoc b/downstream/modules/platform/proc-edge-manager-oauth-auto.adoc
new file mode 100644
index 0000000000..b15ad7eeba
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-oauth-auto.adoc
@@ -0,0 +1,18 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-oauth-auto"]
+
+= Setting up the OAuth application automatically
+
+Set up the OAuth application automatically by generating an OAuth token within {PlatformNameShort} and adding it to your configuration file.
+Upon service startup, the application is automatically created and the client ID is updated.
+
+.Procedure
+
+. Generate an OAuth token in {PlatformNameShort}:
+.. From the navigation panel, select menu:{MenuAM}[Users].
+.. Select a user with write permissions to the *Default* organization (admin user recommended).
+.. Click the *Tokens* tab for that user.
+.. Click btn:[Create token] and enter the relevant details.
+... *Scope*: Select *Write*.
+. Go to the link:{URLEdgeManager}/assembly-edge-manager-install#edge-manager-integrate-aap[Integrating with {PlatformNameShort}] section for the steps to edit your `service-config.yaml` file and complete setting up the OAuth application automatically.
diff --git a/downstream/modules/platform/proc-edge-manager-oauth-manually.adoc b/downstream/modules/platform/proc-edge-manager-oauth-manually.adoc
new file mode 100644
index 0000000000..7f411ecfe4
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-oauth-manually.adoc
@@ -0,0 +1,32 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-oauth-manually"]
+
+= Setting up the OAuth application manually
+
+Manually set up an OAuth application within your {PlatformNameShort} instance.
+This is important for enabling token-based authentication and integrating external applications such as the {RedHatEdge}.
+
+.Procedure
+
+. From the navigation panel on your {PlatformNameShort} instance, go to menu:{MenuAM}[OAuth Applications].
+. Click btn:[Create OAuth application].
+. Enter the following details:
+** *Name*: Enter a name such as "Red Hat Edge Manager".
+This is the name visible in the {PlatformNameShort} UI.
+** *URL*: The `baseDomain` of your {RedHatEdge} UI, prefixed with `https://`.
+** *Organization*: Select *Default*.
+** *Authorization grant type*: Select *Authorization code*.
+** *Client*: Select *Public*.
+** *Redirect URIs*:
+*** The redirect configured for your UI is your `baseDomain` with a `/callback` route appended, such as `https://your-edge-manager-ip-or-domain:443/callback`.
+If you have more than one URI, enter them in this field separated by a space, not commas or other delimiters.
+*** To provide a redirect for CLI usage (`flightctl login`), configure a redirect URI, such as `http://127.0.0.1/callback`.
+. Click btn:[Create OAuth application].
+An *Application Links* section is now visible in the navigation panel.
+. Copy the *Client ID*; you need it to update the *oAuthApplicationClientId* value in your `service-config.yaml` file.
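++
+The copied value belongs in the `aap` section of your `service-config.yaml` file, in the `oAuthApplicationClientId` field shown in the integration configuration. For example:
++
+[source,yaml]
+----
+global:
+  auth:
+    aap:
+      oAuthApplicationClientId: 
+----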
+. Go to the link:{URLEdgeManager}/assembly-edge-manager-install#edge-manager-integrate-aap[Integrating with {PlatformNameShort}] section for the steps to edit your `service-config.yaml` file and complete setting up the OAuth application manually.
+
+.Additional resources
+
+* link:{URLCentralAuth}/gw-token-based-authentication[Configuring access to external applications with token-based authentication]
diff --git a/downstream/modules/platform/proc-edge-manager-provision-cloudinit-config.adoc b/downstream/modules/platform/proc-edge-manager-provision-cloudinit-config.adoc
new file mode 100644
index 0000000000..3972d87ded
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-provision-cloudinit-config.adoc
@@ -0,0 +1,39 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-provision-cloudinit-config"]
+
+= Creating the _cloud-init_ configuration
+
+To create the `cloud-init` configuration, complete the following steps:
+
+.Procedure
+
+. Request a new {RedHatEdge} agent enrollment configuration and store it in a file called `config.yaml` by running the following command:
+
++
+[source,bash]
+----
+flightctl certificate request --signer=enrollment --expiration=365d --output=embedded > config.yaml
+----
+
+. Create a cloud configuration user data file called `cloud-config.yaml` that places the agent configuration in the correct location on the first boot by running the following command:
+
++
+[source,bash]
+----
+cat <<EOF > cloud-config.yaml
+#cloud-config
+write_files:
+- path: /etc/flightctl/config.yaml
+  content: $(cat config.yaml | base64 -w0)
+  encoding: b64
+EOF
+----
+
+. Create a Kubernetes `Secret` that contains the cloud configuration user data file:
+
++
+[source,bash]
+----
+oc create secret generic enrollment-secret --from-file=userdata=cloud-config.yaml
+----
diff --git a/downstream/modules/platform/proc-edge-manager-provision-virt-create.adoc b/downstream/modules/platform/proc-edge-manager-provision-virt-create.adoc
new file mode 100644
index 0000000000..3e365d61f8
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-provision-virt-create.adoc
@@ -0,0 +1,62 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-provision-virt-create"]
+
+= Creating the virtual machine
+
+Create a virtual machine that has its primary disk populated from your QCoW2 container disk image and a `cloud-init` configuration drive that is populated from your enrollment secret.
+
+Complete the following steps:
+
+.Procedure
+
+. Create a file that has the `VirtualMachine` resource manifest by running the following command:
+
++
+[source,bash]
+----
+cat <<EOF > my-bootc-vm.yaml
+apiVersion: kubevirt.io/v1
+kind: VirtualMachine
+metadata:
+  name: my-bootc-vm
+spec:
+  runStrategy: RerunOnFailure
+  template:
+    spec:
+      domain:
+        cpu:
+          cores: 1
+        memory:
+          guest: 1024M
+        devices:
+          disks:
+          - name: containerdisk
+            disk:
+              bus: virtio
+          - name: cloudinitdisk
+            disk:
+              bus: virtio
+      volumes:
+      - name: containerdisk
+        containerDisk:
+          image: ${OCI_DISK_IMAGE_REPO}:${OCI_IMAGE_TAG}
+      - name: cloudinitdisk
+        cloudInitConfigDrive:
+          secretRef:
+            name: enrollment-secret
+EOF
+----
+
+. Apply the resource manifest to your cluster by running the following command:
+
++
+[source,bash]
+----
+oc apply -f my-bootc-vm.yaml
+----
+
+.Additional resources
+
+* For more information about how to inject the configuration through the `cloud-init` user data, see the link:https://cloudinit.readthedocs.io/en/latest/[Cloud-init documentation].
+* See xref:edge-manager-virt[Building images for {OCPV}].
diff --git a/downstream/modules/platform/proc-edge-manager-request-cert.adoc b/downstream/modules/platform/proc-edge-manager-request-cert.adoc
new file mode 100644
index 0000000000..4facff42b4
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-request-cert.adoc
@@ -0,0 +1,47 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-request-cert"]
+
+= Optional: Requesting an enrollment certificate for early binding
+
+If you want to include an agent configuration in the image, complete the following steps:
+
+.Procedure
+
+. Log in to the flightctl CLI by following the steps in xref:edge-manager-log-into-CLI[Logging into the Red Hat Edge Manager through the CLI].
++
+[NOTE]
+====
+The CLI uses the certificate authority pool of the host to verify the identity of the {RedHatEdge} service. If you use self-signed certificates and do not add your certificate authority certificate to the pool, the verification can lead to a TLS verification error. You can bypass the server verification by adding the `--insecure-skip-tls-verify` flag to your command.
+====
+
+. Get the enrollment credentials in the format of an agent configuration file by running the following command:
+
++
+[source,bash]
+----
+flightctl certificate request --signer=enrollment --expiration=365d --output=embedded > config.yaml
+----
++
+
+[NOTE]
+====
+* The `--expiration=365d` option specifies that the credentials are valid for a year.
+* The `--output=embedded` option specifies that the output is an agent configuration file with the enrollment credentials embedded.
+====
++
+The returned `config.yaml` contains the URLs of the {RedHatEdge} service, the certificate authority bundle, and the enrollment client certificate and key for the agent.
+See the following example:
+
++
+[source,yaml]
+----
+enrollment-service:
+  authentication:
+    client-certificate-data: LS0tLS1CRUdJTiBD...
+    client-key-data: LS0tLS1CRUdJTiBF...
+  service:
+    certificate-authority-data: LS0tLS1CRUdJTiBD...
+    server: https://agent-api.flightctl.127.0.0.1.nip.io:7443
+  enrollment-ui-endpoint: https://ui.flightctl.127.0.0.1.nip.io:8081
+----
diff --git a/downstream/modules/platform/proc-edge-manager-sign-disk-image.adoc b/downstream/modules/platform/proc-edge-manager-sign-disk-image.adoc
new file mode 100644
index 0000000000..3d5113bb8f
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-sign-disk-image.adoc
@@ -0,0 +1,60 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-sign-disk-image"]
+
+= Optional: Signing and publishing the operating system disk image to an Open Container Initiative registry
+
+Sign and publish your disk image to your Open Container Initiative (OCI) registry. Optionally, you can compress and publish the disk image as an OCI artifact to the same OCI registry as your `bootc` images, which facilitates a unified hosting and distribution of `bootc` and disk images. The following steps publish your ISO disk image to a repository named after your `bootc` image with `/diskimage-iso` appended.
+
+.Prerequisites
+
+* You created a private key by using Sigstore.
+See xref:edge-manager-build-sign-image[Signing and publishing the _bootc_ operating system image by using Sigstore].
+
+Sign and publish your disk image to your OCI registry by completing the following steps:
+
+.Procedure
+
+. Change the owner of the directory where the ISO disk image is located from `root` to your current user by running the following command:
+
++
+[source,bash]
+----
+sudo chown -R $(whoami):$(whoami) "${PWD}/output"
+----
+
+. Define the `OCI_DISK_IMAGE_REPO` environment variable to be the same repository as your `bootc` image with `/diskimage-iso` appended by running the following command:
++
+[source,bash]
+----
+OCI_DISK_IMAGE_REPO=${OCI_IMAGE_REPO}/diskimage-iso
+----
+
+. Create a manifest list by running the following command:
++
+[source,bash]
+----
+sudo podman manifest create \
+    ${OCI_DISK_IMAGE_REPO}:${OCI_IMAGE_TAG}
+----
+
+. Add the ISO disk image to the manifest list as an OCI artifact by running the following command:
++
+[source,bash]
+----
+sudo podman manifest add \
+    --artifact --artifact-type application/vnd.diskimage.iso \
+    --arch=amd64 --os=linux \
+    ${OCI_DISK_IMAGE_REPO}:${OCI_IMAGE_TAG} \
+    "${PWD}/output/bootiso/install.iso"
+----
+
+. Sign the manifest list with your private Sigstore key and push the image to the registry by running the following command:
++
+[source,bash]
+----
+sudo podman manifest push --all \
+    --sign-by-sigstore-private-key ./signingkey.private \
+    ${OCI_DISK_IMAGE_REPO}:${OCI_IMAGE_TAG} \
+    docker://${OCI_DISK_IMAGE_REPO}:${OCI_IMAGE_TAG}
+----
diff --git a/downstream/modules/platform/proc-edge-manager-update-labels.adoc b/downstream/modules/platform/proc-edge-manager-update-labels.adoc
new file mode 100644
index 0000000000..a68ffcb26d
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-update-labels.adoc
@@ -0,0 +1,52 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-update-labels"]
+
+= Updating labels on the CLI
+
+Update labels on your devices by using the CLI.
+
+Complete the following steps:
+
+.Procedure
+
+. Export the current definition of the device into a file by running the following command:
+
++
+[source,bash]
+----
+flightctl get device/ -o yaml > my_device.yaml
+----
+
+. Use your preferred editor to edit the `my_device.yaml` file.
+See the following example:
++
+[source,yaml]
+----
+apiVersion: flightctl.io/v1alpha1
+kind: Device
+metadata:
+  labels:
+    some_key: some_value
+    some_other_key: some_other_value
+  name: 
+spec:
+[...]
+----
+
+. Save the file and apply the updated device definition by running the following command:
+
++
+[source,bash]
+----
+flightctl apply -f my_device.yaml
+----
+
+. Verify your changes by running the `flightctl get devices -o wide` command. See the following example output:
++
+[source,bash]
+----
+NAME   ALIAS   OWNER   SYSTEM   UPDATED      APPLICATIONS   LAST SEEN       LABELS
+                       Online   Up-to-date                  3 minutes ago   some_key=some_value,some_other_key=some_other_value
+                       Online   Up-to-date                  4 minutes ago   region=eu-west-1,site=factory-madrid
+----
diff --git a/downstream/modules/platform/proc-edge-manager-update-os-cli.adoc b/downstream/modules/platform/proc-edge-manager-update-os-cli.adoc
new file mode 100644
index 0000000000..90708ad824
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-update-os-cli.adoc
@@ -0,0 +1,43 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-update-os-cli"]
+
+= Updating the operating system on the CLI
+
+Update the operating system of a device by using the CLI.
+
+Complete the following steps:
+
+.Procedure
+
+. Get the current resource manifest of the device by running the following command:
+
++
+[source,bash]
+----
+flightctl get device/ -o yaml > my_device.yaml
+----
+
+. Edit the `Device` resource to specify the new operating system name and version target.
+
++
+[source,yaml]
+----
+apiVersion: flightctl.io/v1alpha1
+kind: Device
+metadata:
+  name: 
+spec:
+[...]
+  os:
+    image: quay.io/flightctl/rhel:9.5
+[...]
+----
+
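++
+Optional: before applying, you can confirm that the file now references the new image. This is a plain `grep` check over the exported manifest:
++
+[source,bash]
+----
+grep -A 1 'os:' my_device.yaml
+----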
+. Apply the updated `Device` resource by running the following command:
+
++
+[source,bash]
+----
+flightctl apply -f .yaml
+----
diff --git a/downstream/modules/platform/proc-edge-manager-view-device-config.adoc b/downstream/modules/platform/proc-edge-manager-view-device-config.adoc
new file mode 100644
index 0000000000..ae285758fc
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-view-device-config.adoc
@@ -0,0 +1,19 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-view-device-config"]
+
+= Viewing a device's effective target configuration
+
+The device manifest returned by the `flightctl get device` command contains only references to external configuration and secret objects.
+Only when the device agent queries the service does the service replace the references with the actual configuration and secret data.
+While this better protects potentially sensitive data, it also makes troubleshooting faulty configurations harder.
+For this reason, a user can be authorized to query the effective configuration as rendered by the service to the agent.
+
+.Procedure
+
+* To query the effective configuration, use the following command:
++
+[literal, options="nowrap" subs="+attributes"]
+----
+flightctl get device/${device_name} --rendered | jq
+----
diff --git a/downstream/modules/platform/proc-edge-manager-view-device-inventory-cli.adoc b/downstream/modules/platform/proc-edge-manager-view-device-inventory-cli.adoc
new file mode 100644
index 0000000000..705376e101
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-view-device-inventory-cli.adoc
@@ -0,0 +1,96 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-view-device-inventory-cli"]
+
+= Viewing device inventory and device details on the CLI
+
+Complete the following steps:
+
+.Procedure
+
+. View the devices in the device inventory by running the following command:
+
++
+--
+[source,bash]
+----
+flightctl get devices
+----
+
+See the following example output:
+
+[source,bash]
+----
+NAME   ALIAS   OWNER   SYSTEM   UPDATED      APPLICATIONS   LAST SEEN
+                       Online   Up-to-date                  3 seconds ago
+----
+--
+
+. View the details of this device in YAML format by running the following command:
+
++
+--
+[source,bash]
+----
+flightctl get device/ -o yaml
+----
+
+See the following example output:
+
+[source,yaml]
+----
+apiVersion: flightctl.io/v1alpha1
+kind: Device
+metadata:
+  name: 
+  labels: <1>
+    region: eu-west-1
+    site: factory-berlin
+spec:
+  os:
+    image: quay.io/flightctl/rhel:9.5 <2>
+  config:
+  - name: my-os-configuration <3>
+    configType: GitConfigProviderSpec
+    gitRef:
+      path: /configuration
+      repository: my-configuration-repo
+      targetRevision: production
+status:
+  os:
+    image: quay.io/flightctl/rhel:9.5 <4>
+  config:
+    renderedVersion: "1" <5>
+  applications:
+    data: {} <6>
+    summary:
+      status: Unknown <7>
+  resources: <8>
+    cpu: Healthy
+    disk: Healthy
+    memory: Healthy
+  systemInfo: <9>
+    architecture: amd64
+    bootID: 037750f7-f293-4c5b-b06e-481eef4e883f
+    operatingSystem: linux
+  summary:
+    info: ""
+    status: Online <10>
+  updated:
+    status: UpToDate <11>
+  lastSeen: "2024-08-28T11:45:34.812851905Z" <12>
+[...]
+----
+<1> User-defined labels assigned to the device.
+<2> The target OS image version of the device.
+<3> The target OS configuration of the device.
+<4> The current OS image version of the device.
+<5> The current OS configuration version of the device.
+<6> The current list of deployed applications of the device.
+<7> The health status of applications on the device.
+<8> The availability of CPU, disk, and memory resources. +<9> Basic system information. +<10> The health status of the device. +<11> The update status of the device. +<12> The last check-in time and date of the device. +-- diff --git a/downstream/modules/platform/proc-edge-manager-view-device-inventory-ui.adoc b/downstream/modules/platform/proc-edge-manager-view-device-inventory-ui.adoc new file mode 100644 index 0000000000..ebcd721588 --- /dev/null +++ b/downstream/modules/platform/proc-edge-manager-view-device-inventory-ui.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: PROCEDURE + +[id="edge-manager-view-device-inventory-ui"] + += Viewing device inventory and device details on the web UI + +Complete the following steps: + +.Procedure + +. From the navigation panel, select menu:Application Links[Edge Manager]. +This opens the external Edge Manager instance. +. From the navigation panel, select *Devices* where you can view your device inventory, details, and decommission devices. diff --git a/downstream/modules/platform/proc-edge-manager-view-device-labels-ui.adoc b/downstream/modules/platform/proc-edge-manager-view-device-labels-ui.adoc new file mode 100644 index 0000000000..b3c638d7f6 --- /dev/null +++ b/downstream/modules/platform/proc-edge-manager-view-device-labels-ui.adoc @@ -0,0 +1,15 @@ +:_mod-docs-content-type: PROCEDURE + +[id="edge-manager-view-device-labels-ui"] + += Viewing devices and their labels on the web UI + +View devices and their associated labels on the web UI. You can use labels to organize your devices and device fleets. + +Complete the following steps: + +. From the navigation panel, select menu:Application Links[Edge Manager]. +This opens the external Edge Manager instance. +. From the navigation panel, select *Devices*. +. Select the device you want to manage. +In the *Details* tab you can view the associated labels under *Labels*. diff --git a/downstream/modules/platform/proc-edge-manager-view-devices-cli.adoc b/downstream/modules/platform/proc-edge-manager-view-devices-cli.adoc new file mode 100644 index 0000000000..7eb78d79a4 --- /dev/null +++ b/downstream/modules/platform/proc-edge-manager-view-devices-cli.adoc @@ -0,0 +1,43 @@ +:_mod-docs-content-type: PROCEDURE + +[id="edge-manager-view-devices-cli"] + += Viewing devices and their labels on the CLI + +View devices and their associated labels. +You can use labels to organize your devices and device fleets. + +Complete the following steps: + +.Procedure + +. View devices in your inventory with their labels by using the `-o wide` option: ++ +[source,bash] +---- +flightctl get devices -o wide +---- ++ +See the following example output: ++ +[source,bash] +---- +NAME ALIAS OWNER SYSTEM UPDATED APPLICATIONS LAST SEEN LABELS + Online Up-to-date 3 seconds ago region=eu-west-1,site=factory-berlin + Online Up-to-date 1 minute ago region=eu-west-1,site=factory-madrid +---- ++ +. 
View devices in your inventory with a specific label or set of labels by using the `-l ` option:
++
+[source,bash]
+----
+flightctl get devices -l site=factory-berlin -o wide
+----
++
+See the following example output:
++
+[source,bash]
+----
+NAME   ALIAS   OWNER   SYSTEM   UPDATED      APPLICATIONS   LAST SEEN       LABELS
+                       Online   Up-to-date                  3 seconds ago   region=eu-west-1,site=factory-berlin
+----
diff --git a/downstream/modules/platform/proc-edge-manager-virt.adoc b/downstream/modules/platform/proc-edge-manager-virt.adoc
new file mode 100644
index 0000000000..78412eefd8
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-virt.adoc
@@ -0,0 +1,44 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-virt"]
+
+= Building images for Red Hat OpenShift Virtualization
+
+When building operating system images and disk images for Red Hat OpenShift Virtualization, you can follow the generic image building process with the following changes:
+
+* Using late binding by injecting the enrollment certificate or the agent configuration through `cloud-init` when provisioning the virtual device.
+* Adding the `open-vm-tools` guest tools to the image.
+* Building a disk image of type `qcow2` instead of `iso`.
+
+Complete the generic steps, modifying the following steps:
+
+.Procedure
+
+. Build an operating system image based on RHEL 9 that includes the {RedHatEdge} agent and VM guest tools but excludes the agent configuration.
+
+. Create a file named `Containerfile` with the following content:
+
++
+[source,bash]
+----
+FROM registry.redhat.io/rhel9/rhel-bootc:latest
+RUN subscription-manager repos --enable rhacm-2.13-for-rhel-9-$(uname -m)-rpms && \
+    dnf -y install flightctl-agent && \
+    dnf -y clean all && \
+    systemctl enable flightctl-agent.service
+RUN dnf -y install cloud-init open-vm-tools && \
+    dnf -y clean all && \
+    ln -s ../cloud-init.target /usr/lib/systemd/system/default.target.wants && \
+    systemctl enable vmtoolsd.service
+----
+
+. *Optional:* To enable `podman-compose` application support, add the following section to the `Containerfile` file:
+
++
+[source,bash]
+----
+RUN dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm && \
+    dnf -y install podman-compose && \
+    dnf -y clean all && \
+    systemctl enable podman.service
+----
diff --git a/downstream/modules/platform/proc-edge-manager-vmware.adoc b/downstream/modules/platform/proc-edge-manager-vmware.adoc
new file mode 100644
index 0000000000..47c3418ff0
--- /dev/null
+++ b/downstream/modules/platform/proc-edge-manager-vmware.adoc
@@ -0,0 +1,56 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="edge-manager-vmware"]
+
+= Building images for VMware vSphere
+
+When building operating system images and disk images for VMware vSphere, you can follow the generic image building process with the following changes:
+
+* Using late binding by injecting the enrollment certificate or the agent configuration through `cloud-init` when provisioning the virtual device.
+* Adding the `open-vm-tools` guest tools to the image.
+* Building a disk image of type `vmdk` instead of `iso`.
+
+Complete the generic steps, modifying the following steps:
+
+.Procedure
+
+. Build an operating system image based on RHEL 9 that includes the {RedHatEdge} agent and VM guest tools but excludes the agent configuration.
+
+. Create a file named `Containerfile` with the following content:
+
++
+[source,bash]
+----
+FROM registry.redhat.io/rhel9/rhel-bootc:latest
+RUN subscription-manager repos --enable rhacm-2.13-for-rhel-9-$(uname -m)-rpms && \
+    dnf -y install flightctl-agent && \
+    dnf -y clean all && \
+    systemctl enable flightctl-agent.service
+RUN dnf -y install cloud-init open-vm-tools && \
+    dnf -y clean all && \
+    ln -s ../cloud-init.target /usr/lib/systemd/system/default.target.wants && \
+    systemctl enable vmtoolsd.service
+----
+
+. Create a directory called `output` by running the following command:
+
++
+[source,bash]
+----
+mkdir -p output
+----
+
+. Generate an operating system disk image of type `vmdk` from your operating system image by running the following command:
++
+[source,bash]
+----
+sudo podman run --rm -it --privileged --pull=newer \
+    --security-opt label=type:unconfined_t \
+    -v "${PWD}/output":/output \
+    -v /var/lib/containers/storage:/var/lib/containers/storage \
+    registry.redhat.io/rhel9/bootc-image-builder:latest \
+    --type vmdk \
+    ${OCI_IMAGE_REPO}:${OCI_IMAGE_TAG}
+----
+
+When the `bootc-image-builder` completes, you can find the disk image under `${PWD}/output/vmdk/disk.vmdk`.
diff --git a/downstream/modules/platform/proc-editing-inventory-file-for-updates.adoc b/downstream/modules/platform/proc-editing-inventory-file-for-updates.adoc
index b5803ea2be..99bea943b7 100644
--- a/downstream/modules/platform/proc-editing-inventory-file-for-updates.adoc
+++ b/downstream/modules/platform/proc-editing-inventory-file-for-updates.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="editing-inventory-file-for-updates_{context}"]

= Setting up the inventory file
@@ -5,6 +7,8 @@
Before upgrading your {PlatformName} installation, edit the `inventory` file so that it matches your desired configuration. You can keep the same parameters from your existing {PlatformNameShort} deployment or you can modify the parameters to match any changes to your environment.

+You can find sample inventory files in the link:https://github.com/ansible/test-topologies/[Test topologies] GitHub repository, or in our link:{LinkTopologies} guide.
+
.Procedure

. Navigate to the installation program directory.
Bundled installer::
+
[source,options="nowrap",subs=attributes+]
-----
-$ cd ansible-automation-platform-setup-bundle-2.4-1-x86_64
+$ cd ansible-automation-platform-setup-bundle-2.5-4-x86_64
-----
+
Online installer::
+
[source,options="nowrap",subs=attributes+]
-----
-$ cd ansible-automation-platform-setup-2.4-1
+$ cd ansible-automation-platform-setup-2.5-4
-----
. Open the `inventory` file for editing.
. Modify the `inventory` file to provision new nodes, deprovision nodes or groups, and import or generate {HubName} API tokens.
+
-You can use the same `inventory` file from an existing {PlatformNameShort} 2.1 installation if there are no changes to the environment.
+You can use the same `inventory` file from an existing {PlatformNameShort} installation if there are no changes to the environment.
+
[NOTE]
====
-Provide a reachable IP address or fully qualified domain name (FQDN) for the `[automationhub]` and `[automationcontroller]` hosts to ensure that users can synchronize and install content from {HubNameMain} from a different node.
+Provide a reachable IP address or fully qualified domain name (FQDN) for all hosts to ensure that users can synchronize and install content from {HubNameMain} from a different node.
Do not use `localhost`.
If `localhost` is used, the upgrade will be stopped as part of preflight checks. ==== @@ -42,30 +46,3 @@ If `localhost` is used, the upgrade will be stopped as part of preflight checks. ---- include::ini/clustered-nodes.ini[] ---- - -.Deprovisioning nodes or groups in a cluster - -* Append `node_state-deprovision` to the node or group within the `inventory` file. - -.Importing and generating API tokens - -When upgrading from {PlatformName} 2.0 or earlier to {PlatformName} 2.1 or later, you can use your existing {HubName} API token or generate a new token. In the inventory file, edit one of the following fields before running the {PlatformName} installer setup script `setup.sh`: - -* Import an existing API token with the `automationhub_api_token` flag as follows: -+ -[options="nowrap",subs="+quotes"] ----- -automationhub_api_token=____ ----- - -* Generate a new API token, and invalidate any existing tokens, with the `generate_automationhub_token` flag as follows: -+ -[options="nowrap",subs="+quotes"] ----- -generate_automationhub_token=True ----- - -[role="_additional-resources"] -.Additional resources -* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/index[{PlatformName} Installation Guide] -* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_automation_mesh_guide_for_vm-based_installations/assembly-deprovisioning-mesh[Deprovisioning individual nodes or instance groups] diff --git a/downstream/modules/platform/proc-editing-inventory-file.adoc b/downstream/modules/platform/proc-editing-inventory-file.adoc index 4c59f9d082..bc684b2db8 100644 --- a/downstream/modules/platform/proc-editing-inventory-file.adoc +++ b/downstream/modules/platform/proc-editing-inventory-file.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-editing-installer-inventory-file_{context}"] @@ -29,8 +31,15 @@ $ cd ansible-automation-platform-setup- ----- + . Open the `inventory` file with a text editor. -. Edit `inventory` file parameters to specify your installation scenario. You can use one of the supported xref:con-install-scenario-examples[Installation scenario examples] as the basis for your `inventory` file. +. Edit `inventory` file parameters to specify your installation scenario. +ifdef::mesh-VM[] +For further information, see link:{URLInstallationGuide}/assembly-platform-install-scenario#proc-editing-installer-inventory-file_platform-install-scenario[Editing the {PlatformName} installer inventory file] +endif::mesh-VM[] +ifdef::aap-install[] +You can use one of the supported xref:con-install-scenario-examples[Installation scenario examples] as the basis for your `inventory` file. [role="_additional-resources"] .Additional resources * For a comprehensive list of pre-defined variables used in Ansible installation inventory files, see xref:appendix-inventory-files-vars[Inventory file variables]. 
+endif::aap-install[]
+
diff --git a/downstream/modules/platform/proc-enable-hstore-extension.adoc b/downstream/modules/platform/proc-enable-hstore-extension.adoc
index e5552cf33a..eaf878e606 100644
--- a/downstream/modules/platform/proc-enable-hstore-extension.adoc
+++ b/downstream/modules/platform/proc-enable-hstore-extension.adoc
@@ -1,14 +1,16 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-enable-hstore-extension"]

= Enabling the hstore extension for the {HubName} PostgreSQL database

-From {PlatformNameShort} {PlatformVers}, the database migration script uses `hstore` fields to store information, therefore the `hstore` extension to the {HubName} PostgreSQL database must be enabled.
+The database migration script uses `hstore` fields to store information, therefore the `hstore` extension must be enabled in the {HubName} PostgreSQL database.

This process is automatic when using the {PlatformNameShort} installer and a managed PostgreSQL server.

-If the PostgreSQL database is external, you must enable the `hstore` extension to the {HubName} PostreSQL database manually before {HubName} installation.
+If the PostgreSQL database is external, you must enable the `hstore` extension in the {HubName} PostgreSQL database manually before installation.

-If the `hstore` extension is not enabled before {HubName} installation, a failure is raised during database migration.
+If the `hstore` extension is not enabled before installation, a failure is raised during database migration.

.Procedure

. Check if the extension is available on the PostgreSQL server ({HubName} database).
@@ -18,7 +20,7 @@
$ psql -d <{HubName} database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'"
----
+
-Where the default value for `<{HubName} database>` is `automationhub`.
+Where the default value for `<{HubName} database>` is `automationhub`.
+
*Example output with `hstore` available*:
@@ -49,20 +51,14 @@
To install the RPM package, use the following command:
----
dnf install postgresql-contrib
----
-. Create the `hstore` PostgreSQL extension on the {HubName} database with the following command:
+. Load the `hstore` PostgreSQL extension into the {HubName} database with the following command:
+
[options="nowrap" subs="+quotes,attributes"]
----
$ psql -d <{HubName} database> -c "CREATE EXTENSION hstore;"
----
+
-The output of which is:
-+
-[options="nowrap" subs="+quotes,attributes"]
-----
-CREATE EXTENSION
-----
-. In the following output, the `installed_version` field contains the `hstore` extension used, indicating that `hstore` is enabled.
+In the following output, the `installed_version` field lists the `hstore` extension used, indicating that `hstore` is enabled.
+
[options="nowrap" subs="+quotes,attributes"]
----
diff --git a/downstream/modules/platform/proc-enable-pac.adoc b/downstream/modules/platform/proc-enable-pac.adoc
new file mode 100644
index 0000000000..006bb4394a
--- /dev/null
+++ b/downstream/modules/platform/proc-enable-pac.adoc
@@ -0,0 +1,60 @@
+:_newdoc-version: 2.18.4
+:_template-generated: 2025-05-08
+:_mod-docs-content-type: PROCEDURE
+
+[id="enable-pac_{context}"]
+= Enabling policy enforcement
+
+During installation, you must configure your {PlatformNameShort} instance to include the policy enforcement feature. You can do this by modifying the feature flag variables in your configuration file.
+Follow the instructions below relevant to your installation type.
+ +.{OCPShort} Installation + +For {OCPShort} installations, you must modify the {PlatformNameShort} custom resource. Add the following to the spec section: + +[source,yaml] +---- +spec: + feature_flags: + FEATURE_POLICY_AS_CODE_ENABLED: True +---- + +After applying the changes, wait for the operator to complete the update process. The operator automatically handles the necessary service restarts and configuration updates. + +.RPM Installation + +For RPM-based installations, modify the inventory file used by the installer to add the following variable: + +[source,yaml] +---- +feature_flags: + FEATURE_POLICY_AS_CODE_ENABLED: True +---- + +See link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#defining-variables-at-runtime[Defining variables at runtime] for more on adding vars. After modifying the inventory file, rerun the installer to apply changes. + +.Containerized Installation + +For containerized installations, modify the inventory file used by the installer to add: + +[source,yaml] +---- +feature_flags: + FEATURE_POLICY_AS_CODE_ENABLED: True +---- + +After modifying the inventory file, rerun the installer to apply the changes. + +.Verifying feature flag status + +To verify that the feature flag is enabled, you can check the feature flags state endpoint: + +[source,yaml] +---- +https:///api/controller/v2/feature_flags_state/ +---- +The endpoint will return a `JSON` response containing the current state of all feature flags, including `FEATURE_POLICY_AS_CODE_ENABLED`. + +[role="_additional-resources"] +.Additional resources +* link:https://access.redhat.com/articles/7109282[How to set feature flags for {PlatformName}] \ No newline at end of file diff --git a/downstream/modules/platform/proc-enable-pods-ref-images.adoc b/downstream/modules/platform/proc-enable-pods-ref-images.adoc index 184314183f..16e9a8df73 100644 --- a/downstream/modules/platform/proc-enable-pods-ref-images.adoc +++ b/downstream/modules/platform/proc-enable-pods-ref-images.adoc @@ -1,6 +1,8 @@ -[id="proc-enable-pods-ref-images"] +:_mod-docs-content-type: PROCEDURE -== Enabling pods to reference images from other secured registries +[id="proc-enable-pods-ref-images_{context}"] + += Enabling pods to reference images from other secured registries If a container group uses a container from a secured registry that requires a credential, you can associate a Container Registry credential with the Execution Environment that is assigned to the job template. {ControllerNameStart} uses this to create an `ImagePullSecret` for you in the {OCPShort} namespace where the container group job runs, and cleans it up after the job is done. diff --git a/downstream/modules/platform/proc-enable-proxy-support.adoc b/downstream/modules/platform/proc-enable-proxy-support.adoc index 720871fb2a..86d7e87eb3 100644 --- a/downstream/modules/platform/proc-enable-proxy-support.adoc +++ b/downstream/modules/platform/proc-enable-proxy-support.adoc @@ -1,17 +1,19 @@ +:_mod-docs-content-type: PROCEDURE [id="proc-enable-proxy-support_{context}"] -= Enable proxy support - -To provide proxy server support, {ControllerName} handles proxied requests (such as ALB, NLB , HAProxy, Squid, Nginx and tinyproxy in front of {ControllerName}) via the *REMOTE_HOST_HEADERS* list variable in the {ControllerName} settings. By default, *REMOTE_HOST_HEADERS* is set to `["REMOTE_ADDR", "REMOTE_HOST"]`. += Enabling proxy support through a load balancer +//FYI - In 2.5 EA, the System menu is specific to controller so do not change to AAP. 
+A forward proxy deals with client traffic, regulating and securing it.
+To provide proxy server support, {ControllerName} handles requests from proxies (such as ALB, NLB, HAProxy, Squid, Nginx, and tinyproxy in front of {ControllerName}) using the *REMOTE_HOST_HEADERS* list variable in the {ControllerName} settings. By default, *REMOTE_HOST_HEADERS* is set to `["REMOTE_ADDR", "REMOTE_HOST"]`.
 
 To enable proxy server support, edit the *REMOTE_HOST_HEADERS* field in the settings page for your {ControllerName}:
 
 .Procedure
-. On your {ControllerName}, navigate to {MenuAEAdminSettings}.
-. Select *Miscellaneous System settings* from the list of *System* options.
-. In the *REMOTE_HOST_HEADERS* field, enter the following values:
+. From the navigation panel, select {MenuSetSystem}.
+. Click btn:[Edit].
+. In the *Remote Host Headers* field, enter the following values:
 +
 ----
 [
@@ -20,5 +22,6 @@ To enable proxy server support, edit the *REMOTE_HOST_HEADERS* field in the sett
 "REMOTE_HOST"
 ]
 ----
+. Click btn:[Save] to save your settings.
 
-{ControllerNameStart} determines the remote host’s IP address by searching through the list of headers in *REMOTE_HOST_HEADERS* until the first IP address is located.
+{ControllerNameStart} determines the remote host’s IP address by searching through the list of headers in *Remote Host Headers* until the first IP address is located.
diff --git a/downstream/modules/platform/proc-enabling-automation-hub-collection-and-container-signing.adoc b/downstream/modules/platform/proc-enabling-automation-hub-collection-and-container-signing.adoc
new file mode 100644
index 0000000000..7de1eeacbb
--- /dev/null
+++ b/downstream/modules/platform/proc-enabling-automation-hub-collection-and-container-signing.adoc
@@ -0,0 +1,152 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="enabling-automation-hub-collection-and-container-signing_{context}"]
+= Enabling automation content collection and container signing
+
+Automation content signing is disabled by default. To enable it, the following installation variables are required in the inventory file:
+
+[source,yaml]
+----
+# Collection signing
+hub_collection_signing=true
+hub_collection_signing_key=
+
+# Container signing
+hub_container_signing=true
+hub_container_signing_key=
+----
+
+The following variables are required if the keys are protected by a passphrase:
+
+[source,yaml]
+----
+# Collection signing
+hub_collection_signing_pass=
+
+# Container signing
+hub_container_signing_pass=
+----
+
+The `hub_collection_signing_key` and `hub_container_signing_key` variables require keys to be set up before running an installation.
+
+Automation content signing currently supports only GnuPG (GPG) based signature keys. For more information about GPG, see the link:https://www.gnupg.org/documentation/manpage.html[GnuPG man page].
+
+[NOTE]
+====
+The algorithm and cipher used are the responsibility of the customer.
+====
+
+.Procedure
+
+. On a RHEL9 server, run the following command to create a new key pair for collection signing:
++
+----
+gpg --gen-key
+----
++
+. Enter your information for "Real name" and "Email address":
++
+Example output:
++
+----
+gpg --gen-key
+gpg (GnuPG) 2.3.3; Copyright (C) 2021 Free Software Foundation, Inc.
+This is free software: you are free to change and redistribute it.
+There is NO WARRANTY, to the extent permitted by law.
+
+Note: Use "gpg --full-generate-key" for a full featured key generation dialog.
+
+GnuPG needs to construct a user ID to identify your key.
+
+Real name: Joe Bloggs
+Email address: jbloggs@example.com
+You selected this USER-ID:
+    "Joe Bloggs "
+
+Change (N)ame, (E)mail, or (O)kay/(Q)uit? O
+----
++
+If this fails, your environment does not have the necessary prerequisite packages installed for GPG. Install the necessary packages to proceed.
++
+. A dialog box appears and asks you for a passphrase. This is optional but recommended.
+. The keys are then generated, producing output similar to the following:
++
+----
+We need to generate a lot of random bytes. It is a good idea to perform
+some other action (type on the keyboard, move the mouse, utilize the
+disks) during the prime generation; this gives the random number
+generator a better chance to gain enough entropy.
+gpg: key 022E4FBFB650F1C4 marked as ultimately trusted
+gpg: revocation certificate stored as '/home/aapuser/.gnupg/openpgp-revocs.d/F001B037976969DD3E17A829022E4FBFB650F1C4.rev'
+public and secret key created and signed.
+
+pub   rsa3072 2024-10-25 [SC] [expires: 2026-10-25]
+      F001B037976969DD3E17A829022E4FBFB650F1C4
+uid                      Joe Bloggs 
+sub   rsa3072 2024-10-25 [E] [expires: 2026-10-25]
+----
++
+Note the expiry date, which you can set based on company standards and needs.
++
+. You can view all of your GPG keys by running the following command:
++
+----
+gpg --list-secret-keys --keyid-format=long
+----
++
+. To export the public key, run the following command:
++
+----
+gpg --export -a --output collection-signing-key.pub
+----
++
+. To export the private key, run the following command:
++
+----
+gpg -a --export-secret-keys > collection-signing-key.priv
+----
++
+. If a passphrase is detected, you are prompted to enter it.
+. To view the private key file contents, run the following command:
++
+----
+cat collection-signing-key.priv
+----
++
+Example output:
++
+----
+-----BEGIN PGP PRIVATE KEY BLOCK-----
+
+lQWFBGcbN14BDADTg5BsZGbSGMHypUJMuzmIffzzz4LULrZA8L/I616lzpBHJvEs
+sSN6KuKY1TcIwIDCCa/U5Obm46kurpP2Y+vNA1YSEtMJoSeHeamWMDd99f49ItBp
+
+
+
+j920hRy/3wJGRDBMFa4mlQg=
+=uYEF
+-----END PGP PRIVATE KEY BLOCK-----
+----
++
+. Repeat steps 1 to 9 to create a key pair for container signing.
+. Add the following variables to the inventory file and run the installation to create the signing services:
++
+[source,yaml]
+----
+# Collection signing
+hub_collection_signing=true
+hub_collection_signing_key=/home/aapuser/aap/ansible-automation-platform-containerized-setup-2.5-2/collection-signing-key.priv
+# This variable is required if the key is protected by a passphrase
+hub_collection_signing_pass=
+
+# Container signing
+hub_container_signing=true
+hub_container_signing_key=/home/aapuser/aap/ansible-automation-platform-containerized-setup-2.5-2/container-signing-key.priv
+# This variable is required if the key is protected by a passphrase
+hub_container_signing_pass=
+----
+
+[role="_additional-resources"]
+.Additional resources
+
+* link:{URLHubManagingContent}/managing-containers-hub#working-with-signed-containers[Working with signed containers]
diff --git a/downstream/modules/platform/proc-encrypt-postgres-password.adoc b/downstream/modules/platform/proc-encrypt-postgres-password.adoc
index d02c66a8fc..4b01a42541 100644
--- a/downstream/modules/platform/proc-encrypt-postgres-password.adoc
+++ b/downstream/modules/platform/proc-encrypt-postgres-password.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
 [id="proc-encrypt-postgres-password"]
 
 = Encrypting the Postgres password
diff --git a/downstream/modules/platform/proc-find-delete-PVCs.adoc b/downstream/modules/platform/proc-find-delete-PVCs.adoc
index e1630a196c..3b5c8d1a5f 100644
--- a/downstream/modules/platform/proc-find-delete-PVCs.adoc
+++ b/downstream/modules/platform/proc-find-delete-PVCs.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
 [id="proc-find-delete-PVCs_{context}"]
 
 = Finding and deleting PVCs
diff --git a/downstream/modules/platform/proc-gs-add-ee-to-job-template.adoc b/downstream/modules/platform/proc-gs-add-ee-to-job-template.adoc
new file mode 100644
index 0000000000..2d7bdd1ae5
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-add-ee-to-job-template.adoc
@@ -0,0 +1,38 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-add-ee-to-job-template_{context}"]
+
+= Adding an {ExecEnvShort} to a job template
+
+.Prerequisites
+
+* An {ExecEnvShort} must have been created using ansible-builder as described in link:{URLBuilder}/assembly-using-builder[Using {Builder}].
+When an {ExecEnvShort} has been created, you can use it to run jobs.
+Use the {ControllerName} UI to specify the execution environment to use in your job templates.
+* Depending on whether an {ExecEnvShort} is made available for global use or tied to an organization, you must have the appropriate level of administrator privileges to use an {ExecEnvShort} in a job.
+To run jobs with an {ExecEnvShort} that is tied to an organization, you must be an organization administrator.
+* Before running a job or job template that uses an {ExecEnvShort} that has a credential assigned to it, ensure that the credential has a username, host, and password.
+
+.Procedure
+
+. From the navigation panel, select {MenuInfrastructureExecEnvironments}.
+. Click btn:[Create execution environment] to create an {ExecEnvShort}.
+. Enter the appropriate details into the following fields:
+.. *Name* (required): Enter a name for the {ExecEnvShort}.
+.. *Image* (required): Enter the image name. The image name requires its full location (repository), the registry, image name, and version tag, in the format `repo/project/image-name:tag`, as in the following example: `quay.io/ansible/awx-ee:latest`
+.. Optional: *Pull*: Choose the type of pull when running jobs:
+... *Always pull container before running*: Pulls the latest image file for the container.
+... *Only pull the image if not present before running*: Pulls the latest image only if no image is already present.
+... *Never pull container before running*: Never pulls the latest version of the container image.
++
+NOTE: If you do not set a type for pull, the value defaults to *Only pull the image if not present before running*.
++
+.. Optional: *Description*: Enter an optional description.
+.. Optional: *Organization*: Assign the organization to specifically use this {ExecEnvShort}. To make the {ExecEnvShort} available for use across multiple organizations, leave this field blank.
+.. *Registry credential*: If the image has a protected container registry, provide the credential to access it.
+. Click btn:[Create {ExecEnvShort}]. Your newly added {ExecEnvShort} is ready to be used in a job template.
+. To add an {ExecEnvShort} to a job template, navigate to {MenuAETemplates} and select your template.
+.. Click btn:[Edit template] and specify your {ExecEnvShort} in the field labeled *{ExecEnvShort}*.
+
+.Verification
+After you add an {ExecEnvShort} to a job template, the template is listed in the *Templates* tab in your {ExecEnvShort} details.
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-gs-auto-dev-create-automation-decision-proj.adoc b/downstream/modules/platform/proc-gs-auto-dev-create-automation-decision-proj.adoc
new file mode 100644
index 0000000000..b8f44b0a07
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-auto-dev-create-automation-decision-proj.adoc
@@ -0,0 +1,36 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-auto-dev-create-automation-decision-proj"]
+
+= Creating an automation decision project
+
+Like automation execution projects, automation decision projects are logical collections of automation decision content.
+You can use the project function to organize your automation decision content in a way that makes sense to you.
+
+.Prerequisites
+
+* You have set up any necessary credentials.
+For more information, see the link:{URLEDAUserGuide}/eda-credentials#eda-set-up-credential[Setting up credentials] section of the {TitleEDAUserGuide} guide.
+* You have an existing repository containing rulebooks that are integrated with playbooks contained in a repository to be used by {ControllerName}. A minimal rulebook example is shown after this procedure.
+
+.Procedure
+
+. From the navigation panel, select *{MenuADProjects}*.
+. Click btn:[Create project].
+. Enter the following information:
+* *Name*: Enter the project name.
+* *Description*: This field is optional.
+* *Organization*: Select the organization associated with the project.
+* *Source control type*: Git is the only SCM type available for use.
+* *Proxy*: The proxy used to access HTTP or HTTPS servers.
+* *Source control branch/tag/commit*: The branch to check out. This can also be tags, commit hashes, or arbitrary refs.
+* *Source control refspec*: A refspec to fetch. This parameter allows access to references that are not otherwise available through the branch field.
+* Optional: *Source control credential*: The token needed to use the source control URL.
+* *Content signature validation credential*: Enables content signing to verify that the content has remained secure during project syncing. If the content is tampered with, the job does not run.
+* *Options*: Checking the box next to *Verify SSL* verifies the SSL with HTTPS when the project is imported.
+. Click btn:[Create project].
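+
+The prerequisites reference a repository of rulebooks. If you are new to rulebook content, the following minimal sketch shows the shape of what such a repository holds; the names, the event source, and the playbook path are illustrative assumptions, not values required by this procedure:
+
+[source,yaml]
+----
+- name: Respond to webhook events
+  hosts: all
+  sources:
+    - ansible.eda.webhook:
+        host: 0.0.0.0
+        port: 5000
+  rules:
+    - name: Say hello
+      condition: event.payload.message == "hello"
+      action:
+        run_playbook:
+          name: playbooks/hello.yml
+----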
+
+Your project is now created and can be managed in the *Projects* screen.
+
+After saving the new project, the project's details page is displayed.
+From there or the *Projects* list view, you can edit or delete it.
diff --git a/downstream/modules/platform/proc-gs-auto-dev-create-automation-execution-proj.adoc b/downstream/modules/platform/proc-gs-auto-dev-create-automation-execution-proj.adoc
new file mode 100644
index 0000000000..815a08d952
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-auto-dev-create-automation-execution-proj.adoc
@@ -0,0 +1,31 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-auto-dev-create-automation-execution-proj"]
+
+= Creating an automation execution project
+
+A project is a logical collection of playbooks.
+Projects are useful as a way to group your automation content according to the organizing principle of your choice.
+
+You can set up an automation execution project in the platform UI.
+
+.Procedure
+
+. From the navigation panel, select {MenuAEProjects}.
+. On the *Projects* page, click btn:[Create project] to launch the *Create Project* window.
+. Enter the appropriate details into the following fields:
+
+* *Name* (required)
+* Optional: *Description*
+* *Organization* (required): A project must have at least one organization. Select one organization now to create the project. When the project is created, you can add more organizations.
+* Optional: *Execution Environment*: Enter the name of the {ExecEnvShort} or search from a list of existing ones to run this project.
+* *Source Control Type* (required): Select an SCM type associated with this project from the menu.
+Options in the following sections become available depending on the type chosen.
+For more information, see link:{URLControllerUserGuide}/controller-projects#proc-controller-adding-a-project[Managing playbooks manually] or link:{URLControllerUserGuide}/controller-projects#ref-projects-manage-playbooks-with-source-control[Managing playbooks using source control].
+* Optional: *Content Signature Validation Credential*: Use this field to enable content verification.
+Specify the GPG key to use for validating content signature during project synchronization.
+If the content has been tampered with, the job does not run.
+For more information, see link:{URLControllerUserGuide}/assembly-controller-project-signing[Project signing and verification].
++
+. Click btn:[Create project].
+
diff --git a/downstream/modules/platform/proc-gs-auto-dev-create-template.adoc b/downstream/modules/platform/proc-gs-auto-dev-create-template.adoc
new file mode 100644
index 0000000000..7bbba91fbf
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-auto-dev-create-template.adoc
@@ -0,0 +1,179 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-auto-dev-create-template"]
+
+= Creating a job template
+
+.Procedure
+
+. From the navigation panel, select {MenuAETemplates}.
+. On the *Templates* page, select *Create job template* from the *Create template* list.
+. Enter the appropriate details in the following fields:
++
+[NOTE]
+====
+If a field has the *Prompt on launch* checkbox selected, you are prompted for the value of that field when you launch the job.
+
+Most prompted values override any values set in the job template.
+
+Exceptions are noted in the following table.
+====
++
+[cols="33%,33%,33%",options="header"]
+|===
+| *Field* | *Options* | *Prompt on Launch*
+| Name | Enter a name for the job. | N/A
+| Description | Enter an arbitrary description as appropriate (optional). | N/A
+| Job Type a| Choose a job type:
+
+- Run: Start the playbook when launched, running Ansible tasks on the selected hosts.
+
+- Check: Perform a "dry run" of the playbook and report changes that would be made without actually making them.
+Tasks that do not support check mode are missed and do not report potential changes.
+
+For more information about job types, see the link:https://docs.ansible.com/ansible/latest/playbook_guide/index.html[Playbooks] section of the Ansible documentation. | Yes
+| Inventory | Choose the inventory to use with this job template from the inventories available to the logged in user.
+
+A System Administrator must grant you or your team permissions to be able to use certain inventories in a job template. | Yes.
+
+The inventory prompt shows up as its own step in a later prompt window.
+| Project | Select the project to use with this job template from the projects available to the user that is logged in. | N/A
+| Source control branch | This field is only present if you chose a project that allows branch override.
+Specify the overriding branch to use in your job run.
+If left blank, the specified SCM branch (or commit hash or tag) from the project is used.
+
+For more information, see link:{URLControllerUserGuide}/controller-jobs#controller-job-branch-overriding[Job branch overriding]. | Yes
+| Execution Environment | Select the container image to be used to run this job.
+You must select a project before you can select an {ExecEnvShort}. | Yes.
+
+The execution environment prompt shows up as its own step in a later prompt window.
+| Playbook | Choose the playbook to be launched with this job template from the available playbooks.
+This field automatically populates with the names of the playbooks found in the project base path for the selected project.
+Alternatively, you can enter the name of a playbook that is not listed, such as the name of a file (for example, foo.yml) that you want to run with that playbook.
+If you enter a filename that is not valid, the template displays an error, or causes the job to fail. | N/A
+| Credentials | Select the image:examine.png[examine,15,15] icon to open a separate window.
+
+Choose the credential from the available options to use with this job template.
+
+Use the drop-down menu list to filter by credential type if the list is extensive.
+Some credential types are not listed because they do not apply to certain job templates. a|
+- If selected, when launching a job template that has a default credential, supplying another credential replaces the default credential if it is of the same type.
+The following is an example of this message:
+
+`Job Template default credentials must be replaced
+with one of the same type. Please select a credential
+for the following types in order to proceed: Machine.`
+
+- You can add more credentials as you see fit.
+
+- Credential prompts show up as their own step in a later prompt window.
+| Labels a| - Optionally supply labels that describe this job template, such as `dev` or `test`.
+
+- Use labels to group and filter job templates and completed jobs in the display.
+
+- Labels are created when they are added to the job template.
+Labels are associated with a single Organization by using the Project that is provided in the job template.
+Members of the Organization can create labels on a job template if they have edit permissions (such as the admin role).
+
+- After you save the job template, the labels appear in the *Job Templates* overview in the Expanded view.
+
+- Select image:disassociate.png[Disassociate,10,10] beside a label to remove it.
+When a label is removed, it is no longer associated with that particular Job or Job Template, but it remains associated with any other jobs that reference it.
+
+- Jobs inherit labels from the Job Template at the time of launch.
+If you delete a label from a Job Template, it is also deleted from the Job. a| - If selected, even if a default value is supplied, you are prompted when launching to supply additional labels, if needed.
+- You cannot delete existing labels; selecting image:disassociate.png[Disassociate,10,10] only removes the newly added labels, not existing default labels.
+| Forks | The number of parallel or simultaneous processes to use while executing the playbook.
+A value of zero uses the Ansible default setting, which is five parallel processes unless overridden in `/etc/ansible/ansible.cfg`. | Yes
+| Limit a| A host pattern to further constrain the list of hosts managed or affected by the playbook. You can separate multiple patterns with colons (:).
+As with core Ansible:
+
+* a:b means "in group a or b"
+* a:b:&c means "in a or b but must be in c"
+* a:!b means "in a, and definitely not in b"
+
+For more information, see link:https://docs.ansible.com/ansible/latest/inventory_guide/intro_patterns.html[Patterns: targeting hosts and groups] in the Ansible documentation. | Yes
+
+If not selected, the job template executes against all nodes in the inventory or only the nodes predefined in the *Limit* field.
+When running as part of a workflow, the workflow job template limit is used instead.
+| Verbosity | Control the level of output Ansible produces as the playbook executes.
+Choose the verbosity from Normal to various Verbose or Debug settings.
+This only appears in the *details* report view.
+Verbose logging includes the output of all commands.
+Debug logging is exceedingly verbose and includes information about SSH operations that can be useful in certain support instances.
+
+Verbosity `5` causes {ControllerName} to block heavily when jobs are running, which could delay reporting that the job has finished (even though it has) and can cause the browser tab to lock up. | Yes
+| Job Slicing | Specify the number of slices you want this job template to run.
+Each slice runs the same tasks against a part of the inventory.
+For more information about job slices, see link:{URLControllerUserGuide}/controller-job-slicing[Job Slicing]. | Yes
+| Timeout a| This enables you to specify the length of time (in seconds) that the job can run before it is canceled. Consider the following for setting the timeout value:
+
+- There is a global timeout defined in the settings which defaults to 0, indicating no timeout.
+- A negative timeout (<0) on a job template is a true "no timeout" on the job.
+- A timeout of 0 on a job template defaults the job to the global timeout (which is no timeout by default).
+- A positive timeout sets the timeout for that job template. | Yes
+| Show Changes | Enables you to see the changes made by Ansible tasks. | Yes
+| Instance Groups | Choose link:{URLControllerUserGuide}/controller-instance-and-container-groups#controller-instance-group-policies[Instance and Container Groups] to associate with this job template.
+If the list is extensive, use the image:examine.png[examine,15,15] icon to narrow the options.
+Job template instance groups contribute to the job scheduling criteria; see link:{URLControllerUserGuide}/assembly-controller-topology-viewer#controller-job-runtime-behavior[Job Runtime Behavior] and link:{URLControllerAdminGuide}/controller-clustering#controller-cluster-job-runtime[Control where a job runs] for rules.
+A System Administrator must grant you or your team permissions to be able to use an instance group in a job template.
+Use of a container group requires admin rights. a| - Yes.
+
+If selected, you are providing the job's preferred instance groups in order of preference. If the first group is out of capacity, later groups in the list are considered until one with capacity is available, at which point that is selected to run the job.
+
+- If you prompt for an instance group, what you enter replaces the normal instance group hierarchy and overrides all of the organizations' and inventories' instance groups.
+
+- The Instance Groups prompt shows up as its own step in a later prompt window.
+| Job Tags | Type and select the *Create* menu to specify which parts of the playbook should be executed.
+For more information and examples, see link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html[Tags] in the Ansible documentation. | Yes
+| Skip Tags | Type and select the *Create* menu to specify certain tasks or parts of the playbook to skip.
+For more information and examples, see link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html[Tags] in the Ansible documentation. | Yes
+| Extra Variables a| - Pass extra command line variables to the playbook.
+This is the `-e` or `--extra-vars` command line parameter for ansible-playbook that is documented in the Ansible documentation at link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_variables.html#defining-variables-at-runtime[Defining variables at runtime].
+- Provide key or value pairs by using either YAML or JSON.
+These variables have the highest precedence and override other variables specified elsewhere.
+The following is an example value:
+`git_branch: production
+release_version: 1.5` | Yes.
+
+If you want to be able to specify `extra_vars` on a schedule, you must select *Prompt on launch* for Variables on the job template, or enable a survey on the job template. Those answered survey questions become `extra_vars`.
+|===
++
+. You can set the following options for launching this template, if necessary:
+* *Privilege escalation*: If checked, you enable this playbook to run as an administrator.
+This is the equivalent of passing the `--become` option to the `ansible-playbook` command.
+* *Provisioning callback*: If checked, you enable a host to call back to {ControllerName} through the REST API and start a job from this job template.
+For more information, see link:{URLControllerUserGuide}/controller-job-templates#controller-provisioning-callbacks[Provisioning Callbacks].
+* *Enable webhook*: If checked, you turn on the ability to interface with a predefined SCM system web service that is used to launch a job template.
+GitHub and GitLab are the supported SCM systems.
+If you enable webhooks, other fields display, prompting for additional information:
++
+//image::ug-job-templates-options-webhooks.png[Job templates webhooks]
++
+** *Webhook service*: Select which service to listen for webhooks from.
+** *Webhook URL*: Automatically populated with the URL for the webhook service to POST requests to.
+** *Webhook key*: Generated shared secret to be used by the webhook service to sign payloads sent to {ControllerName}.
+You must configure this in the settings on the webhook service for {ControllerName} to accept webhooks from this service.
+** *Webhook credential*: Optionally, give a GitHub or GitLab personal access token (PAT) as a credential to use to send status updates back to the webhook service.
++
+Before you can select it, the credential must exist.
++
+See link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-types[Credential Types] to create one.
+** For additional information about setting up webhooks, see link:{URLControllerUserGuide}/controller-work-with-webhooks[Working with Webhooks].
+* *Concurrent jobs*: If checked, you are allowing jobs in the queue to run simultaneously if they are not dependent on one another. Check this box if you want to run job slices simultaneously. For more information, see link:{URLControllerUserGuide}/controller-jobs#controller-capacity-determination[{ControllerNameStart} capacity determination and job impact].
+* *Enable fact storage*: If checked, {ControllerName} stores gathered facts for all hosts in an inventory related to the job running.
+* *Prevent instance group fallback*: Check this option to allow only the instance groups listed in the *Instance Groups* field to run the job.
+If clear, all available instances in the execution pool are used based on the hierarchy described in link:{URLControllerAdminGuide}/controller-clustering#controller-cluster-job-runtime[Control where a job runs].
+. Click btn:[Create job template] when you have completed configuring the details of the job template.
+
+Creating the template does not exit the job template page but advances to the Job Template *Details* tab.
+After saving the template, you can click btn:[Launch template] to start the job.
+You can also click btn:[Edit] to add or change the attributes of the template, such as permissions and notifications, to view completed jobs, and to add a survey (if the job type is not a scan).
+You must first save the template before launching; otherwise, btn:[Launch template] remains disabled.
+
+//image::ug-job-template-details.png[Job template details]
+
+.Verification
+
+. From the navigation panel, select {MenuAETemplates}.
+. Verify that the newly created template appears on the *Templates* page.
diff --git a/downstream/modules/platform/proc-gs-auto-dev-set-up-decision-env.adoc b/downstream/modules/platform/proc-gs-auto-dev-set-up-decision-env.adoc
new file mode 100644
index 0000000000..863d63a968
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-auto-dev-set-up-decision-env.adoc
@@ -0,0 +1,31 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-auto-dev-set-up-decision-env"]
+
+= Setting up a new decision environment
+
+The following steps describe how to import a decision environment into the platform.
+
+.Prerequisites
+
+* You have set up any necessary credentials.
+For more information, see the link:{URLEDAUserGuide}/eda-credentials#eda-set-up-credential[Setting up credentials] section of the {TitleEDAUserGuide} guide.
+* You have pushed a decision environment image to an image repository, or you choose to use the image `de-supported` provided at link:http://registry.redhat.io/[registry.redhat.io].
+
+.Procedure
+
+. Navigate to {MenuADDecisionEnvironments}.
+. Click btn:[Create decision environment].
+. Enter the following:
++
+Name:: Enter the name.
+Description:: This field is optional.
+Image:: This is the full image location, including the container registry, image name, and version tag.
+Credential:: This field is optional. This is the token needed to use the decision environment image.
++
+. Select btn:[Create decision environment].
+
+Your decision environment is now created and can be managed on the *Decision Environments* page.
+
+After saving the new decision environment, the decision environment's details page is displayed.
+From there or the *Decision Environments* list view, you can edit or delete it.
diff --git a/downstream/modules/platform/proc-gs-auto-op-launch-template.adoc b/downstream/modules/platform/proc-gs-auto-op-launch-template.adoc
new file mode 100644
index 0000000000..2b2551fccf
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-auto-op-launch-template.adoc
@@ -0,0 +1,22 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-auto-op-launch-template"]
+
+= Launching a job template
+
+{PlatformNameShort} offers push-button deployment of Ansible playbooks.
+You can configure a template to store all the parameters that you would normally pass to the Ansible playbook on the command line.
+In addition to the playbooks, the template passes the inventory, credentials, extra variables, and all options and settings that you can specify on the command line.
+
+.Procedure
+
+. From the navigation panel, select {MenuAETemplates}.
+. Select a template to view its details. A default job template is created during your initial setup to help you get started, but you can also create your own.
+. From the *Templates* page, click the launch icon to run your job template.
+
+The *Templates* list view shows job templates that are currently available. The default view is collapsed (Compact), showing the template name, template type, and the timestamp of the last job that ran using that template. You can click the arrow icon next to each entry to expand and view more information. This list is sorted alphabetically by name, but you can sort by other criteria, or search by various template fields and attributes.
+
+From this screen you can launch, edit, and copy a job template.
+
+For more information about templates, see the link:{URLControllerUserGuide}/controller-job-templates[Job Templates] and link:{URLControllerUserGuide}/controller-workflow-job-templates[Workflow job templates] sections of the {TitleControllerUserGuide} guide.
+
diff --git a/downstream/modules/platform/proc-gs-auto-op-projects.adoc b/downstream/modules/platform/proc-gs-auto-op-projects.adoc
new file mode 100644
index 0000000000..b3075c4099
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-auto-op-projects.adoc
@@ -0,0 +1,18 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-auto-op-projects"]
+
+= Automation execution projects
+
+A project is a logical collection of Ansible playbooks that you can manage in {PlatformNameShort}.
+
+Platform administrators and automation developers have permission to create projects.
+As an automation operator, you can view and sync projects.
+
+== Viewing project details
+
+The *Projects* page displays a list of projects that are currently available.
+
+. From the navigation panel, select {MenuAEProjects}.
+. Click a project to view its details.
+. For each project listed, you can sync the latest revision, edit the project, or copy the project's attributes using the icons next to each project.
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-gs-auto-op-review-job-output.adoc b/downstream/modules/platform/proc-gs-auto-op-review-job-output.adoc
new file mode 100644
index 0000000000..34b610973b
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-auto-op-review-job-output.adoc
@@ -0,0 +1,14 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-auto-op-review-job-output"]
+
+= Reviewing job output
+
+When you relaunch a job, the job's *Output* view is displayed.
+
+.Procedure
+
+. From the navigation panel, select {MenuAEJobs}.
+. Select a job. This takes you to the *Output* view for that job, where you can filter job output by these criteria:
+* The *Search output* option allows you to search by keyword.
+* The *Event* option enables you to filter by the events of interest, such as errors, host failures, host retries, and items skipped. You can include as many events in the filter as necessary.
diff --git a/downstream/modules/platform/proc-gs-auto-op-review-job-status.adoc b/downstream/modules/platform/proc-gs-auto-op-review-job-status.adoc
new file mode 100644
index 0000000000..bff48d047b
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-auto-op-review-job-status.adoc
@@ -0,0 +1,19 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-auto-op-review-job-status"]
+
+= Reviewing a job status
+
+The *Jobs* list view displays a list of jobs and their statuses, shown as completed successfully, failed, or as an active (running) job.
+
+.Procedure
+
+. From the navigation panel, select {MenuAEJobs}.
++
+The default view is collapsed (Compact) with the job name, status, job type, start, and finish times. You can click the arrow icon to expand and see more information. You can sort this list by various criteria, and perform a search to filter the jobs of interest.
+. From this screen, you can complete the following tasks:
+* View a job's details and standard output.
+* Relaunch jobs.
+* Remove selected jobs.
+
+The relaunch operation only applies to relaunches of playbook runs and does not apply to project or inventory updates, system jobs, or workflow jobs.
diff --git a/downstream/modules/platform/proc-gs-downloading-content.adoc b/downstream/modules/platform/proc-gs-downloading-content.adoc
new file mode 100644
index 0000000000..5b9d2aabb2
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-downloading-content.adoc
@@ -0,0 +1,20 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gs-downloading-content_{context}"]
+
+= Downloading content
+
+After collections are finalized, you can download them to a location from which they can be distributed to others across your organization.
+
+.Procedure
+
+. Log in to {PlatformName}.
+. From the navigation panel, select {MenuACCollections}.
+The *Collections* page displays all collections across all repositories.
+You can search for a specific collection.
+. Select the collection that you want to export.
+The collection details page opens.
+. From the *Install* tab, select *Download tarball*.
+The `.tar` file is downloaded to your default browser downloads folder.
+You can now import it to the location of your choosing.
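+
+For example, after you download the tarball, you can install the collection directly from it by using the `ansible-galaxy` client. The path and collection name shown are illustrative:
+
+[source,bash]
+----
+$ ansible-galaxy collection install ./my_namespace-my_collection-1.0.0.tar.gz
+----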
+
diff --git a/downstream/modules/platform/proc-gs-eda-set-up-rulebook-activation.adoc b/downstream/modules/platform/proc-gs-eda-set-up-rulebook-activation.adoc
new file mode 100644
index 0000000000..c41f319f53
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-eda-set-up-rulebook-activation.adoc
@@ -0,0 +1,62 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2024-09-24
+:_mod-docs-content-type: PROCEDURE
+
+[id="gs-eda-set-up-rulebook-activation_{context}"]
+= Setting up a rulebook activation
+
+.Prerequisites
+
+* You have set up a project.
+* You have set up a decision environment.
+
+.Procedure
+. From the navigation panel, select {MenuADRulebookActivations}.
+. Click btn:[Create rulebook activation].
+. Enter the following information:
+* *Name*: Enter the name.
+* *Description*: This field is optional.
+* *Organization*: This field is optional.
+* *Project*: This field is optional.
+* *Rulebook*: Rulebooks are displayed according to the project you selected.
+* *Credential*: Select zero or more credentials for this rulebook activation. This field is optional.
++
+[NOTE]
+====
+The credentials that display in this field are customized based on your rulebook activation and only include the following credential types: Vault, {PlatformName}, or any custom credential types that you have created. For more information about credentials, see link:{URLEDAUserGuide}/eda-credentials[Credentials] in the {TitleEDAUserGuide} guide.
+====
+//[J. Self] Might need to update the link above for the updated Credentials section.
+* *Decision environment*: A decision environment is a container image used to run Ansible rulebooks.
++
+[NOTE]
+====
+In {EDAcontroller}, you cannot customize the pull policy of the decision environment.
+By default, it follows the behavior of the *always* policy.
+Every time an activation is started, the system tries to pull the most recent version of the image.
+====
++
+* *Restart policy*: This is the policy that determines how an activation should restart after the container process running the source plugin ends. Select from the following options:
+** *Always*: This restarts the rulebook activation immediately, regardless of whether it ends successfully or not, and occurs no more than 5 times.
+** *Never*: This never restarts a rulebook activation when the container process ends.
+** *On failure*: This restarts the rulebook activation after 60 seconds by default, only when the container process fails, and occurs no more than 5 times.
+* *Log level*: This field defines the severity and type of content in your logged events. Select one of the following options:
+** *Error*: Logs that contain error messages that are displayed in the *History* tab of an activation.
+** *Info*: Logs that contain useful information about rulebook activations, such as a success or failure, triggered action names and their related action events, and errors.
+** *Debug*: Logs that contain information that is only useful during the debug phase and might be of little value during production. This log level includes both error and log level data.
+* *Service name*: This defines a service name for Kubernetes to configure inbound connections if the activation exposes a port. This field is optional.
+* *Rulebook activation enabled?*: Toggle to automatically enable the rulebook activation to run.
+* *Variables*: The variables for the rulebook are in JSON or YAML format. The content is equivalent to the file passed through the `--vars` flag of the ansible-rulebook command.
+* *Options*: Check the *Skip audit events* option if you do not want to see your events in the Rule Audit.
+. Click btn:[Create rulebook activation].
+
+Your rulebook activation is now created and can be managed on the *Rulebook Activations* page.
+
+After saving the new rulebook activation, the rulebook activation's details page is displayed, with either a *Pending*, *Running*, or *Failed* status.
+From there or the *Rulebook Activations* list view, you can restart or delete it.
+
+[NOTE]
+====
+Occasionally, when a source plugin shuts down, it causes a rulebook to exit gracefully after a certain amount of time.
+When a rulebook activation shuts down, any tasks that are waiting to be performed are canceled, and an info level message is sent to the activation log.
+For more information, see link:https://ansible.readthedocs.io/projects/rulebook/en/stable/rulebooks.html#[Rulebooks].
+====
diff --git a/downstream/modules/platform/proc-gs-logging-in.adoc b/downstream/modules/platform/proc-gs-logging-in.adoc
new file mode 100644
index 0000000000..e60256fc16
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-logging-in.adoc
@@ -0,0 +1,18 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-logging-in"]
+
+= Logging in for the first time
+
+Log in to {PlatformNameShort} as an administrator and enter your subscription information.
+You can then create user profiles and assign roles.
+
+.Procedure
+
+. With the login information provided after your installation completed, open a web browser and log in to {PlatformName} by navigating to its server URL at: https:///
+. Use the credentials specified during the installation process to log in:
+** The default username is *admin*.
+** The password for *admin* is the value specified during installation.
+
+After your first login, you are prompted to add your subscription information.
+
diff --git a/downstream/modules/platform/proc-gs-platform-admin-create-user.adoc b/downstream/modules/platform/proc-gs-platform-admin-create-user.adoc
new file mode 100644
index 0000000000..5d0bf23ba6
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-platform-admin-create-user.adoc
@@ -0,0 +1,35 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-platform-admin-create-user"]
+
+= Creating a user
+
+There are three types of users in {PlatformNameShort}:
+
+Normal user:: Normal users have read and write access limited to the resources (such as inventory, projects, and job templates) for which that user has been granted the appropriate roles and privileges. Normal users are the default type of user.
+{PlatformNameShort} Administrator:: An administrator (also known as a Superuser) has full system administration privileges, with full read and write access over the entire installation. An administrator is typically responsible for managing all aspects of the installation and delegating responsibilities for day-to-day work to various users.
+{PlatformNameShort} Auditor:: Auditors have read-only capability for all objects within the environment.
+
+.Procedure
+. From the navigation panel, select {MenuAMUsers}.
+. Click btn:[Create user].
+. Enter the details about your new user in the fields on the *Create* user page. Fields marked with an asterisk (*) are required.
+. Normal users are the default when no User type is specified. To define a user as an administrator or auditor, select a *User type* checkbox.
++
+[NOTE]
+====
+If you are modifying your own password, log out and log back in again for it to take effect.
+====
++
+. Select the *Organization* to be assigned for this user. For information about creating a new organization, see xref:proc-controller-create-organization[Creating an organization].
+. Click btn:[Create user].
+
+When the user is successfully created, the *User* dialog opens. From here, you can review and modify the user’s Teams, Roles, Tokens, and other membership details.
+
+[NOTE]
+====
+If the user is not newly created, the details screen displays the last login activity of that user.
+====
+
+If you log in as yourself, and view the details of your user profile, you can manage tokens from your user profile by selecting the *Tokens* tab.
+// [ddacosta - Removing until OAuth and Applications content is completed.] For more information, see xref:proc-controller-apps-create-tokens[Adding a token].
diff --git a/downstream/modules/platform/proc-gs-publish-to-a-collection.adoc b/downstream/modules/platform/proc-gs-publish-to-a-collection.adoc
new file mode 100644
index 0000000000..e203b0d030
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-publish-to-a-collection.adoc
@@ -0,0 +1,27 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-publish-to-a-collection_{context}"]
+
+= Publishing to a collection
+
+You can configure your projects to be uploaded to Git or to the source control manager of your choice.
+
+.Procedure
+
+. From the navigation panel, select {MenuAEProjects}.
+. Locate or create the project that you want to publish to your source control manager.
+. In the project *Details* tab, select *Edit project*.
+. Select *Git* from the *Source Control Type* drop-down menu.
+. Enter the appropriate details into the following fields:
+.. *Source Control URL* - see an example in the tooltip.
+.. Optional: *Source control branch/tag/commit*: Enter the SCM branch, tags, commit hashes, arbitrary refs, or revision number (if applicable) from the source control to checkout. Some commit hashes and references might not be available unless you also provide a custom refspec in the next field. If left blank, the default is `HEAD`, which is the last checked out branch, tag, or commit for this project.
+.. *Source Control Refspec* - This field is an option specific to Git source control; only advanced users who are familiar and comfortable with Git should specify which references to download from the remote repository. For more information, see link:{URLControllerUserGuide}/controller-jobs#controller-job-branch-overriding[Job branch overriding].
+.. *Source Control Credential* - If authentication is required, select the appropriate source control credential.
+. Optional: *Options* - select the launch behavior, if applicable:
+.. *Clean* - Removes any local modifications before performing an update.
+.. *Delete* - Deletes the local repository in its entirety before performing an update. Depending on the size of the repository, this can significantly increase the amount of time required to complete an update.
+.. *Track submodules* - Tracks the latest commit. See the tooltip for more information.
+.. *Update Revision on Launch* - Updates the revision of the project to the current revision in the remote source control, and caches the roles directory from link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_reuse_roles.html[{Galaxy}] or link:{URLControllerUserGuide}/controller-projects#ref-projects-collections-support[Collections support]. {ControllerNameStart} ensures that the local revision matches and that the roles and collections are up-to-date with the last update.
+In addition, to avoid job overflows if jobs are spawned faster than the project can synchronize, selecting this enables you to configure a cache timeout to cache previous project synchronizations for a given number of seconds.
+.. *Allow Branch Override* - Enables a job template or an inventory source that uses this project to start with a specified SCM branch or revision other than that of the project. For more information, see link:{URLControllerUserGuide}/controller-jobs#controller-job-branch-overriding[Job branch overriding].
+. Click btn:[Save] to save your project.
+
diff --git a/downstream/modules/platform/proc-gs-social-auth-github.adoc b/downstream/modules/platform/proc-gs-social-auth-github.adoc
new file mode 100644
index 0000000000..d41977c884
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-social-auth-github.adoc
@@ -0,0 +1,33 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-social-auth-github"]
+
+= Configuring GitHub authentication
+
+You can connect GitHub identities to {PlatformNameShort} using OAuth. To set up GitHub authentication, you need to obtain an OAuth2 key and secret by registering your organization-owned application with GitHub, as described in link:https://docs.github.com/en/apps/using-github-apps/installing-your-own-github-app[registering the new application with GitHub].
+
+The OAuth2 key (Client ID) and secret (Client Secret) are used to supply the required fields in the UI. To register the application, you must supply it with your webpage URL, which is the Callback URL shown in the Authenticator details for your authenticator configuration.
+//See xref:gw-display-auth-details[Displaying authenticator details] for instructions on accessing this information.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Select *GitHub* from the *Authentication type* list and click btn:[Next].
+. Enter a *Name* for this authentication configuration.
+. When the application is registered, GitHub displays the *Client ID* and *Client Secret*:
++
+.. Copy and paste the GitHub Client ID into the GitHub OAuth2 Key field.
+.. Copy and paste the GitHub Client Secret into the GitHub OAuth2 Secret field.
++
+include::snippets/snip-gw-authentication-additional-auth-fields.adoc[]
++
+include::snippets/snip-gw-authentication-common-checkboxes.adoc[]
++
+. Click btn:[Next].
+
+include::snippets/snip-gw-authentication-verification.adoc[]
+
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-authentication-next-steps.adoc[]
diff --git a/downstream/modules/platform/proc-gs-upload-collection.adoc b/downstream/modules/platform/proc-gs-upload-collection.adoc
new file mode 100644
index 0000000000..c91bd7fb7f
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-upload-collection.adoc
@@ -0,0 +1,36 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-upload-collection_{context}"]
+
+= Uploading a collection to {HubName}
+
+If you want to share a collection that you have created with the rest of the Ansible community, you can upload it to {HubName}.
+
+[NOTE]
+====
+Sharing a collection with the Ansible community requires getting the collection certified or validated by our Partner Engineering team. This action is available only to partner clients. For more about becoming a partner, see our link:https://connect.redhat.com/en/partner-resources/software-certification-documentation[documentation on software certification].
+====
+
+You can upload your collection by using either the {HubName} user interface or the `ansible-galaxy` client.
+
+.Prerequisites
+
+* You have configured the `ansible-galaxy` client for {HubName}.
+* You have at least one namespace.
+* You have run all content through `ansible-test sanity`.
+
+.Procedure
+
+. From the navigation panel, select {MenuACNamespaces}.
+. Within the *My namespaces* tab, locate and click the namespace to which you want to upload a collection.
+. Select the *Collections* tab, and then click btn:[Upload collection].
+. In the *New collection* modal, click *Select file*. Locate the file on your system.
+. Click btn:[Upload].
+
+Using the `ansible-galaxy` client, enter the following command:
+
+[source,bash]
+----
+$ ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz --api-key=SECRET
+----
diff --git a/downstream/modules/platform/proc-gs-use-base-execution-env.adoc b/downstream/modules/platform/proc-gs-use-base-execution-env.adoc
new file mode 100644
index 0000000000..3421ce3f06
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-use-base-execution-env.adoc
@@ -0,0 +1,37 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-use-base-execution-env_{context}"]
+
+= Using the base {ExecEnvNameSing}
+
+Your subscription with {PlatformNameShort} gives you access to some base {ExecEnvName}. You can use a base {ExecEnvShort} as a starting point for creating a customized {ExecEnvShort}.
+
+{PlatformNameShort} includes the following default execution environments:
+
+* `Minimal` - Includes the latest Ansible-core 2.15 release along with Ansible Runner, but does not include collections or other content.
+* `EE Supported` - Minimal, plus all Red Hat-supported collections and dependencies.
+
+Base images included with {PlatformNameShort} are hosted on the Red Hat Ecosystem Catalog (registry.redhat.io).
+
+.Prerequisites
+
+* You have a valid {PlatformName} subscription.
+
+.Procedure
+
+. Log in to registry.redhat.io.
++
+[source,bash]
+----
+$ podman login registry.redhat.io
+----
++
+. Pull the base images from the registry:
++
+[source,bash]
+----
+$ podman pull registry.redhat.io/aap/
+----
+
+.Additional resources
+While these environments cover many automation use cases, you can also customize these containers for your specific needs. For more information about customizing your execution environment, see link:{URLBuilder}/assembly-publishing-exec-env#proc-customize-ee-image[Customizing an existing automation {ExecEnvShort} image] in the {TitleBuilder} guide.
+
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-gs-write-playbook.adoc b/downstream/modules/platform/proc-gs-write-playbook.adoc
new file mode 100644
index 0000000000..af64600447
--- /dev/null
+++ b/downstream/modules/platform/proc-gs-write-playbook.adoc
@@ -0,0 +1,71 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gs-write-playbook"]
+
+= Writing a playbook
+
+Create a playbook that pings your hosts and prints a "Hello world" message.
+
+Ansible uses the YAML syntax.
+YAML is a human-readable language that enables you to create playbooks without having to learn a complicated coding language.
+
+.Procedure
+
+. Create a file named `playbook.yaml` in your `ansible_quickstart` directory, with the following content:
++
+----
+- name: My first play
+  hosts: myhosts
+  tasks:
+   - name: Ping my hosts
+     ansible.builtin.ping:
+
+   - name: Print message
+     ansible.builtin.debug:
+       msg: Hello world
+----
+. Run your playbook:
++
+----
+$ ansible-playbook -i inventory.ini playbook.yaml
+----
+
+Ansible returns the following output:
+
+----
+PLAY [My first play] ********************************************************
+
+TASK [Gathering Facts] ******************************************************
+ok: [192.0.2.50]
+ok: [192.0.2.51]
+ok: [192.0.2.52]
+
+TASK [Ping my hosts] ********************************************************
+ok: [192.0.2.50]
+ok: [192.0.2.51]
+ok: [192.0.2.52]
+
+TASK [Print message] ********************************************************
+ok: [192.0.2.50] => {
+    "msg": "Hello world"
+}
+ok: [192.0.2.51] => {
+    "msg": "Hello world"
+}
+ok: [192.0.2.52] => {
+    "msg": "Hello world"
+}
+
+PLAY RECAP ******************************************************************
+192.0.2.50: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+192.0.2.51: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+192.0.2.52: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
+
+----
+
+.Additional resources
+
+* For more information on playbooks, see link:{LinkPlaybooksGettingStarted}.
+* If you need help writing a playbook, see
+link:https://developers.redhat.com/products/ansible/lightspeed?source=sso[{LightspeedFullName}].
+
diff --git a/downstream/modules/platform/proc-gw-add-admin-organization.adoc b/downstream/modules/platform/proc-gw-add-admin-organization.adoc
new file mode 100644
index 0000000000..682d482a78
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-add-admin-organization.adoc
@@ -0,0 +1,23 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-add-admin-organization"]
+
+= Adding an administrator to an organization
+
+You can add administrators to an organization, which allows them to manage the membership and settings of the organization. For example, they can create new users and teams within the organization, and grant permissions to users within the organization.
+To add an administrator to an organization, the user must already exist.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMOrganizations}.
+. From the Organizations list view, select the organization to which you want to add an administrator.
+. Click the *Administrators* tab.
+. Click btn:[Add administrators].
+. Select the users from the list by clicking the checkbox next to their names to assign the administrator role to them for this organization.
+. Click btn:[Add administrators].
+. To remove a particular administrator from the organization, select *Remove administrator* from the *More actions {MoreActionsIcon}* list next to the administrator name. This launches a confirmation dialog, asking you to confirm the removal.
++
+[NOTE]
+====
+If the user had previously been added as a member to this organization, they will continue to be a member of this organization. However, if they were added to the organization when the administrator assignment was made, they will be removed from the organization.
+====
diff --git a/downstream/modules/platform/proc-gw-add-admin-team.adoc b/downstream/modules/platform/proc-gw-add-admin-team.adoc
new file mode 100644
index 0000000000..3388b657de
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-add-admin-team.adoc
@@ -0,0 +1,16 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-add-admin-team"]
+
+= Adding administrators to a team
+
+You can add administrators to a team, which allows them to manage the membership and settings of that team.
+To add an administrator to a team, the administrator must already have been created. For more information, see xref:proc-controller-creating-a-user[Creating a user].
+
+.Procedure
+
+. From the navigation panel, select {MenuAMTeams}.
+. Select the team to which you want to add an administrator.
+. Select the *Administrators* tab and click btn:[Add administrator(s)].
+. Select one or more users from the list by clicking the checkbox next to the name to add them as administrators of this team.
+. Click btn:[Add administrators].
diff --git a/downstream/modules/platform/proc-gw-add-team-organization.adoc b/downstream/modules/platform/proc-gw-add-team-organization.adoc
new file mode 100644
index 0000000000..9f1a3b5e43
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-add-team-organization.adoc
@@ -0,0 +1,29 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-add-team-organization"]
+
+= Adding a team to an organization
+
+You can provide team access to an organization by adding roles to the team. To add roles to a team, the team must already exist in the organization. For more information, see xref:proc-controller-creating-a-team[Creating a team].
+To add roles for a team, the role must already exist. See xref:proc-gw-create-roles[Creating a role] for more information.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMOrganizations}.
+. From the Organizations list view, select the organization to which you want to add team access.
+. Click the *Teams* tab. If no teams exist, click btn:[Create team] to create a team and add it to this organization.
+. Click btn:[Add roles].
+. Select the roles you want the selected team to have. Scroll down for a complete list of roles.
++
+include::snippets/snip-gw-roles-note-multiple-components.adoc[]
++
+. Click btn:[Next] to review the roles settings.
+. Click btn:[Finish] to apply the roles to the selected teams. The Add roles dialog displays the updated roles assigned for each team.
+. Click btn:[Close].
++
+[NOTE]
+====
+A team with associated roles retains them if it is reassigned to another organization.
+====
++
+. To manage roles for teams in an organization, click the *{SettingsIcon}* icon next to the team and select *Manage roles*.
diff --git a/downstream/modules/platform/proc-gw-adjust-mapping-order.adoc b/downstream/modules/platform/proc-gw-adjust-mapping-order.adoc
new file mode 100644
index 0000000000..799429a8cf
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-adjust-mapping-order.adoc
@@ -0,0 +1,37 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-adjust-mapping-order"]
+
+= Adjusting the mapping order
+
+If you have one or more authenticator maps defined, you can manage the order of the maps. Authenticator maps are run in order, from lowest to highest, when a user logs in. If one authenticator map determines that a user should be a member of a team, but a subsequent map determines that the user should not be a member of the same team, the ruling from the second map takes precedence over the result of the first map. Authenticator maps with the same order are executed in an undefined order.
+
+For example, if the first authenticator map is of type `is_superuser` and the trigger is set to *never*, any user logging into the system would never be granted the `is_superuser` flag.
+
+If the second map is of type `is_superuser` and the trigger is based on the user having a specific group, any user logging in would initially be denied the `is_superuser` permission. However, any user with the specified group would subsequently be granted the `is_superuser` permission by the second rule.
+
+The order of rules is important beyond whether you want to process organizations, teams, or roles first. Rules can also be used to refine access, and careful consideration is needed to avoid login issues.
+
+For example:
+
+* Authenticator map A denies all users access to the system
+* Authenticator map B allows the user `john` access to the system
+
+When the mapping order is set to A, B, the first map denies access for all users, including `john`. The second map subsequently allows `john` access to the system, and the result is that `john` is granted access and is able to log in to the platform.
+
+However, when the mapping order is changed to B, A, the first map allows `john` access to the system. The second map subsequently denies all users access to the system (including `john`), and the result is that `john` is denied access and is unable to log in to the platform.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMAuthentication}.
+. In the list view, select the authenticator name displayed in the *Name* column.
+. Select the *Mapping* tab from the *Details* page of your authenticator.
+. Click btn:[Manage mappings].
+. Adjust the mapping order by dragging and dropping the mappings up or down in the list using the draggable icon.
++
+[NOTE]
+====
+The mapping precedence is determined by the order in which the mappings are listed.
+====
++
+. After your authenticator maps are in the correct order, click btn:[Apply].
diff --git a/downstream/modules/platform/proc-gw-allow-mapping.adoc b/downstream/modules/platform/proc-gw-allow-mapping.adoc
new file mode 100644
index 0000000000..a6805890c8
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-allow-mapping.adoc
@@ -0,0 +1,21 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-allow-mapping"]
+
+= Allow mapping
+
+With allow mapping, you can control which users have access to the system by defining the conditions that must be met.
+
+.Procedure
+
+. After configuring the authentication details for your authentication method, select the *Mapping* tab.
+. Select *Allow* from the *Add authentication mapping* list.
+. Enter a unique rule *Name* to identify the rule.
+. Select a *Trigger* from the list. See xref:gw-authenticator-map-triggers[Authenticator map triggers] for more information about map triggers.
+. Select *Revoke* to deny user access to the system when the trigger conditions are not matched.
+. Click btn:[Next].
+
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-mapping-next-steps.adoc[]
+
diff --git a/downstream/modules/platform/proc-gw-authentication-list-view.adoc b/downstream/modules/platform/proc-gw-authentication-list-view.adoc
new file mode 100644
index 0000000000..044614ea95
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-authentication-list-view.adoc
@@ -0,0 +1,17 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-authentication-list-view"]
+
+= Authentication list view
+
+On the *Authentication Methods* page, you can view and manage the configured authentication methods for your organization.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMAuthentication}.
++
+The *Authentication Methods* page is displayed.
++
+. To create an authentication method, click btn:[Create authentication] and follow the steps in xref:gw-config-authentication-type[Configuring an authentication type]. Otherwise, proceed to step 3.
+. From the menu bar, you can sort the list of authentication methods by using the arrows for *Order*, *Name*, and *Authentication type*.
+. Click the toggles to *Enable* or *Disable* authenticators.
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-gw-config-keycloak-settings.adoc b/downstream/modules/platform/proc-gw-config-keycloak-settings.adoc
new file mode 100644
index 0000000000..866789d7e5
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-config-keycloak-settings.adoc
@@ -0,0 +1,45 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-keycloak-authentication"]
+
+= Configuring Keycloak authentication
+
+You can configure {PlatformNameShort} to integrate with Keycloak to manage user authentication.
+
+[NOTE]
+====
+When using this authenticator, some specific setup is required in your Keycloak instance. Refer to the link:https://python-social-auth.readthedocs.io/en/latest/backends/keycloak.html[Python Keycloak reference] for more details.
+====
+
+.Procedure
+
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Enter a *Name* for this authentication configuration.
+. Select *Keycloak* from the *Authentication type* list. The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type.

include::snippets/snip-gw-authentication-auto-migrate.adoc[]

+. Enter the location where the user's token can be retrieved in the *Keycloak Access Token URL* field.
+. Optional: Enter the redirect location the user is taken to during the login flow in the *Keycloak Provider URL* field.
+. Enter the Client ID from your Keycloak installation in the *Keycloak OIDC Key* field.
+. Enter the RS256 public key provided by your Keycloak realm in the *Keycloak Public Key* field.
+. Enter the OIDC secret (Client Secret) from your Keycloak installation in the *Keycloak OIDC Secret* field.
++
+include::snippets/snip-gw-authentication-additional-auth-fields.adoc[]
++
+include::snippets/snip-gw-authentication-common-checkboxes.adoc[]
++
+. Click btn:[Create Authentication Method].
+
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-authentication-next-steps.adoc[]
+
+.Troubleshooting
+If you receive a `jwt.exceptions.InvalidAudienceError: Audience doesn't match` error, you must re-enable the audience by doing the following:
+
+. From the navigation for your Keycloak configuration, select menu:Client scopes[_YOUR-CLIENT-ID-dedicated_ > Add mapper > Audience].
+. Enter a name for the mapper.
+. Select the *Client ID* corresponding to your client in `Included Client Audience`.
+
diff --git a/downstream/modules/platform/proc-gw-create-roles.adoc b/downstream/modules/platform/proc-gw-create-roles.adoc
new file mode 100644
index 0000000000..4717952dd3
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-create-roles.adoc
@@ -0,0 +1,20 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-create-roles"]
+
+= Creating a role
+
+{PlatformNameShort} services provide a set of predefined roles with permissions sufficient for standard automation tasks. You can also configure custom roles and assign one or more permission filters to them. Permission filters define the actions allowed for a specific resource type.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMRoles}.
+. Select a tab for the component resource for which you want to create custom roles.
++
+include::snippets/snip-gw-roles-note-multiple-components.adoc[]
++
+. Click btn:[Create role].
+. Provide a *Name* and optionally include a *Description* for the role.
+. Select a *Content Type*.
+. Select the *Permissions* you want assigned to this role.
+. Click btn:[Create role] to create your new role.
diff --git a/downstream/modules/platform/proc-gw-define-rules-triggers.adoc b/downstream/modules/platform/proc-gw-define-rules-triggers.adoc
new file mode 100644
index 0000000000..60bcefc454
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-define-rules-triggers.adoc
@@ -0,0 +1,38 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-define-rules-triggers"]
+
+= Defining authentication mapping rules and triggers
+
+Authentication map types can be used with any type of authenticator. Each map has a trigger that defines when the map should be evaluated as true.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMAuthentication}.
+. In the list view, select the authenticator name displayed in the *Name* column.
+. Select the *Mapping* tab from the *Details* page of your authenticator.
+. Click btn:[Create mapping].
+. Select a map type from the *Authentication mapping* list. See xref:gw-authenticator-map-types[Authenticator map types] for detailed descriptions of the different map types. Choices include:
++
+* xref:gw-allow-mapping[Allow]
+* xref:ref-controller-organization-mapping[Organization]
+* xref:ref-controller-team-mapping[Team]
+* xref:gw-role-mapping[Role]
+* xref:gw-superuser-mapping[Is Superuser]
++
+. Enter a unique rule *Name* to identify the rule.
+. Select a *Trigger* from the list. See xref:gw-authenticator-map-triggers[Authenticator map triggers] for more details. Choices include:
++
+* *Always*
+* *Never*
+* *Group*
+* *Attribute*
++
+. Click btn:[Create mapping].
+. Repeat this procedure to create additional mapping rules and triggers for the authenticator.
+. Proceed to xref:gw-adjust-mapping-order[Adjusting the mapping order] to optionally reorder the mappings for your authenticator.
++
+[NOTE]
+====
+The mapping order setting is only available if there is more than one authenticator map defined.
+====
diff --git a/downstream/modules/platform/proc-gw-delete-authenticator.adoc b/downstream/modules/platform/proc-gw-delete-authenticator.adoc
new file mode 100644
index 0000000000..859cc93144
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-delete-authenticator.adoc
@@ -0,0 +1,18 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-delete-authenticator"]
+
+= Deleting an authenticator
+
+You can delete previously configured authenticators from the *Authentication* list view.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMAuthentication}.
+. In the list view, select the checkbox next to the authenticator you want to delete.
+. Select *Delete authentication* from the *{MoreActionsIcon}* list.
++
+[NOTE]
+====
+You can delete multiple authenticators by selecting the checkbox next to each authenticator you want to remove, and clicking *Delete selected authentication* from the *{MoreActionsIcon}* list on the menu bar.
+====
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-gw-delete-organization.adoc b/downstream/modules/platform/proc-gw-delete-organization.adoc
new file mode 100644
index 0000000000..9eeaeb89b9
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-delete-organization.adoc
@@ -0,0 +1,22 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-delete-organization"]
+
+= Deleting an organization
+
+To delete an organization, you must be an organization administrator or system administrator. When you delete an organization, the organization and its teams, users, and resources are permanently removed from {PlatformNameShort}.
+
+[NOTE]
+====
+When you attempt to delete items that are used by other resources, a message is displayed warning you that the deletion might impact other resources and prompts you to confirm the deletion. Some screens might contain items that are invalid or were previously deleted; these items fail to run.
+====
+
+.Procedure
+. From the navigation panel, select {MenuAMOrganizations}.
+. Click the *{MoreActionsIcon}* icon next to the organization you want removed and select *Delete organization*.
+. Select the confirmation checkbox and click btn:[Delete organizations] to proceed with the deletion. Otherwise, click btn:[Cancel].
++
+[NOTE]
+====
+You can delete multiple organizations by selecting the checkbox next to each organization you want to remove, and selecting *Delete selected organizations* from the *More actions {MoreActionsIcon}* list on the menu bar.
+====
diff --git a/downstream/modules/platform/proc-gw-delete-roles.adoc b/downstream/modules/platform/proc-gw-delete-roles.adoc
new file mode 100644
index 0000000000..e67cde04ee
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-delete-roles.adoc
@@ -0,0 +1,17 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-delete-roles"]
+
+= Deleting a role
+
+Built-in roles cannot be deleted; however, you can delete custom roles from the *Roles* list view.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMRoles}.
+. Select a tab for the component resource for which you want to delete custom roles.
++
+include::snippets/snip-gw-roles-note-multiple-components.adoc[]
++
+. Click the *More Actions* icon *{MoreActionsIcon}* next to the role you want to delete and select *Delete role*.
+. To delete roles in bulk, select the roles you want to delete from the *Roles* list view, click the *More Actions* icon *{MoreActionsIcon}*, and select *Delete roles*.
diff --git a/downstream/modules/platform/proc-gw-delete-team.adoc b/downstream/modules/platform/proc-gw-delete-team.adoc
new file mode 100644
index 0000000000..42b74f96b5
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-delete-team.adoc
@@ -0,0 +1,18 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-delete-team"]
+
+= Deleting a team
+
+To delete a team, you must have team permissions. When you delete a team, the permissions that its members inherited from the team are revoked.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMTeams}.
+. Select the checkbox for the team that you want to remove.
+. Select the {MoreActionsIcon} icon and select *Delete team*.
++
+[NOTE]
+====
+You can delete multiple teams by selecting the checkbox next to each team you want to remove, and selecting *Delete teams* from the *More actions {MoreActionsIcon}* list.
+====
diff --git a/downstream/modules/platform/proc-gw-display-auth-details.adoc b/downstream/modules/platform/proc-gw-display-auth-details.adoc
new file mode 100644
index 0000000000..44e24c5055
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-display-auth-details.adoc
@@ -0,0 +1,16 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-display-auth-details"]
+
+= Displaying authenticator details
+
+After you locate the authenticator you want to review, you can display the configuration details:
+
+.Procedure
+
+. From the navigation panel, select {MenuAMAuthentication}.
+. In the list view, select the authenticator name displayed in the *Name* column.
++
+The authenticator *Details* page is displayed.
++
+. From the *Details* page, you can review the configuration settings applied to the authenticator.
diff --git a/downstream/modules/platform/proc-gw-edit-authenticator.adoc b/downstream/modules/platform/proc-gw-edit-authenticator.adoc
new file mode 100644
index 0000000000..d52d7b0c4b
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-edit-authenticator.adoc
@@ -0,0 +1,18 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-edit-authenticator"]
+
+= Editing an authenticator
+
+You can modify the settings of previously configured authenticators from the *Authentication* list view.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMAuthentication}.
+. In the list view, you can either:
++
+.. Select the btn:[Edit] image:leftpencil.png[Edit,15,15] icon next to the authenticator you want to modify, or
+.. Select the authenticator name displayed in the *Name* column and click btn:[Edit authenticator] from the *Details* page.
++
+. Modify the authentication details or mapping configurations as required.
+. Click btn:[Save].
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-gw-edit-roles.adoc b/downstream/modules/platform/proc-gw-edit-roles.adoc
new file mode 100644
index 0000000000..192a43733d
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-edit-roles.adoc
@@ -0,0 +1,17 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-edit-roles"]
+
+= Editing a role
+
+Built-in roles cannot be changed; however, you can modify custom roles from the *Roles* list view. The *Editable* column in the *Roles* list view indicates whether a role is _Built-in_ or _Editable_.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMRoles}.
+. Select a tab for the component resource for which you want to modify a custom role.
++
+include::snippets/snip-gw-roles-note-multiple-components.adoc[]
++
+. Click the *Edit role* icon image:leftpencil.png[Edit,15,15] next to the role you want to modify and change the role settings as needed.
+. Click btn:[Save role] to save your changes.
diff --git a/downstream/modules/platform/proc-gw-editing-a-user.adoc b/downstream/modules/platform/proc-gw-editing-a-user.adoc
new file mode 100644
index 0000000000..527e591176
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-editing-a-user.adoc
@@ -0,0 +1,39 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-editing-a-user"]
+
+= Editing a user
+
+You can modify the properties of a user account after it is created.
+
+In upgrade scenarios, there might be pre-existing user accounts from {ControllerName} or {HubName} services. When editing these user accounts, the *User type* checkboxes indicate whether the account had one of the following service-level administrator privileges:
+
+Automation Execution Administrator:: A previously defined {ControllerName} administrator with full read and write privileges over automation execution resources only.
+Automation Decisions Administrator:: A previously defined {EDAName} administrator with full read and write privileges over automation decision resources only.
+Automation Content Administrator:: A previously defined {HubName} administrator with full read and write privileges over automation content resources only.
+
+Platform administrators can revoke or assign administrator permissions for the individual services and designate the user as an *{PlatformNameShort} Administrator*, an *{PlatformNameShort} Auditor*, or a normal user. Assigning administrator privileges to all of the individual services automatically designates the user as an *{PlatformNameShort} Administrator*. See xref:proc-controller-creating-a-user[Creating a user] for more information about user types.
+
+To see whether a user had service-level auditor privileges, you must refer to the API.
+
+[NOTE]
+====
+Users previously designated as {ControllerName} or {HubName} administrators are labeled as *Normal* in the *User type* column in the xref:proc-gw-users-list-view[Users list view]. You can see whether these users have administrator privileges from the *Edit Users* page.
+====
+
+.Procedure
+
+. From the navigation panel, select {MenuAMUsers}.
+
+. Select the checkbox for the user that you want to modify.
+
+. Click the *Pencil* icon and select *Edit user*.
+
+. The *Edit user* page is displayed, where you can modify user details such as *Password*, *Email*, *User type*, and *Organization*.
++
+[NOTE]
+====
+If the user account was migrated to {PlatformNameShort} 2.5 during the upgrade process and had administrator privileges for an individual service, additional *User type* checkboxes are available. You can use these checkboxes to revoke or add individual privileges or designate the user as a platform administrator, system auditor, or normal user.
+====
++
+. After your changes are complete, click *Save user*.
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-gw-local-authentication.adoc b/downstream/modules/platform/proc-gw-local-authentication.adoc
new file mode 100644
index 0000000000..ebe68ac10b
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-local-authentication.adoc
@@ -0,0 +1,31 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-local-authentication"]
+
+= Configuring local authentication
+
+As a platform administrator, you can configure local system authentication. With local authentication, users and their passwords are checked against local system accounts.
+
+[NOTE]
+====
+A local authenticator is automatically created by the {PlatformNameShort} installation process, and is configured with the specified admin credentials in the inventory file before installation. After successful installation, you can log in to {PlatformNameShort} by using those credentials.
+====
+
+.Procedure
+
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Enter a *Name* for this Local configuration. The configuration name is required, must be unique across all authenticators, and must not be longer than 512 characters.
+. Select *Local* from the *Authentication type* list.
+
+include::snippets/snip-gw-authentication-auto-migrate.adoc[]
++
+include::snippets/snip-gw-authentication-additional-auth-fields.adoc[]
++
+include::snippets/snip-gw-authentication-common-checkboxes.adoc[]
++
+. Click btn:[Next].
+
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-authentication-next-steps.adoc[]
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-gw-organizations-exec-env.adoc b/downstream/modules/platform/proc-gw-organizations-exec-env.adoc
new file mode 100644
index 0000000000..757439bc63
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-organizations-exec-env.adoc
@@ -0,0 +1,25 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-organizations-exec-env"]
+
+= Working with {ExecEnvShort}s
+
+When {ControllerName} is enabled on the platform, you can review any {ExecEnvShort}s you have set up and manage their settings within the organization resource.
+
+For more information about {ExecEnvShort}s, see link:{URLControllerUserGuide}/assembly-controller-execution-environments[Execution environments] in the _{TitleControllerUserGuide}_ guide.
+
+
+.Procedure
+
+. From the navigation panel, select {MenuAMOrganizations}.
+. From the Organizations list view, select the organization whose {ExecEnvShort}s you want to manage.
+. Select the *Execution Environments* tab.
+. If no {ExecEnvShort}s are available, click btn:[Create {ExecEnvShort}] to create one. Alternatively, you can create an {ExecEnvShort} from the navigation panel by selecting {MenuInfrastructureExecEnvironments}.
+. Click btn:[Create {ExecEnvShort}].
++
+[NOTE]
+====
+After creating a new {ExecEnvShort}, return to {MenuAMOrganizations} and select the organization in which you created the {ExecEnvShort} to update the list on that tab.
+====
++
+. Select the {ExecEnvShort}s to use with your particular organization.
diff --git a/downstream/modules/platform/proc-gw-remove-roles-team.adoc b/downstream/modules/platform/proc-gw-remove-roles-team.adoc
new file mode 100644
index 0000000000..3eae34284e
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-remove-roles-team.adoc
@@ -0,0 +1,18 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-remove-roles-team"]
+
+= Removing roles from a team
+
+You can remove roles from a team by selecting the *-* icon next to the resource. This launches a confirmation dialog, asking you to confirm the removal.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMTeams}.
+. Select the team *Name* from which you want to remove roles.
+. Select the *Roles* tab.
++
+include::snippets/snip-gw-roles-note-multiple-components.adoc[]
++
+. Select the checkbox next to each resource you want to remove and click *Remove selected roles* from the *{MoreActionsIcon}* list on the menu bar.
+. Select the checkbox to confirm removal of the selected roles and click *Remove role*.
diff --git a/downstream/modules/platform/proc-gw-remove-roles-user.adoc b/downstream/modules/platform/proc-gw-remove-roles-user.adoc
new file mode 100644
index 0000000000..f6e47671ce
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-remove-roles-user.adoc
@@ -0,0 +1,17 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-remove-roles-user"]
+
+= Removing roles from a user
+You can remove roles from a user by selecting the *-* icon next to the resource. This launches a confirmation dialog, asking you to confirm the removal.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMUsers}.
+. Select the user *Name* from which you want to remove roles.
+. Select the *Roles* tab.
++
+include::snippets/snip-gw-roles-note-multiple-components.adoc[]
++
+. Select the checkbox next to each resource you want to remove and click *Remove selected roles* from the *More actions {MoreActionsIcon}* list on the menu bar.
+. Select the checkbox to confirm removal of the selected roles and click btn:[Remove role].
diff --git a/downstream/modules/platform/proc-gw-role-mapping.adoc b/downstream/modules/platform/proc-gw-role-mapping.adoc
new file mode 100644
index 0000000000..37769b6663
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-role-mapping.adoc
@@ -0,0 +1,28 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-role-mapping"]
+
+= Role mapping
+
+Role mapping is the mapping of a user to a global role, such as Platform Auditor, or to a team or organization role.
+
+When a team or organization is specified together with the appropriate role, the behavior is identical to organization mapping or team mapping.
+
+Role mapping can be specified separately for each account authentication.
+
+.Procedure
+
+. After configuring the authentication details for your authentication method, select the *Mapping* tab.
+. Select *Role* from the *Add authentication mapping* list.
+. Enter a unique rule *Name* to identify the rule.
+. Select a *Trigger* from the list. See xref:gw-authenticator-map-triggers[Authenticator map triggers] for more information about map triggers.
+. Select *Revoke* to remove the role for the user when none of the trigger conditions are matched.
+. Select a *Role* to be applied or removed for matching users.
+. Click btn:[Next].
+
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-mapping-next-steps.adoc[]
+
+
+
diff --git a/downstream/modules/platform/proc-gw-roles.adoc b/downstream/modules/platform/proc-gw-roles.adoc
new file mode 100644
index 0000000000..8c2314b6d5
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-roles.adoc
@@ -0,0 +1,17 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-roles"]
+
+= Displaying roles
+
+You can display the roles assigned for component resources from the menu:Access Management[] menu.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMRoles}.
+. Select a tab for the component resource for which you want to display roles.
++
+include::snippets/snip-gw-roles-note-multiple-components.adoc[]
++
+. From the table header, you can sort the list of roles by using the arrows for *Name*, *Description*, *Created*, and *Editable* or by making sort selections in the *Sort* list.
+. You can filter the list of roles by selecting *Name* or *Editable* from the filter list and clicking the arrow.
diff --git a/downstream/modules/platform/proc-gw-searching-authenticator.adoc b/downstream/modules/platform/proc-gw-searching-authenticator.adoc
new file mode 100644
index 0000000000..b3bd1e0f52
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-searching-authenticator.adoc
@@ -0,0 +1,14 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-searching-authenticator"]
+
+= Searching for an authenticator
+
+You can search for a previously configured authenticator from the Authentication list view.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMAuthentication}.
+. In the search bar, enter an appropriate keyword for the authentication method you want to search for and click the arrow icon.
+. If you do not find what you are looking for, you can narrow your search. From the filter list, select *Name* or *Authentication type*, depending on the search term you want to use.
+. Scroll through the list of search results and select the authenticator you want to review.
diff --git a/downstream/modules/platform/proc-gw-select-auth-type.adoc b/downstream/modules/platform/proc-gw-select-auth-type.adoc
new file mode 100644
index 0000000000..21f90dff6e
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-select-auth-type.adoc
@@ -0,0 +1,38 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-select-auth-type"]
+
+= Selecting an authentication type
+
+On the *Authentication Methods* page you can select the type of authenticator plugin you want to configure.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMAuthentication}.
+. Click btn:[Create authentication].
+. Enter a unique *Name* for the authenticator. The name is required, must be unique across all authenticators, and must not be longer than 512 characters. This becomes the unique identifier generated for the authenticator.
++
+[NOTE]
+====
+Changing the name does not update the unique identifier of the authenticator. For example, if you create an authenticator with the name `My Authenticator` and later change it to `My LDAP Authenticator`, you cannot create another authenticator with the name `My Authenticator` because the unique identifier is still in use.
+====
++
+. Select the authenticator type from the *Authentication type* list. See xref:gw-config-authentication-type[Configuring an authentication type] for the complete list of authentication plugins available.
+. The *Authentication details* section automatically updates to show the fields relevant to the selected authentication type. See the respective sections in Configuring an authentication type for the required details.
++
+For all authentication types, you can enter a *Name*, *Additional Authenticator Fields*, and *Create Objects*.
++
+. Enable or disable *Enabled* to specify whether the authenticator is enabled or disabled. If enabled, users can log in from the authenticator. If disabled, users cannot log in from the authenticator.
+. Enable or disable *Create Object* to specify whether the authenticator should create teams and organizations in the system when a user logs in.
++
+Enabled:: Teams and organizations defined in the authenticator maps are created and the users added to them.
+Disabled:: Organizations and teams defined in the authenticator maps are not created automatically in the system. However, if they already exist (that is, created by a superuser), users who trigger the maps are granted access to them.
++
+. Enable or disable *Remove Users*. If enabled, any access previously granted to a user is removed when they authenticate from this source. If disabled, permissions are only added or removed from the user based on the results of this authenticator's authenticator mappings.
++
+For example, assume a user has been granted the `is_superuser` permission in the system, and that user logs in through an authenticator whose maps do not formulate an opinion as to whether the user should be a superuser.
+If *Remove Users* is enabled, the `is_superuser` permission is removed from the user. Because the authenticator maps have no opinion as to whether the permission should be there, after login the user does not have the `is_superuser` permission.
++
+If *Remove Users* is disabled, the `is_superuser` permission _will not_ be removed from the user. Because the authenticator maps have no opinion as to whether the permission should be there, after login the user _will_ still have the `is_superuser` permission.
++
+. Click btn:[Create mapping] and proceed to xref:gw-define-rules-triggers[Defining authentication mapping rules and triggers].
diff --git a/downstream/modules/platform/proc-gw-superuser-mapping.adoc b/downstream/modules/platform/proc-gw-superuser-mapping.adoc
new file mode 100644
index 0000000000..3262c7f53e
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-superuser-mapping.adoc
@@ -0,0 +1,20 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="gw-superuser-mapping"]
+
+= Superuser mapping
+
+Superuser mapping is the mapping of a user to the superuser role, such as System Administrator.
+
+.Procedure
+
+. After configuring the authentication details for your authentication method, select the *Mapping* tab.
+. Select *Superuser* from the *Add authentication mapping* list.
+. Enter a unique rule *Name* to identify the rule.
+. Select a *Trigger* from the list. See xref:gw-authenticator-map-triggers[Authenticator map triggers] for more information about map triggers.
+. Select *Revoke* to remove the superuser role from the user when none of the trigger conditions are matched.
+. Click btn:[Next].
+
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-mapping-next-steps.adoc[]
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-gw-team-access-resources.adoc b/downstream/modules/platform/proc-gw-team-access-resources.adoc
new file mode 100644
index 0000000000..2ee8b1d0eb
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-team-access-resources.adoc
@@ -0,0 +1,21 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-team-access"]
+
+= Providing team access to a resource
+You can grant users access based on their team membership. When you add a user as a member of a team, they inherit access to the roles and resources defined for that team.
+
+[NOTE]
+====
+Direct team access cannot be granted to {MenuACAdminRemoteRegistries} resources.
+====
+
+.Procedure
+
+. From the navigation panel, select a resource to which you want to provide team access. For example, {MenuAETemplates}.
+. Select the *Team Access* tab.
+. Click btn:[Add roles].
+. Click the checkbox beside the team to assign that team to your chosen type of resource and click btn:[Next].
+. Select the roles you want applied to the team for the chosen resource and click btn:[Next].
+. Review the settings and click btn:[Finish]. The Add roles dialog displays, indicating whether the role assignments were applied successfully.
+. You can remove resource access for a team by selecting the *Remove role* icon next to the team. This launches a confirmation dialog, asking you to confirm the removal.
diff --git a/downstream/modules/platform/proc-gw-team-add-user.adoc b/downstream/modules/platform/proc-gw-team-add-user.adoc
new file mode 100644
index 0000000000..8918eb6eee
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-team-add-user.adoc
@@ -0,0 +1,27 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-team-add-user"]
+
+= Adding users to a team
+To add a user to a team, the user must already have been created. For more information, see xref:proc-controller-creating-a-user[Creating a user]. Adding a user to a team adds them as a member only. Use the *Roles* tab to assign a role for different resources to the selected team.
+
+The following tab selections are available when adding users to a team. When user accounts from {ControllerName} or {HubName} organizations have been migrated to {PlatformNameShort} 2.5 during the upgrade process, the *Automation Execution* and *Automation Content* tabs show content based on whether the users were added to those organizations prior to migration.
+
+{PlatformNameShort}:: Reflects all users added to the organization at the platform level. From this tab, you can add users as organization members and, optionally, provide specific organization-level roles.
+
+Automation Execution:: Reflects users that were added directly to the {ControllerName} organization prior to an upgrade and migration. From this tab, you can only view existing memberships in {ControllerName} and remove those memberships, but you cannot add new memberships. New organization memberships must be added through the platform.
+
+Automation Content:: Reflects users that were added directly to the {HubName} organization prior to an upgrade and migration. From this tab, you can only view existing memberships in {HubName} and remove those memberships, but you cannot add new memberships.
+
+New user memberships to a team must be added at the platform level.
+
+
+.Procedure
+
+. From the navigation panel, select {MenuAMTeams}.
+. Select the team to which you want to add users.
+. Select the *Users* tab.
+. Select the *{PlatformNameShort}* tab and click btn:[Add users] to add user access to the team, or select the *Automation Execution* or *Automation Content* tab to view or remove user access from the team.
+. Select one or more users from the list by clicking the checkbox next to the name to add them as members of this team.
+. Click btn:[Add users].
+
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-gw-team-list-view.adoc b/downstream/modules/platform/proc-gw-team-list-view.adoc
new file mode 100644
index 0000000000..ecdf03b30f
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-team-list-view.adoc
@@ -0,0 +1,15 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-team-list-view"]
+
+= Teams list view
+
+The Teams page displays the existing teams for your installation. From here, you can search for a specific team, filter the list of teams by team name or organization, or change the sort order for the list.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMTeams}.
+. In the *Search* bar, enter an appropriate keyword for the team you want to search for and click the arrow icon.
+. From the menu bar, you can sort the list of teams by using the arrows for *Name* and *Organization* to toggle your sorting preference.
+. You can view team details by clicking a team *Name* on the *Teams* page.
+. You can view organization details by clicking the link in the *Organization* column.
diff --git a/downstream/modules/platform/proc-gw-team-remove-user.adoc b/downstream/modules/platform/proc-gw-team-remove-user.adoc
new file mode 100644
index 0000000000..19780da7c2
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-team-remove-user.adoc
@@ -0,0 +1,22 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-team-remove-user"]
+
+= Removing users from a team
+
+You can remove a user from a team from the *Teams* list view.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMTeams}.
+. Select the team from which you want to remove users.
+. Select the *Users* tab.
+. Click the *Remove user* icon next to the user you want to remove as a member of the team.
+. You can remove multiple users by selecting the checkbox next to each user you want to remove, and selecting *Remove selected users* from the *More actions {MoreActionsIcon}* list.
++
+[NOTE]
+====
+If the user is a team administrator, you can remove their membership in the team from the *Administrators* tab.
+====
++
+This launches a confirmation dialog, asking you to confirm the removal.
diff --git a/downstream/modules/platform/proc-gw-user-access-resources.adoc b/downstream/modules/platform/proc-gw-user-access-resources.adoc
new file mode 100644
index 0000000000..d361a11c1d
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-user-access-resources.adoc
@@ -0,0 +1,22 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-user-access-resources"]
+
+= Providing user access to a resource
+
+You can grant users access to resources through the roles to which they are assigned.
+
+[NOTE]
+====
+Direct user access cannot be granted to {MenuACAdminRemoteRegistries} resources.
+====
+
+.Procedure
+
+. From the navigation panel, select a resource to which you want to provide user access. For example, {MenuAETemplates}.
+. Select the *User access* tab.
+. Click btn:[Add roles].
+. Click the checkbox beside the user to assign that user to your chosen type of resource and click btn:[Next].
+. Select the roles you want applied to the user for the chosen resource and click btn:[Next].
+. Review the settings and click btn:[Finish]. The Add roles dialog displays, indicating whether the role assignments were applied successfully.
+. You can remove resource access for a user by selecting the *Remove role* icon next to the user. This launches a confirmation dialog, asking you to confirm the removal.
diff --git a/downstream/modules/platform/proc-gw-users-list-view.adoc b/downstream/modules/platform/proc-gw-users-list-view.adoc
new file mode 100644
index 0000000000..b95c7cd926
--- /dev/null
+++ b/downstream/modules/platform/proc-gw-users-list-view.adoc
@@ -0,0 +1,16 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-gw-users-list-view"]
+
+= Users list view
+
+The *Users* page displays the existing users for your installation. From here, you can search for a specific user, filter the list of users, or change the sort order for the list.
+
+When user accounts have been migrated to {PlatformNameShort} 2.5 during the upgrade process, these accounts are also displayed in the *Users* list view. Users previously designated as {ControllerName} or {HubName} administrators are labeled as *Normal* in the *User type* column. You can see whether these users have administrator privileges by editing the account. See xref:gw-editing-a-user[Editing a user] for instructions.
+
+.Procedure
+
+. From the navigation panel, select {MenuAMUsers}.
+. In the *Search* bar, enter an appropriate keyword for the user you want to search for and click the arrow icon.
+. From the menu bar, you can sort the list of users by using the arrows for *Username*, *Email*, *First name*, *Last name*, or *Last login* to toggle your sorting preference.
+. You can view user details by selecting a *Username* from the *Users* list view.
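+
+As noted in xref:gw-editing-a-user[Editing a user], service-level auditor privileges are visible only through the API. The following is a minimal, hypothetical spot-check; the hostname, credentials, and username are placeholders, and the endpoint and field names reflect the legacy {ControllerName} API, so they might differ in your environment:
+
+[source,bash]
+----
+# Query the legacy controller API for a migrated account and print only the
+# privilege flags. Hostname, credentials, and username are placeholders.
+$ curl -s -u admin:password \
+    "https://controller.example.com/api/v2/users/?username=legacy_user" \
+    | python3 -m json.tool | grep -E '"is_superuser"|"is_system_auditor"'
+----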
diff --git a/downstream/modules/platform/proc-hs-eda-setup.adoc b/downstream/modules/platform/proc-hs-eda-setup.adoc new file mode 100644 index 0000000000..55b22e70c6 --- /dev/null +++ b/downstream/modules/platform/proc-hs-eda-setup.adoc @@ -0,0 +1,49 @@ +:_mod-docs-content-type: PROCEDURE +[id="proc-hs-eda-setup"] + += Setting up horizontal scaling for {EDAcontroller} + +To scale up (add more nodes) or scale down (remove nodes), you must update the content of the inventory file to add or remove nodes and rerun the installation program. + +.Procedure + +// Procedure for RPM installer +ifdef::aap-install[] +. Update the inventory to add two more worker nodes: ++ +----- +[automationedacontroller] + +3.88.116.111 routable_hostname=automationedacontroller-api.example.com eda_node_type=api + +3.88.116.112 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker + +# two more worker nodes +3.88.116.113 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker + +3.88.116.114 routable_hostname=automationedacontroller-api.example.com eda_node_type=worker +----- ++ +. Re-run the installer. +endif::aap-install[] + + +// Procedure for Containerized installer +ifdef::container-install[] +. Update the inventory to add two more worker nodes: ++ +----- +[automationeda] + +3.88.116.111 routable_hostname=automationeda-api.example.com eda_type=api + +3.88.116.112 routable_hostname=automationeda-api.example.com eda_type=worker + +# two more worker nodes +3.88.116.113 routable_hostname=automationeda-api.example.com eda_type=worker + +3.88.116.114 routable_hostname=automationeda-api.example.com eda_type=worker +----- ++ +. Re-run the installer. +endif::container-install[] diff --git a/downstream/modules/platform/proc-hub-ingress-options.adoc b/downstream/modules/platform/proc-hub-ingress-options.adoc index 2c6d95cb32..eaf7542d1a 100644 --- a/downstream/modules/platform/proc-hub-ingress-options.adoc +++ b/downstream/modules/platform/proc-hub-ingress-options.adoc @@ -1,26 +1,34 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-hub-ingress-options_{context}"] -= Configuring the Ingress type for your {HubName} operator += Configuring the ingress type for your {HubName} operator -The {PlatformName} operator installation form allows you to further configure your {HubName} operator Ingress under *Advanced configuration*. +The {OperatorPlatformNameShort} installation form allows you to further configure your {HubName} operator ingress under *Advanced configuration*. .Procedure +. Log in to {OCP}. +. Navigate to menu:Operators[Installed Operators]. +. Select your {OperatorPlatformNameShort} deployment. +. Select the *Automation Hub* tab. +. For new instances, click btn:[Create AutomationHub]. +.. For existing instances, you can edit the YAML view by clicking the {MoreActionsIcon} icon and then btn:[Edit AutomationHub]. . Click btn:[Advanced Configuration]. . Under *Ingress type*, click the drop-down menu and select *Ingress*. . Under *Ingress annotations*, enter any annotations to add to the ingress. . Under *Ingress TLS secret*, click the drop-down menu and select a secret from the list. -After you have configured your {HubName} operator, click btn:[Create] at the bottom of the form view. {OCP} will now create the pods. This may take a few minutes. +.Verification -You can view the progress by navigating to menu:Workloads[Pods] and locating the newly created instance. +After you have configured your {HubName} operator, click btn:[Create] at the bottom of the form view. 
{OCP} creates the pods. This may take a few minutes.
+
+You can view the progress by navigating to menu:Workloads[Pods] and locating the newly created instance.
 
 Verify that the following operator pods provided by the {PlatformNameShort} Operator installation from {HubName} are running:
 
 [cols="a,a,a"]
 |===
-| Operator manager controllers | {ControllerName} |{HubName}
+| Operator manager controllers | {ControllerNameStart} |{HubNameStart}
 
 | The operator manager controllers for each of the 3 operators, include the following:
diff --git a/downstream/modules/platform/proc-hub-route-options.adoc b/downstream/modules/platform/proc-hub-route-options.adoc
index d1d8220bbb..9ebd78b6c2 100644
--- a/downstream/modules/platform/proc-hub-route-options.adoc
+++ b/downstream/modules/platform/proc-hub-route-options.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
 [id="proc-hub-route-options_{context}"]
 
 = Configure your {HubName} operator route options
@@ -6,6 +8,12 @@ The {PlatformName} operator installation form allows you to further configure yo
 
 .Procedure
 
+. Log in to {OCP}.
+. Navigate to menu:Operators[Installed Operators].
+. Select your {OperatorPlatformNameShort} deployment.
+. Select the *Automation Hub* tab.
+. For new instances, click btn:[Create AutomationHub].
+.. For existing instances, you can edit the YAML view by clicking the {MoreActionsIcon} icon and then btn:[Edit AutomationHub].
 . Click btn:[Advanced configuration].
 . Under *Ingress type*, click the drop-down menu and select *Route*.
 . Under *Route DNS host*, enter a common host name that the route answers to.
diff --git a/downstream/modules/platform/proc-import-mesh-ca.adoc b/downstream/modules/platform/proc-import-mesh-ca.adoc
index 66759f75ec..5462a8b0c7 100644
--- a/downstream/modules/platform/proc-import-mesh-ca.adoc
+++ b/downstream/modules/platform/proc-import-mesh-ca.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
 [id="importing-mesh-ca_{context}"]
 
 = Importing a Certificate Authority (CA) certificate
diff --git a/downstream/modules/platform/proc-install-aap-operator-chatbot.adoc b/downstream/modules/platform/proc-install-aap-operator-chatbot.adoc
new file mode 100644
index 0000000000..cce999bd53
--- /dev/null
+++ b/downstream/modules/platform/proc-install-aap-operator-chatbot.adoc
@@ -0,0 +1,27 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-install-operator-chatbot"]
+
+= Installing and configuring the {PlatformNameShort} operator
+
+Install and configure the {PlatformNameShort} operator on {OCPShort} so that you can deploy and use the {AAPchatbot}.
+
+:chapter: chatbot
+== Installing the {PlatformNameShort} operator
+Install the {PlatformNameShort} operator on {OCPShort}.
+
+include::proc-install-aap-operator.adoc[leveloffset=+1]
+
+* Verify that the {PlatformNameShort} operator displays a *Succeeded* status.
+
+:chapter: chatbot-configuration
+== Configuring the {PlatformNameShort} operator
+After installing the {OperatorPlatformNameShort} in your namespace, configure the {PlatformNameShort} operator to link your components to the {Gateway}.
+
+include::proc-operator-link-components.adoc[leveloffset=+2]
+
+You must also verify that all the pods are running successfully. Perform the following steps:
+
+. Navigate to menu:Workloads[Pods].
+. Switch to the project whose name is specified in the namespace metadata in the *YAML* configuration view.
+. Verify that all pods display either a *Running* or *Completed* status, with no pods displaying an error status.
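+
+You can also spot-check pod status from a terminal. The following is a minimal sketch that assumes the `oc` CLI is logged in and that `aap` is a placeholder for your project name:
+
+[source,bash]
+----
+# List the pods in the operator namespace; "aap" is a placeholder project name.
+$ oc get pods -n aap
+
+# Show only pods that are not Running or Completed; no output means healthy.
+$ oc get pods -n aap --no-headers | grep -Ev 'Running|Completed'
+----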
diff --git a/downstream/modules/platform/proc-install-aap-operator.adoc b/downstream/modules/platform/proc-install-aap-operator.adoc
index 327bbba38a..248237aaa2 100644
--- a/downstream/modules/platform/proc-install-aap-operator.adoc
+++ b/downstream/modules/platform/proc-install-aap-operator.adoc
@@ -1,16 +1,53 @@
-[id="proc-install-aap-operator"]
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-install-aap-operator_{context}"]
 
 .Procedure
 
 . Log in to {OCP}.
 . Navigate to menu:Operators[OperatorHub].
-. Search for the {PlatformName} operator and click btn:[Install].
+. Search for {PlatformNameShort} and click btn:[Install].
 . Select an *Update Channel*:
-+
-* *stable-2.x*: installs a namespace-scoped operator, which limits deployments of {HubName} and {ControllerName} instances to the namespace the operator is installed in. This is suitable for most cases. The stable-2.x channel does not require administrator privileges and utilizes fewer resources because it only monitors a single namespace.
-* *stable-2.x-cluster-scoped*: deploys {HubName} and {ControllerName} across multiple namespaces in the cluster and requires administrator privileges for all namespaces in the cluster.
+* *stable-2.x*: installs a namespace-scoped operator, which limits deployments of {HubName} and {ControllerName} instances to the namespace the operator is installed in. This is suitable for most cases.
+The stable-2.x channel does not require administrator privileges and utilizes fewer resources because it only monitors a single namespace.
+* *stable-2.x-cluster-scoped*: installs the {OperatorPlatformNameShort} in a single namespace that manages {PlatformNameShort} custom resources and deployments in all namespaces.
+The {OperatorPlatformNameShort} requires administrator privileges for all namespaces in the cluster.
 . Select *Installation Mode*, *Installed Namespace*, and *Approval Strategy*.
 . Click btn:[Install].
 
-The installation process will begin. When installation is complete, a modal will appear notifying you that the {PlatformName} operator is installed in the specified namespace.
+.Verification
+
+The installation process begins. When installation finishes, a modal appears notifying you that the {OperatorPlatformNameShort} is installed in the specified namespace.
+
+* Click btn:[View Operator] to view your newly installed {OperatorPlatformNameShort} and verify the following operator custom resources are present:
+
+[cols="a,a,a,a"]
+|===
+|{ControllerNameStart} | {HubNameStart} |{EDAName} (EDA) |{LightspeedShortName}
+
+|
+
+* Automation Controller
+* Automation Controller Backup
+* Automation Controller Restore
+* Automation Controller Mesh Ingress
+
+
+|
+
+* Automation Hub
+* Automation Hub Backup
+* Automation Hub Restore
+
+
+|
+
+* EDA
+* EDA Backup
+* EDA Restore
+
+
+|
+
+* Ansible Lightspeed
+|=== diff --git a/downstream/modules/platform/proc-install-cli-aap-operator.adoc b/downstream/modules/platform/proc-install-cli-aap-operator.adoc index 6b6272d03e..21ed3ffa4d 100644 --- a/downstream/modules/platform/proc-install-cli-aap-operator.adoc +++ b/downstream/modules/platform/proc-install-cli-aap-operator.adoc @@ -1,16 +1,22 @@ // Used in // assemblies/platform/assembly-installing-aap-operator-cli.adoc // titles/aap-operator-installation/ +:_mod-docs-content-type: PROCEDURE -[id="proc-install-cli-aap-operator{context}"] +[id="install-cli-aap-operator_{context}"] -= Subscribing a namespace to an operator using the {OCPShort} CLI += Installing the {OperatorPlatformNameShort} in a namespace Use this procedure to subscribe a namespace to an operator. +[IMPORTANT] +==== +You cannot deploy {PlatformNameShort} in the default namespace on your OpenShift Cluster. The `aap` namespace is recommended. You can use a custom namespace, but it should run only {PlatformNameShort}. +==== + .Procedure -. Create a project for the operator +. Create a project for the operator. + ----- oc new-project ansible-automation-platform @@ -43,46 +49,68 @@ metadata: name: ansible-automation-platform namespace: ansible-automation-platform spec: - channel: 'stable-2.4' + channel: 'stable-2.5' installPlanApproval: Automatic name: ansible-automation-platform-operator source: redhat-operators sourceNamespace: openshift-marketplace --- -apiVersion: automationcontroller.ansible.com/v1beta1 -kind: AutomationController -metadata: - name: example - namespace: ansible-automation-platform -spec: - replicas: 1 - ----- + This file creates a `Subscription` object called `_ansible-automation-platform_` that subscribes the `ansible-automation-platform` namespace to the `ansible-automation-platform-operator` operator. + -It then creates an `AutomationController` object called `_example_` in the `ansible-automation-platform` namespace. -+ -To change the {ControllerName} name from `_example_`, edit the _name_ field in the `kind: AutomationController` section of [filename]`sub.yaml` and replace `__` with the name you want to use: -+ -[subs="+quotes"] ------ -apiVersion: automationcontroller.ansible.com/v1beta1 -kind: AutomationController -metadata: - name: ____ - namespace: ansible-automation-platform ------ . Run the [command]`*oc apply*` command to create the objects specified in the [filename]`sub.yaml` file: + ----- oc apply -f sub.yaml ----- ++ +. Verify the CSV PHASE reports "Succeeded" before proceeding using the [command]`oc get csv -n ansible-automation-platform` command: ++ +----- +oc get csv -n ansible-automation-platform -To verify that the namespace has been successfully subscribed to the `ansible-automation-platform-operator` operator, run the [command]`*oc get subs*` command: +NAME DISPLAY VERSION REPLACES PHASE +aap-operator.v2.5.0-0.1728520175 Ansible Automation Platform 2.5.0+0.1728520175 aap-operator.v2.5.0-0.1727875185 Succeeded +----- ++ +. Create an `AnsibleAutomationPlatform` object called `_example_` in the `ansible-automation-platform` namespace. 
++
+To change the name of the {PlatformNameShort} instance and its components from `_example_`, edit the _name_ field in the `metadata:` section and replace `example` with the name that you want to use:
++
-----
-$ oc get subs -n ansible-automation-platform
+oc apply -f - <<EOF
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: example
+  namespace: ansible-automation-platform
+spec:
+  # Components
+  controller:
+    disabled: false
+  eda:
+    disabled: false
+  hub:
+    disabled: false
+    ## uncomment if using file storage for Content pod
+    storage_type: file
+    file_storage_storage_class: <your-rwx-storage-class>
+    file_storage_size: 10Gi
+
+    ## uncomment if using S3 storage for Content pod
+    # storage_type: S3
+    # object_storage_s3_secret: example-galaxy-object-storage
+
+    ## uncomment if using Azure storage for Content pod
+    # storage_type: azure
+    # object_storage_azure_secret: azure-secret-name
+  lightspeed:
+    disabled: true
+EOF
-----

For further information about subscribing namespaces to operators, see link:{BaseURL}/openshift_container_platform/{OCPLatest}/html/operators/user-tasks#olm-installing-operator-from-operatorhub-using-cli_olm-installing-operators-in-namespace[Installing from OperatorHub using the CLI] in the {OCP} _Operators_ guide.
diff --git a/downstream/modules/platform/proc-install-ha-hub-selinux.adoc b/downstream/modules/platform/proc-install-ha-hub-selinux.adoc
index 8889e456b3..59f418fc2a 100644
--- a/downstream/modules/platform/proc-install-ha-hub-selinux.adoc
+++ b/downstream/modules/platform/proc-install-ha-hub-selinux.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-install-ha-hub-selinux"]

= Enabling a high availability (HA) deployment of {HubName} on SELinux
@@ -11,6 +13,11 @@ You must add the context for `/var/lib/pulp` pulpcore_static and run the {Platfo

.Prerequisites

* You have already configured a NFS export on your server.
++
+[NOTE]
+====
+The NFS share is hosted on an external server and is not part of the high availability {HubName} deployment.
+====

.Procedure
. Create a mount point at `/var/lib/pulp`:
@@ -21,7 +28,7 @@ $ mkdir /var/lib/pulp/
. Open `/etc/fstab` using a text editor, then add the following values:
+
----
-srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache 0 0
+srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:var_lib_t:s0" 0 0
srv_rhel8:/data/pulpcore_static /var/lib/pulp/pulpcore_static nfs defaults,_netdev,nosharecache,context="system_u:object_r:httpd_sys_content_rw_t:s0" 0 0
----
. Run the reload systemd manager configuration command:
diff --git a/downstream/modules/platform/proc-installing-ansible-core.adoc b/downstream/modules/platform/proc-installing-ansible-core.adoc
deleted file mode 100644
index 23202ee3b9..0000000000
--- a/downstream/modules/platform/proc-installing-ansible-core.adoc
+++ /dev/null
@@ -1,22 +0,0 @@
-:_mod-docs-content-type: PROCEDURE
-
-[id="installing-ansible-core_{context}"]
-
-= Installing ansible-core
-
-[role="_abstract"]
-
-
-
-.Procedure
-
-. Install ansible-core and other tools:
-+
-----
-sudo dnf install -y ansible-core wget git rsync
-----
-. 
Set a fully qualified hostname: -+ ----- -sudo hostnamectl set-hostname your-FQDN-hostname ----- diff --git a/downstream/modules/platform/proc-installing-containerized-aap.adoc b/downstream/modules/platform/proc-installing-containerized-aap.adoc index 0ababc924f..fc182dc49a 100644 --- a/downstream/modules/platform/proc-installing-containerized-aap.adoc +++ b/downstream/modules/platform/proc-installing-containerized-aap.adoc @@ -1,173 +1,81 @@ :_mod-docs-content-type: PROCEDURE -[id="installing-containerized-aap_{context}"] +[id="installing-containerized-aap"] = Installing containerized {PlatformNameShort} -[role="_abstract"] +After you prepare the {RHEL} host, download {PlatformNameShort}, and configure the inventory file, run the `install` playbook to install containerized {PlatformNameShort}. +.Prerequisites -Installation of {PlatformNameShort} is controlled with inventory files. Inventory files define the hosts and containers used and created, variables for components, and other information needed to customize the installation. +You have done the following: -For convenience an example inventory file is provided, that you can copy and modify to quickly get started. - -[NOTE] -==== -There is no default database choice given in the inventory file. You must follow the instructions in the inventory file to make the appropriate choice between an internally provided postgres, or provide your own externally managed and supported database option. -==== - -Edit the inventory file by replacing the `< >` placeholders with your specific variables, and uncommenting any lines specific to your needs. +* link:{URLContainerizedInstall}/aap-containerized-installation#preparing-the-rhel-host-for-containerized-installation[Prepared the {RHEL} host] +* link:{URLContainerizedInstall}/aap-containerized-installation#preparing-the-managed-nodes-for-containerized-installation[Prepared the managed nodes] +* link:{URLContainerizedInstall}/aap-containerized-installation#downloading-ansible-automation-platform[Downloaded {PlatformNameShort}] +* link:{URLContainerizedInstall}/aap-containerized-installation#configuring-inventory-file[Configured the inventory file] +* Logged in to the {RHEL} host as your non-root user +.Procedure +. Go to the installation directory on your {RHEL} host. +. 
Run the `install` playbook: ++ ---- -# This is the AAP installer inventory file -# Please consult the docs if you're unsure what to add -# For all optional variables please consult the included README.md - -# This section is for your AAP Controller host(s) -# ------------------------------------------------- -[automationcontroller] -fqdn_of_your_rhel_host ansible_connection=local - -# This section is for your AAP Automation Hub host(s) -# ----------------------------------------------------- -[automationhub] -fqdn_of_your_rhel_host ansible_connection=local - -# This section is for your AAP EDA Controller host(s) -# ----------------------------------------------------- -[automationeda] -fqdn_of_your_rhel_host ansible_connection=local - -# This section is for your AAP Execution host(s) -# ------------------------------------------------ -#[execution_nodes] -#fqdn_of_your_rhel_host - -# This section is for the AAP database(s) -# ----------------------------------------- -# Uncomment the lines below and amend appropriately if you want AAP to install and manage the postgres databases -# Leave commented out if you intend to use your own external database and just set appropriate _pg_hosts vars -# see mandatory sections under each AAP component -#[database] -#fqdn_of_your_rhel_host ansible_connection=local - -[all:vars] - -# Common variables needed for installation -# ---------------------------------------- -postgresql_admin_username=postgres -postgresql_admin_password= -# If using the online (non-bundled) installer, you need to set RHN registry credentials -registry_username= -registry_password= -# If using the bundled installer, you need to alter defaults by using: -#bundle_install=true -# The bundle directory must include /bundle in the path -#bundle_dir= -# To add more decision environment images you need to set the de_extra_images variable -#de_extra_images=[{'name': 'Custom decision environment', 'image': '//:'}] -# To add more execution environment images you need to set the ee_extra_images variable -#ee_extra_images=[{'name': 'Custom execution environment', 'image': '//:'}] -# To use custom TLS CA certificate/key you need to set these variables -#ca_tls_cert= -#ca_tls_key= - -# AAP Database - optional -# -------------------------- -# To use custom TLS certificate/key you need to set these variables -#postgresql_tls_cert= -#postgresql_tls_key= - -# AAP Controller - mandatory -# -------------------------- -controller_admin_password= -controller_pg_host=fqdn_of_your_rhel_host -controller_pg_password= - -# AAP Controller - optional -# ------------------------- -# To use the postinstall feature you need to set these variables -#controller_postinstall=true -#controller_license_file= -#controller_postinstall_dir= -# When using config-as-code in a git repository -#controller_postinstall_repo_url= -#controller_postinstall_repo_ref=main -# To use custom TLS certificate/key you need to set these variables -#controller_tls_cert= -#controller_tls_key= - -# AAP Automation Hub - mandatory -# ------------------------------ -hub_admin_password= -hub_pg_host=fqdn_of_your_rhel_host -hub_pg_password= - -# AAP Automation Hub - optional -# ----------------------------- -# To use the postinstall feature you need to set these variables -#hub_postinstall=true -#hub_postinstall_dir= -# When using config-as-code in a git repository -#hub_postinstall_repo_url= -#hub_postinstall_repo_ref=main -# To customize the number of worker containers -#hub_workers=2 -# To use the collection signing feature you need to set 
these variables
-#hub_collection_signing=true
-#hub_collection_signing_key=
-# To use the container signing feature you need to set these variables
-#hub_container_signing=true
-#hub_container_signing_key=
-# To use custom TLS certificate/key you need to set these variables
-#hub_tls_cert=
-#hub_tls_key=
-
-# AAP EDA Controller - mandatory
-# ------------------------------
-eda_admin_password=
-eda_pg_host=fqdn_of_your_rhel_host
-eda_pg_password=
-
-# AAP EDA Controller - optional
-# -----------------------------
-# When using an external controller node unmanaged by the installer.
-#controller_main_url=https://fqdn_of_your_rhel_host
-# To customize the number of default/activation worker containers
-#eda_workers=2
-#eda_activation_workers=2
-# To use custom TLS certificate/key you need to set these variables
-#eda_tls_cert=
-#eda_tls_key=
-
-# AAP Execution Nodes - optional
-# -----------------------------
-#receptor_port=27199
-#receptor_protocol=tcp
-# To use custom TLS certificate/key you need to set these variables
-#receptor_tls_cert=
-#receptor_tls_key=
-# To use custom RSA key pair you need to set these variables
-#receptor_signing_private_key=
-#receptor_signing_public_key=
----
-
-Use the following command to install containerized {PlatformNameShort}:
-
++
----
ansible-playbook -i <inventory_file> ansible.containerized_installer.install
----
++
+For example:
++
+----
ansible-playbook -i inventory ansible.containerized_installer.install
----
++
+You can add additional parameters to the installation command as needed:
++
+----
+ansible-playbook -i <inventory_file> -e @<vault_file> --ask-vault-pass -K -v ansible.containerized_installer.install
+----
++
+For example:
++
+----
+ansible-playbook -i inventory -e @vault.yml --ask-vault-pass -K -v ansible.containerized_installer.install
+----
+** `-i <inventory_file>` - The inventory file to use for the installation.
+** `-e @<vault_file> --ask-vault-pass` - (Optional) If you are using a vault to store sensitive variables, add this to the installation command.
+** `-K` - (Optional) If your privilege escalation requires you to enter a password, add this to the installation command. You are then prompted for the BECOME password.
+** `-v` - (Optional) You can use increasing verbosity, up to 4 v's (`-vvvv`), to see the details of the installation process. However, it is important to note that this can significantly increase installation time, so use it only as needed or requested by Red Hat support.

-[NOTE]
-====
- If your privilege escalation requires a password to be entered, append *-K* to the command line. You will then be prompted for the *BECOME* password.
-====
+. The installation of containerized {PlatformNameShort} begins.

-You can use increasing verbosity, up to 4 v's (-vvvv) to see the details of the installation process.

+.Verification
+
+* After the installation completes, verify that you can access the platform UI, which is available by default at the following URL:
++
+----
+https://<gateway_node>:443
+----
++
+* Log in as the admin user with the credentials you created for `gateway_admin_username` and `gateway_admin_password`.
++
+* The default ports and protocols used for {PlatformNameShort} are 80 (HTTP) and 443 (HTTPS). You can customize the ports with the following variables:
++
+----
+envoy_http_port=80
+envoy_https_port=443
+----
++
+* If you want to disable HTTPS, set `envoy_disable_https` to `true`:
++
+----
+envoy_disable_https: true
+----

-[NOTE]
-====
-This can significantly increase installation time, so it is recommended that you use it only as needed or requested by Red Hat support. 
-==== \ No newline at end of file
+[role="_additional-resources"]
+.Additional resources
+* link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_privilege_escalation.html[Understanding privilege escalation: become]
+* link:{URLHardening}/hardening-aap#ref-sensitive-variables-install-inventory_hardening-aap[Sensitive variables in the installation inventory]
+* link:{URLGettingStarted}[Getting started with {PlatformNameShort}]
+* link:{URLContainerizedInstall}/troubleshooting-containerized-ansible-automation-platform[Troubleshooting containerized {PlatformNameShort}]
diff --git a/downstream/modules/platform/proc-installing-the-aap-setup-bundle.adoc b/downstream/modules/platform/proc-installing-the-aap-setup-bundle.adoc
index 141a89ce1b..2c57df77b6 100644
--- a/downstream/modules/platform/proc-installing-the-aap-setup-bundle.adoc
+++ b/downstream/modules/platform/proc-installing-the-aap-setup-bundle.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
// this info is duplicated here: {BaseURL}/red_hat_ansible_automation_platform/1.2/html/installing_and_upgrading_private_automation_hub/installing_online_or_offline#doc-wrapper

[id="installing-the-aap-setup-bundle_{context}"]
@@ -5,6 +7,7 @@
= Downloading and installing the {PlatformNameShort} setup bundle

[role="_abstract"]
+
Choose the setup bundle to download {PlatformNameShort} for disconnected installations. This bundle includes the RPM content for {PlatformNameShort} and the default {ExecEnvShort} images that will be uploaded to your {PrivateHubName} during the installation process.
@@ -12,69 +15,17 @@ Choose the setup bundle to download {PlatformNameShort} for disconnected install

. Download the {PlatformNameShort} setup bundle package by navigating to the link:{PlatformDownloadUrl}[{PlatformName} download] page and clicking btn:[Download Now] for the {PlatformNameShort} {PlatformVers} Setup Bundle.

-. From {ControllerName}, untar the bundle:
+. On the control node, untar the bundle:
+
----
$ tar xvf \
-   ansible-automation-platform-setup-bundle-2.4-1.tar.gz
-$ cd ansible-automation-platform-setup-bundle-2.4-1
-----
-+
-. Edit the inventory file to include the required options:
-
-.. automationcontroller group
-.. automationhub group
-.. admin_password
-.. pg_password
-.. automationhub_admin_password
-.. automationhub_pg_host, automationhub_pg_port
-.. automationhub_pg_password
-+
-*Example Inventory file*
-+
-----
-[automationcontroller]
-automationcontroller.example.org ansible_connection=local
-
-[automationcontroller:vars]
-peers=execution_nodes
-
-[automationhub]
-automationhub.example.org
-
-[all:vars]
-admin_password='password123'
-
-pg_database='awx'
-pg_username='awx'
-pg_password='dbpassword123'
-
-receptor_listener_port=27199
-
-automationhub_admin_password='hubpassword123'
-
-automationhub_pg_host='automationcontroller.example.org'
-automationhub_pg_port=5432
-
-automationhub_pg_database='automationhub'
-automationhub_pg_username='automationhub'
-automationhub_pg_password='dbpassword123'
-automationhub_pg_sslmode='prefer'
+   ansible-automation-platform-setup-bundle-2.5-1.tar.gz
+$ cd ansible-automation-platform-setup-bundle-2.5-1
----
+
-. Run the {PlatformNameShort} setup bundle executable as the root user:
-+
-----
-$ sudo -i
-# cd /path/to/ansible-automation-platform-setup-bundle-2.4-1
-# ./setup.sh
-----
-+
-. When installation is complete, navigate to the Fully Qualified Domain Name (FQDN) for the {ControllerName} node that was specified in the installation inventory file.

-. 
Log in using the administrator credentials specified in the installation inventory file.
+. Edit the inventory file to include variables based on your host names and desired password values.

[NOTE]
====
-The inventory file must be kept intact after installation because it is used for backup, restore, and upgrade functions. Keep a backup copy in a secure location, given that the inventory file contains passwords.
-==== \ No newline at end of file
+See section xref:con-install-scenario-examples[3.2 Inventory file examples based on installation scenarios] for a list of examples, and choose the one that best fits your scenario.
+====
diff --git a/downstream/modules/platform/proc-installing-with-internet.adoc b/downstream/modules/platform/proc-installing-with-internet.adoc
index db3ae20e2b..bdb98d7271 100644
--- a/downstream/modules/platform/proc-installing-with-internet.adoc
+++ b/downstream/modules/platform/proc-installing-with-internet.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-installing-with-internet_{context}"]
@@ -12,11 +14,25 @@ Choose the {PlatformName} installer if your {RHEL} environment is connected to t

. Navigate to the link:{PlatformDownloadUrl}[{PlatformName} download] page.
. Click btn:[Download Now] for the *Ansible Automation Platform Setup*.
-. Extract the files:
-+
+. Transfer the file to the target server using `scp` or `curl`:
+.. Using `scp`:
+... Run the following command, replacing `private_key.pem`, `user`, and `server_ip` with your appropriate values:
+-----
+$ scp -i private_key.pem aap-bundled-installer.tar.gz user@server_ip:
+-----
+.. Using `curl`:
+... If the setup file URL is available, you can download it directly to the target server using `curl`. Replace `<file_url>` with the file URL:
+-----
+$ curl -O <file_url>
+-----
+
+[NOTE]
+====
+If the file needs to be extracted after downloading, run the following command:
-----
-$ tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz
+$ tar xvzf aap-bundled-installer.tar.gz
-----
+====

.RPM install

@@ -25,16 +41,18 @@ $ tar xvzf ansible-automation-platform-setup-<latest-version>.tar.gz

v.{PlatformVers} for RHEL 8 for x86_64
+
----
-$ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms ansible-automation-platform-installer
+$ sudo dnf install --enablerepo=ansible-automation-platform-2.5-for-rhel-8-x86_64-rpms ansible-automation-platform-installer
----
+
-v.{PlatformVers} for RHEL 9 for x86-64
+v.{PlatformVers} for RHEL 9 for x86_64
+
----
-$ sudo dnf install --enablerepo=ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms ansible-automation-platform-installer
+$ sudo dnf install --enablerepo=ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms ansible-automation-platform-installer
----

[NOTE]
+====
`dnf install` enables the repo as the repo is disabled by default.
+====

When you use the RPM installer, the files are placed under the `/opt/ansible-automation-platform/installer` directory. 
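For example, as a minimal sketch that assumes the RPM install path above and an inventory file you have already edited, you can confirm the installer layout and start setup from that directory. The `-i` flag and the exact directory contents can vary between releases, so check the installer README before running setup:

----
$ cd /opt/ansible-automation-platform/installer
$ ls
$ sudo ./setup.sh -i inventory
----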
diff --git a/downstream/modules/platform/proc-installing-without-internet.adoc b/downstream/modules/platform/proc-installing-without-internet.adoc
index 0c217df60d..f34d80569a 100644
--- a/downstream/modules/platform/proc-installing-without-internet.adoc
+++ b/downstream/modules/platform/proc-installing-without-internet.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-installing-without-internet_{context}"]
diff --git a/downstream/modules/platform/proc-inventory-file-setup-rpm.adoc b/downstream/modules/platform/proc-inventory-file-setup-rpm.adoc
new file mode 100644
index 0000000000..f2f450720b
--- /dev/null
+++ b/downstream/modules/platform/proc-inventory-file-setup-rpm.adoc
@@ -0,0 +1,30 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="inventory-file-setup-rpm"]
+
+= Setting up the inventory file
+
+Before upgrading your {PlatformName} installation, edit the `inventory` file so that it matches your desired configuration. You can keep the same parameters from your existing {PlatformNameShort} deployment or you can modify the parameters to match any changes to your environment.
+
+.Procedure
+
+. Navigate to the installation program directory.
++
+*Bundled installer*
++
+----
+$ cd ansible-automation-platform-setup-bundle-<version>-<release>
+----
++
+*Online installer*
++
+----
+$ cd ansible-automation-platform-setup-<version>
+----
++
+. Update the `inventory` file to match your desired configuration. You can use the same inventory file from an existing {PlatformNameShort} installation if there are no changes to the environment.

+[NOTE]
+====
+Provide a reachable IP address or fully qualified domain name (FQDN) for all hosts to ensure that users can synchronize and install content from {HubNameMain} from a different node. Do not use localhost. If localhost is used, the upgrade will be stopped as part of preflight checks.
+==== \ No newline at end of file
diff --git a/downstream/modules/platform/proc-modifying-the-run-schedule.adoc b/downstream/modules/platform/proc-modifying-the-run-schedule.adoc
new file mode 100644
index 0000000000..e0935adf64
--- /dev/null
+++ b/downstream/modules/platform/proc-modifying-the-run-schedule.adoc
@@ -0,0 +1,30 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2024-07-15
+:_mod-docs-content-type: PROCEDURE
+
+[id="modifying-the-run-schedule_{context}"]
+
+= Modifying the run schedule
+
+You can configure `metrics-utility` to run at specified times and intervals. Run frequency is expressed in cron syntax. For more information on the cron utility, see link:https://www.redhat.com/sysadmin/linux-cron-command[How to schedule jobs using the Linux ‘Cron’ utility].
+
+To modify the run schedule on {RHEL} and on {OCPShort}, use the following procedure:
+
+.Procedure
+
+. From the command line, run:
++
+[source, ]
+----
+crontab -e
+----
++
+. After the code editor has opened, update the `gather` and `build` parameters using cron syntax as shown below:
++
+[source, ]
+----
+*/2 * * * * metrics-utility gather_automation_controller_billing_data --ship --until=10m
+*/5 * * * * metrics-utility build_report
+----
++
+. Save and close the file. 
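As a quick check, assuming the entries were added to the same user's crontab, you can list the installed schedule and confirm that both `metrics-utility` jobs are present:

----
$ crontab -l | grep metrics-utility
*/2 * * * * metrics-utility gather_automation_controller_billing_data --ship --until=10m
*/5 * * * * metrics-utility build_report
----

In this sketch, `*/2 * * * *` runs the gather command every two minutes and `*/5 * * * *` builds the report every five minutes; adjust the intervals to match your reporting needs.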
diff --git a/downstream/modules/platform/proc-operator-aap-faq.adoc b/downstream/modules/platform/proc-operator-aap-faq.adoc
index 1501a95031..7749bde297 100644
--- a/downstream/modules/platform/proc-operator-aap-faq.adoc
+++ b/downstream/modules/platform/proc-operator-aap-faq.adoc
@@ -1,8 +1,10 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="operator-aap-troubleshooting_{context}"]

-= Frequently asked questions on platform gateway
+= Frequently asked questions on {Gateway}

-If I delete my Ansible Automation Platform deployment will I still have access to Automation Controller?::
+If I delete my {PlatformNameShort} deployment, will I still have access to {ControllerName}?::
No, {ControllerName}, {HubName}, and {EDAName} are nested within the deployment and are also deleted.

Something went wrong with my deployment but I'm not sure what, how can I find out?::
@@ -10,7 +12,7 @@ You can follow along in the command line while the operator is reconciling, this
Alternatively you can click into the deployment instance to see the status conditions being updated as the deployment goes on.

Is it still possible to view individual component logs?::
-When troubleshooting you should examine the *AnsibleAutomationPlatform* instance for the main logs and then each individual component (*EDA*, *AutomationHub*, *AutomationController*) for more specific information.
+When troubleshooting, you should examine the *{PlatformNameShort}* instance for the main logs and then each individual component (*EDA*, *AutomationHub*, *AutomationController*) for more specific information.

Where can I view the condition of an instance?::
To display status conditions click into the instance, and look under the *Details* or *Events* tab.
@@ -19,4 +21,7 @@ Alternatively, to display the status conditions you can run the get command:

Can I track my migration in real time?::
To help track the status of the migration or to understand why migration might have failed you can look at the migration logs as they are running. Use the logs command:
-`oc logs fresh-install-controller-migration-4.6.0-jwfm6 -f` \ No newline at end of file
+`oc logs fresh-install-controller-migration-4.6.0-jwfm6 -f`
+
+I have configured my SAML, but authentication fails with this error: "Unable to complete social auth login". What can I do?::
+You must update your {PlatformNameShort} instance to include the `REDIRECT_IS_HTTPS` extra setting. See link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#proc-operator-enable-https-redirect[Enabling single sign-on (SSO) for {Gateway} on {OCPShort}] for help with this. \ No newline at end of file
diff --git a/downstream/modules/platform/proc-operator-access-aap.adoc b/downstream/modules/platform/proc-operator-access-aap.adoc
index 3ff066bb8c..41c0cd1a38 100644
--- a/downstream/modules/platform/proc-operator-access-aap.adoc
+++ b/downstream/modules/platform/proc-operator-access-aap.adoc
@@ -1,16 +1,20 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="operator-access-aap_{context}"]

-= Accessing the platform gateway
-You should use the *AnsibleAutomationPlatform* instance as your default.
+= Accessing the {Gateway}
+
+You should use the *{PlatformNameShort}* instance as your default.
This instance links the {ControllerName}, {HubName}, and {EDAName} deployments to a single interface.

.Procedure

-To access your *AnsibleAutomationPlatform* instance:
+To access your *{PlatformNameShort}* instance:

+. Log in to {OCP}.
. Navigate to menu:Networking[Routes]
-. 
Click the link under *Location* for *AnsibleAutomationPlatform*.
-. This redirects you to the *AnsibleAutomationPlatform* login page. Enter your username in the *Username* field.
+. Click the link under *Location* for *{PlatformNameShort}*.
+. This redirects you to the {PlatformNameShort} login page. Enter "admin" as your username in the *Username* field.
. For the password you need to:
.. Go to menu:Workloads[Secrets].
.. Click btn:[<instance_name>-admin-password] and copy the password.
@@ -20,12 +24,14 @@ To access your *AnsibleAutomationPlatform* instance:
.. Click btn:[Subscription manifest] or btn:[Username/password].
.. Upload your manifest or enter your username and password.
.. Select your subscription from the *Subscription* list.
-.. Click btn:[Next].
-+ This redirects you to the *Analytics* page.
+.. Click btn:[Next]. This redirects you to the *Analytics* page.
. Click btn:[Next].
. Select the *I agree to the terms of the license agreement* checkbox.
. Click btn:[Next].

-You now have access to the platform gateway user interface.
-If you cannot access the {PlatformNameShort} see <> for help with troubleshooting and debugging.
+.Verification
+You now have access to the {Gateway} user interface.
+
+.Troubleshooting
+If you cannot access the {PlatformNameShort}, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#operator-aap-troubleshooting_configure-aap-operator[Frequently asked questions on {Gateway}] for help with troubleshooting and debugging.
diff --git a/downstream/modules/platform/proc-operator-config-csrf-gateway.adoc b/downstream/modules/platform/proc-operator-config-csrf-gateway.adoc
new file mode 100644
index 0000000000..af69b57896
--- /dev/null
+++ b/downstream/modules/platform/proc-operator-config-csrf-gateway.adoc
@@ -0,0 +1,85 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-operator-config-csrf-gateway_{context}"]
+
+= Configuring your CSRF settings for your {Gateway} Operator ingress
+
+The {OperatorPlatformName} creates OpenShift Routes and configures your Cross-site request forgery (CSRF) settings automatically. When using external ingress, you must configure your CSRF settings on the ingress to allow for cross-site requests. You can configure your {Gateway} operator ingress under *Advanced configuration*.
+
+.Procedure
+
+. Log in to {OCP}.
+. Navigate to menu:Operators[Installed Operators].
+. Select your {OperatorPlatformNameShort} deployment.
+. Select the *Ansible Automation Platform* tab.
+. For new instances, click btn:[Create AnsibleAutomationPlatform].
+.. For existing instances, you can edit the YAML view by clicking the {MoreActionsIcon} icon and then btn:[Edit AnsibleAutomationPlatform].
+. Click btn:[Advanced Configuration].
+. Under *Ingress annotations*, enter any annotations to add to the ingress.
+. Under *Ingress TLS secret*, click the drop-down list and select a secret from the list.
+. Under *YAML view*, paste in the following code:
++
+----
+spec:
+  extra_settings:
+    - setting: CSRF_TRUSTED_ORIGINS
+      value:
+        - https://my-aap-domain.com
+----
++
+. After you have configured your {Gateway}, click btn:[Create] at the bottom of the form view (or btn:[Save] if you are editing an existing instance).

+.Verification

+{OCP} creates the pods. This may take a few minutes. You can view the progress by navigating to menu:Workloads[Pods] and locating the newly created instance. 
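For example, as a hedged sketch that assumes your instance runs in the `aap` namespace (your namespace can differ), you can watch the pods come up from the CLI instead:

----
$ oc get pods -n aap -w
----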
Verify that the following operator pods provided by the {OperatorPlatformName} installation are running:

+[cols="a,a,a,a,a"]
+|===
+| Operator manager controller pods | {ControllerNameStart} pods |{HubNameStart} pods |{EDAName} (EDA) pods |{Gateway} pods

+| The operator manager controller pods include the following:
+
+* automation-controller-operator-controller-manager
+* automation-hub-operator-controller-manager
+* resource-operator-controller-manager
+* aap-gateway-operator-controller-manager
+* ansible-lightspeed-operator-controller-manager
+* eda-server-operator-controller-manager

+| After deploying {ControllerName}, you can see the addition of the following pods:
+
+* {ControllerNameStart} web
+* {ControllerNameStart} task
+* Mesh ingress
+* {ControllerNameStart} postgres

+| After deploying {HubName}, you can see the addition of the following pods:
+
+* {HubNameStart} web
+* {HubNameStart} task
+* {HubNameStart} API
+* {HubNameStart} worker

+| After deploying EDA, you can see the addition of the following pods:
+
+* EDA API
+* EDA Activation
+* EDA worker
+* EDA stream
+* EDA Scheduler

+| After deploying {Gateway}, you can see the addition of the following pods:
+
+
+* {Gateway}
+* {Gateway} redis

+|===

+[NOTE]
+====
+A missing pod can indicate the need for a pull secret. Pull secrets are required for protected or private image registries. See link:https://docs.openshift.com/container-platform/4.11/openshift_images/managing_images/using-image-pull-secrets.html[Using image pull secrets] for more information. You can diagnose this issue further by running `oc describe pod <pod_name>` to see if there is an ImagePullBackOff error on that pod.
+====



diff --git a/downstream/modules/platform/proc-operator-create-controller-credential.adoc b/downstream/modules/platform/proc-operator-create-controller-credential.adoc
new file mode 100644
index 0000000000..de5dc5df8a
--- /dev/null
+++ b/downstream/modules/platform/proc-operator-create-controller-credential.adoc
@@ -0,0 +1,63 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-operator-create-controller-credential_{context}"]
+
+= Creating an {ControllerName} credential custom resource
+
+Credentials authenticate the {ControllerName} user when launching jobs against machines, synchronizing with inventory sources, and importing project content from a version control system.
+
+SSH and AWS are the most commonly used credentials. For a full list of supported credentials, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/using_automation_execution/controller-credentials#ref-controller-credential-types[Credential types] section of the _{TitleControllerUserGuide}_ guide.
+
+For help with defining values, you can refer to the link:https://access.redhat.com/login?redirectTo=https%3A%2F%2Faccess.redhat.com%2Fsolutions%2F7050627[OpenAPI (Swagger) file for Red Hat Ansible Automation Platform API] KCS article.
+
+[TIP]
+====
+You can use `\https://<platform_host>/api/controller/v2/credential_types/` to view the list of credential types on your instance. 
+To get the full list, use the following `curl` command:
+
+----
+export AAP_TOKEN="your-oauth2-token"
+export AAP_URL="https://your-aap-controller.example.com"
+
+curl -s -H "Authorization: Bearer $AAP_TOKEN" "$AAP_URL/api/controller/v2/credential_types/" | jq -r '.results[].name'
+----
+====

+.Procedure

+* Create an AWS or SSH credential on {ControllerName} by creating a credential custom resource:
+** SSH credential:
++
+----
+apiVersion: tower.ansible.com/v1alpha1
+kind: AnsibleCredential
+metadata:
+  name: ssh-cred
+spec:
+  name: ssh-cred
+  organization: Default
+  connection_secret: controller-access
+  description: "SSH credential"
+  type: "Machine"
+  ssh_username: "cat"
+  ssh_secret: my-ssh-secret
+  runner_pull_policy: IfNotPresent
+----
++
+** AWS credential:
++
+----
+apiVersion: tower.ansible.com/v1alpha1
+kind: AnsibleCredential
+metadata:
+  name: aws-cred
+spec:
+  name: aws-access
+  organization: Default
+  connection_secret: controller-access
+  description: "This is a test credential"
+  type: "Amazon Web Services"
+  username_secret: aws-secret
+  password_secret: aws-secret
+  runner_pull_policy: IfNotPresent
+---- \ No newline at end of file
diff --git a/downstream/modules/platform/proc-operator-create-controller-inventory.adoc b/downstream/modules/platform/proc-operator-create-controller-inventory.adoc
new file mode 100644
index 0000000000..e64f902fc3
--- /dev/null
+++ b/downstream/modules/platform/proc-operator-create-controller-inventory.adoc
@@ -0,0 +1,37 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-operator-create-controller-inventory_{context}"]
+
+= Creating an {ControllerName} inventory custom resource
+
+By using an inventory file, {PlatformNameShort} can manage a large number of hosts with a single command.
+Inventories also help you use {PlatformNameShort} more efficiently by reducing the number of command line options you have to specify.
+For more information, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/using_automation_execution/index#controller-inventories[Inventories] section of the _{TitleControllerUserGuide}_ guide.
+
+.Procedure
+
+* Create an inventory on {ControllerName} by creating an inventory custom resource:
++
+----
+apiVersion: tower.ansible.com/v1alpha1 # assumed: same API group as the other custom resources in this section
+kind: AnsibleInventory # assumed kind name
+metadata:
+  name: inventory-new
+spec:
+  connection_secret: controller-access
+  description: my new inventory
+  name: newinventory
+  organization: Default
+  state: present
+  instance_groups:
+    - default
+  variables:
+    string: "string_value"
+    bool: true
+    number: 1
+    list:
+      - item1: true
+      - item2: "1"
+    object:
+      string: "string_value"
+      number: 2
+----
++
diff --git a/downstream/modules/platform/proc-operator-create-controller-project.adoc b/downstream/modules/platform/proc-operator-create-controller-project.adoc
new file mode 100644
index 0000000000..ccf8fcdb92
--- /dev/null
+++ b/downstream/modules/platform/proc-operator-create-controller-project.adoc
@@ -0,0 +1,28 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-operator-create-controller-project_{context}"]
+
+= Creating an {ControllerName} project custom resource
+
+A Project is a logical collection of Ansible playbooks, represented in {ControllerName}. For more information, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/using_automation_execution/index#controller-projects[Projects] section of the _{TitleControllerUserGuide}_ guide. 
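Each custom resource in this section is applied and inspected in the same way. The following is a sketch that assumes a manifest saved as `ssh-cred.yml` and an operator watching the `aap` namespace; the file name and namespace are illustrative, and the lowercase resource names are assumed to follow the CRD kinds shown in these examples:

----
$ oc apply -f ssh-cred.yml -n aap
$ oc get ansiblecredential ssh-cred -n aap
----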
+ +.Procedure + +* Create a project on {ControllerName} by creating an {ControllerName} project custom resource: ++ +---- +apiVersion: tower.ansible.com/v1alpha1 +kind: AnsibleProject +metadata: + name: git +spec: + repo: https://github.com/ansible/ansible-tower-samples + branch: main + name: ProjectDemo-git + scm_type: git + organization: Default + description: demoProject + connection_secret: controller-access + runner_pull_policy: IfNotPresent +---- ++ \ No newline at end of file diff --git a/downstream/modules/platform/proc-operator-create-controller-schedule.adoc b/downstream/modules/platform/proc-operator-create-controller-schedule.adoc new file mode 100644 index 0000000000..bfc24386a1 --- /dev/null +++ b/downstream/modules/platform/proc-operator-create-controller-schedule.adoc @@ -0,0 +1,23 @@ +:_mod-docs-content-type: PROCEDURE + +[id="proc-operator-create-controller-schedule_{context}"] + += Creating an {ControllerName} schedule custom resource + +.Procedure + +* Create a schedule on {ControllerName} by creating an {ControllerName} schedule custom resource: ++ +---- +apiVersion: tower.ansible.com/v1alpha1 +kind: AnsibleSchedule +metadata: + name: schedule +spec: + connection_secret: controller-access + runner_pull_policy: IfNotPresent + name: "Demo Schedule" + rrule: "DTSTART:20210101T000000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=1" + unified_job_template: "Demo Job Template" +---- ++ \ No newline at end of file diff --git a/downstream/modules/platform/proc-operator-create-controller-workflow-template.adoc b/downstream/modules/platform/proc-operator-create-controller-workflow-template.adoc new file mode 100644 index 0000000000..ea3cd042ed --- /dev/null +++ b/downstream/modules/platform/proc-operator-create-controller-workflow-template.adoc @@ -0,0 +1,41 @@ +:_mod-docs-content-type: PROCEDURE + +[id="proc-operator-create-controller-workflow-template_{context}"] + += Creating an {ControllerName} workflow template custom resource + +A workflow job template links together a sequence of disparate resources to track the full set of jobs that were part of the release process as a single unit. +For more information see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/using_automation_execution/index#controller-workflow-job-templates[Workflow job templates] section of the _{TitleControllerUserGuide}_ guide. 
+ +.Procedure + +* Create a workflow template on {ControllerName} by creating a workflow template custom resource: ++ +---- +apiVersion: tower.ansible.com/v1alpha1 +kind: WorkflowTemplate +metadata: + name: workflowtemplate-sample +spec: + connection_secret: controller-access + name: ExampleTowerWorkflow + description: Example Workflow Template + organization: Default + inventory: Demo Inventory + workflow_nodes: + - identifier: node101 + unified_job_template: + name: Demo Job Template + inventory: + organization: + name: Default + type: job_template + - identifier: node102 + unified_job_template: + name: Demo Job Template + inventory: + organization: + name: Default + type: job_template +---- ++ \ No newline at end of file diff --git a/downstream/modules/platform/proc-operator-create-controller-workflow.adoc b/downstream/modules/platform/proc-operator-create-controller-workflow.adoc new file mode 100644 index 0000000000..1a790e2495 --- /dev/null +++ b/downstream/modules/platform/proc-operator-create-controller-workflow.adoc @@ -0,0 +1,25 @@ +:_mod-docs-content-type: PROCEDURE + +[id="proc-operator-create-controller-workflow_{context}"] + += Creating an {ControllerName} workflow custom resource + +Workflows enable you to configure a sequence of disparate job templates (or workflow templates) that may or may not share inventory, playbooks, or permissions. +For more information see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/using_automation_execution/index#controller-workflows[Workflows in automation controller] section of the _{TitleControllerUserGuide}_ guide. + +.Procedure + +* Create a workflow on {ControllerName} by creating a workflow custom resource: ++ +---- +apiVersion: tower.ansible.com/v1alpha1 +kind: AnsibleWorkflow +metadata: + name: workflow +spec: + inventory: Demo Inventory + workflow_template_name: Demo Job Template + connection_secret: controller-access + runner_pull_policy: IfNotPresent +---- ++ \ No newline at end of file diff --git a/downstream/modules/platform/proc-operator-create_crs.adoc b/downstream/modules/platform/proc-operator-create_crs.adoc new file mode 100644 index 0000000000..c83f9c5a23 --- /dev/null +++ b/downstream/modules/platform/proc-operator-create_crs.adoc @@ -0,0 +1,59 @@ +:_mod-docs-content-type: PROCEDURE + +[id="operator-create-crs_{context}"] + += Creating {PlatformNameShort} custom resources + +After upgrading to the latest version of {OperatorPlatformNameShort} on {OCPShort}, you can create an {PlatformNameShort} custom resource (CR) that specifies the names of your existing deployments, in the same namespace. + +The following example outlines the steps to deploy a new {EDAName} setup after upgrading to the latest version, with existing {ControllerName} and {HubName} deployments already in place. + +The link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#appendix-operator-crs_performance-considerations[Appendix] contains more examples of {PlatformNameShort} CRs for different deployments. + +.Procedure + +. Log in to {OCP}. +. Navigate to menu:Operators[Installed Operators]. +. Select your {OperatorPlatformNameShort} deployment. +. Select the *Details* tab. +. On the *{PlatformNameShort}* tile click btn:[Create instance]. +. From the *Create {PlatformNameShort}* page enter a name for your instance in the *Name* field. +. 
Click btn:[YAML view] and paste the following YAML (link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#appendix-operator-crs_performance-considerations[aap-existing-controller-and-hub-new-eda.yml]):
++
+----
+---
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  # Development purposes only
+  no_log: false
+
+  controller:
+    name: existing-controller #obtain name from controller CR
+    disabled: false
+
+  eda:
+    disabled: false
+
+  hub:
+    name: existing-hub
+    disabled: false
+----
+. Click btn:[Create].
++
+[NOTE]
+====
+You can override the operator's default image for {ControllerName}, {HubName}, or platform-resource app images by specifying the preferred image on the YAML spec.
+This enables upgrading a specific deployment, like a controller, without updating the operator.
+
+The recommended approach, however, is to upgrade the operator and use the default image values.
+====
++
+
+.Verification
+Navigate to your {OperatorPlatformNameShort} deployment and click btn:[All instances] to verify whether all instances have deployed correctly.
+You should see the *{PlatformNameShort}* instance and the deployed *AutomationController*, *EDA*, and *AutomationHub* instances here.
+
+Alternatively, you can verify whether all instances deployed correctly by running `oc get route` in the command line.
diff --git a/downstream/modules/platform/proc-operator-deploy-central-config.adoc b/downstream/modules/platform/proc-operator-deploy-central-config.adoc
index 57566a1e80..fdc4d31f1e 100644
--- a/downstream/modules/platform/proc-operator-deploy-central-config.adoc
+++ b/downstream/modules/platform/proc-operator-deploy-central-config.adoc
@@ -1,56 +1,70 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="operator-deploy-central-config_{context}"]

-= Deploying the platform gateway with existing {PlatformNameShort} components
-You can link any components of the {PlatformNameShort}, that you have already installed to a new *AnsibleAutomationPlatform* instance.
+= Deploying the {Gateway} with existing {PlatformNameShort} components
+
+You can link any components of {PlatformNameShort} that you have already installed to a new *{PlatformNameShort}* instance.
The following procedure simulates a scenario where you have {ControllerName} as an existing component and want to add {HubName} and {EDAName}.

.Procedure

. Log in to {OCP}.
-. Go to to menu:Operators[Installed Operators].
+. Navigate to menu:Operators[Installed Operators].
+. Select your {OperatorPlatformNameShort} deployment.
. Click btn:[Subscriptions] and edit your *Update channel* to *stable-2.5*.
-. Click btn:[Details] and on the *AnsibleAutomationPlatform* tile click btn:[Create instance].
-. From the *Create AnsibleAutomationPlatform* page enter a name for your instance in the *Name* field.
+. Click btn:[Details] and on the *{PlatformNameShort}* tile click btn:[Create instance].
+. From the *Create {PlatformNameShort}* page, enter a name for your instance in the *Name* field.
+* When deploying an {PlatformNameShort} instance, ensure that `auto_update` is set to the default value of `false` on your existing {ControllerName} instance in order for the integration to work.
. 
Click btn:[YAML view] and copy in the following:
+
----
-yaml apiVersion: aap.ansible.com/v1alpha1
- kind: AnsibleAutomationPlatform
- metadata:
-  name: example-aap
-  namespace: aap
- spec:
-  # Platform
-  image_pull_policy: IfNotPresent
-  # Components
-  controller:
-    disabled: false
-    name: existing-controller-name
-  eda:
-    disabled: false
-  hub:
-    disabled: false
-    ## uncomment if using file storage for Content pod
-    storage_type: file
-    file_storage_storage_class: your-rwx-storage-class
-    file_storage_size: 10Gi
-
-    ## uncomment if using S3 storage for Content pod
-    # storage_type: S3
-    # object_storage_s3_secret: example-galaxy-object-storage
-
-    ## uncomment if using Azure storage for Content pod
-    # storage_type: azure
-    # object_storage_azure_secret: azure-secret-name
-    lightspeed:
-    disabled: true
+kind: AnsibleAutomationPlatform
+metadata:
+  name: example-aap
+  namespace: aap
+spec:
+  database:
+    resource_requirements:
+      requests:
+        cpu: 200m
+        memory: 512Mi
+    storage_requirements:
+      requests:
+        storage: 100Gi
+
+  # Platform
+  image_pull_policy: IfNotPresent
+
+  # Components
+  controller:
+    disabled: false
+    name: existing-controller-name
+  eda:
+    disabled: false
+  hub:
+    disabled: false
+    ## uncomment if using file storage for Content pod
+    storage_type: file
+    file_storage_storage_class: <your-rwx-storage-class>
+    file_storage_size: 10Gi
+
+    ## uncomment if using S3 storage for Content pod
+    # storage_type: S3
+    # object_storage_s3_secret: example-galaxy-object-storage
+
+    ## uncomment if using Azure storage for Content pod
+    # storage_type: azure
+    # object_storage_azure_secret: azure-secret-name
+
----
.. For new components, if you do not specify a name, a default name is generated.
. Click btn:[Create].
. To access your new instance, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#operator-access-aap_install-aap-gateway[Accessing the {Gateway}].

[NOTE]
====
If you have an existing controller with a managed Postgres pod, after creating the *{PlatformNameShort}* resource your {ControllerName} instance will continue to use that original Postgres pod. If you were to do a fresh install, you would have a single managed Postgres pod for all instances.
====
diff --git a/downstream/modules/platform/proc-operator-deploy-redis.adoc b/downstream/modules/platform/proc-operator-deploy-redis.adoc
new file mode 100644
index 0000000000..4a38dc3b72
--- /dev/null
+++ b/downstream/modules/platform/proc-operator-deploy-redis.adoc
@@ -0,0 +1,36 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="operator-deploy-redis"]
+
+= Deploying clustered Redis on {OperatorPlatformName}
+
+When you create an {PlatformNameShort} instance through the {OperatorPlatformNameShort}, standalone Redis is assigned by default.
+To deploy clustered Redis, use the following procedure.
+
+//Add a link to the section when ready
+For more information about Redis, refer to Caching and queueing system in the _Planning your installation_ guide.
+
+.Prerequisites
+* You have installed an {OperatorPlatformNameShort} deployment.
+
+.Procedure
+. Log in to {OCP}.
+. Navigate to menu:Operators[Installed Operators].
+. 
Select your {OperatorPlatformNameShort} deployment.
+. Select the *Details* tab.
+. On the *{PlatformNameShort}* tile, click btn:[Create instance].
+.. For existing instances, you can edit the YAML view by clicking the {MoreActionsIcon} icon and then btn:[Edit AnsibleAutomationPlatform].
+.. Change the *redis_mode* value to "cluster".
+.. Click btn:[Reload], then btn:[Save].
+. Click to expand *Advanced configuration*.
+. For the *Redis Mode* list, select *Cluster*.
+. Configure the rest of your instance as necessary, then click btn:[Create].
+
+.Verification
+
+Your instance deploys with clustered Redis and six Redis replicas by default.
+
+[NOTE]
+====
+You can modify the default {HubName} Redis cache PVC volume size. For help with this, see link:https://access.redhat.com/articles/7117186[Modifying the default redis cache PVC volume size automation hub].
+==== \ No newline at end of file
diff --git a/downstream/modules/platform/proc-operator-enable-https-redirect.adoc b/downstream/modules/platform/proc-operator-enable-https-redirect.adoc
new file mode 100644
index 0000000000..f45c87cdc5
--- /dev/null
+++ b/downstream/modules/platform/proc-operator-enable-https-redirect.adoc
@@ -0,0 +1,37 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-operator-enable-https-redirect"]
+
+= Enabling HTTPS redirect for single sign-on (SSO) for {Gateway} on {OCPShort}
+
+HTTPS redirect for SAML allows you to log in once and access all of the {Gateway} without needing to reauthenticate.
+
+.Prerequisites
+
+* You have successfully configured SAML in the gateway from the {OperatorPlatformNameShort}. Refer to link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/access_management_and_authentication/index#controller-set-up-SAML[Configuring SAML authentication] for help with this.
+
+.Procedure
+
+. Log in to {OCP}.
+. Go to menu:Operators[Installed Operators].
+. Select your {OperatorPlatformNameShort} deployment.
+. Select *All Instances* and go to your *AnsibleAutomationPlatform* instance.
+. Click the {MoreActionsIcon} icon and then select btn:[Edit AnsibleAutomationPlatform].
+. In the *YAML view*, paste the following YAML code under the `spec:` section:
++
+----
+spec:
+  extra_settings:
+    - setting: REDIRECT_IS_HTTPS
+      value: '"True"'
+
+----
++
+. Click btn:[Save].
+
+.Verification
+
+After you have added the `REDIRECT_IS_HTTPS` setting, wait for the pod to redeploy automatically. You can verify that the setting is present in the pod by running:
+----
+oc exec -it <gateway_pod> -- grep REDIRECT /etc/ansible-automation-platform/gateway/settings.py
+---- \ No newline at end of file
diff --git a/downstream/modules/platform/proc-operator-external-db-controller.adoc b/downstream/modules/platform/proc-operator-external-db-controller.adoc
index ed32371ed8..a386db3774 100644
--- a/downstream/modules/platform/proc-operator-external-db-controller.adoc
+++ b/downstream/modules/platform/proc-operator-external-db-controller.adoc
@@ -1,35 +1,35 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-operator-external-db-controller"]

-= Configuring an external database for {ControllerName} on {PlatformName} operator
+= Configuring an external database for {ControllerName} on {OperatorPlatformName}

[role="_abstract"]
For users who prefer to deploy {PlatformNameShort} with an external database, they can do so by configuring a secret with instance credentials and connection information, then applying it to their cluster using the `oc create` command. 
-By default, the {PlatformName} operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your {PlatformNameShort} deployment. You can deploy {PlatformNameShort} with an external database instead of the managed PostgreSQL pod that the {PlatformName} operator automatically creates.
+By default, the {OperatorPlatformNameShort} automatically creates and configures a managed PostgreSQL pod in the same namespace as your {PlatformNameShort} deployment. You can deploy {PlatformNameShort} with an external database instead of the managed PostgreSQL pod that the {OperatorPlatformNameShort} automatically creates.

Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations.

[NOTE]
====
-The same external database (PostgreSQL instance) can be used for both {HubName} and {ControllerName} as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
+The same external database (PostgreSQL instance) can be used for {HubName}, {ControllerName}, and {Gateway} as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
====

-The following section outlines the steps to configure an external database for your {ControllerName} on a {PlatformNameShort} operator.
+The following section outlines the steps to configure an external database for your {ControllerName} on the {OperatorPlatformNameShort}.

.Prerequisite

-The external database must be a PostgreSQL database that is the version supported by the current release of {PlatformNameShort}.
+The external database must be a PostgreSQL database that is the version supported by the current release of {PlatformNameShort}. The external postgres instance credentials and connection information must be stored in a secret, which is then set on the {ControllerName} spec.

[NOTE]
====
-{PlatformNameShort} {PlatformVers} supports PostgreSQL 13.
+{PlatformNameShort} {PlatformVers} supports {PostgresVers}.
====

.Procedure

-The external postgres instance credentials and connection information must be stored in a secret, which is then set on the {ControllerName} spec.
-
-. Create a `postgres_configuration_secret` .yaml file, following the template below:
+. Create a `postgres_configuration_secret` YAML file, following the template below:
+
----
apiVersion: v1
@@ -47,7 +47,7 @@ stringData:
  type: "unmanaged"
type: Opaque
----
-<1> Namespace to create the secret in. This should be the same namespace you wish to deploy to.
+<1> Namespace to create the secret in. This should be the same namespace you want to deploy to.
<2> The resolvable hostname for your database node.
<3> External port defaults to `5432`.
<4> Value for variable `password` should not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup or restoration. 
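To complete the wiring, the following is a hedged sketch that assumes the secret above was saved as `external-postgres-configuration-secret.yml` and that your {ControllerName} deployment runs in the `ansible-automation-platform` namespace. The `postgres_configuration_secret` spec key is assumed here and should be confirmed against your operator version:

----
$ oc create -f external-postgres-configuration-secret.yml
----

Then reference the secret on the `AutomationController` custom resource:

----
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: example
  namespace: ansible-automation-platform
spec:
  # points the controller at the unmanaged database secret created above
  postgres_configuration_secret: external-postgres-configuration
----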
diff --git a/downstream/modules/platform/proc-operator-external-db-gateway.adoc b/downstream/modules/platform/proc-operator-external-db-gateway.adoc
new file mode 100644
index 0000000000..0d75ae9014
--- /dev/null
+++ b/downstream/modules/platform/proc-operator-external-db-gateway.adoc
@@ -0,0 +1,98 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-operator-external-db-gateway"]
+
+= Configuring an external database for {Gateway} on {OperatorPlatformName}
+
+[role="_abstract"]
+There are two scenarios for deploying {PlatformNameShort} with an external database:
+
+[cols="a,a"]
+|===
+| Scenario | Action required
+| Fresh install | You must specify a single external database instance for the platform to use for the following:
+
+* {GatewayStart}
+* {ControllerNameStart}
+* {HubNameStart}
+* {EDAName}
+* {LightspeedShortName} (If enabled)
+
+See the _aap-configuring-external-db-all-default-components.yml_ example in the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#operator-crs[14.1. Custom resources] section for help with this.
+
+If using {LightspeedShortName}, use the _aap-configuring-external-db-with-lightspeed-enabled.yml_ example.
+
+| Existing external database in 2.4 | Your existing external database remains the same after upgrading, but you must specify the `external-postgres-configuration-gateway` (spec.database.database_secret) on the {PlatformNameShort} custom resource.
+|===
+
+
+To deploy {PlatformNameShort} with an external database, you must first create a Kubernetes secret with credentials for connecting to the database.
+
+By default, the {OperatorPlatformNameShort} automatically creates and configures a managed PostgreSQL pod in the same namespace as your {PlatformNameShort} deployment. You can deploy {PlatformNameShort} with an external database instead of the managed PostgreSQL pod that the {OperatorPlatformNameShort} automatically creates.
+
+Using an external database lets you share and reuse resources and manually manage backups, upgrades, and performance optimizations.
+
+[NOTE]
+====
+The same external database (PostgreSQL instance) can be used for {HubName}, {ControllerName}, and {Gateway} as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance.
+====
+
+The following section outlines the steps to configure an external database for your {Gateway} on the {OperatorPlatformNameShort}.
+
+.Prerequisite
+The external database must be a PostgreSQL database that is the version supported by the current release of {PlatformNameShort}. The external postgres instance credentials and connection information must be stored in a secret, which is then set on the {Gateway} spec.
+
+[NOTE]
+====
+{PlatformNameShort} {PlatformVers} supports {PostgresVers}.
+====
+
+.Procedure
+
+. Create a `postgres_configuration_secret` YAML file, following the template below:
++
+----
+apiVersion: v1
+kind: Secret
+metadata:
+  name: external-postgres-configuration
+  namespace: <target_namespace> <1>
+stringData:
+  host: "" <2>
+  port: "" <3>
+  database: ""
+  username: ""
+  password: "" <4>
+  type: "unmanaged"
+type: Opaque
+----
+<1> Namespace to create the secret in. This should be the same namespace you want to deploy to.
+<2> The resolvable hostname for your database node.
+<3> External port defaults to `5432`. 
+<4> Value for variable `password` should not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup, or restoration. +// [Christian Adams] We can roll out a fix for it 3/12, then next async release for everything. It may be good to exclude step 5 for ssl mode here. We'll need to track added that in once the fix is in for the operator. - Removing point 5 here until a fix is implemented. +// <5> The variable `sslmode` is valid for `external` databases only. The allowed values are: `*prefer*`, `*disable*`, `*allow*`, `*require*`, `*verify-ca*`, and `*verify-full*`. +. Apply `external-postgres-configuration-secret.yml` to your cluster using the `oc create` command. ++ +---- +$ oc create -f external-postgres-configuration-secret.yml +---- ++ +[NOTE] +==== +The following example is for a {Gateway} deployment. +To configure an external database for all components, use the _aap-configuring-external-db-all-default-components.yml_ example in the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#operator-crs[14.1. Custom resources] section. +==== ++ +. When creating your `AnsibleAutomationPlatform` custom resource object, specify the secret on your spec, following the example below: ++ +---- +apiVersion: aap.ansible.com/v1alpha1 +kind: AnsibleAutomationPlatform +metadata: + name: example-aap + namespace: aap +spec: + database: + database_secret: external-postgres-configuration +---- diff --git a/downstream/modules/platform/proc-operator-external-db-hub.adoc b/downstream/modules/platform/proc-operator-external-db-hub.adoc index 3a6ade5028..73f58776ed 100644 --- a/downstream/modules/platform/proc-operator-external-db-hub.adoc +++ b/downstream/modules/platform/proc-operator-external-db-hub.adoc @@ -1,35 +1,36 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-operator-external-db-hub"] -= Configuring an external database for {HubName} on {PlatformName} operator += Configuring an external database for {HubName} on {OperatorPlatformName} [role="_abstract"] For users who prefer to deploy {PlatformNameShort} with an external database, they can do so by configuring a secret with instance credentials and connection information, then applying it to their cluster using the `oc create` command. -By default, the {PlatformName} operator automatically creates and configures a managed PostgreSQL pod in the same namespace as your {PlatformNameShort} deployment. +By default, the {OperatorPlatformNameShort} automatically creates and configures a managed PostgreSQL pod in the same namespace as your {PlatformNameShort} deployment. You can choose to use an external database instead if you prefer to use a dedicated node to ensure dedicated resources or to manually manage backups, upgrades, or performance tweaks. [NOTE] ==== -The same external database (PostgreSQL instance) can be used for both {HubName} and {ControllerName} as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance. +The same external database (PostgreSQL instance) can be used for {HubName}, {ControllerName}, and {Gateway}, as long as the database names are different. In other words, you can have multiple databases with different names inside a single PostgreSQL instance. ==== -The following section outlines the steps to configure an external database for your {HubName} on a {PlatformNameShort} operator. 
+The following section outlines the steps to configure an external database for your {HubName} on the {OperatorPlatformNameShort}. .Prerequisite The external database must be a PostgreSQL database that is the version supported by the current release of {PlatformNameShort}. +The external PostgreSQL instance credentials and connection information must be stored in a secret, which is then set on the {HubName} spec. [NOTE] ==== -{PlatformNameShort} {PlatformVers} supports PostgreSQL 13. +{PlatformNameShort} {PlatformVers} supports {PostgresVers}. ==== .Procedure -The external postgres instance credentials and connection information will need to be stored in a secret, which will then be set on the {HubName} spec. - -. Create a `postgres_configuration_secret` .yaml file, following the template below: +. Create a `postgres_configuration_secret` YAML file, following the template below: + ---- apiVersion: v1 @@ -47,7 +48,7 @@ stringData: type: "unmanaged" type: Opaque ---- -<1> Namespace to create the secret in. This should be the same namespace you wish to deploy to. +<1> Namespace to create the secret in. This should be the same namespace you want to deploy to. <2> The resolvable hostname for your database node. <3> External port defaults to `5432`. <4> Value for variable `password` should not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup or restoration. diff --git a/downstream/modules/platform/proc-operator-link-components.adoc b/downstream/modules/platform/proc-operator-link-components.adoc index a31d7dd978..673ad0837f 100644 --- a/downstream/modules/platform/proc-operator-link-components.adoc +++ b/downstream/modules/platform/proc-operator-link-components.adoc @@ -1,18 +1,34 @@ +:_mod-docs-content-type: PROCEDURE + [id="operator-link-components_{context}"] -= Linking your components to the platform gateway += Linking your components to the {Gateway} -After installing the {OperatorPlatform} in your namespace you can set up your *AnsibleAutomationPlatform* instance. +After installing the {OperatorPlatformNameShort} in your namespace, you can set up your *{PlatformNameShort}* instance. Then link all the platform components to a single user interface. .Procedure -. Go to your {OperatorPlatform} and click btn:[Details]. -. On the *AnsibleAutomationPlatform* tile click btn:[Create instance]. -. From the *Create AnsibleAutomationPlatform* page enter a name for your instance in the *Name* field. + +. Log in to {OCP}. +. Navigate to menu:Operators[Installed Operators]. +. Select your {OperatorPlatformNameShort} deployment. +. Select the *Details* tab. + +. On the *{PlatformNameShort}* tile, click btn:[Create instance]. +. From the *Create {PlatformNameShort}* page, enter a name for your instance in the *Name* field. . Click btn:[YAML view] and paste the following: + ---- spec: + database: + resource_requirements: + requests: + cpu: 200m + memory: 512Mi + storage_requirements: + requests: + storage: 100Gi + controller: disabled: false @@ -22,14 +38,14 @@ spec: hub: disabled: false storage_type: file - file_storage_storage_class: nfs-local-rwx + file_storage_storage_class: file_storage_size: 10Gi ---- . Click btn:[Create]. .Verification -Go to your {OperatorPlatform} deployment and click btn:[All instances] to verify if all instances deployed correctly. -You should see the *AnsibleAutomationPlatform* instance and the deployed *AutomationController*, *EDA*, and *AutomationHub* instances here. 
+Go to your {OperatorPlatformNameShort} deployment and click btn:[All instances] to verify whether all instances deployed correctly. +You should see the *{PlatformNameShort}* instance and the deployed *AutomationController*, *EDA*, and *AutomationHub* instances here. Alternatively you can check by the command line, run: `oc get route` diff --git a/downstream/modules/platform/proc-operator-mesh-upgrading-receptors.adoc b/downstream/modules/platform/proc-operator-mesh-upgrading-receptors.adoc new file mode 100644 index 0000000000..de09cde3f3 --- /dev/null +++ b/downstream/modules/platform/proc-operator-mesh-upgrading-receptors.adoc @@ -0,0 +1,45 @@ +:_mod-docs-content-type: PROCEDURE + +[id="proc-operator-mesh-upgrading-receptors"] + += Upgrading receptors + +A software update addresses issues or bugs to provide a better experience of working with the technology. Anyone with administrative rights can update the receptor on an execution node. + +Red Hat recommends updating the receptor after any {PlatformNameShort} control plane update to ensure that you are using the latest version. As a best practice, also perform regular receptor updates between control plane updates. + + +.Procedure + +. Check the current receptor version: ++ +---- +receptor --version +---- ++ +. Update the receptor: ++ +---- +sudo dnf update ansible-runner receptor -y +---- ++ +[NOTE] +==== +To upgrade all packages (not just the receptor), use `dnf update`, then reboot with `reboot`. +==== ++ +. Verify the installation. After the update is complete, check the receptor version again to verify the update: ++ +---- +receptor --version +---- ++ +. Restart the receptor service: ++ +---- +sudo systemctl restart receptor +---- ++ +. Ensure the receptor is working correctly and is properly connected to the controller or other nodes in the system. + + diff --git a/downstream/modules/platform/proc-operator-scaling-down-aap.adoc b/downstream/modules/platform/proc-operator-scaling-down-aap.adoc new file mode 100644 index 0000000000..1c7bbb4b8c --- /dev/null +++ b/downstream/modules/platform/proc-operator-scaling-down-aap.adoc @@ -0,0 +1,29 @@ +:_mod-docs-content-type: PROCEDURE + +[id="operator-scaling-down-aap"] + += Scaling down {OperatorPlatformName} deployments + +You can scale down all {PlatformNameShort} deployments and StatefulSets by using the `idle_aap` variable. +This is useful for scenarios such as upgrades, migrations, or disaster recovery. + + +.Procedure + +. Log in to {OCP}. +. Go to menu:Operators[Installed Operators]. +. Select your {OperatorPlatformNameShort} deployment. +. Select *All Instances* and go to your *AnsibleAutomationPlatform* instance. +. Click the *{MoreActionsIcon}* icon and then select btn:[Edit AnsibleAutomationPlatform]. +. In the *YAML view*, paste the following YAML code under the `spec:` section: ++ +---- +idle_aap: true +---- ++ +. Click btn:[Save]. + +.Results + +Setting the `idle_aap` value to `true` scales down all active deployments. Setting the value to `false` scales the deployments back up. 
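+ +If you prefer the command line, you can apply the same toggle with `oc patch`; a minimal sketch, assuming an instance named `example-aap` in the `aap` namespace (adjust both names for your deployment): + +---- +$ oc patch ansibleautomationplatform example-aap -n aap --type merge -p '{"spec":{"idle_aap":true}}' +---- 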
+ diff --git a/downstream/modules/platform/proc-operator-troubleshoot-ext-db.adoc b/downstream/modules/platform/proc-operator-troubleshoot-ext-db.adoc new file mode 100644 index 0000000000..e8e48021a6 --- /dev/null +++ b/downstream/modules/platform/proc-operator-troubleshoot-ext-db.adoc @@ -0,0 +1,47 @@ +:_mod-docs-content-type: PROCEDURE + +[id="aap-operator-troubleshoot-ext-db_{context}"] + += Troubleshooting an external database with an unexpected DateStyle set + +When upgrading the {OperatorPlatformNameShort}, you might encounter an error like the following: + +---- +NotImplementedError: can't parse timestamptz with DateStyle 'Redwood, SHOW_TIME': '18-MAY-23 20:33:55.765755 +00:00' +---- + +Errors like this occur when you have an external database with an unexpected DateStyle set. +Complete the following steps to resolve this issue. + +.Procedure + +. Edit the `/var/lib/pgsql/data/postgresql.conf` file on the database server: ++ +---- +# vi /var/lib/pgsql/data/postgresql.conf +---- ++ +. Find and comment out the line: ++ +---- +#datestyle = 'Redwood, SHOW_TIME' +---- ++ +. Add the following setting immediately below the newly commented line: ++ +---- +datestyle = 'iso, mdy' +---- ++ +. Save and close the `postgresql.conf` file. +. Reload the database configuration: ++ +---- +# systemctl reload postgresql +---- ++ + +[NOTE] +==== +Running this command does not disrupt database operations. +==== diff --git a/downstream/modules/platform/proc-operator-upgrade.adoc b/downstream/modules/platform/proc-operator-upgrade.adoc index 2bf9e97d62..a35488a104 100644 --- a/downstream/modules/platform/proc-operator-upgrade.adoc +++ b/downstream/modules/platform/proc-operator-upgrade.adoc @@ -1,16 +1,37 @@ +:_mod-docs-content-type: PROCEDURE + [id="upgrading-operator_{context}"] -= Upgrading the {OperatorPlatform} += Upgrading the {OperatorPlatformNameShort} + +To upgrade to the latest version of {OperatorPlatformNameShort} on {OCPShort}, complete the following steps: + +.Prerequisites +* Read the link:{URLReleaseNotes}[{TitleReleaseNotes}] for 2.5. -[role=_abstract] +* Optional: For existing deployments, you must deploy all of your {PlatformName} services ({ControllerName}, {HubName}, {EDAName}) to the same, single namespace before upgrading to 2.5. For more information, see link:https://access.redhat.com/solutions/7092056[Migrating from one namespace to another]. +* Review the link:{URLOperatorBackup}[{TitleOperatorBackup}] guide and back up your services: +** AutomationControllerBackup +** AutomationHubBackup +** EDABackup -To upgrade to the latest version of {OperatorPlatform} on {OCPShort}, do the following: +[IMPORTANT] +==== +Upgrading from {EDAName} 2.4 is not supported. If you are using {EDAName} 2.4 in production, contact Red{nbsp}Hat before you upgrade. +==== -.Prodedure +.Procedure . Log in to {OCPShort}. . Navigate to menu:Operators[Installed Operators]. +. Select the {OperatorPlatformNameShort} installed on your project namespace. . Select the *Subscriptions* tab. -. Under *Upgrade status*, click btn:[Upgrade Available]. +. Change the channel from `stable-2.4` to `stable-2.5`. An InstallPlan is created for the user. . Click btn:[Preview InstallPlan]. . Click btn:[Approve]. +. Create a Custom Resource (CR) using the {PlatformNameShort} UI. The {ControllerName} and {HubName} UIs remain until all SSO configuration is supported in the {Gateway} UI. 
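+ +If you work from the command line instead of the console, you can make the same channel change with `oc patch`; a minimal sketch, assuming the Subscription object is named `ansible-automation-platform-operator` in the `aap` namespace (use the first command to confirm the actual name): + +---- +$ oc get subscription -n aap +$ oc patch subscription ansible-automation-platform-operator -n aap --type merge -p '{"spec":{"channel":"stable-2.5"}}' +---- 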
+ +[role="_additional-resources"] +.Additional resources + +* link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#configure-aap-operator_operator-platform-doc[Configuring the {OperatorPlatformName} on {OCP}] diff --git a/downstream/modules/platform/proc-pac-add-policy-to-inventory.adoc b/downstream/modules/platform/proc-pac-add-policy-to-inventory.adoc new file mode 100644 index 0000000000..d11e6fb9db --- /dev/null +++ b/downstream/modules/platform/proc-pac-add-policy-to-inventory.adoc @@ -0,0 +1,19 @@ +:_newdoc-version: 2.18.4 +:_template-generated: 2025-05-09 +:_mod-docs-content-type: PROCEDURE + +[id="pac-add-policy-to-inventory_{context}"] + += Associating a policy with an inventory + +To associate a policy with an inventory, take the following steps: + +.Procedure + +. From the navigation panel, select {MenuInfrastructureInventories}. +. On the *Inventories* page: +.. To edit an existing inventory, find the inventory you want to edit and click the pencil icon image:leftpencil.png[Edit page,15,15] to go to the editing screen. +.. To create a new inventory, click btn:[Create inventory]. +. In the field titled *Policy enforcement*, enter the query path associated with the policy you want to implement. +You must format the query path as `package/rule`. +. Click btn:[Save inventory] if you are editing an existing inventory, or click btn:[Create inventory] if you are creating a new inventory. diff --git a/downstream/modules/platform/proc-pac-add-policy-to-org.adoc b/downstream/modules/platform/proc-pac-add-policy-to-org.adoc new file mode 100644 index 0000000000..b23ee5ca4f --- /dev/null +++ b/downstream/modules/platform/proc-pac-add-policy-to-org.adoc @@ -0,0 +1,19 @@ +:_newdoc-version: 2.18.4 +:_template-generated: 2025-05-09 +:_mod-docs-content-type: PROCEDURE + +[id="pac-add-policy-to-org_{context}"] + += Associating a policy with an organization + +To associate a policy with an organization, take the following steps. + +.Procedure + +. From the navigation panel, select {MenuAMOrganizations}. +. On the *Organizations* page: +.. To edit an existing organization, find the organization you want to edit and click the pencil icon image:leftpencil.png[Edit page,15,15] to go to the editing screen. +.. To create a new organization, click btn:[Create organization]. +. In the field labeled *Policy enforcement*, enter the query path associated with the policy you want to implement. +You must format the query path as `package/rule`. +. Click btn:[Next] and then btn:[Finish] to save your settings. diff --git a/downstream/modules/platform/proc-pac-add-policy-to-template.adoc b/downstream/modules/platform/proc-pac-add-policy-to-template.adoc new file mode 100644 index 0000000000..74da83432a --- /dev/null +++ b/downstream/modules/platform/proc-pac-add-policy-to-template.adoc @@ -0,0 +1,19 @@ +:_newdoc-version: 2.18.4 +:_template-generated: 2025-05-09 +:_mod-docs-content-type: PROCEDURE + +[id="pac-add-policy-to-template_{context}"] + += Associating a policy with a job template + +To associate a policy with a job template, take the following steps: + +.Procedure + +. From the navigation panel, select {MenuAETemplates}. +. On the *Automation Templates* page: +.. To edit an existing job template, find the job template you want to edit and click the pencil icon image:leftpencil.png[Edit page,15,15] to go to the editing screen. +.. To create a new job template, click btn:[Create template]. +. 
In the field titled *Policy enforcement*, enter the query path associated with the policy you want to implement. +You must format the query path as `package/rule`. +. Click btn:[Save job template] if you are editing an existing job template, or click btn:[Create job template] if you are creating a new job template. diff --git a/downstream/modules/platform/proc-post-migration-cleanup.adoc b/downstream/modules/platform/proc-post-migration-cleanup.adoc deleted file mode 100644 index a3f510099f..0000000000 --- a/downstream/modules/platform/proc-post-migration-cleanup.adoc +++ /dev/null @@ -1,19 +0,0 @@ -[id="post-migration-cleanup_{context}"] - -= Post migration cleanup - -[role=_abstract] - -After your data migration is complete, you must delete any Instance Groups that are no longer required. - -.Procedure -. Log in to {PlatformName} as the administrator with the password you created during migration. -+ -[NOTE] -==== -Note: If you did not create an administrator password during migration, one was automatically created for you. To locate this password, go to your project, select menu:Workloads[Secrets] and open controller-admin-password. From there you can copy the password and paste it into the {PlatformName} password field. -==== -+ -. Select {MenuInfrastructureInstanceGroups}. -. Select all Instance Groups except controlplane and default. -. Click btn:[Delete]. diff --git a/downstream/modules/platform/proc-post-migration-delete-instance.adoc b/downstream/modules/platform/proc-post-migration-delete-instance.adoc new file mode 100644 index 0000000000..0fa7e53434 --- /dev/null +++ b/downstream/modules/platform/proc-post-migration-delete-instance.adoc @@ -0,0 +1,22 @@ +:_mod-docs-content-type: PROCEDURE + +[id="post-migration-delete-instance_{context}"] + += Deleting Instance Groups post migration + +[role=_abstract] + +You can use the following procedure to delete any unnecessary instance groups after you have successfully migrated. + +[NOTE] +==== +If you did not create an administrator password during migration, one was automatically created for you. +To locate this password, go to your project, select menu:Workloads[Secrets] and open controller-admin-password. +From there you can copy the password and paste it into the {PlatformName} password field. +==== + +.Procedure +. Log in to {PlatformName} as the administrator with the password you created during migration. +. Select {MenuInfrastructureInstanceGroups}. +. Select all Instance Groups except `controlplane` and `default`. +. Click btn:[Delete]. diff --git a/downstream/modules/platform/proc-post-migration-unlink-db.adoc b/downstream/modules/platform/proc-post-migration-unlink-db.adoc new file mode 100644 index 0000000000..a79a57bd82 --- /dev/null +++ b/downstream/modules/platform/proc-post-migration-unlink-db.adoc @@ -0,0 +1,20 @@ +:_mod-docs-content-type: PROCEDURE + +[id="post-migration-unlink-db_{context}"] + += Unlinking the old database configuration secret post migration + +[role=_abstract] + +After a successful migration you must unlink your old database configuration secret. + +.Procedure + +. Log in to *{OCP}*. +. Navigate to menu:Operators[Installed Operators]. +. Select the {OperatorPlatformNameShort} installed on your project namespace. +. Select the *Automation Controller* tab. +. Click your *AutomationController* object. You can then view the object through the *Form view* or *YAML view*. The following inputs are available through the *YAML view*. +. 
Locate the `old_postgres_configuration_secret` item within the spec section of the YAML contents. +. Delete the line that contains this item. +. Click btn:[Save]. diff --git a/downstream/modules/platform/proc-postgresql-enable-mtls-authentication.adoc b/downstream/modules/platform/proc-postgresql-enable-mtls-authentication.adoc new file mode 100644 index 0000000000..85aede2c79 --- /dev/null +++ b/downstream/modules/platform/proc-postgresql-enable-mtls-authentication.adoc @@ -0,0 +1,33 @@ +:_mod-docs-content-type: PROCEDURE + +[id="proc-enable-mtls-authentication_{context}"] + += Enabling mutual TLS (mTLS) authentication + +mTLS authentication is disabled by default; however, you can optionally enable the authentication. + +.Procedure +To configure each component's database with mTLS authentication, add the following variables to your inventory file under the `[all:vars]` group and ensure each component has a different TLS certificate and key: + +[source,yaml,subs="+attributes"] +---- +# {ControllerNameStart} +pgclient_sslcert=/path/to/awx.cert +pgclient_sslkey=/path/to/awx.key +pg_sslmode=verify-full or verify-ca + +# {GatewayStart} +automationgateway_pgclient_sslcert=/path/to/gateway.cert +automationgateway_pgclient_sslkey=/path/to/gateway.key +automationgateway_pg_sslmode=verify-full or verify-ca + +# {HubNameStart} +automationhub_pgclient_sslcert=/path/to/pulp.cert +automationhub_pgclient_sslkey=/path/to/pulp.key +automationhub_pg_sslmode=verify-full or verify-ca + +# {EDAName} +automationedacontroller_pgclient_sslcert=/path/to/eda.cert +automationedacontroller_pgclient_sslkey=/path/to/eda.key +automationedacontroller_pg_sslmode=verify-full or verify-ca +---- \ No newline at end of file diff --git a/downstream/modules/platform/proc-postgresql-use-custom-certificates.adoc b/downstream/modules/platform/proc-postgresql-use-custom-certificates.adoc new file mode 100644 index 0000000000..2241cf4b86 --- /dev/null +++ b/downstream/modules/platform/proc-postgresql-use-custom-certificates.adoc @@ -0,0 +1,29 @@ +:_mod-docs-content-type: PROCEDURE + +[id="proc-use-custom-tls-certificates_{context}"] + += Using custom TLS certificates + +By default, the installation program generates self-signed TLS certificates and keys for all {PlatformNameShort} services. However, you can optionally use custom TLS certificates. + +.Procedure + +* To replace these with your own custom certificate and key, set the following inventory file variables: ++ +[source,yaml,subs="+attributes"] +---- +aap_ca_cert_file= +aap_ca_key_file= +---- + +* If any of your certificates are signed by a custom Certificate Authority (CA), then you must specify the Certificate Authority's certificate by using the `custom_ca_cert` inventory file variable: ++ +[source,yaml,subs="+attributes"] +---- +custom_ca_cert= +---- ++ +[NOTE] +==== +If you have more than one custom CA certificate, combine them into a single file, then reference the combined certificate with the `custom_ca_cert` inventory file variable. 
+==== diff --git a/downstream/modules/platform/proc-preparing-the-managed-nodes-for-containerized-installation.adoc b/downstream/modules/platform/proc-preparing-the-managed-nodes-for-containerized-installation.adoc new file mode 100644 index 0000000000..34a0191adf --- /dev/null +++ b/downstream/modules/platform/proc-preparing-the-managed-nodes-for-containerized-installation.adoc @@ -0,0 +1,43 @@ +:_mod-docs-content-type: PROCEDURE + +[id="preparing-the-managed-nodes-for-containerized-installation"] + += Preparing the managed nodes for containerized installation + +Managed nodes, also referred to as hosts, are the devices that {PlatformNameShort} is configured to manage. + +To ensure a consistent and secure setup of containerized {PlatformNameShort}, create a dedicated user on each host. {PlatformNameShort} connects as this user to run tasks on the host. + +After you configure the user, you can define the dedicated user for each host by adding `ansible_user=` in your inventory file, for example: `aap.example.org ansible_user=aap`. + +Complete the following steps for each host: + +.Procedure + +. Log in to the host as the root user. +. Create a new user. Replace `` with the username you want, for example `aap`. ++ +---- +$ adduser +---- ++ +. Set a password for the new user. Replace `` with the username you created. ++ +---- +$ passwd +---- ++ +. Configure the user to run sudo commands. +.. To do this, open the sudoers file: ++ +---- +$ vi /etc/sudoers +---- ++ +.. Add the following line to the file (replacing `` with the username you created): ++ +---- + ALL=(ALL) NOPASSWD: ALL +---- ++ +.. Save and exit the file. diff --git a/downstream/modules/platform/proc-preparing-the-rhel-host-for-containerized-installation.adoc b/downstream/modules/platform/proc-preparing-the-rhel-host-for-containerized-installation.adoc index e95525966c..e9ff12f482 100644 --- a/downstream/modules/platform/proc-preparing-the-rhel-host-for-containerized-installation.adoc +++ b/downstream/modules/platform/proc-preparing-the-rhel-host-for-containerized-installation.adoc @@ -1,40 +1,74 @@ :_mod-docs-content-type: PROCEDURE -[id="preparing-the-rhel-host-for-containerized-installation_{context}"] +[id="preparing-the-rhel-host-for-containerized-installation"] -= Preparing the RHEL host for containerized installation += Preparing the {RHEL} host for containerized installation -[role="_abstract"] +Containerized {PlatformNameShort} runs the component services as Podman-based containers on top of a {RHEL} host. Prepare the {RHEL} host to ensure a successful installation. .Procedure -Containerized {PlatformNameShort} runs the component services as podman based containers on top of a RHEL host. The installer takes care of this once the underlying host has been prepared. Use the following instructions for this. - -. Log into your RHEL host as your non-root user. +. Log in to the {RHEL} host as your non-root user. ++ +. Ensure the hostname associated with your host is set as a fully qualified domain name (FQDN). +.. To check the hostname associated with your host, run the following command: ++ +---- +hostname -f +---- ++ +Example output: ++ +---- +aap.example.org +---- +.. If the hostname is not an FQDN, you can set it with the following command: ++ +---- +sudo hostnamectl set-hostname +---- ++ +. Register your {RHEL} host with `subscription-manager`: ++ +---- +sudo subscription-manager register +---- ++ -. Run *dnf repolist* to validate only the BaseOS and appstream repos are setup and enabled on the host: +. 
Verify that only the BaseOS and AppStream repositories are enabled on the host: ++ +---- +$ sudo dnf repolist +---- ++ +Example output: + ---- -$ dnf repolist Updating Subscription Management repositories. repo id repo name rhel-9-for-x86_64-appstream-rpms Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs) rhel-9-for-x86_64-baseos-rpms Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs) ---- + -. Ensure that these repos and only these repos are available to the host OS. If you need to know how to do this use this guide: -link:{BaseURL}/red_hat_enterprise_linux/9/html/managing_software_with_the_dnf_tool/assembly_managing-custom-software-repositories_managing-software-with-the-dnf-tool[Chapter 10. Managing custom software repositories Red Hat Enterprise Linux] - -. Ensure that the host has DNS configured and can resolve hostnames and IPs using a fully qualified domain name (FQDN). This is essential to ensure services can talk to one another. - -.Using unbound DNS - -To configure unbound DNS refer to link:{BaseURL}/red_hat_enterprise_linux/9/html/managing_networking_infrastructure_services/assembly_setting-up-an-unbound-dns-server_networking-infrastructure-services[Chapter 2. Setting up an unbound DNS server Red Hat Enterprise Linux 9]. +. Ensure that the host can resolve hostnames and IP addresses by using DNS. This is essential so that services can communicate with one another. -.Using BIND DNS - -To configure DNS using BIND refer to link:{BaseURL}/red_hat_enterprise_linux/9/html/managing_networking_infrastructure_services/assembly_setting-up-and-configuring-a-bind-dns-server_networking-infrastructure-services[Chapter 1. Setting up and configuring a BIND DNS server Red Hat Enterprise Linux 9]. +. Install `ansible-core`: ++ +---- +sudo dnf install -y ansible-core +---- ++ +. Optional: You can install additional utilities that can be useful for troubleshooting purposes, for example `wget`, `git-core`, `rsync`, and `vim`: ++ +---- +sudo dnf install -y wget git-core rsync vim +---- + +. Optional: To have the installation program automatically pick up and apply your {PlatformNameShort} subscription manifest license, follow the steps in link:{URLCentralAuth}/assembly-gateway-licensing#assembly-aap-obtain-manifest-files[Obtaining a manifest file]. -To have the installer automatically pick up and apply your {PlatformNameShort} subscription manifest license, use this guide to generate a manifest file which can be downloaded for the installer: link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_operations_guide/assembly-aap-obtain-manifest-files[Chapter 2. Obtaining a manifest file Red Hat Ansible Automation Platform 2.]. 
+[role="_additional-resources"] +.Additional resources +* link:{URLCentralAuth}/assembly-gateway-licensing#proc-attaching-subscriptions[Attaching your {PlatformName} Subscription] +* link:{BaseURL}/red_hat_enterprise_linux/9/html/managing_networking_infrastructure_services/assembly_setting-up-an-unbound-dns-server_networking-infrastructure-services[Setting up an unbound DNS server] +* link:{BaseURL}/red_hat_enterprise_linux/9/html/managing_networking_infrastructure_services/assembly_setting-up-and-configuring-a-bind-dns-server_networking-infrastructure-services[Setting up and configuring a BIND DNS server] +* link:https://docs.ansible.com/ansible/latest/[Ansible Core Documentation] diff --git a/downstream/modules/platform/proc-projects-manage-playbooks-manually.adoc b/downstream/modules/platform/proc-projects-manage-playbooks-manually.adoc index 482f7c8fee..d7f736729a 100644 --- a/downstream/modules/platform/proc-projects-manage-playbooks-manually.adoc +++ b/downstream/modules/platform/proc-projects-manage-playbooks-manually.adoc @@ -1,7 +1,12 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-projects-manage-playbooks-manually"] = Managing playbooks manually +While integrating with Source Code Management (SCM) systems is generally recommended for version control and collaborative development, there may be instances where direct management of playbook files is necessary. +This approach involves creating and organizing playbook directories and files on the local filesystem, ensuring proper ownership and permissions for execution. + .Procedure * Create one or more directories to store playbooks under the Project Base Path, for example, `/var/lib/awx/projects/`. @@ -11,7 +16,7 @@ .Troubleshooting -* If you have not added any Ansible Playbook directories to the base project path an error message is displayed. +* If you have not added any Ansible Playbook directories to the base project path, an error message is displayed. Choose one of the following options to troubleshoot this error: -** Create the appropriate playbook directories and check out playbooks from your (Source code management) SCM. -** Copy playbooks into the appropriate playbook directories. \ No newline at end of file +** Create the appropriate playbook directories and check out playbooks from your SCM. +** Copy playbooks into the appropriate playbook directories. diff --git a/downstream/modules/platform/proc-projects-using-collections-with-hub.adoc b/downstream/modules/platform/proc-projects-using-collections-with-hub.adoc index 631b6a32bc..6596a322f1 100644 --- a/downstream/modules/platform/proc-projects-using-collections-with-hub.adoc +++ b/downstream/modules/platform/proc-projects-using-collections-with-hub.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-projects-using-collections-with-hub"] = Using collections with {HubName} @@ -17,8 +19,8 @@ Use the following procedure to connect to {PrivateHubName} or {HubName}, the onl . Create a credential by choosing one of the following options: .. 
To use {HubName}, create an {HubName} credential by using the copied token and pointing to the URLs shown in the *Server URL* and *SSO URL* fields of the token page: + -* *Galaxy Server URL* = `https://console.redhat.com/api/automation-hub/` -* *AUTH SEVER URL* = `https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token` +* *Galaxy Server URL* = `https://console.redhat.com/ansible/automation-hub/token` +//* *AUTH SERVER URL* = `https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token` + .. To use {PrivateHubName}, create an {HubName} credential using a token retrieved from the *Repo Management* dashboard of your {PrivateHubName} and pointing to the published repository URL as shown: //+ @@ -33,7 +35,7 @@ For each repository in {Hubname} you must create a different credential. + //image:projects-create-ah-credential.png[Create hub credential] + -For UI specific instructions, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/managing_content_in_automation_hub/managing-cert-valid-content[Red Hat Certified, validated, and Ansible Galaxy content in automation hub]. +For UI specific instructions, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/managing_automation_content/managing-cert-valid-content[Red Hat Certified, validated, and Ansible Galaxy content in automation hub]. . Go to the organization for which you want to synchronize content from and add the new credential to the organization. This enables you to associate each organization with the credential, or repository, that you want to use content from. @@ -54,7 +56,7 @@ Then you can assign different levels of access to different organizations. For example, you can create a `Developers` organization that has access to both repository, while an Operations organization just has access to the *Prod* repository only. + -For UI specific instructions, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/managing_content_in_automation_hub/index#configuring-user-access-containers[Configuring user access for container repositories in private automation hub]. +For UI specific instructions, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/managing_automation_content/index#configuring-user-access-containers[Configuring user access for container repositories in private automation hub]. . If {HubName} has self-signed certificates, use the toggle to enable the setting *Ignore Ansible Galaxy SSL Certificate Verification* in *Job Settings*. For {HubName}, which uses a signed certificate, use the toggle to disable it instead. This is a global setting: diff --git a/downstream/modules/platform/proc-provide-custom-ca-cert.adoc b/downstream/modules/platform/proc-provide-custom-ca-cert.adoc new file mode 100644 index 0000000000..3db3fc75d1 --- /dev/null +++ b/downstream/modules/platform/proc-provide-custom-ca-cert.adoc @@ -0,0 +1,15 @@ +:_mod-docs-content-type: PROCEDURE + +[id="providing-a-custom-ca-certificate"] += Providing a custom CA certificate + +When you manually provide TLS certificates, those certificates might be signed by a custom CA. Provide a custom CA certificate to ensure proper authentication and secure communication within your environment. If you have multiple custom CA certificates, you must merge them into a single file. 
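+For example, a minimal sketch of merging two PEM-encoded CA certificates into a single bundle (the file names are illustrative): + +---- +$ cat intermediate-ca.crt root-ca.crt > combined-ca.crt +---- 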
+ +.Procedure +* If any of the TLS certificates you manually provided are signed by a custom CA, you must specify the CA certificate by using the following variable in your inventory file: ++ +---- +custom_ca_cert= +---- ++ +If you have more than one CA certificate, combine them into a single file and reference the combined certificate with the `custom_ca_cert` variable. diff --git a/downstream/modules/platform/proc-provide-custom-tls-certs-per-service.adoc b/downstream/modules/platform/proc-provide-custom-tls-certs-per-service.adoc new file mode 100644 index 0000000000..348f8b0de6 --- /dev/null +++ b/downstream/modules/platform/proc-provide-custom-tls-certs-per-service.adoc @@ -0,0 +1,53 @@ +:_mod-docs-content-type: PROCEDURE + +[id="proc-provide-custom-tls-certs-per-service"] += Providing custom TLS certificates for each service + +Use this method if your organization manages TLS certificates outside of {PlatformNameShort} and requires manual provisioning. + +.Procedure +* To manually provide TLS certificates for each individual service (for example, {ControllerName}, {HubName}, and {EDAName}), set the following variables in your inventory file: ++ +[source,yaml,subs="+attributes"] +---- +# {GatewayStart} +gateway_tls_cert= +gateway_tls_key= +gateway_pg_tls_cert= +gateway_pg_tls_key= +gateway_redis_tls_cert= +gateway_redis_tls_key= + +# {ControllerNameStart} +controller_tls_cert= +controller_tls_key= +controller_pg_tls_cert= +controller_pg_tls_key= + +# {HubNameStart} +hub_tls_cert= +hub_tls_key= +hub_pg_tls_cert= +hub_pg_tls_key= + +# {EDAName} +eda_tls_cert= +eda_tls_key= +eda_pg_tls_cert= +eda_pg_tls_key= +eda_redis_tls_cert= +eda_redis_tls_key= + +# PostgreSQL +postgresql_tls_cert= +postgresql_tls_key= + +# Receptor +receptor_tls_cert= +receptor_tls_key= + +# Redis +redis_tls_cert= +redis_tls_key= +---- + diff --git a/downstream/modules/platform/proc-provision-ocp-storage-amazon-s3.adoc b/downstream/modules/platform/proc-provision-ocp-storage-amazon-s3.adoc index 144401f6c8..c95fd5cce5 100644 --- a/downstream/modules/platform/proc-provision-ocp-storage-amazon-s3.adoc +++ b/downstream/modules/platform/proc-provision-ocp-storage-amazon-s3.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="provision-ocp-storage-amazon-s3_{context}"] @@ -41,7 +43,7 @@ spec: + . If you are applying this secret to an existing instance, restart the API pods for the change to take effect. `` is the name of your hub instance. - ++ [source,bash] ---- $ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=-api diff --git a/downstream/modules/platform/proc-provision-ocp-storage-azure-blob.adoc b/downstream/modules/platform/proc-provision-ocp-storage-azure-blob.adoc index 48ae256e71..1ac9e39df4 100644 --- a/downstream/modules/platform/proc-provision-ocp-storage-azure-blob.adoc +++ b/downstream/modules/platform/proc-provision-ocp-storage-azure-blob.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="provision-ocp-storage-azure-blob_{context}"] = Configuring object storage on Azure Blob @@ -38,10 +40,10 @@ EOF spec: object_storage_azure_secret: test-azure ---- - ++ . If you are applying this secret to an existing instance, restart the API pods for the change to take effect. `` is the name of your hub instance. 
- ++ [source,bash] ---- $ oc -n $HUB_NAMESPACE delete pod -l app.kubernetes.io/name=-api diff --git a/downstream/modules/platform/proc-provision-ocp-storage-with-readwritemany.adoc b/downstream/modules/platform/proc-provision-ocp-storage-with-readwritemany.adoc index 9d5440177b..95754ab837 100644 --- a/downstream/modules/platform/proc-provision-ocp-storage-with-readwritemany.adoc +++ b/downstream/modules/platform/proc-provision-ocp-storage-with-readwritemany.adoc @@ -1,12 +1,16 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-provision-ocp-storage-with-readwritemany_{context}"] = Provisioning OCP storage with `ReadWriteMany` access mode -To ensure successful installation of {OperatorPlatform}, you must provision your storage type for {HubName} initially to `ReadWriteMany` access mode. +To ensure successful installation of {OperatorPlatformNameShort}, you must provision your storage type for {HubName} initially to `ReadWriteMany` access mode. .Procedure -. Click link:{BaseURL}/openshift_container_platform/4.10/html-single/storage/index#persistent-storage-nfs-provisioning_persistent-storage-nfs[Provisioning] to update the access mode. +. Go to menu:Storage[PersistentVolume]. +. Click btn:[Create PersistentVolume]. . In the first step, update the `accessModes` from the default `ReadWriteOnce` to `ReadWriteMany`. +.. See link:{BaseURL}/openshift_container_platform/4.10/html-single/storage/index#persistent-storage-nfs-provisioning_persistent-storage-nfs[Provisioning] for a detailed overview of how to update the access mode. . Complete the additional steps in this section to create the persistent volume claim (PVC). diff --git a/downstream/modules/platform/proc-proxy-AWS-inventory-sync.adoc b/downstream/modules/platform/proc-proxy-AWS-inventory-sync.adoc new file mode 100644 index 0000000000..74c0489297 --- /dev/null +++ b/downstream/modules/platform/proc-proxy-AWS-inventory-sync.adoc @@ -0,0 +1,36 @@ +:_mod-docs-content-type: PROCEDURE + +[id="proc-proxy-AWS-inventory-sync"] + += Enabling a configurable proxy environment for AWS inventory synchronization + +To enable a configurable proxy environment for AWS inventory synchronization, you can either manually edit the override configuration file or set the configuration in the platform UI: + +. To manually edit the configuration, add the following proxy environment variables to `/usr/lib/systemd/system/receptor.service.d/override.conf`: ++ +---- +http_proxy: +https_proxy: +proxy_username: +proxy_password: +---- ++ +. To set the configuration through the platform UI, use the following procedure: + +.Procedure + +.. From the navigation panel, select {MenuSetJob}. +.. Click btn:[Edit]. +..
Add the variables to the *Extra Environment Variables* field. ++ +For example: ++ +---- +"AWX_TASK_ENV": { + "no_proxy": "localhost,127.0.0.0/8,10.0.0.0/8", + "http_proxy": "http://proxy_host:3128/", + "https_proxy": "http://proxy_host:3128/" + }, +---- diff --git a/downstream/modules/platform/proc-pulling-the-secret.adoc b/downstream/modules/platform/proc-pulling-the-secret.adoc index a05c8fa5ca..a9c1df46f6 100644 --- a/downstream/modules/platform/proc-pulling-the-secret.adoc +++ b/downstream/modules/platform/proc-pulling-the-secret.adoc @@ -1,6 +1,8 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-pulling-the-secret"] -= Pulling the secret for OpenShift Container Platform deployments += Pulling the secret for {OCPShort} deployments [NOTE] ==== @@ -19,14 +21,19 @@ oc create secret generic ee-pull-secret \ --from-literal=username= \ --from-literal=password= \ --from-literal=url=registry.redhat.io - -oc edit automationcontrollers ---- . Add `ee_pull_credentials_secret` and `ee-pull-secret` to the specification using: +. Add `ee_pull_credentials_secret` and `ee-pull-secret` to the specification by editing the deployment specification: ++ +---- +oc edit automationcontrollers aap-controller -o yaml +---- ++ +and add the following: + ---- -spec.ee_pull_credentials_secret=ee-pull-secret +spec: + ee_pull_credentials_secret: ee-pull-secret ---- . To manage instances from the {ControllerName} UI, you must have System Administrator or System Auditor permissions. diff --git a/downstream/modules/platform/proc-reinstalling-containerized-aap.adoc b/downstream/modules/platform/proc-reinstalling-containerized-aap.adoc new file mode 100644 index 0000000000..06a712e428 --- /dev/null +++ b/downstream/modules/platform/proc-reinstalling-containerized-aap.adoc @@ -0,0 +1,16 @@ +:_mod-docs-content-type: PROCEDURE + +[id="reinstalling-containerized-aap"] += Reinstalling containerized {PlatformNameShort} + +[role="_abstract"] + +To reinstall a containerized deployment after uninstalling and preserving the database, follow the steps in link:{URLContainerizedInstall}/aap-containerized-installation#installing-containerized-aap[Installing containerized {PlatformNameShort}] and include the existing secret key value in the playbook command: + +---- +$ ansible-playbook -i inventory ansible.containerized_installer.install -e controller_secret_key= +---- + +[role="_additional-resources"] +.Additional resources +* link:{URLContainerizedInstall}/appendix-inventory-files-vars[Inventory file variables] diff --git a/downstream/modules/platform/proc-renew-ssl-cert.adoc b/downstream/modules/platform/proc-renew-ssl-cert.adoc index 101a0b1b1f..74cd01da28 100644 --- a/downstream/modules/platform/proc-renew-ssl-cert.adoc +++ b/downstream/modules/platform/proc-renew-ssl-cert.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="renew-ssl-cert_{context}"] = Renewing the self-signed SSL certificate @@ -16,13 +18,15 @@ aap_service_regen_cert=true .Verification -* Validate the CA file and server.crt file on {ControllerName}: +* Validate the CA file and `server.crt` file on {ControllerName}: ++ ---- openssl verify -CAfile ansible-automation-platform-managed-ca-cert.crt /etc/tower/tower.cert openssl s_client -connect :443 ---- -* Validate the CA file and server.crt file on {HubName}: +* Validate the CA file and `server.crt` file on {HubName}: ++ ---- openssl verify -CAfile ansible-automation-platform-managed-ca-cert.crt /etc/pulp/certs/pulp_webserver.crt openssl s_client -connect :443 diff --git 
a/downstream/modules/platform/proc-restore-aap-container.adoc b/downstream/modules/platform/proc-restore-aap-container.adoc new file mode 100644 index 0000000000..4ad37c217e --- /dev/null +++ b/downstream/modules/platform/proc-restore-aap-container.adoc @@ -0,0 +1,93 @@ +:_mod-docs-content-type: PROCEDURE + +[id="proc-restore-aap-container"] += Restoring containerized {PlatformNameShort} + +Restore your {ContainerBase} of {PlatformNameShort} from a backup, either to the same environment or to a different environment. + +.Prerequisites +* You have logged in to the {RHEL} host as your dedicated non-root user. +* You have a backup of your {PlatformNameShort} deployment. For more information, see link:{URLContainerizedInstall}/aap-containerized-installation#backing-up-containerized-ansible-automation-platform[Backing up container-based {PlatformNameShort}]. +* If restoring to a different environment with the same hostnames, you have performed a fresh installation on the target environment with the same topology as the original (source) environment. +* You have ensured that the administrator credentials on the target environment match the administrator credentials from the source environment. + +.Procedure +. Go to the installation directory on your {RHEL} host. + +. Perform the relevant restoration steps: +** If you are restoring to the same environment with the same hostnames, run the `restore` playbook: ++ +---- +$ ansible-playbook -i inventory ansible.containerized_installer.restore +---- ++ +This restores the important data deployed by the containerized installer, such as: ++ +* PostgreSQL databases +* Configuration files +* Data files ++ +By default, the backup directory is set to `./backups`. You can change this by using the `backup_dir` variable in your `inventory` file. + +** If you are restoring to a different environment with different hostnames, perform the following additional steps before running the `restore` playbook: ++ +[IMPORTANT] +Restoring to a different environment with different hostnames is not recommended and is intended only as a workaround. ++ +... For each component, identify the backup file from the source environment that contains the PostgreSQL dump file. ++ +For example: ++ +---- +$ cd ansible-automation-platform-containerized-setup-2.5-XX/backups +---- ++ +---- +$ tar tvf gateway_env1-gateway-node1.tar.gz | grep db + +-rw-r--r-- ansible/ansible 4850774 2025-06-30 11:05 aap/backups/awx.db +---- +... Copy the backup files from the source environment to the target environment. +... Rename the backup files on the target environment to reflect the new node names. ++ +For example: ++ +---- +$ cd ansible-automation-platform-containerized-setup-2.5-XX/backups +---- ++ +---- +$ mv gateway_env1-gateway-node1.tar.gz gateway_env2-gateway-node1.tar.gz +---- +... For enterprise topologies, ensure that the component backup file containing the `component.db` file is listed first in its group within the inventory file. 
++ +For example: ++ +---- +$ cd ansible-automation-platform-containerized-setup-2.5-XX +---- ++ +---- +$ ls backups/gateway* + +gateway_env2-gateway-node1.tar.gz +gateway_env2-gateway-node2.tar.gz +---- ++ +---- +$ tar tvf backups/gateway_env2-gateway-node1.tar.gz | grep db + +-rw-r--r-- ansible/ansible 416687 2025-06-30 11:05 aap/backups/gateway.db +---- ++ +---- +$ tar tvf backups/gateway_env2-gateway-node2.tar.gz | grep db +---- ++ +---- +$ vi inventory + +[automationgateway] +env2-gateway-node1 +env2-gateway-node2 +---- diff --git a/downstream/modules/platform/proc-rpm-troubleshoot-generating-logs.adoc b/downstream/modules/platform/proc-rpm-troubleshoot-generating-logs.adoc new file mode 100644 index 0000000000..d8b79f0347 --- /dev/null +++ b/downstream/modules/platform/proc-rpm-troubleshoot-generating-logs.adoc @@ -0,0 +1,59 @@ +:_mod-docs-content-type: PROCEDURE + +[id="rpm-troubleshoot-generating-logs"] + += Gathering {PlatformNameShort} logs + +With the `sos` utility, you can collect configuration, diagnostic, and troubleshooting data, and provide those files to Red Hat Technical Support. An `sos` report is a common starting point for Red Hat technical support engineers when performing analysis of a service request for {PlatformNameShort}. + +As part of the troubleshooting with Red Hat Support, you can collect the `sos` report for each node in your RPM-based installation of {PlatformNameShort} using the installation inventory and the installation program. + +.Procedure + +. Access the installation program folder with the inventory file and run the installation program setup script with the following command: ++ +`$ ./setup.sh -s` ++ +With this command, you can connect to each node present in the inventory, install the `sos` tool, and generate new logs. ++ +[NOTE] +==== +If you are running the setup as a non-root user with sudo privileges, you can use the following command: +---- +$ ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ./setup.sh -s +---- +==== + +. _Optional_: Change the location of the `sos` report files. ++ +The `sos` report files are copied to the `/tmp` folder for the current server. To change the location, specify the new location by using the following command: ++ +---- +$ ./setup.sh -e 'target_sos_directory=/path/to/files' -s +---- ++ +Where `target_sos_directory=/path/to/files` specifies the destination directory where the `sos` report is saved. In this case, the `sos` report is stored in the directory `/path/to/files`. + +. Gather the files described in the playbook output and share them with the support engineer, or directly upload the `sos` report to Red Hat. ++ +To create an `sos` report with additional information or directly upload the data to Red Hat, use the following command: ++ +---- +$ ./setup.sh -e 'case_number=0000000' -e 'clean=true' -e 'upload=true' -s +---- ++ +.Parameter Reference Table +[%header, cols="a,a,a"] +[%autowidth] +|=== +|Parameter |Description |Default value + +|`case_number` |Specifies the support case number that you want to use. |- + +|`clean` |Obfuscates sensitive data that might be present on the `sos` report. |`false` + +|`upload` |Automatically uploads the `sos` report data to Red Hat. |`false` +|=== + +To learn more about the `sos` report tool, see the KCS article: link:https://access.redhat.com/solutions/3592[What is an sos report and how to create one in {RHEL}?] 
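+ +For example, the following single invocation combines the parameters above (the case number and target directory are illustrative): + +---- +$ ./setup.sh -e 'case_number=01234567' -e 'clean=true' -e 'upload=true' -e 'target_sos_directory=/var/tmp/sos' -s +---- 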
diff --git a/downstream/modules/platform/proc-run-jobs-on-execution-nodes.adoc b/downstream/modules/platform/proc-run-jobs-on-execution-nodes.adoc index cdbc91779a..6467db939a 100644 --- a/downstream/modules/platform/proc-run-jobs-on-execution-nodes.adoc +++ b/downstream/modules/platform/proc-run-jobs-on-execution-nodes.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-run-jobs-on-execution-nodes"] = Running jobs on execution nodes @@ -6,12 +8,12 @@ You must specify where jobs are run, or they default to running in the control c To do this, set up a Job Template. -For more information on Job Templates, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-job-templates[Job Templates] in the _{ControllerUG}_. +For more information on Job Templates, see link:{URLControllerUserGuide}/controller-job-templates[Job templates] in _{TitleControllerUserGuide}_. .Procedure . The *Templates* list view shows job templates that are currently available. -From this screen you can launch image:rightrocket.png[Launch,15,15], edit image:leftpencil.png[Edoit,15,15], and copy image:copy.png[Copy,15,15] a workflow job template. +From this screen you can launch image:rightrocket.png[Launch,15,15], edit image:leftpencil.png[Edit,15,15], and duplicate image:copy.png[Duplicate,15,15] a workflow job template. . Select the job you want and click the image:rightrocket.png[Launch,15,15] icon. . Select the *Instance Group* on which you want to run the job. Note that a System Administrator must grant you or your team permissions to be able to use an instance group in a job template. diff --git a/downstream/modules/platform/proc-running-setup-script-for-updates.adoc b/downstream/modules/platform/proc-running-setup-script-for-updates.adoc index 255ef177c6..aa1dd86b8a 100644 --- a/downstream/modules/platform/proc-running-setup-script-for-updates.adoc +++ b/downstream/modules/platform/proc-running-setup-script-for-updates.adoc @@ -1,4 +1,6 @@ -// [id="proc-running-setup-script-for-updates_{context}"] +:_mod-docs-content-type: PROCEDURE + +[id="proc-running-setup-script-for-updates"] = Running the {PlatformName} installer setup script @@ -7,10 +9,15 @@ You can run the setup script once you have finished updating the `inventory` fil .Procedure -. Run the `setup.sh` script +* Run the `setup.sh` script: + ----- $ ./setup.sh ----- -The installation will begin. +The installation begins. + +[role="_additional-resources"] .Next steps +If you are upgrading from {PlatformNameShort} 2.4 to 2.5, proceed to +xref:account-linking_aap-post-upgrade[Linking your account] to link your existing service level accounts to a single unified platform account. diff --git a/downstream/modules/platform/proc-running-setup-script.adoc b/downstream/modules/platform/proc-running-setup-script.adoc index c43a9e4a53..5763a69e77 100644 --- a/downstream/modules/platform/proc-running-setup-script.adoc +++ b/downstream/modules/platform/proc-running-setup-script.adoc @@ -1,9 +1,11 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-running-setup-script_{context}"] = Running the {PlatformName} installer setup script [role="_abstract"] -After you update the inventory file with required parameters for installing your {PrivateHubName}, run the installer setup script. +After you update the inventory file with required parameters, run the installer setup script. 
.Procedure @@ -13,4 +15,20 @@ After you update the inventory file with required parameters for installing your $ sudo ./setup.sh ----- +[NOTE] +==== +If you are running the setup as a non-root user with `sudo` privileges, you can use the following command: +---- +$ ANSIBLE_BECOME_METHOD='sudo' ANSIBLE_BECOME=True ./setup.sh +---- +==== + Installation of {PlatformName} will begin. + +.Additional resources +See link:https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_privilege_escalation.html[Understanding privilege escalation] for additional `setup.sh` script examples. + +ifdef::mesh-VM[] +If you want to add additional nodes to your {AutomationMesh} after the initial setup, edit the inventory file to add the new node, then rerun the `setup.sh` script. +endif::mesh-VM[] diff --git a/downstream/modules/platform/proc-scm-git-subversion.adoc b/downstream/modules/platform/proc-scm-git-subversion.adoc index e860ca9abc..8f7cdc5aff 100644 --- a/downstream/modules/platform/proc-scm-git-subversion.adoc +++ b/downstream/modules/platform/proc-scm-git-subversion.adoc @@ -1,44 +1,50 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-scm-git-subversion"] = SCM Types - Configuring playbooks to use Git and Subversion +Configure your projects to synchronize Ansible playbooks from Source Code Management (SCM) systems such as Git and Subversion. +Integrating with SCM is a best practice for managing playbooks, as it provides version control, collaboration features, and a centralized repository for your automation code. +By following these steps, you can ensure your environment always uses the latest version of your playbooks directly from your chosen SCM. + .Procedure + . From the navigation panel, select {MenuAEProjects}. . Click the project name you want to use. . In the project *Details* tab, click btn:[Edit project]. -. Select the appropriate option (Git or Subversion) from the *Source Control Type* menu. +. Select the appropriate option (Git or Subversion) from the *Source control type* menu. + //image:projects-create-scm-project.png[Select scm] . Enter the appropriate details into the following fields: -* *Source Control URL* - See an example in the tooltip . -* Optional: *Source Control Branch/Tag/Commit*: Enter the SCM branch, tags, commit hashes, arbitrary refs, or revision number (if applicable) from the source control (Git or Subversion) to checkout. -Some commit hashes and references might not be available unless you also provide a custom refspec in the next field. +* *Source control URL* - See an example in the tooltip. +* Optional: *Source control branch/tag/commit*: Enter the SCM branch, tags, commit hashes, arbitrary refs, or revision number (if applicable) from the source control (Git or Subversion) to check out. +Some commit hashes and references might not be available unless you also give a custom refspec in the next field. If left blank, the default is `HEAD` which is the last checked out Branch, Tag, or Commit for this project. -* *Source Control Refspec* - This field is an option specific to git source control and only advanced users familiar and comfortable with git should specify which references to download from the remote repository. +* *Source control refspec* - This field is an option specific to Git source control, and only advanced users familiar and comfortable with Git should specify which references to download from the remote repository. For more information, see xref:controller-job-branch-overriding[Job branch overriding]. 
-* *Source Control Credential* - If authentication is required, select the appropriate source control credential.
+* *Source control credential* - If authentication is required, select the appropriate source control credential.
. Optional: *Options* - select the launch behavior, if applicable:
* *Clean* - Removes any local modifications before performing an update.
* *Delete* - Deletes the local repository in its entirety before performing an update.
Depending on the size of the repository this can significantly increase the amount of time required to complete an update.
* *Track submodules* - Tracks the latest commit.
There is more information in the tooltip image:question_circle.png[Tooltip,15,15].
-* *Update Revision on Launch* - Updates the revision of the project to the current revision in the remote source control, and caching the roles directory from link:https://docs.ansible.com/automation-controller/latest/html/userguide/projects.html#ug-galaxy[Galaxy] or
-xref:ref-projects-collections-support[Collections support].
+* *Update revision on launch* - Updates the revision of the project to the current revision in the remote source control, and caches the roles directory from xref:ref-projects-galaxy-support[Ansible Galaxy support] or xref:ref-projects-collections-support[Collections support].
{ControllerNameStart} ensures that the local revision matches and that the roles and collections are up-to-date with the last update.
In addition, to avoid job overflows if jobs are spawned faster than the project can synchronize, selecting this enables you to configure a Cache Timeout to cache previous project synchronizations for a given number of seconds.
-* *Allow Branch Override* - Enables a job template or an inventory source that uses this project to start with a specified SCM branch or revision other than that of the project.
+* *Allow branch override* - Enables a job template or an inventory source that uses this project to start with a specified SCM branch or revision other than that of the project.
For more information, see xref:controller-job-branch-overriding[Job branch overriding].
+
-image:projects-create-scm-project-branch-override-checked.png[Override options]
+//image:projects-create-scm-project-branch-override-checked.png[Override options]
. Click btn:[Save project].
-[TIP]
-====
-Using a GitHub link is an easy way to use a playbook.
-To help get you started, use the `helloworld.yml` file available link:https://github.com/ansible/tower-example.git[here].
+//[TIP]
+//====
+//Using a GitHub link is an easy way to use a playbook.
+//To help get you started, use the `helloworld.yml` file available link:https://github.com/ansible/tower-example.git[here].
-This link offers a very similar playbook to the one created manually in the instructions found in link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_controller/index[{ControllerGS}].
-Using it will not alter or harm your system in any way.
-====
+//This link offers a very similar playbook to the one created manually in the instructions found in link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_automation_controller/index[{ControllerGS}].
+//Using it will not alter or harm your system in any way.
+//==== diff --git a/downstream/modules/platform/proc-scm-insights.adoc b/downstream/modules/platform/proc-scm-insights.adoc index aac9bdc3cf..ec932eb61e 100644 --- a/downstream/modules/platform/proc-scm-insights.adoc +++ b/downstream/modules/platform/proc-scm-insights.adoc @@ -1,20 +1,27 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-scm-insights"] = SCM Type - Configuring playbooks to use Red Hat Insights +Configure your projects to retrieve Ansible playbooks directly from Red Hat Insights. +By integrating with Red Hat Insights, you can use it to manage and deploy remediation playbooks identified through its analysis of your {RHEL} environment. +This integration streamlines the process of addressing identified vulnerabilities and optimizing system configurations, ensuring your automation aligns with best practices and security recommendations. + .Procedure + . From the navigation panel, select {MenuAEProjects}. . Click the project name you want to use. . In the project *Details* tab, click btn:[Edit project]. . Select *Red Hat Insights* from the *Source Control Type* menu. -. In the *Credential* field, select the appropriate credential for use with Insights, as Red Hat Insights requires a credential for authentication. +. In the *Insights credential* field, select the appropriate credential for use with Insights, as Red Hat Insights requires a credential for authentication. . Optional: In the *Options* field, select the launch behavior, if applicable: * *Clean* - Removes any local modifications before performing an update. * *Delete* - Deletes the local repository in its entirety before performing an update. Depending on the size of the repository this can significantly increase the amount of time required to complete an update. -* *Update Revision on Launch* - Updates the revision of the project to the current revision in the remote source control, and caches the +* *Update revision on launch* - Updates the revision of the project to the current revision in the remote source control, and caches the roles directory from xref:ref-projects-galaxy-support[{Galaxy} support] or xref:ref-projects-collections-support[Collections support]. {ControllerNameStart} ensures that the local revision matches, and that the roles and collections are up-to-date. If jobs are spawned faster than the project can synchronize, selecting this enables you to configure a Cache Timeout to diff --git a/downstream/modules/platform/proc-scm-remote-archive.adoc b/downstream/modules/platform/proc-scm-remote-archive.adoc index ed5bb7b287..dc27a9292f 100644 --- a/downstream/modules/platform/proc-scm-remote-archive.adoc +++ b/downstream/modules/platform/proc-scm-remote-archive.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE + [id="proc-scm-remote-archive"] = SCM Type - Configuring playbooks to use a remote archive @@ -9,20 +11,20 @@ containing all the requirements for that project in a single archive. . From the navigation panel, select {MenuAEProjects}. . Click the project name you want to use. . In the project *Details* tab, click btn:[Edit project]. -. Select *Remote Archive* from the *Source Control Type* menu. +. Select *Remote Archive* from the *Source control type* menu. . 
Enter the appropriate details into the following fields:
-* *Source Control URL* - requires a URL to a remote archive, such as a _GitHub Release_ or a build artifact stored in _Artifactory_ and unpacks it into
+* *Source control URL* - Requires a URL to a remote archive, such as a _GitHub Release_ or a build artifact stored in _Artifactory_, which is unpacked into
the project path for use.
-* *Source Control Credential* - If authentication is required, select the appropriate source control credential.
+* *Source control credential* - If authentication is required, select the appropriate source control credential.
. Optional: In the *Options* field, select the launch behavior, if applicable:
* *Clean* - Removes any local modifications before performing an update.
* *Delete* - Deletes the local repository in its entirety before performing an update.
Depending on the size of the repository this can significantly increase the amount of time required to complete an update.
-* *Update Revision on Launch* - Not recommended. This option updates the revision of the project to the current revision in the remote source control, and caches the roles directory from xref:ref-projects-galaxy-support[{Galaxy} support] or xref:ref-projects-collections-support[Collections support].
-* *Allow Branch Override* - Not recommended. This option enables a job template that uses this project to launch with a specified SCM branch or revision other than that of the project's.
+* *Update revision on launch* - Not recommended. This option updates the revision of the project to the current revision in the remote source control, and caches the roles directory from xref:ref-projects-galaxy-support[{Galaxy} support] or xref:ref-projects-collections-support[Collections support].
+* *Allow branch override* - Not recommended. This option enables a job template that uses this project to launch with a specified SCM branch or revision other than that of the project.
+
//image:projects-create-scm-rm-archive.png[Remote archived project]
+
diff --git a/downstream/modules/platform/proc-securing-secrets-in-inventory.adoc b/downstream/modules/platform/proc-securing-secrets-in-inventory.adoc
index 0c48fd85c6..c5f24059b6 100644
--- a/downstream/modules/platform/proc-securing-secrets-in-inventory.adoc
+++ b/downstream/modules/platform/proc-securing-secrets-in-inventory.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-securing_secrets_in_inventory_{context}"]
= Securing secrets in the inventory file
diff --git a/downstream/modules/platform/proc-set-EDA-proxy.adoc b/downstream/modules/platform/proc-set-EDA-proxy.adoc
new file mode 100644
index 0000000000..3ff3255e95
--- /dev/null
+++ b/downstream/modules/platform/proc-set-EDA-proxy.adoc
@@ -0,0 +1,13 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-set-EDA-proxy"]
+
+= Configuring proxy settings on {EDAName}
+
+For {EDAName}, there are no global settings for configuring a proxy.
+You must specify the proxy for every project.
+
+.Procedure
+. From the navigation panel, select {MenuADProjects}.
+. Click btn:[Create Project].
++
+In the *Proxy* field, specify the proxy to use for the project.
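+
+For example, a forward proxy value might look like the following (the host and port are illustrative placeholders, not defaults):
+
+----
+http://proxy.example.com:3128
+----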
diff --git a/downstream/modules/platform/proc-set-domain-of-interest.adoc b/downstream/modules/platform/proc-set-domain-of-interest.adoc new file mode 100644 index 0000000000..ffe6fa7e0b --- /dev/null +++ b/downstream/modules/platform/proc-set-domain-of-interest.adoc @@ -0,0 +1,20 @@ +:_newdoc-version: 2.18.4 +:_template-generated: 2025-06-04 +:_mod-docs-content-type: PROCEDURE + +[id="set-domain-of-interest_{context}"] += Setting your domains of interest + +With domain filtering, you can customize the content displayed in the *Jobs* and *Templates* sub-sections of Automation Execution. Jobs and templates are linked to descriptive labels. When you select a label, you can filter out less-relevant resources, giving you easy access to the resources relevant to your area of interest. + +.Procedure + +. From the navigation panel, select {MenuAEJobs} or {MenuAETemplates}. +. Beneath the page heading, next to *Domains*, is a list of topic-related labels. Select a label to filter jobs and job templates so that only content related to the labels is shown. You can choose more than one label. +. To clear your selection, click the *X*. +. To customize your domain options, select the image:wrench.png[Wrench,15,15] icon. In the modal that appears, select *Add Domain* to add new domains to filter with. + +[NOTE] +==== +You can add labels to your individual job templates to make the templates appear as resources when you filter with the related domain label. Go to {MenuAETemplates}, select your job template, and click btn:[Edit template]. On the editing screen, enter the label you want to use in the *Labels* field and click btn:[Save job template]. +==== diff --git a/downstream/modules/platform/proc-set-registry-username-password.adoc b/downstream/modules/platform/proc-set-registry-username-password.adoc new file mode 100644 index 0000000000..e248ee27db --- /dev/null +++ b/downstream/modules/platform/proc-set-registry-username-password.adoc @@ -0,0 +1,29 @@ +:_mod-docs-content-type: PROCEDURE + +[id="proc-set-registry-username-password"] + += Setting registry_username and registry_password + +When using the `registry_username` and `registry_password` variables for an online non-bundled installation, you need to create a new registry service account. + +Registry service accounts are named tokens that can be used in environments where credentials will be shared, such as deployment systems. + +.Procedure +. Go to https://access.redhat.com/terms-based-registry/accounts. +. On the *Registry Service Accounts* page click btn:[New Service Account]. +. Enter a name for the account using only the allowed characters. +. Optionally enter a description for the account. +. Click btn:[Create]. +. Find the created account in the list by searching for your name in the search field. +. Click the name of the account that you created. +. Alternatively, if you know the name of your token, you can go directly to the page by entering the URL: ++ +---- +https://access.redhat.com/terms-based-registry/token/ +---- ++ +. A *token* page opens, displaying a generated username (different from the account name) and a token. ++ +.. If no token is displayed, click btn:[Regenerate Token]. You can also click this to generate a new username and token. +. Copy the username (for example "1234567|testuser") and use it to set the variable `registry_username`. +. Copy the token and use it to set the variable `registry_password`. 
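+
+As an illustration only, the resulting entries in your installation inventory file might look like the following (the username format matches the generated example above; the token value is a placeholder, not a real credential):
+
+----
+registry_username='1234567|testuser'
+registry_password='<your_registry_token>'
+----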
diff --git a/downstream/modules/platform/proc-set-up-virtual-machines.adoc b/downstream/modules/platform/proc-set-up-virtual-machines.adoc
index 7ceda4aaec..f761755e1a 100644
--- a/downstream/modules/platform/proc-set-up-virtual-machines.adoc
+++ b/downstream/modules/platform/proc-set-up-virtual-machines.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-set-up-virtual-machines"]
= Setting up Virtual Machines for use in an {AutomationMesh}
@@ -41,22 +43,40 @@ For more information about Simple Content Access, see link:{BaseURL}/subscriptio
. Enable {PlatformNameShort} subscriptions and the proper {PlatformName} channel:
+
+For RHEL 8
++
----
-# subscription-manager repos --enable ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms for RHEL 8
-
-# subscription-manager repos --enable ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms for RHEL 9
+# subscription-manager repos --enable ansible-automation-platform-2.5-for-rhel-8-x86_64-rpms
----
-
++
+For RHEL 9
++
+----
+# subscription-manager repos --enable ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms
+----
++
+For ARM
++
+----
+# subscription-manager repos --enable ansible-automation-platform-2.5-for-rhel-aarch64-rpms
+----
++
. Ensure the packages are up to date:
+
----
sudo dnf upgrade -y
----
-. Install the ansible-core packages:
+. Install the ansible-core packages on the machine where the downloaded bundle is to run:
+
----
sudo dnf install -y ansible-core
----
-
++
+[NOTE]
+====
+Ansible core is required on the machine that runs the {AutomationMesh} configuration bundle playbooks. This document assumes that happens on the execution node.
+However, you can omit this step if you run the playbook from a different machine.
+You cannot run the playbook directly from the control node; this is not currently supported. However, future support expects the control node to have direct connectivity to the execution node.
+====
diff --git a/downstream/modules/platform/proc-settings-gw-additional-options.adoc b/downstream/modules/platform/proc-settings-gw-additional-options.adoc
new file mode 100644
index 0000000000..3923df9b52
--- /dev/null
+++ b/downstream/modules/platform/proc-settings-gw-additional-options.adoc
@@ -0,0 +1,36 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-settings-gw-other-options"]
+
+= Configuring additional platform options
+
+//Content divided into multiple procedures to address issue AAP-30592
+
+From the *{GatewayStart} settings* page, you can configure additional platform options.
+
+.Procedure
+. From the navigation panel, select {MenuSetGateway}.
+. The *{GatewayStart} settings* page is displayed.
+. Click btn:[Edit {Gateway} settings].
+. You can configure the following *Other settings*:
++
+* *Jwt expiration buffer in seconds*: The number of seconds before a JWT token's expiration to revoke from the cache.
++
+When authentication happens, a JWT token is created for the user and that token is cached.
+When subsequent calls happen to services such as {ControllerName} or {EDAName}, the token is taken from the cache and sent to the service.
+Both the token and the cache of the token have an expiration time.
+If the token expires while in the cache, authentication attempts result in a 401 (unauthorized) error.
+This setting gives {PlatformName} a buffer by removing the JWT token from the cache before the token expires.
+When a token is revoked from the cache, a new token with a new expiration is generated and cached for the user.
+As a result, expired tokens from the cache are never used.
+This setting defaults to 2 seconds.
+If there is high latency between the {Gateway} and your services and you observe 401 responses, increase this setting to reduce the number of 401 responses.
+* *Status endpoint backend timeout seconds*: Timeout (in seconds) for the status endpoint to wait when trying to connect to a backend.
+* *Status endpoint backend verify*: Specifies whether SSL certificates of the services are verified when calling individual nodes for statuses.
+* *Request timeout*: Specifies, in seconds, the length of time before the proxy reports a timeout and generates a 504 error.
+* *Allow external users to create OAuth2 tokens*: For security reasons, users from external authentication providers, such as LDAP, SAML, SSO, Radius, and others, are not allowed to create OAuth2 tokens.
+To change this behavior, enable this setting.
+Existing tokens are not deleted when this setting is turned off.
++
+. Click btn:[Save {Gateway} settings] to save the changes or proceed to configure the other platform options available.
+
diff --git a/downstream/modules/platform/proc-settings-gw-custom-login.adoc b/downstream/modules/platform/proc-settings-gw-custom-login.adoc
new file mode 100644
index 0000000000..821df06284
--- /dev/null
+++ b/downstream/modules/platform/proc-settings-gw-custom-login.adoc
@@ -0,0 +1,20 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-settings-gw-custom-login"]
+
+= Configuring a custom platform login
+
+//Content divided into multiple procedures to address issue AAP-30592
+
+From the *{GatewayStart} settings* page, you can configure the custom login options.
+
+.Procedure
+. From the navigation panel, select {MenuSetGateway}.
+. The *{GatewayStart} settings* page is displayed.
+. To configure the options, click btn:[Edit {Gateway} settings].
+. You can configure the following *Custom Login* options:
++
+* *Custom login info*: Provide specific information (such as a legal notice or a disclaimer) to a text box in the login modal. For example, you can include a company banner with a statement such as, “This is only to be used for ``, etc.”
+* *Custom logo*: Provide an image file for setting up a custom logo (must be a data URL with a base64-encoded GIF, PNG, or JPEG image).
++
+. Click btn:[Save {Gateway} settings] to save the changes or proceed to configure the other platform options available.
diff --git a/downstream/modules/platform/proc-settings-gw-password-security.adoc b/downstream/modules/platform/proc-settings-gw-password-security.adoc
new file mode 100644
index 0000000000..dce3bfd824
--- /dev/null
+++ b/downstream/modules/platform/proc-settings-gw-password-security.adoc
@@ -0,0 +1,23 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-settings-gw-password-security"]
+
+= Configuring a platform password security policy
+
+//Content divided into multiple procedures to address issue AAP-30592
+
+From the *{GatewayStart} settings* page, you can configure a password security policy.
+
+.Procedure
+. From the navigation panel, select {MenuSetGateway}.
+. The *{GatewayStart} settings* page is displayed.
+. To configure the options, click btn:[Edit {Gateway} settings].
+. You can configure the following *Password Security* options:
++
+* *Password minimum uppercase letters*: How many uppercase characters need to be in a local password.
+* *Password minimum length*: The minimum length of a local password.
+* *Password minimum numerical digits*: How many numerical characters need to be in a local password.
+* *Password minimum special characters*: How many special characters need to be in a local password.
++
+. Click btn:[Save {Gateway} settings] to save the changes or proceed to configure the other platform options available.
+
diff --git a/downstream/modules/platform/proc-settings-gw-security-options.adoc b/downstream/modules/platform/proc-settings-gw-security-options.adoc
new file mode 100644
index 0000000000..847743508e
--- /dev/null
+++ b/downstream/modules/platform/proc-settings-gw-security-options.adoc
@@ -0,0 +1,52 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-settings-gw-security-options"]
+
+= Configuring platform security
+
+//Content divided into multiple procedures to address issue AAP-30592
+
+From the *{GatewayStart} settings* page, you can configure platform security settings.
+
+.Procedure
+. From the navigation panel, select {MenuSetGateway}.
+. The *{GatewayStart} settings* page is displayed.
+. To configure the options, click btn:[Edit].
+. You can configure the following *Security* settings:
++
+* *Allow admin to set insecure*: Whether a superuser account can save an insecure password when editing any local user account.
+* *Gateway basic auth enabled*: Enable basic authentication to the {Gateway} API.
++
+Turning this off prevents all basic authentication (local users), so customers need to make sure they have their alternative authentication mechanisms correctly configured before doing so.
++
+Turning it off with only local authentication configured also prevents all access to the UI.
++
+* *Social auth username is full email*: Enabling this setting directs social authentication to use the full email as the username instead of the full name.
++
+* *Gateway token name*: The header name to push from the proxy to the backend service.
++
+[WARNING]
+====
+If this name is changed, backends must be updated to compensate.
+====
++
+* *Gateway access token expiration*: How long access tokens remain valid.
+* *Jwt private key*: The private key used to encrypt the JWT tokens sent to backend services.
++
+This should be a private RSA key and one should be generated automatically on installation.
++
+[NOTE]
+====
+Use caution when rotating the key as it will cause current sessions to fail until their JWT keys are reset.
+====
++
+* (Read only) *Jwt public key*: The public key used to verify the JWT tokens sent to backend services.
++
+This is the public counterpart of the private RSA key and is generated automatically on installation.
++
+[NOTE]
+====
+See other services' documentation on how they consume this key.
+====
++
+. Click btn:[Save changes] to save the changes or proceed to configure the other platform options available.
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-settings-gw-session-options.adoc b/downstream/modules/platform/proc-settings-gw-session-options.adoc
new file mode 100644
index 0000000000..02a9593a92
--- /dev/null
+++ b/downstream/modules/platform/proc-settings-gw-session-options.adoc
@@ -0,0 +1,17 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-settings-gw-session-options"]
+
+= Configuring platform sessions
+
+//Content divided into multiple procedures to address issue AAP-30592
+
+From the *{GatewayStart} settings* page, you can configure platform session settings.
+
+.Procedure
+. From the navigation panel, select {MenuSetGateway}.
+. The *{GatewayStart} settings* page is displayed.
+. To configure the options, click btn:[Edit {Gateway} settings].
+.
Enter the time in seconds before a session expires in the *Session cookie age* field.
+. Click btn:[Save {Gateway} settings] to save the changes or proceed to configure the other platform options available.
+
diff --git a/downstream/modules/platform/proc-settings-platform-gateway.adoc b/downstream/modules/platform/proc-settings-platform-gateway.adoc
new file mode 100644
index 0000000000..8aff06fcb8
--- /dev/null
+++ b/downstream/modules/platform/proc-settings-platform-gateway.adoc
@@ -0,0 +1,28 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-settings-platform-gateway"]
+
+= {GatewayStart}
+
+//To be added to Donna's AAP/UI document for 2.5
+//Content divided into multiple procedures to address issue AAP-30592
+
+The {Gateway} is the service that handles authentication and authorization for {PlatformNameShort}.
+It provides a single ingress into the platform and serves the platform's user interface.
+
+From the {MenuSetGateway} menu, you can configure *{GatewayStart}*,
+*Security*, *Session*, *Platform Security*, *Custom Login*, and *Other* settings.
+
+.Procedure
+. From the navigation panel, select {MenuSetGateway}.
+. The *{GatewayStart} settings* page is displayed.
+//[Removing screen captures but they can be added back if requested.]
+//image::platform_gateway_settings_page.png[Initial {Gateway} settings page]
+. To configure the options, click btn:[Edit {Gateway} settings].
+//image::platform_gateway_full.png[{GatewayStart} configurable options]
+. You can configure the following {Gateway} options:
++
+* *{GatewayStart} proxy url*: URL to the {Gateway} proxy layer.
+* *{GatewayStart} proxy url ignore cert*: Ignore the certificate when connecting to the {Gateway} proxy layer.
++
+. Click btn:[Save {Gateway} settings] to save the changes or proceed to configure the other platform options available.
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-settings-troubleshooting.adoc b/downstream/modules/platform/proc-settings-troubleshooting.adoc
new file mode 100644
index 0000000000..cca91ac4bd
--- /dev/null
+++ b/downstream/modules/platform/proc-settings-troubleshooting.adoc
@@ -0,0 +1,22 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-settings-troubleshooting"]
+
+//To be added to Donna's AAP/UI document for 2.5
+= Troubleshooting options
+
+You can use the *Troubleshooting* page to enable or disable certain flags that aid in debugging issues within {PlatformNameShort}.
+
+.Procedure
+. From the navigation panel, select {MenuSetTroubleshooting}.
+. The *Troubleshooting* page is displayed.
+. Click btn:[Edit].
+//[ddacosta] Removing screen captures but they can be added back if requested.
+//image::troubleshooting_options.png[Troubleshooting options]
+. You can select the following options:
++
+* *Enable or Disable tmp dir cleanup*: Select this to enable or disable the cleanup of tmp directories generated during execution of a job after job execution completes.
+* *Debug Web Requests*: Select this to enable or disable web request profiling for debugging slow web requests.
+* *Release Receptor Work*: Select this to turn on or off the deletion of job pods after they complete or fail. This can be helpful in debugging why a job failed.
+* *Keep receptor work on error*: Select this to prevent receptor work from being released when an error is detected.
+. Click btn:[Save] to save your changes.
diff --git a/downstream/modules/platform/proc-settings-user-preferences.adoc b/downstream/modules/platform/proc-settings-user-preferences.adoc
new file mode 100644
index 0000000000..1f0e1e8f22
--- /dev/null
+++ b/downstream/modules/platform/proc-settings-user-preferences.adoc
@@ -0,0 +1,52 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-settings-user-preferences"]
+
+= User preferences
+
+//To be added to Donna's AAP/UI document for 2.5
+
+You can use the *User preferences* page to customize your platform experience. Use this menu to control theming, layout options, and formatting.
+
+[NOTE]
+====
+User preferences are stored locally in your browser. This means that they are unique to you and your machine.
+====
+
+.Procedure
+
+. From the navigation panel, select {MenuSetUserPref}.
+. The *User preferences* page is displayed.
+. Click btn:[Edit].
+. You can configure the following options:
++
+* *Refresh interval*: Select the refresh interval for the page.
++
+This refreshes the data on the page at the selected interval.
++
+The refresh happens in the background and does not reload the page.
++
+* *Color theme*: Select from:
+** Dark theme
+** Light theme
+** System default
++
+* *Table layout*: Select from:
+** Comfortable
+** Compact
++
+* *Form columns*: Select from:
+** Multiple columns of inputs
+** Single column of inputs
+//[ddacosta] 9/20/24 Form labels is no longer in the UI
+//* *Form Labels*: Select from:
+//** Labels above inputs
+//** Labels beside inputs
++
+* *Date format*: Select from:
+** Shows dates *Relative* to the current time
+** Shows dates as *Date and time*
++
+* *Preferred data format*: Sets the default format for when editing and displaying data.
++
+. Click btn:[Save user preferences].
diff --git a/downstream/modules/platform/proc-setup-ext-db-with-admin-creds.adoc b/downstream/modules/platform/proc-setup-ext-db-with-admin-creds.adoc
new file mode 100644
index 0000000000..8a3c2fe975
--- /dev/null
+++ b/downstream/modules/platform/proc-setup-ext-db-with-admin-creds.adoc
@@ -0,0 +1,16 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="setup-ext-db-with-admin-creds"]
+= Setting up an external database with PostgreSQL admin credentials
+
+If you have PostgreSQL admin credentials, you can supply them in the inventory file and the installation program creates the PostgreSQL users and databases for each component for you. The PostgreSQL admin account must have `SUPERUSER` privileges.
+
+.Procedure
+
+* To configure the PostgreSQL admin credentials, add the following variables to the inventory file under the `[all:vars]` group:
++
+[source,yaml,subs="+attributes"]
+----
+postgresql_admin_username=<username>
+postgresql_admin_password=<password>
+----
diff --git a/downstream/modules/platform/proc-setup-ext-db-without-admin-creds.adoc b/downstream/modules/platform/proc-setup-ext-db-without-admin-creds.adoc
new file mode 100644
index 0000000000..80225bd41e
--- /dev/null
+++ b/downstream/modules/platform/proc-setup-ext-db-without-admin-creds.adoc
@@ -0,0 +1,65 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="setup-ext-db-without-admin-creds"]
+= Setting up an external database without PostgreSQL admin credentials
+
+If you do not have PostgreSQL admin credentials, you must create the PostgreSQL users and databases for each component ({Gateway}, {ControllerName}, {HubName}, and {EDAName}) before running the installation program.
+
+.Procedure
+
+. Connect to a PostgreSQL compliant database server with a user that has `SUPERUSER` privileges.
+
+[source,bash,subs="+attributes"]
+----
+# psql -h <hostname> -U <username> -p <port>
+----
++
+For example:
++
+[source,bash,subs="+attributes"]
+----
+# psql -h db.example.com -U superuser -p 5432
+----
+
+. Create the user with a password and ensure the `CREATEDB` role is assigned to the user. For more information, see link:https://www.postgresql.org/docs/13/user-manag.html[Database Roles].
++
+[source,sql,subs="+attributes"]
+----
+CREATE USER <username> WITH PASSWORD <password> CREATEDB;
+----
+
+. Create the database and add the user you created as the owner.
++
+[source,sql,subs="+attributes"]
+----
+CREATE DATABASE <database_name> OWNER <username>;
+----
+
+. When you have created the PostgreSQL users and databases for each component, you can supply them in the inventory file under the `[all:vars]` group.
++
+[source,yaml,subs="+attributes"]
+----
+# {GatewayStart}
+gateway_pg_host=aap.example.org
+gateway_pg_database=<database_name>
+gateway_pg_username=<username>
+gateway_pg_password=<password>
+
+# {ControllerNameStart}
+controller_pg_host=aap.example.org
+controller_pg_database=<database_name>
+controller_pg_username=<username>
+controller_pg_password=<password>
+
+# {HubNameStart}
+hub_pg_host=aap.example.org
+hub_pg_database=<database_name>
+hub_pg_username=<username>
+hub_pg_password=<password>
+
+# {EDAName}
+eda_pg_host=aap.example.org
+eda_pg_database=<database_name>
+eda_pg_username=<username>
+eda_pg_password=<password>
+----
diff --git a/downstream/modules/platform/proc-setup-postgresql-ext-database.adoc b/downstream/modules/platform/proc-setup-postgresql-ext-database.adoc
index fcf250311e..0fc88bfe59 100644
--- a/downstream/modules/platform/proc-setup-postgresql-ext-database.adoc
+++ b/downstream/modules/platform/proc-setup-postgresql-ext-database.adoc
@@ -1,39 +1,44 @@
-[id="proc-setup-postgresql-ext-database"]
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-setup-postgresql-ext-database_{context}"]
= Setting up an external (customer supported) database
[IMPORTANT]
====
-Red Hat does not support the use of external (customer supported) databases, however they are used by customers.
-The following guidance on inital configuration, from a product installation perspective only, is provided to avoid related support requests.
+* When using an external database with {PlatformNameShort}, you must create and maintain that database. Ensure that you clear your external database when uninstalling {PlatformNameShort}.
+
+* {PlatformName} {PlatformVers} uses {PostgresVers} and requires the external (customer supported) databases to have ICU support.
+
+* During configuration of an external database, you must check the external database coverage. For more information, see link:https://access.redhat.com/articles/4010491[{PlatformName} Database Scope of Coverage].
====
-To create a database, user and password on an external PostgreSQL compliant database for use with {ControllerName}, use the following procedure.
+{PlatformName} {PlatformVers} uses {PostgresVers} and requires the external (customer supported) databases to have ICU support. Use the following procedure to configure an external PostgreSQL compliant database for use with an {PlatformNameShort} component, for example {ControllerName}, {EDAName}, {HubName}, and {Gateway}.
.Procedure
-. Install and then connect to a PostgreSQL compliant database server with superuser privileges.
+. Connect to a PostgreSQL compliant database server with superuser privileges.
+
----
-# psql -h -U superuser -p 5432 -d postgres :
+# psql -h <hostname> -U superuser -p 5432 -d postgres
----
+
-Where:
+. Where the default value for `<hostname>` is *hostname*:
+
----
-h hostname
--host=hostname
----
+
-Specifies the host name of the machine on which the server is running.
-If the value begins with a slash, it is used as the directory for the Unix-domain socket.
+. Specify the hostname of the machine on which the server is running.
+If the value begins with a slash, it is used as the directory for the UNIX-domain socket.
+
----
-d dbname
--dbname=dbname
----
+
-Specifies the name of the database to connect to.
-This is equivalent to specifying `dbname` as the first non-option argument on the command line.
+. Specify the name of the database to connect to.
+This is the same as specifying `dbname` as the first non-option argument on the command line.
The `dbname` can be a connection string.
If so, connection string parameters override any conflicting command line options.
+
@@ -42,31 +47,46 @@ If so, connection string parameters override any conflicting command line option
--username=username
----
+
-Connect to the database as the user `username` instead of the default. (You must have permission to do so.)
+. Connect to the database as the user `username` instead of the default (you must have permission to do so).
-. Create the user, database, and password with the `createDB` or administrator role assigned to the user.
+. Create the user, database, and password with the `createDB` or `administrator` role assigned to the user.
For further information, see link:https://www.postgresql.org/docs/13/user-manag.html[Database Roles].
-. Add the database credentials and host details to the {ControllerName} inventory file as an external database.
-+
-The default values are used in the following example.
+
+. Run the installation program. If you are using a PostgreSQL database, the database is owned by the connecting user and must have a `createDB` or administrator role assigned to it.
+
+. Check that you can connect to the created database with the credentials provided in the inventory file.
+
+. Check the permission of the user. The user should have the `createDB` or administrator role.
+
+. After you create the PostgreSQL users and databases for each component, add the database credentials and host details in the inventory file under the `[all:vars]` group.
+
+[source,yaml,subs="+attributes"]
----
-[database]
-pg_host='db.example.com'
-pg_port=5432
-pg_database='awx'
-pg_username='awx'
-pg_password='redhat'
----
+# {ControllerNameStart}
+pg_host=data.example.com
+pg_database=<database_name>
+pg_port=<port>
+pg_username=<username>
+pg_password=<password>
-. Run the installer.
-+
-If you are using a PostgreSQL database with {ControllerName}, the database is owned by the connecting user and must have a `createDB` or administrator role assigned to it.
-. Check that you are able to connect to the created database with the user, password and database name.
-. Check the permission of the user, the user should have the `createDB` or administrator role.
+# {GatewayStart}
+automationgateway_pg_host=aap.example.org
+automationgateway_pg_database=<database_name>
+automationgateway_pg_port=<port>
+automationgateway_pg_username=<username>
+automationgateway_pg_password=<password>
-[NOTE]
-====
-During this procedure, you must check the External Database coverage.
For further information, see https://access.redhat.com/articles/4010491
-====
+# {HubNameStart}
+automationhub_pg_host=data.example.com
+automationhub_pg_database=<database_name>
+automationhub_pg_port=<port>
+automationhub_pg_username=<username>
+automationhub_pg_password=<password>
+
+# {EDAName}
+automationedacontroller_pg_host=data.example.com
+automationedacontroller_pg_database=<database_name>
+automationedacontroller_pg_port=<port>
+automationedacontroller_pg_username=<username>
+automationedacontroller_pg_password=<password>
+----
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-specify-nodes-job-execution.adoc b/downstream/modules/platform/proc-specify-nodes-job-execution.adoc
index 9530ee374c..4c247bb4c9 100644
--- a/downstream/modules/platform/proc-specify-nodes-job-execution.adoc
+++ b/downstream/modules/platform/proc-specify-nodes-job-execution.adoc
@@ -1,4 +1,6 @@
-[id="proc-specify-nodes-job-execution"]
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-specify-nodes-job-execution_{context}"]
= Specify nodes for job execution
diff --git a/downstream/modules/platform/proc-synchronizing-rpm-repositories-by-using-reposync.adoc b/downstream/modules/platform/proc-synchronizing-rpm-repositories-by-using-reposync.adoc
index 561d2bb41d..7d75255f12 100644
--- a/downstream/modules/platform/proc-synchronizing-rpm-repositories-by-using-reposync.adoc
+++ b/downstream/modules/platform/proc-synchronizing-rpm-repositories-by-using-reposync.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
// Module included in the following assemblies:
// assembly-disconnected-installation.adoc
@@ -7,14 +9,16 @@
To perform a reposync you need a RHEL host that has access to the internet. After the repositories are synced, you can move the repositories to the disconnected network hosted from a web server.
+When downloading RPM packages, ensure that you use the repositories for the applicable RHEL distribution.
+
.Procedure
. Attach the BaseOS and AppStream required repositories:
+
----
# subscription-manager repos \
-    --enable rhel-8-for-x86_64-baseos-rpms \
-    --enable rhel-8-for-x86_64-appstream-rpms
+    --enable rhel-9-for-x86_64-baseos-rpms \
+    --enable rhel-9-for-x86_64-appstream-rpms
----
. Perform the reposync:
@@ -25,19 +29,13 @@ To perform a reposync you need a RHEL host that has access to the internet. Afte
-p /path/to/download
----
-.. Use reposync with `--download-metadata` and without `--newest-only`. See link://https://access.redhat.com/solutions/5186621[RHEL 8] Reposync.
-
-* If you are not using `--newest-only,` the repos downloaded will be ~90GB.
+.. Use reposync with `--download-metadata` and without `--newest-only`. See link:https://access.redhat.com/solutions/5186621[RHEL 8 Reposync].
-* If you are using `--newest-only,` the repos downloaded will be ~14GB.
+* If you are not using `--newest-only`, the download is much larger (~90 GB) and can take an extended amount of time to sync.
-. If you plan to use {RHSSO}, sync these repositories:
-
-.. jb-eap-7.3-for-rhel-8-x86_64-rpms
-.. rh-sso-7.4-for-rhel-8-x86_64-rpms
+* If you are using `--newest-only`, the download is smaller (~14 GB) and completes faster.
+
After the reposync is completed, your repositories are ready to use with a web server.
-
. Move the repositories to your disconnected network.
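+
+As an illustration only (the web server URL and repository ID are placeholders), hosts on the disconnected network can then consume the moved repositories through a `.repo` definition that points at your web server:
+
+----
+[rhel-9-baseos-mirror]
+name=RHEL 9 BaseOS (disconnected mirror)
+baseurl=http://webserver.example.com/repos/rhel-9-for-x86_64-baseos-rpms
+enabled=1
+gpgcheck=1
+----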
diff --git a/downstream/modules/platform/proc-troubleshoot-same-name.adoc b/downstream/modules/platform/proc-troubleshoot-same-name.adoc
index f81c0e7cc3..b5756e911f 100644
--- a/downstream/modules/platform/proc-troubleshoot-same-name.adoc
+++ b/downstream/modules/platform/proc-troubleshoot-same-name.adoc
@@ -1,4 +1,6 @@
-[id="troubleshoot-same-name"]
+:_mod-docs-content-type: PROCEDURE
+
+[id="troubleshoot-same-name_{context}"]
= {ControllerNameStart} custom resource has the same name as an existing deployment
diff --git a/downstream/modules/platform/proc-uninstalling-containerized-aap.adoc b/downstream/modules/platform/proc-uninstalling-containerized-aap.adoc
index 973679b5c2..d900b7f72c 100644
--- a/downstream/modules/platform/proc-uninstalling-containerized-aap.adoc
+++ b/downstream/modules/platform/proc-uninstalling-containerized-aap.adoc
@@ -1,34 +1,53 @@
:_mod-docs-content-type: PROCEDURE
-[id="uninstalling-containerized-aap_{context}"]
+[id="uninstalling-containerized-aap"]
= Uninstalling containerized {PlatformNameShort}
-[role="_abstract"]
+Uninstall your {ContainerBase} of {PlatformNameShort}.
+.Prerequisites
-To uninstall a containerized deployment, execute the *uninstall.yml* playbook.
+* You have logged in to the {RHEL} host as your dedicated non-root user.
+
+.Procedure
+
+. If you intend to reinstall {PlatformNameShort} and want to use the preserved databases, you must collect the existing secret keys by running the following command:
++
+----
+$ podman secret inspect --showsecret <secret_key_variable> | jq -r .[].SecretData
+----
++
+For example:
++
+----
+$ podman secret inspect --showsecret controller_secret_key | jq -r .[].SecretData
+----
+
+. Run the `uninstall` playbook:
++
----
$ ansible-playbook -i inventory ansible.containerized_installer.uninstall
----
-This will stop all systemd units and containers and then delete all resources used by the containerized installer such as:
+** This stops all systemd units and containers and then deletes all resources used by the containerized installer, such as:
-* config and data directories/files
-* systemd unit files
-* podman containers and images
-* RPM packages
+*** configuration and data directories and files
+*** systemd unit files
+*** Podman containers and images
+*** RPM packages
-To keep container images, you can set the *container_keep_images* variable to true.
+** To keep container images, set the `container_keep_images` parameter to `true`.
++
----
$ ansible-playbook -i inventory ansible.containerized_installer.uninstall -e container_keep_images=true
----
-To keep postgresql databases, you can set the *postgresql_keep_databases* variable to true.
+** To keep PostgreSQL databases, set the `postgresql_keep_databases` parameter to `true`.
++
----
-$ ansible-playbook -i ansible.containerized_installer.uninstall -e postgresql_keep_databases=true
+$ ansible-playbook -i inventory ansible.containerized_installer.uninstall -e postgresql_keep_databases=true
----
-[NOTE]
-====
-You will have to use the same django secret key values rather than the auto-generated ones.
-====
\ No newline at end of file
+[role="_additional-resources"]
+.Additional resources
+* link:{URLContainerizedInstall}/appendix-inventory-files-vars[Inventory file variables]
diff --git a/downstream/modules/platform/proc-update-aap-container.adoc b/downstream/modules/platform/proc-update-aap-container.adoc
new file mode 100644
index 0000000000..240528c6ff
--- /dev/null
+++ b/downstream/modules/platform/proc-update-aap-container.adoc
@@ -0,0 +1,38 @@
+:_mod-docs-content-type: PROCEDURE
+[id="updating-containerized-ansible-automation-platform"]
+
+= Updating containerized {PlatformNameShort}
+
+Perform a patch update for a {ContainerBase} of {PlatformNameShort} from 2.5 to 2.5.x.
+
+include::snippets/container-upgrades.adoc[]
+
+.Prerequisites
+
+* You have reviewed the release notes for the associated patch release.
+* You have created a backup of your {PlatformNameShort} deployment.
+
+.Procedure
+
+. Log in to the {RHEL} host as your dedicated non-root user.
+
+. Follow the steps in link:{URLContainerizedInstall}/aap-containerized-installation#downloading-ansible-automation-platform[Downloading {PlatformNameShort}] to download the latest version of containerized {PlatformNameShort}.
+
+. Copy the downloaded installation program to your {RHEL} host.
+
+. Edit the `inventory` file to match your required configuration. You can keep the same parameters from your existing {PlatformNameShort} deployment or you can change the parameters to match any modifications to your environment.
+
+. Run the `install` playbook:
++
+----
+$ ansible-playbook -i inventory ansible.containerized_installer.install
+----
++
+* If your privilege escalation requires a password to be entered, append `-K` to the command. You are then prompted for the `BECOME` password.
+* You can use increasing verbosity, up to 4 v's (`-vvvv`), to see the details of the installation process. However, verbose output can significantly increase installation time, so use it only as needed or when requested by Red Hat support.
+. The update begins.
+
+[role="_additional-resources"]
+.Additional resources
+* link:{URLReleaseNotes}[{PlatformNameShort} {TitleReleaseNotes}]
+* link:{URLContainerizedInstall}/aap-containerized-installation#backing-up-containerized-ansible-automation-platform[Backing up container-based {PlatformNameShort}]
diff --git a/downstream/modules/platform/proc-update-aap-on-ocp.adoc b/downstream/modules/platform/proc-update-aap-on-ocp.adoc
new file mode 100644
index 0000000000..324185f388
--- /dev/null
+++ b/downstream/modules/platform/proc-update-aap-on-ocp.adoc
@@ -0,0 +1,17 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="update-aap-on-ocp"]
+= Patch updating {PlatformNameShort} on {OCPShort}
+
+When you perform a patch update for an installation of {PlatformNameShort} on {OCPShort}, most updates happen within a channel:
+
+. A new update becomes available in the marketplace (through the redhat-operator CatalogSource).
+
+. A new InstallPlan is automatically created for your {PlatformNameShort} subscription. If the subscription is set to Manual, the InstallPlan must be manually approved in the OpenShift UI. If the subscription is set to Automatic, it upgrades as soon as the new version is available.
++
+[NOTE]
+====
+It is recommended that you set a manual install strategy on your {OperatorPlatformNameShort} subscription (set when installing or upgrading the Operator); you are then prompted to approve an upgrade when it becomes available in your selected update channel.
Stable channels for each X.Y release (for example, stable-2.5) are available.
+====
++
+. New Subscription, CSV, and Operator containers are created alongside the old Subscription, CSV, and containers. The old resources are then cleaned up if the new installation was successful.
diff --git a/downstream/modules/platform/proc-update-aap-operator-yaml-chatbot.adoc b/downstream/modules/platform/proc-update-aap-operator-yaml-chatbot.adoc
new file mode 100644
index 0000000000..a5ee18f3c9
--- /dev/null
+++ b/downstream/modules/platform/proc-update-aap-operator-yaml-chatbot.adoc
@@ -0,0 +1,48 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="proc-update-aap-operator-chatbot"]
+
+= Updating the YAML file of the {PlatformNameShort} operator
+
+After you create the chatbot authorization secret, you must update the YAML file of the {PlatformNameShort} operator to use the secret.
+
+.Procedure
+. Log in to {OCP} as an administrator.
+. Navigate to menu:Operators[Installed Operators].
+. From the list of installed operators, select the *{PlatformNameShort}* operator.
+. Locate and select the *{PlatformNameShort}* custom resource, and then click the required app.
+. Select the *YAML* tab.
+. Scroll to find the `spec:` section, and add the following details under it:
++
+----
+spec:
+  lightspeed:
+    disabled: false
+    chatbot_config_secret_name: <secret_name>
+----
+. Click *Save*. The {AAPchatbot} service takes a few minutes to set up.
+
+.Verification
+. Verify that the chat interface service is running successfully:
+.. Navigate to menu:Workloads[Pods].
+.. Filter with the term *api* and ensure that the following APIs are displayed in *Running* status:
++
+* `myaap-lightspeed-api-`
+* `myaap-lightspeed-chatbot-api-`
+
+. Verify that the chat interface is displayed on the {PlatformNameShort}:
+.. Access the {PlatformNameShort}:
+... Navigate to menu:Operators[Installed Operators].
+... From the list of installed operators, click *Ansible Automation Platform*.
+... Locate and select the *Ansible Automation Platform* custom resource, and then click the app that you created.
+... From the *Details* tab, record the information available in the following fields:
+* *URL*: This is the URL of your {PlatformNameShort} instance.
+* *Gateway Admin User*: This is the username to log in to your {PlatformNameShort} instance.
+* *Gateway Admin password*: This is the password to log in to your {PlatformNameShort} instance.
+... Log in to the {PlatformNameShort} using the URL, username, and password that you recorded earlier.
+.. Access the {AAPchatbot}:
+... Click the {AAPchatbot} icon image:chatbot-icon.png[{AAPchatbot} icon] that is displayed at the top right corner of the taskbar.
+... Verify that the chat interface is displayed, as shown in the following image:
++
+[.thumb]
+image:aap-ansible-lightspeed-intelligent-assistant.png[{AAPchatbot}].
\ No newline at end of file
diff --git a/downstream/modules/platform/proc-update-ee-image-locations.adoc b/downstream/modules/platform/proc-update-ee-image-locations.adoc
index 3422fcc504..a4e5d678d9 100644
--- a/downstream/modules/platform/proc-update-ee-image-locations.adoc
+++ b/downstream/modules/platform/proc-update-ee-image-locations.adoc
@@ -1,5 +1,5 @@
// Module included in the following assemblies:
-// assembly-platform-whats-next.adoc
+// assembly-using-builder.adoc
:_mod-docs-content-type: PROCEDURE
@@ -18,7 +18,8 @@ If you installed {PrivateHubName} separately from {PlatformNameShort}, you can u
touch ./group_vars/automationcontroller
----
+
-.
Paste the following content into `./group_vars/automationcontroller`. Adjust the settings to fit your environment:
+. Paste the following content into `./group_vars/automationcontroller`.
+Adjust the settings to fit your environment:
+
----
# Automation Hub Registry
@@ -37,6 +38,11 @@ global_job_execution_environments:
  image: "automationhub.example.org/ee-minimal-rhel8:latest"
----
+
+[NOTE]
+====
+For information on obtaining `registry_username` and `registry_password`, see link:{URLInstallationGuide}/index#proc-set-registry-username-password[Setting registry_username and registry_password].
+====
+
. Run the `./setup.sh` script
+
----
diff --git a/downstream/modules/platform/proc-upgrade-controller-hub-eda-unified-ui.adoc b/downstream/modules/platform/proc-upgrade-controller-hub-eda-unified-ui.adoc
new file mode 100644
index 0000000000..4de46be4bd
--- /dev/null
+++ b/downstream/modules/platform/proc-upgrade-controller-hub-eda-unified-ui.adoc
@@ -0,0 +1,122 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2024-10-09
+:_mod-docs-content-type: PROCEDURE
+
+[id="upgrade-controller-hub-eda-unified-ui_{context}"]
+= Automation controller and automation hub 2.4 and Event-Driven Ansible 2.5 with unified UI upgrades
+
+{PlatformNameShort} 2.5 supports upgrades from {PlatformNameShort} 2.4 environments for all components, with the exception of {EDAName}. You can also configure a mixed environment with {EDAName} from 2.5 connected to a legacy 2.4 cluster. Combining install methods (OCP, RPM, Containerized) within such a topology is not supported by {PlatformNameShort}.
+
+[NOTE]
+If you are running the 2.4 version of {EDAName} in production, before you upgrade, contact Red Hat support or your account representative for more information on how to move to {PlatformNameShort} 2.5.
+
+Supported topologies described in this document assume that:
+
+* 2.4 services only include {ControllerName} and {HubName}.
+* 2.5 services always include {EDAName} and the unified UI ({Gateway}).
+* Combining install methods for these topologies is not supported.
+
+== Upgrade considerations
+
+* You must maintain two separate inventory files: one for the 2.4 services and one for the 2.5 services.
+* You must maintain two separate "installations" within this scenario: one for the 2.4 services and one for the 2.5 services.
+* You must "upgrade" the two separate "installations" separately.
+* To upgrade to a consistent component version topology, consider the following:
+** You must manually combine the inventory file configuration from the 2.4 inventory into the 2.5 inventory and run the upgrade on only the 2.5 inventory file.
+** You must be using an external database for both the 2.4 inventory and the 2.5 inventory.
+** Customers using "managed database" instances for either the 2.4 or 2.5 inventory must migrate to an external database first, before upgrading.
+
+
+.Prerequisites
+
+* An inventory from 2.4 for {ControllerName} and {HubName} and a 2.5 inventory for the unified UI ({Gateway}) and {EDAName}. You must run upgrades on the 2.4 services (using the inventory file to specify only {ControllerName} and {HubName} VMs) to get them to the initial version of {PlatformNameShort} 2.5 first. When all the services are at the same version, run an upgrade (using a complete inventory file) on all the services to go to the latest version of {PlatformNameShort} 2.5.
+
+[IMPORTANT]
+====
+Do not upgrade {EDAName} and the unified UI ({Gateway}) to the latest version of {PlatformNameShort} 2.5 without first upgrading the individual services ({ControllerName} and {HubName}) to the initial version of {PlatformNameShort} 2.5.
+====
+
+* Ensure you have upgraded to the latest version of {PlatformNameShort} 2.4 before upgrading your {PlatformName}.
+
+.Procedure
+
+=== Migration path for 2.4 instances with managed databases
+
+*Standalone node managed database*
+
+Convert the database node to an external one, removing it from the inventory. The PostgreSQL node will continue working and will not lose the {PlatformNameShort}-provided setup, but you are responsible for managing its configuration afterward.
+
+*Collocated managed database*
+
+. Back up the deployment.
+. Restore it with a standalone managed database node instead of a collocated one.
+. Convert the standalone node to an unmanaged standalone database.
+
+=== Migration path for 2.4 services with 2.5 services
+
+If you installed {PlatformNameShort} 2.5 to use {EDAName} in a supported scenario, you can upgrade your {PlatformNameShort} 2.4 {ControllerName} and {HubName} to {PlatformNameShort} 2.5 by following these steps:
+
+* Merge 2.4 inventory data into the 2.5 inventory. The following example shows the 2.4 inventory file for {ControllerName} and {HubName}, the 2.5 inventory file for {EDAName} and the unified UI ({Gateway}), and the resulting merged inventory.
+
+*Inventory files from 2.4*
+
+[source,bash]
+----
+[automationcontroller]
+controller-1
+controller-2
+
+[automationhub]
+hub-1
+hub-2
+
+[all:vars]
+# Here we have the admin passwd, db credentials, etc.
+----
+
+*Inventory files from 2.5*
+[source,bash]
+----
+[edacontroller]
+eda-1
+eda-2
+
+[gateway]
+gw-1
+gw-2
+
+[all:vars]
+# Here we have admin passwd, db credentials etc.
+----
+
+*Merged Inventory*
+[source,bash]
+----
+[automationcontroller]
+controller-1
+controller-2
+
+[automationhub]
+hub-1
+hub-2
+
+[edacontroller]
+eda-1
+eda-2
+
+[gateway]
+gw-1
+gw-2
+
+[all:vars]
+# Here we have admin passwd, db credentials etc from both inventories above
+----
+
+* Run `setup.sh`.
+The installer upgrades {ControllerName} and {HubName} from 2.4 to {PlatformNameShort} 2.5.latest, {EDAName} and the unified UI ({Gateway}) from the fresh install of 2.5 to the latest version of 2.5, and connects {ControllerName} and {HubName} properly with the unified UI ({Gateway}) node to initialize the unified experience.
+
+.Verification
+
+* Verify that everything has upgraded to 2.5 and is working properly in one of two ways:
+** By using SSH to connect to {ControllerName} and {EDAName}, as shown in the example after this list.
+** By navigating to *Help > About* in the unified UI to verify that the RPM versions are at 2.5.
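+
+As an illustration only (the host names match the merged inventory above, and the RPM package names are assumptions that depend on your installation), the SSH spot-check might look like the following:
+
+----
+$ ssh controller-1 'rpm -q automation-controller'
+$ ssh eda-1 'rpm -q automation-eda-controller'
+----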
diff --git a/downstream/modules/platform/proc-uploading-the-custom-execution-environment-to-the-private-hub.adoc b/downstream/modules/platform/proc-uploading-the-custom-execution-environment-to-the-private-hub.adoc
index ddfe6228e9..315f5dadd9 100644
--- a/downstream/modules/platform/proc-uploading-the-custom-execution-environment-to-the-private-hub.adoc
+++ b/downstream/modules/platform/proc-uploading-the-custom-execution-environment-to-the-private-hub.adoc
@@ -20,7 +20,7 @@
b38e3299a65e  private-hub.example.com/custom-ee  latest
8e38be53b486  private-hub.example.com/ee-minimal-rhel8  latest
----
-Then log in to the {PrivateHubName}'s container registry and push the image to make it available for use with job templates and workflows:
+. Log in to the {PrivateHubName}'s container registry and push the image to make it available for use with job templates and workflows:
----
$ podman login private-hub.example.com -u admin
diff --git a/downstream/modules/platform/proc-use-controller-resource-operator.adoc b/downstream/modules/platform/proc-use-controller-resource-operator.adoc
index ac825e9825..4af835d0ba 100644
--- a/downstream/modules/platform/proc-use-controller-resource-operator.adoc
+++ b/downstream/modules/platform/proc-use-controller-resource-operator.adoc
@@ -1,9 +1,11 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="proc-use-controller-resource-operator_{context}"]
= Using {OperatorResourceShort}
The {OperatorResourceShort} itself does not do anything until the user creates an object.
-As soon as the user creates an *AutomationControllerProject* or *AnsibleJob* resource, the Resource Operator will start processing that object.
+As soon as the user creates an *AutomationControllerProject* or *AnsibleJob* resource, the Resource Operator starts processing that object.
.Prerequisites
* Install the Kubernetes-based cluster of your choice.
diff --git a/downstream/modules/platform/proc-use-custom-ca-certs.adoc b/downstream/modules/platform/proc-use-custom-ca-certs.adoc
new file mode 100644
index 0000000000..ff193bd76e
--- /dev/null
+++ b/downstream/modules/platform/proc-use-custom-ca-certs.adoc
@@ -0,0 +1,14 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="use-custom-ca-certs"]
+= Using a custom CA to generate all TLS certificates
+
+Use this method when you want {PlatformNameShort} to generate all of the certificates, but you want them signed by a custom CA rather than the default self-signed certificates.
+
+.Procedure
+* To use a custom Certificate Authority (CA) to generate TLS certificates for all {PlatformNameShort} services, set the following variables in your inventory file:
++
+----
+ca_tls_cert=<path_to_ca_tls_certificate>
+ca_tls_key=<path_to_ca_tls_key>
+----
diff --git a/downstream/modules/platform/proc-using-postinstall.adoc b/downstream/modules/platform/proc-using-postinstall.adoc
deleted file mode 100644
index 8852c919e8..0000000000
--- a/downstream/modules/platform/proc-using-postinstall.adoc
+++ /dev/null
@@ -1,68 +0,0 @@
-:_mod-docs-content-type: PROCEDURE
-
-[id="using-postinstall_{context}"]
-
-= Using postinstall feature of containerized {PlatformNameShort}
-
-[role="_abstract"]
-
-
-Use the experimental postinstaller feature of containerized {PlatformNameShort} to define and load the configuration during the initial installation. This uses a configuration-as-code approach, where you simply define your configuration to be loaded as simple YAML files.
-
-. To use this optional feature, you need to uncomment the following vars in the inventory file:
-+
------
-controller_postinstall=true
------
-+
-
-.
The default is false, so you need to enable this to activate the postinstaller. You need a {PlatformNameShort} license for this feature that must reside on the local filesystem so it can be automatically loaded: -+ ----- -controller_license_file=/full_path_to/manifest_file.zip ----- -+ - -. You can pull your configuration-as-code from a Git based repository. To do this, set the following variables to dictate where you pull the content from and where to store it for upload to the {PlatformNameShort} controller: -+ ----- -controller_postinstall_repo_url=https://your_cac_scm_repo -controller_postinstall_dir=/full_path_to_where_you_want_the pulled_content_to_reside ----- -+ - -. The controller_postinstall_repo_url variable can be used to define the postinstall repository URL which must include authentication information. - -+ ----- -http(s):///.git (public repository without http(s) authentication) -http(s)://:@:.git (private repository with http(s) authentication) -git@:.git (public/private repository with ssh authentication) ----- -+ - -[NOTE] -==== -When using ssh based authentication, the installer does not configure anything for you, so you must configure everything on the installer node. -==== - -Definition files use the link:https://console.redhat.com/ansible/automation-hub/namespaces/infra/[infra certified collections]. The link:https://console.redhat.com/ansible/automation-hub/repo/validated/infra/controller_configuration/[controller_configuration] collection is preinstalled as part of the installation and uses the installation controller credentials you supply in the inventory file for access into the {PlatformNameShort} controller. You simply need to give the YAML configuration files. - -You can setup {PlatformNameShort} configuration attributes such as credentials, LDAP settings, users and teams, organizations, projects, inventories and hosts, job and workflow templates. - -The following example shows a sample *your-config.yml* file defining and loading controller job templates. The example demonstrates a simple change to the preloaded demo example provided with an {PlatformNameShort} installation. - ----- -/full_path_to_your_configuration_as_code/ -├── controller - └── job_templates.yml ----- - ----- -controller_templates: - - name: Demo Job Template - execution_environment: Default execution environment - instance_groups: - - default - inventory: Demo Inventory ----- diff --git a/downstream/modules/platform/proc-verify-aap-installation.adoc b/downstream/modules/platform/proc-verify-aap-installation.adoc new file mode 100644 index 0000000000..e07e3dffd3 --- /dev/null +++ b/downstream/modules/platform/proc-verify-aap-installation.adoc @@ -0,0 +1,16 @@ +:_mod-docs-content-type: PROCEDURE + +[id="proc-verify-aap-installation_{context}"] + += Verifying installation of {PlatformNameShort} + +[role="_abstract"] +Upon a successful login, your installation of {PlatformName} is complete. + +[IMPORTANT] +==== +If the installation fails and you are a customer who has purchased a valid license for {PlatformName}, contact Ansible through the link:https://docs.redhat.com/[Red Hat Customer portal]. +==== + +.Additional resources +See link:{LinkGettingStarted} for post installation instructions. 
diff --git a/downstream/modules/platform/proc-verify-network-connectivity.adoc b/downstream/modules/platform/proc-verify-network-connectivity.adoc index 8aaca228cf..235623aea9 100644 --- a/downstream/modules/platform/proc-verify-network-connectivity.adoc +++ b/downstream/modules/platform/proc-verify-network-connectivity.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: PROCEDURE
+
[id="verify-network-connectivity_{context}"]
= Verifying network connectivity
@@ -11,7 +13,7 @@ Take note of the host and port information from your existing deployment. This i
.Procedure
-. Create a yaml file to verify the connection between your new deployment and your old deployment database:
+. Create a YAML file to verify the connection between your new deployment and your old deployment database:
+
-----
apiVersion: v1
@@ -41,13 +43,15 @@ oc get pods
-----
oc rsh dbchecker
-----
-. After the shell session opens in the pod, verify that the new project can connect to your old project cluster:
+.Verification
+
+After the shell session opens in the pod, verify that the new project can connect to your old project cluster:
+
-----
-pg_isready -h <host> -p <port> -U awx
+pg_isready -h <host> -p <port> -U AutomationController
-----
+
-.Example
+For example:
-----
<host>:<port> - accepting connections
-----
diff --git a/downstream/modules/platform/ref-OCP-system-requirements.adoc b/downstream/modules/platform/ref-OCP-system-requirements.adoc new file mode 100644 index 0000000000..4e60b84056 --- /dev/null +++ b/downstream/modules/platform/ref-OCP-system-requirements.adoc @@ -0,0 +1,9 @@ +:_mod-docs-content-type: REFERENCE
+
+
+
+// [id="ref-OCP-system-requirements_{context}"]
+
+= System requirements for installing on {OCPShort}
+
+For system requirements for installing {PlatformNameShort} on {OCPShort}, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/tested_deployment_models/ocp-topologies#tested_system_configurations_6[Tested system configurations] section of _{TitleTopologies}_.
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-RPM-system-requirements.adoc b/downstream/modules/platform/ref-RPM-system-requirements.adoc new file mode 100644 index 0000000000..252d1cc839 --- /dev/null +++ b/downstream/modules/platform/ref-RPM-system-requirements.adoc @@ -0,0 +1,9 @@ +:_mod-docs-content-type: REFERENCE
+
+
+
+// [id="ref-RPM-system-requirements_{context}"]
+
+= System requirements for RPM installation
+
+For system requirements for the RPM installation method of {PlatformNameShort}, see the link:{URLInstallationGuide}/platform-system-requirements[System requirements] section of _{TitleInstallationGuide}_.
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-aap-considerations-for-migrate-admin-users.adoc b/downstream/modules/platform/ref-aap-considerations-for-migrate-admin-users.adoc new file mode 100644 index 0000000000..07f650b2f1 --- /dev/null +++ b/downstream/modules/platform/ref-aap-considerations-for-migrate-admin-users.adoc @@ -0,0 +1,26 @@ +:_mod-docs-content-type: REFERENCE
+
+[id="aap-considerations-for-migrate-admin-users_{context}"]
+
+
+= Key considerations for migrating admin users
+
+[role="_abstract"]
+Upgrading from {PlatformNameShort} 2.4 to 2.5 allows for the migration of administrators for each component, with their existing component-level admin privileges maintained. However, escalation of privileges to {Gateway} administrator is not automatic during the upgrade process.
This ensures a secure privilege escalation process that can be customized to meet the organization's specific needs.
+
+
+*Component-level admin privileges are retained:* Administrators for {ControllerName} and {HubName} retain their existing admin privileges for those respective services post-upgrade. For example, an admin of {ControllerName} continues to have full administration privileges for {ControllerName} resources.
+
+*Escalation to {Gateway} admin must be manually configured post-upgrade:* During the upgrade process, admin privileges for individual services are not automatically translated to platform administrator privileges. Escalation to {Gateway} admin must be granted by the platform administrator after upgrade and migration. Each service admin retains the original scope of their access until the access is changed.
+
+As a platform administrator, you can escalate a user's privileges by selecting the *{PlatformNameShort} Administrator* checkbox. Only a platform administrator can escalate privileges.
+
+[NOTE]
+====
+Users previously designated as {ControllerName} or {HubName} administrators are labeled as *Normal* in the *User type* column of the Users list view. This is a mischaracterization. You can verify that these users have, in fact, retained their service-level administrator privileges by editing the account.
+====
+
+
+
+
diff --git a/downstream/modules/platform/ref-aap-considerations-for-migrate-normal-users.adoc b/downstream/modules/platform/ref-aap-considerations-for-migrate-normal-users.adoc new file mode 100644 index 0000000000..e8f620e003 --- /dev/null +++ b/downstream/modules/platform/ref-aap-considerations-for-migrate-normal-users.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: REFERENCE
+
+
+
+[id="aap-considerations-for-migrate-normal-users_{context}"]
+
+= Key considerations for migrating normal users
+
+[role="_abstract"]
+
+*Previous service accounts are prefixed:* Users with accounts on multiple services in 2.4 are migrated as individual users in 2.5 and prefixed to identify the service from which they were migrated. For example, {HubName} accounts are prefixed as `hub_`. {ControllerNameStart} user names do not include a prefix.
+
+*{ControllerNameStart} user accounts take precedence:* When an individual user had accounts on multiple services in 2.4, priority is given to their {ControllerName} account during migration, so those are not renamed.
+
+*Component-level roles are retained until user migration is complete:* When users log in using an existing service account and do not perform the account linking process, only the roles for that specific service account are available. The migration process is completed once the user performs the account linking process. At that time, all roles for all services are migrated into the new {Gateway} user account.
+
+
diff --git a/downstream/modules/platform/ref-accessing-control-auto-hub-eda-control.adoc b/downstream/modules/platform/ref-accessing-control-auto-hub-eda-control.adoc deleted file mode 100644 index 19eb0578b8..0000000000 --- a/downstream/modules/platform/ref-accessing-control-auto-hub-eda-control.adoc +++ /dev/null @@ -1,54 +0,0 @@ -:_mod-docs-content-type: REFERENCE
-
-[id="accessing-control-auto-hub-eda-control_{context}"]
-
-= Accessing {ControllerName}, {HubName}, and {EDAcontroller}
-
-[role="_abstract"]
-
-
-After the installation completes, these are the default protocol and ports used:
-
-* http/https protocol
-
-* Ports 8080/8443 for {ControllerName}
-
-* Ports 8081/8444 for {HubName}
-
-* Ports 8082/8445 for {EDAcontroller}
-
-
-These can be changed. Consult the *README.md* for further details. It is recommended that you leave the defaults unless you need to change them due to port conflicts or other factors.
-
-
-.Accessing {ControllerName} UI
-
-The {ControllerName} UI is available by default at:
-
-----
-https://<host>:8443
-----
-
-Log in as the admin user with the password you created for *controller_admin_password*.
-
-If you supplied the license manifest as part of the installation, the {PlatformNameShort} dashboard is displayed. If you did not supply a license file, the *Subscription* screen is displayed where you must supply your license details. This is documented here: link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_operations_guide/assembly-aap-activate[Chapter 1. Activating Red Hat Ansible Automation Platform].
-
-.Accessing {HubName} UI
-
-The {HubName} UI is available by default at:
-
-----
-https://<host>:8444
-----
-
-Log in as the admin user with the password you created for *hub_admin_password*.
-
-
-.Accessing {EDAName} UI
-
-The {EDAName} UI is available by default at:
-----
-https://<host>:8445
-----
-
-Log in as the admin user with the password you created for *eda_admin_password*.
diff --git a/downstream/modules/platform/ref-adding-execution-nodes.adoc b/downstream/modules/platform/ref-adding-execution-nodes.adoc index bbd6a7890a..6fae512a48 100644 --- a/downstream/modules/platform/ref-adding-execution-nodes.adoc +++ b/downstream/modules/platform/ref-adding-execution-nodes.adoc @@ -6,25 +6,41 @@ [id="adding-execution-nodes_{context}"]
= Adding execution nodes
-[role="_abstract"]
+Containerized {PlatformNameShort} can deploy remote execution nodes.
-The containerized installer can deploy remote execution nodes. This is handled by the execution_nodes group in the ansible inventory file.
+You can define remote execution nodes in the `[execution_nodes]` group of your inventory file:
----
[execution_nodes]
-fqdn_of_your_execution_host
+<fqdn_of_your_execution_host>
----
-An execution node is by default configured as an execution type running on port 27199 (TCP).
-This can be changed by the following variables:
+By default, an execution node is configured with the following settings, which you can modify as needed:
-* receptor_port=27199
-* receptor_protocol=tcp
-* receptor_type=hop
+----
+receptor_port=27199
+receptor_protocol=tcp
+receptor_type=execution
+----
+
+* `receptor_port` - The port number that receptor listens on for incoming connections from other receptor nodes.
+* `receptor_protocol` - The protocol used for communication. Valid options include `tcp` or `udp`.
+* `receptor_type` - The role of the node. Valid options include `execution` or `hop`.
+ +By default, all nodes in the `[execution_nodes]` group are added as peers for the controller node. To change the peer configuration, use the `receptor_peers` variable. + +[NOTE] +==== +The value of `receptor_peers` must be a comma-separated list of host names. Do not use inventory group names. +==== + +Example configuration: -Receptor type value can be either execution or hop, while the protocol is either TCP or UDP. By default, the nodes in the `execution_nodes` group will be added as peers for the controller node. However, you can change the peers configuration by using the `receptor_peers` variable. ---- [execution_nodes] -fqdn_of_your_execution_host -fqdn_of_your_hop_host receptor_type=hop receptor_peers=’[“fqdn_of_your_execution_host”]’ ----- \ No newline at end of file +# Execution nodes +exec1.example.com +exec2.example.com +# Hop node that peers with the two execution nodes above +hop1.example.com receptor_type=hop receptor_peers='["exec1.example.com","exec2.example.com"]' +---- diff --git a/downstream/modules/platform/ref-ansible-inventory-variables.adoc b/downstream/modules/platform/ref-ansible-inventory-variables.adoc index 9155e8009d..52d4eaf011 100644 --- a/downstream/modules/platform/ref-ansible-inventory-variables.adoc +++ b/downstream/modules/platform/ref-ansible-inventory-variables.adoc @@ -1,56 +1,55 @@ -[id="ref-ansible-inventory-variables"] +:_mod-docs-content-type: REFERENCE + +[id="ansible-variables"] = Ansible variables The following variables control how {PlatformNameShort} interacts with remote hosts. -For more information about variables specific to certain plugins, see the documentation for link:https://docs.ansible.com/ansible-core/devel/collections/ansible/builtin/index.html[Ansible.Builtin]. - -For a list of global configuration options, see link:https://docs.ansible.com/ansible-core/devel/reference_appendices/config.html[Ansible Configuration Settings]. - +.Ansible variables [cols="50%,50%",options="header"] -|==== -| *Variable* | *Description* -| *`ansible_connection`* | The connection plugin used for the task on the target host. +|=== +| Variable | Description +| `ansible_connection` | The connection plugin used for the task on the target host. This can be the name of any Ansible connection plugin. -This can be the name of any of Ansible connection plugin. -SSH protocol types are `smart`, `ssh` or `paramiko`. +SSH protocol types are `smart`, `ssh`, or `paramiko`. You can also use `local` to run tasks on the control node itself. Default = `smart` -| *`ansible_host`* | The ip or name of the target host to use instead of *`inventory_hostname`*. -| *`ansible_port`* | The connection port number. -Default: 22 for ssh -| *`ansible_user`* | The user name to use when connecting to the host. -| *`ansible_password`* | The password to authenticate to the host. +| `ansible_host` | The IP address or name of the target host to use instead of `inventory_hostname`. +| `ansible_password` | The password to authenticate to the host. -Never store this variable in plain text. +Do not store this variable in plain text. Always use a vault. For more information, see link:https://docs.ansible.com/ansible-core/devel/tips_tricks/ansible_tips_tricks.html#keep-vaulted-variables-safely-visible[Keep vaulted variables safely visible]. +| `ansible_port` | The connection port number. -Always use a vault. -| *`ansible_ssh_private_key_file`* | Private key file used by SSH. -Useful if using multiple keys and you do not want to use an SSH agent. 
-| *`ansible_ssh_common_args`* | This setting is always appended to the default command line for `sftp`, `scp`, and `ssh`. -Useful to configure a ProxyCommand for a certain host or group. -| *`ansible_sftp_extra_args`* | This setting is always appended to the default `sftp` command line. -| *`ansible_scp_extra_args`* | This setting is always appended to the default `scp` command line. -| *`ansible_ssh_extra_args`* | This setting is always appended to the default `ssh` command line. -| *`ansible_ssh_pipelining`* | Determines if SSH pipelining is used. -This can override the pipelining setting in `ansible.cfg`. -If using SSH key-based authentication, the key must be managed by an SSH agent. -| *`ansible_ssh_executable`* | Added in version 2.2. +The default for SSH is `22`. +| `ansible_scp_extra_args` | This setting is always appended to the default `scp` command line. +| `ansible_sftp_extra_args` | This setting is always appended to the default `sftp` command line. +| `ansible_shell_executable` | This sets the shell that the Ansible controller uses on the target machine and overrides the executable in `ansible.cfg` which defaults to `/bin/sh`. +| `ansible_shell_type` | The shell type of the target system. -This setting overrides the default behavior to use the system SSH. -This can override the ssh_executable setting in `ansible.cfg`. -| *`ansible_shell_type`* | The shell type of the target system. Do not use this setting unless you have set the `ansible_shell_executable` to a non-Bourne (sh) compatible shell. -By default commands are formatted using sh-style syntax. -Setting this to `csh` or `fish` causes commands executed on target systems to follow the syntax of those shells instead. -| *`ansible_shell_executable`* | This sets the shell that the Ansible controller uses on the target machine, and overrides the executable in `ansible.cfg` which defaults to `/bin/sh`. - -Do not change this variable unless `/bin/sh` is not installed on the target machine or cannot be run from sudo. -| *`inventory_hostname`* | This variable takes the hostname of the machine from the inventory script or the Ansible configuration file. +By default commands are formatted using sh-style syntax. Setting this to `csh` or `fish` causes commands executed on target systems to follow the syntax of those shells instead. +| `ansible_ssh_common_args` | This setting is always appended to the default command line for `sftp`, `scp`, and `ssh`. +Useful to configure a `ProxyCommand` for a certain host or group. +| `ansible_ssh_executable` | This setting overrides the default behavior to use the system `ssh`. +This can override the `ssh_executable` setting in `ansible.cfg`. +| `ansible_ssh_extra_args` | This setting is always appended to the default `ssh` command line. +| `ansible_ssh_pipelining` | Determines if SSH `pipelining` is used. + +This can override the `pipelining` setting in `ansible.cfg`. +If using SSH key-based authentication, the key must be managed by an SSH agent. +| `ansible_ssh_private_key_file` | Private key file used by SSH. -You cannot set the value of this variable. +Useful if using multiple keys and you do not want to use an SSH agent. +| `ansible_user` | The user name to use when connecting to the host. -Because the value is taken from the configuration file, the actual runtime hostname value can vary from what is returned by this variable. -|==== +Do not change this variable unless `/bin/sh` is not installed on the target machine or cannot be run from sudo. 
+| `inventory_hostname` | This variable takes the hostname of the machine from the inventory script or the Ansible configuration file. +You cannot set the value of this variable. Because the value is taken from the configuration file, the actual runtime hostname value can vary from what is returned by this variable. +|=== + +[role="_additional-resources"] +.Additional resources +* link:https://docs.ansible.com/ansible-core/devel/collections/ansible/builtin/index.html[Ansible.Builtin] +* link:https://docs.ansible.com/ansible-core/devel/reference_appendices/config.html[Ansible Configuration Settings] diff --git a/downstream/modules/platform/ref-assign-pods-to-nodes.adoc b/downstream/modules/platform/ref-assign-pods-to-nodes.adoc index da08e422e2..9bdb82a788 100644 --- a/downstream/modules/platform/ref-assign-pods-to-nodes.adoc +++ b/downstream/modules/platform/ref-assign-pods-to-nodes.adoc @@ -1,4 +1,6 @@ -[id="ref-assign-pods-to-nodes"] +:_mod-docs-content-type: REFERENCE + +[id="ref-assign-pods-to-nodes_{context}"] = Assigning pods to specific nodes diff --git a/downstream/modules/platform/ref-automation-hub-requirements.adoc b/downstream/modules/platform/ref-automation-hub-requirements.adoc index 5c112b7ee9..ea13d19817 100644 --- a/downstream/modules/platform/ref-automation-hub-requirements.adoc +++ b/downstream/modules/platform/ref-automation-hub-requirements.adoc @@ -1,26 +1,11 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-automation-hub-requirements"] = {HubNameStart} system requirements -{HubNameStart} enables you to discover and use new certified automation content from Red Hat Ansible and Certified Partners. On {HubNameMain}, you can discover and manage Ansible Collections, which are supported automation content developed by Red Hat and its partners for use cases such as cloud automation, network automation, and security automation. - -{HubNameStart} has the following system requirements: - -[cols="a,a,a"] -|=== -h|Requirement | Required | Notes - -| *RAM* | 8 GB minimum | - -* 8 GB RAM (minimum and recommended for Vagrant trial installations) -* 8 GB RAM (minimum for external standalone PostgreSQL databases) -* For capacity based on forks in your configuration, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-jobs#controller-capacity-determination[{ControllerNameStart} capacity determination and job impact]. -| *CPUs* | 2 minimum | - -For capacity based on forks in your configuration, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-jobs#controller-capacity-determination[{ControllerNameStart} capacity determination and job impact]. -| *Local disk* | 60 GB disk | Dedicate a minimum of 40GB to `/var` for collection storage. +{HubNameStart} allows you to discover and use new certified automation content from Red Hat Ansible and Certified Partners. On {HubNameMain}, you can discover and manage Ansible Collections, which are supported automation content developed by Red Hat and its partners for use cases such as cloud automation, network automation, and security automation. -|=== [NOTE] ==== @@ -33,5 +18,5 @@ To avoid this, use the `automationhub_main_url` inventory variable with a value This adds the external address to `/etc/pulp/settings.py`. This implies that you only want to use the external address. 
-For information about inventory file variables, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/appendix-inventory-files-vars[Inventory file variables] in the _{PlatformName} Installation Guide_.
+For information about inventory file variables, see xref:appendix-inventory-files-vars[Inventory file variables].
====
diff --git a/downstream/modules/platform/ref-automation-mesh-proxy.adoc b/downstream/modules/platform/ref-automation-mesh-proxy.adoc new file mode 100644 index 0000000000..152021bb4c --- /dev/null +++ b/downstream/modules/platform/ref-automation-mesh-proxy.adoc @@ -0,0 +1,21 @@ +:_mod-docs-content-type: REFERENCE
+
+[id="ref-automation-mesh-proxy"]
+
+= Configuring proxy settings for {AutomationMesh}
+
+You can route outbound communication from the receptor on an {AutomationMesh} node through a proxy server.
+If your proxy does not strip out TLS certificates, then an installation of {PlatformNameShort} automatically supports the use of a proxy server.
+
+Every node on the mesh must have a Certifying Authority that the installer creates on your behalf.
+
+The default install location for the Certifying Authority is:
+
+`/etc/receptor/tls/ca/mesh-CA.crt`
+
+The certificates and keys created on your behalf use the nodeID for their names:
+
+For the certificate:
+`/etc/receptor/tls/NODEID.crt`
+
+For the key:
+`/etc/receptor/tls/NODEID.key`
diff --git a/downstream/modules/platform/ref-aws-secrets-manager-lookup.adoc b/downstream/modules/platform/ref-aws-secrets-manager-lookup.adoc index 72e10d5631..906eccc013 100644 --- a/downstream/modules/platform/ref-aws-secrets-manager-lookup.adoc +++ b/downstream/modules/platform/ref-aws-secrets-manager-lookup.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE
+
[id="ref-aws-secrets-manager-lookup"]
= AWS Secrets Manager lookup
diff --git a/downstream/modules/platform/ref-azure-key-vault-lookup.adoc b/downstream/modules/platform/ref-azure-key-vault-lookup.adoc index d361803a04..910aa967b3 100644 --- a/downstream/modules/platform/ref-azure-key-vault-lookup.adoc +++ b/downstream/modules/platform/ref-azure-key-vault-lookup.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE
+
[id="ref-azure-key-vault-lookup"]
= {Azure} Key Vault
@@ -5,9 +7,9 @@
When you select *{Azure} Key Vault* for *Credential Type*, give the following metadata to configure your lookup:
* *Vault URL (DNS Name)* (required): give the URL used for communicating with {Azure}'s key management system
-* *Client ID* (required): give the identifier as obtained by the {Azure} Active Directory
-* *Client Secret* (required): give the secret as obtained by the {Azure} Active Directory
-* *Tenant ID* (required): give the unique identifier that is associated with an {Azure} Active Directory instance within an Azure subscription
+* *Client ID* (required): give the identifier as obtained by {MSEntraID}
+* *Client Secret* (required): give the secret as obtained by {MSEntraID}
+* *Tenant ID* (required): give the unique identifier that is associated with an {MSEntraID} instance within an Azure subscription
* *Cloud Environment*: select the applicable cloud environment to apply
//The following is an example of a configured {Azure} KMS credential. 
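A hypothetical example of the configured lookup metadata follows. Every value here is an illustrative placeholder, not a working credential:

----
Vault URL (DNS Name): https://example-vault.vault.azure.net
Client ID:            00000000-0000-0000-0000-000000000000
Client Secret:        <client_secret>
Tenant ID:            11111111-1111-1111-1111-111111111111
Cloud Environment:    <applicable_cloud_environment>
----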
diff --git a/downstream/modules/platform/ref-ccsp.adoc b/downstream/modules/platform/ref-ccsp.adoc new file mode 100644 index 0000000000..be52239410 --- /dev/null +++ b/downstream/modules/platform/ref-ccsp.adoc @@ -0,0 +1,78 @@ +:_mod-docs-content-type: REFERENCE
+
+[id="ref-ccsp"]
+
+= CCSP
+
+`CCSP` is the original report format. It does not include many of the customizations of CCSPv2, and it is intended to be used only for the CCSP partner program.
+
+== Optional collectors for `gather` command
+
+You can use the following optional collectors for the `gather` command:
+
+* `main_jobhostsummary`
+** This collector is present by default. It incrementally collects the `main_jobhostsummary` table from the {ControllerName} database, containing information about job runs and managed nodes automated.
+* `main_host`
+** This collects daily snapshots of the `main_host` table from the {ControllerName} database and has managed nodes/hosts present across {ControllerName} inventories.
+* `main_jobevent`
+** This incrementally collects the `main_jobevent` table from the {ControllerName} database and contains information about which modules, roles, and Ansible collections are being used.
+* `main_indirectmanagednodeaudit`
+** This incrementally collects the `main_indirectmanagednodeaudit` table from the {ControllerName} database and contains information about indirectly managed nodes.
+
+----
+# Example with all optional collectors
+export METRICS_UTILITY_OPTIONAL_COLLECTORS="main_host,main_jobevent,main_indirectmanagednodeaudit"
+----
+
+== Optional sheets for `build_report` command
+
+You can use the following optional sheets for the `build_report` command:
+
+* `ccsp_summary`
+** This is a landing page specifically for partners under the CCSP program. It shows managed node usage by each {ControllerName} organization.
+** This report takes additional parameters to customize the summary page. For more information, see the following example:
+
+----
+export METRICS_UTILITY_PRICE_PER_NODE=11.55 # in USD
+export METRICS_UTILITY_REPORT_SKU=MCT3752MO
+export METRICS_UTILITY_REPORT_SKU_DESCRIPTION="EX: Red Hat Ansible Automation Platform, Full Support (1 Managed Node, Dedicated, Monthly)"
+export METRICS_UTILITY_REPORT_H1_HEADING="CCSP Reporting <CCSP>: ANSIBLE Consumption"
+export METRICS_UTILITY_REPORT_COMPANY_NAME="Company Name"
+export METRICS_UTILITY_REPORT_EMAIL="email@email.com"
+export METRICS_UTILITY_REPORT_RHN_LOGIN="test_login"
+export METRICS_UTILITY_REPORT_COMPANY_BUSINESS_LEADER="BUSINESS LEADER"
+export METRICS_UTILITY_REPORT_COMPANY_PROCUREMENT_LEADER="PROCUREMENT LEADER"
+----
+
+* `managed_nodes`
+** This is a deduplicated list of managed nodes automated by {ControllerName}.
+* `indirectly_managed_nodes`
+** This is a deduplicated list of indirectly managed nodes automated by {ControllerName}.
+* `inventory_scope`
+** This is a deduplicated list of managed nodes present across all inventories of {ControllerName}.
+* `usage_by_collections`
+** This is a list of Ansible collections used in {ControllerName} job runs.
+* `usage_by_roles`
+** This is a list of roles used in {ControllerName} job runs.
+* `usage_by_modules`
+** This is a list of modules used in {ControllerName} job runs.
+
+----
+# Example with all optional sheets
+export METRICS_UTILITY_OPTIONAL_CCSP_REPORT_SHEETS='ccsp_summary,managed_nodes,indirectly_managed_nodes,inventory_scope,usage_by_collections,usage_by_roles,usage_by_modules'
+----
+
+== Selecting a date range for your CCSP report
+
+The default behavior of this report is to build a report for the previous month. The following examples describe how to override this default behavior to select a specific date range for your report:
+
+----
+# Build report for the previous month
+metrics-utility build_report
+
+# Build report for a specific month
+metrics-utility build_report --month=2025-03
+
+# Build report for a specific month, overriding an existing report
+metrics-utility build_report --month=2025-03 --force
+----
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-ccspv2.adoc b/downstream/modules/platform/ref-ccspv2.adoc new file mode 100644 index 0000000000..8964526159 --- /dev/null +++ b/downstream/modules/platform/ref-ccspv2.adoc @@ -0,0 +1,115 @@ +:_mod-docs-content-type: REFERENCE
+
+[id="ref-ccspv2"]
+
+= CCSPv2
+
+CCSPv2 is a report that shows the following:
+
+* Directly and indirectly managed node usage
+* The content of all inventories
+* Content usage
+
+The primary use of this report is for partners under the link:https://connect.redhat.com/en/programs/certified-cloud-service-provider[CCSP] program, but all customers can use it to obtain on-premise reporting showing managed nodes, jobs, and content usage across their {ControllerName} organizations.
+
+Set the report type using `METRICS_UTILITY_REPORT_TYPE=CCSPv2`.
+
+== Optional collectors for `gather` command
+
+You can use the following optional collectors for the `gather` command:
+
+* `main_jobhostsummary`
+** This collector is present by default. It incrementally collects data from the `main_jobhostsummary` table in the {ControllerName} database, containing information about job runs and managed nodes automated.
+* `main_host`
+** This collects daily snapshots of the `main_host` table in the {ControllerName} database and has managed nodes and hosts present across {ControllerName} inventories.
+* `main_jobevent`
+** This incrementally collects data from the `main_jobevent` table in the {ControllerName} database and contains information about which modules, roles, and Ansible collections are being used.
+* `main_indirectmanagednodeaudit`
+** This incrementally collects data from the `main_indirectmanagednodeaudit` table in the {ControllerName} database and contains information about indirectly managed nodes.
+
+----
+# Example with all optional collectors
+export METRICS_UTILITY_OPTIONAL_COLLECTORS="main_host,main_jobevent,main_indirectmanagednodeaudit"
+----
+
+== Optional sheets for `build_report` command
+
+You can use the following optional sheets for the `build_report` command:
+
+* `ccsp_summary`
+** This is a landing page specifically for partners under the CCSP program.
+This report takes additional parameters to customize the summary page.
For more information, see the following example:
++
+----
+export METRICS_UTILITY_PRICE_PER_NODE=11.55 # in USD
+export METRICS_UTILITY_REPORT_SKU=MCT3752MO
+export METRICS_UTILITY_REPORT_SKU_DESCRIPTION="EX: Red Hat Ansible Automation Platform, Full Support (1 Managed Node, Dedicated, Monthly)"
+export METRICS_UTILITY_REPORT_H1_HEADING="CCSP NA Direct Reporting Template"
+export METRICS_UTILITY_REPORT_COMPANY_NAME="Partner A"
+export METRICS_UTILITY_REPORT_EMAIL="email@email.com"
+export METRICS_UTILITY_REPORT_RHN_LOGIN="test_login"
+export METRICS_UTILITY_REPORT_PO_NUMBER="123"
+export METRICS_UTILITY_REPORT_END_USER_COMPANY_NAME="Customer A"
+export METRICS_UTILITY_REPORT_END_USER_CITY="Springfield"
+export METRICS_UTILITY_REPORT_END_USER_STATE="TX"
+export METRICS_UTILITY_REPORT_END_USER_COUNTRY="US"
+----
+* `jobs`
+** This is a list of {ControllerName} jobs launched. It is grouped by job template.
+* `managed_nodes`
+** This is a deduplicated list of managed nodes automated by {ControllerName}.
+* `indirectly_managed_nodes`
+** This is a deduplicated list of indirectly managed nodes automated by {ControllerName}.
+* `inventory_scope`
+** This is a deduplicated list of managed nodes present across all inventories of {ControllerName}.
+* `usage_by_organizations`
+** This is a list of all {ControllerName} organizations with several metrics showing the organizations' usage. This provides data suitable for doing internal chargeback.
+* `usage_by_collections`
+** This is a list of Ansible collections used in {ControllerName} job runs.
+* `usage_by_roles`
+** This is a list of roles used in {ControllerName} job runs.
+* `usage_by_modules`
+** This is a list of modules used in {ControllerName} job runs.
+* `managed_nodes_by_organization`
+** This generates a sheet per organization, listing managed nodes for every organization with the same content as the `managed_nodes` sheet.
+* `data_collection_status`
+** This generates a sheet with the status of every data collection done by the `gather` command for the date range the report is built for.
+
+To outline the quality of the data collected, it also lists:
+
+*** unusual gaps between collections (based on `collection_start_timestamp`)
+*** gaps in collected intervals (based on `since` vs `until`)
++
+----
+# Example with all optional sheets
+export METRICS_UTILITY_OPTIONAL_CCSP_REPORT_SHEETS='ccsp_summary,jobs,managed_nodes,indirectly_managed_nodes,inventory_scope,usage_by_organizations,usage_by_collections,usage_by_roles,usage_by_modules,data_collection_status'
+----
+
+== Filtering reports by organization
+
+To filter your report so that only certain organizations are present, use this environment variable with a semicolon-separated list of organization names.
+
+`export METRICS_UTILITY_ORGANIZATION_FILTER="ACME;Organization 1"`
+
+This renders only the data from these organizations in the built report. This filter currently does not have any effect on the following optional sheets:
+
+* `usage_by_collections`
+* `usage_by_roles`
+* `usage_by_modules`
+
+== Selecting a date range for your CCSPv2 report
+
+The default behavior of the CCSPv2 report is to build a report for the previous month.
The following examples describe how to override this default behavior to select a specific date range for your report:
+
+----
+# Build report for a specific month
+metrics-utility build_report --month=2025-03
+
+# Build report for a specific date range, including the provided days
+metrics-utility build_report --since=2025-03-01 --until=2025-03-31
+
+# Build report for the last 6 months from the current date
+metrics-utility build_report --since=6months
+
+# Build report for the last 6 months from the current date, overriding an existing report
+metrics-utility build_report --since=6months --force
+----
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-centrify-vault-lookup.adoc b/downstream/modules/platform/ref-centrify-vault-lookup.adoc index 9be331f807..4e38ebc1eb 100644 --- a/downstream/modules/platform/ref-centrify-vault-lookup.adoc +++ b/downstream/modules/platform/ref-centrify-vault-lookup.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE
+
[id="ref-centrify-vault-lookup"]
= Centrify Vault Credential Provider Lookup
diff --git a/downstream/modules/platform/ref-configuring-inventory-file.adoc b/downstream/modules/platform/ref-configuring-inventory-file.adoc new file mode 100644 index 0000000000..f2a95d0caa --- /dev/null +++ b/downstream/modules/platform/ref-configuring-inventory-file.adoc @@ -0,0 +1,54 @@ +:_mod-docs-content-type: REFERENCE
+
+[id="configuring-inventory-file"]
+= Configuring the inventory file
+
+You can control the installation of {PlatformNameShort} with inventory files. Inventory files define the information needed to customize the installation, for example, host details, certificate details, and various component-specific settings.
+
+This document provides example inventory files that you can copy and change to get started quickly.
+
+Additionally, {GrowthTopology} and {EnterpriseTopology} inventory files are available in the following locations:
+
+* In the downloaded installation program package:
+** The default inventory file, named `inventory`, is for the {EnterpriseTopology} pattern.
+** To deploy the {GrowthTopology} (all-in-one) pattern, use the `inventory-growth` file instead.
+* In link:{URLTopologies}/container-topologies[Container topologies] in _{TitleTopologies}_.
+
+To use the example inventory files, replace the `< >` placeholders with your specific variables, and update the host names.
+
+See the `README.md` file in the installation directory or link:{URLContainerizedInstall}/appendix-inventory-files-vars[Inventory file variables] for more information about optional and required variables.
+
+== Inventory file for online installation for containerized {GrowthTopology} (all-in-one)
+
+Use the example inventory file to perform an online installation for the containerized {GrowthTopology} (all-in-one):
+
+include::snippets/inventory-cont-a-env-a.adoc[]
+
+* `ansible_connection=local` - Used for all-in-one installations where the installation program is run on the same node that hosts {PlatformNameShort}, as shown in the sketch after this list.
+** If the installation program is run from a separate node, do not include `ansible_connection=local`. In this case, use an SSH connection instead.
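The following minimal sketch illustrates how `ansible_connection=local` attaches to the host entry in an all-in-one inventory. It is a sketch only: the host name is a placeholder, the group shown is one of several required groups, and all required variables are omitted. Use the included example inventory file above as the complete reference.

----
[automationgateway]
aap.example.org ansible_connection=local
----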
+ +[role="_additional-resources"] +.Additional resources +* link:{URLTopologies}/container-topologies#infrastructure_topology_5[Container {GrowthTopology}] + +== Inventory file for online installation for containerized {EnterpriseTopology} + +Use the example inventory file to perform an online installation for the containerized {EnterpriseTopology}: + +include::snippets/inventory-cont-b-env-a.adoc[] + +[role="_additional-resources"] +.Additional resources +* link:{URLTopologies}/container-topologies#infrastructure_topology_6[Container {EnterpriseTopology}] +* link:{URLPlanningGuide}/ha-redis_planning[Caching and queueing system] + + +== Performing an offline or bundled installation + +To perform an offline installation, add the following under the `[all:vars]` group: + +---- +bundle_install=true +# The bundle directory must include /bundle in the path +bundle_dir= +---- diff --git a/downstream/modules/platform/ref-cont-aap-system-requirements.adoc b/downstream/modules/platform/ref-cont-aap-system-requirements.adoc new file mode 100644 index 0000000000..9a746d1bad --- /dev/null +++ b/downstream/modules/platform/ref-cont-aap-system-requirements.adoc @@ -0,0 +1,59 @@ +:_mod-docs-content-type: REFERENCE + +[id="system-requirements"] + += System requirements + +Use this information when planning your installation of containerized {PlatformNameShort}. + +== Prerequisites + +* Ensure a dedicated non-root user is configured on the {RHEL} host. +** This user requires `sudo` or other Ansible supported privilege escalation (`sudo` is recommended) to perform administrative tasks during the installation. +** This user is responsible for the installation of containerized {PlatformNameShort}. +** This user is also the service account for the containers running {PlatformNameShort}. + +* For managed nodes, ensure a dedicated user is configured on each node. {PlatformNameShort} connects as this user to run tasks on the node. For more information about configuring a dedicated user on each node, see link:{URLContainerizedInstall}/aap-containerized-installation#preparing-the-managed-nodes-for-containerized-installation[Preparing the managed nodes for containerized installation]. + +* For remote host installations, ensure SSH public key authentication is configured for the non-root user. For guidelines on setting up SSH public key authentication for the non-root user, see link:https://access.redhat.com/solutions/4110681[How to configure SSH public key authentication for passwordless login]. + +* Ensure internet access is available from the {RHEL} host if you are using the default online installation method. + +* Ensure the appropriate network ports are open if a firewall is in place. For more information about the ports to open, see link:{URLTopologies}/container-topologies[Container topologies] in _{TitleTopologies}_. + +[IMPORTANT] +==== +Storing container images on an NFS share is not supported by Podman. To use an NFS share for the user home directory, set up the Podman storage backend path outside of the NFS share. +For more information, see link:https://www.redhat.com/en/blog/rootless-podman-nfs[Rootless Podman and NFS]. +==== + +== {PlatformNameShort} system requirements + +Your system must meet the following minimum system requirements to install and run {PlatformName}. 
+ +include::snippets/cont-tested-system-config.adoc[] + +Each virtual machine (VM) has the following system requirements: + +include::snippets/cont-tested-vm-config.adoc[] + +[NOTE] +==== +If performing a bundled installation of the {GrowthTopology} with `hub_seed_collections=true`, then 32 GB RAM is recommended. Note that with this configuration the install time is going to increase and can take 45 or more minutes alone to complete seeding the collections. +==== + +== Database requirements + +{PlatformNameShort} can work with two varieties of database: + +. Database installed with {PlatformNameShort} - This database consists of a PostgreSQL installation done as part of an {PlatformNameShort} installation using PostgreSQL packages provided by Red Hat. +. Customer provided or configured database - This is an external database that is provided by the customer, whether on bare metal, virtual machine, container, or cloud hosted service. + +{PlatformNameShort} requires customer provided (external) database to have ICU support. + +[role="_additional-resources"] +.Additional resources + +* link:https://access.redhat.com/articles/4010491[{PlatformName} Database Scope of Coverage] + +* link:{URLContainerizedInstall}/aap-containerized-installation#setting-up-a-customer-provided-external-database[Setting up a customer provided (external) database] diff --git a/downstream/modules/platform/ref-container-resource-requirements.adoc b/downstream/modules/platform/ref-container-resource-requirements.adoc index 820c151767..cfcf8db06a 100644 --- a/downstream/modules/platform/ref-container-resource-requirements.adoc +++ b/downstream/modules/platform/ref-container-resource-requirements.adoc @@ -1,4 +1,6 @@ -[id="ref-container-resource-requirements"] +:_mod-docs-content-type: REFERENCE + +[id="ref-container-resource-requirements_{context}"] = Containers resource requirements @@ -12,7 +14,7 @@ By default, controlling a job takes one unit of capacity. The memory and CPU limits of the task container are used to determine the capacity of control nodes. For more information about how this is calculated, see link:https://docs.ansible.com/automation-controller/latest/html/userguide/jobs.html#resource-determination-for-capacity-algorithm[Resouce determination for capacity algorithm]. -See also xref:ref-schedule-jobs-worker-nodes[Jobs scheduled on the worker nodes] +See also link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/performance_considerations_for_operator_environments/index#ref-schedule-jobs-worker-nodes[Jobs scheduled on the worker nodes] [cols="30%,30%,30%",options="header"] diff --git a/downstream/modules/platform/ref-containerized-system-requirements.adoc b/downstream/modules/platform/ref-containerized-system-requirements.adoc new file mode 100644 index 0000000000..7fc8badba4 --- /dev/null +++ b/downstream/modules/platform/ref-containerized-system-requirements.adoc @@ -0,0 +1,10 @@ +:_mod-docs-content-type: REFERENCE + + + +// [id="ref-containerized-system-requirements_{context}"] + += System requirements for containerized installation + +For system requirements for the containerized installation method of {PlatformNameShort}, see +the link:{URLContainerizedInstall}/aap-containerized-installation#system-requirements[System requirements] section of _{TitleContainerizedInstall}_. 
\ No newline at end of file diff --git a/downstream/modules/platform/ref-containerized-troubleshoot-config.adoc b/downstream/modules/platform/ref-containerized-troubleshoot-config.adoc new file mode 100644 index 0000000000..2dc10a8ec6 --- /dev/null +++ b/downstream/modules/platform/ref-containerized-troubleshoot-config.adoc @@ -0,0 +1,21 @@ +:_mod-docs-content-type: REFERENCE +[id="troubleshooting-containerized-ansible-automation-platform-configuration_{context}"] + += Troubleshooting containerized {PlatformNameShort} configuration + +*Sometimes the post install for seeding my {PlatformNameShort} content errors out* + +This could manifest itself as output similar to this: + +---- +TASK [infra.controller_configuration.projects : Configure Controller Projects | Wait for finish the projects creation] *************************************** +Friday 29 September 2023 11:02:32 +0100 (0:00:00.443) 0:00:53.521 ****** +FAILED - RETRYING: [daap1.lan]: Configure Controller Projects | Wait for finish the projects creation (1 retries left). +failed: [daap1.lan] (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': '536962174348.33944', 'results_file': '/home/aap/.ansible_async/536962174348.33944', 'changed': False, '__controller_project_item': {'name': 'AAP Config-As-Code Examples', 'organization': 'Default', 'scm_branch': 'main', 'scm_clean': 'no', 'scm_delete_on_update': 'no', 'scm_type': 'git', 'scm_update_on_launch': 'no', 'scm_url': 'https://github.com/user/repo.git'}, 'ansible_loop_var': '__controller_project_item'}) => {"__projects_job_async_results_item": {"__controller_project_item": {"name": "AAP Config-As-Code Examples", "organization": "Default", "scm_branch": "main", "scm_clean": "no", "scm_delete_on_update": "no", "scm_type": "git", "scm_update_on_launch": "no", "scm_url": "https://github.com/user/repo.git"}, "ansible_job_id": "536962174348.33944", "ansible_loop_var": "__controller_project_item", "changed": false, "failed": 0, "finished": 0, "results_file": "/home/aap/.ansible_async/536962174348.33944", "started": 1}, "ansible_job_id": "536962174348.33944", "ansible_loop_var": "__projects_job_async_results_item", "attempts": 30, "changed": false, "finished": 0, "results_file": "/home/aap/.ansible_async/536962174348.33944", "started": 1, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} +---- + +The `infra.controller_configuration.dispatch` role uses an asynchronous loop with 30 retries to apply each configuration type, and the default delay between retries is 1 second. If the configuration is large, this might not be enough time to apply everything before the last retry occurs. + +Increase the retry delay by setting the `controller_configuration_async_delay` variable to 2 seconds for example. You can set this variable in the `[all:vars]` section of the installation program inventory file. + +Re-run the installation program to ensure everything works as expected. diff --git a/downstream/modules/platform/ref-containerized-troubleshoot-diagnosing.adoc b/downstream/modules/platform/ref-containerized-troubleshoot-diagnosing.adoc new file mode 100644 index 0000000000..fb8d42e8f5 --- /dev/null +++ b/downstream/modules/platform/ref-containerized-troubleshoot-diagnosing.adoc @@ -0,0 +1,124 @@ +:_mod-docs-content-type: REFERENCE +[id="diagnosing-the-problem_{context}"] + += Diagnosing the problem + +For general container-based troubleshooting, you can inspect the container logs for any running service to help troubleshoot underlying issues. 
+ +*Identifying the running containers* + +To get a list of the running container names run the following command: + +---- +$ podman ps --all --format "{{.Names}}" +---- + +Example output: + +---- +postgresql +redis-unix +redis-tcp +receptor +automation-controller-rsyslog +automation-controller-task +automation-controller-web +automation-eda-api +automation-eda-daphne +automation-eda-web +automation-eda-worker-1 +automation-eda-worker-2 +automation-eda-activation-worker-1 +automation-eda-activation-worker-2 +automation-eda-scheduler +automation-gateway-proxy +automation-gateway +automation-hub-api +automation-hub-content +automation-hub-web +automation-hub-worker-1 +automation-hub-worker-2 +---- + +*Inspecting the logs* + +To inspect any running container logs, run the `journalctl` command: + +---- +$ journalctl CONTAINER_NAME= +---- + +Example command with output: + +---- +$ journalctl CONTAINER_NAME=automation-gateway-proxy + +Oct 08 01:40:12 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 00:40:12.885][2][info][upstream] [external/envoy/source/common/upstream/cds_ap> +Oct 08 01:40:12 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 00:40:12.885][2][info][upstream] [external/envoy/source/common/upstream/cds_ap> +Oct 08 01:40:19 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T00:40:16.753Z] "GET /up HTTP/1.1" 200 - 0 1138 10 0 "192.0.2.1" "python-> +---- + +To view the logs of a running container in real-time, run the `podman logs -f` command: + +---- +$ podman logs -f +---- + +*Controlling container operations* + +You can control operations for a container by running the `systemctl` command: + +---- +$ systemctl --user status +---- + +Example command with output: + +---- +$ systemctl --user status automation-gateway-proxy +● automation-gateway-proxy.service - Podman automation-gateway-proxy.service + Loaded: loaded (/home/user/.config/systemd/user/automation-gateway-proxy.service; enabled; preset: disabled) + Active: active (running) since Mon 2024-10-07 12:39:23 BST; 23h ago + Docs: man:podman-generate-systemd(1) + Process: 780 ExecStart=/usr/bin/podman start automation-gateway-proxy (code=exited, status=0/SUCCESS) + Main PID: 1919 (conmon) + Tasks: 1 (limit: 48430) + Memory: 852.0K + CPU: 2.996s + CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/automation-gateway-proxy.service + └─1919 /usr/bin/conmon --api-version 1 -c 2dc3c7b2cecd73010bad1e0aaa806015065f92556ed3591c9d2084d7ee209c7a -u 2dc3c7b2cecd73010bad1e0aaa80> +Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:02.926Z] "GET /api/galaxy/_ui/v1/settings/ HTTP/1.1" 200 - 0 654 58 47 "> +Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:03.387Z] "GET /api/controller/v2/config/ HTTP/1.1" 200 - 0 4018 58 44 "1> +Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:03.370Z] "GET /api/galaxy/v3/plugin/ansible/search/collection-versions/?> +Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:03.405Z] "GET /api/controller/v2/organizations/?role_level=notification_> +Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.366Z] "GET /api/galaxy/_ui/v1/me/ HTTP/1.1" 200 - 0 1368 79 40 "192.1> +Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.360Z] "GET /api/controller/v2/workflow_approvals/?page_size=200&statu> +Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.379Z] "GET 
/api/controller/v2/job_templates/7/ HTTP/1.1" 200 - 0 1356> +Oct 08 11:44:10 aap.example.org automation-gateway-proxy[1919]: [2024-10-08T10:44:04.378Z] "GET /api/galaxy/_ui/v1/feature-flags/ HTTP/1.1" 200 - 0 207 81> +Oct 08 11:44:13 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 10:44:13.856][2][info][upstream] [external/envoy/source/common/upstream/cds_ap> +Oct 08 11:44:13 aap.example.org automation-gateway-proxy[1919]: [2024-10-08 10:44:13.856][2][info][upstream] [external/envoy/source/common/upstream/cds_ap +---- + +*Getting container information about the execution plane* + +To get container information about {ControllerName}, {EDAName}, and `execution_nodes` nodes, prefix any Podman commands with either: + +---- +CONTAINER_HOST=unix://run/user//podman/podman.sock +---- + +or + +---- +CONTAINERS_STORAGE_CONF=/aap/containers/storage.conf +---- + +Example with output: + +---- +$ CONTAINER_HOST=unix://run/user/1000/podman/podman.sock podman images + +REPOSITORY TAG IMAGE ID CREATED SIZE +registry.redhat.io/ansible-automation-platform-25/ee-supported-rhel8 latest 59d1bc680a7c 6 days ago 2.24 GB +registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel8 latest a64b9fc48094 6 days ago 338 MB +---- diff --git a/downstream/modules/platform/ref-containerized-troubleshoot-install.adoc b/downstream/modules/platform/ref-containerized-troubleshoot-install.adoc new file mode 100644 index 0000000000..b5e829eb53 --- /dev/null +++ b/downstream/modules/platform/ref-containerized-troubleshoot-install.adoc @@ -0,0 +1,60 @@ +:_mod-docs-content-type: REFERENCE +[id="troubleshooting-containerized-ansible-automation-platform-installation_{context}"] + += Troubleshooting containerized {PlatformNameShort} installation + +*The installation takes a long time, or has errors, what should I check?* + +. Ensure your system meets the minimum requirements as outlined in link:{URLContainerizedInstall}/aap-containerized-installation#system-requirements[System requirements]. Factors such as improper storage choices and high latency when distributing across many hosts will all have an impact on installation time. + +. Review the installation log file which is located by default at `./aap_install.log`. You can change the log file location within the `ansible.cfg` file in the installation directory. + +. Enable task profiling callbacks on an ad hoc basis to give an overview of where the installation program spends the most time. To do this, use the local `ansible.cfg` file. Add a callback line under the `[defaults]` section, for example: + +---- +$ cat ansible.cfg +[defaults] +callbacks_enabled = ansible.posix.profile_tasks +---- + +*{ControllerNameStart} returns an error of 413* + +This error happens when `manifest.zip` license files that are larger than the `nginx_client_max_body_size` setting. If this error occurs, change the inventory file to include the following variables: + +---- +nginx_disable_hsts=false +nginx_http_port=8081 +nginx_https_port=8444 +nginx_client_max_body_size=20m +nginx_user_headers=[] +---- + +The current default setting of `20m` should prevent this issue. + +*When attempting to install containerized {PlatformNameShort} in {AWS} you receive output that there is no space left on device* + +---- +TASK [ansible.containerized_installer.automationcontroller : Create the receptor container] *************************************************** +fatal: [ec2-13-48-25-168.eu-north-1.compute.amazonaws.com]: FAILED! 
=> {"changed": false, "msg": "Can't create container receptor", "stderr": "Error: creating container storage: creating an ID-mapped copy of layer \"98955f43cc908bd50ff43585fec2c7dd9445eaf05eecd1e3144f93ffc00ed4ba\": error during chown: storage-chown-by-maps: lchown usr/local/lib/python3.9/site-packages/azure/mgmt/network/v2019_11_01/operations/__pycache__/_available_service_aliases_operations.cpython-39.pyc: no space left on device: exit status 1\n", "stderr_lines": ["Error: creating container storage: creating an ID-mapped copy of layer \"98955f43cc908bd50ff43585fec2c7dd9445eaf05eecd1e3144f93ffc00ed4ba\": error during chown: storage-chown-by-maps: lchown usr/local/lib/python3.9/site-packages/azure/mgmt/network/v2019_11_01/operations/__pycache__/_available_service_aliases_operations.cpython-39.pyc: no space left on device: exit status 1"], "stdout": "", "stdout_lines": []} +---- + +If you are installing a `/home` filesystem into a default {AWS} marketplace RHEL instance, it might be too small since `/home` is part of the root `/` filesystem. To resolve this issue you must make more space available. For more information about the system requirements, see link:{URLContainerizedInstall}/aap-containerized-installation#system-requirements[System requirements]. + +*"Install container tools" task fails due to unavailable packages* + +This error can be seen in the installation process output as the following: + +---- +TASK [ansible.containerized_installer.common : Install container tools] ********************************************************************************************************** +fatal: [192.0.2.1]: FAILED! => {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []} +fatal: [192.0.2.2]: FAILED! => {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []} +fatal: [192.0.2.3]: FAILED! => {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []} +fatal: [192.0.2.4]: FAILED! => {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []} +fatal: [192.0.2.5]: FAILED! 
=> {"changed": false, "failures": ["No package crun available.", "No package podman available.", "No package slirp4netns available.", "No package fuse-overlayfs available."], "msg": "Failed to install some of the specified packages", "rc": 1, "results": []} +---- + +To fix this error, run the following command on the target hosts: + +---- +sudo subscription-manager register +---- diff --git a/downstream/modules/platform/ref-containerized-troubleshoot-ref.adoc b/downstream/modules/platform/ref-containerized-troubleshoot-ref.adoc new file mode 100644 index 0000000000..0fc287132a --- /dev/null +++ b/downstream/modules/platform/ref-containerized-troubleshoot-ref.adoc @@ -0,0 +1,319 @@ +:_mod-docs-content-type: REFERENCE + +[id="containerized-ansible-automation-platform-reference"] + += Containerized {PlatformNameShort} reference + +*Can you give details of the architecture for the {PlatformNameShort} containerized design?* + +We use as much of the underlying native {RHEL} technology as possible. Podman is used for the container runtime and management of services. + +Use `podman ps` to list the running containers on the system: + +---- +$ podman ps + +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +88ed40495117 registry.redhat.io/rhel8/postgresql-13:latest run-postgresql 48 minutes ago Up 47 minutes postgresql +8f55ba612f04 registry.redhat.io/rhel8/redis-6:latest run-redis 47 minutes ago Up 47 minutes redis +56c40445c590 registry.redhat.io/ansible-automation-platform-24/ee-supported-rhel8:latest /usr/bin/receptor... 47 minutes ago Up 47 minutes receptor +f346f05d56ee registry.redhat.io/ansible-automation-platform-24/controller-rhel8:latest /usr/bin/launch_a... 47 minutes ago Up 45 minutes automation-controller-rsyslog +26e3221963e3 registry.redhat.io/ansible-automation-platform-24/controller-rhel8:latest /usr/bin/launch_a... 46 minutes ago Up 45 minutes automation-controller-task +c7ac92a1e8a1 registry.redhat.io/ansible-automation-platform-24/controller-rhel8:latest /usr/bin/launch_a... 46 minutes ago Up 28 minutes automation-controller-web +---- + +Use `podman images` to display information about locally stored images: + +---- +$ podman images + +REPOSITORY TAG IMAGE ID CREATED SIZE +registry.redhat.io/ansible-automation-platform-24/ee-supported-rhel8 latest b497bdbee59e 10 days ago 3.16 GB +registry.redhat.io/ansible-automation-platform-24/controller-rhel8 latest ed8ebb1c1baa 10 days ago 1.48 GB +registry.redhat.io/rhel8/redis-6 latest 78905519bb05 2 weeks ago 357 MB +registry.redhat.io/rhel8/postgresql-13 latest 9b65bc3d0413 2 weeks ago 765 MB +---- + +Containerized {PlatformNameShort} runs as rootless containers for enhanced security by default. This means you can install containerized {PlatformNameShort} by using any local unprivileged user account. Privilege escalation is only needed for certain root level tasks, and by default is not needed to use root directly. + +The installation program adds the following files to the filesystem where you run the installation program on the underlying {RHEL} host: + +---- +$ tree -L 1 + . + ├── aap_install.log + ├── ansible.cfg + ├── collections + ├── galaxy.yml + ├── inventory + ├── LICENSE + ├── meta + ├── playbooks + ├── plugins + ├── README.md + ├── requirements.yml + ├── roles +---- + +The installation root directory includes other containerized services that make use of Podman volumes. 
+
+Here are some examples for further reference:
+
+The `containers` directory includes some of the Podman specifics used and installed for the execution plane:
+
+----
+ containers/
+ ├── podman
+ ├── storage
+ │   ├── defaultNetworkBackend
+ │   ├── libpod
+ │   ├── networks
+ │   ├── overlay
+ │   ├── overlay-containers
+ │   ├── overlay-images
+ │   ├── overlay-layers
+ │   ├── storage.lock
+ │   └── userns.lock
+ └── storage.conf
+----
+
+The `controller` directory has some of the installed configuration and runtime data points:
+
+----
+ controller/
+ ├── data
+ │   ├── job_execution
+ │   ├── projects
+ │   └── rsyslog
+ ├── etc
+ │   ├── conf.d
+ │   ├── launch_awx_task.sh
+ │   ├── settings.py
+ │   ├── tower.cert
+ │   └── tower.key
+ ├── nginx
+ │   └── etc
+ ├── rsyslog
+ │   └── run
+ └── supervisor
+     └── run
+----
+
+The `receptor` directory has the {AutomationMesh} configuration:
+
+----
+ receptor/
+ ├── etc
+ │   └── receptor.conf
+ └── run
+     ├── receptor.sock
+     └── receptor.sock.lock
+----
+
+After installation, you can also find other files in the local user's `/home` directory, such as the `.cache` directory:
+
+----
+ .cache/
+ ├── containers
+ │   └── short-name-aliases.conf.lock
+ └── rhsm
+     └── rhsm.log
+----
+
+Because services run under rootless Podman by default, you can also run other services, such as `systemd`, as a non-privileged user. Under `systemd`, you can see some of the available component service controls:
+
+The `.config` directory:
+
+----
+ .config/
+ ├── cni
+ │   └── net.d
+ │       └── cni.lock
+ ├── containers
+ │   ├── auth.json
+ │   └── containers.conf
+ └── systemd
+     └── user
+         ├── automation-controller-rsyslog.service
+         ├── automation-controller-task.service
+         ├── automation-controller-web.service
+         ├── default.target.wants
+         ├── podman.service.d
+         ├── postgresql.service
+         ├── receptor.service
+         ├── redis.service
+         └── sockets.target.wants
+----
+
+This is specific to Podman and conforms to the Open Container Initiative (OCI) specifications. When you run Podman as the root user, `/var/lib/containers` is used by default. For standard users, the hierarchy under `$HOME/.local` is used.
+
+The `.local` directory:
+
+----
+ .local/
+ └── share
+     └── containers
+         ├── cache
+         ├── podman
+         └── storage
+----
+
+For example, `.local/share/containers/storage/volumes` contains the volumes that `podman volume ls` lists:
+
+----
+$ podman volume ls
+
+DRIVER      VOLUME NAME
+local       d73d3fe63a957bee04b4853fd38c39bf37c321d14fdab9ee3c9df03645135788
+local       postgresql
+local       redis_data
+local       redis_etc
+local       redis_run
+----
+
+The execution plane is isolated from the main control plane services to ensure that it cannot affect them.
+
+Control plane services run with the standard Podman configuration and can be found in `~/.local/share/containers/storage`.
+
+Execution plane services ({ControllerName}, {EDAName}, and execution nodes) use a dedicated configuration found in `~/aap/containers/storage.conf`. This separation prevents execution plane containers from affecting the control plane services.
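+
+As an illustrative check of this separation, you can list the two container sets side by side. This is a sketch only: the `--format` template matches the examples later in this module, and the UID `1000` is an assumption:
+
+----
+# Control plane containers (default rootless Podman storage)
+$ podman ps --format "{{.Names}}"
+
+# Execution plane containers (dedicated Podman socket; the UID 1000 is an assumption)
+$ CONTAINER_HOST=unix://run/user/1000/podman/podman.sock podman ps --format "{{.Names}}"
+----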
+
+You can view the execution plane configuration with one of the following commands:
+
+----
+CONTAINERS_STORAGE_CONF=~/aap/containers/storage.conf podman
+----
+
+----
+CONTAINER_HOST=unix://run/user//podman/podman.sock podman
+----
+
+
+*How can I see host resource utilization statistics?*
+
+Run the following command to display host resource utilization statistics:
+
+----
+$ podman container stats -a
+----
+
+Example output based on a containerized {PlatformNameShort} solution sold and offered by Dell (DAAP) that uses about 1.8 GB of RAM:
+
+----
+ID            NAME                           CPU %   MEM USAGE / LIMIT   MEM %   NET IO   BLOCK IO   PIDS   CPU TIME     AVG CPU %
+0d5d8eb93c18  automation-controller-web      0.23%   959.1MB / 3.761GB   25.50%  0B / 0B  0B / 0B    16     20.885142s   1.19%
+3429d559836d  automation-controller-rsyslog  0.07%   144.5MB / 3.761GB   3.84%   0B / 0B  0B / 0B    6      4.099565s    0.23%
+448d0bae0942  automation-controller-task     1.51%   633.1MB / 3.761GB   16.83%  0B / 0B  0B / 0B    33     34.285272s   1.93%
+7f140e65b57e  receptor                       0.01%   5.923MB / 3.761GB   0.16%   0B / 0B  0B / 0B    7      1.010613s    0.06%
+c1458367ca9c  redis                          0.48%   10.52MB / 3.761GB   0.28%   0B / 0B  0B / 0B    5      9.074042s    0.47%
+ef712cc2dc89  postgresql                     0.09%   21.88MB / 3.761GB   0.58%   0B / 0B  0B / 0B    21     15.571059s   0.80%
+----
+
+*How much storage is used and where?*
+
+The container volume storage is under the local user at `$HOME/.local/share/containers/storage/volumes`.
+
+. To list the volumes, run the following command:
++
+----
+$ podman volume ls
+----
++
+. Run the following command to display detailed information about a specific volume:
++
+----
+$ podman volume inspect
+----
+
+For example:
+
+----
+$ podman volume inspect postgresql
+----
+
+Example output:
+----
+[
+    {
+        "Name": "postgresql",
+        "Driver": "local",
+        "Mountpoint": "/home/aap/.local/share/containers/storage/volumes/postgresql/_data",
+        "CreatedAt": "2024-01-08T23:39:24.983964686Z",
+        "Labels": {},
+        "Scope": "local",
+        "Options": {},
+        "MountCount": 0,
+        "NeedsCopyUp": true
+    }
+]
+----
+
+Several files created by the installation program are located in `$HOME/aap/` and bind-mounted into various running containers.
+
+. To view the mounts associated with a container, run the following command:
++
+----
+$ podman ps --format "{{.ID}}\t{{.Command}}\t{{.Names}}"
+----
++
+Example output:
++
+----
+89e779b81b83      run-postgresql        postgresql
+4c33cc77ef7d      run-redis             redis
+3d8a028d892d      /usr/bin/receptor...  receptor
+09821701645c      /usr/bin/launch_a...  automation-controller-rsyslog
+a2ddb5cac71b      /usr/bin/launch_a...  automation-controller-task
+fa0029a3b003      /usr/bin/launch_a...  automation-controller-web
+20f192534691      gunicorn --bind 1...  automation-eda-api
+f49804c7e6cb      daphne -b 127.0.0...  automation-eda-daphne
+d340b9c1cb74      /bin/sh -c nginx ...  automation-eda-web
+111f47de5205      aap-eda-manage rq...  automation-eda-worker-1
+171fcb1785af      aap-eda-manage rq...  automation-eda-worker-2
+049d10555b51      aap-eda-manage rq...  automation-eda-activation-worker-1
+7a78a41a8425      aap-eda-manage rq...  automation-eda-activation-worker-2
+da9afa8ef5e2      aap-eda-manage sc...  automation-eda-scheduler
+8a2958be9baf      gunicorn --name p...  automation-hub-api
+0a8b57581749      gunicorn --name p...  automation-hub-content
+68005b987498      nginx -g daemon o...  automation-hub-web
+cb07af77f89f      pulpcore-worker       automation-hub-worker-1
+a3ba05136446      pulpcore-worker       automation-hub-worker-2
+----
++
+
+. Run the following command:
++
+----
+$ podman inspect | jq -r .[].Mounts[].Source
+----
++
+Example output:
++
+----
+/home/aap/.local/share/containers/storage/volumes/receptor_run/_data
+/home/aap/.local/share/containers/storage/volumes/redis_run/_data
+/home/aap/aap/controller/data/rsyslog
+/home/aap/aap/controller/etc/tower.key
+/home/aap/aap/controller/etc/conf.d/callback_receiver_workers.py
+/home/aap/aap/controller/data/job_execution
+/home/aap/aap/controller/nginx/etc/controller.conf
+/home/aap/aap/controller/etc/conf.d/subscription_usage_model.py
+/home/aap/aap/controller/etc/conf.d/cluster_host_id.py
+/home/aap/aap/controller/etc/conf.d/insights.py
+/home/aap/aap/controller/rsyslog/run
+/home/aap/aap/controller/data/projects
+/home/aap/aap/controller/etc/settings.py
+/home/aap/aap/receptor/etc/receptor.conf
+/home/aap/aap/controller/etc/conf.d/execution_environments.py
+/home/aap/aap/tls/extracted
+/home/aap/aap/controller/supervisor/run
+/home/aap/aap/controller/etc/uwsgi.ini
+/home/aap/aap/controller/etc/conf.d/container_groups.py
+/home/aap/aap/controller/etc/launch_awx_task.sh
+/home/aap/aap/controller/etc/tower.cert
+----
++
+. If the `jq` RPM is not installed, install it by running the following command:
++
+----
+$ sudo dnf -y install jq
+----
diff --git a/downstream/modules/platform/ref-controller-aap-template.adoc b/downstream/modules/platform/ref-controller-aap-template.adoc
index 822c435e56..29e27d7299 100644
--- a/downstream/modules/platform/ref-controller-aap-template.adoc
+++ b/downstream/modules/platform/ref-controller-aap-template.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="controller-aap-template"]
 
 = {PlatformName}
diff --git a/downstream/modules/platform/ref-controller-access-azure-resources-in-playbook.adoc b/downstream/modules/platform/ref-controller-access-azure-resources-in-playbook.adoc
new file mode 100644
index 0000000000..08c234808d
--- /dev/null
+++ b/downstream/modules/platform/ref-controller-access-azure-resources-in-playbook.adoc
@@ -0,0 +1,15 @@
+[id="ref-controller-access-azure-resources-in-playbook"]
+
+= Access {Azure} Resource Manager credentials in an Ansible Playbook
+
+You can get {Azure} credential parameters from a job runtime environment:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+vars:
+  azure:
+    client_id: '{{ lookup("env", "AZURE_CLIENT_ID") }}'
+    secret: '{{ lookup("env", "AZURE_SECRET") }}'
+    tenant: '{{ lookup("env", "AZURE_TENANT") }}'
+    subscription_id: '{{ lookup("env", "AZURE_SUBSCRIPTION_ID") }}'
+----
diff --git a/downstream/modules/platform/ref-controller-access-controller-creds-in-playbook.adoc b/downstream/modules/platform/ref-controller-access-controller-creds-in-playbook.adoc
new file mode 100644
index 0000000000..8ef40dd6fd
--- /dev/null
+++ b/downstream/modules/platform/ref-controller-access-controller-creds-in-playbook.adoc
@@ -0,0 +1,14 @@
+[id="ref-controller-access-controller-creds-in-playbook"]
+
+= Access {ControllerName} credentials in an Ansible Playbook
+
+You can get the host, username, and password parameters from a job runtime environment:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+vars:
+  controller:
+    host: '{{ lookup("env", "CONTROLLER_HOST") }}'
+    username: '{{ lookup("env", "CONTROLLER_USERNAME") }}'
+    password: '{{ lookup("env", "CONTROLLER_PASSWORD") }}'
+----
diff --git a/downstream/modules/platform/ref-controller-access-ec2-credentials-in-playbook.adoc b/downstream/modules/platform/ref-controller-access-ec2-credentials-in-playbook.adoc
new file mode 100644
index 0000000000..291f8b6944
--- /dev/null
+++ b/downstream/modules/platform/ref-controller-access-ec2-credentials-in-playbook.adoc
@@ -0,0 +1,14 @@
+[id="ref-controller-access-ec2-credentials-in-playbook"]
+
+= Access Amazon EC2 credentials in an Ansible Playbook
+
+You can get AWS credential parameters from a job runtime environment:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+vars:
+  aws:
+    access_key: '{{ lookup("env", "AWS_ACCESS_KEY_ID") }}'
+    secret_key: '{{ lookup("env", "AWS_SECRET_ACCESS_KEY") }}'
+    security_token: '{{ lookup("env", "AWS_SECURITY_TOKEN") }}'
+----
diff --git a/downstream/modules/platform/ref-controller-access-network-creds-playbook.adoc b/downstream/modules/platform/ref-controller-access-network-creds-playbook.adoc
new file mode 100644
index 0000000000..a329dbde51
--- /dev/null
+++ b/downstream/modules/platform/ref-controller-access-network-creds-playbook.adoc
@@ -0,0 +1,13 @@
+[id="ref-controller-access-network-creds-playbook"]
+
+= Access network credentials in an Ansible Playbook
+
+You can get the username and password parameters from a job runtime environment:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+vars:
+  network:
+    username: '{{ lookup("env", "ANSIBLE_NET_USERNAME") }}'
+    password: '{{ lookup("env", "ANSIBLE_NET_PASSWORD") }}'
+----
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-controller-access-virt-creds-in-playbook.adoc b/downstream/modules/platform/ref-controller-access-virt-creds-in-playbook.adoc
new file mode 100644
index 0000000000..3b1c40a88e
--- /dev/null
+++ b/downstream/modules/platform/ref-controller-access-virt-creds-in-playbook.adoc
@@ -0,0 +1,46 @@
+[id="ref-controller-access-virt-creds-in-playbook"]
+
+= Access virtualization credentials in an Ansible Playbook
+
+You can get the Red Hat Virtualization credential parameters from a job runtime environment:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+vars:
+  ovirt:
+    ovirt_url: '{{ lookup("env", "OVIRT_URL") }}'
+    ovirt_username: '{{ lookup("env", "OVIRT_USERNAME") }}'
+    ovirt_password: '{{ lookup("env", "OVIRT_PASSWORD") }}'
+----
+
+The `file` and `env` injectors for Red Hat Virtualization are as follows:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+ManagedCredentialType(
+    namespace='rhv',
+
+....
+....
+....
+
+injectors={
+    # The duplication here is intentional; the ovirt4 inventory plugin
+    # writes a .ini file for authentication, while the ansible modules for
+    # ovirt4 use a separate authentication process that supports
+    # environment variables; by injecting both, we support both
+    'file': {
+        'template': '\n'.join(
+            [
+                '[ovirt]',
+                'ovirt_url={{host}}',
+                'ovirt_username={{username}}',
+                'ovirt_password={{password}}',
+                '{% if ca_file %}ovirt_ca_file={{ca_file}}{% endif %}',
+            ]
+        )
+    },
+    'env': {'OVIRT_INI_PATH': '{{tower.filename}}', 'OVIRT_URL': '{{host}}', 'OVIRT_USERNAME': '{{username}}', 'OVIRT_PASSWORD': '{{password}}'},
+    },
+)
+----
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-controller-access-vmware-creds-in-playbook.adoc b/downstream/modules/platform/ref-controller-access-vmware-creds-in-playbook.adoc
new file mode 100644
index 0000000000..b171a63d61
--- /dev/null
+++ b/downstream/modules/platform/ref-controller-access-vmware-creds-in-playbook.adoc
@@ -0,0 +1,14 @@
+[id="ref-controller-access-vmware-creds-in-playbook"]
+
+= Access VMware vCenter credentials in an Ansible Playbook
+
+You can get VMware vCenter credential parameters from a job runtime environment:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+vars:
+  vmware:
+    host: '{{ lookup("env", "VMWARE_HOST") }}'
+    username: '{{ lookup("env", "VMWARE_USER") }}'
+    password: '{{ lookup("env", "VMWARE_PASSWORD") }}'
+----
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-controller-activity-stream-schema.adoc b/downstream/modules/platform/ref-controller-activity-stream-schema.adoc
index 7a5c189d9e..f27fbccc62 100644
--- a/downstream/modules/platform/ref-controller-activity-stream-schema.adoc
+++ b/downstream/modules/platform/ref-controller-activity-stream-schema.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-activity-stream-schema"]
 
 = Activity stream schema
diff --git a/downstream/modules/platform/ref-controller-additional-build-files.adoc b/downstream/modules/platform/ref-controller-additional-build-files.adoc
index c2de464fac..5925c1fc8b 100644
--- a/downstream/modules/platform/ref-controller-additional-build-files.adoc
+++ b/downstream/modules/platform/ref-controller-additional-build-files.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-additional-build-files"]
 
 = additional_build_files
diff --git a/downstream/modules/platform/ref-controller-additional-build-steps.adoc b/downstream/modules/platform/ref-controller-additional-build-steps.adoc
index 3aed82ba34..ceb4ea78ea 100644
--- a/downstream/modules/platform/ref-controller-additional-build-steps.adoc
+++ b/downstream/modules/platform/ref-controller-additional-build-steps.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-additional-build-steps"]
 
 = additional_build_steps
diff --git a/downstream/modules/platform/ref-controller-allow-jinja-in-extra-vars.adoc b/downstream/modules/platform/ref-controller-allow-jinja-in-extra-vars.adoc
index 34ea44dfa6..4883800908 100644
--- a/downstream/modules/platform/ref-controller-allow-jinja-in-extra-vars.adoc
+++ b/downstream/modules/platform/ref-controller-allow-jinja-in-extra-vars.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-allow-jinja-in-extra-vars"]
 
 = The ALLOW_JINJA_IN_EXTRA_VARS variable
diff --git a/downstream/modules/platform/ref-controller-amazon-web-services.adoc b/downstream/modules/platform/ref-controller-amazon-web-services.adoc
index 2602df8e75..e7fce3d825 100644
--- a/downstream/modules/platform/ref-controller-amazon-web-services.adoc
+++ b/downstream/modules/platform/ref-controller-amazon-web-services.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="controller-amazon-web-services"]
 
 = {AWS} EC2
diff --git a/downstream/modules/platform/ref-controller-analytics-gathering.adoc b/downstream/modules/platform/ref-controller-analytics-gathering.adoc
index 2bc4cacc31..150c2702a1 100644
--- a/downstream/modules/platform/ref-controller-analytics-gathering.adoc
+++ b/downstream/modules/platform/ref-controller-analytics-gathering.adoc
@@ -1,22 +1,17 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-analytics-gathering"]
 
 = Analytics gathering
 
 Use this command to gather analytics on-demand outside of the predefined window (the default is 4 hours):
 
-[literal, options="nowrap" subs="+attributes"]
-----
-$ awx-manage gather_analytics --ship
-----
+`$ awx-manage gather_analytics --ship`
 
-For customers with disconnected environments who want to collect usage information about unique hosts automated across a time period, use this
-command:
+For customers with disconnected environments who want to collect usage information about unique hosts automated across a time period, use this command:
 
-[literal, options="nowrap" subs="+attributes"]
-----
-awx-manage host_metric --since YYYY-MM-DD --until YYYY-MM-DD --json
-----
+`awx-manage host_metric --since YYYY-MM-DD --json`
 
-The parameters `--since` and `--until` specify date ranges and are optional, but one of them has to be present.
+The `--since` parameter is optional. The `--json` flag specifies the output format and is optional.
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-controller-analytics-reports.adoc b/downstream/modules/platform/ref-controller-analytics-reports.adoc
index 54eda6976d..a7013a27cb 100644
--- a/downstream/modules/platform/ref-controller-analytics-reports.adoc
+++ b/downstream/modules/platform/ref-controller-analytics-reports.adoc
@@ -1,25 +1,36 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-analytics-reports"]
 
 = Analytics Reports
 
+//[ddacosta - removed to reflect current environment but this might be updated in the product later and this statement could be added back.]
+//Reports from collection are accessible through the {ControllerName} UI if you have superuser-level permissions.
+//By including the analytics view on-prem where it is most convenient, you can access data that can affect your day-to-day work.
+//This data is aggregated from the automation provided on link:https://console.redhat.com[{Console}].
+
+Reports for collected data are available through link:https://console.redhat.com[{Console}].
 
-Reports from collection are accessible through the {ControllerName} UI if you have superuser-level permissions.
-By including the analytics view on-prem where it is most convenient, you can access data that can affect your day-to-day work.
-This data is aggregated from the automation provided on link:https://console.redhat.com[{Console}].
+Other {Analytics} data currently available and accessible through the platform UI includes the following:
 
-Currently available is a view-only version of the Automation Calculator utility that shows a report that represents (possible) savings to the subscriber.
+*Automation Calculator* is a view-only version of the Automation Calculator utility that shows a report representing possible savings to the subscriber.
 
 image:aa-automation-calculator.png[Automation calculator]
 
-[NOTE]
-====
-This option is available for technical preview and is subject to change in a future release.
-To preview the analytic reports view, set the *Enable Preview of New User Interface* toggle to *On* from the *Miscellaneous System Settings* option of the {MenuAEAdminSettings} menu.
+*Host Metrics* is an analytics report collected for host data, such as when each host was first automated, when it was most recently automated, how many times it was automated, and how many times it has been deleted.
+
+*Subscription Usage* reports the historical usage of your subscription. Subscription capacity and licenses consumed per month are displayed, with the ability to filter by the last year, two years, or three years.
+
+//I don't think this is included
+//[NOTE]
+//====
+//This option is available for technical preview and is subject to change in a future release.
+//To preview the analytic reports view, set the *Enable Preview of New User Interface* toggle to *On* from the *Miscellaneous System Settings* option of the {MenuAEAdminSettings} menu.
 
-After saving, logout and log back in to access the options under the *Analytics* section on the navigation panel.
+//After saving, logout and log back in to access the options under the *Analytics* section on the navigation panel.
 
-image:aa-options-navbar.png[Navigation panel]
-====
+//image:aa-options-navbar.png[Navigation panel]
+//====
 
-Host Metrics is another analytics report collected for host data.
-The ability to access this option from this part of the UI is currently in tech preview and is subject to change in a future release.
-For more information, see the _Host Metrics view_ in xref:controller-config[{ControllerNameStart} configuration].
+//Host Metrics is another analytics report collected for host data.
+//The ability to access this option from this part of the UI is currently in tech preview and is subject to change in a future release.
+//For more information, see the _Host Metrics view_ in xref:controller-config[{ControllerNameStart} configuration].
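+
+For instance, the host data behind the *Host Metrics* report can also be collected on demand with the CLI command shown in the Analytics gathering module; the date below is illustrative:
+
+----
+$ awx-manage host_metric --since 2024-01-01 --json
+----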
diff --git a/downstream/modules/platform/ref-controller-api-advanced-queries.adoc b/downstream/modules/platform/ref-controller-api-advanced-queries.adoc index 758f3d3f8d..669864d5f6 100644 --- a/downstream/modules/platform/ref-controller-api-advanced-queries.adoc +++ b/downstream/modules/platform/ref-controller-api-advanced-queries.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-api-advanced-queries"] = Advanced queries in the API diff --git a/downstream/modules/platform/ref-controller-api-config-settings.adoc b/downstream/modules/platform/ref-controller-api-config-settings.adoc index e1fea07e9f..3e61fcce6e 100644 --- a/downstream/modules/platform/ref-controller-api-config-settings.adoc +++ b/downstream/modules/platform/ref-controller-api-config-settings.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-api-config-settings"] = Configuration settings diff --git a/downstream/modules/platform/ref-controller-api-considerations.adoc b/downstream/modules/platform/ref-controller-api-considerations.adoc index 495730dada..bf1b107ed7 100644 --- a/downstream/modules/platform/ref-controller-api-considerations.adoc +++ b/downstream/modules/platform/ref-controller-api-considerations.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-api-considerations"] = Backwards-Compatible API considerations diff --git a/downstream/modules/platform/ref-controller-api-conventions.adoc b/downstream/modules/platform/ref-controller-api-conventions.adoc index 7da833c72f..42ad1bfecb 100644 --- a/downstream/modules/platform/ref-controller-api-conventions.adoc +++ b/downstream/modules/platform/ref-controller-api-conventions.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-api-conventions-in-API"] The API is versioned for compatibility reasons. diff --git a/downstream/modules/platform/ref-controller-api-field-lookups.adoc b/downstream/modules/platform/ref-controller-api-field-lookups.adoc index 7b6f7d9f8e..afe95b5184 100644 --- a/downstream/modules/platform/ref-controller-api-field-lookups.adoc +++ b/downstream/modules/platform/ref-controller-api-field-lookups.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-api-field-lookups"] = Field lookups @@ -11,7 +13,7 @@ You can use field lookups for more advanced queries, by appending the lookup to The following field lookups are supported: -* exact: Exact match (default lookup if not specified). +* exact: Exact match (default lookup if not specified, see the following note for more information). * iexact: Case-insensitive version of exact. * contains: Field contains value. * icontains: Case-insensitive version of contains. @@ -37,3 +39,21 @@ You can specify lists (for the `in` lookup) as a comma-separated list of values. Filtering based on the requesting user's level of access by query string parameter: * `role_level`: Level of role to filter on, such as `admin_role` + +[NOTE] +==== +Earlier releases of {PlatformNameShort} returned queries with *_exact* results by default. +As a workaround, set the `limit` to `?limit_exact` for the default filter. +For example, `/api/v2/jobs/?limit_exact=example.domain.com` results in: + +---- +{ + "count": 1, + "next": null, + "previous": null, + "results": [ +... 
+----
+====
+
+
diff --git a/downstream/modules/platform/ref-controller-app-token-functions.adoc b/downstream/modules/platform/ref-controller-app-token-functions.adoc
index 2ed67f6ca3..37e4ffb15b 100644
--- a/downstream/modules/platform/ref-controller-app-token-functions.adoc
+++ b/downstream/modules/platform/ref-controller-app-token-functions.adoc
@@ -1,5 +1,7 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-app-token-functions"]
 
-= Application Token Functions
+= Application token functions
 
-The `refresh` and `revoke` functions associated with tokens, for tokens at the `/api/o/` endpoints can currently only be carried out with application tokens.
\ No newline at end of file
+The `refresh` and `revoke` functions associated with tokens at the `/o/` endpoints can currently only be carried out with application tokens.
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-controller-applications-getting-started.adoc b/downstream/modules/platform/ref-controller-applications-getting-started.adoc
index 513519c9b0..8f09b8f92c 100644
--- a/downstream/modules/platform/ref-controller-applications-getting-started.adoc
+++ b/downstream/modules/platform/ref-controller-applications-getting-started.adoc
@@ -1,12 +1,13 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-applications-getting-started"]
 
 = Getting started with OAuth Applications
 
-From the navigation panel, select {MenuAMAdminOauthApps}.
-The *OAuth Applications* page displays a searchable list of all available applications currently managed by {PlatformNameShort} and can that you can sort by *Name*.
+You can access the *OAuth Applications* page from the navigation panel by selecting {MenuAMAdminOauthApps}. From there you can view, create, sort, and search for applications currently managed by {PlatformNameShort} and {ControllerName}.
 
 //image:apps-list-view-examples.png[Applications- with example apps]
 
-If no applications exist, you are requested to create applications.
+If no applications exist, you can create one by clicking btn:[Create OAuth application].
 
 //image:apps-list-view-empty.png[Add applications]
diff --git a/downstream/modules/platform/ref-controller-approval-nodes.adoc b/downstream/modules/platform/ref-controller-approval-nodes.adoc
index 1de6f64a6a..f220283ce5 100644
--- a/downstream/modules/platform/ref-controller-approval-nodes.adoc
+++ b/downstream/modules/platform/ref-controller-approval-nodes.adoc
@@ -1,8 +1,10 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="controller-approval-nodes"]
 
 = Approval nodes
 
-Choosing an *Approval* node requires your intervention in order to advance the workflow.
+Choosing an *Approval* node requires your intervention to advance a workflow.
 This functions as a means to pause the workflow in between playbooks so that you can give approval to continue on to the next playbook in the workflow.
 This gives the user a specified amount of time to intervene, but also enables you to continue as quickly as possible without having to wait on another trigger.
 
@@ -17,7 +19,7 @@ The approver is anyone who meets the following criteria:
 
 * A user who has organization administrator or above privileges (for the organization associated with that workflow job template).
 * A user who has the *Approve* permission explicitly assigned to them within that specific workflow job template.
-image::ug-wf-node-approval-notifications.png[Node approval notifications]
+//image::ug-wf-node-approval-notifications.png[Node approval notifications]
 
 If pending approval nodes are not approved within the specified time limit (if an expiration was assigned) or they are denied, then they are marked as "timed out" or "failed", and move on to the next "on fail node" or "always node".
 If approved, the "on success" path is taken.
diff --git a/downstream/modules/platform/ref-controller-automate-signing.adoc b/downstream/modules/platform/ref-controller-automate-signing.adoc
index f41edaf794..19639a7222 100644
--- a/downstream/modules/platform/ref-controller-automate-signing.adoc
+++ b/downstream/modules/platform/ref-controller-automate-signing.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-automate-signing"]
 
 = Automate signing
diff --git a/downstream/modules/platform/ref-controller-automation-analytics.adoc b/downstream/modules/platform/ref-controller-automation-analytics.adoc
index c70c051c72..fd15086b17 100644
--- a/downstream/modules/platform/ref-controller-automation-analytics.adoc
+++ b/downstream/modules/platform/ref-controller-automation-analytics.adoc
@@ -1,8 +1,10 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-automation-analytics"]
 
 = {Analytics}
 
-When you imported your license for the first time, you were given options related to the collection of data that powers {Analytics}, a cloud service that is part of the {PlatformNameShort} subscription.
+When you imported your license for the first time, you were automatically opted in for the collection of data that powers {Analytics}, a cloud service that is part of the {PlatformNameShort} subscription.
 
 [IMPORTANT]
 ====
@@ -12,15 +14,7 @@ For opt-in of {Analytics} to have any effect, your instance of {ControllerName}
 
 As with Red Hat Insights, {Analytics} is built to collect the minimum amount of data needed. No credential secrets, personal data, automation variables, or task output is gathered.
 
-For more information, see xref:ref-controller-data-collection-details[Details of data collection].
-
-To enable this feature, turn on data collection for {Analytics} and enter your Red Hat customer credentials in the *Miscellaneous System settings* of the System configuration list of options in the {MenuAEAdminSettings} menu.
-
-image:configure-controller-system-misc-analytics.png[Miscellaneous System Settings]
-
-You can view the location to which the collection of insights data is uploaded in the *{Analytics} upload URL* field on the *Details* page.
-
-image:misc-system-details-analytics-url.png[Insights location]
+To configure or disable this feature, see xref:proc-controller-configure-analytics[Configuring {Analytics}].
 
 By default, the data is collected every four hours. When you enable this feature, data is collected up to a month in arrears (or until the previous collection).
 
@@ -34,7 +28,7 @@ This setting can also be enabled through the API by specifying `INSIGHTS_TRACKIN
 
 The {Analytics} generated from this data collection can be found on the link:https://cloud.redhat.com[Red Hat Cloud Services] portal.
 
-image:aa-dashboard.png[Analytics dashboard]
+//image:aa-dashboard.png[Analytics dashboard]
 
 *Clusters* data is the default view. This graph represents the number of job runs across all {ControllerName} clusters over a period of time.
@@ -53,3 +47,8 @@ On the clouds navigation panel, select menu:Organization Statistics[] to view in * xref:ref-controller-use-by-organization[Use by organization] * xref:ref-controller-jobs-run-by-organization[Job runs by organization] * xref:ref-controller-organization-status[Organization status] + +[NOTE] +==== +The organization statistics page will be deprecated in a future release. +==== diff --git a/downstream/modules/platform/ref-controller-autoscaling.adoc b/downstream/modules/platform/ref-controller-autoscaling.adoc new file mode 100644 index 0000000000..f91a91723c --- /dev/null +++ b/downstream/modules/platform/ref-controller-autoscaling.adoc @@ -0,0 +1,5 @@ +[id="ref-controller-autoscaling"] + += Autoscaling + +Use the "callback" feature to permit newly booting instances to request configuration for auto-scaling scenarios or provisioning integration. \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-available-resources.adoc b/downstream/modules/platform/ref-controller-available-resources.adoc index 8cff2698ca..09bdd0c4a8 100644 --- a/downstream/modules/platform/ref-controller-available-resources.adoc +++ b/downstream/modules/platform/ref-controller-available-resources.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-available-resources"] = Available resources diff --git a/downstream/modules/platform/ref-controller-aws-cloud.adoc b/downstream/modules/platform/ref-controller-aws-cloud.adoc index 6fd9ae6fa0..0fb75c6c9e 100644 --- a/downstream/modules/platform/ref-controller-aws-cloud.adoc +++ b/downstream/modules/platform/ref-controller-aws-cloud.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-aws-cloud"] = {AWS} diff --git a/downstream/modules/platform/ref-controller-aws-secrets-lookup.adoc b/downstream/modules/platform/ref-controller-aws-secrets-lookup.adoc new file mode 100644 index 0000000000..c055744c41 --- /dev/null +++ b/downstream/modules/platform/ref-controller-aws-secrets-lookup.adoc @@ -0,0 +1,7 @@ +:_mod-docs-content-type: REFERENCE + +[id="ref-controller-aws-secrets-lookup"] + += AWS secrets manager lookup + +This is considered part of the secret management capability. 
For more information, see link:{URLControllerAdminGuide}/assembly-controller-secret-management#ref-aws-secrets-manager-lookup[AWS Secrets Manager Lookup].
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-controller-awx-default-ee.adoc b/downstream/modules/platform/ref-controller-awx-default-ee.adoc
index 122e5c64a7..9397fd7caa 100644
--- a/downstream/modules/platform/ref-controller-awx-default-ee.adoc
+++ b/downstream/modules/platform/ref-controller-awx-default-ee.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-awx-default-ee"]
 
 = Default {ExecEnvShort} for AWX
diff --git a/downstream/modules/platform/ref-controller-azure-cloud.adoc b/downstream/modules/platform/ref-controller-azure-cloud.adoc
index 538367accd..c622ff5362 100644
--- a/downstream/modules/platform/ref-controller-azure-cloud.adoc
+++ b/downstream/modules/platform/ref-controller-azure-cloud.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="controller-azure-cloud"]
 
 = Azure
diff --git a/downstream/modules/platform/ref-controller-backup-restore-clustered-environments.adoc b/downstream/modules/platform/ref-controller-backup-restore-clustered-environments.adoc
index 4b0097acde..f23fb4535a 100644
--- a/downstream/modules/platform/ref-controller-backup-restore-clustered-environments.adoc
+++ b/downstream/modules/platform/ref-controller-backup-restore-clustered-environments.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="controller-backup-restore-clustered-environments"]
 
 = Backup and restore clustered environments
diff --git a/downstream/modules/platform/ref-controller-backup-restore-considerations.adoc b/downstream/modules/platform/ref-controller-backup-restore-considerations.adoc
index 08d9ad66b1..97b1c1df82 100644
--- a/downstream/modules/platform/ref-controller-backup-restore-considerations.adoc
+++ b/downstream/modules/platform/ref-controller-backup-restore-considerations.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="controller-backup-restore-considerations"]
 
 = Backup and restoration considerations
@@ -9,6 +11,11 @@ Disk space:: Review your disk space requirements to ensure you have enough room
 
 System credentials:: Confirm you have the required system credentials when working with a local database or a remote database. On local systems, you might need `root` or `sudo` access, depending on how credentials are set up. On remote systems, you might need different credentials to grant you access to the remote system you are trying to backup or restore.
+
+[NOTE]
+====
+The {PlatformNameShort} database backups are staged on each node at `/var/backups/automation-platform` through the variable `backup_dir`. To prevent disk space issues, you might need to mount a new volume to `/var/backups` or change the staging location with the `backup_dir` variable before running the `./setup.sh -b` script.
+====
 
 Version:: You must always use the most recent minor version of a release to backup or restore your {PlatformNameShort} installation version. For example, if the current platform version you are on is 2.0.x, only use the latest 2.0 installer.
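+
+For example, a hypothetical invocation that stages the backup on a larger mounted volume before running the backup (the mount point is illustrative; `-e` passes extra variables to `./setup.sh`):
+
+----
+# Illustrative only: relocate the backup staging directory, then run the backup
+$ ./setup.sh -e 'backup_dir=/mnt/backup_volume' -b
+----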
diff --git a/downstream/modules/platform/ref-controller-build-arg-defaults.adoc b/downstream/modules/platform/ref-controller-build-arg-defaults.adoc index 3839981e4b..165bdd5125 100644 --- a/downstream/modules/platform/ref-controller-build-arg-defaults.adoc +++ b/downstream/modules/platform/ref-controller-build-arg-defaults.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-build-arg-defaults"] = build_arg_defaults diff --git a/downstream/modules/platform/ref-controller-build-exec-envs.adoc b/downstream/modules/platform/ref-controller-build-exec-envs.adoc index acfeaa85a5..42939679dc 100644 --- a/downstream/modules/platform/ref-controller-build-exec-envs.adoc +++ b/downstream/modules/platform/ref-controller-build-exec-envs.adoc @@ -1,9 +1,11 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-build-exec-envs"] -= Building an {ExecEnvShort} += Build an {ExecEnvShort} If your Ansible content depends on custom virtual environments instead of a default environment, you must take additional steps. -You must install packages on each node, interact well with other software installed on the host system, and keep them in synchronization. +You must install packages on each node that interact well with other software installed on the host system, and keep them in synchronization. //Before, jobs ran in a virtual environment at `/var/lib/awx/venv/ansible`, which was pre-loaded with dependencies for ansible-runner and certain types of Ansible content used by the Ansible control machine. To simplify this process, you can build container images that serve as Ansible diff --git a/downstream/modules/platform/ref-controller-building-exec-env.adoc b/downstream/modules/platform/ref-controller-building-exec-env.adoc index 0971053c6f..11e978c8b9 100644 --- a/downstream/modules/platform/ref-controller-building-exec-env.adoc +++ b/downstream/modules/platform/ref-controller-building-exec-env.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-building-exec-env"] = Content needed for an {ExecEnvShort} @@ -22,9 +24,9 @@ The content from the output generated from migrating to {ExecEnvShort}s has some .Additional resources For more information, see link:https://docs.ansible.com/automation-controller/4.4/html/upgrade-migration-guide/upgrade_to_ees.html#migrate-new-venv[Migrate legacy venvs to execution environments]. -If you did not migrate from a virtual environment, you can create a definition file with the required data described in the xref:assembly-controller-ee-setup-reference[Execution Environment Setup Reference]. +If you did not migrate from a virtual environment, you can create a definition file with the required data described in the link:{URLBuilder}/index#assembly-controller-ee-setup-reference[Execution Environment Setup Reference]. Collection developers can declare requirements for their content by providing the appropriate metadata. -For more information, see xref:ref-controller-dependencies[Dependencies]. +For more information, see link:{URLBuilder}/assembly-controller-ee-setup-reference#ref-controller-dependencies[Dependencies]. 
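+
+As a sketch only, a minimal {ExecEnvShort} definition file might look like the following; the schema version, base image tag, and dependency names are illustrative assumptions rather than requirements:
+
+----
+# execution-environment.yml -- illustrative example only
+version: 3
+images:
+  base_image:
+    name: registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel8:latest
+dependencies:
+  galaxy:
+    collections:
+      - ansible.posix
+  python:
+    - pytz
+  system:
+    - openssh-clients
+----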
diff --git a/downstream/modules/platform/ref-controller-capacity-instance-container.adoc b/downstream/modules/platform/ref-controller-capacity-instance-container.adoc
index e368c38d8d..45ec07e24d 100644
--- a/downstream/modules/platform/ref-controller-capacity-instance-container.adoc
+++ b/downstream/modules/platform/ref-controller-capacity-instance-container.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-capacity-instance-container"]
 
 = Capacity settings for instance group and container group
@@ -13,5 +15,5 @@ Use the `max_concurrent_jobs` and `max_forks` settings available on instance gro
 ----
 ((number of worker nodes in kubernetes cluster) * (memory available on each worker)) / (memory request on pod_spec) = maximum number of forks
 ----
-** For example, given a single worker node with 8 Gb of Memory, we determine that the `max forks` we want to run is 81. This way, either 39 jobs with 1 fork can run (task impact is always forks + 1), or 2 jobs with forks set to 39 can run.
+** For example, given a single worker node with 8 GB of memory, we determine that the `max forks` we want to run is 81. This way, either 39 jobs with 1 fork can run (task impact is always forks + 1), or 2 jobs with forks set to 39 can run.
 
 * You might have other business requirements that motivate using `max_forks` or `max_concurrent_jobs` to limit the number of jobs launched in a container group.
diff --git a/downstream/modules/platform/ref-controller-capacity-planning-exercise.adoc b/downstream/modules/platform/ref-controller-capacity-planning-exercise.adoc
index 7d1b5d5936..4404dd8206 100644
--- a/downstream/modules/platform/ref-controller-capacity-planning-exercise.adoc
+++ b/downstream/modules/platform/ref-controller-capacity-planning-exercise.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-capacity-planning-exercise"]
 
 = Example capacity planning exercise
 
@@ -12,7 +14,7 @@ For this example, the cluster must support the following capacity:
 
 * Forks set to 5 on playbooks. This is the default.
 * Average event size is 1 Mb
 
-The virtual machines have 4 CPU and 16 GB RAM, and disks that have 3000 IOPs.
+The virtual machines have 4 CPU and 16 GB RAM, and disks that have 3000 IOPS.
include::ref-controller-example-workload-reqs.adoc[leveloffset=+1]
 
@@ -20,22 +22,19 @@
 |===
 | Node | API capacity | Default execution capacity | Default control capacity | Mean event processing rate at 100% capacity usage | Mean events processing rate at 50% capacity usage | Mean event processing rate at 40% capacity usage
 
-| 4 CPU at 2.5Ghz, 16 GB RAM control node, a maximum of 3000 IOPs disk | approximately 10 requests per second | n/a | 137 jobs | 1100 per second | 1400 per second | 1630 per second
-| 4 CPU at 2.5Ghz, 16 GB RAM execution node, a maximum of 3000 IOPs disk | n/a | 137 | n/a | n/a | n/a | n/a
-| 4 CPU at 2.5Ghz, 16 GB RAM database node, a maximum of 3000 IOPs disk | n/a | n/a | n/a | n/a | n/a | n/a
+| 4 CPU at 2.5 GHz, 16 GB RAM control node, a maximum of 3000 IOPS disk | about 10 requests per second | n/a | 137 jobs | 1100 per second | 1400 per second | 1630 per second
+| 4 CPU at 2.5 GHz, 16 GB RAM execution node, a maximum of 3000 IOPS disk | n/a | 137 | n/a | n/a | n/a | n/a
+| 4 CPU at 2.5 GHz, 16 GB RAM database node, a maximum of 3000 IOPS disk | n/a | n/a | n/a | n/a | n/a | n/a
 |===
 
 Because controlling jobs competes with job event processing on the control node, over-provisioning control capacity can reduce processing times. When processing times are high, you can experience a delay between when the job runs and when you can view the output in the API or UI.
 
 For this example, for a workload on 300 managed hosts, executing 1000 tasks per hour per host, 10 concurrent jobs with forks set to 5 on playbooks, and an average event size 1 Mb, use the following procedure:
 
-* Deploy 1 execution node, 1 control node, 1 database node of 4 CPU at 2.5Ghz, 16 GB RAM, and disks that have approximately 3000 IOPs.
+* Deploy 1 execution node, 1 control node, 1 database node of 4 CPU at 2.5 GHz, 16 GB RAM, and disks that have about 3000 IOPS.
 * Keep the default fork setting of 5 on job templates.
-* Use the capacity adjustment feature in the instance view of the UI on the control node to reduce the capacity down to 16, the lowest value, to reserve more of the control node's capacity for processing events.
-
-
-.Additional Resources
+* Use the capacity change feature in the instance view of the UI on the control node to reduce the capacity down to 16, the lowest value, to reserve more of the control node's capacity for processing events.
 
-* For more information on workloads with high levels of API interaction, see link:https://www.ansible.com/blog/scaling-automation-controller-for-api-driven-workloads[Scaling Automation Controller for API Driven Workloads].
-* For more information on managing capacity with instances, see xref:assembly-controller-instances[Managing Capacity With Instances].
-* For more information on operator-based deployments, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_performance_considerations_for_operator_based_installations/index[Red Hat Ansible Automation Platform Performance Considerations for Operator Based Installations].
+For more information about workloads with high levels of API interaction, see link:https://www.ansible.com/blog/scaling-automation-controller-for-api-driven-workloads[Scaling Automation Controller for API Driven Workloads].
+For more information about managing capacity with instances, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/using_automation_execution/index#assembly-controller-instances[Managing capacity with Instances].
+For more information about operator-based deployments, see link:{URLOCPPerformanceGuide}/index[{PlatformName} considerations for operator environments].
diff --git a/downstream/modules/platform/ref-controller-capacity-planning.adoc b/downstream/modules/platform/ref-controller-capacity-planning.adoc
index c61e89b661..b5d097699e 100644
--- a/downstream/modules/platform/ref-controller-capacity-planning.adoc
+++ b/downstream/modules/platform/ref-controller-capacity-planning.adoc
@@ -1,6 +1,8 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-capacity-planning"]
 
-= Capacity Planning for deploying {ControllerName}
+= Capacity planning for deploying {ControllerName}
 
 Capacity planning for {ControllerName} is planning the scale and characteristics of your deployment so that it has the capacity to run the planned workload. Capacity planning includes the following phases:
diff --git a/downstream/modules/platform/ref-controller-change-admin-password.adoc b/downstream/modules/platform/ref-controller-change-admin-password.adoc
index 166d67f74e..9d0aee539f 100644
--- a/downstream/modules/platform/ref-controller-change-admin-password.adoc
+++ b/downstream/modules/platform/ref-controller-change-admin-password.adoc
@@ -1,9 +1,11 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-change-admin-password"]
 
 = Change the {ControllerName} Administrator Password
 
-During the installation process, you are prompted to enter an administrator password which is used for the `admin` superuser or system administrator created by {ControllerName}.
-If you log into the instance using SSH, it tells you the default administrator password in the prompt.
+During the installation process, you are prompted to enter an administrator password that is used for the `admin` superuser or system administrator created by {ControllerName}.
+If you log in to the instance by using SSH, the prompt displays the default administrator password.
 
 If you need to change this password at any point, run the following command as root on the {ControllerName} server:
diff --git a/downstream/modules/platform/ref-controller-cleanup-expired-tokens.adoc b/downstream/modules/platform/ref-controller-cleanup-expired-tokens.adoc
index f837af0f59..f8f6e79f0e 100644
--- a/downstream/modules/platform/ref-controller-cleanup-expired-tokens.adoc
+++ b/downstream/modules/platform/ref-controller-cleanup-expired-tokens.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-cleanup-expired-tokens"]
 
 = Cleanup Expired OAuth2 Tokens
 
@@ -9,7 +11,7 @@ management jobs.
 
 For more information, see xref:proc-controller-scheduling-deletion[Scheduling deletion].
 
-You can also set or review notifications associated with this management job the same way as described in xref:proc-controller-management-notifications[setting notifications] for activity
+You can also set or review notifications associated with this management job the same way as described in xref:proc-controller-management-notifications[Setting notifications] for activity
 stream management jobs.
 
-For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-notifications[Notifications] in the _{ControllerUG}_.
+For more information, see link:{URLControllerUserGuide}/controller-notifications[Notifications] in _{ControllerUG}_.
diff --git a/downstream/modules/platform/ref-controller-cleanup-old-data.adoc b/downstream/modules/platform/ref-controller-cleanup-old-data.adoc
index 210eaee027..87f598adb7 100644
--- a/downstream/modules/platform/ref-controller-cleanup-old-data.adoc
+++ b/downstream/modules/platform/ref-controller-cleanup-old-data.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-cleanup-old-data"]
 
 = Cleanup of old data
 
@@ -17,4 +19,4 @@ This permanently deletes the job details and job output for jobs older than a sp
 awx-manage cleanup_activitystream [--help]
 ----
 
-This permanently deletes any link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/assembly-controller-user-interface#proc-controller-activity-stream[Activity stream] data older than a specific number of days.
\ No newline at end of file
+This permanently deletes any Activity stream data older than a specific number of days.
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-controller-clear-sessions.adoc b/downstream/modules/platform/ref-controller-clear-sessions.adoc
index 795aff35c0..c9b5dd081d 100644
--- a/downstream/modules/platform/ref-controller-clear-sessions.adoc
+++ b/downstream/modules/platform/ref-controller-clear-sessions.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-clear-sessions"]
 
 = `clearsessions`
 
@@ -6,4 +8,4 @@ Use this command to delete all sessions that have expired.
 
 For more information, see link:https://docs.djangoproject.com/en/4.2/topics/http/sessions/#clearing-the-session-store[Clearing the session store] in Django's Oauth Toolkit documentation.
 
-For more information on OAuth2 token management in the UI, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/assembly-controller-applications[Applications] section of the {ControllerUG}.
+For more information on OAuth2 token management in the UI, see xref:assembly-controller-applications[Applications].
diff --git a/downstream/modules/platform/ref-controller-clear-tokens.adoc b/downstream/modules/platform/ref-controller-clear-tokens.adoc
index b01d15d0dc..a8dbaee789 100644
--- a/downstream/modules/platform/ref-controller-clear-tokens.adoc
+++ b/downstream/modules/platform/ref-controller-clear-tokens.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-clear-tokens"]
 
 = `cleartokens`
diff --git a/downstream/modules/platform/ref-controller-cluster-install.adoc b/downstream/modules/platform/ref-controller-cluster-install.adoc
index 0e1fea3565..33f7339962 100644
--- a/downstream/modules/platform/ref-controller-cluster-install.adoc
+++ b/downstream/modules/platform/ref-controller-cluster-install.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="controller-cluster-install"]
 
 = Install and configure
 
@@ -55,7 +57,7 @@ hostC routable_hostname=10.1.0.4
 routable_hostname
 ----
 
-For more information about `routable_hostname`, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/appendix-inventory-files-vars#ref-genera-inventory-variables[General variables] in the _{PlatformName} Installation Guide_.
+For more information about `routable_hostname`, see link:{URLInstallationGuide}/appendix-inventory-files-vars#ref-genera-inventory-variables[General variables] in the _{TitleInstallationGuide}_. [IMPORTANT] ==== diff --git a/downstream/modules/platform/ref-controller-cluster-instance-behavior.adoc b/downstream/modules/platform/ref-controller-cluster-instance-behavior.adoc index 7f026b2ee5..d1161aa0a7 100644 --- a/downstream/modules/platform/ref-controller-cluster-instance-behavior.adoc +++ b/downstream/modules/platform/ref-controller-cluster-instance-behavior.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-cluster-instance-behavior"] = Instance services and failure behavior @@ -13,4 +15,4 @@ Rsyslog:: The log processing service used to deliver logs to various external lo {ControllerNameStart} is configured so that if any of these services or their components fail, then all services are restarted. If these fail often in a short span of time, then the entire instance is placed offline in an automated fashion to allow remediation without causing unexpected behavior. -For backing up and restoring a clustered environment, see the xref:controller-backup-restore-clustered-environments[Backup and restore clustered environments] section. +For backing up and restoring a clustered environment, see the link:{URLControllerAdminGuide}/index#controller-backup-restore-clustered-environments[Backup and restore clustered environments] section. diff --git a/downstream/modules/platform/ref-controller-cluster-instances.adoc b/downstream/modules/platform/ref-controller-cluster-instances.adoc index a4ca5a8ac5..e5a1e8334f 100644 --- a/downstream/modules/platform/ref-controller-cluster-instances.adoc +++ b/downstream/modules/platform/ref-controller-cluster-instances.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-cluster-instances"] = Instances and ports used by {ControllerName} and {HubName} @@ -5,5 +7,5 @@ Ports and instances used by {ControllerName} and also required by the on-premise {HubName} node are as follows: * Port 80, 443 (normal {ControllerName} and {HubName} ports) -* Port 22 (ssh - ingress only required) +* Port 22 (SSH - ingress only required) * Port 5432 (database instance - if the database is installed on an external instance, it must be opened to {ControllerName} instances) diff --git a/downstream/modules/platform/ref-controller-cluster-job-runtime.adoc b/downstream/modules/platform/ref-controller-cluster-job-runtime.adoc index 3ff1886dd6..509aeac5c9 100644 --- a/downstream/modules/platform/ref-controller-cluster-job-runtime.adoc +++ b/downstream/modules/platform/ref-controller-cluster-job-runtime.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-cluster-job-runtime"] = Job runtime behavior @@ -7,7 +9,7 @@ On the system side, note the following differences: * When a job is submitted from the API interface it is pushed into the dispatcher queue. Each {ControllerName} instance connects to and receives jobs from that queue using a scheduling algorithm. -Any instance in the cluster is just as likely to receive the work and execute the task. +Any instance in the cluster is just as likely to receive the work and run the task. If an instance fails while executing jobs, then the work is marked as permanently failed. 
+ image::ug-clustering-visual.png[Clustering visual] diff --git a/downstream/modules/platform/ref-controller-cluster-management.adoc b/downstream/modules/platform/ref-controller-cluster-management.adoc index 8c2684d71e..9e44018426 100644 --- a/downstream/modules/platform/ref-controller-cluster-management.adoc +++ b/downstream/modules/platform/ref-controller-cluster-management.adoc @@ -1,8 +1,10 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-cluster-management"] = Cluster management -For more information on the `awx-manage provision_instance` and `awx-manage deprovision_instance` commands, see xref:controller-clustering[Clustering]. +For more information about the `awx-manage provision_instance` and `awx-manage deprovision_instance` commands, see xref:controller-clustering[Clustering]. [NOTE] ==== diff --git a/downstream/modules/platform/ref-controller-cluster-status-api.adoc b/downstream/modules/platform/ref-controller-cluster-status-api.adoc index f771e007ae..5c05688a78 100644 --- a/downstream/modules/platform/ref-controller-cluster-status-api.adoc +++ b/downstream/modules/platform/ref-controller-cluster-status-api.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-cluster-status-api"] = Status and monitoring by browser API diff --git a/downstream/modules/platform/ref-controller-config-json.adoc b/downstream/modules/platform/ref-controller-config-json.adoc index 7ee09a2461..e9b446e18b 100644 --- a/downstream/modules/platform/ref-controller-config-json.adoc +++ b/downstream/modules/platform/ref-controller-config-json.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-config-json"] = config.json @@ -47,7 +49,7 @@ Which includes the following fields: * *ansible_version*: The system Ansible version on the host * *authentication_backends*: The user authentication backends that are available. -For more information, see xref:assembly-controller-set-up-social-authentication[Setting up social authentication] or xref:controller-LDAP-authentication[Setting up LDAP authentication]. +For more information, see link:{URLCentralAuth}/index#gw-config-authentication-type[Configuring an authentication type]. * *external_logger_enabled*: Whether external logging is enabled * *external_logger_type*: What logging backend is in use if enabled. For more information, see xref:assembly-controller-logging-aggregation[Logging and aggregation]. 
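A minimal illustrative sketch of a `config.json` payload carrying the fields described above (all values here are hypothetical, chosen only to show the shape of the data):

----
{
  "ansible_version": "2.16.4",
  "authentication_backends": ["django.contrib.auth.backends.ModelBackend"],
  "external_logger_enabled": true,
  "external_logger_type": "splunk"
}
----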
diff --git a/downstream/modules/platform/ref-controller-config-options.adoc b/downstream/modules/platform/ref-controller-config-options.adoc index 3ca3cb5063..4ede171fa2 100644 --- a/downstream/modules/platform/ref-controller-config-options.adoc +++ b/downstream/modules/platform/ref-controller-config-options.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-config-options"] = options diff --git a/downstream/modules/platform/ref-controller-config-version.adoc b/downstream/modules/platform/ref-controller-config-version.adoc index 7febf33a85..b496ab93a9 100644 --- a/downstream/modules/platform/ref-controller-config-version.adoc +++ b/downstream/modules/platform/ref-controller-config-version.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-config-version"] = version diff --git a/downstream/modules/platform/ref-controller-configure-host-name-notifications.adoc b/downstream/modules/platform/ref-controller-configure-host-name-notifications.adoc index fe638b6244..a63267a8c9 100644 --- a/downstream/modules/platform/ref-controller-configure-host-name-notifications.adoc +++ b/downstream/modules/platform/ref-controller-configure-host-name-notifications.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-configure-host-name-notifications"] = Configuring the `controllerhost` hostname for notifications diff --git a/downstream/modules/platform/ref-controller-connect-to-host.adoc b/downstream/modules/platform/ref-controller-connect-to-host.adoc deleted file mode 100644 index d490400401..0000000000 --- a/downstream/modules/platform/ref-controller-connect-to-host.adoc +++ /dev/null @@ -1,10 +0,0 @@ -[id="controller-connect-to-host"] - -= Unable to connect to your host - -If you are unable to run the `helloworld.yml` example playbook from the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/getting_started_with_automation_controller/index#controller-projects[Managing projects] section of the _{ControllerGS}_ guide or other playbooks due to host connection errors, try the following: - -* Can you `ssh` to your host? -Ansible depends on SSH access to the servers you are managing. -* Are your `hostnames` and IPs correctly added in your inventory file? -Check for typos. diff --git a/downstream/modules/platform/ref-controller-connect-with-winrm.adoc b/downstream/modules/platform/ref-controller-connect-with-winrm.adoc index f9b136e4b5..d23b8bbdb2 100644 --- a/downstream/modules/platform/ref-controller-connect-with-winrm.adoc +++ b/downstream/modules/platform/ref-controller-connect-with-winrm.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-connect-with-winrm"] = Connect to Windows with winrm diff --git a/downstream/modules/platform/ref-controller-constructed-inventories.adoc b/downstream/modules/platform/ref-controller-constructed-inventories.adoc index 51925693e3..b921a1d5e9 100644 --- a/downstream/modules/platform/ref-controller-constructed-inventories.adoc +++ b/downstream/modules/platform/ref-controller-constructed-inventories.adoc @@ -1,7 +1,10 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-constructed-inventories"] = Constructed Inventories //As Smart inventories are deprecated, I'm removing comparisons from the source text. + You can create a new inventory (called a constructed inventory) from a list of input inventories. 
A constructed inventory has copies of hosts and groups in its input inventories, permitting jobs to target groups of servers across many inventories. @@ -29,5 +32,3 @@ You can construct groups based on these host properties: //image:inventories-constructed-inventory-details.png[Constructed inventory details] The examples described in later sections are organized by the structure of the input inventories. - - diff --git a/downstream/modules/platform/ref-controller-container-capacity.adoc b/downstream/modules/platform/ref-controller-container-capacity.adoc index 3fa7c447c3..315741395e 100644 --- a/downstream/modules/platform/ref-controller-container-capacity.adoc +++ b/downstream/modules/platform/ref-controller-container-capacity.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-container-capacity"] = Container capacity limits @@ -6,7 +8,7 @@ Capacity limits and quotas for containers are defined by objects in the Kubernet * To set limits on all pods within a given namespace, use the `LimitRange` object. For more information see the link:https://docs.openshift.com/online/pro/dev_guide/compute_resources.html#overview[Quotas and Limit Ranges] section of the OpenShift documentation. -* To set limits directly on the pod definition launched by {ControllerName}, see xref:controller-customize-pod-spec[Customizing the pod specification] and the link:https://docs.openshift.com/online/pro/dev_guide/compute_resources.html#dev-compute-resources[Compute Resources] section of the OpenShift documentation. +* To set limits directly on the pod definition launched by {ControllerName}, see link:{URLControllerUserGuide}/controller-instance-and-container-groups#controller-customize-pod-spec[Customizing the pod specification] and the link:https://docs.openshift.com/online/pro/dev_guide/compute_resources.html#dev-compute-resources[Compute Resources] section of the OpenShift documentation. [NOTE] ==== diff --git a/downstream/modules/platform/ref-controller-content-sourcing.adoc b/downstream/modules/platform/ref-controller-content-sourcing.adoc index b6601e58c2..12b569a374 100644 --- a/downstream/modules/platform/ref-controller-content-sourcing.adoc +++ b/downstream/modules/platform/ref-controller-content-sourcing.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-content-sourcing"] = Content sourcing from collections @@ -23,11 +25,11 @@ Additionally, post-upgrade, these settings are not visible (or editable) from th {ControllerNameStart} continues to fetch roles directly from public Galaxy even if `galaxy.ansible.com` is not the first credential in the list for the organization. The global Galaxy settings are no longer configured at the jobs level, but at the organization level in the user interface. -The organization's *Add* and *Edit* windows have an optional *Credential* lookup field for credentials of `kind=galaxy`. +The organization's *Create organization* and *Edit organization* windows have an optional *Galaxy credentials* lookup field for credentials of `kind=galaxy`. image:organizations-galaxy-credentials.png[Create organization] It is important to specify the order of these credentials as order sets precedence for the sync and lookup of the content. -For more information, see xref:proc-controller-create-organization[Creating an organization]. +For more information, see link:{URLCentralAuth}/gw-managing-access#proc-controller-create-organization[Creating an organization]. 
For more information about how to set up a project by using collections, see xref:proc-projects-using-collections-with-hub[Using Collections with {HubName}]. diff --git a/downstream/modules/platform/ref-controller-continuous-integration.adoc b/downstream/modules/platform/ref-controller-continuous-integration.adoc new file mode 100644 index 0000000000..69b43381ae --- /dev/null +++ b/downstream/modules/platform/ref-controller-continuous-integration.adoc @@ -0,0 +1,9 @@ +:_mod-docs-content-type: REFERENCE + +[id="ref-controller-continuous-integration"] + += Continuous Integration / Continuous Deployment + +For a Continuous Integration system, such as Jenkins, to spawn a job, it must make a `curl` request to a job template. +The credentials to the job template must not require prompting for any particular passwords. +For configuration and use instructions, see link:https://docs.ansible.com/automation-controller/latest/html/controllercli/usage.html[Usage] in the Ansible documentation. \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-counts-json.adoc b/downstream/modules/platform/ref-controller-counts-json.adoc index 99f81c9c9d..cf4f78301e 100644 --- a/downstream/modules/platform/ref-controller-counts-json.adoc +++ b/downstream/modules/platform/ref-controller-counts-json.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-counts-json"] = counts.json diff --git a/downstream/modules/platform/ref-controller-cpu-relative-capacity.adoc b/downstream/modules/platform/ref-controller-cpu-relative-capacity.adoc index b2db9f6f2c..186e065a0a 100644 --- a/downstream/modules/platform/ref-controller-cpu-relative-capacity.adoc +++ b/downstream/modules/platform/ref-controller-cpu-relative-capacity.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-cpu-relative-capacity"] = CPU relative capacity diff --git a/downstream/modules/platform/ref-controller-create-controller-admin.adoc b/downstream/modules/platform/ref-controller-create-controller-admin.adoc index ea8c1d8681..3f24436ecd 100644 --- a/downstream/modules/platform/ref-controller-create-controller-admin.adoc +++ b/downstream/modules/platform/ref-controller-create-controller-admin.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-create-controller-admin"] = Create an {ControllerName} Administrator from the command line diff --git a/downstream/modules/platform/ref-controller-create-oauth2-token.adoc b/downstream/modules/platform/ref-controller-create-oauth2-token.adoc index 50456f9502..578de6fd5e 100644 --- a/downstream/modules/platform/ref-controller-create-oauth2-token.adoc +++ b/downstream/modules/platform/ref-controller-create-oauth2-token.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-create-oauth2-token"] = `create_oauth2_token` @@ -6,7 +8,7 @@ Use the following command to create OAuth2 tokens (specify the username for `exa [literal, options="nowrap" subs="+attributes"] ---- -$ awx-manage create_oauth2_token --user example_user +$ aap-gateway-manage create_oauth2_token --user example_user New OAuth2 token for example_user: j89ia8OO79te6IAZ97L7E8bMgXCON2 ---- diff --git a/downstream/modules/platform/ref-controller-cred-type-counts-json.adoc b/downstream/modules/platform/ref-controller-cred-type-counts-json.adoc index b55cb66fd2..d47795d6e1 100644 --- a/downstream/modules/platform/ref-controller-cred-type-counts-json.adoc +++ b/downstream/modules/platform/ref-controller-cred-type-counts-json.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: 
REFERENCE + [id="ref-controller-cred-type-counts-json"] = cred_type_counts.json diff --git a/downstream/modules/platform/ref-controller-credential-GCE.adoc b/downstream/modules/platform/ref-controller-credential-GCE.adoc index 9b05cfe5f5..8a9eecff16 100644 --- a/downstream/modules/platform/ref-controller-credential-GCE.adoc +++ b/downstream/modules/platform/ref-controller-credential-GCE.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-GCE"] = Google Compute Engine credential type @@ -25,16 +27,3 @@ GCE credentials require the following information: Click btn:[Browse] to browse for the file that has the special account information that can be used by services and applications running on your GCE instance to interact with other {GCP} APIs. This grants permissions to the service account and virtual machine instances. * *RSA Private Key*: The PEM file associated with the service account email. - -== Access Google Compute Engine credentials in an Ansible Playbook - -You can get GCE credential parameters from a job runtime environment: - -[literal, options="nowrap" subs="+attributes"] ----- -vars: - gce: - email: '{{ lookup("env", "GCE_EMAIL") }}' - project: '{{ lookup("env", "GCE_PROJECT") }}' - pem_file_path: '{{ lookup("env", "GCE_PEM_FILE_PATH") }}' ----- diff --git a/downstream/modules/platform/ref-controller-credential-GPG-public-key.adoc b/downstream/modules/platform/ref-controller-credential-GPG-public-key.adoc index 17e65ceb71..c1541bc436 100644 --- a/downstream/modules/platform/ref-controller-credential-GPG-public-key.adoc +++ b/downstream/modules/platform/ref-controller-credential-GPG-public-key.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-GPG-public-key"] = GPG Public Key credential type diff --git a/downstream/modules/platform/ref-controller-credential-aap.adoc b/downstream/modules/platform/ref-controller-credential-aap.adoc index d8e268606f..cfc3b48a18 100644 --- a/downstream/modules/platform/ref-controller-credential-aap.adoc +++ b/downstream/modules/platform/ref-controller-credential-aap.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-aap"] = {PlatformName} credential type @@ -38,17 +40,4 @@ injectors={ 'CONTROLLER_OAUTH_TOKEN': '{{oauth_token}}', } ----- - -== Access {ControllerName} credentials in an Ansible Playbook - -You can get the host, username, and password parameters from a job runtime environment: - -[literal, options="nowrap" subs="+attributes"] ----- -vars: - controller: - host: '{{ lookup("env", "CONTROLLER_HOST") }}' - username: '{{ lookup("env", "CONTROLLER_USERNAME") }}' - password: '{{ lookup("env", "CONTROLLER_PASSWORD") }}' ----- +---- \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-aws.adoc b/downstream/modules/platform/ref-controller-credential-aws.adoc index 129d3e90f1..57b16e46d0 100644 --- a/downstream/modules/platform/ref-controller-credential-aws.adoc +++ b/downstream/modules/platform/ref-controller-credential-aws.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-aws"] = {AWS} credential type @@ -19,9 +21,8 @@ These are fields prompted in the user interface. {AWS} credentials consist of the AWS *Access Key* and *Secret Key*. -{ControllerNameStart} provides support for EC2 STS tokens, also known as Identity and Access Management (IAM) STS credentials. 
-_Security Token Service_ (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS -IAM users. +{ControllerNameStart} provides support for EC2 STS tokens, also known as _Identity and Access Management_ (IAM) STS credentials. +_Security Token Service_ (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS IAM users. [NOTE] ==== @@ -39,15 +40,3 @@ Attaching your AWS cloud credential to your job template forces the use of your For more information about the IAM/EC2 STS Token, see link:http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html[Temporary security credentials in IAM]. -== Access Amazon EC2 credentials in an Ansible Playbook - -You can get AWS credential parameters from a job runtime environment: - -[literal, options="nowrap" subs="+attributes"] ----- -vars: - aws: - access_key: '{{ lookup("env", "AWS_ACCESS_KEY_ID") }}' - secret_key: '{{ lookup("env", "AWS_SECRET_ACCESS_KEY") }}' - security_token: '{{ lookup("env", "AWS_SECURITY_TOKEN") }}' ----- diff --git a/downstream/modules/platform/ref-controller-credential-azure-key.adoc b/downstream/modules/platform/ref-controller-credential-azure-key.adoc index 85d1cda63e..9c5622cb89 100644 --- a/downstream/modules/platform/ref-controller-credential-azure-key.adoc +++ b/downstream/modules/platform/ref-controller-credential-azure-key.adoc @@ -1,7 +1,9 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-azure-key"] = Microsoft Azure Key Vault credential type This is considered part of the secret management capability. -For more information, see xref:ref-azure-key-vault-lookup[{Azure} Key Vault]. \ No newline at end of file +For more information, see link:{URLControllerAdminGuide}/assembly-controller-secret-management#ref-azure-key-vault-lookup[{Azure} Key Vault]. \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-azure-resource.adoc b/downstream/modules/platform/ref-controller-credential-azure-resource.adoc index 13cce5a639..f4bd22bb49 100644 --- a/downstream/modules/platform/ref-controller-credential-azure-resource.adoc +++ b/downstream/modules/platform/ref-controller-credential-azure-resource.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-azure-resource"] = {Azure} Resource Manager credential type @@ -16,7 +18,7 @@ Select this credential type to enable synchronization of cloud inventory with {A * *Tenant ID*: The Tenant ID for the {Azure} account. * *Azure Cloud Environment*: The variable associated with Azure cloud or Azure stack environments. -These fields are equal to the variables in the API. +These fields are equivalent to the variables in the API. 
To pass service principal credentials, define the following variables: @@ -59,18 +61,4 @@ Alternatively, pass the following parameters for Active Directory username/passw ad_user password subscription_id ----- - -== Access {Azure} resource manager credentials in an ansible playbook - -You can get {Azure} credential parameters from a job runtime environment: - -[literal, options="nowrap" subs="+attributes"] ----- -vars: - azure: - client_id: '{{ lookup("env", "AZURE_CLIENT_ID") }}' - secret: '{{ lookup("env", "AZURE_SECRET") }}' - tenant: '{{ lookup("env", "AZURE_TENANT") }}' - subscription_id: '{{ lookup("env", "AZURE_SUBSCRIPTION_ID") }}' ----- +---- \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-bitbucket.adoc b/downstream/modules/platform/ref-controller-credential-bitbucket.adoc new file mode 100644 index 0000000000..e84b7021f8 --- /dev/null +++ b/downstream/modules/platform/ref-controller-credential-bitbucket.adoc @@ -0,0 +1,10 @@ +:_mod-docs-content-type: REFERENCE + +[id="ref-controller-credential-bitbucket"] + += Bitbucket Data Center HTTP access token + +Bitbucket Data Center is a self-hosted Git repository for collaboration and management. +Select this credential type to enable you to use HTTP access tokens in place of passwords for Git over HTTPS. + +For further information, see link:https://confluence.atlassian.com/bitbucketserver/http-access-tokens-939515499.html[HTTP access tokens] in the Bitbucket Data Center documentation. \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-centrify-vault.adoc b/downstream/modules/platform/ref-controller-credential-centrify-vault.adoc index 7a3541bb15..04e06ffe7a 100644 --- a/downstream/modules/platform/ref-controller-credential-centrify-vault.adoc +++ b/downstream/modules/platform/ref-controller-credential-centrify-vault.adoc @@ -1,6 +1,9 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-centrify-vault"] = Centrify Vault Credential Provider Lookup credential type This is considered part of the secret management capability. -For more information, see xref:ref-centrify-vault-lookup[Centrify Vault Credential Provider Lookup]. \ No newline at end of file + +For more information, see link:{URLControllerAdminGuide}/assembly-controller-secret-management#ref-centrify-vault-lookup[Centrify Vault Credential Provider Lookup]. 
\ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-container-registry.adoc b/downstream/modules/platform/ref-controller-credential-container-registry.adoc index fd64ead45b..b135aed87c 100644 --- a/downstream/modules/platform/ref-controller-credential-container-registry.adoc +++ b/downstream/modules/platform/ref-controller-credential-container-registry.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-container-registry"] = Container Registry credential type diff --git a/downstream/modules/platform/ref-controller-credential-cyberark-central.adoc b/downstream/modules/platform/ref-controller-credential-cyberark-central.adoc index 85b0caf43c..22c10017d5 100644 --- a/downstream/modules/platform/ref-controller-credential-cyberark-central.adoc +++ b/downstream/modules/platform/ref-controller-credential-cyberark-central.adoc @@ -1,7 +1,9 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-cyberark-central"] = CyberArk Central Credential Provider Lookup credential type This is considered part of the secret management capability. -For more information, see xref:ref-cyberark-ccp-lookup[CyberArk Central Credential Provider (CCP) Lookup]. \ No newline at end of file +For more information, see link:{URLControllerAdminGuide}/assembly-controller-secret-management#ref-cyberark-ccp-lookup[CyberArk Central Credential Provider (CCP) Lookup]. \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-cyberark-conjur.adoc b/downstream/modules/platform/ref-controller-credential-cyberark-conjur.adoc index 745c43536a..6a0a9bf0d6 100644 --- a/downstream/modules/platform/ref-controller-credential-cyberark-conjur.adoc +++ b/downstream/modules/platform/ref-controller-credential-cyberark-conjur.adoc @@ -1,7 +1,9 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-cyberark-conjur"] = CyberArk Conjur Secrets Manager Lookup credential type This is considered part of the secret management capability. -For more information, see xref:ref-cyberark-conjur-lookup[CyberArk Conjur Secrets Manager Lookup]. \ No newline at end of file +For more information, see link:{URLControllerAdminGuide}/assembly-controller-secret-management#ref-cyberark-conjur-lookup[CyberArk Conjur Secrets Manager Lookup]. \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-galaxy-hub.adoc b/downstream/modules/platform/ref-controller-credential-galaxy-hub.adoc index 6e4f7c2846..cf86dbce88 100644 --- a/downstream/modules/platform/ref-controller-credential-galaxy-hub.adoc +++ b/downstream/modules/platform/ref-controller-credential-galaxy-hub.adoc @@ -1,10 +1,12 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-galaxy-hub"] -= {Galaxy}/Automation Hub API token credential type += {Galaxy}/{HubName} API token credential type Select this credential to access {Galaxy} or use a collection published on an instance of {PrivateHubName}. -Entering the Galaxy server URL on this screen. +Enter the Galaxy server URL on this screen. //image:credentials-create-galaxy-credential.png[Credentials- galaxy credential] @@ -13,6 +15,6 @@ Populate the *Auth Server URL* field with the contents of the *SSO URL* field at .Additional resources -For more information, see xref:proc-projects-using-collections-with-hub[Using Collections with {HubName}]. 
+For more information, see link:{URLControllerUserGuide}/controller-projects#proc-projects-using-collections-with-hub[Using Collections with {HubName}]. //image:hub-console-tokens-page.png[image] diff --git a/downstream/modules/platform/ref-controller-credential-gitHub-pat.adoc b/downstream/modules/platform/ref-controller-credential-gitHub-pat.adoc index c9fc350f92..a82e7cc2d9 100644 --- a/downstream/modules/platform/ref-controller-credential-gitHub-pat.adoc +++ b/downstream/modules/platform/ref-controller-credential-gitHub-pat.adoc @@ -1,10 +1,12 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-gitHub-pat"] = GitHub Personal Access Token credential type Select this credential to enable you to access GitHub by using a _Personal Access Token_ (PAT), which you can get through GitHub. -For more information, see xref:controller-set-up-github-webhook[Working with Webhooks]. +For more information, see xref:controller-set-up-github-webhook[Setting up a GitHub webhook]. GitHub PAT credentials require a value in the *Token* field, which is provided in your GitHub profile settings. diff --git a/downstream/modules/platform/ref-controller-credential-gitLab-pat.adoc b/downstream/modules/platform/ref-controller-credential-gitLab-pat.adoc index 9205d610cc..69cb10f9ba 100644 --- a/downstream/modules/platform/ref-controller-credential-gitLab-pat.adoc +++ b/downstream/modules/platform/ref-controller-credential-gitLab-pat.adoc @@ -1,10 +1,12 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-gitLab-pat"] = GitLab Personal Access Token credential type Select this credential to enable you to access GitLab by using a _Personal Access Token_ (PAT), which you can get through GitLab. -For more information, see xref:controller-set-up-github-webhook[Working with Webhooks]. +For more information, see xref:controller-set-up-gitlab-webhook[Setting up a GitLab webhook]. GitLab PAT credentials require a value in the *Token* field, which is provided in your GitLab profile settings. diff --git a/downstream/modules/platform/ref-controller-credential-hashiCorp-secret.adoc b/downstream/modules/platform/ref-controller-credential-hashiCorp-secret.adoc index 0856907152..a66154eeec 100644 --- a/downstream/modules/platform/ref-controller-credential-hashiCorp-secret.adoc +++ b/downstream/modules/platform/ref-controller-credential-hashiCorp-secret.adoc @@ -1,7 +1,9 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-hasiCorp-secret"] = HashiCorp Vault Secret Lookup credential type This is considered part of the secret management capability. -For more information, see xref:ref-hashicorp-vault-lookup[HashiCorp Vault Secret Lookup]. \ No newline at end of file +For more information, see link:{URLControllerAdminGuide}/assembly-controller-secret-management#ref-hashicorp-vault-lookup[HashiCorp Vault Secret Lookup]. 
\ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-hashiCorp-vault.adoc b/downstream/modules/platform/ref-controller-credential-hashiCorp-vault.adoc index dde92210cf..615276f5db 100644 --- a/downstream/modules/platform/ref-controller-credential-hashiCorp-vault.adoc +++ b/downstream/modules/platform/ref-controller-credential-hashiCorp-vault.adoc @@ -1,7 +1,9 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-hashiCorp-vault"] = HashiCorp Vault Signed SSH credential type This is considered part of the secret management capability. -For more information, see xref:ref-hashicorp-signed-ssh[HashiCorp Vault Signed SSH]. \ No newline at end of file +For more information, see link:{URLControllerAdminGuide}/assembly-controller-secret-management#ref-hashicorp-signed-ssh[HashiCorp Vault Signed SSH]. \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-insights.adoc b/downstream/modules/platform/ref-controller-credential-insights.adoc index 01063f9378..4b496294e6 100644 --- a/downstream/modules/platform/ref-controller-credential-insights.adoc +++ b/downstream/modules/platform/ref-controller-credential-insights.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-insights"] = Insights credential type diff --git a/downstream/modules/platform/ref-controller-credential-machine.adoc b/downstream/modules/platform/ref-controller-credential-machine.adoc index 8b36c1f33d..5501a40381 100644 --- a/downstream/modules/platform/ref-controller-credential-machine.adoc +++ b/downstream/modules/platform/ref-controller-credential-machine.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-machine"] = Machine credential type @@ -28,9 +30,9 @@ In these cases, a dialog opens when the job is launched, prompting the user to e * *Privilege Escalation Method*: Specifies the type of escalation privilege to assign to specific users. This is the same as specifying the `--become-method=BECOME_METHOD` parameter, where `BECOME_METHOD` is any of the existing methods, or a custom method you have written. Begin entering the name of the method, and the appropriate name auto-populates. - ++ //image:credentials-create-machine-credential-priv-escalation.png[image] - ++ ** *empty selection*: If a task or play has `become` set to `yes` and is used with an empty selection, then it will default to `sudo`. ** *sudo*: Performs single commands with superuser (root user) privileges. ** *su*: Switches to the superuser (root user) account (or to other user accounts). @@ -69,16 +71,4 @@ You must use sudo password must in combination with SSH passwords or SSH Private [WARNING] ==== Credentials that are used in scheduled jobs must not be configured as *Prompt on launch*. 
-==== - -== Access machine credentials in an ansible playbook - -You can get username and password from Ansible facts: - -[literal, options="nowrap" subs="+attributes"] ----- -vars: - machine: - username: '{{ ansible_user }}' - password: '{{ ansible_password }}' ----- \ No newline at end of file +==== \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-network.adoc b/downstream/modules/platform/ref-controller-credential-network.adoc index 9266b6e47e..5d183bb299 100644 --- a/downstream/modules/platform/ref-controller-credential-network.adoc +++ b/downstream/modules/platform/ref-controller-credential-network.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-network"] = Network credential type @@ -12,7 +14,7 @@ When connecting to network devices, the credential type must match the connectio * For `local` connections using `provider`, credential type should be *Network*. * For all other network connections (`httpapi`, `netconf`, and `network_cli`), the credential type should be *Machine*. -For more information about connection types available for network devices, see link:https://docs.ansible.com/ansible/devel/network/getting_started/network_differences.html#multiple-communication-protocols[Multiple Communication Protocols]. +For more information about connection types available for network devices, see link:{URLControllerUserGuide}/controller-credentials#ref-controller-multiple-connection-protocols[Multiple Communication Protocols]. {ControllerNameStart} uses the following environment variables for Network credentials: @@ -30,19 +32,7 @@ Provide the following information for network credentials: * *Password*: The password to use in conjunction with the network device. * *SSH Private Key*: Copy or drag-and-drop the actual SSH Private Key to be used to authenticate the user to the network through SSH. * *Private Key Passphrase*: The passphrase for the private key to authenticate the user to the network through SSH. -* *Authorize*: Select this from the Options field to control whether or not to enter privileged mode. -* If *Authorize* is checked, enter a password in the *Authorize Password* field to access privileged mode. - -For more information, see link:https://www.ansible.com/blog/porting-ansible-network-playbooks-with-new-connection-plugins[Porting Ansible Network Playbooks with New Connection Plugins]. - -= Access network credentials in an ansible playbook +* *Authorize*: Select this to control whether or not to enter privileged mode. +** If *Authorize* is checked, enter a password in the *Authorize Password* field to access privileged mode. -You can get the username and password parameters from a job runtime environment: - -[literal, options="nowrap" subs="+attributes"] ----- -vars: - network: - username: '{{ lookup("env", "ANSIBLE_NET_USERNAME") }}' - password: '{{ lookup("env", "ANSIBLE_NET_PASSWORD") }}' ----- \ No newline at end of file +For more information, see link:https://www.ansible.com/blog/porting-ansible-network-playbooks-with-new-connection-plugins[Porting Ansible Network Playbooks with New Connection Plugins]. 
\ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-openShift.adoc b/downstream/modules/platform/ref-controller-credential-openShift.adoc index e039916c6b..6afba86b89 100644 --- a/downstream/modules/platform/ref-controller-credential-openShift.adoc +++ b/downstream/modules/platform/ref-controller-credential-openShift.adoc @@ -1,20 +1,22 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-openShift"] = OpenShift or Kubernetes API Bearer Token credential type Select this credential type to create instance groups that point to a Kubernetes or OpenShift container. -For more information, see link:https://docs.ansible.com/automation-controller/4.4/html/administration/containers_instance_groups.html#ag-ext-exe-env[Container and Instance Groups] in the _{ControllerAG}_. +For more information, see link:{URLControllerUserGuide}/controller-instance-and-container-groups[Instance and container groups]. //image:credentials-create-containers-credential.png[Credentials- create Containers credential] Provide the following information for container credentials: * *OpenShift or Kubernetes API Endpoint* (required): The endpoint used to connect to an OpenShift or Kubernetes container. -* *API Authentication Bearer Token* (required): The token used to authenticate the connection. +* *API authentication bearer token* (required): The token used to authenticate the connection. * Optional: *Verify SSL*: You can check this option to verify the server's SSL/TLS certificate is valid and trusted. Environments that use internal or private _Certificate Authority_ (CA) must leave this option unchecked to disable verification. -* *Certificate Authority Data*: Include the `BEGIN CERTIFICATE` and `END CERTIFICATE` lines when pasting the certificate, if provided. +* *Certificate Authority data*: Include the `BEGIN CERTIFICATE` and `END CERTIFICATE` lines when pasting the certificate, if provided. A container group is a type of instance group that has an associated credential that enables connection to an OpenShift cluster. To set up a container group, you must have the following items: diff --git a/downstream/modules/platform/ref-controller-credential-openStack.adoc b/downstream/modules/platform/ref-controller-credential-openStack.adoc index 344a55a6ec..93bb6b6754 100644 --- a/downstream/modules/platform/ref-controller-credential-openStack.adoc +++ b/downstream/modules/platform/ref-controller-credential-openStack.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-openstack"] = OpenStack credential type @@ -6,14 +8,15 @@ Select this credential type to enable synchronization of cloud inventory with Op //image:credentials-create-openstack-credential.png[Credentials- create OpenStack credential] -Provide the following information for OpenStack credentials: +Enter the following information for OpenStack credentials: * *Username*: The username to use to connect to OpenStack. * *Password (API Key)*: The password or API key to use to connect to OpenStack. * *Host (Authentication URL)*: The host to be used for authentication. * *Project (Tenant Name)*: The Tenant name or Tenant ID used for OpenStack. This value is usually the same as the username. -* Optional: *Project (Domain Name)*: Provide the project name associated with your domain. -* Optional: *Domain name*: Provide the FQDN to be used to connect to OpenStack. +* Optional: *Project (Domain Name)*: Give the project name associated with your domain. 
+* Optional: *Domain Name*: Give the FQDN to be used to connect to OpenStack. +* Optional: *Region Name*: Give the region name. For some cloud providers, like OVH, the region must be specified. -If you are interested in using OpenStack Cloud Credentials, see xref:controller-cloud-credentials[Use Cloud Credentials with a cloud inventory], which includes a sample playbook. \ No newline at end of file +If you are interested in using OpenStack Cloud Credentials, see link:{URLControllerUserGuide}/controller-job-templates#controller-cloud-credentials[Use Cloud Credentials with a cloud inventory], which includes a sample playbook. \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-satellite.adoc b/downstream/modules/platform/ref-controller-credential-satellite.adoc index c9a6253782..b8ee1d1e48 100644 --- a/downstream/modules/platform/ref-controller-credential-satellite.adoc +++ b/downstream/modules/platform/ref-controller-credential-satellite.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-satellite"] = Red Hat Satellite 6 credential type @@ -7,10 +9,7 @@ Select this credential type to enable synchronization of cloud inventory with Re {ControllerNameStart} writes a Satellite configuration file based on fields prompted in the user interface. The absolute path to the file is set in the following environment variable: -[literal, options="nowrap" subs="+attributes"] ----- -FOREMAN_INI_PATH ----- +`FOREMAN_INI_PATH` //image:credentials-create-rh-sat-credential.png[Credentials- create Red Hat Satellite 6 credential] diff --git a/downstream/modules/platform/ref-controller-credential-source-control.adoc b/downstream/modules/platform/ref-controller-credential-source-control.adoc index 4506651ed4..471d9dd1fb 100644 --- a/downstream/modules/platform/ref-controller-credential-source-control.adoc +++ b/downstream/modules/platform/ref-controller-credential-source-control.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-source-control"] = Source Control credential type diff --git a/downstream/modules/platform/ref-controller-credential-terraform.adoc b/downstream/modules/platform/ref-controller-credential-terraform.adoc index 3a2e8306b3..71df6cc232 100644 --- a/downstream/modules/platform/ref-controller-credential-terraform.adoc +++ b/downstream/modules/platform/ref-controller-credential-terraform.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-terraform"] // This Terraform module is for AAP 2.5 @@ -7,20 +9,19 @@ Terraform is a HashiCorp tool used to automate various infrastructure tasks. Select this credential type to enable synchronization with the Terraform inventory source. -The Terraform credential requires the *Backend configuration* attribute which must contain the data from a link:https://developer.hashicorp.com/terraform/language/settings/backends/configuration[Terraform backend block]. -You can paste, drag a file, browse to upload a file, or click the image:leftkey.png[Key,15,15] icon to populate the field from an external xref:assembly-controller-secret-management[Secret Management System]. +The Terraform credential requires the *Backend configuration* attribute which must contain the data from a link:https://developer.hashicorp.com/terraform/language/backend[Terraform backend block]. 
+You can paste, drag a file, browse to upload a file, or click the image:leftkey.png[Key,15,15] icon to populate the field from an external link:{URLControllerAdminGuide}/assembly-controller-secret-management[Secret Management System]. Terraform backend configuration requires the following inputs: * *Name* * Credential type: Select *Terraform backend configuration*. * Optional: *Organization* -* Optional: *Description* -//Not yet available in test env. +* Optional: *Description* * *Backend configuration*: Drag a file here or browse to upload. - ++ Example configuration for an S3 backend: - ++ ---- bucket = "my-terraform-state-bucket" key = "path/to/terraform-state-file" @@ -28,3 +29,5 @@ region = "us-east-1" access_key = "my-aws-access-key" secret_key = "my-aws-secret-access-key" ---- ++ +* Optional: *Google Cloud Platform account credentials* diff --git a/downstream/modules/platform/ref-controller-credential-thycotic-server.adoc b/downstream/modules/platform/ref-controller-credential-thycotic-server.adoc index b6d5e18f92..57285af99c 100644 --- a/downstream/modules/platform/ref-controller-credential-thycotic-server.adoc +++ b/downstream/modules/platform/ref-controller-credential-thycotic-server.adoc @@ -1,7 +1,9 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-thycotic-server"] = Thycotic secret server credential type This is considered part of the secret management capability. -For more information, see xref:ref-thycotic-secret-server[Thycotic Secret Server]. \ No newline at end of file +For more information, see link:{URLControllerAdminGuide}/assembly-controller-secret-management#ref-thycotic-secret-server[Thycotic Secret Server]. \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-thycotic-vault.adoc b/downstream/modules/platform/ref-controller-credential-thycotic-vault.adoc index e5eb58a3be..d5f2fa3b76 100644 --- a/downstream/modules/platform/ref-controller-credential-thycotic-vault.adoc +++ b/downstream/modules/platform/ref-controller-credential-thycotic-vault.adoc @@ -1,7 +1,9 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-thycotic-vault"] = Thycotic DevOps Secrets Vault credential type This is considered part of the secret management capability. -For more information, see xref:ref-thycotic-devops-vault[Thycotic DevOps Secrets Vault]. \ No newline at end of file +For more information, see link:{URLControllerAdminGuide}/assembly-controller-secret-management#ref-thycotic-devops-vault[Thycotic DevOps Secrets Vault]. 
\ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-types.adoc b/downstream/modules/platform/ref-controller-credential-types.adoc index 071d95a60a..c81dbecc74 100644 --- a/downstream/modules/platform/ref-controller-credential-types.adoc +++ b/downstream/modules/platform/ref-controller-credential-types.adoc @@ -1,37 +1,44 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-types"] = Credential types {ControllerNameStart} supports the following credential types: -* xref:ref-controller-credential-aws[Amazon Web Services] -* xref:ref-controller-credential-galaxy-hub[{Galaxy}/Automation Hub API Token] -* xref:ref-controller-credential-centrify-vault[Centrify Vault Credential Provider Lookup] -* xref:ref-controller-credential-container-registry[Container Registry] -* xref:ref-controller-credential-cyberark-central[CyberArk Central Credential Provider Lookup] -* xref:ref-controller-credential-cyberark-conjur[CyberArk Conjur Secrets Manager Lookup] -* xref:ref-controller-credential-gitHub-pat[GitHub Personal Access Token] -* xref:ref-controller-credential-gitLab-pat[GitLab Personal Access Token] -* xref:ref-controller-credential-GCE[Google Compute Engine] -* xref:ref-controller-credential-GPG-public-key[GPG Public Key] -* xref:ref-controller-credential-hasiCorp-secret[HashiCorp Vault Secret Lookup] -* xref:ref-controller-credential-hashiCorp-vault[HashiCorp Vault Signed SSH] -* xref:ref-controller-credential-insights[Insights] -* xref:ref-controller-credential-machine[Machine] -* xref:ref-controller-credential-azure-key[{Azure} Key Vault] -* xref:ref-controller-credential-azure-resource[{Azure} Resource Manager] -* xref:ref-controller-credential-network[Network] -* xref:ref-controller-credential-openShift[OpenShift or Kubernetes API Bearer Token] -* xref:ref-controller-credential-openstack[OpenStack] -* xref:ref-controller-credential-aap[{PlatformName}] -* xref:ref-controller-credential-satellite[Red Hat Satellite 6] -* xref:ref-controller-credential-virtualization[Red Hat Virtualization] -* xref:ref-controller-credential-source-control[Source Control] -* xref:ref-controller-credential-thycotic-vault[Thycotic DevOps Secrets Vault] -* xref:ref-controller-credential-thycotic-server[Thycotic Secret Server] -* xref:ref-controller-credential-vault[Vault] -* xref:ref-controller-credential-vmware-vcenter[VMware vCenter] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-aws[Amazon Web Services] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-galaxy-hub[{Galaxy}/{HubName} API Token] //added AWS Secrets Manager Lookup +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-aws-secrets-lookup[AWS Secrets Manager Lookup] //added Bitbucket Data Center HTTP Access Token +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-bitbucket[Bitbucket Data Center HTTP Access Token] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-centrify-vault[Centrify Vault Credential Provider Lookup] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-container-registry[Container Registry] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-cyberark-central[CyberArk Central Credential Provider Lookup] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-cyberark-conjur[CyberArk Conjur Secrets Manager Lookup] +* 
link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-gitHub-pat[GitHub Personal Access Token] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-gitLab-pat[GitLab Personal Access Token] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-GCE[Google Compute Engine] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-GPG-public-key[GPG Public Key] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-hasiCorp-secret[HashiCorp Vault Secret Lookup] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-hashiCorp-vault[HashiCorp Vault Signed SSH] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-insights[Insights] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-machine[Machine] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-azure-key[{Azure} Key Vault] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-azure-resource[{Azure} Resource Manager] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-network[Network] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-openShift[OpenShift or Kubernetes API Bearer Token] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-openstack[OpenStack] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-aap[{PlatformName}] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-satellite[Red Hat Satellite 6] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-virtualization[Red Hat Virtualization] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-source-control[Source Control] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-terraform[Terraform Backend Configuration] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-thycotic-vault[Thycotic DevOps Secrets Vault] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-thycotic-server[Thycotic Secret Server] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-vault[Vault] +* link:{URLControllerUserGuide}/controller-credentials#ref-controller-credential-vmware-vcenter[VMware vCenter] -The credential types associated with Centrify, CyberArk, HashiCorp Vault, {Azure} Key Vault, and Thycotic are part of the credential plugins capability that enables an external system to lookup your secrets information. +The credential types associated with AWS Secrets Manager, Centrify, CyberArk, HashiCorp Vault, {Azure} Key Vault, and Thycotic are part of the credential plugins capability that enables an external system to look up your secrets information. -For more information, see xref:assembly-controller-secret-management[Secrets Management System]. \ No newline at end of file +For more information, see link:{URLControllerAdminGuide}/assembly-controller-secret-management[Secrets Management System]. 
\ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-credential-vault.adoc b/downstream/modules/platform/ref-controller-credential-vault.adoc index 6b2cb2ee1f..d299a91752 100644 --- a/downstream/modules/platform/ref-controller-credential-vault.adoc +++ b/downstream/modules/platform/ref-controller-credential-vault.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-vault"] = Ansible Vault credential type @@ -8,11 +10,11 @@ Select this credential type to enable synchronization of inventory with Ansible Vault credentials require the *Vault Password* and an optional *Vault Identifier* if applying multi-Vault credentialing. -For more information on the Multi-Vault support, refer to the link:https://docs.ansible.com/automation-controller/latest/html/administration/multi-creds-assignment.html#multi-vault-credentials[Multi-Vault Credentials] section of the _{ControllerAG}_. +// For more information about the Multi-Vault support, see the link:https://docs.ansible.com/automation-controller/latest/html/administration/multi-creds-assignment.html#multi-vault-credentials[Multi-Vault Credentials] section of _{ControllerAG}_. You can configure {ControllerName} to ask the user for the password at launch time by selecting *Prompt on launch*. -When you select *Prompt on Launch*, a dialog opens when the job is launched, prompting the user to enter the password. +When you select *Prompt on launch*, a dialog opens when the job is launched, prompting the user to enter the password. [WARNING] ==== diff --git a/downstream/modules/platform/ref-controller-credential-virtualization.adoc b/downstream/modules/platform/ref-controller-credential-virtualization.adoc index e39896c234..320e2da0fb 100644 --- a/downstream/modules/platform/ref-controller-credential-virtualization.adoc +++ b/downstream/modules/platform/ref-controller-credential-virtualization.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-virtualization"] = Red Hat Virtualization credential type @@ -25,47 +27,3 @@ To sync with the inventory, the credential URL needs to include the `ovirt-engin * *Password*: The password to use to connect to it. * Optional: *CA File*: Provide an absolute path to the oVirt certificate file (it might end in `.pem`, `.cer` and `.crt` extensions, but preferably `.pem` for consistency) -== Access virtualization credentials in an Ansible Playbook - -You can get the Red Hat Virtualization credential parameter from a job runtime environment: - -[literal, options="nowrap" subs="+attributes"] ----- -vars: - ovirt: - ovirt_url: '{{ lookup("env", "OVIRT_URL") }}' - ovirt_username: '{{ lookup("env", "OVIRT_USERNAME") }}' - ovirt_password: '{{ lookup("env", "OVIRT_PASSWORD") }}' ----- - -The `file` and `env` injectors for Red Hat Virtualization are as follows: - -[literal, options="nowrap" subs="+attributes"] ----- -ManagedCredentialType( - namespace='rhv', - -.... -.... -.... 
- -injectors={ - # The duplication here is intentional; the ovirt4 inventory plugin - # writes a .ini file for authentication, while the ansible modules for - # ovirt4 use a separate authentication process that support - # environment variables; by injecting both, we support both - 'file': { - 'template': '\n'.join( - [ - '[ovirt]', - 'ovirt_url={{host}}', - 'ovirt_username={{username}}', - 'ovirt_password={{password}}', - '{% if ca_file %}ovirt_ca_file={{ca_file}}{% endif %}', - ] - ) - }, - 'env': {'OVIRT_INI_PATH': '{{tower.filename}}', 'OVIRT_URL': '{{host}}', 'OVIRT_USERNAME': '{{username}}', 'OVIRT_PASSWORD': '{{password}}'}, - }, -) ----- diff --git a/downstream/modules/platform/ref-controller-credential-vmware-vcenter.adoc b/downstream/modules/platform/ref-controller-credential-vmware-vcenter.adoc index 36b52e214d..ce11de3222 100644 --- a/downstream/modules/platform/ref-controller-credential-vmware-vcenter.adoc +++ b/downstream/modules/platform/ref-controller-credential-vmware-vcenter.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-credential-vmware-vcenter"] = VMware vCenter credential type @@ -29,16 +31,3 @@ VMware credentials require the following inputs: ==== If the VMware guest tools are not running on the instance, VMware inventory synchronization does not return an IP address for that instance. ==== - -== Access VMware vCenter credentials in an ansible playbook - -You can get VMware vCenter credential parameters from a job runtime environment: - -[literal, options="nowrap" subs="+attributes"] ----- -vars: - vmware: - host: '{{ lookup("env", "VMWARE_HOST") }}' - username: '{{ lookup("env", "VMWARE_USER") }}' - password: '{{ lookup("env", "VMWARE_PASSWORD") }}' ----- diff --git a/downstream/modules/platform/ref-controller-custom-fact-scans.adoc b/downstream/modules/platform/ref-controller-custom-fact-scans.adoc index 73f6f7da72..71edb17d3c 100644 --- a/downstream/modules/platform/ref-controller-custom-fact-scans.adoc +++ b/downstream/modules/platform/ref-controller-custom-fact-scans.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-custom-fact-scans"] = Custom fact scans diff --git a/downstream/modules/platform/ref-controller-data-collection-details.adoc b/downstream/modules/platform/ref-controller-data-collection-details.adoc index 315293e03e..ad70023736 100644 --- a/downstream/modules/platform/ref-controller-data-collection-details.adoc +++ b/downstream/modules/platform/ref-controller-data-collection-details.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-data-collection-details"] = Details of data collection diff --git a/downstream/modules/platform/ref-controller-database-settings.adoc b/downstream/modules/platform/ref-controller-database-settings.adoc index 459cb6ff4b..8f5c65cc70 100644 --- a/downstream/modules/platform/ref-controller-database-settings.adoc +++ b/downstream/modules/platform/ref-controller-database-settings.adoc @@ -1,10 +1,11 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-database-settings"] = PostgreSQL database configuration and maintenance for {ControllerName} To improve the performance of {ControllerName}, you can configure the following configuration parameters in the database: - *Maintenance* The `VACUUM` and `ANALYZE` tasks are important maintenance activities that can impact performance. 
In normal PostgreSQL operation, tuples that are deleted or obsoleted by an update are not physically removed from their table; they remain present until a `VACUUM` is done. Therefore it's necessary to do VACUUM periodically, especially on frequently-updated tables. `ANALYZE` collects statistics about the contents of tables in the database, and stores the results in the `pg_statistic` system catalog. Subsequently, the query planner uses these statistics to help determine the most efficient execution plans for queries. The autovacuuming PostgreSQL configuration parameter automates the execution of `VACUUM` and `ANALYZE` commands. Setting autovacuuming to *true* is a good practice. However, autovacuuming will not occur if there is never any idle time on the database. If it is observed that autovacuuming is not sufficiently cleaning up space on the database disk, then scheduling specific vacuum tasks during specific maintenance windows can be a solution. @@ -15,7 +16,17 @@ To improve the performance of the PostgreSQL server, configure the following _Gr * `shared_buffers`: determines how much memory is dedicated to the server for caching data. The default value for this parameter is 128 MB. When you modify this value, you must set it between 15% and 25% of the machine's total RAM. -NOTE: You must restart the database server after changing the value for shared_buffers. +[NOTE] +==== +You must restart the database server after changing the value for `shared_buffers`. +==== + +[WARNING] +==== +If you are compiling PostgreSQL against OpenSSL 3.2, your system regresses to remove the parameter for User during startup. You can rectify this by using the `BIO_get_app_data` call instead of `BIO_get_data`. Only an administrator can make these changes, but it impacts all users connected to the PostgreSQL database. + +If you update your systems without the OpenSSL patch, you are not impacted, and you do not need to take action. +==== * `work_mem`: provides the amount of memory to be used by internal sort operations and hash tables before disk-swapping. Sort operations are used for order by, distinct, and merge join operations. Hash tables are used in hash joins and hash-based aggregation. The default value for this parameter is 4 MB. Setting the correct value of the `work_mem` parameter improves the speed of a search by reducing disk-swapping. ** Use the following formula to calculate the optimal value of the `work_mem` parameter for the database server: @@ -36,7 +52,11 @@ NOTE: Setting a large `work_mem` can cause the PostgreSQL server to go out of me Total RAM * 0.05 ---- -NOTE: Set `maintenance_work_mem` higher than `work_mem` to improve performance for vacuuming. +[NOTE] +==== +Set `maintenance_work_mem` higher than `work_mem` to improve performance for vacuuming. +==== .Additional resources -For more information on autovacuuming settings, see link:https://www.postgresql.org/docs/13/runtime-config-autovacuum.html[Automatic Vacuuming]. 
diff --git a/downstream/modules/platform/ref-controller-dependencies.adoc b/downstream/modules/platform/ref-controller-dependencies.adoc index 3f43fba4a9..7f06ce336d 100644 --- a/downstream/modules/platform/ref-controller-dependencies.adoc +++ b/downstream/modules/platform/ref-controller-dependencies.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-dependencies"] = Dependencies diff --git a/downstream/modules/platform/ref-controller-django-password-policies.adoc b/downstream/modules/platform/ref-controller-django-password-policies.adoc index 170f8ccd2b..7b5ac56f4c 100644 --- a/downstream/modules/platform/ref-controller-django-password-policies.adoc +++ b/downstream/modules/platform/ref-controller-django-password-policies.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-django-password-policies"] = Django password policies diff --git a/downstream/modules/platform/ref-controller-ec2-vpc-instances.adoc b/downstream/modules/platform/ref-controller-ec2-vpc-instances.adoc index 9b6feba6b2..73f59eac97 100644 --- a/downstream/modules/platform/ref-controller-ec2-vpc-instances.adoc +++ b/downstream/modules/platform/ref-controller-ec2-vpc-instances.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-ec2-vpc-instances"] = Viewing private EC2 VPC instances in the {ControllerName} inventory diff --git a/downstream/modules/platform/ref-controller-ee-configuration-options.adoc b/downstream/modules/platform/ref-controller-ee-configuration-options.adoc index cced3ebb52..6f94cce5f6 100644 --- a/downstream/modules/platform/ref-controller-ee-configuration-options.adoc +++ b/downstream/modules/platform/ref-controller-ee-configuration-options.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-ee-configuration-options"] = Configuration options @@ -6,22 +8,29 @@ Use the following configuration YAML keys in your definition file.
The {Builder} 3.x {ExecEnvShort} definition file accepts seven top-level sections: -* xref:ref-controller-additional-build-files[additional_build_files] -* xref:ref-controller-additional-build-steps[additional_build_steps] -* xref:ref-controller-build-arg-defaults[build_arg_defaults] -* xref:ref-controller-dependencies[dependencies] -* xref:ref-controller-images[images] -** xref:ref-controller-image-verification[image verification] -* xref:ref-controller-config-options[options] -* xref:ref-controller-config-version[version] +* link:{URLControllerUserGuide}/assembly-controller-ee-setup-reference#ref-controller-additional-build-files[additional_build_files] +* link:{URLControllerUserGuide}/assembly-controller-ee-setup-reference#ref-controller-additional-build-steps[additional_build_steps] +* link:{URLControllerUserGuide}/assembly-controller-ee-setup-reference#ref-controller-build-arg-defaults[build_arg_defaults] +* link:{URLControllerUserGuide}/assembly-controller-ee-setup-reference#ref-controller-dependencies[dependencies] +* link:{URLControllerUserGuide}/assembly-controller-ee-setup-reference#ref-controller-images[images] +** link:{URLControllerUserGuide}/assembly-controller-ee-setup-reference#ref-controller-image-verification[image verification] +* link:{URLControllerUserGuide}/assembly-controller-ee-setup-reference#ref-controller-config-options[options] +* link:{URLControllerUserGuide}/assembly-controller-ee-setup-reference#ref-controller-config-version[version] include::ref-controller-additional-build-files.adoc[leveloffset=+1] + include::ref-controller-additional-build-steps.adoc[leveloffset=+1] + include::ref-controller-build-arg-defaults.adoc[leveloffset=+1] + include::ref-controller-dependencies.adoc[leveloffset=+1] + include::ref-controller-images.adoc[leveloffset=+1] + include::ref-controller-image-verification.adoc[leveloffset=+1] + include::ref-controller-config-options.adoc[leveloffset=+1] + include::ref-controller-config-version.adoc[leveloffset=+1] diff --git a/downstream/modules/platform/ref-controller-ee-definition.adoc b/downstream/modules/platform/ref-controller-ee-definition.adoc index d3d9b63c01..1e869ff857 100644 --- a/downstream/modules/platform/ref-controller-ee-definition.adoc +++ b/downstream/modules/platform/ref-controller-ee-definition.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-ee-definition"] = Execution environment definition example diff --git a/downstream/modules/platform/ref-controller-events-table-csv.adoc b/downstream/modules/platform/ref-controller-events-table-csv.adoc index 52aaba00fe..bb5ff2dd26 100644 --- a/downstream/modules/platform/ref-controller-events-table-csv.adoc +++ b/downstream/modules/platform/ref-controller-events-table-csv.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-events-table-csv"] = events_table.csv diff --git a/downstream/modules/platform/ref-controller-example-workload-reqs.adoc b/downstream/modules/platform/ref-controller-example-workload-reqs.adoc index 8548741356..161827a613 100644 --- a/downstream/modules/platform/ref-controller-example-workload-reqs.adoc +++ b/downstream/modules/platform/ref-controller-example-workload-reqs.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-example-workload-reqs"] = Example workload requirements diff --git a/downstream/modules/platform/ref-controller-existing-security.adoc b/downstream/modules/platform/ref-controller-existing-security.adoc index aed944b5e5..5825764461 100644 --- 
a/downstream/modules/platform/ref-controller-existing-security.adoc +++ b/downstream/modules/platform/ref-controller-existing-security.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-existing-security"] = Existing security functionality @@ -7,4 +9,4 @@ Use {ControllerName}'s role-based access control (RBAC) to delegate the minimum Use teams in {ControllerName} to assign permissions to groups of users rather than to users individually. .Additional resources -For more information, see link:https://docs.ansible.com/automation-controller/4.4/html/userguide/security.html#rbac-ug[Role-Based Access Controls] in the _{ControllerUG}_. +For more information, see link:https://docs.ansible.com/automation-controller/4.4/html/userguide/security.html#rbac-ug[Role-Based Access Controls] in _{ControllerUG}_. diff --git a/downstream/modules/platform/ref-controller-export-old-scripts.adoc b/downstream/modules/platform/ref-controller-export-old-scripts.adoc index 978e18c9c1..38fff11d9a 100644 --- a/downstream/modules/platform/ref-controller-export-old-scripts.adoc +++ b/downstream/modules/platform/ref-controller-export-old-scripts.adoc @@ -1,9 +1,11 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-export-old-scripts"] = Export old inventory scripts Despite the removal of the custom inventory scripts API, the scripts are still saved in the database. -The commands described in this section enable you to recover the scripts from the database in a format that is suitable for you to subsequently check into source control. +You can recover the scripts from the database in a format that is suitable for subsequently checking into source control. Use the following commands: @@ -67,5 +69,5 @@ $ ansible-inventory -i ./my_scripts/_11__inventory_script_upperorder --list --export In the preceding example, you can `cd` into `my_scripts` and then issue a `git init` command, add the scripts you want, push them to source control, and then create an SCM inventory source in the user interface. -For more information on syncing or using custom inventory scripts, see link:https://docs.ansible.com/automation-controller/4.4/html/administration/scm-inv-source.html#ag-inv-import[Inventory file importing] in the _{ControllerAG}_. +For more information about syncing or using custom inventory scripts, see link:{URLControllerAdminGuide}/assembly-inventory-file-importing[Inventory file importing] in _{ControllerAG}_.
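The `git` steps described above can be strung together as follows. This is a minimal sketch only: the script name is taken from the listing above, and the remote URL is a placeholder that you must replace with your own repository.

[options="nowrap"]
----
# After extracting the exported scripts into ./my_scripts:
$ cd my_scripts
$ git init
$ git add _11__inventory_script_upperorder   # add the scripts you want to keep
$ git commit -m "Recover inventory scripts from the controller database"
$ git remote add origin git@git.example.com:org/inventory-scripts.git   # placeholder remote
$ git push -u origin HEAD
----

You can then point an SCM inventory source at the new repository in the user interface.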
diff --git a/downstream/modules/platform/ref-controller-external-access.adoc b/downstream/modules/platform/ref-controller-external-access.adoc index 68530a3c47..774e3f97f5 100644 --- a/downstream/modules/platform/ref-controller-external-access.adoc +++ b/downstream/modules/platform/ref-controller-external-access.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-external-access"] = External access diff --git a/downstream/modules/platform/ref-controller-external-account-stores.adoc b/downstream/modules/platform/ref-controller-external-account-stores.adoc index 4dc3f5f7b2..3674fb0a0b 100644 --- a/downstream/modules/platform/ref-controller-external-account-stores.adoc +++ b/downstream/modules/platform/ref-controller-external-account-stores.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-external-account-stores"] = External account stores diff --git a/downstream/modules/platform/ref-controller-extra-variables.adoc b/downstream/modules/platform/ref-controller-extra-variables.adoc index e9b8abee54..4f411f89a0 100644 --- a/downstream/modules/platform/ref-controller-extra-variables.adoc +++ b/downstream/modules/platform/ref-controller-extra-variables.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-extra-variables"] = Extra variables @@ -25,7 +27,7 @@ It is possible that this variable, `debug = true`, can be overridden in a job te To ensure the variables that you pass are not overridden, ensure they are included by redefining them in the survey. You can define extra variables at the inventory, group, and host levels. -If you are specifying the `ALLOW_JINJA_IN_EXTRA_VARS` parameter, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#ref-controller-allow-jinja-in-extra-vars[The ALLOW_JINJA_IN_EXTRA_VARS variable] section of the _{ControllerAG}_ to configure it. +If you are specifying the `ALLOW_JINJA_IN_EXTRA_VARS` parameter, see the link:{URLControllerAdminGuide}/controller-tips-and-tricks#ref-controller-allow-jinja-in-extra-vars[The ALLOW_JINJA_IN_EXTRA_VARS variable] section of _{ControllerAG}_ to configure it. The job template extra variables dictionary is merged with the survey variables. diff --git a/downstream/modules/platform/ref-controller-fact-scan-playbooks.adoc b/downstream/modules/platform/ref-controller-fact-scan-playbooks.adoc index 6fd6b5881d..bd4a7b47dd 100644 --- a/downstream/modules/platform/ref-controller-fact-scan-playbooks.adoc +++ b/downstream/modules/platform/ref-controller-fact-scan-playbooks.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-fact-scan-playbooks"] = Fact scan playbooks diff --git a/downstream/modules/platform/ref-controller-file-directory-structure.adoc b/downstream/modules/platform/ref-controller-file-directory-structure.adoc new file mode 100644 index 0000000000..bdb0dd0b06 --- /dev/null +++ b/downstream/modules/platform/ref-controller-file-directory-structure.adoc @@ -0,0 +1,19 @@ +[id="ref-controller-file-directory-structure"] + += Ansible file and directory structure + +If you are creating a common set of roles to use across projects, access them through source control submodules or from a common location such as `/opt`. +Projects should not expect to import roles or content from other projects. + +For more information, see link:https://docs.ansible.com/ansible/latest/tips_tricks/ansible_tips_tricks.html[General tips] in the Ansible documentation.
+ +[NOTE] +==== +* Avoid using the playbook `vars_prompt` feature, as {ControllerName} does not interactively permit `vars_prompt` questions. +If you cannot avoid using `vars_prompt`, see the xref:controller-surveys-in-job-templates[Surveys in job templates] functionality. + +* Avoid using the playbook `pause` feature without a timeout, as {ControllerName} does not permit canceling a pause interactively. +If you cannot avoid using `pause`, you must set a timeout. +==== + +Jobs use the playbook directory as the current working directory, although jobs must be coded to use the `playbook_dir` variable rather than relying on this. diff --git a/downstream/modules/platform/ref-controller-filter-environ-variables.adoc b/downstream/modules/platform/ref-controller-filter-environ-variables.adoc index 5568d58093..e74fe78a66 100644 --- a/downstream/modules/platform/ref-controller-filter-environ-variables.adoc +++ b/downstream/modules/platform/ref-controller-filter-environ-variables.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-filter-environ-variables"] = Filter on environment variables diff --git a/downstream/modules/platform/ref-controller-filter-hosts-cpu-type.adoc b/downstream/modules/platform/ref-controller-filter-hosts-cpu-type.adoc index 5d8fc8c2fe..2e9da75fc9 100644 --- a/downstream/modules/platform/ref-controller-filter-hosts-cpu-type.adoc +++ b/downstream/modules/platform/ref-controller-filter-hosts-cpu-type.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-filter-hosts-cpu-type"] = Filter hosts by processor type diff --git a/downstream/modules/platform/ref-controller-filter-instances.adoc b/downstream/modules/platform/ref-controller-filter-instances.adoc index 1dab7136ea..9e0006e3c5 100644 --- a/downstream/modules/platform/ref-controller-filter-instances.adoc +++ b/downstream/modules/platform/ref-controller-filter-instances.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-filter-instances"] = Filtering instances returned by the dynamic inventory sources in the controller diff --git a/downstream/modules/platform/ref-controller-git-refspec.adoc b/downstream/modules/platform/ref-controller-git-refspec.adoc index 3f93687563..6bd2d1adb7 100644 --- a/downstream/modules/platform/ref-controller-git-refspec.adoc +++ b/downstream/modules/platform/ref-controller-git-refspec.adoc @@ -1,27 +1,29 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-git-refspec"] = Git Refspec -The *SCM Refspec* field specifies which extra references the update should download from the remote. +The *Source control refspec* field specifies which extra references the update should download from the remote. Examples include the following: * `refs/*:refs/remotes/origin/*`: This fetches all references, including remotes of the remote * `refs/pull/*:refs/remotes/origin/pull/*` (GitHub-specific): This fetches all refs for all pull requests * `refs/pull/62/head:refs/remotes/origin/pull/62/head`: This fetches the ref for one GitHub pull request -For large projects, consider performance impact when using the first or second previous examples. +For large projects, consider performance impact when using the first or second examples. -The *SCM Refspec* parameter affects the availability of the project branch, and can enable access to references not otherwise available. -The previous examples enable you to supply a pull request from the *SCM Branch*, which is not possible without the *SCM Refspec* field.
+The *Source control refspec* parameter affects the availability of the project branch, and can enable access to references not otherwise available. +Use the earlier examples to supply a pull request from the *Source control branch*, which is not possible without the *Source control refspec* field. The Ansible git module fetches `refs/heads/` by default. -This means that a project's branches, tags and commit hashes, can be used as the *SCM Branch* if *SCM Refspec* is blank. -The value specified in the *SCM Refspec* field affects which *SCM Branch* fields can be used as overrides. +This means that you can use a project's branches, tags, and commit hashes as the *Source control branch* if *Source control refspec* is blank. +The value specified in the *Source control refspec* field affects which *Source control branch* fields can be used as overrides. Project updates (of any type) perform an extra `git fetch` command to pull that refspec from the remote. .Example You can set up a project that enables branch override with the first or second refspec example. -Use this in a job template that prompts for the *SCM Branch*. +Use this in a job template that prompts for the *Source control branch*. A client can then launch the job template when a new pull request is created, providing the branch `pull/N/head`, and the job template can run against the provided GitHub pull request reference. .Additional resources diff --git a/downstream/modules/platform/ref-controller-google-cloud.adoc b/downstream/modules/platform/ref-controller-google-cloud.adoc index beb145215a..056f6de613 100644 --- a/downstream/modules/platform/ref-controller-google-cloud.adoc +++ b/downstream/modules/platform/ref-controller-google-cloud.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-google-cloud"] = Google diff --git a/downstream/modules/platform/ref-controller-google-compute.adoc b/downstream/modules/platform/ref-controller-google-compute.adoc index 2bee6ead29..812086b01f 100644 --- a/downstream/modules/platform/ref-controller-google-compute.adoc +++ b/downstream/modules/platform/ref-controller-google-compute.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-google-compute"] = Google Compute Engine diff --git a/downstream/modules/platform/ref-controller-group-name-vars-filtering.adoc b/downstream/modules/platform/ref-controller-group-name-vars-filtering.adoc index caef96e8a5..2c04fef1a9 100644 --- a/downstream/modules/platform/ref-controller-group-name-vars-filtering.adoc +++ b/downstream/modules/platform/ref-controller-group-name-vars-filtering.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-group-name-vars-filtering"] = Filtering on group name and variables @@ -13,8 +15,8 @@ This is the recommended approach. * Define one group. In the definition, include the condition that the group and host variables must match specific values. Use the `limit` pattern to return all the hosts in the new group. -.Example: - +.Example +==== The following inventory file defines four hosts and sets group and host variables. It defines a product group, a sustaining group, and it sets two hosts to a shutdown state.
@@ -55,11 +57,11 @@ `limit`: `is_shutdown:&product_dev` + This constructed inventory input creates a group for both categories and uses the `limit` (host pattern) to only return hosts that -are in the intersection of those two groups, which is documented in link:https://docs.ansible.com/ansible/latest/inventory_guide/intro_patterns.html[Patterns:targeting hosts and groups]. +are in the intersection of those two groups, which is documented in link:https://docs.ansible.com/ansible/latest/inventory_guide/intro_patterns.html[Patterns: targeting hosts and groups]. + When a variable is or is not defined (depending on the host), you can give a default. For example, use `| default("running")` if you know what value it should have when it is not defined. -This helps with debugging, as described in xref:ref-controller-inv-debugging-tips[Debugging tips]. +This helps with debugging, as described in link:{URLControllerUserGuide}/controller-inventories#ref-controller-inv-debugging-tips[Debugging tips]. + . *Construct 1 group, limit to group*: + @@ -77,3 +79,4 @@ groups: This input creates one group that only includes hosts that match both criteria. The limit is then just the group name by itself, returning *host2*. This is the same result as the earlier approach. +==== diff --git a/downstream/modules/platform/ref-controller-group-policies-automationcontroller.adoc b/downstream/modules/platform/ref-controller-group-policies-automationcontroller.adoc index e4ab4b590e..c086a18a1d 100644 --- a/downstream/modules/platform/ref-controller-group-policies-automationcontroller.adoc +++ b/downstream/modules/platform/ref-controller-group-policies-automationcontroller.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-group-policies-automationcontroller"] = Group policies for `automationcontroller` @@ -12,8 +14,9 @@ You can also create custom instance groups in the API after the install has fini The current behavior expects a member of an `instance_group_*` to be part of the `automationcontroller` or `execution_nodes` group. -.Example - +.Define instance groups +[example] +==== [literal, options="nowrap" subs="+attributes"] ---- [automationcontroller] 126-addr.tatu.home ansible_connection=local peers=execution_nodes [instance_group_test] 110-addr.tatu.home ansible_host=192.168.111.110 receptor_listener_port=8928 ---- +==== -After you run installation program, the following error appears: +After you run the {Installer}, the following error appears: [literal, options="nowrap" subs="+attributes"] ---- @@ -36,7 +40,7 @@ TASK [ansible.automation_platform_installer.check_config_static : Validate mesh fatal: [126-addr.tatu.home -> localhost]: FAILED!
=> {"msg": "The host '110-addr.tatu.home' is not present in either [automationcontroller] or [execution_nodes]"} ---- -To fix this, move the box `110-addr.tatu.home` to an `execution_node` group: +To fix this, move the box `110-addr.tatu.home` to an `execution_node` group, as follows: [literal, options="nowrap" subs="+attributes"] ---- @@ -61,5 +65,5 @@ TASK [ansible.automation_platform_installer.check_config_static : Validate mesh ok: [126-addr.tatu.home -> localhost] => {"changed": false, "mesh": {"110-addr.tatu.home": {"node_type": "execution", "peers": [], "receptor_control_filename": "receptor.sock", "receptor_control_service_name": "control", "receptor_listener": true, "receptor_listener_port": 8928, "receptor_listener_protocol": "tcp", "receptor_log_level": "info"}, "126-addr.tatu.home": {"node_type": "control", "peers": ["110-addr.tatu.home"], "receptor_control_filename": "receptor.sock", "receptor_control_service_name": "control", "receptor_listener": false, "receptor_listener_port": 27199, "receptor_listener_protocol": "tcp", "receptor_log_level": "info"}}} ---- -After you upgrade from {ControllerName} 4.0 or earlier, the legacy `instance_group_` member likely has the awx code installed. +After upgrading from {ControllerName} 4.0 or earlier, the legacy `instance_group_` member likely has the awx code installed. This places that node in the `automationcontroller` group. diff --git a/downstream/modules/platform/ref-controller-host-details.adoc b/downstream/modules/platform/ref-controller-host-details.adoc deleted file mode 100644 index 1298a09e80..0000000000 --- a/downstream/modules/platform/ref-controller-host-details.adoc +++ /dev/null @@ -1,17 +0,0 @@ -[id="controller-host-details"] - -= Host Details - -The *Host Details* window displays the following information about the host affected by the selected event and its associated play and task: - -* The *Host*. -* The *Status*. -* The type of run in the *Play* field. -* The type of *Task*. -* If applicable, the Ansible Module task, and any arguments for that module. - -image::ug-job-details-hostevent.png[Host details] - -To view the results in JSON format, click the *JSON* tab. -To view the output of the task, click *Standard Out*. -To view errors from the output, click *Standard Error*. 
diff --git a/downstream/modules/platform/ref-controller-image-verification.adoc b/downstream/modules/platform/ref-controller-image-verification.adoc index bf055b501c..2325529af6 100644 --- a/downstream/modules/platform/ref-controller-image-verification.adoc +++ b/downstream/modules/platform/ref-controller-image-verification.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-image-verification"] = Image verification diff --git a/downstream/modules/platform/ref-controller-images.adoc b/downstream/modules/platform/ref-controller-images.adoc index 33791678c5..ba56796962 100644 --- a/downstream/modules/platform/ref-controller-images.adoc +++ b/downstream/modules/platform/ref-controller-images.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-images"] = images diff --git a/downstream/modules/platform/ref-controller-import-inventory-files.adoc b/downstream/modules/platform/ref-controller-import-inventory-files.adoc index 43682dd8bf..e9d6ee1926 100644 --- a/downstream/modules/platform/ref-controller-import-inventory-files.adoc +++ b/downstream/modules/platform/ref-controller-import-inventory-files.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-import-inventory-files"] = Import existing inventory files and host/group vars into {ControllerName} diff --git a/downstream/modules/platform/ref-controller-install-builder.adoc b/downstream/modules/platform/ref-controller-install-builder.adoc index c6c730b63c..31b1c7ac8c 100644 --- a/downstream/modules/platform/ref-controller-install-builder.adoc +++ b/downstream/modules/platform/ref-controller-install-builder.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="red-controller-install-builder"] = Install ansible-builder @@ -6,4 +8,6 @@ To build images, you must have Podman or Docker installed, along with the `ansib The `--container-runtime` option must correspond to the Podman or Docker executable you intend to use. -For more information, see link:https://ansible.readthedocs.io/projects/builder/en/latest/#quickstart-for-ansible-builder[Quickstart for Ansible Builder], or link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/creating_and_consuming_execution_environments/index[Creating and consuming execution environments]. +When you build an {ExecEnvShort} image, the image must support the architecture that {PlatformNameShort} is deployed on. + +For more information, see link:https://ansible.readthedocs.io/projects/builder/en/latest/#quickstart-for-ansible-builder[Quickstart for Ansible Builder], or link:{URLBuilder}/index[Creating and consuming execution environments]. diff --git a/downstream/modules/platform/ref-controller-instance-group-capacity.adoc b/downstream/modules/platform/ref-controller-instance-group-capacity.adoc index d5cd78d11c..40a5021918 100644 --- a/downstream/modules/platform/ref-controller-instance-group-capacity.adoc +++ b/downstream/modules/platform/ref-controller-instance-group-capacity.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-instance-group-capacity"] = Instance group capacity limits @@ -30,4 +32,5 @@ For container groups, using the `max_forks` value is useful given that all jobs The default `pod_spec` sets requests and not limits, so the pods can "burst" above their requested value without being throttled or reaped.
By setting the `max_forks` value, you can help prevent a scenario where too many jobs with large `forks` values get scheduled concurrently and cause the OpenShift nodes to be oversubscribed with multiple pods using more resources than their requested value. -To set the maximum values for the concurrent jobs and forks in an instance group, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#controller-create-instance-group[Creating an instance group] in the _{ControllerUG}_. +To set the maximum values for the concurrent jobs and forks in an instance group, see +link:{URLControllerUserGuide}/controller-instance-groups#controller-create-instance-group[Creating an instance group]. diff --git a/downstream/modules/platform/ref-controller-instance-group-policies.adoc b/downstream/modules/platform/ref-controller-instance-group-policies.adoc index fa11ad53d3..ad37b5e909 100644 --- a/downstream/modules/platform/ref-controller-instance-group-policies.adoc +++ b/downstream/modules/platform/ref-controller-instance-group-policies.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-instance-group-policies"] = Instance group policies @@ -20,4 +22,4 @@ image::ug-instance-groups_list_view.png[Instance Groups list view] .Additional resources -For more information, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#controller-instance-groups[Managing Instance Groups] section of the _{ControllerUG}_. \ No newline at end of file +For more information, see the xref:controller-instance-groups[Managing Instance Groups] section. \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-instance-info-json.adoc b/downstream/modules/platform/ref-controller-instance-info-json.adoc index d5f8683c4d..a37aa3f584 100644 --- a/downstream/modules/platform/ref-controller-instance-info-json.adoc +++ b/downstream/modules/platform/ref-controller-instance-info-json.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-instance-info-json"] = instance_info.json diff --git a/downstream/modules/platform/ref-controller-internal-cluster-routing.adoc b/downstream/modules/platform/ref-controller-internal-cluster-routing.adoc index 4a65936628..563531c95b 100644 --- a/downstream/modules/platform/ref-controller-internal-cluster-routing.adoc +++ b/downstream/modules/platform/ref-controller-internal-cluster-routing.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-internal-cluster-routing"] = Internal Cluster Routing diff --git a/downstream/modules/platform/ref-controller-internal-services.adoc b/downstream/modules/platform/ref-controller-internal-services.adoc index 0ecdff36fa..ae8e42e293 100644 --- a/downstream/modules/platform/ref-controller-internal-services.adoc +++ b/downstream/modules/platform/ref-controller-internal-services.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-internal-services"] = Internal services @@ -6,7 +8,7 @@ PostgreSQL database:: The connection to the PostgreSQL database is done by password authentication over TCP, either through localhost or remotely (external database). -This connection can use PostgreSQL's built in support for SSL/TLS, as natively configured by the installer support. +This connection can use PostgreSQL's built-in support for SSL/TLS, as natively configured by the installer.
SSL/TLS protocols are configured by the default OpenSSL configuration. A Redis key or value store:: diff --git a/downstream/modules/platform/ref-controller-intro-job-template.adoc b/downstream/modules/platform/ref-controller-intro-job-template.adoc new file mode 100644 index 0000000000..7569f8d017 --- /dev/null +++ b/downstream/modules/platform/ref-controller-intro-job-template.adoc @@ -0,0 +1,45 @@ +:_mod-docs-content-type: REFERENCE + +[id="ref-controller-intro-job-template"] + += Automation templates + +The *Automation Templates* page shows both *job templates* and *workflow job templates* that are currently available. + +Automation Templates serve as a powerful blueprint for automating and orchestrating complex IT tasks. + +Whether defined as a Job Template or Workflow Template, they standardize and streamline routine operations, enabling consistent execution across various environments. + +By specifying playbooks, inventory, credentials, and other configuration details, an Automation Template eliminates manual intervention, reduces errors, and accelerates task completion. + +It also provides flexibility by allowing the chaining of multiple tasks in a Workflow Template, supporting sophisticated automation use cases that can span across multiple systems and processes. + +This ensures IT teams can reliably scale automation while maintaining high efficiency and control. + +The default view is collapsed (Compact), showing the template name, template type, and the timestamp of the last job that ran using that template. You can click the image:arrow.png[Arrow,15,15] icon next to each entry to expand and view more information. This list is sorted alphabetically by name, but you can sort by other criteria, or search by various fields and attributes of a template. + +From this screen you can launch image:rightrocket.png[Launch icon,15,15], edit image:leftpencil.png[Edit icon,15,15], and duplicate image:copy.png[Duplicate icon,15,15] a job template. + +//The default view is to show each template as a card, showing the template name and template type. + +//From the template card you can launch image:rightrocket.png[Rightrocket,15,15], edit image:leftpencil.png[Leftpencil,15,15] a template, or, using the {MoreActionsIcon} icon, you can duplicate image:copy.png[Duplicate,15,15] or delete image:delete-button.png[Delete,15,15] a template. + +Select the template name to display more information about the template, including when it last ran. + +[NOTE] +==== +Search functionality for job templates is limited to alphanumeric characters only. +==== + +Workflow templates have the workflow visualizer image:visualizer.png[Workflow visualizer,15,15] icon as a shortcut for accessing the workflow editor. + +[NOTE] +==== +You can use job templates to build a workflow template. +Templates that show the *Workflow Visualizer* image:visualizer.png[Visualizer, 15,15] icon next to them are workflow templates. +Clicking the icon allows you to build a workflow graphically. +Many parameters in a job template enable you to select *Prompt on Launch*; you can change these values at the workflow level, and they do not affect the values assigned at the job template level. +For instructions, see the xref:controller-workflow-visualizer[Workflow Visualizer] section.
+==== diff --git a/downstream/modules/platform/ref-controller-intro-proj-sign.adoc b/downstream/modules/platform/ref-controller-intro-proj-sign.adoc new file mode 100644 index 0000000000..0387c296b1 --- /dev/null +++ b/downstream/modules/platform/ref-controller-intro-proj-sign.adoc @@ -0,0 +1,37 @@ +:_mod-docs-content-type: REFERENCE + +[id="ref-controller-intro-proj-sign"] + += About project signing + +For project maintainers, the supported way to sign content is to use the `ansible-sign` utility, through the _command-line +interface_ (CLI) supplied with it. + +The CLI aims to make it easy to use cryptographic technology such as _GNU Privacy Guard_ (GPG) to validate that files within a project have not been tampered with in any way. +Currently, GPG is the only supported means of signing and validation. + +{ControllerNameStart} is used to verify the signed content. +After a matching public key has been associated with the signed project, {ControllerName} verifies that the files included during signing have not changed, and that no files have been added or removed unexpectedly. +If the signature is not valid or a file has changed, the project fails to update, and jobs making use of the project will not launch. The verification status of the project ensures that only secure, untampered content can be run in jobs. + +If the repository has already been configured for signing and verification, the usual workflow for altering the project becomes the following: + +. You have a project repository set up already and want to make a change to a file. +. You make the change, and run the following command: + +[literal, options="nowrap" subs="+attributes"] +---- +ansible-sign project gpg-sign /path/to/project +---- + +This command updates a checksum manifest and signs it. +. You commit the change, the updated checksum manifest, and the signature to the repository. + +When you synchronize the project, {ControllerName} pulls in the new changes, checks that the public key associated with the project in {ControllerName} matches the private key that the checksum manifest was signed with (this prevents tampering with the checksum manifest itself), then re-calculates the checksums of each file in the manifest to ensure that the checksum matches (and thus that no file has changed). It also ensures that all files are accounted for: + +Files must be included in, or excluded from, the `MANIFEST.in` file. +For more information on this file, see +link:{URLControllerUserGuide}/assembly-controller-project-signing#con-controller-signing-your-project[Sign a project]. +If files have been added or removed unexpectedly, verification fails. + +image::content-sign-diagram.png[Content signing procedure] diff --git a/downstream/modules/platform/ref-controller-inv-ansible-facts.adoc b/downstream/modules/platform/ref-controller-inv-ansible-facts.adoc index 09089886b5..7b7b7bb26a 100644 --- a/downstream/modules/platform/ref-controller-inv-ansible-facts.adoc +++ b/downstream/modules/platform/ref-controller-inv-ansible-facts.adoc @@ -1,7 +1,9 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-inv-ansible-facts"] = Ansible facts To create an inventory with Ansible facts, you must run a playbook against the inventory that has the setting `gather_facts: true`. The facts differ system-to-system. -The following examples are not intended to address all known scenarios. \ No newline at end of file +The following examples are not intended to address all known scenarios.
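Returning to the signing workflow above, the steps can be strung together as follows. This is a hedged sketch: the `ansible-sign project gpg-sign` command is taken from the module itself, while the `.ansible-sign/sha256sum.txt` and `.ansible-sign/sha256sum.txt.sig` paths are the utility's typical defaults and are an assumption here; verify them for your version.

[options="nowrap"]
----
$ cd /path/to/project
$ ansible-sign project gpg-sign .          # update and sign the checksum manifest
# playbook.yml is the hypothetical changed file; the manifest and signature
# typically land under .ansible-sign/ (assumed default):
$ git add playbook.yml .ansible-sign/sha256sum.txt .ansible-sign/sha256sum.txt.sig
$ git commit -m "Update signed checksum manifest"
$ git push
----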
diff --git a/downstream/modules/platform/ref-controller-inv-debugging-tips.adoc b/downstream/modules/platform/ref-controller-inv-debugging-tips.adoc index 68ad98707b..2b4ed9b73b 100644 --- a/downstream/modules/platform/ref-controller-inv-debugging-tips.adoc +++ b/downstream/modules/platform/ref-controller-inv-debugging-tips.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-inv-debugging-tips"] = Debugging tips @@ -15,7 +17,8 @@ You can also set `strict: false`, and so enable the template to produce an error You might still have to debug the intended function of the templates if they are not producing the expected inventory content. For example, if a `groups` group has a complex filter (like `shutdown_in_product_dev`) but does not contain any hosts in the resultant constructed inventory, then use the `compose` parameter to help debug. -For example: +.Example +==== [literal, options="nowrap" subs="+attributes"] ---- source_vars: @@ -33,3 +36,4 @@ limit: `` Running with a blank `limit` returns all hosts. You can use this to inspect specific variables on specific hosts, giving insight into where problems in the `groups` lie. +==== diff --git a/downstream/modules/platform/ref-controller-inv-nested-groups.adoc b/downstream/modules/platform/ref-controller-inv-nested-groups.adoc index 1636aa4b6f..2a2e78ab89 100644 --- a/downstream/modules/platform/ref-controller-inv-nested-groups.adoc +++ b/downstream/modules/platform/ref-controller-inv-nested-groups.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-inv-nested-groups"] = Nested groups @@ -28,8 +30,7 @@ all: Because `host1` is in `groupB`, it is also in `groupA`. - -.Filter on nested group names +*Filter on nested group names* Use the following YAML format to filter on nested group names: @@ -42,7 +43,7 @@ plugin: constructed `limit`: `groupA` ---- -.Filter on nested group property +*Filter on nested group property* Use the following YAML format to filter on a group variable, even if the host is indirectly a member of that group. @@ -61,4 +62,4 @@ groups: filter_var_is_filter_val: filter_var | default("") == "filter_val" limit: filter_var_is_filter_val ----- \ No newline at end of file +---- diff --git a/downstream/modules/platform/ref-controller-inv-variable-management.adoc b/downstream/modules/platform/ref-controller-inv-variable-management.adoc new file mode 100644 index 0000000000..0d439797a4 --- /dev/null +++ b/downstream/modules/platform/ref-controller-inv-variable-management.adoc @@ -0,0 +1,6 @@ +[id="ref-controller-inv-variable-management"] + += Variable Management for Inventory + +Keep variable data with the hosts and groups definitions (see the inventory editor), rather than using `group_vars/` and `host_vars/`. +If you use dynamic inventory sources, {ControllerName} can synchronize such variables with the database as long as the *Overwrite Variables* option is not set. 
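To complement the debugging guidance above, you can inspect what a constructed source produces locally before syncing it in {ControllerName}. This is a sketch under assumptions: `hosts.yml` and `constructed.yml` are hypothetical local files holding your inventory content and the `source_vars` shown earlier, and the constructed plugin only sees hosts from sources loaded before it, so the order of the `-i` options matters.

[options="nowrap"]
----
# Render the full constructed inventory, including composed groups:
$ ansible-inventory -i hosts.yml -i constructed.yml --list --yaml

# Inspect the composed variables of a single host:
$ ansible-inventory -i hosts.yml -i constructed.yml --host host2
----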
diff --git a/downstream/modules/platform/ref-controller-inventory-counts-json.adoc b/downstream/modules/platform/ref-controller-inventory-counts-json.adoc index addb2ae953..4cfbc36a76 100644 --- a/downstream/modules/platform/ref-controller-inventory-counts-json.adoc +++ b/downstream/modules/platform/ref-controller-inventory-counts-json.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-inventory-counts-json"] = inventory_counts.json diff --git a/downstream/modules/platform/ref-controller-inventory-import.adoc b/downstream/modules/platform/ref-controller-inventory-import.adoc index fa0870798a..75309c9bab 100644 --- a/downstream/modules/platform/ref-controller-inventory-import.adoc +++ b/downstream/modules/platform/ref-controller-inventory-import.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-inventory-import"] = Inventory Import diff --git a/downstream/modules/platform/ref-controller-inventory-plugins.adoc b/downstream/modules/platform/ref-controller-inventory-plugins.adoc index a4dd59ad2a..e04abd2a52 100644 --- a/downstream/modules/platform/ref-controller-inventory-plugins.adoc +++ b/downstream/modules/platform/ref-controller-inventory-plugins.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-inventory-plugins"] = Inventory Plugins @@ -9,12 +11,14 @@ In {ControllerName} v4.4, you can provide the inventory plugin configuration directly * xref:proc-controller-inv-source-gce[Google Compute Engine] * xref:proc-controller-azure-resource-manager[{Azure} Resource Manager] * xref:proc-controller-inv-source-vm-vcenter[VMware vCenter] +* xref:proc-controller-inv-source-vm-esxi[VMware ESXi] * xref:proc-controller-inv-source-satellite[Red Hat Satellite 6] * xref:proc-controller-inv-source-insights[Red Hat Insights] * xref:proc-controller-inv-source-openstack[OpenStack] * xref:proc-controller-inv-source-rh-virt[Red Hat Virtualization] * xref:proc-controller-inv-source-aap[{PlatformName}] * xref:proc-controller-inv-source-terraform[Terraform State] +* xref:proc-controller-inv-source-open-shift-virt[OpenShift Virtualization] Newly created configurations for inventory sources contain the default plugin configuration values. If you want your newly created inventory sources to match the output of a legacy source, you must apply a specific set of configuration values for that source. @@ -24,4 +28,4 @@ format. For more information about sources and their templates, see xref:controller-inventory-templates[Supported inventory plugin templates]. `source_vars` that contain `plugin: foo.bar.baz` as a top-level key are replaced with the fully-qualified inventory plugin name at runtime based on the `InventorySource` source. -For example, if you select ec2 for the `InventorySource` then, at run-time, plugin is set to `amazon.aws.aws_ec2`. \ No newline at end of file +For example, if you select ec2 for the `InventorySource` then, at run-time, plugin is set to `amazon.aws.aws_ec2`.
diff --git a/downstream/modules/platform/ref-controller-inventory-sources.adoc b/downstream/modules/platform/ref-controller-inventory-sources.adoc index 4b0444776e..bf47a150ff 100644 --- a/downstream/modules/platform/ref-controller-inventory-sources.adoc +++ b/downstream/modules/platform/ref-controller-inventory-sources.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-inventory-sources"] = Inventory sources @@ -9,6 +11,7 @@ Choose a source which matches the inventory type against which a host can be ent * xref:proc-controller-inv-source-gce[Google Compute Engine] * xref:proc-controller-azure-resource-manager[{Azure} Resource Manager] * xref:proc-controller-inv-source-vm-vcenter[VMware vCenter] +* xref:proc-controller-inv-source-vm-esxi[VMware ESXi] * xref:proc-controller-inv-source-satellite[Red Hat Satellite 6] * xref:proc-controller-inv-source-insights[Red Hat Insights] * xref:proc-controller-inv-source-openstack[OpenStack] diff --git a/downstream/modules/platform/ref-controller-inventory-sync-details.adoc b/downstream/modules/platform/ref-controller-inventory-sync-details.adoc index ae898cc33e..66fd455ac5 100644 --- a/downstream/modules/platform/ref-controller-inventory-sync-details.adoc +++ b/downstream/modules/platform/ref-controller-inventory-sync-details.adoc @@ -1,4 +1,6 @@ -[id="controller-inventory-sync-details"] +:_mod-docs-content-type: REFERENCE + +[id="controller-inventory-sync-details_{context}"] = Inventory sync details diff --git a/downstream/modules/platform/ref-controller-isolation-functionality.adoc b/downstream/modules/platform/ref-controller-isolation-functionality.adoc index 7e0b554ab6..36546dcdaf 100644 --- a/downstream/modules/platform/ref-controller-isolation-functionality.adoc +++ b/downstream/modules/platform/ref-controller-isolation-functionality.adoc @@ -1,4 +1,6 @@ -[id="ref-controller-isolation-functionality"] +:_mod-docs-content-type: REFERENCE + +[id="ref-controller-isolation-functionality_{context}"] = Isolation functionality and variables @@ -9,7 +11,7 @@ If you need to expose additional directories, you must customize your playbook r To configure job isolation, you can set variables. By default, {ControllerName} uses the system's `tmp` directory (`/tmp` by default) as its staging area. -This can be changed in the *Job Execution Path* field of the *Jobs settings* page, or in the REST API at `/api/v2/settings/jobs`: +You can change this in the *Job Execution Path* field of the *Jobs settings* page, or in the REST API at `/api/v2/settings/jobs`: [options="nowrap" subs="+attributes"] ---- @@ -25,9 +27,10 @@ AWX_ISOLATION_SHOW_PATHS = ['/list/of/', '/paths'] [NOTE] ==== -If your playbooks need to use keys or settings defined in `AWX_ISOLATION_SHOW_PATHS`, then add this file to `/var/lib/awx/.ssh`. +* If a path to a specific file is entered, then the entire directory containing that file will be mounted inside the {ExecEnvShort}. +* If your playbooks need to use keys or settings defined in `AWX_ISOLATION_SHOW_PATHS`, then add this file to `/var/lib/awx/.ssh`. 
==== The fields described here can be found on the *Jobs settings* page: -image:configure-tower-jobs-isolated-jobs-fields.png[image] \ No newline at end of file +image::job-settings-full.png[Jobs settings options] diff --git a/downstream/modules/platform/ref-controller-job-counts-json.adoc b/downstream/modules/platform/ref-controller-job-counts-json.adoc index 9e919a49a1..ad8a48636e 100644 --- a/downstream/modules/platform/ref-controller-job-counts-json.adoc +++ b/downstream/modules/platform/ref-controller-job-counts-json.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-job-counts-json"] = job_counts.json diff --git a/downstream/modules/platform/ref-controller-job-event-schema.adoc b/downstream/modules/platform/ref-controller-job-event-schema.adoc index 50894e895e..2122e68353 100644 --- a/downstream/modules/platform/ref-controller-job-event-schema.adoc +++ b/downstream/modules/platform/ref-controller-job-event-schema.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-job-event-schema"] This logger reflects the data being saved into job events, except when they would otherwise conflict with expected standard fields from the logger, in which case the fields are nested. diff --git a/downstream/modules/platform/ref-controller-job-instance-counts-json.adoc b/downstream/modules/platform/ref-controller-job-instance-counts-json.adoc index 4a437e2553..2720880172 100644 --- a/downstream/modules/platform/ref-controller-job-instance-counts-json.adoc +++ b/downstream/modules/platform/ref-controller-job-instance-counts-json.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-job-instance-counts-json"] = job_instance_counts.json diff --git a/downstream/modules/platform/ref-controller-job-runtime-behavior.adoc b/downstream/modules/platform/ref-controller-job-runtime-behavior.adoc index 85658e7b41..0e88bf7a21 100644 --- a/downstream/modules/platform/ref-controller-job-runtime-behavior.adoc +++ b/downstream/modules/platform/ref-controller-job-runtime-behavior.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-job-runtime-behavior"] = Job runtime behavior @@ -5,12 +7,12 @@ When you run a job associated with an instance group, note the following behaviors: * If you divide a cluster into separate instance groups, then the behavior is similar to the cluster as a whole. -If you assign two instances to a group then either one is as likely to receive a job as any other in the same group. +* If you assign two instances to a group then either one is as likely to receive a job as any other in the same group. * As {ControllerName} instances are brought online, it effectively expands the work capacity of the system. -If you place those instances into instance groups, then they also expand that group's capacity. -If an instance is performing work and it is a member of multiple groups, then capacity is reduced from all groups for which it is a member. -De-provisioning an instance removes capacity from the cluster wherever that instance was assigned. -For more information, see the xref:controller-deprovision-instance-group[Deprovisioning instance groups] section for more detail. +* If you place those instances into instance groups, then they also expand that group's capacity. +* If an instance is performing work and it is a member of multiple groups, then capacity is reduced from all groups for which it is a member. 
+* De-provisioning an instance removes capacity from the cluster wherever that instance was assigned. +For more information, see the link:{URLControllerUserGuide}/controller-instance-and-container-groups#controller-deprovision-instance-group[Deprovisioning instance groups] section. [NOTE] ==== diff --git a/downstream/modules/platform/ref-controller-job-slice-execution-behavior.adoc b/downstream/modules/platform/ref-controller-job-slice-execution-behavior.adoc index 7533d4a4ce..017b70542c 100644 --- a/downstream/modules/platform/ref-controller-job-slice-execution-behavior.adoc +++ b/downstream/modules/platform/ref-controller-job-slice-execution-behavior.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-job-slice-execution-behavior"] = Job slice execution behavior @@ -5,16 +7,18 @@ When jobs are sliced, they can run on any node. Insufficient capacity in the system can cause some slices to run at a different time. When slice jobs are running, job details display the workflow and job slices currently running, and a link to view their details individually. -image::ug-sliced-job-shown-jobs-output-view.png[Sliced jobs output view] +//Image removed at AAP-45083 as it's not possible using current instances to recreate this information +//image::ug-sliced-job-shown-jobs-output-view.png[Sliced jobs output view] By default, job templates are not configured to execute simultaneously (you must check `allow_simultaneous` in the API or *Concurrent jobs* in the UI). Slicing overrides this behavior and implies `allow_simultaneous` even if that setting is clear. -See xref:controller-job-templates[Job templates] for information about how to specify this, and the number of job slices on your job template configuration. +See link:{URLControllerUserGuide}/controller-job-templates[Job templates] for information about how to specify this, and the number of job slices on your job template configuration. -The xref:controller-job-templates[Job templates] section provides additional detail on performing the following operations in the UI: +The link:{URLControllerUserGuide}/controller-job-templates[Job templates] section provides additional detail on performing the following operations in the UI: * Launch workflow jobs with a job template that has a slice number greater than one. * Cancel the whole workflow or individual jobs after launching a slice job template. * Relaunch the whole workflow or individual jobs after slice jobs finish running. * View the details about the workflow and slice jobs after launching a job template. -* Search slice jobs specifically after you create them, according to the next section, "Searching job slices"). +* Search slice jobs specifically after you create them, as described in the +link:{URLControllerUserGuide}/controller-job-slicing#controller-search-job-slices[Searching job slices] section.
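For API-driven setups, the concurrency and slicing settings discussed above can be toggled on the job template itself. The following curl sketch uses illustrative credentials, host, and template ID; `allow_simultaneous` is named in the text above, while `job_slice_count` is assumed to be the corresponding slice-count field, so confirm both against your `/api/v2/job_templates/` schema before relying on them.

[options="nowrap"]
----
curl -k -u admin:awxsecret -X PATCH \
  -H 'Content-Type: application/json' \
  -d '{"allow_simultaneous": true, "job_slice_count": 3}' \
  https://192.168.42.100/api/v2/job_templates/1/
----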
diff --git a/downstream/modules/platform/ref-controller-job-status-changes.adoc b/downstream/modules/platform/ref-controller-job-status-changes.adoc index f228c21690..03d76f4650 100644 --- a/downstream/modules/platform/ref-controller-job-status-changes.adoc +++ b/downstream/modules/platform/ref-controller-job-status-changes.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-job-status-changes"] = Job status changes diff --git a/downstream/modules/platform/ref-controller-job-template-variables.adoc b/downstream/modules/platform/ref-controller-job-template-variables.adoc index ce4724d424..5e4d9814b2 100644 --- a/downstream/modules/platform/ref-controller-job-template-variables.adoc +++ b/downstream/modules/platform/ref-controller-job-template-variables.adoc @@ -1,10 +1,16 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-job-template-variables"] = Variables in job templates -Along with any extra variables set in the job template and survey, {ControllerName} automatically adds the following variables to the job environment. -Also note, awx_``* variables are defined by the system and cannot be overridden. -Variables about the job context, such as ``awx_job_template_name are not affected if they are set in `extra_vars`. +Along with any extra variables set in the job template and survey, {ControllerName} automatically adds the following variables to the job environment. + +[NOTE] +==== +* `awx_*` variables are defined by the system and cannot be overridden. +* Variables about the job context, such as `awx_job_template_name`, are not affected if they are set in `extra_vars`. +==== * `awx_job_id`: The job ID for this job run. * `awx_job_launch_type`: The description to indicate how the job was started: @@ -23,10 +29,10 @@ Variables about the job context, such as ``awx_job_template_name are not affecte * `awx_project_scm_branch`: The configured default project SCM branch for the project the job template uses. * `awx_job_scm_branch`: If the SCM Branch is overwritten by the job, the value is shown here. * `awx_user_email`: The user email of the {ControllerName} user that started this job. This is not available for callback or scheduled jobs. -* `awx_user_first_name`: The user's first name of the controller user that started this job. This is not available for callback or scheduled jobs. -* `awx_user_id`: The user ID of the controller user that started this job. This is not available for callback or scheduled jobs. -* `awx_user_last_name`: The user's last name of the controller user that started this job. This is not available for callback or scheduled jobs. -* `awx_user_name`: The user name of the controller user that started this job. This is not available for callback or scheduled jobs. +* `awx_user_first_name`: The first name of the {ControllerName} user that started this job. This is not available for callback or scheduled jobs. +* `awx_user_id`: The user ID of the {ControllerName} user that started this job. This is not available for callback or scheduled jobs. +* `awx_user_last_name`: The last name of the {ControllerName} user that started this job. This is not available for callback or scheduled jobs. +* `awx_user_name`: The user name of the {ControllerName} user that started this job. This is not available for callback or scheduled jobs. * `awx_schedule_id`: If applicable, the ID of the schedule that launched this job. * `awx_schedule_name`: If applicable, the name of the schedule that launched this job.
* `awx_workflow_job_id`: If applicable, the ID of the workflow job that launched this job. @@ -35,6 +41,3 @@ Variables about the job context, such as ``awx_job_template_name are not affecte * `awx_inventory_name`: If applicable, the name of the inventory this job uses. For compatibility, all variables are also given an "awx" prefix, for example, `awx_job_id`. - - - diff --git a/downstream/modules/platform/ref-controller-jobs-run-by-organization.adoc b/downstream/modules/platform/ref-controller-jobs-run-by-organization.adoc index 8a7008db67..8bd5a2ef71 100644 --- a/downstream/modules/platform/ref-controller-jobs-run-by-organization.adoc +++ b/downstream/modules/platform/ref-controller-jobs-run-by-organization.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-jobs-run-by-organization"] = Job runs by organization diff --git a/downstream/modules/platform/ref-controller-kubernetes-API-failure.adoc b/downstream/modules/platform/ref-controller-kubernetes-API-failure.adoc index e2c3e43ca2..ec782a14d6 100644 --- a/downstream/modules/platform/ref-controller-kubernetes-API-failure.adoc +++ b/downstream/modules/platform/ref-controller-kubernetes-API-failure.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-kubernetes-API-failure"] = Kubernetes API failure conditions diff --git a/downstream/modules/platform/ref-controller-large-host-counts.adoc b/downstream/modules/platform/ref-controller-large-host-counts.adoc new file mode 100644 index 0000000000..1a8d42b5dd --- /dev/null +++ b/downstream/modules/platform/ref-controller-large-host-counts.adoc @@ -0,0 +1,6 @@ +[id="ref-controller-large-host-counts"] + += Larger Host Counts + +Set `forks` on a job template to larger values to increase parallelism of execution runs. +//For more information about tuning Ansible, see link:https://www.ansible.com/blog/ansible-performance-tuning[the Ansible blog]. \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-launch-jobs-with-curl.adoc b/downstream/modules/platform/ref-controller-launch-jobs-with-curl.adoc index 42880d6004..d9a82adf7a 100644 --- a/downstream/modules/platform/ref-controller-launch-jobs-with-curl.adoc +++ b/downstream/modules/platform/ref-controller-launch-jobs-with-curl.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-launch-jobs-with-curl"] = Launching Jobs with curl @@ -12,7 +14,7 @@ Assuming that your Job Template ID is '1', your controller IP is 192.168.42.100, ---- curl -f -k -H 'Content-Type: application/json' -XPOST \ --user admin:awxsecret \ - ht p://192.168.42.100/api/v2/job_templates/1/launch/ + https://192.168.42.100/api/v2/job_templates/1/launch/ ---- This returns a JSON object that you can parse and use to extract the 'id' field, which is the ID of the newly created job. You can also pass extra variables to the Job Template call, as in the following example: ---- curl -f -k -H 'Content-Type: application/json' -XPOST \ -d '{"extra_vars": "{\"foo\": \"bar\"}"}' \ - --user admin:awxsecret http://192.168.42.100/api/v2/job_templates/1/launch/ + --user admin:awxsecret https://192.168.42.100/api/v2/job_templates/1/launch/ ---- //You can view the live API documentation by logging into http://192.168.42.100/api/ and browsing around to the various objects available.
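Building on the examples above, the following sketch captures the `id` field from the launch response and polls the new job's status. It assumes `jq` is installed and that the job detail endpoint is `/api/v2/jobs/<id>/`; adjust both for your environment.

[options="nowrap"]
----
# Launch the job template and extract the new job's ID:
JOB_ID=$(curl -sf -k -H 'Content-Type: application/json' -XPOST \
  --user admin:awxsecret \
  https://192.168.42.100/api/v2/job_templates/1/launch/ | jq -r '.id')

# Poll the job's status (for example: pending, waiting, running, successful, failed):
curl -sf -k --user admin:awxsecret \
  https://192.168.42.100/api/v2/jobs/${JOB_ID}/ | jq -r '.status'
----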
diff --git a/downstream/modules/platform/ref-controller-licenses.adoc b/downstream/modules/platform/ref-controller-licenses.adoc index 46dc100524..a20ca0d7bc 100644 --- a/downstream/modules/platform/ref-controller-licenses.adoc +++ b/downstream/modules/platform/ref-controller-licenses.adoc @@ -1,8 +1,10 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-licenses"] = Component licenses -To view the license information for the components included in {ControllerName}, see `/usr/share/doc/automation-controller-<version>/README`. +To view the license information for the components included in {PlatformNameShort}, see `/usr/share/doc/automation-controller-<version>/README`. where `<version>` refers to the version of {ControllerName} you have installed. diff --git a/downstream/modules/platform/ref-controller-list-ansible-variables.adoc b/downstream/modules/platform/ref-controller-list-ansible-variables.adoc index 7f41f0bdf7..94866b8409 100644 --- a/downstream/modules/platform/ref-controller-list-ansible-variables.adoc +++ b/downstream/modules/platform/ref-controller-list-ansible-variables.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-list-ansible-variables"] = View a listing of all ansible_ variables diff --git a/downstream/modules/platform/ref-controller-locate-ansible-config-file.adoc b/downstream/modules/platform/ref-controller-locate-ansible-config-file.adoc index 0297a68881..a45f2c9dcc 100644 --- a/downstream/modules/platform/ref-controller-locate-ansible-config-file.adoc +++ b/downstream/modules/platform/ref-controller-locate-ansible-config-file.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-locate-ansible-config-file"] = Locate and configure the Ansible configuration file diff --git a/downstream/modules/platform/ref-controller-log-aggregators.adoc b/downstream/modules/platform/ref-controller-log-aggregators.adoc index dc82015a04..f40c63560d 100644 --- a/downstream/modules/platform/ref-controller-log-aggregators.adoc +++ b/downstream/modules/platform/ref-controller-log-aggregators.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-log-aggregators"] = Logging Aggregator Services diff --git a/downstream/modules/platform/ref-controller-log-message-schema.adoc b/downstream/modules/platform/ref-controller-log-message-schema.adoc index 586129445b..2e4eb17816 100644 --- a/downstream/modules/platform/ref-controller-log-message-schema.adoc +++ b/downstream/modules/platform/ref-controller-log-message-schema.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-log-message-schema"] = Log message schema diff --git a/downstream/modules/platform/ref-controller-loggers.adoc b/downstream/modules/platform/ref-controller-loggers.adoc index 74c249af3d..db6536d1c1 100644 --- a/downstream/modules/platform/ref-controller-loggers.adoc +++ b/downstream/modules/platform/ref-controller-loggers.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-loggers"] = Loggers diff --git a/downstream/modules/platform/ref-controller-logging-elastic-stack.adoc b/downstream/modules/platform/ref-controller-logging-elastic-stack.adoc index a00d43edbf..8bbadee1a7 100644 --- a/downstream/modules/platform/ref-controller-logging-elastic-stack.adoc +++ b/downstream/modules/platform/ref-controller-logging-elastic-stack.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-logging-elastic-stack"] = Elastic stack (formerly ELK stack) diff --git
a/downstream/modules/platform/ref-controller-logging-loggly.adoc b/downstream/modules/platform/ref-controller-logging-loggly.adoc index c4cc820837..a0da785c56 100644 --- a/downstream/modules/platform/ref-controller-logging-loggly.adoc +++ b/downstream/modules/platform/ref-controller-logging-loggly.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-logging-loggly"] = Loggly diff --git a/downstream/modules/platform/ref-controller-logging-settings.adoc b/downstream/modules/platform/ref-controller-logging-settings.adoc new file mode 100644 index 0000000000..d190b29997 --- /dev/null +++ b/downstream/modules/platform/ref-controller-logging-settings.adoc @@ -0,0 +1,7 @@ +:_mod-docs-content-type: REFERENCE + +[id="ref-controller-logging-settings"] + += Logging and aggregation settings + +For information about these settings, see xref:proc-controller-set-up-logging[Setting up logging]. diff --git a/downstream/modules/platform/ref-controller-logging-splunk.adoc b/downstream/modules/platform/ref-controller-logging-splunk.adoc index 373459953b..b350828114 100644 --- a/downstream/modules/platform/ref-controller-logging-splunk.adoc +++ b/downstream/modules/platform/ref-controller-logging-splunk.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-logging-splunk"] = Splunk @@ -34,6 +36,6 @@ The Splunk HTTP Event Collector listens on port 8088 by default, so you must pro Typical values are shown in the following example: -image:logging-splunk-tower-example.png[Splunk logging example] +image:logging-splunk-controller-example.png[Splunk logging example] For more information on configuring the HTTP Event Collector, see the link:https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector[Splunk documentation]. 
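As a hedged aside to the Splunk section above, you can confirm that an HTTP Event Collector accepts events before pointing log aggregation at it. The hostname and the `HEC_TOKEN` variable below are placeholders; `/services/collector/event` is the standard HEC path on the default port 8088:

----
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk ${HEC_TOKEN}" \
  -d '{"event": "connectivity test from automation controller"}'
----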
\ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-logging-sumologic.adoc b/downstream/modules/platform/ref-controller-logging-sumologic.adoc index 880358b987..aa85fb6daa 100644 --- a/downstream/modules/platform/ref-controller-logging-sumologic.adoc +++ b/downstream/modules/platform/ref-controller-logging-sumologic.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-logging-sumologic"] = Sumologic diff --git a/downstream/modules/platform/ref-controller-logs.adoc b/downstream/modules/platform/ref-controller-logs.adoc index b881ed79d6..36b34a2048 100644 --- a/downstream/modules/platform/ref-controller-logs.adoc +++ b/downstream/modules/platform/ref-controller-logs.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-logs"] = {ControllerNameStart} logs diff --git a/downstream/modules/platform/ref-controller-managed-nodes.adoc b/downstream/modules/platform/ref-controller-managed-nodes.adoc index 9096094247..b287395fa7 100644 --- a/downstream/modules/platform/ref-controller-managed-nodes.adoc +++ b/downstream/modules/platform/ref-controller-managed-nodes.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-managed-nodes"] = Managed nodes diff --git a/downstream/modules/platform/ref-controller-manifest-json.adoc b/downstream/modules/platform/ref-controller-manifest-json.adoc index 404873f940..fb8e9626f1 100644 --- a/downstream/modules/platform/ref-controller-manifest-json.adoc +++ b/downstream/modules/platform/ref-controller-manifest-json.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-manifest-json"] = manifest.json diff --git a/downstream/modules/platform/ref-controller-memory-relative-capacity.adoc b/downstream/modules/platform/ref-controller-memory-relative-capacity.adoc index edd982c1b2..1087960084 100644 --- a/downstream/modules/platform/ref-controller-memory-relative-capacity.adoc +++ b/downstream/modules/platform/ref-controller-memory-relative-capacity.adoc @@ -1,9 +1,11 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-memory-relative-capacity"] = Memory relative capacity `mem_capacity` is calculated relative to the amount of memory needed per fork. -Taking into account the overhead for internal components, this is approximately 100MB per fork. +Taking into account the overhead for internal components, this is about 100MB per fork. When considering the amount of memory available to Ansible jobs, the capacity algorithm reserves 2GB of memory to account for the presence of other services. The algorithm formula for this is: @@ -18,4 +20,4 @@ The following is an example: ---- A system with 4GB of memory is capable of running 20 forks. -The value `mem_per_fork` is controlled by setting the value of `SYSTEM_TASK_FORKS_MEM`, which defaults to 100. \ No newline at end of file +The value `mem_per_fork` is controlled by setting the value of `SYSTEM_TASK_FORKS_MEM`, which defaults to 100. 
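As a quick check of the capacity arithmetic described above, with 2GB reserved for other services and `mem_per_fork` at its default of 100MB, the 4GB example works out to 20 forks:

----
$ echo $(( (4096 - 2048) / 100 ))
20
----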
diff --git a/downstream/modules/platform/ref-controller-metadata-credential-input.adoc b/downstream/modules/platform/ref-controller-metadata-credential-input.adoc index 34fbc041e5..3ad1b88744 100644 --- a/downstream/modules/platform/ref-controller-metadata-credential-input.adoc +++ b/downstream/modules/platform/ref-controller-metadata-credential-input.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-metadata-credential-input"] = Metadata for credential input sources diff --git a/downstream/modules/platform/ref-controller-metrics-monitoring.adoc b/downstream/modules/platform/ref-controller-metrics-monitoring.adoc index 5f767c46f1..ad622018bf 100644 --- a/downstream/modules/platform/ref-controller-metrics-monitoring.adoc +++ b/downstream/modules/platform/ref-controller-metrics-monitoring.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-metrics-monitoring"] = Metrics for monitoring {ControllerName} application diff --git a/downstream/modules/platform/ref-controller-microsoft-azure.adoc b/downstream/modules/platform/ref-controller-microsoft-azure.adoc index 0d796e5dfa..4e026cbd97 100644 --- a/downstream/modules/platform/ref-controller-microsoft-azure.adoc +++ b/downstream/modules/platform/ref-controller-microsoft-azure.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-microsoft-azure"] = Microsoft Azure Resource Manager diff --git a/downstream/modules/platform/ref-controller-multiple-connection-protocols.adoc b/downstream/modules/platform/ref-controller-multiple-connection-protocols.adoc new file mode 100644 index 0000000000..187deaf407 --- /dev/null +++ b/downstream/modules/platform/ref-controller-multiple-connection-protocols.adoc @@ -0,0 +1,28 @@ +[id="ref-controller-multiple-connection-protocols"] + += Multiple communication protocols + +Because network modules execute on the control node instead of on the managed nodes, they can support multiple communication protocols. +The communication protocols (XML over SSH, CLI over SSH, or API over HTTPS) selected for each network module depend on the platform and the purpose of the module. +Some network modules support only one protocol, while others offer a choice. + +The most common protocol is CLI over SSH. You set the communication protocol with the `ansible_connection` variable: + + +[cols="40%,20%,20%,20%",options="header",] +|==== +| Value of `ansible_connection` | Protocol | Requires | Persistent? + +| `ansible.netcommon.network_cli` | CLI over SSH | network_os setting | yes + +| `ansible.netcommon.netconf` | XML over SSH | network_os setting | yes + +| `ansible.netcommon.httpapi` | API over HTTP/HTTPS | network_os setting | yes + +| `local` | depends on provider | provider setting | no +|==== + +`ansible_connection: local` is deprecated. +Use one of the persistent connection types listed above instead. +With persistent connections, you can define the hosts and credentials only once, rather than in every task. +You must also set the `network_os` variable for the specific network platform you are communicating with.
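As a hedged illustration of the persistent connection types above, the following ad-hoc command runs a CLI command over `network_cli`. The host, credentials, and platform value are assumptions, and it presumes the `ansible.netcommon` and `cisco.ios` collections are installed; in an inventory, the same settings are expressed as the `ansible_connection` and `ansible_network_os` variables:

----
$ ansible switch1.example.com -i switch1.example.com, \
  -c ansible.netcommon.network_cli -e ansible_network_os=cisco.ios.ios \
  -u admin -k -m ansible.netcommon.cli_command -a "command='show version'"
----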
\ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-node-counting.adoc b/downstream/modules/platform/ref-controller-node-counting.adoc index 011985ee69..d79adcd124 100644 --- a/downstream/modules/platform/ref-controller-node-counting.adoc +++ b/downstream/modules/platform/ref-controller-node-counting.adoc @@ -1,12 +1,14 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-node-counting"] = Node counting in licenses -The {ControllerName} license defines the number of Managed Nodes that can be managed as part of a {PlatformName} subscription. +The {PlatformNameShort} license defines the number of Managed Nodes that can be managed as part of your subscription. A typical license says "License Count: 500", which sets the maximum number of Managed Nodes at 500. -For more information about managed node requirements for licensing, see https://access.redhat.com/articles/3331481. +For more information about managed node requirements for licensing, see link:https://access.redhat.com/articles/3331481[How are "managed nodes" defined as part of the {PlatformName} offering]. [NOTE] ==== diff --git a/downstream/modules/platform/ref-controller-node-types.adoc b/downstream/modules/platform/ref-controller-node-types.adoc index c0b8571428..d17c29f72f 100644 --- a/downstream/modules/platform/ref-controller-node-types.adoc +++ b/downstream/modules/platform/ref-controller-node-types.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-node-types"] = Types of nodes in {ControllerName} diff --git a/downstream/modules/platform/ref-controller-notification-email.adoc b/downstream/modules/platform/ref-controller-notification-email.adoc index b1612e191b..7b6c953303 100644 --- a/downstream/modules/platform/ref-controller-notification-email.adoc +++ b/downstream/modules/platform/ref-controller-notification-email.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-notification-email"] = Email diff --git a/downstream/modules/platform/ref-controller-notification-grafana.adoc b/downstream/modules/platform/ref-controller-notification-grafana.adoc index 03e5b6b673..5a81a723fc 100644 --- a/downstream/modules/platform/ref-controller-notification-grafana.adoc +++ b/downstream/modules/platform/ref-controller-notification-grafana.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-notification-grafana"] = Grafana diff --git a/downstream/modules/platform/ref-controller-notification-irc.adoc b/downstream/modules/platform/ref-controller-notification-irc.adoc index 1b776b8181..dc98e77208 100644 --- a/downstream/modules/platform/ref-controller-notification-irc.adoc +++ b/downstream/modules/platform/ref-controller-notification-irc.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-notification-irc"] = IRC @@ -11,9 +13,9 @@ The failure scenario is reserved specifically for connectivity. Provide the following details to set up an IRC notification: * Optional: *IRC server password*: IRC servers can require a password to connect. If the server does not require one, leave it blank. *IRC Server Port*: The IRC server port. -*IRC Server Address*: The host name or address of the IRC server. +*IRC Server Address*: The hostname or address of the IRC server. *IRC Nick*: The bot's nickname once it connects to the server. *Destination Channels or Users*: A list of users or channels to which the notification is sent.
* Optional: *Disable SSL verification*: Check if you want the bot to use SSL when connecting. diff --git a/downstream/modules/platform/ref-controller-notification-mattermost.adoc b/downstream/modules/platform/ref-controller-notification-mattermost.adoc index c7464c0114..d4182dae68 100644 --- a/downstream/modules/platform/ref-controller-notification-mattermost.adoc +++ b/downstream/modules/platform/ref-controller-notification-mattermost.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-notification-mattermost"] = Mattermost diff --git a/downstream/modules/platform/ref-controller-notification-pager-duty.adoc b/downstream/modules/platform/ref-controller-notification-pager-duty.adoc index 9d4a763131..34f6b39fc3 100644 --- a/downstream/modules/platform/ref-controller-notification-pager-duty.adoc +++ b/downstream/modules/platform/ref-controller-notification-pager-duty.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-notification-pagerduty"] = Pagerduty diff --git a/downstream/modules/platform/ref-controller-notification-rocketchat.adoc b/downstream/modules/platform/ref-controller-notification-rocketchat.adoc index 6401cbac9c..42f1f68ff0 100644 --- a/downstream/modules/platform/ref-controller-notification-rocketchat.adoc +++ b/downstream/modules/platform/ref-controller-notification-rocketchat.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-notification-rocketchat"] = Rocket.Chat diff --git a/downstream/modules/platform/ref-controller-notification-slack.adoc b/downstream/modules/platform/ref-controller-notification-slack.adoc index 60baef9a2a..ca4d78f298 100644 --- a/downstream/modules/platform/ref-controller-notification-slack.adoc +++ b/downstream/modules/platform/ref-controller-notification-slack.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-notification-slack"] = Slack @@ -19,7 +21,7 @@ To respond to or start a thread to a specific message add the parent message Id Acceptable colors are hex color code, for example: #3af or #789abc. When you have a bot or app set up, you must complete the following steps: -. Navigate to *Apps*. +. Go to *Apps*. . Click the newly-created app and then go to *Add features and functionality*, which enables you to configure incoming webhooks, bots, and permissions, as well as *Install your app to your workspace*. 
image::ug-notification-template-slack.png[Notification template slack] diff --git a/downstream/modules/platform/ref-controller-notification-twilio.adoc b/downstream/modules/platform/ref-controller-notification-twilio.adoc index 0bf29700f1..f38df10cb2 100644 --- a/downstream/modules/platform/ref-controller-notification-twilio.adoc +++ b/downstream/modules/platform/ref-controller-notification-twilio.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-notification-twilio"] = Twilio diff --git a/downstream/modules/platform/ref-controller-notification-webhook-payloads.adoc b/downstream/modules/platform/ref-controller-notification-webhook-payloads.adoc index 04be9d1cb0..c5a3e3c779 100644 --- a/downstream/modules/platform/ref-controller-notification-webhook-payloads.adoc +++ b/downstream/modules/platform/ref-controller-notification-webhook-payloads.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-notification-webhook-payloads"] = Webhook payloads diff --git a/downstream/modules/platform/ref-controller-notification-webhook.adoc b/downstream/modules/platform/ref-controller-notification-webhook.adoc index dc9a13072f..1ce9d36402 100644 --- a/downstream/modules/platform/ref-controller-notification-webhook.adoc +++ b/downstream/modules/platform/ref-controller-notification-webhook.adoc @@ -1,16 +1,18 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-notification-webhook"] = Webhook The webhook notification type provides a simple interface for sending `POSTs` to a predefined web service. -{ControllerNameStart} `POSTs` to this address using application and JSON content type with the data payload containing the relevant details in JSON format. +{ControllerNameStart} `POSTs` to this address by using the `application/json` content type, with the data payload containing the relevant details in JSON format. Some web service APIs expect HTTP requests to be in a certain format with certain fields. Configure the webhook notification with the following: * Configure the HTTP method, using `POST` or `PUT`. * The body of the outgoing request. -* Configure authentication, using basic auth. +* Configure authentication, using Basic authentication. Provide the following details to set up a webhook notification: diff --git a/downstream/modules/platform/ref-controller-notifications-api.adoc b/downstream/modules/platform/ref-controller-notifications-api.adoc index 40c7ef6aae..be7f673d83 100644 --- a/downstream/modules/platform/ref-controller-notifications-api.adoc +++ b/downstream/modules/platform/ref-controller-notifications-api.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-notifications-api"] = Notifications API diff --git a/downstream/modules/platform/ref-controller-old-job-history.adoc b/downstream/modules/platform/ref-controller-old-job-history.adoc index 92ac387ae8..7aa3615faa 100644 --- a/downstream/modules/platform/ref-controller-old-job-history.adoc +++ b/downstream/modules/platform/ref-controller-old-job-history.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-remove-old-job-history"] = Removing Old Job History @@ -19,4 +21,4 @@ jobs. For more information, see xref:proc-controller-scheduling-deletion[Scheduling deletion].
-You can also set or review notifications associated with this management job in the same way as described in xref:proc-controller-management-notifications[Notifications] for activity stream management jobs, or for more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-notifications[Notifications] in the _{ControllerUG}_. +You can also set or review notifications associated with this management job in the same way as described in xref:proc-controller-management-notifications[Notifications] for activity stream management jobs, or for more information, see link:{URLControllerUserGuide}/controller-notifications[Notifiers] in _{ControllerUG}_. diff --git a/downstream/modules/platform/ref-controller-openstack-cloud.adoc b/downstream/modules/platform/ref-controller-openstack-cloud.adoc index 008a7ce8c0..952c638430 100644 --- a/downstream/modules/platform/ref-controller-openstack-cloud.adoc +++ b/downstream/modules/platform/ref-controller-openstack-cloud.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-openstack-cloud"] = OpenStack diff --git a/downstream/modules/platform/ref-controller-openstack.adoc b/downstream/modules/platform/ref-controller-openstack.adoc index 862d33e34a..bccc2e1f4b 100644 --- a/downstream/modules/platform/ref-controller-openstack.adoc +++ b/downstream/modules/platform/ref-controller-openstack.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-openstack"] = OpenStack diff --git a/downstream/modules/platform/ref-controller-optional-survey-questions.adoc b/downstream/modules/platform/ref-controller-optional-survey-questions.adoc index 6a746c904f..4d431ce3ed 100644 --- a/downstream/modules/platform/ref-controller-optional-survey-questions.adoc +++ b/downstream/modules/platform/ref-controller-optional-survey-questions.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-optional-survey-questions"] = Optional survey questions diff --git a/downstream/modules/platform/ref-controller-org-counts-json.adoc b/downstream/modules/platform/ref-controller-org-counts-json.adoc index f429695363..a2ba59e822 100644 --- a/downstream/modules/platform/ref-controller-org-counts-json.adoc +++ b/downstream/modules/platform/ref-controller-org-counts-json.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-org-counts-json"] = org_counts.json diff --git a/downstream/modules/platform/ref-controller-organization-mapping.adoc b/downstream/modules/platform/ref-controller-organization-mapping.adoc index f6b8092068..64f0ded363 100644 --- a/downstream/modules/platform/ref-controller-organization-mapping.adoc +++ b/downstream/modules/platform/ref-controller-organization-mapping.adoc @@ -1,55 +1,26 @@ +:_mod-docs-content-type: PROCEDURE + [id="ref-controller-organization-mapping"] = Organization mapping -You must control which users are placed into which {ControllerName} organizations based on their username and email address (distinguishing your organization administrators and users from social or enterprise-level authentication accounts). - -Dictionary keys are organization names. -Organizations are created, if not already present, and if the license permits multiple organizations. -Otherwise, the single default organization is used regardless of the key. - -Values are dictionaries defining the options for each organization's membership. 
-For each organization, you can specify which users are automatically users of the organization and also which users can administer the organization. - -*admins*: None, True/False, string or list/tuple of strings: - -* If *None*, organization administrators are not updated. -* If *True*, all users using account authentication are automatically added as administrators of the organization. -* If *False*, no account authentication users are automatically added as administrators of the organization. -* If a string or list of strings, specifies the usernames and emails for users to be added to the organization, strings beginning and ending with `/` are compiled into regular expressions. -The modifiers `i` (case-insensitive) and `m` (multi-line) can be specified after the ending `/`. - -*remove_admins*: True/False. Defaults to *True*: - -* When *True*, a user who does not match is removed from the organization's administrative list. -* *users*: None, True/False, string or list/tuple of strings. The same rules apply as for *admins*. -* *remove_users*: True/False. Defaults to *True*. The same rules apply as for *remove_admins*. - -[literal, options="nowrap" subs="+attributes"] ---- -{ - "Default": { - "users": true - }, - "Test Org": { - "admins": ["admin@example.com"], - "users": true - }, - "Test Org 2": { - "admins": ["admin@example.com", "/^controller-[^@]+?@.*$/i"], - "users": "/^[^@].*?@example\\.com$/" - } -} ---- - -Organization mappings can be specified separately for each account authentication backend. -If defined, these configurations take precedence over the global configuration above. - -[literal, options="nowrap" subs="+attributes"] ---- -SOCIAL_AUTH_GOOGLE_OAUTH2_ORGANIZATION_MAP = {} -SOCIAL_AUTH_GITHUB_ORGANIZATION_MAP = {} -SOCIAL_AUTH_GITHUB_ORG_ORGANIZATION_MAP = {} -SOCIAL_AUTH_GITHUB_TEAM_ORGANIZATION_MAP = {} -SOCIAL_AUTH_SAML_ORGANIZATION_MAP = {} ---- \ No newline at end of file +You can control which users are placed into which {PlatformNameShort} organizations based on attributes such as their username and email address, or based on groups provided from an authenticator. + +When an organization mapping is positively evaluated, the specified organization is created if it does not already exist, provided that the authenticator tied to the map is allowed to create objects. + +.Procedure + +. After configuring the authentication details for your authentication method, select the *Mapping* tab. +. Select *Organization* from the *Add authentication mapping* list. +. Enter a unique rule *Name* to identify the rule. +. Select a *Trigger* from the list. See xref:gw-authenticator-map-triggers[Authenticator map triggers] for more information about map triggers. +. Select *Revoke* to remove the user's access to the selected organization role when the trigger conditions are not matched. +. Select the *Organization* to which matching users are added or blocked. +. Select a *Role* to be applied or removed for matching users (for example, *Organization Admin* or *Organization Member*). +. Click btn:[Next].
+ + [role="_additional-resources"] .Next steps include::snippets/snip-gw-mapping-next-steps.adoc[] + + diff --git a/downstream/modules/platform/ref-controller-organization-notifications.adoc b/downstream/modules/platform/ref-controller-organization-notifications.adoc index ebeeae3994..2f2abdf67b 100644 --- a/downstream/modules/platform/ref-controller-organization-notifications.adoc +++ b/downstream/modules/platform/ref-controller-organization-notifications.adoc @@ -1,14 +1,17 @@ -[id="red-controller-oganization-notifications"] +:_mod-docs-content-type: REFERENCE -= Work with Notifications +[id="ref-controller-organization-notifications"] -Selecting the *Notifications* tab on the Organization details page enables you to review any notification integrations you have set up. += Working with notifiers -image:organizations-notifications-samples-list.png[Notifications] +When {ControllerName} is enabled on the platform, you can review any notifier integrations you have set up and manage their settings within the organization resource. -Use the toggles to enable or disable the notifications to use with your particular organization. -For more information, see xref:controller-enable-disable-notifications[Enable and Disable Notifications]. +.Procedure +. From the navigation panel, select {MenuAMOrganizations}. +. From the *Organizations* list view, select the organization for which you want to manage notifications. +//ddacosta - this might change to Notifiers tab. +. Select the *Notification* tab. +. Use the toggles to enable or disable the notifications to use with your particular organization. For more information, see link:{URLControllerUserGuide}/controller-notifications#controller-enable-disable-notifications[Enable and disable notifications]. +. If no notifiers have been set up, select {MenuAEAdminJobNotifications} from the navigation panel. -If no notifications have been set up, select {MenuAEAdminJobNotifications} from the navigation panel. - -For information on configuring notification types, see xref:controller-notification-types[Notification Types]. +For information about configuring notification types, see link:{URLControllerUserGuide}/controller-notifications#controller-notification-types[Notification types]. diff --git a/downstream/modules/platform/ref-controller-organization-status.adoc b/downstream/modules/platform/ref-controller-organization-status.adoc index 502f7f205b..7261253d66 100644 --- a/downstream/modules/platform/ref-controller-organization-status.adoc +++ b/downstream/modules/platform/ref-controller-organization-status.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-organization-status"] = Organization status diff --git a/downstream/modules/platform/ref-controller-other-search-considerations.adoc b/downstream/modules/platform/ref-controller-other-search-considerations.adoc index 5a5faa3799..ee24d914ca 100644 --- a/downstream/modules/platform/ref-controller-other-search-considerations.adoc +++ b/downstream/modules/platform/ref-controller-other-search-considerations.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-other-search-considerations"] = Other search considerations @@ -11,4 +13,4 @@ to support searching for strings with spaces. For more information, see xref:ref * Currently, the values in the Fields are direct attributes expected to be returned in a *GET* request. Whenever you search against one of the values, {ControllerName} carries out an `__icontains` search.
So, for example, `name:localhost` sends back `+?name__icontains=localhost+`. -{ControllerNameStart} currently performs this search for every Field value, even `id`. \ No newline at end of file +{ControllerNameStart} currently performs this search for every Field value, even `id`. diff --git a/downstream/modules/platform/ref-controller-performance-troubleshooting.adoc b/downstream/modules/platform/ref-controller-performance-troubleshooting.adoc index 6b4ce0b3f8..8d560afdc7 100644 --- a/downstream/modules/platform/ref-controller-performance-troubleshooting.adoc +++ b/downstream/modules/platform/ref-controller-performance-troubleshooting.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-performance-troubleshooting"] = Performance troubleshooting for {ControllerName} @@ -11,14 +13,16 @@ * Job output streams from the execution node where the ansible-playbook is actually run to the associated control node. Then the callback receiver serializes this data and writes it to the database. Relevant settings to observe and tune can be found in xref:ref-controller-settings-job-events[Settings for managing job event processing] and xref:ref-controller-database-settings[PostgreSQL database configuration and maintenance for {ControllerName}]. * In general, to resolve this symptom it is important to observe the CPU and memory use of the control nodes. If CPU or memory use is very high, you can either horizontally scale the control plane by deploying more virtual machines to be control nodes that naturally spreads out work more, or to modify the number of jobs a control node will manage at a time. For more information, see xref:ref-controller-settings-control-execution-nodes[Capacity settings for control and execution nodes] for more information. +* Job output delay can occur on initial job runs that use {ExecEnvShort}s that have not been pulled into the platform. +The output becomes visible after the job run completes. -*What can I do to increase the number of jobs that {ControllerName} can run concurrently?* +*What can you do to increase the number of jobs that {ControllerName} can run concurrently?* * Factors that cause jobs to remain in “pending” state are: ** *Waiting for “dependencies” to finish*: this includes project updates and inventory updates when “update on launch” behavior is enabled. ** *The “allow_simultaneous” setting of the job template*: if multiple jobs of the same job template are in “pending” status, check the “allow_simultaneous” setting of the job template (“Concurrent Jobs” checkbox in the UI). If this is not enabled, only one job from a job template can run at a time. ** *The “forks” value of your job template*: the default value is 5. The amount of capacity required to run the job is roughly the forks value (some small overhead is accounted for). If the forks value is set to a very large number, this will limit what nodes will be able to run it. -** *Lack of either control or execution capacity*: see “awx_instance_remaining_capacity” metric from the application metrics available on /api/v2/metrics. See xref:ref-controller-metrics-monitoring[Metrics for monitoring {ControllerName} application] for more information about how to monitor metrics. See xref:ref-controller-capacity-planning[Capacity planning for deploying {ControllerName}] for information on how to plan your deployment to handle the number of jobs you are interested in. 
+** *Lack of either control or execution capacity*: see “awx_instance_remaining_capacity” metric from the application metrics available on /api/v2/metrics. See xref:ref-controller-metrics-monitoring[Metrics for monitoring {ControllerName} application] for more information about how to check metrics. See xref:ref-controller-capacity-planning[Capacity planning for deploying {ControllerName}] for information about how to plan your deployment to handle the number of jobs you are interested in. *Jobs run more slowly on {ControllerName} than on a local machine.* @@ -27,9 +31,9 @@ * Size of projects can impact how long it takes to start the job, as the project is updated on the control node and transferred to the execution node. Internal cluster routing can impact network performance. For more information, see xref:ref-controller-internal-cluster-routing[Internal cluster routing]. * Container pull settings can impact job start time. The {ExecEnvShort} is a container that is used to run jobs within it. Container pull settings can be set to “Always”, “Never” or “If not present”. If the container is always pulled, this can cause delays. -* Ensure that all cluster nodes, including execution, control, and the database, have been deployed in instances with storage rated to the minimum required IOPS, because the manner in which {ControllerName} runs ansible and caches event data implicates significant disk I/O. For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/platform-system-requirements#red_hat_ansible_automation_platform_system_requirements[Red Hat Ansible Automation Platform system requirements]. +* Ensure that all cluster nodes, including execution, control, and the database, have been deployed in instances with storage rated to the minimum required IOPS, because the manner in which {ControllerName} runs Ansible and caches event data involves significant disk I/O. For more information, see link:{URLPlanningGuide}/platform-system-requirements[System requirements]. *Database storage does not stop growing.* -* {ControllerNameStart} has a management job titled “Cleanup Job Details”. By default, it is set to keep 120 days of data and to run once a week. To reduce the amount of data in the database, you can shorten the retention time. For more information, see xref:proc-controller-remove-old-activity-stream[Removing Old Activity Stream Data]. +* {ControllerNameStart} has a management job titled “Cleanup Job Details”. By default, it is set to keep 120 days of data and to run once a week. To reduce the amount of data in the database, you can shorten the retention time. For more information, see xref:proc-controller-remove-old-activity-stream[Removing old activity stream data]. * Running the cleanup job deletes the data in the database. However, the database must at some point perform its vacuuming operation which reclaims storage. See xref:ref-controller-database-settings[PostgreSQL database configuration and maintenance for {ControllerName}] for more information about database vacuuming.
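To make the capacity check above concrete, the following hedged sketch pulls the remaining-capacity metric from the application metrics endpoint named in that answer. The credentials and hostname are placeholders:

----
$ curl -s -u admin:password https://controller.example.com/api/v2/metrics/ | grep awx_instance_remaining_capacity
----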
diff --git a/downstream/modules/platform/ref-controller-playbook-pending.adoc b/downstream/modules/platform/ref-controller-playbook-pending.adoc index 6594c94bd4..845c1c8ded 100644 --- a/downstream/modules/platform/ref-controller-playbook-pending.adoc +++ b/downstream/modules/platform/ref-controller-playbook-pending.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-playbook-pending"] = Playbook stays in pending diff --git a/downstream/modules/platform/ref-controller-playbook-run-details.adoc b/downstream/modules/platform/ref-controller-playbook-run-details.adoc index ca920627f7..0607d4562d 100644 --- a/downstream/modules/platform/ref-controller-playbook-run-details.adoc +++ b/downstream/modules/platform/ref-controller-playbook-run-details.adoc @@ -1,4 +1,6 @@ -[id="controller-playbook-run-details"] +:_mod-docs-content-type: REFERENCE + +[id="controller-playbook-run-details_{context}"] = Playbook run details @@ -16,14 +18,15 @@ Reasons for playbook runs not being ready include dependencies that are currentl ** *Running*: The playbook run is currently in progress. ** *Successful*: The last playbook run succeeded. ** *Failed*: The last playbook run failed. -* *Job Template*: The name of the job template from which this job launched. +* *Job template*: The name of the job template from which this job launched. * *Inventory*: The inventory selected to run this job against. * *Project*: The name of the project associated with the launched job. * *Project Status*: The status of the project associated with the launched job. * *Playbook*: The playbook used to launch this job. -* *Execution Environment*: The name of the {ExecEnvShort} used in this job. -* *Container Group*: The name of the container group used in this job. +* *Execution environment*: The name of the {ExecEnvShort} used in this job. +//Container group doesn't appear in latest instance: +//* *Container Group*: The name of the container group used in this job. * *Credentials*: The credentials used in this job. -* *Extra Variables*: Any extra variables passed when creating the job template are displayed here. +* *Extra variables*: Any extra variables passed when creating the job template are displayed here. Select one of these items to view the corresponding job templates, projects, and other objects. diff --git a/downstream/modules/platform/ref-controller-playbook-run-search.adoc b/downstream/modules/platform/ref-controller-playbook-run-search.adoc index b17c53c21b..a2d773f0c2 100644 --- a/downstream/modules/platform/ref-controller-playbook-run-search.adoc +++ b/downstream/modules/platform/ref-controller-playbook-run-search.adoc @@ -1,4 +1,6 @@ -[id="controller-playbook-run-search"] +:_mod-docs-content-type: REFERENCE + +[id="controller-playbook-run-search_{context}"] = Search @@ -7,46 +9,48 @@ To filter only certain hosts with a particular status, specify one of the follow ok:: Indicates that a task completed successfully but no change was executed on the host. changed:: The playbook task executed. -Since Ansible tasks should be written to be idempotent, tasks may exit successfully without executing anything on the host. +Since Ansible tasks should be written to be idempotent, tasks can exit successfully without executing anything on the host. In these cases, the task returns *ok*, but not *changed*. failed:: The task failed. Further playbook execution stopped for this host. -unreachable:: The host is unreachable from the network or has another fatal error associated with it. 
+unreachable:: The host is unreachable from the network or has another unrecoverable error associated with it. skipped:: The playbook task skipped because no change was necessary for the host to reach the target state. rescued:: This shows the tasks that failed and then executes a rescue section. ignored:: This shows the tasks that failed and have `ignore_errors: yes configured`. -These statuses also display in each *Stdout* pane, in a group of "stats" called the host summary fields: +//These statuses also display in each *Stdout* pane, in a group of "stats" called the host summary fields: -image::ug-job-std-out-host-summary-status.png[Host summary status] +//image::ug-job-std-out-host-summary-status.png[Host summary status] The following example shows a search with only unreachable hosts: image::ug-std-out-unreachable.png[Stdout pane unreachable] -For more information on using the search, see the xref:assembly-controller-search[Search] section. +For more information about using the search, see the link:{URLControllerUserGuide}/assembly-controller-search[Search] section. The standard output view displays the events that occur on a particular job. -By default, all rows are expanded so that the details are displayed. -Use the collapse-all (image:ug-collapse-all-icon.png[Collapse,15,15]) icon to switch to a view that only contains the headers for plays and tasks. -Click the plus (image:plus_icon_dark.png[Plus icon,15,15]) icon to view all the lines of the standard output. -You can display all the details of a specific play or task by clicking the arrow icons next to them. -Click an arrow from sideways to downward to expand the lines associated with that play or task. -Click the arrow back to the sideways position to collapse and hide the lines. +// Latest environment does not show these options: +//By default, all rows are expanded so that the details are displayed. +//Use the collapse-all (image:ug-collapse-all-icon.png[Collapse,15,15]) icon to switch to a view that only contains the headers for plays and tasks. +//Click the plus (image:plus_icon_dark.png[Plus icon,15,15]) icon to view all the lines of the standard output. + +//You can display all the details of a specific play or task by clicking the arrow icons next to them. +//Click an arrow from sideways to downward to expand the lines associated with that play or task. +//Click the arrow back to the sideways position to collapse and hide the lines. -image::ug-std-out-expand-collapse-icons.png[Collapse icons] +//image::ug-std-out-expand-collapse-icons.png[Collapse icons] -When viewing details in the expand or collapse mode, note the following: +//When viewing details in the expand or collapse mode, note the following: -* Each displayed line that is not collapsed has a corresponding line number and start time. -* An expand or collapse icon is at the start of any play or task after the play or task has completed. -* If querying for a particular play or task, it appears collapsed at the end of its completed process. -* In some cases, an error message appears, stating that the output may be too large to display. -This occurs when there are more than 4000 events. -Use the search and filter for specific events to bypass the error. +//* Each displayed line that is not collapsed has a corresponding line number and start time. +//* An expand or collapse icon is at the start of any play or task after the play or task has completed. +//* If querying for a particular play or task, it appears collapsed at the end of its completed process. 
+//* In some cases, an error message appears, stating that the output may be too large to display. +//This occurs when there are more than 4000 events. +//Use the search and filter for specific events to bypass the error. -Click on a line of an event from the *Stdout* pane and a *Host Events* window displays in a separate window. +Click a line of an event from the *Stdout* pane to open the *Host Events* window. This window shows the host that was affected by that particular event. [NOTE] diff --git a/downstream/modules/platform/ref-controller-playbooks-not-showing.adoc b/downstream/modules/platform/ref-controller-playbooks-not-showing.adoc index fa4517f0ba..1caceabed9 100644 --- a/downstream/modules/platform/ref-controller-playbooks-not-showing.adoc +++ b/downstream/modules/platform/ref-controller-playbooks-not-showing.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-playbooks-not-showing"] = Playbooks do not show up in the Job Template list diff --git a/downstream/modules/platform/ref-controller-policy-considerations.adoc b/downstream/modules/platform/ref-controller-policy-considerations.adoc index 68af39ce11..92a91c0281 100644 --- a/downstream/modules/platform/ref-controller-policy-considerations.adoc +++ b/downstream/modules/platform/ref-controller-policy-considerations.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-policy-considerations"] = Notable policy considerations @@ -6,8 +8,11 @@ Take the following policy considerations into account: * Both `policy_instance_percentage` and `policy_instance_minimum` set minimum allocations. The rule that results in more instances assigned to the group takes effect. ++ For example, if you have a `policy_instance_percentage` of 50% and a `policy_instance_minimum` of 2 and you start 6 instances, 3 of them are assigned to the instance group. ++ If you reduce the number of total instances in the cluster to 2, then both of them are assigned to the instance group to satisfy `policy_instance_minimum`. This enables you to set a lower limit on the amount of available resources. * Policies do not actively prevent instances from being associated with multiple instance groups, but this can be achieved by making the percentages add up to 100. ++ If you have 4 instance groups, assign each a percentage value of 25 and the instances are distributed among them without any overlap.
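The interplay of the two policy rules in the example above can be sketched as follows: compute the ceiling of the percentage allocation, then let the larger of the two minimums win. The values mirror the 50% and minimum-of-2 example:

----
$ total=6 pct=50 min=2
$ by_pct=$(( (total * pct + 99) / 100 ))   # ceiling of total x percentage
$ echo $(( by_pct > min ? by_pct : min ))
3
----

With `total=2`, the same arithmetic returns 2, matching the `policy_instance_minimum` behavior described above.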
diff --git a/downstream/modules/platform/ref-controller-pre-scan-setup.adoc b/downstream/modules/platform/ref-controller-pre-scan-setup.adoc index 3a2547c0fc..e4fc3c5b99 100644 --- a/downstream/modules/platform/ref-controller-pre-scan-setup.adoc +++ b/downstream/modules/platform/ref-controller-pre-scan-setup.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-pre-scan-setup"] = Pre-scan setup diff --git a/downstream/modules/platform/ref-controller-proj-sign-prerequisites.adoc b/downstream/modules/platform/ref-controller-proj-sign-prerequisites.adoc index c5a3697825..527886e61d 100644 --- a/downstream/modules/platform/ref-controller-proj-sign-prerequisites.adoc +++ b/downstream/modules/platform/ref-controller-proj-sign-prerequisites.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-proj-sign-prerequisites"] = Prerequisites @@ -8,10 +10,10 @@ + [literal, options="nowrap" subs="+attributes"] ---- -ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms for RHEL 8 -ansible-automation-platform-2.4-for-rhel-9-x86_64-rpms for RHEL 9 +ansible-automation-platform-2.5-for-rhel-8-x86_64-rpms for RHEL 8 +ansible-automation-platform-2.5-for-rhel-9-x86_64-rpms for RHEL 9 ---- -* A valid GPG public or private keypair is required for signing content. +* You require a valid GPG public or private keypair for signing content. For more information, see link:https://www.redhat.com/sysadmin/creating-gpg-keypairs[How to create GPG keypairs]. + For more information about GPG keys, see the link:https://www.gnupg.org/documentation/index.html[GnuPG documentation]. diff --git a/downstream/modules/platform/ref-controller-projects-scm-type-json.adoc b/downstream/modules/platform/ref-controller-projects-scm-type-json.adoc index 8408df69cf..b71f95bfba 100644 --- a/downstream/modules/platform/ref-controller-projects-scm-type-json.adoc +++ b/downstream/modules/platform/ref-controller-projects-scm-type-json.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-projects-scm-type-json"] = projects_by_scm_type.json diff --git a/downstream/modules/platform/ref-controller-query-info-json.adoc b/downstream/modules/platform/ref-controller-query-info-json.adoc index ba71468086..fecffdca41 100644 --- a/downstream/modules/platform/ref-controller-query-info-json.adoc +++ b/downstream/modules/platform/ref-controller-query-info-json.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-query-info-json"] = query_info.json diff --git a/downstream/modules/platform/ref-controller-refresh-existing-token.adoc b/downstream/modules/platform/ref-controller-refresh-existing-token.adoc index 48e8448d1e..f6c9e9f2ad 100644 --- a/downstream/modules/platform/ref-controller-refresh-existing-token.adoc +++ b/downstream/modules/platform/ref-controller-refresh-existing-token.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-refresh-existing-token"] = Refresh an existing access token @@ -19,14 +21,14 @@ The following example shows an existing access token with a refresh token provid } ---- -The `/api/o/token/` endpoint is used for refreshing the access token: +The `/o/token/` endpoint is used for refreshing the access token: [literal, options="nowrap" subs="+attributes"] ---- curl -X POST \ -d "grant_type=refresh_token&refresh_token=AL0NK9TTpv0qp54dGbC4VUZtsZ9r8z" \ -u 
"gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \ - http:///api/o/token/ -i + http:///o/token/ -i ---- Where `refresh_token` is provided by `refresh_token` field of the preceding access token. @@ -36,7 +38,7 @@ The authentication information is of format `:`, where [NOTE] ==== The special OAuth 2 endpoints only support using the `x-www-form-urlencoded` *Content-type*, so as a result, none of the -`api/o/*` endpoints accept `application/json`. +`/o/*` endpoints accept `application/json`. ==== On success, a response displays in JSON format containing the new (refreshed) access token with the same scope information as the previous one: @@ -60,4 +62,4 @@ Strict-Transport-Security: max-age=15768000 The refresh operation replaces the existing token by deleting the original and then immediately creating a new token with the same scope and related application as the original one. -Verify that the new token is present and the old one is deleted in the `/api/v2/tokens/` endpoint. +Verify that the new token is present and the old one is deleted in the `api/gateway/v1/tokens/` endpoint. diff --git a/downstream/modules/platform/ref-controller-reuse-external-database-fail.adoc b/downstream/modules/platform/ref-controller-reuse-external-database-fail.adoc index b862b6a16e..27a0326f7d 100644 --- a/downstream/modules/platform/ref-controller-reuse-external-database-fail.adoc +++ b/downstream/modules/platform/ref-controller-reuse-external-database-fail.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-reuse-external-database-fail"] = Reusing an external database causes installations to fail diff --git a/downstream/modules/platform/ref-controller-revoke-access-token.adoc b/downstream/modules/platform/ref-controller-revoke-access-token.adoc index 2e5b018bff..9ac9cbde72 100644 --- a/downstream/modules/platform/ref-controller-revoke-access-token.adoc +++ b/downstream/modules/platform/ref-controller-revoke-access-token.adoc @@ -1,8 +1,10 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-revoke-access-token"] = Revoke an access token -You can revoke an access token by using the `/api/o/revoke-token/` endpoint. +You can revoke an access token by deleting the token in the platform UI, or by using the `/o/revoke-token/` endpoint. Revoking an access token by this method is the same as deleting the token resource object, but it enables you to delete a token by providing its token value, and the associated `client_id` (and `client_secret` if the application is `confidential`). For example: @@ -10,24 +12,21 @@ Revoking an access token by this method is the same as deleting the token resour ---- curl -X POST -d "token=rQONsve372fQwuc2pn76k3IHDCYpi7" \ -u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \ -http:///api/o/revoke_token/ -i +http:///o/revoke_token/ -i ---- [NOTE] ==== * The special OAuth 2 endpoints only support using the `x-www-form-urlencoded` *Content-type*, so as a result, none of the -`api/o/*` endpoints accept `application/json`. +`/o/*` endpoints accept `application/json`. * The *Allow External Users to Create Oauth2 Tokens* (`ALLOW_OAUTH2_FOR_EXTERNAL_USERS` in the API) setting is disabled by default. External users refer to users authenticated externally with a service such as LDAP, or any of the other SSO services. 
This setting ensures external users cannot create their own tokens. If you enable then disable it, any tokens created by external users in the meantime will still exist, and are not automatically revoked. +This setting can be configured from the {MenuSetGateway} menu. ==== Alternatively, to revoke OAuth2 tokens, you can use the `manage` utility, see xref:ref-controller-revoke-oauth2-token[Revoke oauth2 tokens]. -This setting can be configured at the system-level in the UI: - -image:configure-controller-system-oauth2-tokens-toggle.png[image] - On success, a response of `200 OK` is displayed. -Verify the deletion by checking whether the token is present in the `/api/v2/tokens/` endpoint. \ No newline at end of file +Verify the deletion by checking whether the token is present in the `/api/gateway/v1/tokens/` endpoint. \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-revoke-oauth2-token.adoc b/downstream/modules/platform/ref-controller-revoke-oauth2-token.adoc index 888c573fe2..fafc764e87 100644 --- a/downstream/modules/platform/ref-controller-revoke-oauth2-token.adoc +++ b/downstream/modules/platform/ref-controller-revoke-oauth2-token.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-revoke-oauth2-token"] = `revoke_oauth2_tokens` @@ -11,26 +13,26 @@ To revoke all existing OAuth2 tokens use the following command: [literal, options="nowrap" subs="+attributes"] ---- -$ awx-manage revoke_oauth2_tokens +$ aap-gateway-manage revoke_oauth2_tokens ---- To revoke all OAuth2 tokens and their refresh tokens use the following command: [literal, options="nowrap" subs="+attributes"] ---- -$ awx-manage revoke_oauth2_tokens --revoke_refresh +$ aap-gateway-manage revoke_oauth2_tokens --revoke_refresh ---- To revoke all OAuth2 tokens for the user with `id=example_user` (specify the username for `example_user`): [literal, options="nowrap" subs="+attributes"] ---- -$ awx-manage revoke_oauth2_tokens --user example_user +$ aap-gateway-manage revoke_oauth2_tokens --user example_user ---- To revoke all OAuth2 tokens and refresh token for the user with `id=example_user`: [literal, options="nowrap" subs="+attributes"] ---- -$ awx-manage revoke_oauth2_tokens --user example_user --revoke_refresh +$ aap-gateway-manage revoke_oauth2_tokens --user example_user --revoke_refresh ---- diff --git a/downstream/modules/platform/ref-controller-rh-satellite.adoc b/downstream/modules/platform/ref-controller-rh-satellite.adoc index fdc5a017c1..d74a8a9888 100644 --- a/downstream/modules/platform/ref-controller-rh-satellite.adoc +++ b/downstream/modules/platform/ref-controller-rh-satellite.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-rh-satellite"] = Red Hat Satellite 6 diff --git a/downstream/modules/platform/ref-controller-rh-virtualization.adoc b/downstream/modules/platform/ref-controller-rh-virtualization.adoc index 68b9b9a75b..35b9bfe063 100644 --- a/downstream/modules/platform/ref-controller-rh-virtualization.adoc +++ b/downstream/modules/platform/ref-controller-rh-virtualization.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-rh-virtualization"] = Red Hat Virtualization diff --git a/downstream/modules/platform/ref-controller-run-a-playbook.adoc b/downstream/modules/platform/ref-controller-run-a-playbook.adoc deleted file mode 100644 index dccf4c2593..0000000000 --- a/downstream/modules/platform/ref-controller-run-a-playbook.adoc +++ /dev/null @@ -1,12 +0,0 @@ -[id="controller-run-a-playbook"] - -= Unable to
run a playbook - -If you are unable to run the `helloworld.yml` example playbook from the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/getting_started_with_automation_controller/index#controller-projects[Managing projects] section of the _{ControllerGS}_ guide due to playbook errors, try the following: - -* Ensure that you are authenticating with the user currently running the commands. -If not, check how the username has been set up or pass the `--user=username` or `-u username` commands to specify a user. -* Is your YAML file correctly indented? -You might need to line up your whitespace correctly. -Indentation level is significant in YAML. -You can use `yamlint` to check your playbook. diff --git a/downstream/modules/platform/ref-controller-run-the-builder.adoc b/downstream/modules/platform/ref-controller-run-the-builder.adoc index b472865494..b535523bc2 100644 --- a/downstream/modules/platform/ref-controller-run-the-builder.adoc +++ b/downstream/modules/platform/ref-controller-run-the-builder.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-run-the-builder"] = Example YAML file to build an image diff --git a/downstream/modules/platform/ref-controller-scan-fact-tracking-schema.adoc b/downstream/modules/platform/ref-controller-scan-fact-tracking-schema.adoc index d20b8fcd4b..c12dc3baf5 100644 --- a/downstream/modules/platform/ref-controller-scan-fact-tracking-schema.adoc +++ b/downstream/modules/platform/ref-controller-scan-fact-tracking-schema.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-scan-fact-tracking-schema"] = Scan / fact / system tracking data schema diff --git a/downstream/modules/platform/ref-controller-scm-inv-source-fields.adoc b/downstream/modules/platform/ref-controller-scm-inv-source-fields.adoc index adb028d2c1..d57b619414 100644 --- a/downstream/modules/platform/ref-controller-scm-inv-source-fields.adoc +++ b/downstream/modules/platform/ref-controller-scm-inv-source-fields.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-scm-inv-source-fields"] = Source control management Inventory Source Fields @@ -17,7 +19,7 @@ Additionally: * In cases where you have a large project (around 10 GB), disk space on `/tmp` can be an issue. You can specify a location manually in the {ControllerName} UI from the *Add source* page of an inventory. -Refer to link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#proc-controller-add-source[Adding a source] for instructions on creating an inventory source. +Refer to link:{URLControllerUserGuide}/controller-inventories#proc-controller-add-source[Adding a source] for instructions on creating an inventory source. When you update a project, refresh the listing to use the latest source control management (SCM) information. If no inventory sources use a project as an SCM inventory source, then the inventory listing might not be refreshed on update. @@ -35,3 +37,64 @@ You can perform an inventory update while a related job is running. == Supported File Syntax {ControllerNameStart} uses the `ansible-inventory` module from Ansible to process inventory files, and supports all valid inventory syntax that {ControllerName} requires. + +[IMPORTANT] +==== +You do not need to write inventory scripts in Python. +You can enter any executable file in the source field and must run `chmod +x` for that file and check it into Git. 
+==== + +The following is a working example of JSON output that {ControllerName} can read for the import: + +---- +{ + "_meta": { + "hostvars": { + "host1": { + "fly_rod": true + } + } + }, + "all": { + "children": [ + "groupA", + "ungrouped" + ] + }, + "groupA": { + "hosts": [ + "host1", + "host10", + "host11", + "host12", + "host13", + "host14", + "host15", + "host16", + "host17", + "host18", + "host19", + "host2", + "host20", + "host21", + "host22", + "host23", + "host24", + "host25", + "host3", + "host4", + "host5", + "host6", + "host7", + "host8", + "host9" + ] + } +} +---- + +.Additional resources + +* link:https://github.com/ansible/test-playbooks/tree/main/inventories[test-playbooks/inventories] +* link:https://github.com/ansible/test-playbooks/blob/main/inventories/changes.py[inventories/changes.py] +* link:https://access.redhat.com/solutions/6997130[How to migrate inventory scripts from Red Hat Ansible tower to Red Hat Ansible Automation Platform?] diff --git a/downstream/modules/platform/ref-controller-scm-inventory-details.adoc b/downstream/modules/platform/ref-controller-scm-inventory-details.adoc index 868a6909aa..58cb8b6a32 100644 --- a/downstream/modules/platform/ref-controller-scm-inventory-details.adoc +++ b/downstream/modules/platform/ref-controller-scm-inventory-details.adoc @@ -1,10 +1,12 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-scm-inventory-details"] = SCM inventory details -To view details about the job execution and its associated project, select the *Access* tab. +To view details about the job execution and its associated project, select the *Details* tab. -image::ug-details-for-scm-job.png[Details for SCM job] +//image::ug-details-for-scm-job.png[Details for SCM job] You can view the following details for an executed job: @@ -16,13 +18,13 @@ Reasons for SCM jobs not being ready include dependencies that are currently run ** *Running*: The SCM job is currently in progress. ** *Successful*: The last SCM job succeeded. ** *Failed*: The last SCM job failed. -* *Job Type*: SCM jobs display Source Control Update. +* *Type*: SCM jobs display Source Control Update. * *Project*: The name of the project. -* *Project Status*: Indicates whether the associated project was successfully updated. +* *Status*: Indicates whether the associated project was successfully updated. * *Revision*: Indicates the revision number of the sourced project that was used in this job. -* *Execution Environment*: Specifies the {ExecEnvShort} used to run this job. -* *Execution Node*: Indicates the node on which the job ran. -* *Instance Group*: Indicates the instance group on which the job ran, if specified. -* *Job Tags*: Tags show the various job operations executed. +* *Execution environment*: Specifies the {ExecEnvShort} used to run this job. +* *Execution node*: Indicates the node on which the job ran. +* *Instance group*: Indicates the instance group on which the job ran, if specified. +* *Job tags*: Tags show the various job operations executed. -Selecting these items enables you to view the corresponding job templates, projects, and other objects. +Select these items to view the corresponding job templates, projects, and other objects. 
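As a companion to the JSON import example in the inventory source module above, the following is a minimal sketch of an executable, non-Python inventory script that emits output in that shape. The filename and group contents are hypothetical; the script must be made executable with `chmod +x` and checked into the project, as the IMPORTANT admonition notes. Ansible's inventory script interface calls the executable with `--list` for the full inventory and `--host <hostname>` for per-host variables, which can return `{}` when a `_meta` block is supplied.

[literal, options="nowrap"]
----
#!/usr/bin/env bash
# changes.sh - hypothetical executable inventory source (chmod +x, committed to Git).
# --list prints the whole inventory as JSON; --host prints {} because the
# _meta.hostvars block already carries the per-host variables.
case "$1" in
  --list)
    cat <<'EOF'
{
  "_meta": {"hostvars": {"host1": {"fly_rod": true}}},
  "all": {"children": ["groupA", "ungrouped"]},
  "groupA": {"hosts": ["host1", "host2"]}
}
EOF
    ;;
  --host)
    echo '{}'
    ;;
esac
----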
diff --git a/downstream/modules/platform/ref-controller-search-sort.adoc b/downstream/modules/platform/ref-controller-search-sort.adoc index 536d7ba7fe..87ac123c67 100644 --- a/downstream/modules/platform/ref-controller-search-sort.adoc +++ b/downstream/modules/platform/ref-controller-search-sort.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-search-sort"] = Sort @@ -7,4 +9,4 @@ The following is an example from the schedules list: image:sort-order-example.png[sort arrow] -The direction of the arrow indicates the sort order of the column. \ No newline at end of file +The direction of the arrow indicates the sort order of the column. diff --git a/downstream/modules/platform/ref-controller-search-tips.adoc b/downstream/modules/platform/ref-controller-search-tips.adoc index 30fe7230b8..cace1c233c 100644 --- a/downstream/modules/platform/ref-controller-search-tips.adoc +++ b/downstream/modules/platform/ref-controller-search-tips.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-search-tips"] = Rules for searching @@ -9,6 +11,11 @@ These searching tips assume that you are not searching hosts. * A colon is used to separate the field that you want to search from the value. * If the search has no colon (see example 3) it is treated as a simple string search where `?search=foobar` is sent. +[NOTE] +==== +Search functionality for Job templates is limited to alphanumeric characters only. +==== + The following are examples of syntax used for searching: . `name:localhost` In this example, the user is searching for the string `localhost` in the name attribute. diff --git a/downstream/modules/platform/ref-controller-search-values-related-fields.adoc b/downstream/modules/platform/ref-controller-search-values-related-fields.adoc index 0d7b8eada9..47bbd5a033 100644 --- a/downstream/modules/platform/ref-controller-search-values-related-fields.adoc +++ b/downstream/modules/platform/ref-controller-search-values-related-fields.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-search-values-related-fields"] = Searching using values from related fields @@ -14,4 +16,4 @@ The syntax on this would look like: `job_template.project.name:"A Project"`. ==== This query executes against the `unified_job_templates` endpoint which is why it starts with `job_template`. If you were searching against the `job_templates` endpoint, then you would not need the `job_template` portion of the query. 
-==== \ No newline at end of file
+====
diff --git a/downstream/modules/platform/ref-controller-secret-handling-automation-use.adoc b/downstream/modules/platform/ref-controller-secret-handling-automation-use.adoc
index 90ade055a5..d89a423279 100644
--- a/downstream/modules/platform/ref-controller-secret-handling-automation-use.adoc
+++ b/downstream/modules/platform/ref-controller-secret-handling-automation-use.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="controller-secret-handling-automation-use"]

 = Secret handling for automation use
diff --git a/downstream/modules/platform/ref-controller-secret-handling-operational-use.adoc b/downstream/modules/platform/ref-controller-secret-handling-operational-use.adoc
index b03e0f766d..90efde6fd6 100644
--- a/downstream/modules/platform/ref-controller-secret-handling-operational-use.adoc
+++ b/downstream/modules/platform/ref-controller-secret-handling-operational-use.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="controller-secret-handling-operational-use"]

 = Secret handling for operational use
diff --git a/downstream/modules/platform/ref-controller-set-up-jump-host.adoc b/downstream/modules/platform/ref-controller-set-up-jump-host.adoc
index b9dfb1e97d..0d16715925 100644
--- a/downstream/modules/platform/ref-controller-set-up-jump-host.adoc
+++ b/downstream/modules/platform/ref-controller-set-up-jump-host.adoc
@@ -1,29 +1,72 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-set-up-jump-host"]

-= Set up a jump host to use with {ControllerName}
+= Configuring {ControllerName} to use jump hosts connecting to managed nodes

 Credentials supplied by {ControllerName} do not flow to the jump host through ProxyCommand.
 They are only used for the end-node when the tunneled connection is set up.

-You can configure a fixed user/keyfile in the AWX user's SSH configuration in the ProxyCommand definition that sets up the connection through the jump host.
+[discrete]
+== Configure a fixed user/keyfile in your SSH configuration file
+
+You can configure a fixed user/keyfile in your SSH configuration file in the ProxyCommand definition that sets up the connection through the jump host.

-For example:
+.Prerequisites
+* Check whether all jump hosts are reachable from any node that establishes an SSH connection to the managed nodes, such as a Hybrid Controller or an Execution Node.
+
+.Procedure
+. Create an SSH configuration file `/var/lib/awx/.ssh/config` on each node with the following details:

[literal, options="nowrap" subs="+attributes"]
----
-Host tampa
-Hostname 10.100.100.11
-IdentityFile [privatekeyfile]
+Host jumphost.example.com
+  Hostname jumphost.example.com
+  User jumphostuser
+  Port jumphostport
+  IdentityFile ~/.ssh/id_rsa
+  StrictHostKeyChecking no
+
+----
-Host 10.100..
-Proxycommand ssh -W [jumphostuser]@%h:%p tampa
+* The code specifies the configuration required to connect to the jump host `jumphost.example.com`.
+* {ControllerNameStart} establishes an SSH connection from each node to the managed nodes.
+* Example values `jumphost.example.com`, `jumphostuser`, `jumphostport`, and `~/.ssh/id_rsa` must be changed according to your environment.
+* Add a Host matching block to the already created SSH configuration file `/var/lib/awx/.ssh/config` on the node, for example:
++
+[literal, options="nowrap" subs="+attributes"]
+----
+Host 192.0.*
+  ProxyCommand ssh -W %h:%p jumphost.example.com
 ----
++
+* The `Host 192.0.*` line indicates that all hosts in that subnet use the settings defined in that block.
+Specifically, all hosts in that subnet are accessed using the `ProxyCommand` setting and connect through `jumphost.example.com`.
+* If `Host *` is used to indicate that all hosts connect through the specified proxy, ensure that `jumphost.example.com` is excluded from that matching, for example:
++
+[literal, options="nowrap" subs="+attributes"]
+----
+Host * !jumphost.example.com
+  ...
+----
+
+[discrete]
+=== Using the {PlatformName} UI
+
+.Procedure
+. On the navigation panel, select {MenuSetJob}.
+. Click btn:[Edit] and add `/var/lib/awx/.ssh:/home/runner/.ssh:O` to the *Paths to expose isolated jobs* field.
+. Click btn:[Save] to save your changes.
+
+[discrete]
+== Configuring jump hosts using Ansible Inventory variables

 You can also add a jump host to your {ControllerName} instance through Inventory variables.
-These variables can be set at either the inventory,
-group, or host level.
-To add this, navigate to your inventory and in the `variables` field of whichever level you choose, add the following
+These variables can be set at either the inventory, group, or host level.
+Use this method if you want to control the use of jump hosts inside {ControllerName} using the inventory.
+
+* Navigate to your inventory and in the `variables` field of whichever level you choose, add the following
 variables:

 [literal, options="nowrap" subs="+attributes"]
 ----
diff --git a/downstream/modules/platform/ref-controller-settings-control-execution-nodes.adoc b/downstream/modules/platform/ref-controller-settings-control-execution-nodes.adoc
index 74cc86026b..8dd2b44198 100644
--- a/downstream/modules/platform/ref-controller-settings-control-execution-nodes.adoc
+++ b/downstream/modules/platform/ref-controller-settings-control-execution-nodes.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-settings-control-execution-nodes"]

 = Capacity settings for control and execution nodes
@@ -6,6 +8,3 @@ The following settings impact capacity calculations on the cluster. Set them to
 * `AWX_CONTROL_NODE_TASK_IMPACT`: Sets the impact of controlling jobs. You can use it when your control plane exceeds desired CPU or memory usage to control the number of jobs that your control plane can run at the same time.
 * `SYSTEM_TASK_FORKS_CPU` and `SYSTEM_TASK_FORKS_MEM`: Influence how many resources are estimated to be consumed by each fork of Ansible. By default, 1 fork of Ansible is estimated to use 0.25 of a CPU and 100 Mb of memory.
-
-//.Additional resources
-//For information about file-based settings, see xref:con-controller-additional-settings[Additional settings for {ControllerName}].
diff --git a/downstream/modules/platform/ref-controller-settings-job-events.adoc b/downstream/modules/platform/ref-controller-settings-job-events.adoc
index 4abfbca0a7..3125e9e88d 100644
--- a/downstream/modules/platform/ref-controller-settings-job-events.adoc
+++ b/downstream/modules/platform/ref-controller-settings-job-events.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-settings-job-events"]

 = Settings for managing job event processing
@@ -6,4 +8,4 @@ The callback receiver processes all the output of jobs and writes this output as
 Administrators can override the number of callback receiver workers with the setting `JOB_EVENT_WORKERS`. Do not set more than 1 worker per CPU, and there must be at least 1 worker.
Greater values have more workers available to clear the Redis queue as events stream to the {ControllerName}, but can compete with other processes such as the web server for CPU seconds, uses more database connections (1 per worker), and can reduce the batch size of events each worker commits. -Each worker builds up a buffer of events to write in a batch. The default amount of time to wait before writing a batch is 1 second. This is controlled by the `JOB_EVENT_BUFFER_SECONDS` setting. Increasing the amount of time the worker waits between batches can result in larger batch sizes. \ No newline at end of file +Each worker builds up a buffer of events to write in a batch. The default amount of time to wait before writing a batch is 1 second. This is controlled by the `JOB_EVENT_BUFFER_SECONDS` setting. Increasing the amount of time the worker waits between batches can result in larger batch sizes. diff --git a/downstream/modules/platform/ref-controller-settings-scheduling-jobs.adoc b/downstream/modules/platform/ref-controller-settings-scheduling-jobs.adoc index d03dadd821..2022427d12 100644 --- a/downstream/modules/platform/ref-controller-settings-scheduling-jobs.adoc +++ b/downstream/modules/platform/ref-controller-settings-scheduling-jobs.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-settings-scheduling-jobs"] = Settings for scheduling jobs diff --git a/downstream/modules/platform/ref-controller-settings-to-modify-events.adoc b/downstream/modules/platform/ref-controller-settings-to-modify-events.adoc index 182fd2610c..4fbbc727c8 100644 --- a/downstream/modules/platform/ref-controller-settings-to-modify-events.adoc +++ b/downstream/modules/platform/ref-controller-settings-to-modify-events.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-settings-to-modify-events"] = Settings to modify rate and size of events @@ -15,6 +17,3 @@ If you cannot disable live streaming of events because of their size, reduce the * `MAX_UI_JOB_EVENTS`: Number of events to display. This setting hides the rest of the events in the list. * `MAX_EVENT_RES_DATA`: The maximum size of the ansible callback event's "res" data structure. The "res" is the full "result" of the module. When the maximum size of ansible callback events is reached, then the remaining output will be truncated. Default value is 700000 bytes. * `LOCAL_STDOUT_EXPIRE_TIME`: The amount of time before a `stdout` file is expired and removed locally. - -//.Additional resources -//For more information on file based settings, see xref:con-controller-additional-settings[Additional settings for {ControllerName}]. 
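To illustrate how the two settings above interact, here is a hedged sketch of file-based settings; the path is the conventional custom settings location and the values are examples only, not tuning recommendations.

[literal, options="nowrap"]
----
# Hypothetical excerpt from a custom settings file, for example /etc/tower/conf.d/custom.py
JOB_EVENT_WORKERS = 4         # callback receiver workers; do not exceed 1 per CPU
JOB_EVENT_BUFFER_SECONDS = 2  # wait longer between batch writes to build larger batches
----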
diff --git a/downstream/modules/platform/ref-controller-setup-considerations.adoc b/downstream/modules/platform/ref-controller-setup-considerations.adoc
index 1a86bdf91d..5752edec84 100644
--- a/downstream/modules/platform/ref-controller-setup-considerations.adoc
+++ b/downstream/modules/platform/ref-controller-setup-considerations.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="controller-setup-considerations"]

 = Setup considerations
diff --git a/downstream/modules/platform/ref-controller-smart-host-filter.adoc b/downstream/modules/platform/ref-controller-smart-host-filter.adoc
index 62dc160fec..b163f2c7de 100644
--- a/downstream/modules/platform/ref-controller-smart-host-filter.adoc
+++ b/downstream/modules/platform/ref-controller-smart-host-filter.adoc
@@ -1,10 +1,12 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-smart-host-filter"]

 = Smart Host Filters

 You can use a search filter to populate hosts for an inventory. This feature uses the fact searching feature.

-{ControllerNameStart} stores facts generated by an Ansible playbook during a Job Template in the database whenever `use_fact_cache=True` is set per-Job Template.
+{ControllerNameStart} stores facts generated by an Ansible Playbook during a Job Template in the database whenever `use_fact_cache=True` is set per-Job Template.
 New facts are merged with existing facts and are per-host.
 These stored facts can be used to filter hosts with the `/api/v2/hosts` endpoint, using the `GET` query parameter `host_filter`.
@@ -24,7 +26,7 @@ The `host_filter` parameter permits:
 ** `""` can be used in the value when spaces are wanted in the value
 * "classic" Django queries may be embedded in the `host_filter`

-.Examples:
+*Examples*:

 [literal, options="nowrap" subs="+attributes"]
 ----
@@ -68,4 +70,4 @@ If a search term in `host_filter` is of string type, to make the value a number
 ----
 host_filter=ansible_facts__packages__dnsmasq[]__version="2.66"
 ----
-==== \ No newline at end of file
+====
diff --git a/downstream/modules/platform/ref-controller-smart-inventories.adoc b/downstream/modules/platform/ref-controller-smart-inventories.adoc
index 58c7ca737a..9372380d9e 100644
--- a/downstream/modules/platform/ref-controller-smart-inventories.adoc
+++ b/downstream/modules/platform/ref-controller-smart-inventories.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-smart-inventories"]

 = Smart Inventories
@@ -7,7 +9,7 @@ Organization administrators have admin permission for inventories in their organ
 A Smart Inventory is identified by `KIND=smart`.

-You can define a Smart Inventory using the same method being used with Search.
+You can define a Smart Inventory by using the same method being used with Search.
 `InventorySource` is directly associated with an Inventory.

 [NOTE]
diff --git a/downstream/modules/platform/ref-controller-source-control.adoc b/downstream/modules/platform/ref-controller-source-control.adoc
new file mode 100644
index 0000000000..9fa84acc31
--- /dev/null
+++ b/downstream/modules/platform/ref-controller-source-control.adoc
@@ -0,0 +1,8 @@
+[id="ref-controller-source-control"]
+
+= Use source control
+
+Although {ControllerNameStart} supports playbooks stored directly on the server, you must store your playbooks, roles, and any associated details in source control.
+This way you have an audit trail describing when and why you changed the rules that are automating your infrastructure.
+Additionally, it permits sharing of playbooks with other parts of your infrastructure or team. \ No newline at end of file
diff --git a/downstream/modules/platform/ref-controller-subscription-types.adoc b/downstream/modules/platform/ref-controller-subscription-types.adoc
index cf3017b14a..1584e00396 100644
--- a/downstream/modules/platform/ref-controller-subscription-types.adoc
+++ b/downstream/modules/platform/ref-controller-subscription-types.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-subscription-types"]

 = Subscription Types
@@ -18,7 +20,7 @@
 ** Review the SLA at link:https://access.redhat.com/support/offerings/production/sla[Product Support Terms of Service]
 ** Review the link:https://access.redhat.com/support/policy/severity[Red Hat Support Severity Level Definitions]

-All subscription levels include regular updates and releases of {ControllerName}, Ansible, and any other components of the Platform.
+All subscription levels include regular updates and releases of {ControllerName}, Ansible, and any other components of the {PlatformNameShort}.

 For more information, contact Ansible through the link:https://access.redhat.com/[Red Hat Customer Portal]
-or at http://www.ansible.com/contact-us/.
+or at the link:http://www.ansible.com/contact-us/[Ansible site].
diff --git a/downstream/modules/platform/ref-controller-supported-attributes.adoc b/downstream/modules/platform/ref-controller-supported-attributes.adoc
index ac364f3915..b72cf829e2 100644
--- a/downstream/modules/platform/ref-controller-supported-attributes.adoc
+++ b/downstream/modules/platform/ref-controller-supported-attributes.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="controller-supported-attributes"]

 The following are the supported job attributes:
@@ -5,7 +7,7 @@
 * `allow_simultaneous` - (boolean) Indicates if multiple jobs can run simultaneously from the job template associated with this job.
 * `controller_node` - (string) The instance that manages the isolated {ExecEnvShort}.
 * `created` - (datetime) The timestamp when this job was created.
-* `custom_virtualenv` - (string) The custom virtual environment used to execute the job.
+* `custom_virtualenv` - (string) The custom virtual environment used to run the job.
 * `description` - (string) An optional description of the job.
 * `diff_mode` - (boolean) If enabled, textual changes made to any templated files on the host are shown in the standard output.
 * `elapsed` - (decimal) The elapsed time in seconds that the job has run.
@@ -19,7 +21,7 @@ Note that some conditions, such as unreachable hosts can still prevent handlers
 * `job_explanation` - (string) The status field to indicate the state of the job if it was not able to run and capture `stdout`.
 * `job_slice_count` - (integer) If run as part of a sliced job, this is the total number of slices (if 1, job is not part of a sliced job).
 * `job_slice_number` - (integer) If run as part of a sliced job, this is the ID of the inventory slice operated on (if not part of a sliced job, attribute is not used).
-* `job_tags` - (string) Only tasks with specified tags execute.
+* `job_tags` - (string) Only tasks with specified tags run.
 * `job_type` - (choice) This can be `run`, `check`, or `scan`.
 * `launch_type` - (choice) This can be `manual`, `relaunch`, `callback`, `scheduled`, `dependency`, `workflow`, `sync`, or `scm`.
 * `limit` - (string) The playbook execution limited to this set of hosts, if specified.
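Because the attribute list above feeds the custom notification messages described in the next hunk, here is a hypothetical one-line message template that combines several of the documented attributes with the dotted `{{ }}` notation; the wording and combination are illustrative only.

[literal, options="nowrap"]
----
{{ job.status }}: job #{{ job.id }} '{{ job.name }}' finished in {{ job.elapsed }}s on inventory '{{ job.summary_fields.inventory.name }}'
----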
@@ -93,10 +95,10 @@ Note that some conditions, such as unreachable hosts can still prevent handlers **** `results` - The list of dictionaries representing labels. For example, {"id": 5, "name": "database jobs"}. -You can reference information about a job in a custom notification message using grouped curly brackets {{ }}. -Specific job attributes are accessed using dotted notation, for example, {{ job.summary_fields.inventory.name }}. +You can reference information about a job in a custom notification message by using grouped curly brackets {{ }}. +Access specific job attributes by using dotted notation, for example, {{ job.summary_fields.inventory.name }}. You can add any characters used in front or around the braces, or plain text, for clarification, such as "#" for job ID and single-quotes to denote some descriptor. -Custom messages can include a number of variables throughout the message: +Custom messages can include several variables throughout the message: [literal, options="nowrap" subs="+attributes"] ---- @@ -133,7 +135,3 @@ In cases of approval-related notifications, both `url` and `workflow_url` are th 'credential': 'Stub credential', 'created_by': 'admin'} ---- - - - - diff --git a/downstream/modules/platform/ref-controller-supported-oses.adoc b/downstream/modules/platform/ref-controller-supported-oses.adoc index 5483ebe248..8f0f91f5ee 100644 --- a/downstream/modules/platform/ref-controller-supported-oses.adoc +++ b/downstream/modules/platform/ref-controller-supported-oses.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-supported-oses"] = Supported OSes for scan_facts.yml diff --git a/downstream/modules/platform/ref-controller-system-requirements.adoc b/downstream/modules/platform/ref-controller-system-requirements.adoc index e0917a5400..b8ef16c767 100644 --- a/downstream/modules/platform/ref-controller-system-requirements.adoc +++ b/downstream/modules/platform/ref-controller-system-requirements.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-system-requirements"] = {ControllerNameStart} system requirements @@ -7,68 +9,40 @@ In the installer, four node types are provided as abstractions to help you desig Use the following recommendations for node sizing: -[NOTE] -==== -On control and hybrid nodes, allocate a minimum of 20 GB to `/var/lib/awx` for {ExecEnvShort} storage. -==== - *Execution nodes* Execution nodes run automation. Increase memory and CPU to increase capacity for running more forks. [NOTE] ==== -* The RAM and CPU resources stated might not be required for packages installed on an execution node but are the minimum recommended to handle the job load for a node to run an average number of jobs simultaneously. +* The RAM and CPU resources stated are minimum recommendations to handle the job load for a node to run an average number of jobs simultaneously. * Recommended RAM and CPU node sizes are not supplied. The required RAM or CPU depends directly on the number of jobs you are running in that environment. -For further information about required RAM and CPU levels, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_administration_guide/assembly-controller-improving-performance[Performance tuning for automation controller]. -==== - -.Execution nodes +* For capacity based on forks in your configuration, see link:{URLControllerUserGuide}/controller-jobs#controller-capacity-determination[{ControllerNameStart} capacity determination and job impact]. 
-[cols="a,a",options="header"] -|=== -h| Requirement | Minimum required -| *RAM* | 16 GB -| *CPUs* | 4 -| *Local disk* | 40GB minimum -|=== +For further information about required RAM and CPU levels, see link:{URLControllerAdminGuide}/assembly-controller-improving-performance[Performance tuning for automation controller]. +==== *Control nodes* Control nodes process events and run cluster jobs including project updates and cleanup jobs. Increasing CPU and memory can help with job event processing. -.Control nodes +//Control nodes have the following storage requirements: -[cols="a,a",options="header"] -|=== -h| Requirement | Minimum required -| *RAM* | 16 GB -| *CPUs* | 4 -| *Local disk* a| * 40GB minimum with at least 20GB available under /var/lib/awx -* Storage volume must be rated for a minimum baseline of 1500 IOPS +* Storage volume must be rated for a minimum baseline of 3000 IOPS * Projects are stored on control and hybrid nodes, and for the duration of jobs, are also stored on execution nodes. If the cluster has many large projects, consider doubling the GB in /var/lib/awx/projects, to avoid disk space errors. -|=== *Hop nodes* Hop nodes serve to route traffic from one part of the {AutomationMesh} to another (for example, a hop node could be a bastion host into another network). RAM can affect throughput, CPU activity is low. Network bandwidth and latency are generally a more important factor than either RAM or CPU. -.Hop nodes - -[cols="a,a",options="header"] -|=== -h| Requirement | Minimum required -| *RAM* | 16 GB -| *CPUs* | 4 -| *Local disk* | 40 GB -|=== - -* Actual RAM requirements vary based on how many hosts {ControllerName} will manage simultaneously (which is controlled by the `forks` parameter in the job template or the system `ansible.cfg` file). -To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 forks and 2 GB reservation for {ControllerName}. For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-jobs#controller-capacity-determination[{ControllerNameStart} capacity determination and job impact]. If `forks` is set to 400, 42 GB of memory is recommended. +* Actual RAM requirements vary based on how many hosts {ControllerName} manages simultaneously (which is controlled by the `forks` parameter in the job template or the system `ansible.cfg` file). +To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 forks and 2 GB reservation for {ControllerName}. +See link:{URLControllerUserGuide}/controller-jobs#controller-capacity-determination[{ControllerNameStart} capacity determination and job impact]. +If `forks` is set to 400, 42 GB of memory is recommended. * {ControllerNameStart} hosts check if `umask` is set to 0022. If not, the setup fails. Set `umask=0022` to avoid this error. * A larger number of hosts can be addressed, but if the fork number is less than the total host count, more passes across the hosts are required. You can avoid these RAM limitations by using any of the following approaches: ** Use rolling updates. 
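The fork-based memory guidance above works out as follows; this is simply the stated rule of thumb (1 GB per 10 forks plus a 2 GB reservation) applied to the 400-fork example.

[literal, options="nowrap"]
----
RAM needed = (forks / 10) x 1 GB + 2 GB reserved for automation controller
           = (400 / 10) x 1 GB + 2 GB
           = 40 GB + 2 GB
           = 42 GB
----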
@@ -78,5 +52,5 @@ To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 f [role="_additional-resources"] .Additional resources -* For more information about obtaining an {ControllerName} subscription, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/automation_controller_user_guide/controller-managing-subscriptions#controller-importing-subscriptions[Importing a subscription]. +* For more information about obtaining an {ControllerName} subscription, see link:{URLCentralAuth}/assembly-gateway-licensing#proc-attaching-subscriptions[Attaching your {PlatformName} subscription]. * For questions, contact Ansible support through the link:https://access.redhat.com/[Red Hat Customer Portal]. diff --git a/downstream/modules/platform/ref-controller-team-mapping.adoc b/downstream/modules/platform/ref-controller-team-mapping.adoc index 1f32a5dc4b..6240875d83 100644 --- a/downstream/modules/platform/ref-controller-team-mapping.adoc +++ b/downstream/modules/platform/ref-controller-team-mapping.adoc @@ -1,60 +1,32 @@ +:_mod-docs-content-type: PROCEDURE + [id="ref-controller-team-mapping"] = Team mapping -Team mapping is the mapping of team members (users) from social authentication accounts. -Keys are team names (which are created if not present). -Values are dictionaries of options for each team's membership, where each can contain the following parameters: - -* *organization*: String. The name of the organization to which the team belongs. -The team is created if the combination of organization and team name does not exist. -The organization is created first if it does not exist. -If the license does not permit multiple organizations, the team is always assigned to the single default organization. - -* *users*: None, True/False, string or list/tuple of strings. - -*** If *None*, team members are not updated. -*** If *True*, all social authentication users are added as team members. -*** If *False*, all social authentication users are removed as team members. -* If a string or list of strings, specifies expressions used to match users, the user is added as a team member if the username or email matches. -Strings beginning and ending with `/` are compiled into regular expressions. -The modifiers `i` (case-insensitive) and `m` (multi-line) can be specified after the ending `/`. - -*remove*: True/False. Defaults to *True*. When *True*, a user who does not match the preceding rules is removed from the team. - -[literal, options="nowrap" subs="+attributes"] ----- -{ - "My Team": { - "organization": "Test Org", - "users": ["/^[^@]+?@test\\.example\\.com$/"], - "remove": true - }, - "Other Team": { - "organization": "Test Org 2", - "users": ["/^[^@]+?@test\\.example\\.com$/"], - "remove": false - } -} ----- - -Team mappings can be specified separately for each account authentication backend, based on which of these you set up. -When defined, these configurations take precedence over the preceding global configuration. - -[literal, options="nowrap" subs="+attributes"] ----- -SOCIAL_AUTH_GOOGLE_OAUTH2_TEAM_MAP = {} -SOCIAL_AUTH_GITHUB_TEAM_MAP = {} -SOCIAL_AUTH_GITHUB_ORG_TEAM_MAP = {} -SOCIAL_AUTH_GITHUB_TEAM_TEAM_MAP = {} -SOCIAL_AUTH_SAML_TEAM_MAP = {} ----- - -Uncomment the following line, that is, set `SOCIAL_AUTH_USER_FIELDS` to an empty list, to prevent new user accounts from being created. 
-
-[literal, options="nowrap" subs="+attributes"]
-----
-SOCIAL_AUTH_USER_FIELDS = []
-----
-
-Only users who have previously logged in to {ControllerName} using social or enterprise-level authentication, or have a user account with a matching email address can then login.
+Team mapping is the mapping of team members (users) from authenticators.
+
+You can define the options for each team's membership. For each team, you can specify which users are automatically added as members of the team and also which users can administer the team.
+
+Team mappings can be specified separately for each account authentication.
+
+When a team mapping is positively evaluated, the specified team and its organization are created if they do not exist, provided the related authenticator is allowed to create objects.
+
+.Procedure
+
+. After configuring the authentication details for your authentication method, select the *Mapping* tab.
+. Select *Team* from the *Add authentication mapping* list.
+. Enter a unique rule *Name* to identify the rule.
+. Select a *Trigger* from the list. See xref:gw-authenticator-map-triggers[Authenticator map triggers] for more information about map triggers.
+. Select *Revoke* to remove the user's access to the selected organization role and deny user access to the system when the trigger conditions are not matched.
+. Select the *Team* and *Organization* to which matching users are added or blocked.
+. Select a *Role* to be applied or removed for matching users (for example, *Team Admin* or *Team Member*).
+. Click btn:[Next].
+
+[role="_additional-resources"]
+.Next steps
+include::snippets/snip-gw-mapping-next-steps.adoc[]
diff --git a/downstream/modules/platform/ref-controller-token-session-management.adoc b/downstream/modules/platform/ref-controller-token-session-management.adoc
index 913d68b3f5..9101517bee 100644
--- a/downstream/modules/platform/ref-controller-token-session-management.adoc
+++ b/downstream/modules/platform/ref-controller-token-session-management.adoc
@@ -1,12 +1,15 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-token-session-management"]

 = Token and session management

-{ControllerNameStart} supports the following commands for OAuth2 token management:
+{PlatformNameShort} supports the following commands for OAuth2 token management:

 * xref:ref-controller-create-oauth2-token[`create_oauth2_token`]
 * xref:ref-controller-revoke-oauth2-token[`revoke_oauth2_tokens`]
 * xref:ref-controller-clear-sessions[`cleartokens`]
-* xref:ref-controller-expire-sessions[`expire_sessions`]
+//[emcwhinn - Temporarily hiding expire sessions module as it does not yet exist for gateway as per AAP-35735]
+//* xref:ref-controller-expire-sessions[`expire_sessions`]
 * xref:ref-controller-clear-sessions[`clearsessions`]
diff --git a/downstream/modules/platform/ref-controller-trial-evaluation.adoc b/downstream/modules/platform/ref-controller-trial-evaluation.adoc
index 649b47b321..85e3cef90a 100644
--- a/downstream/modules/platform/ref-controller-trial-evaluation.adoc
+++ b/downstream/modules/platform/ref-controller-trial-evaluation.adoc
@@ -1,9 +1,11 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-trial-evaluation"]

 = Trial and evaluation

-You require a license to run {ControllerName}.
-You can start by using a free trial license.
-* Trial licenses for {PlatformNameShort} are available at: http://ansible.com/license
+A license is required to run {PlatformNameShort}. You can start by using a free trial license.
+ +* Trial licenses for {PlatformNameShort} are available at the link:https://www.redhat.com/en/products/trials?products=ansible[Red Hat product trial center]. -* Support is not included in a trial license or during an evaluation of the {ControllerName} software. \ No newline at end of file +* Support is not included in a trial license or during an evaluation of the {PlatformNameShort}. \ No newline at end of file diff --git a/downstream/modules/platform/ref-controller-troubleshoot-logging.adoc b/downstream/modules/platform/ref-controller-troubleshoot-logging.adoc index def09c63a5..2f906669ac 100644 --- a/downstream/modules/platform/ref-controller-troubleshoot-logging.adoc +++ b/downstream/modules/platform/ref-controller-troubleshoot-logging.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-troubleshoot-logging"] = Troubleshooting logging diff --git a/downstream/modules/platform/ref-controller-unable-to-login-http.adoc b/downstream/modules/platform/ref-controller-unable-to-login-http.adoc index a8e5b41711..aa5c42d20c 100644 --- a/downstream/modules/platform/ref-controller-unable-to-login-http.adoc +++ b/downstream/modules/platform/ref-controller-unable-to-login-http.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-unable-to-login-http"] = Unable to login to {ControllerName} through HTTP diff --git a/downstream/modules/platform/ref-controller-unable-to-run-job.adoc b/downstream/modules/platform/ref-controller-unable-to-run-job.adoc index 2e704dcfa6..37778ef530 100644 --- a/downstream/modules/platform/ref-controller-unable-to-run-job.adoc +++ b/downstream/modules/platform/ref-controller-unable-to-run-job.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-unable-to-run-job"] = Unable to run a job diff --git a/downstream/modules/platform/ref-controller-unified-job-template-table-csv.adoc b/downstream/modules/platform/ref-controller-unified-job-template-table-csv.adoc index 009b414538..d1c81f0a62 100644 --- a/downstream/modules/platform/ref-controller-unified-job-template-table-csv.adoc +++ b/downstream/modules/platform/ref-controller-unified-job-template-table-csv.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-unified-job-template-table-csv"] = unified_job_template_table.csv diff --git a/downstream/modules/platform/ref-controller-unified-jobs-table-csv.adoc b/downstream/modules/platform/ref-controller-unified-jobs-table-csv.adoc index e89e42450f..c03db6f157 100644 --- a/downstream/modules/platform/ref-controller-unified-jobs-table-csv.adoc +++ b/downstream/modules/platform/ref-controller-unified-jobs-table-csv.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-unified-jobs-table-csv"] = unified_jobs_table.csv diff --git a/downstream/modules/platform/ref-controller-use-CLI-tool.adoc b/downstream/modules/platform/ref-controller-use-CLI-tool.adoc index ae8c6461b6..87e9a0f920 100644 --- a/downstream/modules/platform/ref-controller-use-CLI-tool.adoc +++ b/downstream/modules/platform/ref-controller-use-CLI-tool.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-use-CLI-tool"] = The {ControllerName} CLI Tool diff --git a/downstream/modules/platform/ref-controller-use-an-unreleased-module.adoc b/downstream/modules/platform/ref-controller-use-an-unreleased-module.adoc index 96ff468911..74a9476c95 100644 --- a/downstream/modules/platform/ref-controller-use-an-unreleased-module.adoc +++ 
b/downstream/modules/platform/ref-controller-use-an-unreleased-module.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-use-an-unreleased-module"]

 = Use an unreleased module from Ansible source with {ControllerName}
diff --git a/downstream/modules/platform/ref-controller-use-by-organization.adoc b/downstream/modules/platform/ref-controller-use-by-organization.adoc
index 50ed36ab4e..725b95f4bd 100644
--- a/downstream/modules/platform/ref-controller-use-by-organization.adoc
+++ b/downstream/modules/platform/ref-controller-use-by-organization.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-use-by-organization"]

 = Use by organization
diff --git a/downstream/modules/platform/ref-controller-use-callback-plugins.adoc b/downstream/modules/platform/ref-controller-use-callback-plugins.adoc
index 2e0b190371..142db81280 100644
--- a/downstream/modules/platform/ref-controller-use-callback-plugins.adoc
+++ b/downstream/modules/platform/ref-controller-use-callback-plugins.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-use-callback-plugins"]

 = Use callback plugins with {ControllerName}
diff --git a/downstream/modules/platform/ref-controller-use-credentials-in-playbooks.adoc b/downstream/modules/platform/ref-controller-use-credentials-in-playbooks.adoc
index 52c60557dd..e846d0632f 100644
--- a/downstream/modules/platform/ref-controller-use-credentials-in-playbooks.adoc
+++ b/downstream/modules/platform/ref-controller-use-credentials-in-playbooks.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-use-credentials-in-playbooks"]

 = Use {ControllerName} credentials in a playbook
diff --git a/downstream/modules/platform/ref-controller-use-dynamic-inv-sources.adoc b/downstream/modules/platform/ref-controller-use-dynamic-inv-sources.adoc
new file mode 100644
index 0000000000..74d5d5484f
--- /dev/null
+++ b/downstream/modules/platform/ref-controller-use-dynamic-inv-sources.adoc
@@ -0,0 +1,11 @@
+[id="ref-controller-use-dynamic-inv-sources"]
+
+= Use Dynamic Inventory Sources
+
+If you have an external source of truth for your infrastructure, whether it is a cloud provider or a local CMDB, it is best to define an inventory sync process and use the support for dynamic inventory (including cloud inventory sources).
+This ensures your inventory is always up to date.
+
+[NOTE]
+====
+Edits and additions to Inventory host variables persist beyond an inventory synchronization as long as `--overwrite_vars` is *not* set.
+==== \ No newline at end of file
diff --git a/downstream/modules/platform/ref-controller-user-roles.adoc b/downstream/modules/platform/ref-controller-user-roles.adoc
index a7c6fef5c9..f1d572669f 100644
--- a/downstream/modules/platform/ref-controller-user-roles.adoc
+++ b/downstream/modules/platform/ref-controller-user-roles.adoc
@@ -1,20 +1,27 @@
+:_mod-docs-content-type: PROCEDURE
+
 [id="ref-controller-user-roles"]

-= Displaying a user's roles
+= Adding roles for a user

-From the *Users > Details* page, select the *Roles* tab to display the set of roles assigned to this user.
-These offer the ability to read, change, and administer projects, inventories, job templates, and other elements.
+You can grant users access to use, read, or write credentials by assigning roles to them.

-image:users-permissions-list-for-example-user.png[Users- permissions list]
+[NOTE]
+====
+Users cannot be assigned to an organization by adding roles.
+Refer to the steps provided in link:{URLCentralAuth}/gw-managing-access#proc-controller-add-organization-user[Adding a user to an organization] for detailed instructions.
+====

-//This doesn't seem to fit here.
-//[NOTE]
-//====
-//The job template administrator may not have access to other resources (inventory, project, credentials, or instance groups) associated with the template.
-//
-//Without access to these, certain fields in the job template are not editable.
-//
-//System Administrators can grant individual users permissions to certain resources as necessary.
-//
-//For more information, see xref:proc-controller-user-permissions[Adding permissions to a user].
-//====
+.Procedure
+. From the navigation panel, select {MenuAMUsers}.
+. From the *Users* list view, click the user to whom you want to add roles.
+. Select the *Roles* tab to display the set of roles assigned to this user. These provide the ability to read, modify, and administer resources.
+. To add new roles, click btn:[Add roles].
++
+include::snippets/snip-gw-roles-note-multiple-components.adoc[]
++
+. Select a *Resource type* and click btn:[Next].
+. Select the resources to receive the new roles and click btn:[Next].
+. Select the roles to apply to the resources and click btn:[Next].
+. Review the settings and click btn:[Finish].
++
+The *Add roles* dialog displays, indicating whether the role assignments were successfully applied. Click btn:[Close] to close the dialog.
diff --git a/downstream/modules/platform/ref-controller-user-teams.adoc b/downstream/modules/platform/ref-controller-user-teams.adoc
deleted file mode 100644
index 5ca19e8bf2..0000000000
--- a/downstream/modules/platform/ref-controller-user-teams.adoc
+++ /dev/null
@@ -1,15 +0,0 @@
-[id="ref-controller-user-teams"]
-
-= Displaying a user's teams
-
-From the *Users > Details* page, select the *Teams* tab to display the list of teams of which that user is a member.
-
-[NOTE]
-====
-You cannot modify team membership from this display panel.
-For more information, see xref:assembly-controller-teams[Teams].
-====
-
-Until you create a team and assign a user to that team, the assigned teams details for that user is displayed as empty.
-
-//image:users-teams-list-for-example-user.png[Users - teams list]
diff --git a/downstream/modules/platform/ref-controller-values-for-search-fields.adoc b/downstream/modules/platform/ref-controller-values-for-search-fields.adoc
index 42c274b989..969bdf0f6e 100644
--- a/downstream/modules/platform/ref-controller-values-for-search-fields.adoc
+++ b/downstream/modules/platform/ref-controller-values-for-search-fields.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-values-for-search-fields"]

 = Values for search fields
@@ -39,4 +41,4 @@ Related Fields is populated by taking all the values from `related_search_fields
 Any search that does not start with a value from Fields or a value from the Related Fields, is treated as a generic string search.
 Searching for `localhost`, for example, results in the UI sending `?search=localhost` as a query parameter to the API endpoint.
-This is a shortcut for an `icontains` search on the name and description fields. \ No newline at end of file
+This is a shortcut for an `icontains` search on the name and description fields.
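As a concrete illustration of the generic string search described above, the following hedged curl sketch sends the same `?search=` query parameter directly to the API; the hostname and credentials are placeholders, not values from this document.

[literal, options="nowrap"]
----
# Matches any host whose name or description contains "localhost" (icontains)
curl -s -u admin:password \
  "https://controller.example.com/api/v2/hosts/?search=localhost"
----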
diff --git a/downstream/modules/platform/ref-controller-variables.adoc b/downstream/modules/platform/ref-controller-variables.adoc index ac12f67a1b..ee3da4d132 100644 --- a/downstream/modules/platform/ref-controller-variables.adoc +++ b/downstream/modules/platform/ref-controller-variables.adoc @@ -1,141 +1,285 @@ -[id="ref-controller-variables"] +:_mod-docs-content-type: REFERENCE -= {ControllerNameStart} variables - -[cols="50%,50%",options="header"] -|==== -| *Variable* | *Description* -| *`admin_password`* | The password for an administration user to access the UI when the installation is complete. - -Passwords must be enclosed in quotes when they are provided in plain text in the inventory file. -| *`automation_controller_main_url`* | For an alternative front end URL needed for SSO configuration, provide the URL. - -| *`automationcontroller_password`* | Password for your {ControllerName} instance. - -Passwords must be enclosed in quotes when they are provided in plain text in the inventory file. -| *`automationcontroller_username`* | Username for your {ControllerName} instance. -| *`nginx_http_port`* | The nginx HTTP server listens for inbound connections. - -Default = 80 -| *`nginx_https_port`* | The nginx HTTPS server listens for secure connections. - -Default = 443 -| *`nginx_hsts_max_age`* | This variable specifies how long, in seconds, the system must be considered as a _HTTP Strict Transport Security_ (HSTS) host. That is, how long HTTPS is used exclusively for communication. - -Default = 63072000 seconds, or two years. -| *`nginx_tls_protocols`* | Defines support for `ssl_protocols` in Nginx. - -Values available `TLSv1`, `TLSv1.1, `TLSv1.2`, `TLSv1.3` - -The TLSv1.1 and TLSv1.2 parameters only work when OpenSSL 1.0.1 or higher is used. - -The TLSv1.3 parameter only works when OpenSSL 1.1.1 or higher is used. - -If `nginx_tls-protocols = ['TLSv1.3']` only TLSv1.3 is enabled. -To set more than one protocol use `nginx_tls_protocols = ['TLSv1.2', 'TLSv.1.3']` - -Default = `TLSv1.2`. -| *`nginx_user_headers`* | List of nginx headers for the {ControllerName} web server. - -Each element in the list is provided to the web server's nginx configuration as a separate line. - -Default = empty list -| *`node_state`* | _Optional_ - -The status of a node or group of nodes. -Valid options are `active`, `deprovision` to remove a node from a cluster, or `iso_migrate` to migrate a legacy isolated node to an execution node. - -Default = `active`. -| *`node_type`* | For `[automationcontroller]` group. - -Two valid `node_types` can be assigned for this group. - -A `node_type=control` means that the node only runs project and inventory updates, but not regular jobs. - -A `node_type=hybrid` can run everything. - -Default for this group = `hybrid` - -For `[execution_nodes]` group: - -Two valid `node_types` can be assigned for this group. - -A `node_type=hop` implies that the node forwards jobs to an execution node. - -A `node_type=execution` implies that the node can run jobs. - -Default for this group = `execution`. -| *`peers`* | _Optional_ +[id="controller-variables"] -The `peers` variable is used to indicate which nodes a specific host or group connects to. Wherever this variable is defined, an outbound connection to the specific host or group is established. - -This variable is used to add `tcp-peer` entries in the `receptor.conf` file used for establishing network connections with other nodes. - -The peers variable can be a comma-separated list of hosts and groups from the inventory. 
-This is resolved into a set of hosts that is used to construct the `receptor.conf` file. - -| *`pg_database`* | The name of the postgreSQL database. - -Default = `awx`. -| *`pg_host`* | The postgreSQL host, which can be an externally managed database. -| *`pg_password`* | The password for the postgreSQL database. - -Use of special characters for `pg_password` is limited. -The `!`, `#`, `0` and `@` characters are supported. -Use of other special characters can cause the setup to fail. - -NOTE - -You no longer have to provide a `pg_hashed_password` in your inventory file at the time of installation because PostgreSQL 13 can now store user passwords more securely. - -When you supply `pg_password` in the inventory file for the installer, PostgreSQL uses the SCRAM-SHA-256 hash to secure that password as part of the installation process. -| *`pg_port`* | The postgreSQL port to use. - -Default = 5432 -| *`pg_ssl_mode`* | Choose one of the two available modes: `prefer` and `verify-full`. - -Set to `verify-full` for client-side enforced SSL. - -Default = `prefer`. -| *`pg_username`* | Your postgreSQL database username. - -Default = `awx`. -| *`postgres_ssl_cert`* | Location of the postgreSQL SSL certificate. - -`/path/to/pgsql_ssl.cert` -| *`postgres_ssl_key`* | Location of the postgreSQL SSL key. - -`/path/to/pgsql_ssl.key` -| *`postgres_use_cert`* | Location of the postgreSQL user certificate. - -`/path/to/pgsql.crt` -| *`postgres_use_key`* | Location of the postgreSQL user key. - -`/path/to/pgsql.key` -| *`postgres_use_ssl`* | Use this variable if postgreSQL uses SSL. -| *`postgres_max_connections`* | Maximum database connections setting to apply if you are using installer-managed postgreSQL. - -See link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#ref-controller-database-settings[PostgreSQL database configuration] in the {ControllerName} administration guide for help selecting a value. - -Default for VM-based installations = 200 for a single node -and 1024 for a cluster. -| *`receptor_listener_port`* | Port to use for receptor connection. - -Default = 27199 -| *`supervisor_start_retry_count`* | When specified, it adds `startretries = ` to the supervisor config file (/etc/supervisord.d/tower.ini). - -See link:http://supervisord.org/configuration.html#program-x-section-values[program:x Section Values] for more information about `startretries`. - -No default value exists. - -| *`web_server_ssl_cert`* | _Optional_ - -`/path/to/webserver.cert` - -Same as `automationhub_ssl_cert` but for web server UI and API. -| *`web_server_ssl_key`* | _Optional_ - -`/path/to/webserver.key` += {ControllerNameStart} variables -Same as `automationhub_server_ssl_key` but for web server UI and API. -|==== +[cols="25%,25%,30%,10%,10%",options="header"] +|=== +| RPM variable name | Container variable name | Description | Required or optional | Default + +| `admin_email` +| `controller_admin_email` +| Email address used by Django for the admin user for {ControllerName}. +| Optional +| `admin@example.com` + +| `admin_password` +| `controller_admin_password` +| {ControllerNameStart} administrator password. +Use of special characters for this variable is limited. The password can include any printable ASCII character except `/`, `”`, or `@`. +| Required +| + +| `admin_username` +| `controller_admin_user` +| Username used to identify and create the administrator user in {ControllerName}. 
+| Optional +| `admin` + +| `automationcontroller_client_max_body_size` +| `controller_nginx_client_max_body_size` +| Maximum allowed size for data sent to {ControllerName} through NGINX. +| Optional +| `5m` + +| `automationcontroller_use_archive_compression` +| `controller_use_archive_compression` +| Controls whether archive compression is enabled or disabled for {ControllerName}. You can control this functionality globally by using `use_archive_compression`. +| Optional +| `true` + +| `automationcontroller_use_db_compression` +| `controller_use_db_compression` +| Controls whether database compression is enabled or disabled for {ControllerName}. You can control this functionality globally by using `use_db_compression`. +| Optional +| `true` + +| `awx_pg_cert_auth` +| `controller_pg_cert_auth` +| Controls whether client certificate authentication is enabled or disabled on the {ControllerName} PostgreSQL database. +Set this variable to `true` to enable client certificate authentication. +| Optional +| `false` + +| `controller_firewalld_zone` +| `controller_firewall_zone` +| The firewall zone where {ControllerName} related firewall rules are applied. This controls which networks can access {ControllerName} based on the zone's trust level. +| Optional +| `public` + +| `controller_nginx_tls_files_remote` +| +| Denote whether the web certificate sources are local to the installation program (`false`) or on the remote component server (`true`). +| Optional +| The value defined in `controller_tls_files_remote`. + +| `controller_pgclient_tls_files_remote` +| +| Denote whether the PostgreSQL client certificate sources are local to the installation program (`false`) or on the remote component server (`true`). +| Optional +| The value defined in `controller_tls_files_remote`. + +| `controller_tls_files_remote` +| `controller_tls_remote` +| Denote whether the {ControllerName} provided certificate files are local to the installation program (`false`) or on the remote component server (`true`). +| Optional +| `false` + +| `nginx_disable_hsts` +| `controller_nginx_disable_hsts` +| Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for {ControllerName}. +Set this variable to `true` to disable HSTS. +| Optional +| `false` + +| `nginx_disable_https` +| `controller_nginx_disable_https` +| Controls whether HTTPS is enabled or disabled for {ControllerName}. +Set this variable to `true` to disable HTTPS. +| Optional +| `false` + +| `nginx_hsts_max_age` +| `controller_nginx_hsts_max_age` +| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for {ControllerName}. +| Optional +| `63072000` + +| `nginx_http_port` +| `controller_nginx_http_port` +| Port number that {ControllerName} listens on for HTTP requests. +| Optional +| RPM = `80`. Container = `8080` + +| `nginx_https_port` +| `controller_nginx_https_port` +| Port number that {ControllerName} listens on for HTTPS requests. +| Optional +| RPM = `443`. Container = `8443` + +| `nginx_tls_protocols` +| `controller_nginx_https_protocols` +| Protocols that {ControllerName} supports when handling HTTPS traffic. +| Optional +| RPM = `[TLSv1.2]`. Container = `[TLSv1.2, TLSv1.3]` + +| `nginx_user_headers` +| `controller_nginx_user_headers` +| List of additional NGINX headers to add to {ControllerName}'s NGINX configuration. +| Optional +| `[]` + +| +| `controller_create_preload_data` +| Controls whether or not to create preloaded content during installation. 
+| Optional +| `true` + +| `node_state` +| +| The status of a node or group of nodes. +Valid options include `active`, `deprovision` to remove a node from a cluster, or `iso_migrate` to migrate a legacy isolated node to an execution node. +| Optional +| `active` + +| `node_type` +| See `receptor_type` for the container equivalent variable. a| + +For the `[automationcontroller]` group the two options are: + +* `node_type=control` - The node only runs project and inventory updates, but not regular jobs. +* `node_type=hybrid` - The node runs everything. + +For the `[execution_nodes]` group the two options are: + +* `node_type=hop` - The node forwards jobs to an execution node. +* `node_type=execution` - The node can run jobs. +| Optional +| For `[automationcontroller]` = `hybrid`, for `[execution_nodes]` = `execution` + +| `peers` +| See `receptor_peers` for the container equivalent variable. +| Used to indicate which nodes a specific host or group connects to. Wherever this variable is defined, an outbound connection to the specific host or group is established. +This variable can be a comma-separated list of hosts and groups from the inventory. This is resolved into a set of hosts that is used to construct the `receptor.conf` file. +| Optional +| + +| `pg_database` +| `controller_pg_database` +| Name of the PostgreSQL database used by {ControllerName}. +| Optional +| `awx` + +| `pg_host` +| `controller_pg_host` +| Hostname of the PostgreSQL database used by {ControllerName}. +| Required +| + +| `pg_password` +| `controller_pg_password` +| Password for the {ControllerName} PostgreSQL database user. +Use of special characters for this variable is limited. The `!`, `#`, `0` and `@` characters are supported. Use of other special characters can cause the setup to fail. +| Required if not using client certificate authentication. +| + +| `pg_port` +| `controller_pg_port` +| Port number for the PostgreSQL database used by {ControllerName}. +| Optional +| `5432` + +| `pg_sslmode` +| `controller_pg_sslmode` +| Controls the SSL/TLS mode to use when {ControllerName} connects to the PostgreSQL database. +Valid options include `verify-full`, `verify-ca`, `require`, `prefer`, `allow`, `disable`. +| Optional +| `prefer` + +| `pg_username` +| `controller_pg_username` +| Username for the {ControllerName} PostgreSQL database user. +| Optional +| `awx` + +| `pgclient_sslcert` +| `controller_pg_tls_cert` +| Path to the PostgreSQL SSL/TLS certificate file for {ControllerName}. +| Required if using client certificate authentication. +| + +| `pgclient_sslkey` +| `controller_pg_tls_key` +| Path to the PostgreSQL SSL/TLS key file for {ControllerName}. +| Required if using client certificate authentication. +| + +| `precreate_partition_hours` +| +| Number of hours worth of events table partitions to pre-create before starting a backup to avoid `pg_dump` locks. +| Optional +| 3 + +| `uwsgi_listen_queue_size` +| `controller_uwsgi_listen_queue_size` +| Number of requests `uwsgi` allows in the queue on {ControllerName} until `uwsgi_processes` can serve them. +| Optional +| `2048` + +| `web_server_ssl_cert` +| `controller_tls_cert` +| Path to the SSL/TLS certificate file for {ControllerName}. +| Optional +| + +| `web_server_ssl_key` +| `controller_tls_key` +| Path to the SSL/TLS key file for {ControllerName}. +| Optional +| + +| +| `controller_event_workers` +| Number of event workers that handle job-related events inside {ControllerName}. 
+| Optional
+| `4`
+
+|
+| `controller_extra_settings`
+a| Defines additional settings for use by {ControllerName} during installation.
+
+For example:
+----
+controller_extra_settings:
+  - setting: USE_X_FORWARDED_HOST
+    value: true
+----
+| Optional
+| `[]`
+
+|
+| `controller_license_file`
+| Path to the {ControllerName} license file.
+// If you are defining this variable as part of the postinstall process (`controller_postinstall=true`), then you need to also set `controller_postinstall_dir`.
+|
+|
+
+|
+| `controller_percent_memory_capacity`
+| Memory allocation for {ControllerName}.
+| Optional
+| `1.0` (allocates 100% of the total system memory to {ControllerName})
+
+|
+| `controller_pg_socket`
+| UNIX socket used by {ControllerName} to connect to the PostgreSQL database.
+| Optional
+|
+
+|
+| `controller_secret_key`
+| Secret key value used by {ControllerName} to sign and encrypt data.
+| Optional
+|
+
+// Michelle - commenting out postinstall vars.
+// | | `controller_postinstall` | Enable or disable the postinstall feature of the containerized installer. If set to `true`, then you also need to set `controller_license_file` and `controller_postinstall_dir`. Default = `false`
+// | | `controller_postinstall_dir` | The location of your {ControllerName} postinstall directory.
+// | | `controller_postinstall_async_delay` | Postinstall delay between retries. Default = `1`
+// | | `controller_postinstall_async_retries` | Postinstall number of tries to attempt. Default = `30`
+// | | `controller_postinstall_ignore_files` | {ControllerNameStart} ignore files.
+// | | `controller_postinstall_repo_ref` | {ControllerNameStart} repository branch or tag. Default = `main`
+// | | `controller_postinstall_repo_url` | {ControllerNameStart} repository URL.
+
+|===
diff --git a/downstream/modules/platform/ref-controller-verify-your-project.adoc b/downstream/modules/platform/ref-controller-verify-your-project.adoc
index 5c67b3e240..e176f6af14 100644
--- a/downstream/modules/platform/ref-controller-verify-your-project.adoc
+++ b/downstream/modules/platform/ref-controller-verify-your-project.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-verify-your-project"]
 
 = Verify your project
diff --git a/downstream/modules/platform/ref-controller-view-ansible-outputs.adoc b/downstream/modules/platform/ref-controller-view-ansible-outputs.adoc
index d743f9a21c..5b6e353654 100644
--- a/downstream/modules/platform/ref-controller-view-ansible-outputs.adoc
+++ b/downstream/modules/platform/ref-controller-view-ansible-outputs.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-view-ansible-outputs"]
 
 = View Ansible outputs for JSON commands when using {ControllerName}
diff --git a/downstream/modules/platform/ref-controller-view-completed-jobs.adoc b/downstream/modules/platform/ref-controller-view-completed-jobs.adoc
index 0311d247cd..186a440b9a 100644
--- a/downstream/modules/platform/ref-controller-view-completed-jobs.adoc
+++ b/downstream/modules/platform/ref-controller-view-completed-jobs.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-view-completed-jobs"]
 
 = View completed jobs
diff --git a/downstream/modules/platform/ref-controller-view-edit-inv-groups.adoc b/downstream/modules/platform/ref-controller-view-edit-inv-groups.adoc
index ff158bf117..b37f7513ad 100644
--- a/downstream/modules/platform/ref-controller-view-edit-inv-groups.adoc
+++ b/downstream/modules/platform/ref-controller-view-edit-inv-groups.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-view-edit-inv-groups"]
 
 = View or edit inventory groups
diff --git a/downstream/modules/platform/ref-controller-vmware-cloud.adoc b/downstream/modules/platform/ref-controller-vmware-cloud.adoc
index 9d7ce4c772..b78cfa7ce3 100644
--- a/downstream/modules/platform/ref-controller-vmware-cloud.adoc
+++ b/downstream/modules/platform/ref-controller-vmware-cloud.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="controller-vmware-cloud"]
 
 = VMware
diff --git a/downstream/modules/platform/ref-controller-vmware-esxi.adoc b/downstream/modules/platform/ref-controller-vmware-esxi.adoc
new file mode 100644
index 0000000000..39d772e567
--- /dev/null
+++ b/downstream/modules/platform/ref-controller-vmware-esxi.adoc
@@ -0,0 +1,65 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-controller-vmware-esxi"]
+
+= VMware ESXi
+
+[literal, options="nowrap" subs="+attributes"]
+----
+compose:
+  ansible_host: guest.ipAddress
+  ansible_ssh_host: guest.ipAddress
+  ansible_uuid: 99999999 | random | to_uuid
+  availablefield: availableField
+  configissue: configIssue
+  configstatus: configStatus
+  customvalue: customValue
+  effectiverole: effectiveRole
+  guestheartbeatstatus: guestHeartbeatStatus
+  layoutex: layoutEx
+  overallstatus: overallStatus
+  parentvapp: parentVApp
+  recenttask: recentTask
+  resourcepool: resourcePool
+  rootsnapshot: rootSnapshot
+  triggeredalarmstate: triggeredAlarmState
+filters:
+  - runtime.powerState == "poweredOn"
+keyed_groups:
+  - key: config.guestId
+    prefix: ""
+    separator: ""
+  - key: '"templates" if config.template else "guests"'
+    prefix: ""
+    separator: ""
+plugin: vmware.vmware.vms
+properties:
+  - availableField
+  - configIssue
+  - configStatus
+  - customValue
+  - datastore
+  - effectiveRole
+  - guestHeartbeatStatus
+  - layout
+  - layoutEx
+  - name
+  - network
+  - overallStatus
+  - parentVApp
+  - permission
+  - recentTask
+  - resourcePool
+  - rootSnapshot
+  - snapshot
+  - triggeredAlarmState
+  - value
+  - capability
+  - config
+  - guest
+  - runtime
+  - storage
+  - summary
+strict: false
+flatten_nested_properties: true
+----
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-controller-vmware-vcenter.adoc b/downstream/modules/platform/ref-controller-vmware-vcenter.adoc
index b56cbcfd89..14485f5324 100644
--- a/downstream/modules/platform/ref-controller-vmware-vcenter.adoc
+++ b/downstream/modules/platform/ref-controller-vmware-vcenter.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="controller-vmware-vcenter"]
 
 = VMware vCenter
diff --git a/downstream/modules/platform/ref-controller-web-service-tuning.adoc b/downstream/modules/platform/ref-controller-web-service-tuning.adoc
index 7744682bb2..98b25812da 100644
--- a/downstream/modules/platform/ref-controller-web-service-tuning.adoc
+++ b/downstream/modules/platform/ref-controller-web-service-tuning.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-web-service-tuning"]
 
 = Web server tuning
@@ -14,9 +16,10 @@ To optimize {ControllerName}'s web service on the client side, follow these guid
 * Direct user to use dynamic inventory sources instead of individually creating inventory hosts by using the API.
 * Use webhook notifications instead of polling for job status.
 * Use the bulk APIs for host creation and job launching to batch requests.
-* Use token authentication. For automation clients that must make many requests very quickly, using tokens is a best practice, because depending on the type of user, there may be additional overhead when using basic authentication.
+* Use token authentication. For automation clients that must make many requests very quickly, using tokens is a best practice, because depending on the type of user, there might be additional overhead when using Basic authentication.
 
 .Additional resources
-* For more information on workloads with high levels of API interaction, see link:https://www.ansible.com/blog/scaling-automation-controller-for-api-driven-workloads[Scaling Automation Controller for API Driven Workloads].
-* For more information on bulk API, see link:https://www.ansible.com/blog/bulk-api-in-automation-controller[Bulk API in Automation Controller].
-* For more information on how to generate and use tokens, see link:https://docs.ansible.com/automation-controller/latest/html/administration/oauth2_token_auth.html#ag-oauth2-token-auth[Token-Based Authentication].
+ +* link:https://www.ansible.com/blog/scaling-automation-controller-for-api-driven-workloads[Scaling Automation Controller for API Driven Workloads] +* link:https://www.ansible.com/blog/bulk-api-in-automation-controller[Bulk API in Automation Controller] +* link:https://docs.ansible.com/automation-controller/latest/html/administration/oauth2_token_auth.html#ag-oauth2-token-auth[Token-Based Authentication] diff --git a/downstream/modules/platform/ref-controller-workflow-job-node-table-csv.adoc b/downstream/modules/platform/ref-controller-workflow-job-node-table-csv.adoc index 3414e28c92..6743b54076 100644 --- a/downstream/modules/platform/ref-controller-workflow-job-node-table-csv.adoc +++ b/downstream/modules/platform/ref-controller-workflow-job-node-table-csv.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-workflow-job-node-table-csv"] = workflow_job_node_table.csv diff --git a/downstream/modules/platform/ref-controller-workflow-job-template-extra-variables.adoc b/downstream/modules/platform/ref-controller-workflow-job-template-extra-variables.adoc index 8d2747c971..36ffac2a3c 100644 --- a/downstream/modules/platform/ref-controller-workflow-job-template-extra-variables.adoc +++ b/downstream/modules/platform/ref-controller-workflow-job-template-extra-variables.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-workflow-job-template-extra-variables"] = Workflow job template extra variables diff --git a/downstream/modules/platform/ref-controller-workflow-job-template-node-table-csv.adoc b/downstream/modules/platform/ref-controller-workflow-job-template-node-table-csv.adoc index 35c7a11c22..df611b84af 100644 --- a/downstream/modules/platform/ref-controller-workflow-job-template-node-table-csv.adoc +++ b/downstream/modules/platform/ref-controller-workflow-job-template-node-table-csv.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-controller-workflow-job-template-node-table-csv"] = workflow_job_template_node_table.csv diff --git a/downstream/modules/platform/ref-controller-workflows-extra-variables.adoc b/downstream/modules/platform/ref-controller-workflows-extra-variables.adoc index 543d12c3b7..942d504893 100644 --- a/downstream/modules/platform/ref-controller-workflows-extra-variables.adoc +++ b/downstream/modules/platform/ref-controller-workflows-extra-variables.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="controller-workflow-extra-variables"] = Workflow extra variables @@ -63,9 +65,9 @@ In this example, there are two playbooks that can be combined in a workflow to e The `set_stats` module processes this workflow as follows: . The contents of an integration result is uploaded to the web. -. Through the `invoke_set_stats` playbook, `set_stats` is then invoked to artifact the URL of the uploaded `integration_results.txt` into the Ansible variable "integration_results_url". -. The second playbook in the workflow consumes the Ansible extra variable "integration_results_url". -It calls out to the web using the uri module to get the contents of the file uploaded by the previous job template job. +. Through the `invoke_set_stats` playbook, `set_stats` is then invoked to artifact the URL of the uploaded `integration_results.txt` into the Ansible variable `integration_results_url`. +. The second playbook in the workflow consumes the Ansible extra variable `integration_results_url`. +It calls out to the web by using the URI module to get the contents of the file uploaded by the previous job template job. 
 Then, it prints out the contents of the obtained file.
 
 [NOTE]
diff --git a/downstream/modules/platform/ref-controller-workload-characteristics.adoc b/downstream/modules/platform/ref-controller-workload-characteristics.adoc
index 65dc88a44e..1cc6e791b5 100644
--- a/downstream/modules/platform/ref-controller-workload-characteristics.adoc
+++ b/downstream/modules/platform/ref-controller-workload-characteristics.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-controller-workload-characteristics"]
 
 = Characteristics of your workload
diff --git a/downstream/modules/platform/ref-cyberark-ccp-lookup.adoc b/downstream/modules/platform/ref-cyberark-ccp-lookup.adoc
index 1ccc34c15a..f252a23b26 100644
--- a/downstream/modules/platform/ref-cyberark-ccp-lookup.adoc
+++ b/downstream/modules/platform/ref-cyberark-ccp-lookup.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-cyberark-ccp-lookup"]
 
 = CyberArk Central Credential Provider (CCP) Lookup
diff --git a/downstream/modules/platform/ref-cyberark-conjur-lookup.adoc b/downstream/modules/platform/ref-cyberark-conjur-lookup.adoc
index 0061baa414..946eecf27e 100644
--- a/downstream/modules/platform/ref-cyberark-conjur-lookup.adoc
+++ b/downstream/modules/platform/ref-cyberark-conjur-lookup.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-cyberark-conjur-lookup"]
 
 = CyberArk Conjur Secrets Manager Lookup
diff --git a/downstream/modules/platform/ref-database-inventory-variables.adoc b/downstream/modules/platform/ref-database-inventory-variables.adoc
new file mode 100644
index 0000000000..cc3b0e41a3
--- /dev/null
+++ b/downstream/modules/platform/ref-database-inventory-variables.adoc
@@ -0,0 +1,112 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="database-variables"]
+
+= Database variables
+
+[cols="25%,25%,30%,10%,10%",options="header"]
+|===
+| RPM variable name | Container variable name | Description | Required or optional | Default
+
+| `install_pg_port`
+| `postgresql_port`
+| Port number for the PostgreSQL database.
+| Optional
+| `5432`
+
+| `postgres_firewalld_zone`
+| `postgresql_firewall_zone`
+| The firewall zone where PostgreSQL related firewall rules are applied. This controls which networks can access PostgreSQL based on the zone's trust level.
+| Optional
+| RPM = no default set. Container = `public`.
+
+| `postgres_max_connections`
+| `postgresql_max_connections`
+| Maximum number of concurrent connections to the database if you are using an installer-managed database.
+For more information, see link:{URLControllerAdminGuide}/assembly-controller-improving-performance#ref-controller-database-settings[PostgreSQL database configuration and maintenance for {ControllerName}].
+| Optional
+| `1024`
+
+| `postgres_ssl_cert`
+| `postgresql_tls_cert`
+| Path to the PostgreSQL SSL/TLS certificate file.
+| Optional
+|
+
+| `postgres_ssl_key`
+| `postgresql_tls_key`
+| Path to the PostgreSQL SSL/TLS key file.
+| Optional
+|
+
+| `postgres_use_ssl`
+| `postgresql_disable_tls`
+| Controls whether SSL/TLS is enabled or disabled for the PostgreSQL database.
+| Optional
+| `false`
+
+|
+| `postgresql_admin_database`
+| Database name used for connections to the PostgreSQL database server.
+| Optional
+| `postgres`
+
+|
+| `postgresql_admin_password`
+| Password for the PostgreSQL admin user.
+When used, the installation program creates each component's database and credentials.
+| Required if using `postgresql_admin_username`.
+|
+
+|
+| `postgresql_admin_username`
+| Username for the PostgreSQL admin user.
+When used, the installation program creates each component's database and credentials.
+| Optional
+| `postgres`
+
+|
+| `postgresql_effective_cache_size`
+| Memory allocation available (in MB) for caching data.
+| Optional
+|
+
+|
+| `postgresql_keep_databases`
+| Controls whether or not to keep databases during uninstall.
+This variable applies to databases managed by the installation program only, and not external (customer-managed) databases.
+Set to `true` to keep databases during uninstall.
+| Optional
+| `false`
+
+|
+| `postgresql_log_destination`
+| Destination for server log output.
+| Optional
+| `/dev/stderr`
+
+|
+| `postgresql_password_encryption`
+| The algorithm for encrypting passwords.
+| Optional
+| `scram-sha-256`
+
+|
+| `postgresql_shared_buffers`
+| Memory allocation (in MB) for shared memory buffers.
+| Optional
+|
+
+|
+| `postgresql_tls_remote`
+| Denote whether the PostgreSQL provided certificate files are local to the installation program (`false`) or on the remote component server (`true`).
+| Optional
+| `false`
+
+|
+| `postgresql_use_archive_compression`
+| Controls whether archive compression is enabled or disabled for PostgreSQL. You can control this functionality globally by using `use_archive_compression`.
+| Optional
+| `true`
+
+|===
diff --git a/downstream/modules/platform/ref-delete-hosts-api-endpoint.adoc b/downstream/modules/platform/ref-delete-hosts-api-endpoint.adoc
new file mode 100644
index 0000000000..663ddb1289
--- /dev/null
+++ b/downstream/modules/platform/ref-delete-hosts-api-endpoint.adoc
@@ -0,0 +1,13 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-delete-hosts-api-endpoint_{context}"]
+
+= Deleting hosts automated using the API endpoint
+
+The API lists only non-deleted records and is sortable by the `last_automation` and `used_in_inventories` columns.
+
+You can use the host metric API endpoint, `api/v2/host_metric`, to soft delete hosts:
+
+`api/v2/host_metric DELETE`
+
+A monthly scheduled task automatically deletes jobs that use hosts from the Host Metric table that were last automated more than a year ago.
diff --git a/downstream/modules/platform/ref-deprovisioning.adoc b/downstream/modules/platform/ref-deprovisioning.adoc
index ad5bfac92b..e6db948a1d 100644
--- a/downstream/modules/platform/ref-deprovisioning.adoc
+++ b/downstream/modules/platform/ref-deprovisioning.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-deprovisioning"]
 
 = Deprovisioning nodes or groups
diff --git a/downstream/modules/platform/ref-eda-controller-variables.adoc b/downstream/modules/platform/ref-eda-controller-variables.adoc
index b293a89d33..a0f8b989c5 100644
--- a/downstream/modules/platform/ref-eda-controller-variables.adoc
+++ b/downstream/modules/platform/ref-eda-controller-variables.adoc
@@ -1,76 +1,347 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="event-driven-ansible-variables"]
-[id="event-driven-ansible-controller"]
 
 = {EDAcontroller} variables
 
-[cols="50%,50%",options="header"]
-|====
-| *Variable* | *Description*
-| *`automationedacontroller_admin_password`* | The admin password used by the {EdaController} instance.
+[cols="25%,25%,30%,10%,10%",options="header"]
+|===
+| RPM variable name | Container variable name | Description | Required or optional | Default
+
+| `automationedacontroller_activation_workers`
+| `eda_activation_workers`
+| Number of workers used for ansible-rulebook activation pods in {EDAName}.
+| Optional
+| RPM = (# of cores or threads) * 2 + 1. Container = `2`
+
+| `automationedacontroller_admin_email`
+| `eda_admin_email`
+| Email address used by Django for the admin user for {EDAName}.
+| Optional
+| `admin@example.com`
+
+| `automationedacontroller_admin_password`
+| `eda_admin_password`
+| {EDAName} administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except `/`, `"`, or `@`.
+| Required
+|
+
+| `automationedacontroller_admin_username`
+| `eda_admin_user`
+| Username used to identify and create the administrator user in {EDAName}.
+| Optional
+| `admin`
+
+| `automationedacontroller_backend_gunicorn_workers`
+|
+| Number of workers for handling the API served through Gunicorn on worker nodes.
+| Optional
+| `2`
+
+| `automationedacontroller_cache_tls_files_remote`
+|
+| Denote whether the cache cert sources are local to the installation program (`false`) or on the remote component server (`true`).
+| Optional
+| `false`
+
+| `automationedacontroller_client_regen_cert`
+|
+| Controls whether or not to regenerate {EDAName} client certificates for the platform cache. Set to `true` to regenerate {EDAName} client certificates.
+| Optional
+| `false`
+
+| `automationedacontroller_default_workers`
+| `eda_workers`
+| Number of workers used in {EDAName} for application work.
+| Optional
+| Number of cores or threads
+
+| `automationedacontroller_disable_hsts`
+| `eda_nginx_disable_hsts`
+| Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for {EDAName}. Set this variable to `true` to disable HSTS.
+| Optional
+| `false`
+
+| `automationedacontroller_disable_https`
+| `eda_nginx_disable_https`
+| Controls whether HTTPS is enabled or disabled for {EDAName}. Set this variable to `true` to disable HTTPS.
+| Optional
+| `false`
+
+| `automationedacontroller_event_stream_path`
+| `eda_event_stream_prefix_path`
+| API prefix path used for {EDAName} event-stream through {Gateway}.
+| Optional
+| `/eda-event-streams`
+
+| `automationedacontroller_firewalld_zone`
+| `eda_firewall_zone`
+| The firewall zone where {EDAName} related firewall rules are applied. This controls which networks can access {EDAName} based on the zone's trust level.
+| Optional
+| RPM = no default set. Container = `public`.
+
+| `automationedacontroller_gunicorn_event_stream_workers`
+|
+| Number of workers for handling event streaming for {EDAName}.
+| Optional
+| `2`
+
+| `automationedacontroller_gunicorn_workers`
+| `eda_gunicorn_workers`
+| Number of workers for handling the API served through Gunicorn.
+| Optional
+| (Number of cores or threads) * 2 + 1
+
+| `automationedacontroller_http_port`
+| `eda_nginx_http_port`
+| Port number that {EDAName} listens on for HTTP requests.
+| Optional
+| RPM = `80`. Container = `8082`.
+
+| `automationedacontroller_https_port`
+| `eda_nginx_https_port`
+| Port number that {EDAName} listens on for HTTPS requests.
+| Optional
+| RPM = `443`. Container = `8445`.
+
+| `automationedacontroller_max_running_activations`
+| `eda_max_running_activations`
+| Number of maximum activations running concurrently per node. This is an integer that must be greater than 0.
+| Optional
+| `12`
+
+| `automationedacontroller_nginx_tls_files_remote`
+|
+| Denote whether the web cert sources are local to the installation program (`false`) or on the remote component server (`true`).
+| Optional
+| `false`
+
+| `automationedacontroller_pg_cert_auth`
+| `eda_pg_cert_auth`
+| Controls whether client certificate authentication is enabled or disabled on the {EDAName} PostgreSQL database. Set this variable to `true` to enable client certificate authentication.
+| Optional
+| `false`
+
+| `automationedacontroller_pg_database`
+| `eda_pg_database`
+| Name of the PostgreSQL database used by {EDAName}.
+| Optional
+| RPM = `automationedacontroller`. Container = `eda`.
+
+| `automationedacontroller_pg_host`
+| `eda_pg_host`
+| Hostname of the PostgreSQL database used by {EDAName}.
+| Required
+|
+
+| `automationedacontroller_pg_password`
+| `eda_pg_password`
+| Password for the {EDAName} PostgreSQL database user. Use of special characters for this variable is limited. The `!`, `#`, `0` and `@` characters are supported. Use of other special characters can cause the setup to fail.
+| Required if not using client certificate authentication.
+|
+
+| `automationedacontroller_pg_port`
+| `eda_pg_port`
+| Port number for the PostgreSQL database used by {EDAName}.
+| Optional
+| `5432`
+
+| `automationedacontroller_pg_sslmode`
+| `eda_pg_sslmode`
+| Determines the level of encryption and authentication for client server connections. Valid options include `verify-full`, `verify-ca`, `require`, `prefer`, `allow`, `disable`.
+| Optional
+| `prefer`
+
+| `automationedacontroller_pg_username`
+| `eda_pg_username`
+| Username for the {EDAName} PostgreSQL database user.
+| Optional
+| RPM = `automationedacontroller`. Container = `eda`.
+
+| `automationedacontroller_pgclient_sslcert`
+| `eda_pg_tls_cert`
+| Path to the PostgreSQL SSL/TLS certificate file for {EDAName}.
+| Required if using client certificate authentication.
+|
+
+| `automationedacontroller_pgclient_sslkey`
+| `eda_pg_tls_key`
+| Path to the PostgreSQL SSL/TLS key file for {EDAName}.
+| Required if using client certificate authentication.
+|
+
+| `automationedacontroller_pgclient_tls_files_remote`
+|
+| Denote whether the PostgreSQL client cert sources are local to the installation program (`false`) or on the remote component server (`true`).
+| Optional
+| `false`
+
+| `automationedacontroller_public_event_stream_url`
+| `eda_event_stream_url`
+| URL for connecting to the event stream. The URL must start with the `http://` or `https://` prefix.
+| Optional
+|
+
+| `automationedacontroller_redis_host`
+| `eda_redis_host`
+| Hostname of the Redis host used by {EDAName}.
+| Optional
+| First node in the `[automationgateway]` inventory group
+
+| `automationedacontroller_redis_password`
+| `eda_redis_password`
+| Password for {EDAName} Redis.
+| Optional
+| Randomly generated string
+
+| `automationedacontroller_redis_port`
+| `eda_redis_port`
+| Port number for the Redis host for {EDAName}.
+| Optional
+| RPM = The value defined in {Gateway}'s implementation (`automationgateway_redis_port`). Container = `6379`
+
+| `automationedacontroller_redis_username`
+| `eda_redis_username`
+| Username for {EDAName} Redis.
+| Optional
+| `eda`
+
+| `automationedacontroller_secret_key`
+| `eda_secret_key`
+| Secret key value used by {EDAName} to sign and encrypt data.
+| Optional
+|
+
+| `automationedacontroller_ssl_cert`
+| `eda_tls_cert`
+| Path to the SSL/TLS certificate file for {EDAName}.
+| Optional
+|
-
-Passwords must be enclosed in quotes when they are provided in plain text in the inventory file.
-| *`automationedacontroller_admin_username`* | Username used by django to identify and create the admin superuser in {EDAcontroller}.
+| `automationedacontroller_ssl_key`
+| `eda_tls_key`
+| Path to the SSL/TLS key file for {EDAName}.
+| Optional
+|
-
-Default = `admin`
-| *`automationedacontroller_admin_email`* | Email address used by django for the admin user for {EDAcontroller}.
+
+| `automationedacontroller_tls_files_remote`
+| `eda_tls_remote`
+| Denote whether the {EDAName} provided certificate files are local to the installation program (`false`) or on the remote component server (`true`).
+| Optional
+| `false`
-
-Default = `admin@example.com`
-| *`automationedacontroller_allowed_hostnames`* | List of additional addresses to enable for user access to {EDAcontroller}.
+
+| `automationedacontroller_trusted_origins`
+|
+| List of host addresses in the form: `//<hostname>:<port>` for trusted Cross-Site Request Forgery (CSRF) origins.
+| Optional
+| `[]`
-
-Default = empty list
-| *`automationedacontroller_controller_verify_ssl`* | Boolean flag used to verify automation controller's web certificates when making calls from {EDAcontroller}. Verified is `true`; not verified is `false`.
+
+| `automationedacontroller_use_archive_compression`
+| `eda_use_archive_compression`
+| Controls whether archive compression is enabled or disabled for {EDAName}. You can control this functionality globally by using `use_archive_compression`.
+| Optional
+| `true`
-
-Default = `false`
-| *`automationedacontroller_disable_https`* | Boolean flag to disable HTTPS {EDAcontroller}.
+
+| `automationedacontroller_use_db_compression`
+| `eda_use_db_compression`
+| Controls whether database compression is enabled or disabled for {EDAName}. You can control this functionality globally by using `use_db_compression`.
+| Optional
+| `true`
-
-Default = `false`
-| *`automationedacontroller_disable_hsts`* | Boolean flag to disable HSTS {EDAcontroller}.
+
+| `automationedacontroller_user_headers`
+| `eda_nginx_user_headers`
+| List of additional NGINX headers to add to {EDAName}'s NGINX configuration.
+| Optional
+| `[]`
-
-Default = `false`
-| *`automationedacontroller_gunicorn_workers`* | Number of workers for the API served through gunicorn.
+
+| `automationedacontroller_websocket_ssl_verify`
+|
+| Controls whether or not to perform SSL verification for the Daphne WebSocket used by Podman to communicate from the pod to the host.
+Set to `false` to disable SSL verification.
+| Optional
+| `true`
-
-Default = (# of cores or threads) * 2 + 1
-| *`automationedacontroller_max_running_activations`* | The number of maximum activations running concurrently per node.
+
+| `eda_node_type`
+| `eda_type`
+| {EDAName} node type. Valid options include `api`, `event-stream`, `hybrid`, `worker`.
+| Optional
+| `hybrid`
-
-This is an integer that must be greater than 0.
+
+|
+| `eda_debug`
+| Controls whether debug mode is enabled or disabled for {EDAName}. Set to `true` to enable debug mode for {EDAName}.
+| Optional
+| `false`
-
-Default = 12
-| *`automationedacontroller_nginx_tls_files_remote`* | Boolean flag to specify whether cert sources are on the remote host (true) or local (false).
+
+|
+| `eda_extra_settings`
+a| Defines additional settings for use by {EDAName} during installation.
-
-Default = `false`
-| *`automationedacontroller_pg_database`* | The Postgres database used by {EDAController}.
+
+For example:
+----
+eda_extra_settings:
+  - setting: RULEBOOK_READINESS_TIMEOUT_SECONDS
+    value: 120
+----
+| Optional
+| `[]`
-
-Default = `automtionedacontroller`.
-| *`automationnedacontroller_pg_host`* | The hostname of the Postgres database used by {EDAController}, which can be an externally managed database.
-| *`automationedacontroller_pg_password`* | The password for the Postgres database used by {EDAController}.
+
+|
+| `eda_nginx_client_max_body_size`
+| Maximum allowed size for data sent to {EDAName} through NGINX.
+| Optional
+| `1m`
-
-Use of special characters for `automationedacontroller_pg_password` is limited.
-The `!`, `#`, `0` and `@` characters are supported.
-Use of other special characters can cause the setup to fail.
-| *`automationedacontroller_pg_port`* | The port number of the Postgres database used by {EDAController}.
+
+|
+| `eda_nginx_hsts_max_age`
+| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for {EDAName}.
+| Optional
+| `63072000`
-
-Default = `5432`.
-| *`automationedacontroller_pg_username`* | The username for your {EDAController} Postgres database.
+
+| `nginx_tls_protocols`
+| `eda_nginx_https_protocols`
+| Protocols that {EDAName} supports when handling HTTPS traffic.
+| Optional
+| RPM = `[TLSv1.2]`. Container = `[TLSv1.2, TLSv1.3]`.
-
-Default = `automationedacontroller`.
-| *`automationedacontroller_rq_workers`* | Number of Redis Queue (RQ) workers used by {EDAcontroller}. RQ workers are Python processes that run in the background.
+
+|
+| `eda_pg_socket`
+| UNIX socket used by {EDAName} to connect to the PostgreSQL database.
+| Optional
+|
-
-Default = (# of cores or threads) * 2 + 1
-| *`automationedacontroller_ssl_cert`* | _Optional_
+
+| `redis_disable_tls`
+| `eda_redis_disable_tls`
+| Controls whether TLS is enabled or disabled for {EDAName} Redis. Set this variable to `true` to disable TLS.
+| Optional
+| `false`
-
-`/root/ssl_certs/eda.__.com.crt`
+
+|
+| `eda_redis_tls_cert`
+| Path to the {EDAName} Redis certificate file.
+| Optional
+|
-
-Same as `automationhub_ssl_cert` but for {EDAcontroller} UI and API.
-| *`automationedacontroller_ssl_key`* | _Optional_
+
+|
+| `eda_redis_tls_key`
+| Path to the {EDAName} Redis key file.
+| Optional
+|
-
-`/root/ssl_certs/eda.__.com.key`
+
+|
+| `eda_safe_plugins`
+| List of plugins that are allowed to run within {EDAName}.
-
-Same as `automationhub_server_ssl_key` but for {EDAcontroller} UI and API.
-| *`automationedacontroller_user_headers`* | List of additional nginx headers to add to {EDAcontroller}'s nginx configuration.
+
+// This content is used in RPM installation
+ifdef::aap-install[]
+For more information, see link:{URLInstallationGuide}/assembly-platform-install-scenario#proc-add-eda-safe-plugin-var[Adding a safe plugin variable to {EDAcontroller}].
+endif::aap-install[]
+// This content is used in Containerized installation
+ifdef::container-install[]
+For more information, see link:{URLContainerizedInstall}/aap-containerized-installation#proc-add-eda-safe-plugin-var[Adding a safe plugin variable to {EDAcontroller}].
+endif::container-install[]
-
-Default = empty list
-//Add this variable back for the next release, as long as approved by development.
-//| *`automationedacontroller_websocket_ssl_verify`* |
-//SSL verification for the Daphne websocket used by podman to communicate from the pod to the host. Default is false to disable SSL connection as verified
+| Optional
+| `[]`
-
-//Default = false
-|====
+
+|===
diff --git a/downstream/modules/platform/ref-eda-system-requirements.adoc b/downstream/modules/platform/ref-eda-system-requirements.adoc
index e8e0259c27..f2203f8dff 100644
--- a/downstream/modules/platform/ref-eda-system-requirements.adoc
+++ b/downstream/modules/platform/ref-eda-system-requirements.adoc
@@ -1,22 +1,38 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="event-driven-ansible-system-requirements"]
 
 = {EDAcontroller} system requirements
 
-The {EDAcontroller} is a single-node system capable of handling a variable number of long-running processes (such as rulebook activations) on-demand, depending on the number of CPU cores. Use the following minimum requirements to run, by default, a maximum of 12 simultaneous activations:
+The {EDAcontroller} is a single-node system capable of handling a variable number of long-running processes (such as rulebook activations) on-demand, depending on the number of CPU cores.
+
+[NOTE]
+====
+If you want to use {EDAName} 2.5 with a 2.4 {ControllerName} version, see link:{BaseURL}/red_hat_ansible_automation_platform/2.4/html-single/using_event-driven_ansible_2.5_with_ansible_automation_platform_2.4/index[Using {EDAName} 2.5 with {PlatformNameShort} 2.4].
+====
+
+Use the following minimum requirements to run, by default, a maximum of 12 simultaneous activations:
 
-[cols="a,a",options="header"]
+[cols=2*,options="header"]
 |===
-h| Requirement | Required
+| Requirement | Required
 | *RAM* | 16 GB
 | *CPUs* | 4
-| *Local disk* | 40 GB minimum
+| *Local disk* a|
+* Hard drive must be 40 GB minimum with at least 20 GB available under /var.
+* Storage volume must be rated for a minimum baseline of 3000 IOPS.
+* If the cluster has many large projects or decision environment images, consider doubling the GB in /var to avoid disk space errors.
 |===
 
 [IMPORTANT]
 ====
-* If you are running {RHEL} 8 and want to set your memory limits, you must have cgroup v2 enabled before you install {EDAName}. For specific instructions, see the Knowledge-Centered Support (KCS) article, link:https://access.redhat.com/solutions/7054905[Ansible Automation Platform Event-Driven Ansible controller for {RHEL} 8 requires cgroupv2].
+* If you are running {RHEL} 8 and want to set your memory limits, you must have cgroup v2 enabled before you install {EDAName}.
+For specific instructions, see the Knowledge-Centered Support (KCS) article, link:https://access.redhat.com/solutions/7054905[Ansible Automation Platform Event-Driven Ansible controller for {RHEL} 8 requires cgroupv2].
+
+* When you activate an {EDAName} rulebook under standard conditions, it uses about 250 MB of memory.
+However, the actual memory consumption can vary significantly based on the complexity of your rules and the volume and size of the events processed.
+In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment.
+This ensures that your maximum number of activations is based on the capacity of your resources.
 
-* When you activate an {EDAName} rulebook under standard conditions, it uses about 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of your rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that your maximum number of activations is based on the capacity of your resources. See link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/red_hat_ansible_automation_platform_installation_guide/index#ref-single-controller-hub-eda-with-managed-db[Single {ControllerName}, single {HubName}, and single {EDAcontroller} node with external (installer managed) database] for an example on setting {EDAController} maximum
-running activations.
+For an example of setting {EDAController} maximum running activations, see xref:ref-gateway-controller-hub-eda-ext-db[Single {ControllerName}, single {HubName}, and single {EDAcontroller} node with external (installer managed) database].
==== \ No newline at end of file diff --git a/downstream/modules/platform/ref-edge-manager-K8s-cluster.adoc b/downstream/modules/platform/ref-edge-manager-K8s-cluster.adoc new file mode 100644 index 0000000000..149182295a --- /dev/null +++ b/downstream/modules/platform/ref-edge-manager-K8s-cluster.adoc @@ -0,0 +1,25 @@ +:_mod-docs-content-type: REFERENCE + +[id="edge-manager-k8s-cluster"] + += Secrets from a Kubernetes cluster + +The {RedHatEdge} can query only the Kubernetes cluster that the {RedHatEdge} is running on for a Kubernetes secret. +You can write the content of that secret to a path on the device file system. + +The Kubernetes Secret Provider takes the following parameters: + +|=== +|Parameter|Description +|`Name`|The name of the secret. + +|`NameSpace`|The namespace of the secret. + +|`MountPath`|The directory in the file system of the device to write the secret contents to. +|=== + +[NOTE] +==== +The {RedHatEdge} needs permission to access secrets in the defined namespace. +For example, creating a `ClusterRole` and `ClusterRoleBinding` allows the `flightctl-worker` service account to get and list secrets in that namespace. +==== diff --git a/downstream/modules/platform/ref-edge-manager-additional-fields.adoc b/downstream/modules/platform/ref-edge-manager-additional-fields.adoc new file mode 100644 index 0000000000..8c17d5570c --- /dev/null +++ b/downstream/modules/platform/ref-edge-manager-additional-fields.adoc @@ -0,0 +1,38 @@ +:_mod-docs-content-type: REFERENCE + +[id="edge-manager-additional-fields"] + += List of additional supported fields + +In addition to the metadata fields, each resource has its own unique set of fields that you can select, offering further flexibility in filtering and selection based on resource-specific attributes. + +The following table lists the fields supported for filtering for each resource kind: + +[width="100%",cols="39%,61%",options="header",] +|=== +|Kind |Fields +|*Certificate Signing Request* |`status.certificate` + +|*Device* +|`status.summary.status` + +`status.applicationsSummary.status` + +`status.updated.status` + +`status.lastSeen` + +`status.lifecycle.status` + +|*Enrollment Request* |`status.approval.approved` + +`status.certificate` + +|*Fleet* |`spec.template.spec.os.image` + +|*Repository* |`spec.type` + +`spec.url` + +|*Resource Sync* |`spec.repository` +|=== diff --git a/downstream/modules/platform/ref-edge-manager-additional-resources-images.adoc b/downstream/modules/platform/ref-edge-manager-additional-resources-images.adoc new file mode 100644 index 0000000000..c4a7cb51cb --- /dev/null +++ b/downstream/modules/platform/ref-edge-manager-additional-resources-images.adoc @@ -0,0 +1,8 @@ +:_mod-docs-content-type: REFERENCE + +[id="edge-manager-additional-resources-images"] + += Additional resources + +//Relevant for ACM only * For a full list of available repositories for the {RedHatEdge}, see link:https://access.redhat.com/downloads/content/609/[Download {acm}]. +* For more information about building the operating system image on different target platforms, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/using_image_mode_for_rhel_to_build_deploy_and_manage_operating_systems/index#configuring-container-pull-secrets_managing-users-groups-ssh-key-and-secrets-in-image-mode-for-rhel[Configuring container pull secrets]. 
diff --git a/downstream/modules/platform/ref-edge-manager-certificates.adoc b/downstream/modules/platform/ref-edge-manager-certificates.adoc new file mode 100644 index 0000000000..1022b6fb1c --- /dev/null +++ b/downstream/modules/platform/ref-edge-manager-certificates.adoc @@ -0,0 +1,29 @@ +:_mod-docs-content-type: REFERENCE + +[id="edge-manager-certificates"] + += Self-signed certificates + +The {RedHatEdge} services automatically generate and store self-signed certificates in the `/etc/flightctl/pki` directory. +These include: + +* `/etc/flightctl/pki/ca.crt` +* `/etc/flightctl/pki/ca.key` +* `/etc/flightctl/pki/client-enrollment.crt` +* `/etc/flightctl/pki/client-enrollment.key` +* `/etc/flightctl/pki/server.crt` +* `/etc/flightctl/pki/server.key` + + +You can use your own custom certificates by placing them in the following locations: + +* Custom Server Certificate/Key Pair: +** `/etc/flightctl/pki/server.crt` +** `/etc/flightctl/pki/server.key` +* Custom CA Certificate for {PlatformNameShort} authentication: +** `/etc/flightctl/pki/auth/ca.crt` + +[NOTE] +==== +Ensure that you adjust the `insecureSkipTlsVerify` setting in the `service-config.yaml` if you use a custom CA certificate for your {PlatformNameShort} instance. +==== diff --git a/downstream/modules/platform/ref-edge-manager-config-git-repo.adoc b/downstream/modules/platform/ref-edge-manager-config-git-repo.adoc new file mode 100644 index 0000000000..554c9d2641 --- /dev/null +++ b/downstream/modules/platform/ref-edge-manager-config-git-repo.adoc @@ -0,0 +1,27 @@ +:_mod-docs-content-type: REFERENCE + +[id="edge-manager-config-git-repo"] + += Configuration from a Git repository + +You can store device configuration in a Git repository such as GitHub or GitLab. +You can then add a Git Config Provider so that the {RedHatEdge} synchronizes the configuration from the repository to the file system of the device. + +The Git Config Provider takes the following parameters: + +|=== +|Parameter|Description +|`Repository`|The name of a `Repository` resource defined in the {RedHatEdge}. + +|`TargetRevision`|The branch, tag, or commit of the repository to checkout. + +|`Path`|The absolute path to the directory in the repository from which files and subdirectories are synchronized to the file system of the device. +The `Path` directory corresponds to the root directory (`/`) on the device, unless you specify the `MountPath` parameter. + +|`MountPath`|Optional. The absolute path to the directory in the file system of the device to write the content of the repository to. +By default, the value is the file system root (`/`). +|=== + +The `Repository` resource defines the Git repository, the protocol, and the access credentials that the {RedHatEdge} must use. +You only need to set up the repository once. +After setting up, you can use the repository to configure individual devices or device fleets. diff --git a/downstream/modules/platform/ref-edge-manager-config-http.adoc b/downstream/modules/platform/ref-edge-manager-config-http.adoc new file mode 100644 index 0000000000..86ca463a9a --- /dev/null +++ b/downstream/modules/platform/ref-edge-manager-config-http.adoc @@ -0,0 +1,22 @@ +:_mod-docs-content-type: REFERENCE + +[id="edge-manager-config-http"] + += Configuration from an HTTP server + +The {RedHatEdge} can query an HTTP server for configuration. +The HTTP server can serve static or dynamically generated configuration for a device. 
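+
+For illustration only, a device specification that uses this provider might look like the following sketch. The field names mirror the parameters described next; treat the exact YAML schema, the `config-server` repository name, and the paths as assumptions rather than a definitive reference:
+
+[literal, options="nowrap" subs="+attributes"]
+----
+apiVersion: v1alpha1
+kind: Device
+metadata:
+  name: my-device
+spec:
+  config:
+    # Hypothetical HTTP Config Provider entry. "config-server" names a
+    # Repository resource that defines the base URL and access credentials.
+    - name: app-config
+      httpRef:
+        repository: config-server
+        suffix: /configs/app.yaml?site=berlin
+        filePath: /etc/myapp/config.yaml
+----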
+
+The HTTP Config Provider takes the following parameters:
+
+|===
+|Parameter|Description
+|`Repository`|The name of a `Repository` resource defined in the {RedHatEdge}.
+
+|`Suffix`|The suffix to append to the base URL defined in the `Repository` resource. The suffix can include path and query parameters, for example `/path/to/endpoint?query=param`.
+
+|`FilePath`|The absolute path to the file in the file system of the device to write the response of the HTTP server to.
+|===
+
+The `Repository` resource specifies the HTTP server for the {RedHatEdge} to connect to, and the protocol and access credentials to use.
+You only need to set up the repository once, and then you can use it to configure many devices or device fleets.
diff --git a/downstream/modules/platform/ref-edge-manager-config-inline.adoc b/downstream/modules/platform/ref-edge-manager-config-inline.adoc
new file mode 100644
index 0000000000..08b678ec70
--- /dev/null
+++ b/downstream/modules/platform/ref-edge-manager-config-inline.adoc
@@ -0,0 +1,31 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="edge-manager-config-inline"]
+
+= Configuration inline in the device specification
+
+You can specify configuration inline in a device specification.
+When you use the inline device specification, the {RedHatEdge} does not need to connect to external systems to fetch the configuration.
+
+The Inline Config Provider takes a list of file specifications, where each file specification takes the following parameters:
+
+|===
+|Parameter|Description
+|`Path`|The absolute path to the file in the file system of the device to write the content to.
+If a file already exists in the specified path, the file is overwritten.
+
+|`Content`|The UTF-8 or base64-encoded content of the file.
+
+|`ContentEncoding`|Defines how the contents are encoded. Must be either `plain` or `base64`. Default value is set to `plain`.
+
+|`Mode`|Optional. The permission mode of the file. You can specify the octal with a leading zero, for example `0644`, or as a decimal without a leading zero, for example `420`. The `setuid`, `setgid`, and `sticky` bits are supported. If not specified, the permission mode for files defaults to `0644`.
+
+|`User`|Optional. The owner of the file. Specified either as a name or numeric ID. Default value is set to `root`.
+
+|`Group`|Optional. The group of the file. Specified either as a name or numeric ID.
+|===
+
+.Additional resources
+
+* For more information about device lifecycle hooks and the default rules used by the {RedHatEdge} agent, see xref:edge-manager-device-lifecycle[Use device lifecycle hooks].
+//* For more information about granting {RedHatEdge} permissions, see xref:edge-manager-rbac-auth[{RedHatEdge} authorization].
diff --git a/downstream/modules/platform/ref-edge-manager-device-lifecycle.adoc b/downstream/modules/platform/ref-edge-manager-device-lifecycle.adoc
new file mode 100644
index 0000000000..9a76019cee
--- /dev/null
+++ b/downstream/modules/platform/ref-edge-manager-device-lifecycle.adoc
@@ -0,0 +1,30 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="edge-manager-device-lifecycle"]
+
+= Use device lifecycle hooks
+
+The {RedHatEdge} agent can run user-defined commands at specific points in the device lifecycle by using device lifecycle hooks.
+For example, you can add a shell script to your operating system images that backs up your application data.
+You can then specify that the script must run and complete successfully before the agent can start updating the operating system.
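+
+For example, the pre-update backup script baked into the operating system image could be as simple as the following sketch (the script location and the application directories are hypothetical):
+
+[source,bash]
+----
+#!/usr/bin/env bash
+# Hypothetical pre-update backup script shipped in the operating system
+# image, for example at /usr/local/bin/backup-app-data.sh.
+set -euo pipefail
+
+DATA_DIR=/var/lib/myapp
+BACKUP_DIR=/var/backups/myapp
+
+mkdir -p "${BACKUP_DIR}"
+# Keep a timestamped archive so that a failed update can be rolled back
+# to a known-good copy of the application data.
+tar -czf "${BACKUP_DIR}/myapp-$(date +%Y%m%d%H%M%S).tar.gz" -C "${DATA_DIR}" .
+----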
+
+As another example, certain applications or system services do not automatically reload their configuration file when the file changes on the disk.
+You can manually reload the configuration file by specifying a command as another hook, which is called after the agent completes the update process.
+
+The following device lifecycle hooks are supported:
+
+[width="100%",cols="56%,44%",options="header",]
+|===
+|Lifecycle Hook |Description
+|`beforeUpdating` |This hook is called after the agent completed preparing for the update and before actually making changes to the system.
+If an action in this hook returns with failure, the agent cancels the update.
+
+|`afterUpdating` |This hook is called after the agent has written the update to disk.
+If an action in this hook returns with failure, the agent cancels and rolls back the update.
+
+|`beforeRebooting` |This hook is called before the system reboots. The agent blocks the reboot until the action has completed or timed out.
+If any action in this hook returns with failure, the agent cancels and rolls back the update.
+
+|`afterRebooting` |This hook is called when the agent first starts after a reboot.
+If any action in this hook returns with failure, the agent reports this but continues starting up.
+|===
diff --git a/downstream/modules/platform/ref-edge-manager-device-selection.adoc b/downstream/modules/platform/ref-edge-manager-device-selection.adoc
new file mode 100644
index 0000000000..00a8ae2f57
--- /dev/null
+++ b/downstream/modules/platform/ref-edge-manager-device-selection.adoc
@@ -0,0 +1,35 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="edge-manager-device-selection"]
+
+= Device selection into a fleet
+
+By default, devices are not assigned to a fleet.
+Instead, each fleet uses a selector that defines which labels a device must have to be added to the fleet.
+
+To understand how to use labels in a fleet, see the following example.
+
+The following list shows point-of-sale terminal devices and their labels:
+
+|===
+|Device |Labels
+|A |`type: pos-terminal`, `region: east`, `stage: production`
+|B |`type: pos-terminal`, `region: east`, `stage: development`
+|C |`type: pos-terminal`, `region: west`, `stage: production`
+|D |`type: pos-terminal`, `region: west`, `stage: development`
+|===
+
+If all point-of-sale terminals use the same configuration and are managed by the same operations team, you can define a single fleet called `pos-terminals` with the `type=pos-terminal` label selector.
+Then, the fleet contains devices A, B, C, and D.
+
+However, you might want to create separate fleets for the different organizations for development or production.
+You can define a fleet for development with the `type=pos-terminal, stage=development` label selector, which selects devices B and D.
+Then, you can define another fleet for production with the `type=pos-terminal, stage=production` label selector.
+By using the correct label selectors, you can manage both fleets independently.
+
+[IMPORTANT]
+====
+You must define selectors in a way that two fleets do not select the same device.
+For example, if one fleet selects `region=east`, and another fleet selects `stage=production`, both fleets try to select device A.
+If two fleets try to select the same device, the {RedHatEdge} keeps the device in the currently assigned fleet, if any, and sets the `OverlappingSelectors` condition on the affected fleets to `true`.
+====
diff --git a/downstream/modules/platform/ref-edge-manager-device-templates.adoc b/downstream/modules/platform/ref-edge-manager-device-templates.adoc
new file mode 100644
index 0000000000..96afcbc84c
--- /dev/null
+++ b/downstream/modules/platform/ref-edge-manager-device-templates.adoc
@@ -0,0 +1,55 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="edge-manager-device-templates"]
+
+= Device templates
+
+A device template of a fleet contains a device specification that is applied to all devices in the fleet when the template is updated.
+
+For example, you can specify in the device template of a fleet that all devices in the fleet must run the `quay.io/flightctl/rhel:9.5` operating system image.
+
+The {RedHatEdge} service then rolls out the target specification to all devices in the fleet, and the {RedHatEdge} agents update each device.
+
+You can change other specification items in the device template and the {RedHatEdge} applies the changes in the same way.
+
+However, sometimes not all of the devices in the fleet need to have the exact same specification.
+The {RedHatEdge} allows templates to contain placeholders that are populated based on the device name or label values.
+
+The syntax of the placeholders matches that of https://pkg.go.dev/text/template[Go templates].
+However, you can only use simple text and actions.
+
+The use of conditionals or loops in the placeholders is not supported.
+
+You can reference anything from the metadata of a device, such as `{{ .metadata.labels.key }}` or `{{ .metadata.name }}`.
+
+You can also use the following functions in your placeholders:
+
+* The `upper` function changes the value to uppercase. For example, the function is `{{ upper .metadata.name }}`.
+* The `lower` function changes the value to lowercase. For example, the function is `{{ lower .metadata.labels.key }}`.
+* The `replace` function replaces all occurrences of a substring with another string.
+For example, the function is `{{ replace "old" "new" .metadata.labels.key }}`.
+* The `getOrDefault` function returns a default value if accessing a missing label.
+For example, the function is `{{ getOrDefault .metadata.labels "key" "default" }}`.
+You can combine the functions in pipelines, for example, a combined function is `{{ getOrDefault .metadata.labels "key" "default" | upper | replace " " "-" }}`.
+
+[NOTE]
+====
+Ensure you are using proper Go template syntax. For example, `{{ .metadata.labels.target-revision }}` is not valid because of the hyphen.
+Instead, you must refer to the field as `{{ index .metadata.labels "target-revision" }}`.
+====
+
+You can use the placeholders in device templates in the following ways:
+
+* You can label devices by deployment stage, for example, stage labels are `stage: testing` and `stage: production`.
+Then, you can use the label with the `stage` key as a placeholder when referencing the operating system image to use, for example, use `quay.io/myorg/myimage:latest-{{ .metadata.labels.stage }}` or when referencing a configuration folder in a Git repository.
+* You can label devices by deployment site, for example, deployment sites are `site: factory-berlin` and `site: factory-madrid`.
+Then, you can use the label with the `site` key as a parameter when referencing the secret with network access credentials in Kubernetes.
The following fields in device templates support placeholders:

|===
|Field |Parts that support placeholders
|Operating System Image |repository name, image name, image tag
|Git Config Provider |target revision, path
|HTTP Config Provider |URL suffix, path
|Inline Config Provider |content, path
|===

diff --git a/downstream/modules/platform/ref-edge-manager-disruption-parameters.adoc b/downstream/modules/platform/ref-edge-manager-disruption-parameters.adoc
new file mode 100644
index 0000000000..55a49c2afa
--- /dev/null
+++ b/downstream/modules/platform/ref-edge-manager-disruption-parameters.adoc
@@ -0,0 +1,35 @@
:_mod-docs-content-type: REFERENCE

[id="edge-manager-disruption-parameters"]

= Disruption budget parameters

* `groupBy`: Defines how devices are grouped when applying the disruption budget.
The grouping is done by label keys.
* `minAvailable`: Specifies the minimum number of devices that must remain available during a rollout.
* `maxUnavailable`: Limits the number of devices that can be unavailable at the same time.

.Example

The following shows an example YAML configuration for a fleet specification:

[literal, options="nowrap" subs="+attributes"]
----
apiVersion: v1alpha1
kind: Fleet
metadata:
  name: default
spec:
  selector:
    matchLabels:
      fleet: default
  rolloutPolicy:
    disruptionBudget:
      groupBy: ['site', 'function']
      minAvailable: 1
      maxUnavailable: 10
----

In this example, the grouping is performed on two label keys: *site* and *function*.
A disruption budget group consists of all devices in a fleet that have the same label values for these label keys.
For every such group, the conditions defined in this specification are continuously enforced.

diff --git a/downstream/modules/platform/ref-edge-manager-field-selectors.adoc b/downstream/modules/platform/ref-edge-manager-field-selectors.adoc
new file mode 100644
index 0000000000..793031a6a4
--- /dev/null
+++ b/downstream/modules/platform/ref-edge-manager-field-selectors.adoc
@@ -0,0 +1,26 @@
:_mod-docs-content-type: REFERENCE

[id="edge-manager-field-selectors"]

= Field selectors

Field selectors filter a list of {RedHatEdge} resources based on specific resource field values.
They follow the same syntax, principles, and operators as Kubernetes field and label selectors, with additional operators available for more advanced search use cases.

== Supported fields

{RedHatEdge} resources provide a set of metadata fields that you can select.

Each resource supports the following metadata fields:

* `metadata.name`
* `metadata.owner`
* `metadata.creationTimestamp`

[NOTE]
====
To query labels, use label selectors for advanced and flexible label filtering.
====

For more information, see xref:edge-manager-labels[Labels and label selectors].

diff --git a/downstream/modules/platform/ref-edge-manager-fields-discovery.adoc b/downstream/modules/platform/ref-edge-manager-fields-discovery.adoc
new file mode 100644
index 0000000000..048c4369aa
--- /dev/null
+++ b/downstream/modules/platform/ref-edge-manager-fields-discovery.adoc
@@ -0,0 +1,63 @@
:_mod-docs-content-type: REFERENCE

[id="edge-manager-fields-discovery"]

= Fields discovery

Some {RedHatEdge} resources might expose additional supported fields.
You can discover the supported fields by using `flightctl` with the `--field-selector` option.
If you try to use an unsupported field, the error message lists the available supported fields.
See the following examples:

[source,bash]
----
flightctl get device --field-selector='text'
----

The command fails with the following error:

[source,bash]
----
Error: listing devices: 400, message: unknown or unsupported selector: unable to resolve selector name "text". Supported selectors are: [metadata.alias metadata.creationTimestamp metadata.name metadata.nameoralias metadata.owner status.applicationsSummary.status status.lastSeen status.summary.status status.updated.status]
----

The field `text` is not a valid field for filtering.
The error message provides a list of supported fields that you can use with `--field-selector` for the `Device` resource.

You can then use one of the supported fields:

[source,bash]
----
flightctl get devices --field-selector 'metadata.alias contains cluster'
----

In this example, the `metadata.alias` field is checked with the containment operator `contains` to see whether its value contains the substring `cluster`.

.Examples

.Example 1: Excluding a specific device by name

The following command filters out a specific device by its name:

[source,bash]
----
flightctl get devices --field-selector 'metadata.name!=c3tkb18x9fw32fzx5l556n0p0dracwbl4uiojxu19g2'
----

.Example 2: Filtering by owner, labels, and creation timestamp

The following command retrieves devices owned by `Fleet/pos-fleet`, located in the `us` region, and created in 2024:

[source,bash]
----
flightctl get devices --field-selector 'metadata.owner=Fleet/pos-fleet, metadata.creationTimestamp >= 2024-01-01T00:00:00Z, metadata.creationTimestamp < 2025-01-01T00:00:00Z' -l 'region=us'
----

.Example 3: Filtering by owner, labels, and device status

The following command retrieves devices owned by `Fleet/pos-fleet`, located in the `us` region, and with a `status.updated.status` of either `Unknown` or `OutOfDate`:

[source,bash]
----
flightctl get devices --field-selector 'metadata.owner=Fleet/pos-fleet, status.updated.status in (Unknown, OutOfDate)' -l 'region=us'
----

diff --git a/downstream/modules/platform/ref-edge-manager-images-special-considerations.adoc b/downstream/modules/platform/ref-edge-manager-images-special-considerations.adoc
new file mode 100644
index 0000000000..16b07d76a8
--- /dev/null
+++ b/downstream/modules/platform/ref-edge-manager-images-special-considerations.adoc
@@ -0,0 +1,10 @@
:_mod-docs-content-type: REFERENCE

[id="edge-manager-images-special-considerations"]

= Special considerations for building images

* xref:edge-manager-buildtime-runtime[Build-time configuration over dynamic runtime configuration]
* xref:edge-manager-usr-dir[Configuration in the `/usr` directory]
* xref:edge-manager-drop-dir[Drop-in directories]
* xref:edge-manager-os-img-script[Operating system images with scripts]

diff --git a/downstream/modules/platform/ref-edge-manager-monitor-device.adoc b/downstream/modules/platform/ref-edge-manager-monitor-device.adoc
new file mode 100644
index 0000000000..d5034b2770
--- /dev/null
+++ b/downstream/modules/platform/ref-edge-manager-monitor-device.adoc
@@ -0,0 +1,42 @@
:_mod-docs-content-type: REFERENCE

[id="edge-manager-monitor-device"]

= Monitor device resources

You can set up monitors for device resources and define alerts that trigger when the use of these resources crosses a defined threshold.
When the agent alerts the {RedHatEdge} service, the service sets the device status to "degraded" or "error" (depending on the severity level).
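For example, a CPU monitor with a warning and a critical alert rule could be added to a device specification as in the following sketch. The lower camel case field names and the `spec.resources` placement are assumptions derived from the parameter tables that follow; the values respect the documented constraint that the duration must be smaller than the sampling interval:

[source,yaml]
----
apiVersion: flightctl.io/v1alpha1
kind: Device
metadata:
  name: some_device_name
spec:
  resources:
    - monitorType: CPU
      # Sample CPU usage every 5 minutes, averaged over the preceding minute.
      samplingInterval: 5m
      alertRules:
        - severity: Warning
          duration: 1m
          percentage: 80
          description: CPU load is above 80% for more than 1m.
        - severity: Critical
          duration: 1m
          percentage: 95
          description: CPU load is above 95% for more than 1m.
----

When a rule fires, the service reflects the alert in the device status at the corresponding severity, as described above.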
Resource monitors take the following parameters:

[width="100%",cols="45%,55%",options="header",]
|===
|Parameter |Description
|MonitorType |The resource to monitor.
The currently supported resources are "CPU", "Memory", and "Disk".

|SamplingInterval |The interval at which the monitor samples resource usage, specified as a positive integer followed by a time unit ("s" for seconds, "m" for minutes, "h" for hours).

|AlertRules |A list of alert rules.

|Path |(Disk monitor only) The absolute path to the directory to monitor.
Utilization reflects the file system containing the path, similar to `df`, even if the path is not a mount point.
|===

Alert rules take the following parameters:

[width="100%",cols="45%,55%",options="header",]
|===
|Parameter |Description
|Severity |The alert rule's severity level, one of "Info", "Warning", or "Critical".
Only one alert rule is allowed per severity level and monitor.

|Duration |The duration over which resource usage is measured and averaged when sampling, specified as a positive integer followed by a time unit ("s" for seconds, "m" for minutes, "h" for hours).
It must be smaller than the sampling interval.

|Percentage |The usage threshold that triggers the alert, as a percentage value (range 0 to 100, without the "%" sign).

|Description |A human-readable description of the alert.
This is useful for adding details about the alert that might help with debugging.
By default, the description is populated as "<resource> load is above <percentage>% for more than <duration>".
|===

diff --git a/downstream/modules/platform/ref-edge-manager-platform-requirements.adoc b/downstream/modules/platform/ref-edge-manager-platform-requirements.adoc
new file mode 100644
index 0000000000..121d65fc11
--- /dev/null
+++ b/downstream/modules/platform/ref-edge-manager-platform-requirements.adoc
@@ -0,0 +1,10 @@
:_mod-docs-content-type: REFERENCE

[id="edge-manager-platform-requirements"]

= Requirements for specific target platforms

See the following platform considerations:

* xref:edge-manager-virt[Building images for Red Hat OpenShift Virtualization]
* xref:edge-manager-vmware[Building images for VMware vSphere]

diff --git a/downstream/modules/platform/ref-edge-manager-rule-files.adoc b/downstream/modules/platform/ref-edge-manager-rule-files.adoc
new file mode 100644
index 0000000000..82b408b1b2
--- /dev/null
+++ b/downstream/modules/platform/ref-edge-manager-rule-files.adoc
@@ -0,0 +1,91 @@
:_mod-docs-content-type: REFERENCE

[id="edge-manager-rule-files"]

= Rule files

You can define device lifecycle hooks by adding rule files to one of the following locations in the device file system:

* Rules in the `/usr/lib/flightctl/hooks.d//` drop-in directory are read-only.
To add rules to the `/usr` directory, you must add them to the operating system image during image building.
* Rules in the `/etc/flightctl/hooks.d//` drop-in directory are readable and writable.
You can update the rules at runtime by using several methods.

When creating and placing the files, consider the following practices:

* The name of the rule file must be all lowercase.
* If you define rules in both locations, the rules are merged.
* If you add more than one rule file to a lifecycle hook directory, the files are processed in lexical order of their file names.
* If you define files with identical file names in both locations, the file in the `/etc` directory takes precedence over the file of the same name in the `/usr` directory.

A rule file is written in YAML format and contains a list of one or more actions.
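For example, a rule file for the `afterUpdating` hook might contain a single action that reloads a service when its configuration changes. This sketch renders the parameters described below in lower camel case; verify the exact schema against your installed agent version, and note that the `mywebserver` service name is purely illustrative:

[source,yaml]
----
# Reload the (hypothetical) mywebserver service, but only if files under
# /etc/mywebserver/ were created, updated, or removed during the update.
- run: /usr/bin/systemctl reload mywebserver.service
  timeout: 1m
  if:
    - path: /etc/mywebserver/
      op: [created, updated, removed]
----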
An action can be an instruction to run an external command.

When you specify multiple actions for a hook, the actions are performed in sequence, with each action finishing before the next one starts.

If an action returns with failure, the remaining actions are skipped.

A `run` action takes the following parameters:

|===
|Parameter | Description
|`Run` | The absolute path to the command to run, followed by any flags or arguments, for example `/usr/bin/nmcli connection reload`. The command is not executed in a shell, so you cannot use shell variables, such as `$PATH` or `$HOME`, or chain commands with operators such as `\|` or `;`. If necessary, you can start a shell by specifying the shell as the command to run, for example `/usr/bin/bash -c 'echo $SHELL $HOME $USER'`.

|`EnvVars` | Optional. A list of key-value pairs to set as environment variables for the command.

|`WorkDir` | Optional. The directory the command is run from.

|`Timeout` | Optional. The maximum duration allowed for the action to complete. Specify the duration as a single positive integer followed by a time unit. The `s`, `m`, and `h` units are supported for seconds, minutes, and hours.

|`If` | Optional. A list of conditions that must be true for the action to run. If not provided, the action runs unconditionally.
|===

By default, the system performs actions every time the hook is triggered.
However, for the `afterUpdating` hook, you can use the `If` parameter to add conditions that must be true for an action to be performed.
Otherwise, the action is skipped.

For example, to run an action only if a given file or directory changed during the update, you can define a path condition that takes the following parameters:

|===
|Parameter | Description
| `Path` a| An absolute path to a file or directory that must have changed during the update as a condition for the action to be performed. Specify paths by using forward slashes (`/`):

- If the path is to a directory, it must end with a forward slash (`/`).

- If you specify a path to a file, the file must have changed to satisfy the condition.

- If you specify a path to a directory, a file in that directory or any of its subdirectories must have changed to satisfy the condition.

|`Op` | A list of file operations, such as `created`, `updated`, and `removed`, that limits the types of changes to the specified path that satisfy the condition.
|===

If you specify a path condition for an action in the `afterUpdating` hook, you can use the following variables in the arguments to your command.
They are replaced with the absolute paths of the changed files:

|===
|Variable | Description
|`${ Path }` | The absolute path to the file or directory specified in the path condition.

|`${ Files }` | A space-separated list of absolute paths of the files that changed during the update and are covered by the path condition.

|`${ CreatedFiles }` | A space-separated list of absolute paths of the files that were created during the update and are covered by the path condition.

|`${ UpdatedFiles }` | A space-separated list of absolute paths of the files that were updated during the update and are covered by the path condition.

|`${ RemovedFiles }` | A space-separated list of absolute paths of the files that were removed during the update and are covered by the path condition.
|===

The {RedHatEdge} agent includes a built-in set of rules defined in `/usr/lib/flightctl/hooks.d/afterupdating/00-default.yaml`.
+The following commands are executed if certain files are changed: + +|=== +|File | Command | Description +|`/etc/systemd/system/` | `systemctl daemon-reload` | Changes to `systemd` units are activated by signaling the `systemd` daemon to reload the `systemd` manager configuration. This reruns all generators, reloads all unit files, and re-creates the entire dependency tree. + +|`/etc/NetworkManager/system-connections/` |`nmcli conn reload` | Changes to `NetworkManager` system connections are activated by signaling the `NetworkManager` daemon to reload all connections. For more information, see the _Additional resources_ section. + +|`/etc/firewalld/` | `firewall-cmd --reload` | Changes to the permanent configuration of `firewalld` are activated by signaling `firewalld` to reload firewall rules as new runtime configuration. +|=== + +.Additional resources + +* For more information, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/index[Configuring and managing networking]. diff --git a/downstream/modules/platform/ref-edge-manager-specify-apps-inline.adoc b/downstream/modules/platform/ref-edge-manager-specify-apps-inline.adoc new file mode 100644 index 0000000000..51fb758c08 --- /dev/null +++ b/downstream/modules/platform/ref-edge-manager-specify-apps-inline.adoc @@ -0,0 +1,46 @@ +:_mod-docs-content-type: REFERENCE + +[id="edge-manager-specify-apps-inline"] + += Specify applications inline in the device specification + +Application manifests are specified inline in a device's specification, so you do not need to build an OCI registry application package. + +The inline application provider accepts a list of application content with the following parameters: + +|=== +| Parameter | Description +| Path | The relative path to the file on the device. Note that any existing file is overwritten. +| Content (Optional) | The plain text (UTF-8) or base64-encoded content of the file. +| ContentEncoding | How the contents are encoded. Must be either "plain" or "base64". Defaults to "plain". +|=== + +.Example + +[source,yaml] +---- +apiVersion: flightctl.io/v1alpha1 +kind: Device +metadata: + name: some_device_name +spec: +[...] + applications: + - name: my-app + appType: compose + inline: + - content: | + version: "3.8" + services: + service1: + image: quay.io/flightctl-tests/alpine:v1 + command: ["sleep", "infinity"] + path: podman-compose.yaml +[...] +---- + +[NOTE] +==== +Inline compose applications can have two paths at most. +You must name the first one `podman-compose.yaml`, and the second (override) `podman-compose.override.yaml`. +==== diff --git a/downstream/modules/platform/ref-edge-manager-success-threshold.adoc b/downstream/modules/platform/ref-edge-manager-success-threshold.adoc new file mode 100644 index 0000000000..a770887973 --- /dev/null +++ b/downstream/modules/platform/ref-edge-manager-success-threshold.adoc @@ -0,0 +1,53 @@ +:_mod-docs-content-type: REFERENCE + +[id="edge-manager-success-threshold"] + += Success threshold + +The `successThreshold` defines the percentage of successfully updated devices required to continue the rollout. +If the success rate falls below this threshold, the rollout might be paused to prevent further failures. 
.Example

The following shows an example YAML configuration for a fleet specification:

[literal, options="nowrap" subs="+attributes"]
----
apiVersion: v1alpha1
kind: Fleet
metadata:
  name: default
spec:
  selector:
    matchLabels:
      fleet: default
  rolloutPolicy:
    deviceSelection:
      strategy: 'BatchSequence'
      sequence:
        - selector:
            matchLabels:
              site: madrid
          limit: 1 # Absolute number
        - selector:
            matchLabels:
              site: madrid
          limit: 80% # Percentage of devices matching the label criteria within the fleet
        - limit: 50% # Percentage of all devices in the fleet
        - selector:
            matchLabels:
              site: paris
        - limit: 80%
        - limit: 100%
    successThreshold: 95%
----

In this example, there are six explicit batches and one implicit batch:

* The first batch selects one device that has the label *site: madrid*.
* With the second batch, 80% of all devices that have the label *site: madrid* are either selected for rollout in the current batch or were previously selected for rollout.
* With the third batch, 50% of all devices in the fleet are either selected for rollout in the current batch or were previously selected for rollout.
* With the fourth batch, all devices that were not previously selected and have the label *site: paris* are selected.
* With the fifth batch, 80% of all devices are either selected for rollout in the current batch or were previously selected for rollout.
* With the sixth batch, 100% of all devices are either selected for rollout in the current batch or were previously selected for rollout.
* The last, implicit batch selects all devices that have not been selected in any previous batch (which might be none).

diff --git a/downstream/modules/platform/ref-edge-manager-supported-operators.adoc b/downstream/modules/platform/ref-edge-manager-supported-operators.adoc
new file mode 100644
index 0000000000..6682b03275
--- /dev/null
+++ b/downstream/modules/platform/ref-edge-manager-supported-operators.adoc
@@ -0,0 +1,139 @@
:_mod-docs-content-type: REFERENCE

[id="edge-manager-supported-operators"]

= Supported operators

[width="100%",cols="24%,17%,59%",options="header",]
|===
|Operator |Symbol |Description
|Exists |`exists` |Checks if a field exists

|DoesNotExist |`!` |Checks if a field does not exist

|Equals |`=` |Checks if a field is equal to a value

|DoubleEquals |`==` |Another form of equality check

|NotEquals |`!=` |Checks if a field is not equal to a value

|GreaterThan |`>` |Checks if a field is greater than a value

|GreaterThanOrEquals |`>=` |Checks if a field is greater than or equal to a value

|LessThan |`<` |Checks if a field is less than a value

|LessThanOrEquals |`<=` |Checks if a field is less than or equal to a value

|In |`in` |Checks if a field is within a list of values

|NotIn |`notin` |Checks if a field is not in a list of values

|Contains |`contains` |Checks if a field contains a value

|NotContains |`notcontains` |Checks if a field does not contain a value
|===

== Operators usage by field type

Each field type supports a specific subset of operators:

[width="100%",cols="20,60%,20%",options="header",]
|===
|Field Type |Supported Operators |Value Format
|*String* |`Equals`: Matches if the field value is an exact match to the specified string.

`DoubleEquals`: Matches if the field value is an exact match to the specified string (alternative to `Equals`).

`NotEquals`: Matches if the field value is not an exact match to the specified string.

`In`: Matches if the field value matches at least one string in the list.
`NotIn`: Matches if the field value does not match any of the strings in the list.

`Contains`: Matches if the field value contains the specified substring.

`NotContains`: Matches if the field value does not contain the specified substring.

`Exists`: Matches if the field is present.

`DoesNotExist`: Matches if the field is not present. |Text string

|*Timestamp* |`Equals`: Matches if the field value is an exact match to the specified timestamp.

`DoubleEquals`: Matches if the field value is an exact match to the specified timestamp (alternative to `Equals`).

`NotEquals`: Matches if the field value is not an exact match to the specified timestamp.

`GreaterThan`: Matches if the field value is after the specified timestamp.

`GreaterThanOrEquals`: Matches if the field value is after or equal to the specified timestamp.

`LessThan`: Matches if the field value is before the specified timestamp.

`LessThanOrEquals`: Matches if the field value is before or equal to the specified timestamp.

`In`: Matches if the field value matches at least one timestamp in the list.

`NotIn`: Matches if the field value does not match any of the timestamps in the list.

`Exists`: Matches if the field is present.

`DoesNotExist`: Matches if the field is not present. |RFC 3339 format

|*Number* |`Equals`: Matches if the field value equals the specified number.

`DoubleEquals`: Matches if the field value equals the specified number (alternative to `Equals`).

`NotEquals`: Matches if the field value does not equal the specified number.

`GreaterThan`: Matches if the field value is greater than the specified number.

`GreaterThanOrEquals`: Matches if the field value is greater than or equal to the specified number.

`LessThan`: Matches if the field value is less than the specified number.

`LessThanOrEquals`: Matches if the field value is less than or equal to the specified number.

`In`: Matches if the field value equals at least one number in the list.

`NotIn`: Matches if the field value does not equal any of the numbers in the list.

`Exists`: Matches if the field is present.

`DoesNotExist`: Matches if the field is not present. |Number format

|*Boolean* a|`Equals`: Matches if the value is `true` or `false`.

`DoubleEquals`: Matches if the value is `true` or `false` (alternative to `Equals`).

`NotEquals`: Matches if the value is the opposite of the specified value.

`In`: Matches if the value (`true` or `false`) is in the list.

[NOTE]
====
The list can only contain `true` or `false`, so this operator is limited in use.
====

`NotIn`: Matches if the value is not in the list.

`Exists`: Matches if the field is present.

`DoesNotExist`: Matches if the field is not present. |Boolean format (`true`, `false`)

|*Array* a|`Contains`: Matches if the array contains the specified value.

`NotContains`: Matches if the array does not contain the specified value.

`In`: Matches if the array overlaps with the specified values.

`NotIn`: Matches if the array does not overlap with the specified values.

`Exists`: Matches if the field is present.

`DoesNotExist`: Matches if the field is not present.

[NOTE]
====
Using `Array[Index]` treats the element as the type defined for the array elements, for example, string, timestamp, number, or boolean.
====
|Array element
|===

diff --git a/downstream/modules/platform/ref-encrypting-plaintext-passwords.adoc b/downstream/modules/platform/ref-encrypting-plaintext-passwords.adoc
new file mode 100644
index 0000000000..189bdbf92f
--- /dev/null
+++ b/downstream/modules/platform/ref-encrypting-plaintext-passwords.adoc
@@ -0,0 +1,10 @@
:_mod-docs-content-type: REFERENCE

[id="ref-encrypting-plaintext-passwords"]

= Encrypting plain text passwords in {ControllerName} configuration files

Passwords in {ControllerName} configuration files are stored in plain text.
A user with access to the `/etc/tower/conf.d/` directory can view the passwords used to access the database.
Access to the directories is controlled with permissions, so they are protected, but some security findings deem this protection to be inadequate.
The solution is to encrypt the passwords individually.

diff --git a/downstream/modules/platform/ref-fetching-a-monthly-report.adoc b/downstream/modules/platform/ref-fetching-a-monthly-report.adoc
new file mode 100644
index 0000000000..cc61c1e00f
--- /dev/null
+++ b/downstream/modules/platform/ref-fetching-a-monthly-report.adoc
@@ -0,0 +1,106 @@
:_mod-docs-content-type: REFERENCE

[id="ref-fetching-a-monthly-report"]

= Fetching a monthly report

Fetch a monthly report from {PlatformNameShort} to gather usage metrics and create a consumption-based billing report. To fetch a monthly report on {RHEL} or on {OCPShort}, use the following procedures.

== Fetching a monthly report on {RHEL}

Use the following procedure to fetch a monthly report on {RHEL}:

.Procedure

. Run:
`scp -r username@controller_host:$METRICS_UTILITY_SHIP_PATH/data/// /local/directory/`

The system saves the generated report as `CCSP--.xlsx` in the ship path that you specified.

== Fetching a monthly report on {OCPShort} from the {PlatformNameShort} Operator

Use the following playbook to fetch a monthly consumption report for {PlatformNameShort} on {OCPShort}:

----
- name: Copy directory from Kubernetes PVC to local machine
  hosts: localhost

  vars:
    report_dir_path: "/mnt/metrics/reports/{{ year }}/{{ month }}/"

  tasks:
    - name: Create a temporary pod to access PVC data
      kubernetes.core.k8s:
        definition:
          apiVersion: v1
          kind: Pod
          metadata:
            name: temp-pod
            namespace: "{{ namespace_name }}"
          spec:
            containers:
              - name: busybox
                image: busybox
                command: ["/bin/sh"]
                args: ["-c", "sleep 3600"] # Keeps the container alive for 1 hour
                volumeMounts:
                  - name: "{{ pvc }}"
                    mountPath: "/mnt/metrics"
            volumes:
              - name: "{{ pvc }}"
                persistentVolumeClaim:
                  claimName: automationcontroller-metrics-utility
            restartPolicy: Never
      register: pod_creation

    - name: Wait for both initContainer and main container to be ready
      kubernetes.core.k8s_info:
        kind: Pod
        namespace: "{{ namespace_name }}"
        name: temp-pod
      register: pod_status
      until: >
        pod_status.resources[0].status.containerStatuses[0].ready
      retries: 30
      delay: 10

    - name: Create a tarball of the directory of the report in the container
      kubernetes.core.k8s_exec:
        namespace: "{{ namespace_name }}"
        pod: temp-pod
        container: busybox
        command: tar czf /tmp/metrics.tar.gz -C "{{ report_dir_path }}" .
      register: tarball_creation

    - name: Ensure the local directory exists
      ansible.builtin.file:
        path: "{{ local_dir }}"
        state: directory

    - name: Copy the report tarball from the container to the local machine
      kubernetes.core.k8s_cp:
        namespace: "{{ namespace_name }}"
        pod: temp-pod
        container: busybox
        state: from_pod
        remote_path: /tmp/metrics.tar.gz
        local_path: "{{ local_dir }}/metrics.tar.gz"
      when: tarball_creation is succeeded

    - name: Extract the report tarball on the local machine
      ansible.builtin.unarchive:
        src: "{{ local_dir }}/metrics.tar.gz"
        dest: "{{ local_dir }}"
        remote_src: yes
        extra_opts: "--strip-components=1"
      when: tarball_creation is succeeded

    - name: Delete the temporary pod
      kubernetes.core.k8s:
        api_version: v1
        kind: Pod
        namespace: "{{ namespace_name }}"
        name: temp-pod
        state: absent
----

diff --git a/downstream/modules/platform/ref-gateway-controller-ext-db.adoc b/downstream/modules/platform/ref-gateway-controller-ext-db.adoc
new file mode 100644
index 0000000000..8ccdcf4afc
--- /dev/null
+++ b/downstream/modules/platform/ref-gateway-controller-ext-db.adoc
@@ -0,0 +1,65 @@
:_mod-docs-content-type: REFERENCE

[id="ref-gateway-controller-ext-db"]

= Single {Gateway} and {ControllerName} with an external (installer managed) database

[role="_abstract"]
Use this example to see what is minimally needed within the inventory file to deploy single instances of {Gateway} and {ControllerName} with an external (installer managed) database.

-----
[automationcontroller]
controller.example.com

[automationgateway]
gateway.example.com

[database]
data.example.com

[all:vars]
admin_password=''
redis_mode=standalone
pg_host='data.example.com'
pg_port=5432
pg_database='awx'
pg_username='awx'
pg_password=''
pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username=''
registry_password=''

# Automation Gateway configuration
automationgateway_admin_password=''

automationgateway_pg_host='data.example.com'
automationgateway_pg_port=5432

automationgateway_pg_database='automationgateway'
automationgateway_pg_username='automationgateway'
automationgateway_pg_password=''
automationgateway_pg_sslmode='prefer'

# The main automation gateway URL that clients will connect to (e.g. https://).
# If not specified, the first node in the [automationgateway] group will be used when needed.
# automationgateway_main_url = ''

# Certificate and key to install in Automation Gateway
# automationgateway_ssl_cert=/path/to/automationgateway.cert
# automationgateway_ssl_key=/path/to/automationgateway.key

# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key
# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key
-----

diff --git a/downstream/modules/platform/ref-gateway-controller-hub-eda-ext-db.adoc b/downstream/modules/platform/ref-gateway-controller-hub-eda-ext-db.adoc
new file mode 100644
index 0000000000..6b449699d8
--- /dev/null
+++ b/downstream/modules/platform/ref-gateway-controller-hub-eda-ext-db.adoc
@@ -0,0 +1,138 @@
:_mod-docs-content-type: REFERENCE

[id="ref-gateway-controller-hub-eda-ext-db"]

= Single {Gateway}, {ControllerName}, {HubName}, and {EDAcontroller} with an external (installer managed) database

[role="_abstract"]
Use this example to populate the inventory file to deploy single instances of {Gateway}, {ControllerName}, {HubName}, and {EDAcontroller} with an external (installer managed) database.

[IMPORTANT]
====
* This scenario requires a minimum of {ControllerName} 2.4 for successful deployment of {EDAcontroller}.

* {EDAController} must be installed on a separate server and cannot be installed on the same host as {HubName} and {ControllerName}.

* When an {EDAName} rulebook is activated under standard conditions, it uses approximately 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of the rules and the volume and size of the events processed.
In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment.
This ensures that the maximum number of activations is based on the resource capacity.
In the following example, the default `automationedacontroller_max_running_activations` setting is 12, but you can adjust it to fit your capacity.

====

[literal, subs="+attributes"]
-----
[automationcontroller]
controller.example.com

[automationhub]
automationhub.example.com

[automationedacontroller]
automationedacontroller.example.com

[automationgateway]
gateway.example.com

[database]
data.example.com

[all:vars]
admin_password=''
redis_mode=standalone
pg_host='data.example.com'
pg_port='5432'
pg_database='awx'
pg_username='awx'
pg_password=''
pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL

registry_url='registry.redhat.io'
registry_username=''
registry_password=''

# {HubNameStart} configuration

automationhub_admin_password=

automationhub_pg_host='data.example.com'
automationhub_pg_port=5432

automationhub_pg_database='automationhub'
automationhub_pg_username='automationhub'
automationhub_pg_password=
automationhub_pg_sslmode='prefer'

# Automation {EDAController} configuration

automationedacontroller_admin_password=''

automationedacontroller_pg_host='data.example.com'
automationedacontroller_pg_port=5432

automationedacontroller_pg_database='automationedacontroller'
automationedacontroller_pg_username='automationedacontroller'
automationedacontroller_pg_password=''

# Keystore file to install in SSO node
# sso_custom_keystore_file='/path/to/sso.jks'

# This install will deploy SSO with sso_use_https=True
# Keystore password is required for https enabled SSO
sso_keystore_password=''

# This install will deploy a TLS enabled Automation Hub.
# If for some reason this is not the behavior wanted one can
# disable TLS enabled deployment.
#
# automationhub_disable_https = False
# The default install will generate self-signed certificates for the Automation
# Hub service.
# If you are providing a valid certificate via automationhub_ssl_cert
# and automationhub_ssl_key, toggle that value to True.
#
# automationhub_ssl_validate_certs = False
# SSL-related variables
# If set, this will install a custom CA certificate to the system trust store.
# custom_ca_cert=/path/to/ca.crt
# Certificate and key to install in Automation Hub node
# automationhub_ssl_cert=/path/to/automationhub.cert
# automationhub_ssl_key=/path/to/automationhub.key

# Automation Gateway configuration
automationgateway_admin_password=''

automationgateway_pg_host=''
automationgateway_pg_port=5432

automationgateway_pg_database='automationgateway'
automationgateway_pg_username='automationgateway'
automationgateway_pg_password=''
automationgateway_pg_sslmode='prefer'

# The main automation gateway URL that clients will connect to (e.g. https://).
# If not specified, the first node in the [automationgateway] group will be used when needed.
# automationgateway_main_url = ''

# Certificate and key to install in Automation Gateway
# automationgateway_ssl_cert=/path/to/automationgateway.cert
# automationgateway_ssl_key=/path/to/automationgateway.key

# Certificate and key to install in nginx for the web UI and API
# web_server_ssl_cert=/path/to/tower.cert
# web_server_ssl_key=/path/to/tower.key
# Server-side SSL settings for PostgreSQL (when we are installing it).
# postgres_use_ssl=False
# postgres_ssl_cert=/path/to/pgsql.crt
# postgres_ssl_key=/path/to/pgsql.key

# Boolean flag used to verify Automation Controller's
# web certificates when making calls from Automation {EDAcontroller}.
# automationedacontroller_controller_verify_ssl = true
#
# Certificate and key to install in Automation {EDAcontroller} node
# automationedacontroller_ssl_cert=/path/to/automationeda.crt
# automationedacontroller_ssl_key=/path/to/automationeda.key

-----

.Additional resources
For more information about these inventory variables, see link:{URLInstallationGuide}/appendix-inventory-files-vars#hub-variables[{HubNameMain} variables] in the _{PlatformName} Installation Guide_.
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-gateway-controller-hub-ext-db.adoc b/downstream/modules/platform/ref-gateway-controller-hub-ext-db.adoc
new file mode 100644
index 0000000000..082249febc
--- /dev/null
+++ b/downstream/modules/platform/ref-gateway-controller-hub-ext-db.adoc
@@ -0,0 +1,90 @@
:_mod-docs-content-type: REFERENCE

[id="ref-gateway-controller-hub-ext-db"]

= Single {Gateway}, {ControllerName}, and {HubName} with an external (installer managed) database

[role="_abstract"]
Use this example to populate the inventory file to deploy single instances of {Gateway}, {ControllerName}, and {HubName} with an external (installer managed) database.
+ +----- +[automationcontroller] +controller.example.com + +[automationhub] +automationhub.example.com + +[automationgateway] +gateway.example.com + +[database] +data.example.com + +[all:vars] +admin_password='' +redis_mode=standalone +pg_host='data.example.com' +pg_port='5432' +pg_database='awx' +pg_username='awx' +pg_password='' +pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL + +registry_url='registry.redhat.io' +registry_username='' +registry_password='' + +automationhub_admin_password= + +automationhub_pg_host='data.example.com' +automationhub_pg_port=5432 + +automationhub_pg_database='automationhub' +automationhub_pg_username='automationhub' +automationhub_pg_password= +automationhub_pg_sslmode='prefer' + +# The default install will deploy a TLS enabled Automation Hub. +# If for some reason this is not the behavior wanted one can +# disable TLS enabled deployment. +# +# automationhub_disable_https = False +# The default install will generate self-signed certificates for the Automation +# Hub service. If you are providing valid certificate via automationhub_ssl_cert +# and automationhub_ssl_key, one should toggle that value to True. +# +# automationhub_ssl_validate_certs = False +# SSL-related variables +# If set, this will install a custom CA certificate to the system trust store. +# custom_ca_cert=/path/to/ca.crt +# Certificate and key to install in Automation Hub node +# automationhub_ssl_cert=/path/to/automationhub.cert +# automationhub_ssl_key=/path/to/automationhub.key + +# Automation Gateway configuration +automationgateway_admin_password='' + +automationgateway_pg_host='' +automationgateway_pg_port=5432 + +automationgateway_pg_database='automationgateway' +automationgateway_pg_username='automationgateway' +automationgateway_pg_password='' +automationgateway_pg_sslmode='prefer' + +# The main automation gateway URL that clients will connect to (e.g. https://). +# If not specified, the first node in the [automationgateway] group will be used when needed. +# automationgateway_main_url = '' + +# Certificate and key to install in Automation Gateway +# automationgateway_ssl_cert=/path/to/automationgateway.cert +# automationgateway_ssl_key=/path/to/automationgateway.key + +# Certificate and key to install in nginx for the web UI and API +# web_server_ssl_cert=/path/to/tower.cert +# web_server_ssl_key=/path/to/tower.key +# Server-side SSL settings for PostgreSQL (when we are installing it). +# postgres_use_ssl=False +# postgres_ssl_cert=/path/to/pgsql.crt +# postgres_ssl_key=/path/to/pgsql.key +----- diff --git a/downstream/modules/platform/ref-gateway-system-requirements.adoc b/downstream/modules/platform/ref-gateway-system-requirements.adoc new file mode 100644 index 0000000000..ac72946745 --- /dev/null +++ b/downstream/modules/platform/ref-gateway-system-requirements.adoc @@ -0,0 +1,9 @@ +:_mod-docs-content-type: REFERENCE + +[id="ref-gateway-system-requirements"] + += {GatewayStart} system requirements + +The {Gateway} is the service that handles authentication and authorization for {PlatformNameShort}. It provides a single entry into the platform and serves the platform's user interface. + +You are required to set `umask=0022`. 
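For example, you can check the current value and set it for the installing user's session before running the installation program. This is a generic shell sketch; to make the setting persistent, add the `umask 0022` line to the user's shell profile:

-----
# Display the current value
$ umask
0022

# Set the required value for the current session if it differs
$ umask 0022
-----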
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-gateway-variables.adoc b/downstream/modules/platform/ref-gateway-variables.adoc
new file mode 100644
index 0000000000..1a3769f1f3
--- /dev/null
+++ b/downstream/modules/platform/ref-gateway-variables.adoc
@@ -0,0 +1,313 @@
:_mod-docs-content-type: REFERENCE

[id="platform-gateway-variables"]

= {GatewayStart} variables

[cols="25%,25%,30%,10%,10%",options="header"]
|===
| RPM variable name | Container variable name | Description | Required or optional | Default

| `automationgateway_admin_email`
| `gateway_admin_email`
| Email address used by Django for the admin user for {Gateway}.
| Optional
| `admin@example.com`

| `automationgateway_admin_password`
| `gateway_admin_password`
| {GatewayStart} administrator password. Use of special characters for this variable is limited. The password can include any printable ASCII character except `/`, `"`, or `@`.
| Required
|

| `automationgateway_admin_username`
| `gateway_admin_user`
| Username used to identify and create the administrator user in {Gateway}.
| Optional
| `admin`

| `automationgateway_cache_cert`
| `gateway_redis_tls_cert`
| Path to the {Gateway} Redis certificate file.
| Optional
|

| `automationgateway_cache_key`
| `gateway_redis_tls_key`
| Path to the {Gateway} Redis key file.
| Optional
|

| `automationgateway_cache_tls_files_remote`
|
| Denote whether the cache client certificate files are local to the installation program (`false`) or on the remote component server (`true`).
| Optional
| The value defined in `automationgateway_tls_files_remote` which defaults to `false`.

| `automationgateway_client_regen_cert`
|
| Controls whether or not to regenerate {Gateway} client certificates for the platform cache. Set to `true` to regenerate {Gateway} client certificates.
| Optional
| `false`

| `automationgateway_control_plane_port`
| `gateway_control_plane_port`
| Port number for the {Gateway} control plane.
| Optional
| `50051`

| `automationgateway_disable_hsts`
| `gateway_nginx_disable_hsts`
| Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for {Gateway}. Set this variable to `true` to disable HSTS.
| Optional
| `false`

| `automationgateway_disable_https`
| `gateway_nginx_disable_https`
| Controls whether HTTPS is enabled or disabled for {Gateway}. Set this variable to `true` to disable HTTPS.
| Optional
| RPM = The value defined in `disable_https` which defaults to `false`. Container = `false`.

| `automationgateway_firewalld_zone`
| `gateway_proxy_firewall_zone`
| The firewall zone where {Gateway} related firewall rules are applied. This controls which networks can access {Gateway} based on the zone's trust level.
| Optional
| RPM = no default set. Container = `public`.

| `automationgateway_grpc_auth_service_timeout`
| `gateway_grpc_auth_service_timeout`
| Timeout duration (in seconds) for requests made to the gRPC service on {Gateway}.
| Optional
| `30s`

| `automationgateway_grpc_server_max_threads_per_process`
| `gateway_grpc_server_max_threads_per_process`
| Maximum number of threads that each gRPC server process can create to handle requests on {Gateway}.
| Optional
| `10`

| `automationgateway_grpc_server_processes`
| `gateway_grpc_server_processes`
| Number of processes for handling gRPC requests on {Gateway}.
+| Optional +| `5` + +| `automationgateway_http_port` +| `gateway_nginx_http_port` +| Port number that {Gateway} listens on for HTTP requests. +| Optional +| RPM = `8080`. Container = `8083`. + +| `automationgateway_https_port` +| `gateway_nginx_https_port` +| Port number that {Gateway} listens on for HTTPS requests. +| Optional +| RPM = `8443`. Container = `8446`. + +| `automationgateway_main_url` +| `gateway_main_url` +| URL of the main instance of {Gateway} that clients connect to. Use if you are performing a clustered deployment and you need to use the URL of the load balancer instead of the component's server. The URL must start with `http://` or `https://` prefix. +| Optional +| + +| `automationgateway_nginx_tls_files_remote` +| +| Denote whether the web cert sources are local to the installation program (`false`) or on the remote component server (`true`). +| Optional +| The value defined in `automationgateway_tls_files_remote` which defaults to `false`. + + +| `automationgateway_pg_cert_auth` +| `gateway_pg_cert_auth` +| Controls whether client certificate authentication is enabled or disabled on the {Gateway} PostgreSQL database. Set this variable to `true` to enable client certificate authentication. +| Optional +| `false` + +| `automationgateway_pg_database` +| `gateway_pg_database` +| Name of the PostgreSQL database used by {Gateway}. +| Optional +| RPM = `automationgateway`. Container = `gateway`. + +| `automationgateway_pg_host` +| `gateway_pg_host` +| Hostname of the PostgreSQL database used by {Gateway}. +| Required +| + +| `automationgateway_pg_password` +| `gateway_pg_password` +| Password for the {Gateway} PostgreSQL database user. Use of special characters for this variable is limited. The `!`, `#`, `0` and `@` characters are supported. Use of other special characters can cause the setup to fail. +| Optional +| + +| `automationgateway_pg_port` +| `gateway_pg_port` +| Port number for the PostgreSQL database used by {Gateway}. +| Optional +| `5432` + +| `automationgateway_pg_sslmode` +| `gateway_pg_sslmode` +| Controls the SSL mode to use when {Gateway} connects to the PostgreSQL database. Valid options include `verify-full`, `verify-ca`, `require`, `prefer`, `allow`, `disable`. +| Optional +| `prefer` + +| `automationgateway_pg_username` +| `gateway_pg_username` +| Username for the {Gateway} PostgreSQL database user. +| Optional +| RPM = `automationgateway`. Container = `gateway` + +| `automationgateway_pgclient_sslcert` +| `gateway_pg_tls_cert` +| Path to the PostgreSQL SSL/TLS certificate file for {Gateway}. +| Required if using client certificate authentication. +| + +| `automationgateway_pgclient_sslkey` +| `gateway_pg_tls_key` +| Path to the PostgreSQL SSL/TLS key file for {Gateway}. +| Required if using client certificate authentication. +| + +| `automationgateway_pgclient_tls_files_remote` +| +| Denote whether the PostgreSQL client cert sources are local to the installation program (`false`) or on the remote component server (`true`). +| Optional +| The value defined in `automationgateway_tls_files_remote` which defaults to `false`. + +| `automationgateway_redis_host` +| `gateway_redis_host` +| Hostname of the Redis host used by {Gateway}. +| Optional +| First node in the `[automationgateway]` inventory group. + +| `automationgateway_redis_password` +| `gateway_redis_password` +| Password for {Gateway} Redis. +| Optional +| Randomly generated string. + +| `automationgateway_redis_username` +| `gateway_redis_username` +| Username for {Gateway} Redis. 
+| Optional +| `gateway` + +| `automationgateway_secret_key` +| `gateway_secret_key` +| Secret key value used by {Gateway} to sign and encrypt data. +| Optional +| + +| `automationgateway_ssl_cert` +| `gateway_tls_cert` +| Path to the SSL/TLS certificate file for {Gateway}. +| Optional +| + +| `automationgateway_ssl_key` +| `gateway_tls_key` +| Path to the SSL/TLS key file for {Gateway}. +| Optional +| + +| `automationgateway_tls_files_remote` +| `gateway_tls_remote` +| Denote whether the {Gateway} provided certificate files are local to the installation program (`false`) or on the remote component server (`true`). +| Optional +| `false` + +| `automationgateway_use_archive_compression` +| `gateway_use_archive_compression` +| Controls whether archive compression is enabled or disabled for {Gateway}. You can control this functionality globally by using `use_archive_compression`. +| Optional +| `true` + +| `automationgateway_use_db_compression` +| `gateway_use_db_compression` +| Controls whether database compression is enabled or disabled for {Gateway}. You can control this functionality globally by using `use_db_compression`. +| Optional +| `true` + +| `automationgateway_user_headers` +| `gateway_nginx_user_headers` +| List of additional NGINX headers to add to {Gateway}'s NGINX configuration. +| Optional +| `[]` + +| `automationgateway_verify_ssl` +| +| Denotes whether or not to verify {Gateway}'s web certificates when making calls from {Gateway} to itself during installation. Set to `false` to disable web certificate verification. +| Optional +| `true` + +| `automationgatewayproxy_disable_https` +| `envoy_disable_https` +| Controls whether or not HTTPS is disabled when accessing the platform UI. Set to `true` to disable HTTPS (HTTP is used instead). +| Optional +| RPM = The value defined in `disable_https` which defaults to `false`. Container = `false`. + +| `automationgatewayproxy_http_port` +| `envoy_http_port` +| Port number on which the Envoy proxy listens for incoming HTTP connections. +| Optional +| `80` + +| `automationgatewayproxy_https_port` +| `envoy_https_port` +| Port number on which the Envoy proxy listens for incoming HTTPS connections. +| Optional +| `443` + +| `nginx_tls_protocols` +| `gateway_nginx_https_protocols` +| Protocols that {Gateway} will support when handling HTTPS traffic. +| Optional +| RPM = `[TLSv1.2]`. Container = `[TLSv1.2, TLSv1.3]`. + +| `redis_disable_tls` +| `gateway_redis_disable_tls` +| Controls whether TLS is enabled or disabled for {Gateway} Redis. Set this variable to `true` to disable TLS. +| Optional +| `false` + +| `redis_port` +| `gateway_redis_port` +| Port number for the Redis host for {Gateway}. +| Optional +| `6379` + +| +| `gateway_extra_settings` +a| Defines additional settings for use by {Gateway} during installation. + +For example: +---- +gateway_extra_settings: + - setting: OAUTH2_PROVIDER['ACCESS_TOKEN_EXPIRE_SECONDS'] + value: 600 +---- +| Optional +| `[]` + +| +| `gateway_nginx_client_max_body_size` +| Maximum allowed size for data sent to {Gateway} through NGINX. +| Optional +| `5m` + +| +| `gateway_nginx_hsts_max_age` +| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for {Gateway}. +| Optional +| `63072000` + +| +| `gateway_uwsgi_listen_queue_size` +| Number of requests `uwsgi` will allow in the queue on {Gateway} until `uwsgi_processes` can serve them. 
+| Optional +| `4096` + +|=== diff --git a/downstream/modules/platform/ref-general-inventory-variables.adoc b/downstream/modules/platform/ref-general-inventory-variables.adoc index 0b10521d27..ef9701840b 100644 --- a/downstream/modules/platform/ref-general-inventory-variables.adoc +++ b/downstream/modules/platform/ref-general-inventory-variables.adoc @@ -1,45 +1,276 @@ -[id="ref-genera-inventory-variables"] +:_mod-docs-content-type: REFERENCE + +[id="general-variables"] = General variables -[cols="50%,50%",options="header"] -|==== -| *Variable* | *Description* -| *`enable_insights_collection`* | The default install registers the node to the {InsightsName} Service if the node is registered with Subscription Manager. -Set to `False` to disable. +[cols="25%,25%,30%,10%,10%",options="header"] +|=== +| RPM variable name | Container variable name | Description | Required or optional | Default + +| `aap_ca_cert_file` +|`ca_tls_cert` +| Path to the user provided CA certificate file used to generate SSL/TLS certificates for all {PlatformNameShort} services. +// This content is used in RPM installation +ifdef::aap-install[] +For more information, see link:{URLInstallationGuide}/platform-system-requirements#optional_using_custom_tls_certificates[Optional: Using custom TLS certificates]. +endif::aap-install[] +// This content is used in Containerized installation +ifdef::container-install[] +For more information, see link:{URLContainerizedInstall}/aap-containerized-installation#using-custom-tls-certificates_aap-containerized-installation[Using custom TLS certificates]. +endif::container-install[] +| Optional +| + +| `aap_ca_cert_files_remote` +| `ca_tls_remote` +| Denote whether the CA certificate files are local to the installation program (`false`) or on the remote component server (`true`). +| Optional +| `false` + +| `aap_ca_cert_size` +| +| Bit size of the internally managed CA certificate private key. +| Optional +| `4096` + +| `aap_ca_key_file` +| `ca_tls_key` +| Path to the key file for the CA certificate provided in `aap_ca_cert_file` (RPM) and `ca_tls_cert` (Container). +// This content is used in RPM installation +ifdef::aap-install[] +For more information, see link:{URLInstallationGuide}/platform-system-requirements#optional_using_custom_tls_certificates[Optional: Using custom TLS certificates]. +endif::aap-install[] +// This content is used in Containerized installation +ifdef::container-install[] +For more information, see link:{URLContainerizedInstall}/aap-containerized-installation#using-custom-tls-certificates_aap-containerized-installation[Using custom TLS certificates]. +endif::container-install[] +| Optional +| + +| `aap_ca_passphrase_cipher` +| +| Cipher used for signing the internally managed CA certificate private key. +| Optional +| `aes256` + +| `aap_ca_regenerate` +| +| Denotes whether or not to regenerate the internally managed CA certificate key pair. +| Optional +| `false` + +| `aap_service_cert_size` +| +| Bit size of the component key pair managed by the internal CA. +| Optional +| `4096` + +| `aap_service_regen_cert` +| +| Denotes whether or not to regenerate the component key pair managed by the internal CA. +| Optional +| `false` + +| `aap_service_san_records` +| +| A list of additional SAN records for signing a service. Assign these to components in the inventory file as host variables rather than group or all variables. All strings must also contain their corresponding SAN option prefix such as `DNS:` or `IP:`. 
+| Optional +| `[]` + +| `backup_dest` +| +| Directory local to `setup.sh` for the final backup file. +| Optional +| The value defined in `setup_dir`. + +| `backup_dir` +| `backup_dir` +| Directory used to store backup files. +| Optional +| RPM = `/var/backups/automation-platform/`. Container = `~/backups` + +| `backup_file_prefix` +| +| Prefix used for the file backup name for the final backup file. +| Optional +| `automation-platform-backup` + +| `bundle_install` +| `bundle_install` +| Controls whether or not to perform an offline or bundled installation. Set this variable to `true` to enable an offline or bundled installation. +| Optional +| `false` if using the setup installation program. `true` if using the setup bundle installation program. + +| `bundle_install_folder` +| `bundle_dir` +| Path to the bundle directory used when performing a bundle install. +| Required if `bundle_install=true` +| RPM = `/var/lib/ansible-automation-platform-bundle`. Container = `/bundle`. + +| `custom_ca_cert` +| `custom_ca_cert` +| Path to the custom CA certificate file. This is required if any of the TLS certificates you manually provided are signed by a custom CA. +// This content is used in RPM installation +ifdef::aap-install[] +For more information, see link:{URLInstallationGuide}/platform-system-requirements#optional_using_custom_tls_certificates[Optional: Using custom TLS certificates]. +endif::aap-install[] +// This content is used in Containerized installation +ifdef::container-install[] +For more information, see link:{URLContainerizedInstall}/aap-containerized-installation#using-custom-tls-certificates_aap-containerized-installation[Using custom TLS certificates]. +endif::container-install[] +| Optional +| + +| `enable_insights_collection` +| +| The default install registers the node to the {InsightsName} for the {PlatformName} Service if the node is registered with Subscription Manager. Set to `false` to disable this functionality. +| Optional +| `true` + +| `registry_password` +| `registry_password` +| Password credential for access to the registry source defined in `registry_url`. +// This content is used in RPM installation +ifdef::aap-install[] +For more information, see link:{URLInstallationGuide}/assembly-platform-install-scenario#proc-set-registry-username-password[Setting registry_username and registry_password]. +endif::aap-install[] +// This content is used in Containerized installation +ifdef::container-install[] +For more information, see link:{URLContainerizedInstall}/aap-containerized-installation#proc-set-registry-username-password[Setting registry_username and registry_password]. +endif::container-install[] +| RPM = Required if you need a password to access `registry_url`. Container = Required if `registry_auth=true`. +| + +| `registry_url` +| `registry_url` +| URL of the registry source from which to pull {ExecEnvShort} images. +| Optional +| `registry.redhat.io` + +| `registry_username` +| `registry_username` +| Username credential for access to the registry source defined in `registry_url`. +// This content is used in RPM installation +ifdef::aap-install[] +For more information, see link:{URLInstallationGuide}/assembly-platform-install-scenario#proc-set-registry-username-password[Setting registry_username and registry_password]. 
+endif::aap-install[]
+// This content is used in Containerized installation
+ifdef::container-install[]
+For more information, see link:{URLContainerizedInstall}/aap-containerized-installation#proc-set-registry-username-password[Setting registry_username and registry_password].
+endif::container-install[]
+| RPM = Required if you need a username to access `registry_url`. Container = Required if `registry_auth=true`.
+|
+
+| `registry_verify_ssl`
+| `registry_tls_verify`
+| Controls whether SSL/TLS certificate verification is enabled or disabled when making HTTPS requests.
+| Optional
+| `true`
+
+| `restore_backup_file`
+|
+| Path to the tar file used for the platform restore.
+| Optional
+| `{{ setup_dir }}/automation-platform-backup-latest.tar.gz`
+
+| `restore_file_prefix`
+|
+| Path prefix for the staged restore components.
+| Optional
+| `automation-platform-restore`
+
+| `routable_hostname`
+| `routable_hostname`
+| Used if the machine running the installation program can only route to the target host through a specific URL.
+For example, if you use short names in your inventory, but the node running the installation program can only resolve that host by using an FQDN. If `routable_hostname` is not set, it defaults to `ansible_host`.
+If you do not set `ansible_host`, `inventory_hostname` is used as a last resort. This variable is used as a host variable for particular hosts and not under the `[all:vars]` section.
+For further information, see link:https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html#assigning-a-variable-to-one-machine-host-variables[Assigning a variable to one machine: host variables].
+| Optional
+|
 
-Default = `true`
-| *`nginx_user_http_config`* | List of nginx configurations for `/etc/nginx/nginx.conf` under the http section.
+| `use_archive_compression`
+| `use_archive_compression`
+a| Controls at a global level whether the filesystem-related backup files are compressed before being sent to the host to run the backup operation. If set to `true`, a `tar.gz` file is generated on each {PlatformNameShort} host and then gzip compression is used. If set to `false`, a simple tar file is generated.
 
-Each element in the list is provided into `http nginx config` as a separate line.
+You can control this functionality at a component level by using the `_use_archive_compression` variables.
+| Optional
+| `true`
 
-Default = empty list
-| *`registry_password`* | `registry_password` is only required if a non-bundle installer is used.
+| `use_db_compression`
+| `use_db_compression`
+a| Controls at a global level whether the database-related backup files are compressed before being sent to the host to run the backup operation.
 
-Password credential for access to `registry_url`.
+You can control this functionality at a component level by using the `_use_db_compression` variables.
+| Optional
+| `true`
 
-Used for both `[automationcontroller]` and `[automationhub]` groups.
+|
+| `ca_tls_key_passphrase`
+| Passphrase used to decrypt the key provided in `ca_tls_key`.
+| Optional
+|
 
-Enter your Red Hat Registry Service Account credentials in `registry_username` and `registry_password` to link to the Red Hat container registry.
+|
+| `client_request_timeout`
+| Sets the HTTP timeout for end-user requests. The minimum value is `10` seconds.
+| Optional
+| `30`
 
-When `registry_url` is `registry.redhat.io`, username and password are required if not using a bundle installer.
-| *`registry_url`* | Used for both `[automationcontroller]` and `[automationhub]` groups.
+|
+| `container_compress`
+| Compression software to use for compressing container images.
+| Optional
+| `gzip`
 
-Default = `registry.redhat.io`.
-| *`registry_username`* | `registry_username` is only required if a non-bundle installer is used.
+|
+| `container_keep_images`
+| Controls whether or not to keep container images when uninstalling {PlatformNameShort}.
+Set to `true` to keep container images when uninstalling {PlatformNameShort}.
+| Optional
+| `false`
 
-User credential for access to `registry_url`.
+|
+| `container_pull_images`
+| Controls whether or not to pull newer container images during installation.
+Set to `false` to prevent pulling newer container images during installation.
+| Optional
+| `true`
 
-Used for both `[automationcontroller]` and `[automationhub]` groups, but only if the value of `registry_url` is `registry.redhat.io`.
+|
+| `images_tmp_dir`
+| The directory where the installation program temporarily stores container images during installation.
+| Optional
+| The system's temporary directory.
 
-Enter your Red Hat Registry Service Account credentials in `registry_username` and `registry_password` to link to the Red Hat container registry.
-| *`routable_hostname`* | `routable hostname` is used if the machine running the installer can only route to the target host through a specific URL, for example, if you use shortnames in your inventory, but the node running the installer can only resolve that host using FQDN.
+|
+| `pcp_firewall_zone`
+| The firewall zone where Performance Co-Pilot related firewall rules are applied. This controls which networks can access Performance Co-Pilot based on the zone's trust level.
+| Optional
+| `public`
 
-If `routable_hostname` is not set, it should default to `ansible_host`. If you do not set `ansible_host`, `inventory_hostname` is used as a last resort.
+|
+| `pcp_use_archive_compression`
+| Controls whether archive compression is enabled or disabled for Performance Co-Pilot. You can control this functionality globally by using `use_archive_compression`.
+| Optional
+| `true`
 
-This variable is used as a host variable for particular hosts and not under the `[all:vars]` section.
-For further information, see link:https://docs.ansible.com/ansible/latest/inventory_guide/intro_inventory.html#assigning-a-variable-to-one-machine-host-variables[Assigning a variable to one machine:host variables].
-|====
+|
+| `registry_auth`
+| Set whether or not to use registry authentication. When this variable is set to `true`, `registry_username` and `registry_password` are required.
+| Optional
+| `true`
+|
+| `registry_ns_aap`
+| {PlatformNameShort} registry namespace.
+| Optional
+| `ansible-automation-platform-26`
+|
+| `registry_ns_rhel`
+| RHEL registry namespace.
+| Optional
+| `rhel8`
+|===
diff --git a/downstream/modules/platform/ref-get-started-credential-types.adoc b/downstream/modules/platform/ref-get-started-credential-types.adoc
index 857820a4fe..5d3a3a81a9 100644
--- a/downstream/modules/platform/ref-get-started-credential-types.adoc
+++ b/downstream/modules/platform/ref-get-started-credential-types.adoc
@@ -1,18 +1,23 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-get-started-credential-types"]
 
 = Getting started with credential types
 
 //[ddacosta] Consider rewriting this as a procedure.
-From the navigation panel, select {MenuAMCredentialType}.
-If no custom credential types have been created, the *Credential Types* prompts you to add one.
+.Procedure
+. From the navigation panel, select {MenuAECredentials}.
+If no custom credential types have been created, the *Credential Types* page prompts you to add one.
++
 //image:credential-types-home-empty.png[Credential Types - empty]
-
++
 If credential types have been created, this page displays a list of existing and available Credential Types.
-
++
 //image:credential-types-home-with-example-types.png[Credential Types - example credential types]
-To view more information about a credential type, click the name of a credential or the Edit image:leftpencil.png[Edit, 15,15] icon.
+. Select the name of a credential or the Edit image:leftpencil.png[Edit, 15,15] icon to view more information about a credential type.
-Each credential type displays its own unique configurations in the *Input Configuration* field and the *Injector Configuration* field, if
-applicable.
+. On the *Details* tab, each credential type displays its own unique configurations in the *Input Configuration* field and the *Injector Configuration* field, if applicable.
 Both YAML and JSON formats are supported in the configuration fields.
+
+//NOTE The Back to Credential Types Tab throws an error.
diff --git a/downstream/modules/platform/ref-gs-about-container-registries.adoc b/downstream/modules/platform/ref-gs-about-container-registries.adoc
new file mode 100644
index 0000000000..34bb888a39
--- /dev/null
+++ b/downstream/modules/platform/ref-gs-about-container-registries.adoc
@@ -0,0 +1,10 @@
+:_newdoc-version: 2.18.4
+:_template-generated: 2025-06-25
+:_mod-docs-content-type: REFERENCE
+
+[id="about-container-registries_{context}"]
+= About container registries
+
+If you have many {ExecEnvName} that you want to maintain, you can store them in a container registry linked to your {PrivateHubName}.
+
+For more information, see link:{URLBuilder}/assembly-populate-container-registry[Populating your private automation hub container registry] from the {TitleBuilder} guide.
diff --git a/downstream/modules/platform/ref-gs-install-config.adoc b/downstream/modules/platform/ref-gs-install-config.adoc
new file mode 100644
index 0000000000..714244eb66
--- /dev/null
+++ b/downstream/modules/platform/ref-gs-install-config.adoc
@@ -0,0 +1,13 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-gs-install-config"]
+
+= {PlatformNameShort} installation and configuration
+
+{PlatformName} offers flexible installation and configuration options.
+Depending on your organization's needs and environment, you can install {PlatformName} by using one of the following methods:
+
+* link:{LinkInstallationGuide}
+* link:{LinkOperatorInstallation}
+* link:{BaseURL}/ansible_on_clouds/2.x[Cloud environments]
+* link:{LinkContainerizedInstall}
diff --git a/downstream/modules/platform/ref-guidelines-hosts-groups.adoc b/downstream/modules/platform/ref-guidelines-hosts-groups.adoc
index 7ac74f44c8..47f8daf332 100644
--- a/downstream/modules/platform/ref-guidelines-hosts-groups.adoc
+++ b/downstream/modules/platform/ref-guidelines-hosts-groups.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-guidelines-hosts-groups"]
 
 = Guidelines for hosts and groups
@@ -6,6 +8,10 @@
 * When using an external database, ensure the `[database]` sections of your inventory file are properly set up.
 * To improve performance, do not colocate the database and the {ControllerName} on the same server.
+[IMPORTANT]
+====
+When using an external database with {PlatformNameShort}, you must create and maintain that database. Ensure that you clear your external database when uninstalling {PlatformNameShort}.
+====
 
 .{HubNameStart}
 * If there is an `[automationhub]` group, you must include the variables `automationhub_pg_host` and `automationhub_pg_port`.
@@ -13,7 +19,7 @@
 * Do not install {HubNameMain} and {ControllerName} on the same node.
 * Provide a reachable IP address or fully qualified domain name (FQDN) for the `[automationhub]` and `[automationcontroller]` hosts to ensure that users can synchronize and install content from {HubNameMain} and {ControllerName} from a different node.
 +
-The FQDN must not contain either the `-` or the `_` symbols, as it will not be processed correctly.
+The FQDN must not contain the `_` symbol, as it will not be processed correctly in Skopeo. You may use the `-` symbol, as long as it is not at the start or the end of the host name.
 +
 Do not use `localhost`.
@@ -31,11 +37,14 @@ If you use one value in `[database]` and both {ControllerName} and {HubNameMain} define it,
 
 .{ControllerNameStart}
 * {ControllerNameStart} does not configure replication or failover for the database that it uses.
-* {ControllerName} works with any replication that you have.
+* {ControllerNameStart} works with any replication that you have.
 
 .{EDAcontroller}
 * {EDAcontroller} must be installed on a separate server and cannot be installed on the same host as {HubName} and {ControllerName}.
 
+.{GatewayStart}
+* The {Gateway} is the service that handles authentication and authorization for {PlatformNameShort}. It provides a single entry into the platform and serves the platform’s user interface.
+
 .Clustered installations
 * When upgrading an existing cluster, you can also reconfigure your cluster to omit existing instances or instance groups.
 Omitting the instance or the instance group from the inventory file is not enough to remove them from the cluster.
diff --git a/downstream/modules/platform/ref-gw-access-rules-apps-tokens.adoc b/downstream/modules/platform/ref-gw-access-rules-apps-tokens.adoc
new file mode 100644
index 0000000000..f8330b72f6
--- /dev/null
+++ b/downstream/modules/platform/ref-gw-access-rules-apps-tokens.adoc
@@ -0,0 +1,26 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-gw-access-rules-apps-tokens"]
+
+= Access rules for applications and tokens
+
+Access rules for applications are as follows:
+
+* Platform administrators can view and manipulate all applications in the system.
+//[ddacosta-aap-38726] Org administrators do not have this access in gateway.
+//* Organization administrators can view and manipulate all applications belonging to organization members.
+//* Other users can only view, update, and delete their own applications, but cannot create any new applications.
+* Platform auditors can only view applications in the system.
+
+Tokens, on the other hand, are resources used to authenticate incoming requests and mask the permissions of the underlying user.
+
+Access rules for tokens are as follows:
+
+* Users can create personal access tokens for themselves.
+* Platform administrators are able to view and manipulate every token in the system.
+//[ddacosta-aap-38726] Org administrators do not have this access in gateway.
+//* Organization administrators are able to view and manipulate all tokens belonging to organization members.
+* Platform auditors can only view tokens in the system.
+* Other normal users are only able to view and manipulate their own tokens.
+
+[NOTE]
+====
+Users can only view the token or refresh the token value at the time of creation.
+====
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-gw-application-functions.adoc b/downstream/modules/platform/ref-gw-application-functions.adoc
new file mode 100644
index 0000000000..c898b13a4a
--- /dev/null
+++ b/downstream/modules/platform/ref-gw-application-functions.adoc
@@ -0,0 +1,19 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-gw-application-functions"]
+
+= Application functions
+
+Several OAuth 2 utilities are available for authorization, token refresh, and revocation.
+You can specify the following grant types when creating an application:
+
+Password:: This grant type is ideal for users who have native access to the web application and must be used when the client is the resource owner.
+Authorization code:: This grant type should be used when access tokens must be issued directly to an external application or service.
+
+[NOTE]
+====
+You can only use the authorization code type to acquire an access token when using an application. When integrating an external web application with {PlatformNameShort}, that web application might need to create OAuth2 tokens on behalf of users in that other web application. Creating an application in the platform with the authorization code grant type is the preferred way to do this because:
+
+* This allows an external application to obtain a token from {PlatformNameShort} for a user, using their credentials.
+* Compartmentalized tokens issued for a particular application enable those tokens to be easily managed, for example, by revoking _all_ tokens associated with that application without having to revoke all tokens in the system.
+====
diff --git a/downstream/modules/platform/ref-gw-request-token-after-expiration.adoc b/downstream/modules/platform/ref-gw-request-token-after-expiration.adoc
new file mode 100644
index 0000000000..9a9b5750fc
--- /dev/null
+++ b/downstream/modules/platform/ref-gw-request-token-after-expiration.adoc
@@ -0,0 +1,21 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="gw-request-token-after-expiration"]
+
+= Requesting an access token after expiration
+
+The default expiration for access tokens is 1 year.
+
+The best way to set up application integrations using the *Authorization code* grant type is to allowlist the origins for those cross-site requests. More generally, you must allowlist the service or application you are integrating with the platform, for which you want to provide access tokens.
+
+To do this, have your administrator add this allowlist to their local {PlatformNameShort} settings file:
+
+----
+CORS_ORIGIN_ALLOW_ALL = True
+CORS_ALLOWED_ORIGIN_REGEXES = [
+    r"http://django-oauth-toolkit.herokuapp.com*",
+    r"http://www.example.com*"
+]
+----
+
+Where `http://django-oauth-toolkit.herokuapp.com` and `http://www.example.com` are applications requiring tokens with which to access the platform.
diff --git a/downstream/modules/platform/ref-ha-hub-reqs.adoc b/downstream/modules/platform/ref-ha-hub-reqs.adoc
index 67bd7443d6..565801aefd 100644
--- a/downstream/modules/platform/ref-ha-hub-reqs.adoc
+++ b/downstream/modules/platform/ref-ha-hub-reqs.adoc
@@ -1,17 +1,21 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-ha-hub-reqs"]
 
 = High availability {HubName} requirements
 
-Before deploying a high availability (HA) {HubName}, ensure that you have a shared filesystem installed in your environment and that you have configured your network storage system, if applicable.
+Before deploying a high availability (HA) {HubName}, ensure that you have a shared storage file system installed in your environment and that you have configured your network storage system, if applicable.
+
+== Required shared storage
 
-== Required shared filesystem
+Shared storage is required when installing more than one {HubName} with a `file` storage backend. The supported shared storage type for RPM-based installations is Network File System (NFS).
 
-A high availability {HubName} requires you to have a shared file system, such as NFS, already installed in your environment. Before you run the {PlatformName} installer, verify that you installed the `/var/lib/pulp` directory across your cluster as part of the shared file system installation.
+Before you run the {PlatformName} installer, verify that the `/var/lib/pulp` directory is mounted across your cluster as part of the shared storage file system installation.
+The {PlatformName} installer returns an error if `/var/lib/pulp` is not detected in one of your nodes, causing your high availability {HubName} setup to fail.
 
 If you receive an error stating `/var/lib/pulp` is not detected in one of your nodes, ensure `/var/lib/pulp` is properly mounted in all servers and re-run the installer.
 
-== Installing firewalld for network storage
+== Installing firewalld for HA hub deployment
 
 If you intend to install a HA {HubName} using a network storage on the {HubName} nodes itself, you must first install and use `firewalld` to open the necessary ports as required by your shared storage system before running the {PlatformNameShort} installer.
diff --git a/downstream/modules/platform/ref-hashicorp-signed-ssh.adoc b/downstream/modules/platform/ref-hashicorp-signed-ssh.adoc
index dbef786e46..d614a25b02 100644
--- a/downstream/modules/platform/ref-hashicorp-signed-ssh.adoc
+++ b/downstream/modules/platform/ref-hashicorp-signed-ssh.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-hashicorp-signed-ssh"]
 
 = HashiCorp Vault Signed SSH
diff --git a/downstream/modules/platform/ref-hashicorp-vault-lookup.adoc b/downstream/modules/platform/ref-hashicorp-vault-lookup.adoc
index d66525d3d1..a61f433dbc 100644
--- a/downstream/modules/platform/ref-hashicorp-vault-lookup.adoc
+++ b/downstream/modules/platform/ref-hashicorp-vault-lookup.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-hashicorp-vault-lookup"]
 
 = HashiCorp Vault Secret Lookup
diff --git a/downstream/modules/platform/ref-hub-variables.adoc b/downstream/modules/platform/ref-hub-variables.adoc
index dbbb8aabd7..5bde6f9d01 100644
--- a/downstream/modules/platform/ref-hub-variables.adoc
+++ b/downstream/modules/platform/ref-hub-variables.adoc
@@ -1,384 +1,446 @@
-[id="ref-hub-variables"]
-
-= {HubNameMain} variables
-
-[cols="50%,50%",options="header"]
-|====
-| *Variable* | *Description*
-| *`automationhub_admin_password`* | Required
-
-Passwords must be enclosed in quotes when they are provided in plain text in the inventory file.
-| *`automationhub_api_token`* a| If upgrading from {PlatformNameShort} 2.0 or earlier, you must either:
-
-* provide an existing {HubNameMain} token as `automationhub_api_token`, or
-
-* set `generate_automationhub_token` to `true` to generate a new token
-
-Generating a new token invalidates the existing token.
-| *`automationhub_authentication_backend`* a| This variable is not set by default.
-Set it to `ldap` to use LDAP authentication.
- -When this is set to `ldap`, you must also set the following variables: - -* `automationhub_ldap_server_uri` -* `automationhub_ldap_bind_dn` -* `automationhub_ldap_bind_password` -* `automationhub_ldap_user_search_base_dn` -* `automationhub_ldap_group_search_base_dn` - -If any of these are absent, the installation will be halted. - -| *`automationhub_auto_sign_collections`* | If a collection signing service is enabled, collections are not signed automatically by default. - -Setting this parameter to `true` signs them by default. - -Default = `false`. -| *`automationhub_backup_collections`* | _Optional_ - -{HubNameMain} provides artifacts in `/var/lib/pulp`. -{ControllerNameStart} automatically backs up the artifacts by default. - -You can also set `automationhub_backup_collections` to false and the backup/restore process does not then backup or restore `/var/lib/pulp`. - -Default = `true`. -| *`automationhub_collection_download_count`* | _Optional_ - -Determines whether download count is displayed on the UI. - -Default = `false`. -| *`automationhub_collection_seed_repository`* a| When you run the bundle installer, validated content is uploaded to the `validated` repository, and certified content is uploaded to the `rh-certified` repository. - -By default, both certified and validated content are uploaded. - -Possible values of this variable are 'certified' or 'validated'. - -If you do not want to install content, set `automationhub_seed_collections` to `false` to disable the seeding. - -If you only want one type of content, set `automationhub_seed_collections` to `true` and `automationhub_collection_seed_repository` to the type of content you do want to include. -| *`automationhub_collection_signing_service_key`* | If a collection signing service is enabled, you must provide this variable to ensure that collections can be properly signed. - -`/absolute/path/to/key/to/sign` -| *`automationhub_collection_signing_service_script`* | If a collection signing service is enabled, you must provide this variable to ensure that collections can be properly signed. - -`/absolute/path/to/script/that/signs` -| *`automationhub_create_default_collection_signing_service`* | Set this variable to true to create a collection signing service. - -Default = `false`. -| *`automationhub_container_signing_service_key`* | If a container signing service is enabled, you must provide this variable to ensure that containers can be properly signed. - -`/absolute/path/to/key/to/sign` -| *`automationhub_container_signing_service_script`* | If a container signing service is enabled, you must provide this variable to ensure that containers can be properly signed. - -`/absolute/path/to/script/that/signs` -| *`automationhub_create_default_container_signing_service`* | Set this variable to true to create a container signing service. - -Default = `false`. -| *`automationhub_disable_hsts`* | The default installation deploys a TLS enabled {HubNameMain}. -Use this variable if you deploy {HubName} with _HTTP Strict Transport Security_ (HSTS) web-security policy enabled. -This variable disables, the HSTS web-security policy mechanism. - -Default = `false`. -| *`automationhub_disable_https`* | _Optional_ - -If {HubNameMain} is deployed with HTTPS enabled. - -Default = `false`. -| *`automationhub_enable_api_access_log`* | When set to `true`, this variable creates a log file at `/var/log/galaxy_api_access.log` that logs all user actions made to the platform, including their username and IP address. - -Default = `false`. 
-| *`automationhub_enable_analytics`* | A Boolean indicating whether to enable pulp analytics for the version of pulpcore used in {HubName} in {PlatformNameShort} {PlatformVers}. - -To enable pulp analytics, set `automationhub_enable_analytics` to true. - -Default = `false`. -| *`automationhub_enable_unauthenticated_collection_access`* | Set this variable to true to enable unauthorized users to view collections. - -Default = `false`. -| *`automationhub_enable_unauthenticated_collection_download`* | Set this variable to true to enable unauthorized users to download collections. - -Default = `false`. -| *`automationhub_importer_settings`* | _Optional_ - -Dictionary of setting to pass to galaxy-importer. - -At import time collections can go through a series of checks. - -Behavior is driven by `galaxy-importer.cfg` configuration. - -Examples are `ansible-doc`, `ansible-lint`, and `flake8`. - -This parameter enables you to drive this configuration. -| *`automationhub_main_url`* | The main {HubName} URL that clients connect to. - -For example, \https://. - -Use `automationhub_main_url` to specify the main {HubName} URL that clients connect to if you are implementing {RHSSO} on your {HubName} environment. - -If not specified, the first node in the `[automationhub]` group is used. -| *`automationhub_pg_database`* | _Required_ - -The database name. - -Default = `automationhub`. -| *`automationhub_pg_host`* | Required if not using an internal database. - -The hostname of the remote PostgreSQL database used by {HubName}. - -Default = `127.0.0.1`. -| *`automationhub_pg_password`* | The password for the {HubName} PostgreSQL database. - -Use of special characters for `automationhub_pg_password` is limited. -The `!`, `#`, `0` and `@` characters are supported. -Use of other special characters can cause the setup to fail. -| *`automationhub_pg_port`* | Required if not using an internal database. - -Default = 5432. -| *`automationhub_pg_sslmode`* | Required. - -Default = `prefer`. -| *`automationhub_pg_username`* | Required - -Default = `automationhub`. -| *`automationhub_require_content_approval`* | _Optional_ - -Value is `true` if {HubName} enforces the approval mechanism before collections are made available. - -By default when you upload collections to {HubName} an administrator must approve it before they are made available to the users. - -If you want to disable the content approval flow, set the variable to `false`. - -Default = `true`. -| *`automationhub_seed_collections`* | A Boolean that defines whether or not preloading is enabled. - -When you run the bundle installer, validated content is uploaded to the `validated` repository, and certified content is uploaded to the `rh-certified` repository. - -By default, both certified and validated content are uploaded. - -If you do not want to install content, set `automationhub_seed_collections` to `false` to disable the seeding. - -If you only want one type of content, set `automationhub_seed_collections` to `true` and `automationhub_collection_seed_repository` to the type of content you do want to include. - -Default = `true`. -| *`automationhub_ssl_cert`* | _Optional_ - -`/path/to/automationhub.cert` -Same as `web_server_ssl_cert` but for {HubName} UI and API. -| *`automationhub_ssl_key`* | _Optional_ - -`/path/to/automationhub.key`. - -Same as `web_server_ssl_key` but for {HubName} UI and API -| *`automationhub_ssl_validate_certs`* | For {PlatformName} 2.2 and later, this value is no longer used. 
- -Set value to `true` if {HubName} must validate certificates when requesting itself because by default, {PlatformNameShort} deploys with self-signed certificates. - -Default = `false`. -| *`automationhub_upgrade`* | *Deprecated* - -For {PlatformNameShort} 2.2.1 and later, the value of this has been fixed at `true`. - -{HubNameStart} always updates with the latest packages. -| *`automationhub_user_headers`* | List of nginx headers for {HubNameMain}'s web server. - -Each element in the list is provided to the web server's nginx configuration as a separate line. - -Default = empty list -| *`ee_from_hub_only`* | When deployed with {HubName} the installer pushes {ExecEnvShort} images to {HubName} and configures {ControllerName} to pull images from the {HubName} registry. - -To make {HubName} the only registry to pull {ExecEnvShort} images from, set this variable to `true`. - -If set to `false`, {ExecEnvShort} images are also taken directly from Red Hat. - -Default = `true` when the bundle installer is used. -| *`generate_automationhub_token`* a| If upgrading from {PlatformName} 2.0 or earlier, choose one of the following options: - -* provide an existing {HubNameMain} token as `automationhub_api_token` - -* set `generate_automationhub_token` to `true` to generate a new token. -Generating a new token will invalidate the existing token. -| *`nginx_hsts_max_age`* | This variable specifies how long, in seconds, the system should be considered as a _HTTP Strict Transport Security_ (HSTS) host. That is, how long HTTPS is used exclusively for communication. - -Default = 63072000 seconds, or two years. -| *`nginx_tls_protocols`* | Defines support for `ssl_protocols` in Nginx. - -Values available `TLSv1`, `TLSv1.1, `TLSv1.2`, `TLSv1.3` - -The TLSv1.1 and TLSv1.2 parameters only work when OpenSSL 1.0.1 or higher is used. - -The TLSv1.3 parameter only works when OpenSSL 1.1.1 or higher is used. - -If `nginx_tls-protocols = ['TLSv1.3']` only TLSv1.3 is enabled. -To set more than one protocol use `nginx_tls_protocols = ['TLSv1.2', 'TLSv.1.3']` - -Default = `TLSv1.2`. -| *`pulp_db_fields_key`* | Relative or absolute path to the Fernet symmetric encryption key that you want to import. -The path is on the Ansible management node. -It is used to encrypt certain fields in the database, such as credentials. -If not specified, a new key will be generated. -| *`sso_automation_platform_login_theme`* | _Optional_ - -Used for {PlatformNameShort} managed and externally managed {RHSSO}. - -Path to the directory where theme files are located. -If changing this variable, you must provide your own theme files. - -Default = `ansible-automation-platform`. -| *`sso_automation_platform_realm`* | _Optional_ - -Used for {PlatformNameShort} managed and externally managed {RHSSO}. - -The name of the realm in SSO. - -Default = `ansible-automation-platform`. -| *`sso_automation_platform_realm_displayname`* | _Optional_ - -Used for {PlatformNameShort} managed and externally managed {RHSSO}. - -Display name for the realm. - -Default = `Ansible Automation Platform`. -//| *`sso_http_port`* or *`sso_https_port`* | IP or routable hostname for SSO. -// -//Default = `8080` for http, `8443` for https -| *`sso_console_admin_username`* | _Optional_ - -Used for {PlatformNameShort} managed and externally managed {RHSSO}. - -SSO administration username. - -Default = `admin`. -| *`sso_console_admin_password`* | _Required_ - -Used for {PlatformNameShort} managed and externally managed {RHSSO}. - -SSO administration password. 
-//| *`sso_console_keystore_file`* | Keystore file to install in SSO node. -// -//`/path/to/sso.jks` -| *`sso_custom_keystore_file`* | _Optional_ - -Used for {PlatformNameShort} managed {RHSSO} only. - -Customer-provided keystore for SSO. -| *`sso_host`* | _Required_ - -Used for {PlatformNameShort} externally managed {RHSSO} only. - -{HubNameStart} requires SSO and SSO administration credentials for -authentication. - -If SSO is not provided in the inventory for configuration, then you must use this variable to define the SSO host. -| *`sso_keystore_file_remote`* | _Optional_ - -Used for {PlatformNameShort} managed {RHSSO} only. - -Set to `true` if the customer-provided keystore is on a remote node. - -Default = `false`. -| *`sso_keystore_name`* | _Optional_ - -Used for {PlatformNameShort} managed {RHSSO} only. - -Name of keystore for SSO. - -Default = `ansible-automation-platform`. -| *`sso_keystore_password`* | Password for keystore for HTTPS enabled SSO. - -Required when using {PlatformNameShort} managed SSO and when HTTPS is enabled. The default install deploys SSO with `sso_use_https=true`. -| *`sso_redirect_host`* | _Optional_ - -Used for {PlatformNameShort} managed and externally managed {RHSSO}. - -If `sso_redirect_host` is set, it is used by the application to connect to SSO for authentication. - -This must be reachable from client machines. -| *`sso_ssl_validate_certs`* | _Optional_ - -Used for {PlatformNameShort} managed and externally managed {RHSSO}. - -Set to `true` if the certificate must be validated during connection. - -Default = `true`. - -| *`sso_use_https`* | _Optional_ - -Used for {PlatformNameShort} managed and externally managed {RHSSO} if Single Sign On uses HTTPS. - -Default = `true`. -|==== - -For {HubNameMain} to connect to LDAP directly, you must configure the following variables: -A list of additional LDAP related variables that can be passed using the `ldap_extra_settings` variable, see the link:https://django-auth-ldap.readthedocs.io/en/latest/reference.html#settings[Django reference documentation]. - -[cols="50%,50%",options="header"] -|==== -| *Variable* | *Description* -| *`automationhub_ldap_bind_dn`* | The name to use when binding to the LDAP server with `automationhub_ldap_bind_password`. - -Must be set when integrating {PrivateHubName} with LDAP, or the installation will fail. - -| *`automationhub_ldap_bind_password`* | _Required_ - -The password to use with `automationhub_ldap_bind_dn`. - -Must be set when integrating {PrivateHubName} LDAP, or the installation will fail. -| *`automationhub_ldap_group_search_base_dn`* | An LDAP Search object that finds all LDAP groups that users might belong to. - -If your configuration makes any references to LDAP groups, you must set this variable and `automationhub_ldap_group_type`. - -Must be set when integrating {PrivateHubName} with LDAP, or the installation will fail. - -Default = `None` -| *`automationhub_ldap_group_search_filter`* | _Optional_ - -Search filter for finding group membership. - -Variable identifies what objectClass type to use for mapping groups with {HubName} and LDAP. -Used for installing {HubName} with LDAP. - -Default = `(objectClass=Group)` -| *`automationhub_ldap_group_search_scope`* | _Optional_ - -Scope to search for groups in an LDAP tree using the django framework for LDAP authentication. -Used for installing {HubName} with LDAP. - -Default = `SUBTREE` -| *`automationhub_ldap_group_type`* | - -Describes the type of group returned by *automationhub_ldap_group_search*. 
-
-This is set dynamically based on the the values of *automationhub_ldap_group_type_params* and *automationhub_ldap_group_type_class*, otherwise it is the default value coming from django-ldap which is 'None'
-
-Default = `django_auth_ldap.config:GroupOfNamesType`
-| *`automationhub_ldap_group_type_class`* | _Optional_
-
-The importable path for the django-ldap group type class.
-
-Variable identifies the group type used during group searches within the django framework for LDAP authentication.
-Used for installing {HubName} with LDAP.
-
-Default =`django_auth_ldap.config:GroupOfNamesType`
-//Removed as it seems not to be an inventory file variable, but is used in ldapextras.yml
-//| *`automationhub_ldap_group_type_params`* |
-//
-//Default = "name_attr": "cn"
-| *`automationhub_ldap_server_uri`* | The URI of the LDAP server.
-
-Use any URI that is supported by your underlying LDAP libraries.
-
-Must be set when integrating {PrivateHubName} LDAP, or the installation will fail.
-| *`automationhub_ldap_user_search_base_dn`* | An LDAP Search object that locates a user in the directory.
-The filter parameter must contain the placeholder %(user)s for the username.
-It must return exactly one result for authentication to succeed.
-
-Must be set when integrating {PrivateHubName} with LDAP, or the installation will fail.
-| *`automationhub_ldap_user_search_filter`* | _Optional_
-
-Default = `'(uid=%(user)s)'`
-| *`automationhub_ldap_user_search_scope`* | _Optional_
-
-Scope to search for users in an LDAP tree by using the django framework for LDAP authentication.
-Used for installing {HubName} with LDAP.
-
-Default = `SUBTREE`
-|====
+:_mod-docs-content-type: REFERENCE
+
+[id="hub-variables"]
+
+= {HubNameStart} variables
+
+[cols="25%,25%,30%,10%,10%",options="header"]
+|===
+| RPM variable name | Container variable name | Description | Required or optional | Default
+
+| `automationhub_admin_password`
+| `hub_admin_password`
+| {HubNameStart} administrator password.
+Use of special characters for this variable is limited. The password can include any printable ASCII character except `/`, `"`, or `@`.
+| Required
+|
+
+| `automationhub_api_token`
+|
+| Set an existing token for the installation program to use.
+For example, if you regenerate a token in the {HubName} UI, the previous token is invalidated; use this variable to provide the new token to the installation program the next time you run it.
+| Optional
+|
+
+| `automationhub_auto_sign_collections`
+| `hub_collection_auto_sign`
+| If a collection signing service is enabled, collections are not signed automatically by default.
+Set this variable to `true` to sign collections by default.
+| Optional
+| `false`
+
+| `automationhub_backup_collections`
+|
+| {HubNameMain} provides artifacts in `/var/lib/pulp`. These artifacts are automatically backed up by default.
+Set this variable to `false` to prevent backup or restore of `/var/lib/pulp`.
+| Optional
+| `true`
+
+| `automationhub_client_max_body_size`
+| `hub_nginx_client_max_body_size`
+| Maximum allowed size for data sent to {HubName} through NGINX.
+| Optional
+| `20m`
+
+| `automationhub_collection_download_count`
+|
+| Denote whether or not the collection download count should be displayed in the UI.
+| Optional
+| `false`
+
+| `automationhub_collection_seed_repository`
+|
+| Controls the type of content to upload when `automationhub_seed_collections` is set to `true`.
+Valid options include: `certified`, `validated`
+| Optional
+| Both `certified` and `validated`.
+
+| `automationhub_collection_signing_service_key`
+| `hub_collection_signing_key`
+| Path to the collection signing key file.
+| Required if a collection signing service is enabled.
+|
+
+| `automationhub_container_repair_media_type`
+|
+| Denote whether or not to run the command `pulpcore-manager container-repair-media-type`.
+
+Valid options include: `true`, `false`, `auto`
+| Optional
+| `auto`
+
+| `automationhub_container_signing_service_key`
+| `hub_container_signing_key`
+| Path to the container signing key file.
+| Required if a container signing service is enabled.
+|
+
+| `automationhub_create_default_collection_signing_service`
+| `hub_collection_signing`
+| Set this variable to `true` to enable a collection signing service.
+| Optional
+| `false`
+
+| `automationhub_create_default_container_signing_service`
+| `hub_container_signing`
+| Set this variable to `true` to enable a container signing service.
+| Optional
+| `false`
+
+| `automationhub_disable_hsts`
+| `hub_nginx_disable_hsts`
+| Controls whether HTTP Strict Transport Security (HSTS) is enabled or disabled for {HubName}.
+Set this variable to `true` to disable HSTS.
+| Optional
+| `false`
+
+| `automationhub_disable_https`
+| `hub_nginx_disable_https`
+| Controls whether HTTPS is enabled or disabled for {HubName}.
+Set this variable to `true` to disable HTTPS.
+| Optional
+| `false`
+
+| `automationhub_enable_api_access_log`
+|
+| Controls whether logging is enabled or disabled at `/var/log/galaxy_api_access.log`.
+The file logs all user actions made to the platform, including username and IP address.
+Set this variable to `true` to enable this logging.
+| Optional
+| `false`
+
+| `automationhub_enable_unauthenticated_collection_access`
+|
+| Controls whether read-only access is enabled or disabled for unauthenticated users viewing collections or namespaces for {HubName}.
+Set this variable to `true` to enable read-only access.
+| Optional
+| `false`
+
+| `automationhub_enable_unauthenticated_collection_download`
+|
+| Controls whether or not unauthenticated users can download read-only collections from {HubName}.
+Set this variable to `true` to enable download of read-only collections.
+| Optional
+| `false`
+
+| `automationhub_firewalld_zone`
+| `hub_firewall_zone`
+| The firewall zone where {HubName} related firewall rules are applied. This controls which networks can access {HubName} based on the zone's trust level.
+| Optional
+| RPM = no default set. Container = `public`.
+
+| `automationhub_force_change_admin_password`
+|
+| Denote whether or not to require the change of the default administrator password for {HubName} during installation.
+
+Set to `true` to require the user to change the default administrator password during installation.
+| Optional
+| `false`
+
+| `automationhub_importer_settings`
+| `hub_galaxy_importer`
+| Dictionary of settings to pass to the `galaxy-importer.cfg` configuration file. These settings control how the `galaxy-importer` service processes and validates Ansible content.
+Example values include: `ansible-doc`, `ansible-lint`, and `flake8`.
+| Optional
+|
+
+| `automationhub_nginx_tls_files_remote`
+|
+| Denote whether the web certificate sources are local to the installation program (`false`) or on the remote component server (`true`).
+| Optional
+| The value defined in `automationhub_tls_files_remote`.
+
+| `automationhub_pg_cert_auth`
+| `hub_pg_cert_auth`
+| Controls whether client certificate authentication is enabled or disabled on the {HubName} PostgreSQL database.
+Set this variable to `true` to enable client certificate authentication.
+| Optional
+| `false`
+
+| `automationhub_pg_database`
+| `hub_pg_database`
+| Name of the PostgreSQL database used by {HubName}.
+| Optional
+| RPM = `automationhub`.
+Container = `pulp`
+
+| `automationhub_pg_host`
+| `hub_pg_host`
+| Hostname of the PostgreSQL database used by {HubName}.
+| Required
+| RPM = `127.0.0.1`. Container = no default.
+
+| `automationhub_pg_password`
+| `hub_pg_password`
+| Password for the {HubName} PostgreSQL database user.
+Use of special characters for this variable is limited. The `!`, `#`, `0`, and `@` characters are supported. Use of other special characters can cause the setup to fail.
+| Optional
+|
+
+| `automationhub_pg_port`
+| `hub_pg_port`
+| Port number for the PostgreSQL database used by {HubName}.
+| Optional
+| `5432`
+
+| `automationhub_pg_sslmode`
+| `hub_pg_sslmode`
+| Controls the SSL/TLS mode to use when {HubName} connects to the PostgreSQL database.
+Valid options include `verify-full`, `verify-ca`, `require`, `prefer`, `allow`, `disable`.
+| Optional
+| `prefer`
+
+| `automationhub_pg_username`
+| `hub_pg_username`
+| Username for the {HubName} PostgreSQL database user.
+| Optional
+| RPM = `automationhub`. Container = `pulp`.
+
+| `automationhub_pgclient_sslcert`
+| `hub_pg_tls_cert`
+| Path to the PostgreSQL SSL/TLS certificate file for {HubName}.
+| Required if using client certificate authentication.
+|
+
+| `automationhub_pgclient_sslkey`
+| `hub_pg_tls_key`
+| Path to the PostgreSQL SSL/TLS key file for {HubName}.
+| Required if using client certificate authentication.
+|
+
+| `automationhub_pgclient_tls_files_remote`
+|
+| Denote whether the PostgreSQL client certificate sources are local to the installation program (`false`) or on the remote component server (`true`).
+| Optional
+| The value defined in `automationhub_tls_files_remote`.
+
+
+| `automationhub_require_content_approval`
+|
+| Controls whether content approval is enforced for {HubName}.
+By default when you upload collections to {HubName}, an administrator must approve them before they are made available to users.
+To disable the content approval flow, set the variable to `false`.
+| Optional
+| `true`
+
+| `automationhub_restore_signing_keys`
+|
+| Controls whether or not existing signing keys should be restored from a backup.
+Set to `false` to disable restoration of existing signing keys.
+| Optional
+| `true`
+
+| `automationhub_seed_collections`
+| `hub_seed_collections`
+| Controls whether or not pre-loading of collections is enabled.
+When you run the bundle installer, validated content is uploaded to the `validated` repository, and certified content is uploaded to the `rh-certified` repository. By default, certified content and validated content are both uploaded.
+If you do not want to pre-load content, set this variable to `false`.
+For the RPM-based installer, if you only want one type of content, set this variable to `true` and set the `automationhub_collection_seed_repository` variable to the type of content you want to include.
+| Optional
+| `true`
+
+| `automationhub_ssl_cert`
+| `hub_tls_cert`
+| Path to the SSL/TLS certificate file for {HubName}.
+| Optional
+|
+
+| `automationhub_ssl_key`
+| `hub_tls_key`
+| Path to the SSL/TLS key file for {HubName}.
+| Optional
+|
+
+| `automationhub_tls_files_remote`
+| `hub_tls_remote`
+| Denote whether the {HubName} provided certificate files are local to the installation program (`false`) or on the remote component server (`true`).
+| Optional
+| `false`
+
+| `automationhub_use_archive_compression`
+| `hub_use_archive_compression`
+| Controls whether archive compression is enabled or disabled for {HubName}. You can control this functionality globally by using `use_archive_compression`.
+| Optional
+| `true`
+
+| `automationhub_use_db_compression`
+| `hub_use_db_compression`
+| Controls whether database compression is enabled or disabled for {HubName}. You can control this functionality globally by using `use_db_compression`.
+| Optional
+| `true`
+
+| `automationhub_user_headers`
+| `hub_nginx_user_headers`
+| List of additional NGINX headers to add to {HubName}'s NGINX configuration.
+| Optional
+| `[]`
+
+| `generate_automationhub_token`
+|
+| Controls whether or not a token is generated for {HubName} during installation. By default, a token is automatically generated during a fresh installation.
+If set to `true`, a token is regenerated during installation.
+| Optional
+| `false`
+
+|
+| `hub_extra_settings`
+a| Defines additional settings for use by {HubName} during installation.
+
+For example:
+----
+hub_extra_settings:
+  - setting: REDIRECT_IS_HTTPS
+    value: True
+----
+| Optional
+| `[]`
+
+| `nginx_hsts_max_age`
+| `hub_nginx_hsts_max_age`
+| Maximum duration (in seconds) that HTTP Strict Transport Security (HSTS) is enforced for {HubName}.
+| Optional
+| `63072000`
+
+| `pulp_secret`
+| `hub_secret_key`
+| Secret key value used by {HubName} to sign and encrypt data.
+| Optional
+|
+
+|
+| `hub_azure_account_key`
+| Azure blob storage account key.
+| Required if using an Azure blob storage backend.
+|
+
+|
+| `hub_azure_account_name`
+| Account name associated with the Azure blob storage.
+| Required if using an Azure blob storage backend.
+|
+
+|
+| `hub_azure_container`
+| Name of the Azure blob storage container.
+| Optional
+| `pulp`
+
+|
+| `hub_azure_extra_settings`
+| Defines extra parameters for the Azure blob storage backend.
+For more information about the list of parameters, see link:https://django-storages.readthedocs.io/en/latest/backends/azure.html#settings[django-storages documentation - Azure Storage].
+| Optional
+| `{}`
+
+|
+| `hub_collection_signing_pass`
+| Password for the automation content collection signing service.
+| Required if the collection signing service is protected by a passphrase.
+|
+
+|
+| `hub_collection_signing_service`
+| Service for signing collections.
+| Optional
+| `ansible-default`
+
+|
+| `hub_container_signing_pass`
+| Password for the automation content container signing service.
+| Required if the container signing service is protected by a passphrase.
+|
+
+|
+| `hub_container_signing_service`
+| Service for signing containers.
+| Optional
+| `container-default`
+
+|
+| `hub_nginx_http_port`
+| Port number that {HubName} listens on for HTTP requests.
+| Optional
+| `8081`
+
+|
+| `hub_nginx_https_port`
+| Port number that {HubName} listens on for HTTPS requests.
+| Optional
+| `8444`
+
+| `nginx_tls_protocols`
+| `hub_nginx_https_protocols`
+| Protocols that {HubName} supports when handling HTTPS traffic.
+| Optional
+| RPM = `[TLSv1.2]`. Container = `[TLSv1.2, TLSv1.3]`.
+
+|
+| `hub_pg_socket`
+| UNIX socket used by {HubName} to connect to the PostgreSQL database.
+| Optional
+|
+
+|
+| `hub_s3_access_key`
+| AWS S3 access key.
+| Required if using an AWS S3 storage backend. +| + +| +| `hub_s3_bucket_name` +| Name of the AWS S3 storage bucket. +| Optional +| `pulp` + +| +| `hub_s3_extra_settings` +| Used to define extra parameters for the AWS S3 storage backend. +For more information about the list of parameters, see link:https://django-storages.readthedocs.io/en/latest/backends/amazon-S3.html#settings[django-storages documentation - Amazon S3]. +| Optional +| `{}` + +| +| `hub_s3_secret_key` +| AWS S3 secret key. +| Required if using an AWS S3 storage backend. +| + +| +| `hub_shared_data_mount_opts` +| Mount options for the Network File System (NFS) share. +| Optional +| `rw,sync,hard` + +| +| `hub_shared_data_path` +| Path to the Network File System (NFS) share with read, write, and execute (RWX) access. The value must match the format `host:dir`, for example `nfs-server.example.com:/exports/hub`. +| Required if installing more than one instance of {HubName} with a `file` storage backend. When installing a single instance of {HubName}, it is optional. +| + +| +| `hub_storage_backend` +| {HubNameStart} storage backend type. +Possible values include: `azure`, `file`, `s3`. +| Optional +| `file` + +| +| `hub_workers` +| Number of {HubName} workers. +| Optional +| `2` + + +// Michelle - commenting out postinstall vars. +// | | `hub_postinstall` | Enable {HubNameStart} postinstall. +// Default = `false` +// | | `hub_postinstall_async_delay` | Postinstall delay between retries. +// Default = `1` +// | | `hub_postinstall_async_retries` | +// Postinstall number of retries to perform. +// Default = `30` +// | | `hub_postinstall_dir` | {HubNameStart} postinstall directory. +// | | `hub_postinstall_ignore_files` | {HubNameStart} ignore files. +// | | `hub_postinstall_repo_ref` | {HubNameStart} repository branch or tag. +// Default = `main` +// | | `hub_postinstall_repo_url` | {HubNameStart} repository URL. + +|=== diff --git a/downstream/modules/platform/ref-images-inventory-variables.adoc b/downstream/modules/platform/ref-images-inventory-variables.adoc new file mode 100644 index 0000000000..aa98d67f2e --- /dev/null +++ b/downstream/modules/platform/ref-images-inventory-variables.adoc @@ -0,0 +1,113 @@ +:_mod-docs-content-type: REFERENCE + +[id="image-variables"] + += Image variables + +[cols="25%,25%,30%,10%,10%",options="header"] +|=== +| RPM variable name | Container variable name | Description | Required or optional | Default + +| `extra_images` +| +| Additional container images to pull from the configured container registry during deployment. +| Optional +| `ansible-builder-rhel8` + +| +| `controller_image` +| Container image for {ControllerName}. +| Optional +| `controller-rhel8:latest` + +| +| `de_extra_images` +| Additional decision environment container images to pull from the configured container registry during deployment. +| Optional +| `[]` + +| +| `de_supported_image` +| Supported decision environment container image. +| Optional +| `de-supported-rhel8:latest` + +| +| `eda_image` +| Backend container image for {EDAName}. +| Optional +| `eda-controller-rhel8:latest` + +| +| `eda_web_image` +| Front-end container image for {EDAName}. +| Optional +| `eda-controller-ui-rhel8:latest` + +| +| `ee_extra_images` +| Additional {ExecEnvShort} container images to pull from the configured container registry during deployment. +| Optional +| `[]` + +| +| `ee_minimal_image` +| Minimal {ExecEnvShort} container image. 
+| Optional
+| `ee-minimal-rhel8:latest`
+
+|
+| `ee_supported_image`
+| Supported {ExecEnvShort} container image.
+| Optional
+| `ee-supported-rhel8:latest`
+
+|
+| `gateway_image`
+| Container image for {Gateway}.
+| Optional
+| `gateway-rhel8:latest`
+
+|
+| `gateway_proxy_image`
+| Container image for {Gateway} proxy.
+| Optional
+| `gateway-proxy-rhel8:latest`
+
+|
+| `hub_image`
+| Backend container image for {HubName}.
+| Optional
+| `hub-rhel8:latest`
+
+|
+| `hub_web_image`
+| Front-end container image for {HubName}.
+| Optional
+| `hub-web-rhel8:latest`
+
+|
+| `pcp_image`
+| Container image for Performance Co-Pilot.
+| Optional
+| `pcp:latest`
+
+|
+| `postgresql_image`
+| Container image for PostgreSQL.
+| Optional
+| `postgresql-15:latest`
+
+|
+| `receptor_image`
+| Container image for Receptor.
+| Optional
+| `receptor-rhel8:latest`
+
+|
+| `redis_image`
+| Container image for Redis.
+| Optional
+| `redis-6:latest`
+
+|===
diff --git a/downstream/modules/platform/ref-mesh-minimum-resilient-config.adoc b/downstream/modules/platform/ref-mesh-minimum-resilient-config.adoc
index bcde7186ce..b1db267309 100644
--- a/downstream/modules/platform/ref-mesh-minimum-resilient-config.adoc
+++ b/downstream/modules/platform/ref-mesh-minimum-resilient-config.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="mesh-min-resilient"]
 
 = Minimum resilient configuration
@@ -9,7 +11,7 @@
 All nodes in the control plane are peered with all nodes in the `execution_nodes` group.
 This configuration is resilient because the execution nodes are reachable from all control nodes.
 The capacity algorithm determines which control node is chosen when a job is launched.
-Refer to link:https://docs.ansible.com/automation-controller/latest/html/userguide/jobs.html#at-capacity-determination-and-job-impact[Automation controller Capacity Determination and Job Impact] in the _Automation Controller User Guide_ for more information.
+Refer to link:{URLControllerUserGuide}/index#controller-capacity-determination[Automation controller capacity determination and job impact] in {TitleControllerUserGuide} for more information.
 
 The following inventory file defines this configuration.
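+
+For example, a minimal sketch of this topology (all hostnames are placeholders) peers every control node with every execution node:
+
+----
+[automationcontroller]
+controller1.example.org
+controller2.example.org
+
+[automationcontroller:vars]
+peers=execution_nodes
+
+[execution_nodes]
+execution1.example.org
+execution2.example.org
+----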
diff --git a/downstream/modules/platform/ref-mesh-multi-hop-execution.adoc b/downstream/modules/platform/ref-mesh-multi-hop-execution.adoc
index 5284daf6d7..1a0cadd9e3 100644
--- a/downstream/modules/platform/ref-mesh-multi-hop-execution.adoc
+++ b/downstream/modules/platform/ref-mesh-multi-hop-execution.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="mesh-multi-hop-execution"]
 
 = Multi-hopped execution node
diff --git a/downstream/modules/platform/ref-mesh-one-way-communication.adoc b/downstream/modules/platform/ref-mesh-one-way-communication.adoc
index 1324ed1d63..4fc99a483c 100644
--- a/downstream/modules/platform/ref-mesh-one-way-communication.adoc
+++ b/downstream/modules/platform/ref-mesh-one-way-communication.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="mesh-one-way-commuication"]
 
 = Outbound only connections to controller nodes
diff --git a/downstream/modules/platform/ref-mesh-segregated-execution_nodes.adoc b/downstream/modules/platform/ref-mesh-segregated-execution_nodes.adoc
index 5d5d6d360b..c9cf63009d 100644
--- a/downstream/modules/platform/ref-mesh-segregated-execution_nodes.adoc
+++ b/downstream/modules/platform/ref-mesh-segregated-execution_nodes.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="mesh-segregated-execution"]
 
 = Segregated local and remote execution configuration
diff --git a/downstream/modules/platform/ref-multiple-hybrid-nodes.adoc b/downstream/modules/platform/ref-multiple-hybrid-nodes.adoc
index 779b5c3cee..014809ade8 100644
--- a/downstream/modules/platform/ref-multiple-hybrid-nodes.adoc
+++ b/downstream/modules/platform/ref-multiple-hybrid-nodes.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
 [id="ref-multiple-hybrid-nodes"]
 
 = Multiple hybrid nodes inventory file example
diff --git a/downstream/modules/platform/ref-operator-crs.adoc b/downstream/modules/platform/ref-operator-crs.adoc
new file mode 100644
index 0000000000..d99ebc94a7
--- /dev/null
+++ b/downstream/modules/platform/ref-operator-crs.adoc
@@ -0,0 +1,668 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="operator-crs"]
+
+= Custom resources
+
+== aap-existing-controller-and-hub-new-eda.yml
+
+[subs="+attributes"]
+----
+---
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  # Development purposes only
+  no_log: false
+
+  controller:
+    name: existing-controller
+    disabled: false
+
+  eda:
+    disabled: false
+
+  hub:
+    name: existing-hub
+    disabled: false
+----
+
+== aap-all-defaults.yml
+
+[subs="+attributes"]
+----
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  # Development purposes only
+  no_log: false
+
+  # Platform
+  ## uncomment to test bundle certs
+  # bundle_cacert_secret: gateway-custom-certs
+
+  # Components
+
+  hub:
+    disabled: false
+    ## uncomment if using file storage for Content pod
+    storage_type: file
+    file_storage_storage_class: nfs-local-rwx
+    file_storage_size: 10Gi
+
+    ## uncomment if using S3 storage for Content pod
+    # storage_type: S3
+    # object_storage_s3_secret: example-galaxy-object-storage
+
+    ## uncomment if using Azure storage for Content pod
+    # storage_type: azure
+    # object_storage_azure_secret: azure-secret-name
+
+  # lightspeed:
+  #   disabled: true
+
+# End state:
+# * {ControllerNameStart} deployed and named: myaap-controller
+# * {EDAName} deployed and named: myaap-eda
+# * {HubNameStart} deployed and named: myaap-hub
+----
+
+== aap-existing-controller-only.yml
+
+[subs="+attributes"]
+----
+---
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  # Development purposes only
+  no_log: false
+
+  controller:
+    name: existing-controller
+
+  eda:
+    disabled: true
+
+  hub:
+    disabled: true
+    ## uncomment if using file storage for Content pod
+    # storage_type: file
+    # file_storage_storage_class: nfs-local-rwx
+    # file_storage_size: 10Gi
+
+    ## uncomment if using S3 storage for Content pod
+    # storage_type: S3
+    # object_storage_s3_secret: example-galaxy-object-storage
+
+    ## uncomment if using Azure storage for Content pod
+    # storage_type: azure
+    # object_storage_azure_secret: azure-secret-name
+
+
+# End state:
+# * {ControllerNameStart}: existing-controller registered with {PlatformNameShort} UI
+# * * {EDAName} disabled
+# * * {HubNameStart} disabled
+----
+
+== aap-existing-hub-and-controller.yml
+
+[subs="+attributes"]
+----
+---
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  # Development purposes only
+  no_log: false
+
+  controller:
+    name: existing-controller
+    disabled: false
+
+  eda:
+    disabled: true
+
+  hub:
+    name: existing-hub
+    disabled: false
+
+# End state:
+# * {ControllerNameStart}: existing-controller registered with {PlatformNameShort} UI
+# * * {EDAName} disabled
+# * * {HubNameStart}: existing-hub registered with {PlatformNameShort} UI
+----
+
+== aap-existing-hub-controller-eda.yml
+
+[subs="+attributes"]
+----
+---
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  # Development purposes only
+  no_log: false
+
+  controller:
+    name: existing-controller # <-- this is the name of the existing AutomationController CR
+    disabled: false
+
+  eda:
+    name: existing-eda
+    disabled: false
+
+  hub:
+    name: existing-hub
+    disabled: false
+
+# End state:
+# * {ControllerNameStart}: existing-controller registered with {PlatformNameShort} UI
+# * * {EDAName}: existing-eda registered with {PlatformNameShort} UI
+# * * {HubNameStart}: existing-hub registered with {PlatformNameShort} UI
+#
+# Note: The {ControllerName}, {EDAName}, and {HubName} names must match the names of the existing
+# {ControllerNameStart}, {EDAName}, and {HubName} CRs in the same namespace as the {PlatformNameShort} CR. If the names do not match, the {PlatformNameShort} CR will not be able to register the existing {ControllerName}, {EDAName}, and {HubName} with the {PlatformNameShort} UI, and will instead deploy new {ControllerName}, {EDAName}, and {HubName} instances.
+----
+
+== aap-fresh-controller-eda.yml
+
+[subs="+attributes"]
+----
+---
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  # Development purposes only
+  no_log: false
+
+  controller:
+    disabled: false
+
+  eda:
+    disabled: false
+
+  hub:
+    disabled: true
+    ## uncomment if using file storage for Content pod
+    storage_type: file
+    file_storage_storage_class: nfs-local-rwx
+    file_storage_size: 10Gi
+
+    ## uncomment if using S3 storage for Content pod
+    # storage_type: S3
+    # object_storage_s3_secret: example-galaxy-object-storage
+
+    ## uncomment if using Azure storage for Content pod
+    # storage_type: azure
+    # object_storage_azure_secret: azure-secret-name
+
+# End state:
+# * {ControllerNameStart} deployed and named: myaap-controller
+# * * {EDAName} deployed and named: myaap-eda
+# * * {HubNameStart} disabled
+# * {LightspeedShortName} disabled
+----
+
+== aap-fresh-external-db.yml
+
+[subs="+attributes"]
+----
+---
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  # Development purposes only
+  no_log: false
+
+  controller:
+    disabled: false
+
+  eda:
+    disabled: false
+
+  hub:
+    disabled: false
+    ## uncomment if using file storage for Content pod
+    storage_type: file
+    file_storage_storage_class: nfs-local-rwx
+    file_storage_size: 10Gi
+
+    ## uncomment if using S3 storage for Content pod
+    # storage_type: S3
+    # object_storage_s3_secret: example-galaxy-object-storage
+
+    ## uncomment if using Azure storage for Content pod
+    # storage_type: azure
+    # object_storage_azure_secret: azure-secret-name
+
+
+# End state:
+# * {ControllerNameStart} deployed and named: myaap-controller
+# * * {EDAName} deployed and named: myaap-eda
+# * * {HubNameStart} deployed and named: myaap-hub
+----
+
+== aap-configuring-external-db-all-default-components.yml
+
+[subs="+attributes"]
+----
+---
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  database:
+    database_secret: external-postgres-configuration-gateway
+  controller:
+    postgres_configuration_secret: external-postgres-configuration-controller
+  hub:
+    postgres_configuration_secret: external-postgres-configuration-hub
+  eda:
+    database:
+      database_secret: external-postgres-configuration-eda
+----
+
+== aap-configuring-existing-external-db-all-default-components.yml
+
+[subs="+attributes"]
+----
+---
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  database:
+    database_secret: external-postgres-configuration-gateway
+----
+
+[NOTE]
+====
+The system uses the external database for {Gateway}, while {ControllerName}, {HubName}, and {EDAName} continue to use the existing databases that were used in 2.4.
+==== + +== aap-configuring-external-db-with-lightspeed-enabled.yml + +[subs="+attributes"] +---- +--- +apiVersion: aap.ansible.com/v1alpha1 +kind: AnsibleAutomationPlatform +metadata: + name: myaap +spec: + database: + database_secret: external-postgres-configuration-gateway + controller: + postgres_configuration_secret: external-postgres-configuration-controller + hub: + postgres_configuration_secret: external-postgres-configuration-hub + eda: + database: + database_secret: external-postgres-configuration-eda + lightspeed: + disabled: false + database: + database_secret: -postgres-configuration + auth_config_secret_name: 'auth-configuration-secret' + model_config_secret_name: 'model-configuration-secret' +---- + +[NOTE] +==== +You can follow the link:{BaseURL}/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_user_guide/index[Red Hat Ansible Lightspeed with IBM watsonx Code Assistant User Guide] for help with creating the model and auth secrets. +==== + +== aap-fresh-install-with-settings.yml + +[subs="+attributes"] +---- +--- +apiVersion: aap.ansible.com/v1alpha1 +kind: AnsibleAutomationPlatform +metadata: + name: myaap +spec: + # Development purposes only + no_log: false + image_pull_policy: Always + + # Platform + ## uncomment to test bundle certs + # bundle_cacert_secret: gateway-custom-certs + + # Components + controller: + disabled: false + image_pull_policy: Always + + extra_settings: + - setting: MAX_PAGE_SIZE + value: '501' + + eda: + disabled: false + image_pull_policy: Always + + extra_settings: + - setting: EDA_MAX_PAGE_SIZE + value: '501' + + hub: + disabled: false + image_pull_policy: Always + + ## uncomment if using file storage for Content pod + storage_type: file + file_storage_storage_class: rook-cephfs + file_storage_size: 10Gi + + ## uncomment if using S3 storage for Content pod + # storage_type: S3 + # object_storage_s3_secret: example-galaxy-object-storage + + ## uncomment if using Azure storage for Content pod + # storage_type: azure + # object_storage_azure_secret: azure-secret-name + + pulp_settings: + MAX_PAGE_SIZE: 501 + cache_enabled: false + + # lightspeed: + # disabled: true + +# End state: +# * {ControllerNameStart} deployed and named: myaap-controller +# * * {EDAName} deployed and named: myaap-eda +# * * {HubNameStart} deployed and named: myaap-hub +---- + +== aap-fresh-install.yml + +[subs="+attributes"] +---- +--- +apiVersion: aap.ansible.com/v1alpha1 +kind: AnsibleAutomationPlatform +metadata: + name: myaap +spec: + # Development purposes only + no_log: false + + # Redis Mode + # redis_mode: cluster + + # Platform + ## uncomment to test bundle certs + # bundle_cacert_secret: gateway-custom-certs + # extra_settings: + # - setting: MAX_PAGE_SIZE + # value: '501' + + # Components + controller: + disabled: false + + eda: + disabled: false + + hub: + disabled: false + ## uncomment if using file storage for Content pod + storage_type: file + file_storage_storage_class: nfs-local-rwx + file_storage_size: 10Gi + + ## uncomment if using S3 storage for Content pod + # storage_type: S3 + # object_storage_s3_secret: example-galaxy-object-storage + + ## uncomment if using Azure storage for Content pod + # storage_type: azure + # object_storage_azure_secret: azure-secret-name + + # lightspeed: + # disabled: true + +# End state: +# * {ControllerNameStart} deployed and named: myaap-controller +# * * {EDAName} deployed and named: myaap-eda +# * * {HubNameStart} deployed and named: 
myaap-hub
+----
+
+== aap-fresh-only-controller.yml
+
+[subs="+attributes"]
+----
+---
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  # Development purposes only
+  no_log: false
+
+  controller:
+    disabled: false
+
+  eda:
+    disabled: true
+
+  hub:
+    disabled: true
+    ## uncomment if using file storage for Content pod
+    # storage_type: file
+    # file_storage_storage_class: nfs-local-rwx
+    # file_storage_size: 10Gi
+
+    ## uncomment if using S3 storage for Content pod
+    # storage_type: S3
+    # object_storage_s3_secret: example-galaxy-object-storage
+
+    ## uncomment if using Azure storage for Content pod
+    # storage_type: azure
+    # object_storage_azure_secret: azure-secret-name
+
+
+# End state:
+# * {ControllerNameStart} deployed and named: myaap-controller
+# * * {EDAName} disabled
+# * * {HubNameStart} disabled
+----
+
+== aap-fresh-only-hub.yml
+
+[subs="+attributes"]
+----
+---
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  # Development purposes only
+  no_log: false
+
+  controller:
+    disabled: true
+
+  eda:
+    disabled: true
+
+  hub:
+    disabled: false
+    ## uncomment if using file storage for Content pod
+    storage_type: file
+    file_storage_storage_class: nfs-local-rwx
+    file_storage_size: 10Gi
+
+    # # AaaS Hub Settings
+    # pulp_settings:
+    #   cache_enabled: false
+
+    ## uncomment if using S3 storage for Content pod
+    # storage_type: S3
+    # object_storage_s3_secret: example-galaxy-object-storage
+
+    ## uncomment if using Azure storage for Content pod
+    # storage_type: azure
+    # object_storage_azure_secret: azure-secret-name
+
+  lightspeed:
+    disabled: true
+
+# End state:
+# * {ControllerNameStart} disabled
+# * * {EDAName} disabled
+# * * {HubNameStart} deployed and named: myaap-hub
+# * {LightspeedShortName} disabled
+----
+
+== aap-lightspeed-enabled.yml
+
+[subs="+attributes"]
+----
+---
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  # Development purposes only
+  no_log: false
+
+  controller:
+    disabled: false
+
+  eda:
+    disabled: false
+
+  hub:
+    disabled: false
+    ## uncomment if using file storage for Content pod
+    storage_type: file
+    file_storage_storage_class: nfs-local-rwx
+    file_storage_size: 10Gi
+
+    ## uncomment if using S3 storage for Content pod
+    # storage_type: S3
+    # object_storage_s3_secret: example-galaxy-object-storage
+
+    ## uncomment if using Azure storage for Content pod
+    # storage_type: azure
+    # object_storage_azure_secret: azure-secret-name
+
+  lightspeed:
+    disabled: false
+
+# End state:
+# * {ControllerNameStart} deployed and named: myaap-controller
+# * * {EDAName} deployed and named: myaap-eda
+# * * {HubNameStart} deployed and named: myaap-hub
+# * {LightspeedShortName} deployed and named: myaap-lightspeed
+----
+
+== gateway-only.yml
+
+[subs="+attributes"]
+----
+---
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  # Development purposes only
+  no_log: false
+
+  controller:
+    disabled: true
+
+  eda:
+    disabled: true
+
+  hub:
+    disabled: true
+
+  lightspeed:
+    disabled: true
+
+# End state:
+# * {GatewayStart} deployed and named: myaap-gateway
+# * UI is reachable at: https://myaap-gateway-gateway.apps.ocp4.example.com
+# * {ControllerNameStart} is not deployed
+# * * {EDAName} is not deployed
+# * * {HubNameStart} is not deployed
+# * {LightspeedShortName} is not deployed
+----
+
+== eda-max-running-activations.yml
+
+[subs="+attributes"]
+----
+---
+apiVersion: aap.ansible.com/v1alpha1
+kind: AnsibleAutomationPlatform
+metadata:
+  name: myaap
+spec:
+  eda:
+    extra_settings:
+      - setting: EDA_MAX_RUNNING_ACTIVATIONS
+        value: "15" # Setting this value to "-1" means there will be no limit
+
+----
diff --git a/downstream/modules/platform/ref-operator-mesh-prerequisites.adoc b/downstream/modules/platform/ref-operator-mesh-prerequisites.adoc
index 5a3d07c151..d03b738799 100644
--- a/downstream/modules/platform/ref-operator-mesh-prerequisites.adoc
+++ b/downstream/modules/platform/ref-operator-mesh-prerequisites.adoc
@@ -1,8 +1,10 @@
+:_mod-docs-content-type: REFERENCE
+
[id="ref-operator-mesh-prerequisites"]

= Prerequisites

-The automation mesh is dependent on hop and execution nodes running on {RHEL} (RHEL).
+The automation mesh is dependent on hop and execution nodes running on _{RHEL}_ (RHEL).
Your {PlatformName} subscription grants you ten {RHEL} licenses that can be used for running components of {PlatformNameShort}.

For more information about {RHEL} subscriptions, see link:{BaseURL}/red_hat_enterprise_linux/9/html/configuring_basic_system_settings/assembly_registering-the-system-and-managing-subscriptions_configuring-basic-system-settings[Registering the system and managing subscriptions] in the {RHEL} documentation.

@@ -10,9 +12,9 @@ For more information about {RHEL} subscriptions, see link:{BaseURL}/red_hat_ente
The following steps prepare the RHEL instances for deployment of the automation mesh.

. You require a {RHEL} operating system.
-Each node in the mesh requires a static IP address, or a resolvable DNS hostname that {ControllerName} can access.
+Each node in the mesh requires a static IP address, or a resolvable DNS hostname that {PlatformNameShort} can access.
. Ensure that you have the minimum requirements for the RHEL virtual machine before proceeding.
-For more information, see the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/platform-system-requirements[Red Hat Ansible Automation Platform system requirements].
+For more information, see the link:{URLPlanningGuide}/platform-system-requirements[System requirements].
. Deploy the RHEL instances within the remote networks where communication is required.
For information about creating virtual machines, see link:{BaseURL}/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_creating-virtual-machines_configuring-and-managing-virtualization[Creating Virtual Machines] in the _{RHEL}_ documentation.
Remember to scale the capacity of your virtual machines sufficiently so that your proposed tasks can run on them.
diff --git a/downstream/modules/platform/ref-operator-ocp-version.adoc b/downstream/modules/platform/ref-operator-ocp-version.adoc
index 3316628567..c7a8e579b4 100644
--- a/downstream/modules/platform/ref-operator-ocp-version.adoc
+++ b/downstream/modules/platform/ref-operator-ocp-version.adoc
@@ -1,11 +1,13 @@
+:_mod-docs-content-type: REFERENCE
+
[id="ref-operator-ocp-version_{context}"]

= {OCPShort} version compatibility

[role="_abstract"]
-The {OperatorPlatform} to install {PlatformNameShort} {PlatformVers} is available on {OCPShort} 4.9 and later versions.
+The {OperatorPlatformNameShort} to install {PlatformNameShort} {PlatformVers} is available on {OCPShort} versions 4.12 through 4.17.
[role="_additional-resources"] .Additional resources -* See the link:https://access.redhat.com/support/policy/updates/ansible-automation-platform[Red Hat Ansible Automation Platform Life Cycle] for the most current compatibility details. +* link:https://access.redhat.com/support/policy/updates/ansible-automation-platform[Red Hat Ansible Automation Platform Life Cycle]. diff --git a/downstream/modules/platform/ref-pac-inputs-outputs.adoc b/downstream/modules/platform/ref-pac-inputs-outputs.adoc new file mode 100644 index 0000000000..cf4c7beb43 --- /dev/null +++ b/downstream/modules/platform/ref-pac-inputs-outputs.adoc @@ -0,0 +1,285 @@ +:_newdoc-version: 2.18.4 +:_template-generated: 2025-05-08 +:_mod-docs-content-type: REFERENCE + +[id="pac-inputs-outputs_{context}"] += Policy enforcement inputs and outputs + +Use the following inputs and outputs to craft policies for use in policy enforcement. + +.Input data +[options="header"] + +|==== +|*Input*|*Type*|*Description* +|`id`|Integer|The job's unique identifier. +|`name`|String|Job template name. +|`created`|Datetime (ISO 8601)|Timestamp indicating when the job was created. +|`created by`|Object +a| +Information about the user who created the job. + +* `id`(integer): Unique identifier for the user +* `username`(string): creator username +* `is_superuser`(boolean): indicates if the user is a superuser +|`credentials`|List of objects +a| +Credentials associated with job execution. + +* `id` (integer): the credential's unique identifier +* `name` (string): credential name +* `description` (string): credential description +* `organization` (integer or null): organization identifier associated with the credential +* `credential_type` (integer): credential type identifier +* `managed` (boolean): indicates if the credential is managed internally +* `kind` (string): credential type ( such as `ssh`, `cloud`, or `kubernetes`) +|`execution_environment`|Object +a| +Details about the {ExecEnvShort} used for the job. + +* `id` (integer): the {ExecEnvShort}'s unique identifier +* `name` (string): {ExecEnvShort} name +* `image` (string): container image used for execution +* `pull` (string): pull policy for the {ExecEnvShort} +|`extra_vars`|JSON|Extra variables provided for job execution. +|`forks`|Integer|The number of parallel processes used for job execution. +|`hosts_count`|Integer|The number of hosts targeted by the job. 
+|`instance_group`|Object
+a|
+Information about the instance group handling the job, including:
+
+* `id` (integer): the instance group's unique identifier
+* `name` (string): the instance group name
+* `capacity` (integer): the available capacity in the group
+* `jobs_running` (integer): the number of currently running jobs
+* `jobs_total` (integer): total jobs handled by the group
+* `max_concurrent_jobs` (integer): maximum concurrent jobs allowed
+* `max_forks` (integer): maximum forks allowed
+|`inventory`|Object
+a|
+Inventory details used in the job execution, including:
+
+* `id` (integer): the inventory's unique identifier
+* `name` (string): the inventory name
+* `description` (string): inventory description
+* `kind` (string): the inventory type
+* `total_hosts` (integer): the total number of hosts in the inventory
+* `total_groups` (integer): the total number of groups in the inventory
+* `has_inventory_sources` (boolean): indicates if the inventory has external sources
+* `total_inventory_sources` (integer): the number of external inventory sources
+* `has_active_failures` (boolean): indicates if there are active failures in the inventory
+* `hosts_with_active_failures` (integer): the number of hosts with active failures
+* `inventory_sources` (array): external inventory sources associated with the inventory
+|`job_template`|Object
+a|
+Information about the job template, including:
+
+* `id` (integer): the job template's unique identifier
+* `name` (string): the job template's name
+* `job_type` (string): type of job (for example, `run`)
+|`job_type`|Choice (String)
+a|
+Type of job execution. Allowed values are:
+
+* `run`
+* `check`
+* `scan`
+|`job_type_name`|String|Human-readable name for the job type.
+|`labels`|List of objects
+a|
+Labels associated with the job, including:
+
+* `id` (integer): the label's unique identifier
+* `name` (string): the label name
+* `organization` (object): the organization associated with the label
+** `id` (integer): the organization's unique identifier
+** `name` (string): the organization name
+|`launch_type`|Choice (String)
+a|
+How the job was launched. Allowed values include:
+
+* `manual`: manual
+* `relaunch`: relaunch
+* `callback`: callback
+* `scheduled`: scheduled
+* `dependency`: dependency
+* `workflow`: workflow
+* `webhook`: webhook
+* `sync`: sync
+* `scm`: SCM update
+|`limit`|String|The limit applied to the job execution.
+|`launched_by`|Object
+a|
+Information about the user who launched the job, including:
+
+* `id` (integer): the user's unique identifier
+* `name` (string): the user name
+* `type` (string): the user type (for example, `user` or `system`)
+* `url` (string): the user's API URL
+|`organization`|Object
+a|
+Information about the organization associated with the job, including:
+
+* `id` (integer): the organization's unique identifier
+* `name` (string): the organization's name
+|`playbook`|String|The playbook used in the job execution.
+|`project`|Object
+a|
+Details about the project associated with the job, including:
+
+* `id` (integer): the project's unique identifier
+* `name` (string): the project name
+* `status` (choice-string): the project status
+** `successful`: successful
+** `failed`: failed
+** `error`: error
+* `scm_type` (string): source control type (such as `git` or `svn`)
+* `scm_url` (string): the source control repository URL
+* `scm_branch` (string): the branch used in the repository
+* `scm_refspec` (string): RefSpec for the repository
+* `scm_clean` (boolean): whether the SCM is cleaned before updates
+* `scm_track_submodules` (boolean): whether submodules are tracked
+* `scm_delete_on_update` (boolean): whether SCM deletes files on update
+|`scm_branch`|String|The specific branch to use for SCM.
+|`scm_revision`|String|SCM revision used for the job.
+|`workflow_job`|Object|Workflow job details, if the job is part of a workflow.
+|`workflow_job_template`|Object|Workflow job template details.
+|====
+
+The following code block shows example input data from a demo job template launch:
+[source,json]
+----
+{
+  "id": 70,
+  "name": "Demo Job Template",
+  "created": "2025-03-19T19:07:03.329426Z",
+  "created_by": {
+    "id": 1,
+    "username": "admin",
+    "is_superuser": true,
+    "teams": []
+  },
+  "credentials": [
+    {
+      "id": 3,
+      "name": "Example Machine Credential",
+      "description": "",
+      "organization": null,
+      "credential_type": 1,
+      "managed": false,
+      "kind": "ssh",
+      "cloud": false,
+      "kubernetes": false
+    }
+  ],
+  "execution_environment": {
+    "id": 2,
+    "name": "Default execution environment",
+    "image": "registry.redhat.io/ansible-automation-platform-25/ee-supported-rhel8@sha256:b9f60d9ebbbb5fdc394186574b95dea5763b045ceff253815afeb435c626914d",
+    "pull": ""
+  },
+  "extra_vars": {
+    "example": "value"
+  },
+  "forks": 0,
+  "hosts_count": 0,
+  "instance_group": {
+    "id": 2,
+    "name": "default",
+    "capacity": 0,
+    "jobs_running": 1,
+    "jobs_total": 38,
+    "max_concurrent_jobs": 0,
+    "max_forks": 0
+  },
+  "inventory": {
+    "id": 1,
+    "name": "Demo Inventory",
+    "description": "",
+    "kind": "",
+    "total_hosts": 1,
+    "total_groups": 0,
+    "has_inventory_sources": false,
+    "total_inventory_sources": 0,
+    "has_active_failures": false,
+    "hosts_with_active_failures": 0,
+    "inventory_sources": []
+  },
+  "job_template": {
+    "id": 7,
+    "name": "Demo Job Template",
+    "job_type": "run"
+  },
+  "job_type": "run",
+  "job_type_name": "job",
+  "labels": [
+    {
+      "id": 1,
+      "name": "Demo label",
+      "organization": {
+        "id": 1,
+        "name": "Default"
+      }
+    }
+  ],
+  "launch_type": "workflow",
+  "limit": "",
+  "launched_by": {
+    "id": 1,
+    "name": "admin",
+    "type": "user",
+    "url": "/api/v2/users/1/"
+  },
+  "organization": {
+    "id": 1,
+    "name": "Default"
+  },
+  "playbook": "hello_world.yml",
+  "project": {
+    "id": 6,
+    "name": "Demo Project",
+    "status": "successful",
+    "scm_type": "git",
+    "scm_url": "https://github.com/ansible/ansible-tower-samples",
+    "scm_branch": "",
+    "scm_refspec": "",
+    "scm_clean": false,
+    "scm_track_submodules": false,
+    "scm_delete_on_update": false
+  },
+  "scm_branch": "",
+  "scm_revision": "",
+  "workflow_job": {
+    "id": 69,
+    "name": "Demo Workflow"
+  },
+  "workflow_job_template": {
+    "id": 10,
+    "name": "Demo Workflow",
+    "job_type": null
+  }
+}
+
+----
+
+.Output data
+[options="header"]
+
+|====
+|*Output*|*Type*|*Description*
+|`allowed`|Boolean|Indicates whether the action is permitted
+|`violations`|List of strings|Reasons why the action is not permitted
+|====
+
+The following code block shows an example of expected output from the OPA policy query:
+[source,json]
+----
+{
+  "allowed": false,
+  "violations": [
+    "No job execution is allowed",
+    ...
+  ],
+  ...
+}
+----
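+For reference, a minimal OPA policy that always denies job execution and produces output in this shape might look like the following. The package name is a placeholder, and your own policy logic would typically inspect fields from the input data instead of denying unconditionally:
+
+[source,rego]
+----
+package aap_policy_examples
+
+# Deny every job run and report a single violation.
+default allowed = false
+
+violations[msg] {
+    not allowed
+    msg := "No job execution is allowed"
+}
+----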
diff --git a/downstream/modules/platform/ref-postgresql-requirements.adoc b/downstream/modules/platform/ref-postgresql-requirements.adoc
index d199308bbc..6003c43111 100644
--- a/downstream/modules/platform/ref-postgresql-requirements.adoc
+++ b/downstream/modules/platform/ref-postgresql-requirements.adoc
@@ -1,16 +1,13 @@
+:_mod-docs-content-type: REFERENCE
+
[id="ref-postgresql-requirements"]

= PostgreSQL requirements

-{PlatformName} uses PostgreSQL 13. PostgreSQL user passwords are hashed with SCRAM-SHA-256 secure hashing algorithm before storing in the database.
+{PlatformName} {PlatformVers} uses {PostgresVers} and requires external (customer-supported) databases to have ICU support. PostgreSQL user passwords are hashed with the SCRAM-SHA-256 secure hashing algorithm before they are stored in the database.

To determine whether your {ControllerName} instance has access to the database, use the `awx-manage check_db` command.

-.Database
-
-[cols="a,a,a",options="header"]
-|===
-h| Service |Required |Notes
// [ddacosta - removed based on AAP-15617]| *Each {ControllerName}* | 40 GB dedicated hard disk space |
//* Dedicate a minimum of 20 GB to `/var/` for file and working directory storage.
@@ -21,29 +18,21 @@ h| Service |Required |Notes
// | *Each {HubName}* | 60 GB dedicated hard disk space |
//Storage volume must be rated for a minimum baseline of 1500 IOPS.
-| *Database* |
-* 20 GB dedicated hard disk space
-* 4 CPUs
-* 16 GB RAM |
+[NOTE]
+====
+* {ControllerNameStart} data is stored in the database.
+Database storage increases with the number of hosts managed, number of jobs run, number of facts stored in the fact cache, and number of tasks in any individual job.
+For example, a playbook that runs every hour (24 times a day) across 250 hosts, with 20 tasks, stores over 800,000 events in the database every week.
-* 150 GB+ recommended
-* Storage volume must be rated for a high baseline IOPS (1500 or more).
-* All {ControllerName} data is stored in the database.
-Database storage increases with the number of hosts managed, number of jobs run, number of facts stored in the fact cache, and number of tasks in any individual job.
-For example, a playbook run every hour (24 times a day) across 250 hosts, with 20 tasks, will store over 800000 events in the database every week.
-* If not enough space is reserved in the database, the old job runs and facts must be cleaned on a regular basis. For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#assembly-controller-management-jobs[Management Jobs] in the _Automation Controller Administration Guide_.
-|===
+* If not enough space is reserved in the database, old job runs and facts must be cleaned up on a regular basis. For more information, see link:{URLControllerAdminGuide}/assembly-controller-management-jobs[Management Jobs] in _{TitleControllerAdminGuide}_.
+====

.PostgreSQL Configurations

-Optionally, you can configure the PostgreSQL database as separate nodes that are not managed by the {PlatformName} installer. When the {PlatformNameShort} installer manages the database server, it configures the server with defaults that are generally recommended for most workloads.
For more information about the settings you can use to improve database performance, see link:https://docs.ansible.com/automation-controller/latest/html/administration/performance.html#database-settings[Database Settings].
-//-----
-//max_connections == 1024
-//shared_buffers == ansible_memtotal_mb*0.3
-//work_mem == ansible_memtotal_mb*0.03
-//maintenance_work_mem == ansible_memtotal_mb*0.04
-//-----
+Optionally, you can configure the PostgreSQL database as separate nodes that are not managed by the {PlatformName} installer.
+When the {PlatformNameShort} installer manages the database server, it configures the server with defaults that are generally recommended for most workloads.
+For more information about the settings you can use to improve database performance, see link:{URLControllerAdminGuide}/assembly-controller-improving-performance#ref-controller-database-settings[PostgreSQL database configuration and maintenance for automation controller] in _{TitleControllerAdminGuide}_.

[role="_additional-resources"]
.Additional resources
diff --git a/downstream/modules/platform/ref-projects-collections-support.adoc b/downstream/modules/platform/ref-projects-collections-support.adoc
index a8f7694cd5..51500c2d78 100644
--- a/downstream/modules/platform/ref-projects-collections-support.adoc
+++ b/downstream/modules/platform/ref-projects-collections-support.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
[id="ref-projects-collections-support"]

= Collections support

@@ -6,14 +8,14 @@
If you specify a collections requirements file in the SCM at `collections/requirements.yml`, {ControllerName} installs collections in that file in the implicit project synchronization before a job run.

{ControllerNameStart} has a system-wide setting that enables collections to be dynamically downloaded from the `collections/requirements.yml` file for SCM projects.
-You can turn off this setting in the *Job Settings* screen from the navigation panel {MenuSetJob}, by switching the *Enable Collection(s) Download*
-toggle button to *Off*.
+You can turn off this setting in the *Job Settings* screen from the navigation panel {MenuSetJob}, by unchecking the *Enable Collection(s) Download*
+box.

//image:configure-controller-jobs-download-collections.png[Download collections]

-Roles and collections are locally cached for performance reasons, and you select *Update Revision on Launch* in the project *Options* to ensure this:
+Roles and collections are locally cached for performance reasons, and you can select *Update revision on launch* in the project *Options* to ensure this.

-image:projects-scm-update-options-update-on-launch-checked.png[update-on-launch]
+//image:projects-scm-update-options-update-on-launch-checked.png[update-on-launch]

[NOTE]
====
diff --git a/downstream/modules/platform/ref-projects-galaxy-support.adoc b/downstream/modules/platform/ref-projects-galaxy-support.adoc
index ce3d1ba902..e7b8c95497 100644
--- a/downstream/modules/platform/ref-projects-galaxy-support.adoc
+++ b/downstream/modules/platform/ref-projects-galaxy-support.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
[id="ref-projects-galaxy-support"]

= {Galaxy} support

@@ -26,7 +28,7 @@ The cache directory is a subdirectory inside the global projects folder.
You can copy the content from the cache location to `/requirements_roles`.

By default, {ControllerName} has a system-wide setting that enables you to dynamically download roles from the `roles/requirements.yml` file for SCM projects.
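+For illustration, a minimal `roles/requirements.yml` might look like the following (the role source URL and versions are placeholders; the _provision-role_ name echoes the example discussed later in this module):
+
+----
+---
+# Install a role from Ansible Galaxy.
+- src: geerlingguy.apache
+  version: 3.1.4
+
+# Install a role from a Git repository.
+- name: provision-role
+  src: https://github.com/example/provision-role.git
+  scm: git
+  version: main
+----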
-You can turn off this setting in the *Job Settings* screen from the navigation panel {MenuSetJob}, by switching the *Enable Role Download* toggle to *Off*. +You can turn off this setting in the *Job Settings* screen from the navigation panel {MenuSetJob}, by unchecking the *Enable Role Download* box. //image:configure-tower-jobs-download-roles.png[image] @@ -45,7 +47,7 @@ The second one is _provision-role_ and it is referenced by the `roles/requiremen Jobs download the most recent roles before every job run. Roles and collections are locally cached for performance reasons. -You must select *Update Revision on Launch* in the project *Options* to ensure that the upstream role is re-downloaded before each job run: +You must select *Update revision on launch* in the project *Options* to ensure that the upstream role is re-downloaded before each job run: image:projects-scm-update-options-update-on-launch-checked.png[update-on-launch] @@ -53,7 +55,7 @@ The update happens much earlier in the process than the sync, so this identifies For more information and examples on the syntax of the `requirements.yml` file, see the link:https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#installing-multiple-roles-from-a-file[role requirements section] in the Ansible documentation. -If there are any directories that must be specifically exposed, you can specify those in the *Job Settings* screen from the navigation panel {MenuSetJob}, in *Paths to Expose to isolated Jobs*. +If there are any directories that must be specifically exposed, you can specify those in the *Job Settings* screen from the navigation panel {MenuSetJob}, in *Paths to expose to isolated Jobs*. You can also update the following entry in the settings file: [literal, options="nowrap" subs="+attributes"] diff --git a/downstream/modules/platform/ref-projects-manage-playbooks-with-source-control.adoc b/downstream/modules/platform/ref-projects-manage-playbooks-with-source-control.adoc index 7df589c5fa..cb409e6643 100644 --- a/downstream/modules/platform/ref-projects-manage-playbooks-with-source-control.adoc +++ b/downstream/modules/platform/ref-projects-manage-playbooks-with-source-control.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-projects-manage-playbooks-with-source-control"] = Managing playbooks using source control @@ -7,8 +9,3 @@ Choose one of the following options when managing playbooks using source control * xref:proc-scm-git-subversion[SCM Types - Configuring playbooks to use Git and Subversion] * xref:proc-scm-insights[SCM Type - Configuring playbooks to use Red Hat Insights] * xref:proc-scm-remote-archive[SCM Type - Configuring playbooks to use a remote archive] - -include::proc-scm-git-subversion.adoc[leveloffset=+1] -include::proc-scm-insights.adoc[leveloffset=+1] -include::proc-scm-remote-archive.adoc[leveloffset=+1] - diff --git a/downstream/modules/platform/ref-ratio-control-execution.adoc b/downstream/modules/platform/ref-ratio-control-execution.adoc index 0ba5c8da15..884754cc12 100644 --- a/downstream/modules/platform/ref-ratio-control-execution.adoc +++ b/downstream/modules/platform/ref-ratio-control-execution.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-ratio-control-execution"] = Ratio of control to execution capacity diff --git a/downstream/modules/platform/ref-receptor-inventory-variables.adoc b/downstream/modules/platform/ref-receptor-inventory-variables.adoc new file mode 100644 index 0000000000..f5fbc46901 --- /dev/null +++ 
b/downstream/modules/platform/ref-receptor-inventory-variables.adoc
@@ -0,0 +1,145 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="receptor-variables"]
+
+= Receptor variables
+
+[cols="25%,25%,30%,10%,10%",options="header"]
+|===
+| RPM variable name | Container variable name | Description | Required or optional | Default
+
+| `receptor_datadir`
+|
+| The directory where receptor stores its runtime data and local artifacts.
+The target directory must be accessible to *awx* users.
+If the target directory is a temporary file system *tmpfs*, ensure it is remounted correctly after a reboot. Failure to do so results in the receptor no longer having a working directory.
+| Optional
+| `/tmp/receptor`
+
+| `receptor_listener_port`
+| `receptor_port`
+| Port number that receptor listens on for incoming connections from other receptor nodes.
+| Optional
+| `27199`
+
+| `receptor_listener_protocol`
+| `receptor_protocol`
+| Protocol that receptor supports when handling traffic.
+| Optional
+| `tcp`
+
+| `receptor_log_level`
+| `receptor_log_level`
+| Controls the verbosity of logging for receptor.
+Valid options include: `error`, `warning`, `info`, or `debug`.
+| Optional
+| `info`
+
+| `receptor_tls`
+|
+| Controls whether TLS is enabled or disabled for receptor.
+Set this variable to `false` to disable TLS.
+| Optional
+| `true`
+
+| See `node_type` for the RPM equivalent variable.
+| `receptor_type`
+a|
+For the `[automationcontroller]` group the two options are:
+
+* `receptor_type=control` - The node only runs project and inventory updates, but not regular jobs.
+* `receptor_type=hybrid` - The node runs everything.
+
+For the `[execution_nodes]` group the two options are:
+
+* `receptor_type=hop` - The node forwards jobs to an execution node.
+* `receptor_type=execution` - The node can run jobs.
+| Optional
+| For the `[automationcontroller]` group: `hybrid`.
For the `[execution_nodes]` group: `execution`.
+
+| See `peers` for the RPM equivalent variable.
+| `receptor_peers`
+a| Used to indicate which nodes a specific host connects to. Wherever this variable is defined, an outbound connection to the specific host is established. The value must be a comma-separated list of hostnames. Do not use inventory group names.
+
+This is resolved into a set of hosts that is used to construct the `receptor.conf` file.
+
+// No RPM equivalent section in RPM install guide
+// This content is used in Containerized installation
+ifdef::container-install[]
+For more information, see link:{URLContainerizedInstall}/aap-containerized-installation#adding-execution-nodes_aap-containerized-installation[Adding execution nodes].
+endif::container-install[]
+
+| Optional
+| `[]`
+
+|
+| `receptor_disable_signing`
+| Controls whether signing of communications between receptor nodes is enabled or disabled.
+Set this variable to `true` to disable communication signing.
+| Optional
+| `false`
+
+|
+| `receptor_disable_tls`
+| Controls whether TLS is enabled or disabled for receptor.
+Set this variable to `true` to disable TLS.
+| Optional
+| `false`
+
+|
+| `receptor_firewall_zone`
+| The firewall zone where receptor-related firewall rules are applied. This controls which networks can access receptor based on the zone's trust level.
+| Optional
+| `public`
+
+|
+| `receptor_mintls13`
+| Controls whether or not receptor only accepts connections that use TLS 1.3 or higher.
+Set to `true` to only accept connections that use TLS 1.3 or higher.
+| Optional
+| `false`
+
+|
+| `receptor_signing_private_key`
+| Path to the private key used by receptor to sign communications with other receptor nodes in the network.
+| Optional
+|
+
+|
+| `receptor_signing_public_key`
+| Path to the public key used by receptor to sign communications with other receptor nodes in the network.
+| Optional
+|
+
+|
+| `receptor_signing_remote`
+| Denotes whether the receptor signing files are local to the installation program (`false`) or on the remote component server (`true`).
+| Optional
+| `false`
+
+|
+| `receptor_tls_cert`
+| Path to the TLS certificate file for receptor.
+| Optional
+|
+
+|
+| `receptor_tls_key`
+| Path to the TLS key file for receptor.
+| Optional
+|
+
+|
+| `receptor_tls_remote`
+| Denotes whether the receptor provided certificate files are local to the installation program (`false`) or on the remote component server (`true`).
+| Optional
+| `false`
+
+|
+| `receptor_use_archive_compression`
+| Controls whether archive compression is enabled or disabled for receptor. You can control this functionality globally by using `use_archive_compression`.
+| Optional
+| `true`
+
+|===
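+For example, in a containerized inventory, an execution node can make an outbound connection to a hop node by setting `receptor_peers` on the execution node. The hostnames here are placeholders, and the quoted list form is one possible way to express the value; see the containerized installation guide for the exact format:
+
+----
+[execution_nodes]
+hop1.example.com receptor_type=hop
+exec1.example.com receptor_peers='["hop1.example.com"]'
+----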
diff --git a/downstream/modules/platform/ref-redis-config-enterprise-topology.adoc b/downstream/modules/platform/ref-redis-config-enterprise-topology.adoc
new file mode 100644
index 0000000000..99f0b221f8
--- /dev/null
+++ b/downstream/modules/platform/ref-redis-config-enterprise-topology.adoc
@@ -0,0 +1,31 @@
+:_newdoc-version: 2.18.3
+:_template-generated: 2024-10-11
+
+:_mod-docs-content-type: REFERENCE
+
+[id="redis-config-enterprise-topology_{context}"]
+= Configuring Redis
+
+{PlatformNameShort} offers a centralized Redis instance in both `standalone` and `clustered` topologies.
+
+In RPM deployments, the Redis mode is set to `cluster` by default. You can change this setting in the inventory file `[all:vars]` section as in the following example:
+
+[source,ini]
+----
+[all:vars]
+admin_password=''
+pg_host='data.example.com'
+pg_port='5432'
+pg_database='awx'
+pg_username='awx'
+pg_password=''
+pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL
+
+registry_url='registry.redhat.io'
+registry_username=''
+registry_password=''
+
+redis_mode=cluster
+----
+
+For more information about Redis, see link:{URLPlanningGuide}/ha-redis_planning[Caching and queueing system] in _{TitlePlanningGuide}_.
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-redis-inventory-variables.adoc b/downstream/modules/platform/ref-redis-inventory-variables.adoc
new file mode 100644
index 0000000000..8f13f49822
--- /dev/null
+++ b/downstream/modules/platform/ref-redis-inventory-variables.adoc
@@ -0,0 +1,76 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="redis-variables"]
+
+= Redis variables
+
+[cols="25%,25%,30%,10%,10%",options="header"]
+|===
+| RPM variable name | Container variable name | Description | Required or optional | Default
+
+| `redis_cluster_ip`
+| `redis_cluster_ip`
+| The IPv4 address used by the Redis cluster to identify each host in the cluster.
+When defining hosts in the `[redis]` group, use this variable to identify the IPv4 address if the default is not what you want.
+Specific to container: Redis clusters cannot use hostnames or IPv6 addresses.
+| Optional
+| RPM = Discovered IPv4 address from Ansible facts. If IPv4 address is not available, IPv6 address is used. Container = Discovered IPv4 address from Ansible facts.
+
+| `redis_disable_mtls`
+|
+| Controls whether mTLS is enabled or disabled for Redis.
Set this variable to `true` to disable mTLS.
+| Optional
+| `false`
+
+| `redis_firewalld_zone`
+| `redis_firewall_zone`
+| The firewall zone where Redis-related firewall rules are applied. This controls which networks can access Redis based on the zone's trust level.
+| Optional
+| RPM = no default set. Container = `public`.
+
+| `redis_hostname`
+|
+| Hostname used by the Redis cluster when identifying and routing the host.
+By default `routable_hostname` is used.
+| Optional
+| The value defined in `routable_hostname`
+
+| `redis_mode`
+| `redis_mode`
+| The Redis mode to use for your {PlatformNameShort} installation.
+Valid options include: `standalone` and `cluster`.
+For more information about Redis, see link:{URLPlanningGuide}/ha-redis_planning[Caching and queueing system] in _{TitlePlanningGuide}_.
+| Optional
+| `cluster`
+
+| `redis_server_regen_cert`
+|
+| Denotes whether or not to regenerate the {PlatformNameShort} managed TLS key pair for Redis.
+| Optional
+| `false`
+
+| `redis_tls_cert`
+| `redis_tls_cert`
+| Path to the Redis server TLS certificate.
+| Optional
+|
+
+| `redis_tls_files_remote`
+| `redis_tls_remote`
+| Denotes whether the Redis provided certificate files are local to the installation program (`false`) or on the remote component server (`true`).
+| Optional
+| `false`
+
+| `redis_tls_key`
+| `redis_tls_key`
+| Path to the Redis server TLS certificate key.
+| Optional
+|
+
+|
+| `redis_use_archive_compression`
+| Controls whether archive compression is enabled or disabled for Redis. You can control this functionality globally by using `use_archive_compression`.
+| Optional
+| `true`
+
+|===
diff --git a/downstream/modules/platform/ref-removing-instances.adoc b/downstream/modules/platform/ref-removing-instances.adoc
index 2a6b2dab2b..8e33b9c0b1 100644
--- a/downstream/modules/platform/ref-removing-instances.adoc
+++ b/downstream/modules/platform/ref-removing-instances.adoc
@@ -1,8 +1,10 @@
+:_mod-docs-content-type: REFERENCE
+
[id="ref-removing-instances"]

= Removing Instances

-From the *Add instance* page, you can add, remove or run health checks on your nodes.
+From the *Instances* page, you can add, remove, or run health checks on your nodes.

[NOTE]
====
diff --git a/downstream/modules/platform/ref-renewal-guidance.adoc b/downstream/modules/platform/ref-renewal-guidance.adoc
new file mode 100644
index 0000000000..82cfc7cf07
--- /dev/null
+++ b/downstream/modules/platform/ref-renewal-guidance.adoc
@@ -0,0 +1,14 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-renewal-guidance"]
+
+= `RENEWAL_GUIDANCE`
+The `RENEWAL_GUIDANCE` report provides historical usage from the HostMetric table, applying deduplication and showing real historical usage for renewal guidance purposes.
+
+To generate this report, set the report type to
+`METRICS_UTILITY_REPORT_TYPE=RENEWAL_GUIDANCE`.
+
+[IMPORTANT]
+====
+This report is currently a Technology Preview. It is designed to provide more information than the built-in `awx-manage host_metric` command in {ControllerName}.
+====
diff --git a/downstream/modules/platform/ref-report-types.adoc b/downstream/modules/platform/ref-report-types.adoc
new file mode 100644
index 0000000000..f7aae82afe
--- /dev/null
+++ b/downstream/modules/platform/ref-report-types.adoc
@@ -0,0 +1,120 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-report-types"]
+
+= Report types
+This section provides additional configurations for data gathering and report building based on a report type.
Apply the environment variables to each report type based on your {PlatformNameShort} installation.
+
+////
+== CCSPv2
+
+CCSPv2 is a report that shows the following:
+
+* Directly and indirectly managed node usage
+* The content of all inventories
+* Content usage
+
+The primary use of this report is for partners under the link:https://connect.redhat.com/en/programs/certified-cloud-service-provider[CCSP] program, but all customers can use it to obtain on-premise reporting showing managed nodes, jobs, and content usage across their {ControllerName} organizations.
+
+Set the report type using `METRICS_UTILITY_REPORT_TYPE=CCSPv2`.
+
+=== Optional collectors for `gather` command
+
+You can use the following optional collectors for the `gather` command:
+
+* `main_jobhostsummary`
+** Present by default. This incrementally collects data from the `main_jobhostsummary` table in the {ControllerName} database, containing information about job runs and managed nodes automated.
+* `main_host`
+** This collects daily snapshots of the `main_host` table in the {ControllerName} database and contains the managed nodes and hosts present across {ControllerName} inventories.
+* `main_jobevent`
+** This incrementally collects data from the `main_jobevent` table in the {ControllerName} database and contains information about which modules, roles, and Ansible collections are being used.
+* `main_indirectmanagednodeaudit`
+** This incrementally collects data from the `main_indirectmanagednodeaudit` table in the {ControllerName} database and contains information about indirectly managed nodes.
+
+----
+# Example with all optional collectors
+export METRICS_UTILITY_OPTIONAL_COLLECTORS="main_host,main_jobevent,main_indirectmanagednodeaudit"
+----
+
+=== Optional sheets for `build_report` command
+
+You can use the following optional sheets for the `build_report` command:
+
+* `ccsp_summary`
+** This is a landing page specifically for partners under the CCSP program.
+This report takes additional parameters to customize the summary page. For more information, see the following example:
++
+----
+export METRICS_UTILITY_PRICE_PER_NODE=11.55 # in USD
+export METRICS_UTILITY_REPORT_SKU=MCT3752MO
+export METRICS_UTILITY_REPORT_SKU_DESCRIPTION="EX: Red Hat Ansible Automation Platform, Full Support (1 Managed Node, Dedicated, Monthly)"
+export METRICS_UTILITY_REPORT_H1_HEADING="CCSP NA Direct Reporting Template"
+export METRICS_UTILITY_REPORT_COMPANY_NAME="Partner A"
+export METRICS_UTILITY_REPORT_EMAIL="email@email.com"
+export METRICS_UTILITY_REPORT_RHN_LOGIN="test_login"
+export METRICS_UTILITY_REPORT_PO_NUMBER="123"
+export METRICS_UTILITY_REPORT_END_USER_COMPANY_NAME="Customer A"
+export METRICS_UTILITY_REPORT_END_USER_CITY="Springfield"
+export METRICS_UTILITY_REPORT_END_USER_STATE="TX"
+export METRICS_UTILITY_REPORT_END_USER_COUNTRY="US"
+----
+* `jobs`
+** This is a list of {ControllerName} jobs launched. It is grouped by job template.
+* `managed_nodes`
+** This is a deduplicated list of managed nodes automated by {ControllerName}.
+* `indirectly_managed_nodes`
+** This is a deduplicated list of indirectly managed nodes automated by {ControllerName}.
+* `inventory_scope`
+** This is a deduplicated list of managed nodes present across all inventories of {ControllerName}.
+* `usage_by_organizations`
+** This is a list of all {ControllerName} organizations with several metrics showing the organizations' usage. This provides data suitable for doing internal chargeback.
+* `usage_by_collections`
+** This is a list of Ansible collections used in {ControllerName} job runs.
+* `usage_by_roles`
+** This is a list of roles used in {ControllerName} job runs.
+* `usage_by_modules`
+** This is a list of modules used in {ControllerName} job runs.
+* `managed_nodes_by_organization`
+** This generates a sheet per organization, listing managed nodes for every organization with the same content as the managed_nodes sheet.
+* `data_collection_status`
+** This generates a sheet with the status of every data collection done by the `gather` command for the date range the report is built for.
+
To outline the quality of the collected data, it also lists:
+
*** unusual gaps between collections (based on collection_start_timestamp)
*** gaps in collected intervals (based on since vs until)
++
+----
+# Example with all optional sheets
+export METRICS_UTILITY_OPTIONAL_CCSP_REPORT_SHEETS='ccsp_summary,jobs,managed_nodes,indirectly_managed_nodes,inventory_scope,usage_by_organizations,usage_by_collections,usage_by_roles,usage_by_modules,data_collection_status'
+----
+
+=== Filtering reports by organization
+To filter your report so that only certain organizations are present, use this environment variable with a semicolon-separated list of organization names.
+
+`export METRICS_UTILITY_ORGANIZATION_FILTER="ACME;Organization 1"`
+
+This renders only the data from these organizations in the built report. This filter currently does not have any effect on the following optional sheets:
+
+* `usage_by_collections`
+* `usage_by_roles`
+* `usage_by_modules`
+
+=== Selecting a date range for your CCSPv2 report
+
+The default behavior of the CCSPv2 report is to build a report for the previous month. The following examples describe how to override this default behavior to select a specific date range for your report:
++
+----
+# Build report for a specific month
+metrics-utility build_report --month=2025-03
+
+# Build report for a specific date range, including the provided days
+metrics-utility build_report --since=2025-03-01 --until=2025-03-31
+
+# Build report for the last 6 months from the current date
+metrics-utility build_report --since=6months
+
+# Build report for the last 6 months from the current date, overriding an existing report
+metrics-utility build_report --since=6months --force
+----
+////
\ No newline at end of file
diff --git a/downstream/modules/platform/ref-requests-limits.adoc b/downstream/modules/platform/ref-requests-limits.adoc
index 9f8c486fc1..19e7166222 100644
--- a/downstream/modules/platform/ref-requests-limits.adoc
+++ b/downstream/modules/platform/ref-requests-limits.adoc
@@ -1,6 +1,8 @@
-[id="ref-requests-limits"]
+:_mod-docs-content-type: REFERENCE

-== Requests and limits
+[id="ref-requests-limits_{context}"]
+
+= Requests and limits

If the node where a pod is running has enough resources available, it is possible for a container to use more resources than its request for that resource specifies. However, a container is not allowed to use more than its resource limit.
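+For example, a container spec that sets both a request and a limit might look like the following (the amounts are illustrative only):
+
+----
+resources:
+  requests:
+    cpu: 250m
+    memory: 100Mi
+  limits:
+    cpu: "1"
+    memory: 1Gi
+----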
diff --git a/downstream/modules/platform/ref-resource-management-pods-containers.adoc b/downstream/modules/platform/ref-resource-management-pods-containers.adoc
index cf9f58c541..c39a82dd9d 100644
--- a/downstream/modules/platform/ref-resource-management-pods-containers.adoc
+++ b/downstream/modules/platform/ref-resource-management-pods-containers.adoc
@@ -1,4 +1,6 @@
-[id="ref-resource-management-pods-containers"]
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-resource-management-pods-containers_{context}"]

= Resource management for pods and containers

diff --git a/downstream/modules/platform/ref-resource-types.adoc b/downstream/modules/platform/ref-resource-types.adoc
index e0797f2a9a..ee63b283fb 100644
--- a/downstream/modules/platform/ref-resource-types.adoc
+++ b/downstream/modules/platform/ref-resource-types.adoc
@@ -1,6 +1,8 @@
-[id="ref-resource-types"]
+:_mod-docs-content-type: REFERENCE

-== Resource types
+[id="ref-resource-types_{context}"]
+
+= Resource types

CPU and memory are both resource types. A resource type has a base unit.

diff --git a/downstream/modules/platform/ref-scaling-control-nodes.adoc b/downstream/modules/platform/ref-scaling-control-nodes.adoc
index 58d7778885..c10a403dfa 100644
--- a/downstream/modules/platform/ref-scaling-control-nodes.adoc
+++ b/downstream/modules/platform/ref-scaling-control-nodes.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
[id="ref-scaling-control-nodes"]

= Benefits of scaling control nodes

@@ -13,4 +15,4 @@ Scaling CPU and memory in the same proportion is recommended, for example, 1 CPU

NOTE: Vertically scaling a control node does not automatically increase the number of workers that handle web requests.

-An alternative to vertically scaling is horizontally scaling by deploying more control nodes. This allows spreading control tasks across more nodes as well as allowing web traffic to be spread over more nodes, given that you provision a load balancer to spread requests across nodes. Horizontally scaling by deploying more control nodes in many ways can be preferable as it additionally provides for more redundancy and workload isolation in the event that a control node goes down or experiences higher than normal load.
+An alternative to vertically scaling is horizontally scaling by deploying more control nodes. This spreads control tasks across more nodes and allows web traffic to be spread over more nodes, provided that you provision a load balancer to spread requests across nodes. Horizontally scaling by deploying more control nodes can in many ways be preferable, as it additionally provides more redundancy and workload isolation when a control node goes down or experiences higher than normal load.
diff --git a/downstream/modules/platform/ref-scaling-execution-nodes.adoc b/downstream/modules/platform/ref-scaling-execution-nodes.adoc
index 83c25dbbaa..79a069a845 100644
--- a/downstream/modules/platform/ref-scaling-execution-nodes.adoc
+++ b/downstream/modules/platform/ref-scaling-execution-nodes.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: REFERENCE
+
[id="ref-scaling-execution-nodes"]

= Benefits of scaling execution nodes

diff --git a/downstream/modules/platform/ref-scaling-hop-nodes.adoc b/downstream/modules/platform/ref-scaling-hop-nodes.adoc
index 4c0d789cb0..f432b665da 100644
--- a/downstream/modules/platform/ref-scaling-hop-nodes.adoc
+++ b/downstream/modules/platform/ref-scaling-hop-nodes.adoc
@@ -1,7 +1,9 @@
+:_mod-docs-content-type: REFERENCE
+
[id="ref-scaling-hop-nodes"]

= Benefits of scaling hop nodes

Because hop nodes use very low memory and CPU, vertically scaling these nodes does not impact capacity. Monitor the network bandwidth of any hop node that serves as the sole connection between many execution nodes and the control plane. If bandwidth use is saturated, consider changing the network.

-Horizontally scaling by adding more hop nodes could provide redundancy in the event that one hop node goes down, which can allow traffic to continue to flow between the control plane and the execution nodes.
+Horizontally scaling by adding more hop nodes could provide redundancy when one hop node goes down, which can allow traffic to continue to flow between the control plane and the execution nodes.
diff --git a/downstream/modules/platform/ref-schedule-jobs-worker-nodes.adoc b/downstream/modules/platform/ref-schedule-jobs-worker-nodes.adoc
index 468ccd12fd..376368ec9a 100644
--- a/downstream/modules/platform/ref-schedule-jobs-worker-nodes.adoc
+++ b/downstream/modules/platform/ref-schedule-jobs-worker-nodes.adoc
@@ -1,4 +1,6 @@
-[id="ref-schedule-jobs-worker-nodes"]
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-schedule-jobs-worker-nodes_{context}"]

= Jobs scheduled on the worker nodes

@@ -6,7 +8,7 @@
Both {ControllerName} and Kubernetes play a role in scheduling a job.

When a job is launched, its dependencies are fulfilled, meaning any project updates or inventory updates are launched by {ControllerName} as required by the job template, project, and inventory settings.

-If the job is not blocked by other business logic in {Controllername} and there is control capacity in the control plane to start the job, the job is submitted to the dispatcher.
+If the job is not blocked by other business logic in {ControllerName} and there is control capacity in the control plane to start the job, the job is submitted to the dispatcher.
The default setting for the "cost" to control a job is 1 _capacity_. So, a control pod with 100 capacity is able to control up to 100 jobs at a time.
Given control capacity, the job transitions from _pending_ to _waiting_.
diff --git a/downstream/modules/platform/ref-select-a-date-range.adoc b/downstream/modules/platform/ref-select-a-date-range.adoc
new file mode 100644
index 0000000000..2b497c5e45
--- /dev/null
+++ b/downstream/modules/platform/ref-select-a-date-range.adoc
@@ -0,0 +1,18 @@
+:_mod-docs-content-type: REFERENCE
+
+[id="ref-select-a-date-range"]
+
+= Selecting a date range for your `RENEWAL_GUIDANCE` report
+
+The `RENEWAL_GUIDANCE` report requires a `since` parameter, because the `until` parameter is not supported due to the nature of the HostMetric data and is always set to `now`.
To override a report date range that has already been built, use the `--force` parameter with the command. For more information, see the following examples: + +---- +# Build report for a specific date range, including the provided days +metrics-utility build_report --since=2025-03-01 + +# Build report for the last 12 months from the current date +metrics-utility build_report --since=12months + +# Build report for the last 12 months from the current date, overriding an existing report +metrics-utility build_report --since=12months --force +---- \ No newline at end of file diff --git a/downstream/modules/platform/ref-set-custom-pod-timeout.adoc b/downstream/modules/platform/ref-set-custom-pod-timeout.adoc new file mode 100644 index 0000000000..75ad2e90ef --- /dev/null +++ b/downstream/modules/platform/ref-set-custom-pod-timeout.adoc @@ -0,0 +1,44 @@ +:_mod-docs-content-type: REFERENCE + +[id="ref-set-custom-pod-timeout_{context}"] + += Extra settings +With `extra_settings`, you can pass many custom settings by using the awx-operator. +The `extra_settings` parameter is appended to `/etc/tower/settings.py` and can be an alternative to the `extra_volumes` parameter. + +[cols="20%,20%,20%",options="header"] +|==== +| Name | Description | Default +| `extra_settings` | Extra settings | '' +|==== + +.Example configuration of the `extra_settings` parameter + +[options="nowrap" subs="+quotes,attributes"] +---- + spec: + extra_settings: + - setting: MAX_PAGE_SIZE + value: "500" + + - setting: AUTH_LDAP_BIND_DN + value: "cn=admin,dc=example,dc=com" + + - setting: SYSTEM_TASK_ABS_MEM + value: "500" +---- + +.Custom pod timeouts + +A container group job in {ControllerName} transitions to the `running` state just before you submit the pod to the Kubernetes API. +{ControllerNameStart} then expects the pod to enter the `Running` state before `AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT` seconds have elapsed. +You can set `AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT` to a higher value if you want {ControllerName} to wait longer before canceling jobs that fail to enter the `Running` state. +`AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT` is how long {ControllerName} waits from the creation of a pod until the Ansible work begins in the pod. +You can also extend the time if the pod cannot be scheduled because of resource constraints. +You can do this by using `extra_settings` on the {ControllerName} specification. +The default value is two hours. + +Extend this timeout if you are consistently launching many more jobs than Kubernetes can schedule, and jobs are spending periods longer than `AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT` in _pending_. + +Jobs are not launched until control capacity is available. +If many more jobs are being launched than the container group has capacity to run, consider scaling up your Kubernetes worker nodes.
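+
+For example, the following minimal sketch raises this timeout through the same `extra_settings` mechanism shown above; the value of 14400 seconds (four hours) is illustrative:
+
+[options="nowrap" subs="+quotes,attributes"]
+----
+  spec:
+    extra_settings:
+      - setting: AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT
+        value: "14400"
+----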
diff --git a/downstream/modules/platform/ref-set-requests-limits-for-containers.adoc b/downstream/modules/platform/ref-set-requests-limits-for-containers.adoc index d078806705..80e0ba7cc5 100644 --- a/downstream/modules/platform/ref-set-requests-limits-for-containers.adoc +++ b/downstream/modules/platform/ref-set-requests-limits-for-containers.adoc @@ -1,4 +1,6 @@ -[id="ref-set-requests-limits-for-containers"] +:_mod-docs-content-type: REFERENCE + +[id="ref-set-requests-limits-for-containers_{context}"] = Requests and limits for task containers diff --git a/downstream/modules/platform/ref-show-ephemeral-use.adoc b/downstream/modules/platform/ref-show-ephemeral-use.adoc new file mode 100644 index 0000000000..709db50b4d --- /dev/null +++ b/downstream/modules/platform/ref-show-ephemeral-use.adoc @@ -0,0 +1,13 @@ +:_mod-docs-content-type: REFERENCE + +[id="ref-show-ephemeral-use"] + += Showing ephemeral usage + +The `RENEWAL_GUIDANCE` report can list additional sheets with ephemeral usage if the `--ephemeral` parameter is provided. Using the parameter `--ephemeral=1month`, you can define ephemeral nodes as any managed node that has been automated for a maximum of one month, then never automated again. Using this parameter, the total ephemeral usage of the 12-month period is computed as the maximum ephemeral nodes used over all 1-month rolling date windows. This sheet is also added to the report. + +---- +# Will generate report for 12 months back with ephemeral nodes being nodes +# automated for less than 1 month. +metrics-utility build_report --since=12months --ephemeral=1month +---- diff --git a/downstream/modules/platform/ref-single-node-control-plane-single-execution-node.adoc b/downstream/modules/platform/ref-single-node-control-plane-single-execution-node.adoc index d889723e20..75a4d34ece 100644 --- a/downstream/modules/platform/ref-single-node-control-plane-single-execution-node.adoc +++ b/downstream/modules/platform/ref-single-node-control-plane-single-execution-node.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-single-node-control-plane-single-execution-node"] = Single node control plane with single execution node diff --git a/downstream/modules/platform/ref-storage-invocation.adoc b/downstream/modules/platform/ref-storage-invocation.adoc new file mode 100644 index 0000000000..3387ec0857 --- /dev/null +++ b/downstream/modules/platform/ref-storage-invocation.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: REFERENCE + +[id="ref-storage-invocation"] + += Storage and invocation +The `RENEWAL_GUIDANCE` report supports only local disk storage for storing the report results. This report does not have a data gathering step. It reads directly from the controller HostMetric table, so it does not store any raw data under the `METRICS_UTILITY_SHIP_PATH`. + +---- +# All parameters the RENEWAL_GUIDANCE report needs +export METRICS_UTILITY_SHIP_TARGET=controller_db +export METRICS_UTILITY_REPORT_TYPE=RENEWAL_GUIDANCE +export METRICS_UTILITY_SHIP_PATH=/path_to_built_report/... + +# Will generate report for 12 months back with ephemeral nodes being nodes +# automated for less than 1 month.
+metrics-utility build_report --since=12months --ephemeral=1month +---- \ No newline at end of file diff --git a/downstream/modules/platform/ref-supported-storage.adoc b/downstream/modules/platform/ref-supported-storage.adoc new file mode 100644 index 0000000000..04012db9f8 --- /dev/null +++ b/downstream/modules/platform/ref-supported-storage.adoc @@ -0,0 +1,40 @@ +:_mod-docs-content-type: REFERENCE + +[id="ref-supported-storage"] + += Supported storage + +Supported storage is available for storing the raw data obtained by using the `metrics-utility gather_automation_controller_billing_data` command and storing the generated reports obtained by using the `metrics-utility build_report` command. +Apply the environment variables to this storage based on your {PlatformNameShort} installation. + +== Local disk +For an installation of {PlatformNameShort} on {RHEL}, the default storage option is a local disk. For a deployment on {OCPShort}, the default storage is a path inside the attached Persistent Volume Claim. + +---- +# Set needed ENV VARs for gathering data and generating reports +export METRICS_UTILITY_SHIP_TARGET=directory +# Your path on the local disk +export METRICS_UTILITY_SHIP_PATH=/path_to_data_and_reports/... +---- + +== Object storage with an S3 interface + +To use object storage with an S3 interface, for example, AWS S3, Ceph Object Storage, or MinIO, you must define environment variables for the data gathering and report building commands and cron jobs. +---- +################ +export METRICS_UTILITY_SHIP_TARGET=s3 +# Your path in the object storage +export METRICS_UTILITY_SHIP_PATH=path_to_data_and_reports/... + +################ +# Define S3 config +export METRICS_UTILITY_BUCKET_NAME=metricsutilitys3 +export METRICS_UTILITY_BUCKET_ENDPOINT="https://s3.us-east-1.amazonaws.com" +# For AWS S3, also define a region +export METRICS_UTILITY_BUCKET_REGION="us-east-1" + +################ +# Define S3 credentials +export METRICS_UTILITY_BUCKET_ACCESS_KEY= +export METRICS_UTILITY_BUCKET_SECRET_KEY= +---- \ No newline at end of file diff --git a/downstream/modules/platform/ref-system-proxy-config.adoc b/downstream/modules/platform/ref-system-proxy-config.adoc new file mode 100644 index 0000000000..7d399ae3b3 --- /dev/null +++ b/downstream/modules/platform/ref-system-proxy-config.adoc @@ -0,0 +1,17 @@ +:_mod-docs-content-type: REFERENCE + +[id="ref-system-proxy-config"] + += System proxy configuration +The outbound proxy is configured at the system level for all the nodes in the control plane. + +The following environment variables must be set: +---- +http_proxy="http://external-proxy_0:3128" +https_proxy="http://external-proxy_0:3128" +no_proxy="localhost,127.0.0.0/8,10.0.0.0/8" +---- +You can also add those variables to the `/etc/environment` file to make them permanent. + +The installation program ensures that all external communication during the installation goes through the proxy. +For containerized installations, those variables ensure that Podman uses the egress proxy.
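+
+For example, one way to persist them is the following sketch; the proxy host, port, and exclusions are the same illustrative values used above:
+
+----
+cat >> /etc/environment <<'EOF'
+http_proxy="http://external-proxy_0:3128"
+https_proxy="http://external-proxy_0:3128"
+no_proxy="localhost,127.0.0.0/8,10.0.0.0/8"
+EOF
+----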
diff --git a/downstream/modules/platform/ref-system-requirements.adoc b/downstream/modules/platform/ref-system-requirements.adoc index cd12a0f28d..09781a3287 100644 --- a/downstream/modules/platform/ref-system-requirements.adoc +++ b/downstream/modules/platform/ref-system-requirements.adoc @@ -1,53 +1,79 @@ +:_mod-docs-content-type: REFERENCE + // [id="ref-platform-system-requirements_{context}"] = {PlatformName} system requirements -Your system must meet the following minimum system requirements to install and run {PlatformName}. +Your system must meet the following minimum system requirements to install and run {PlatformName}. +A resilient deployment requires 10 virtual machines with a minimum of 16 gigabytes (GB) of RAM and 4 virtual CPUs (vCPU). +See link:{LinkTopologies} for more information on topology options. + .Base system -[cols="a,a,a"] +[cols="20%,40%,40%", options="header"] +|==== +| Type | Description | Notes +h| Subscription | Valid {PlatformName} subscription | +h| Operating system +a| +* {RHEL} 8.8 or later minor versions of {RHEL} 8 +* {RHEL} 9.2 or later minor versions of {RHEL} 9 | {PlatformName} is also supported on OpenShift. See link:{LinkOperatorInstallation} for more information. +h| CPU architecture | x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power) | +h| Ansible-core | Ansible-core version {CoreUseVers} or later | {PlatformNameShort} uses the system-wide ansible-core package to install the platform, but uses ansible-core {CoreUseVers} for both its control plane and built-in execution environments. +h| Browser | A currently supported version of Mozilla Firefox or Google Chrome. | +h| Database | {PostgresVers} | {PlatformName} {PlatformVers} requires the external (customer supported) databases to have ICU support. +|==== + +.Virtual machine requirements + +[cols="25%,10%,10%,10%,45%", options="header"] +|=== +| Component | RAM | vCPU | Disk IOPS | Storage + +| {GatewayStart} | 16GB | 4 | 3000 | 60GB minimum +| Control nodes | 16GB | 4 | 3000 | 80GB minimum with at least 20GB available under `/var/lib/awx` +| Execution nodes | 16GB | 4 | 3000 | 60GB minimum +| Hop nodes | 16GB | 4 | 3000 | 60GB minimum +| {HubNameStart} | 16GB | 4 | 3000 | 60GB minimum with at least 40GB allocated to `/var/lib/pulp` +| Database | 16GB | 4 | 3000 | 100GB minimum allocated to `/var/lib/pgsql` +| {EDAcontroller} | 16GB | 4 | 3000 | 60GB minimum |=== -| Requirement | Required | Notes - -h| Subscription | Valid {PlatformName} | - -h| OS | {RHEL} 8.6 or later 64-bit (x86, ppc64le, s390x, aarch64) |{PlatformName} is also supported on OpenShift, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/index[Deploying the Red Hat Ansible Automation Platform operator on OpenShift Container Platform] for more information. - -h| Ansible-core | Ansible-core version {CoreInstVers} or later | {PlatformNameShort} includes execution environments that contain ansible-core {CoreUseVers}. -h| Python | 3.9 or later | +[NOTE] +==== +These are minimum requirements and can be increased for larger workloads in increments of 2x (for example, 16GB becomes 32GB and 4 vCPU becomes 8 vCPU). See the horizontal scaling guide for more information.
+==== -h| Browser | A currently supported version of Mozilla FireFox or Google Chrome | +.Repository requirements -h| Database | PostgreSQL version 13 | -|=== +Enable the following repositories only when installing {PlatformName}: -The following are necessary for you to work with project updates and collections: +* RHEL BaseOS -* Ensure that the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/ref-network-ports-protocols_planning[network ports and protocols] listed in _Table 5.9. Automation Hub_ are available for successful connection and download of collections from {HubName} or {Galaxy} server. -* Disable SSL inspection either when using self-signed certificates or for the Red Hat domains. +* RHEL AppStream [NOTE] ==== -The requirements for systems managed by {PlatformNameShort} are the same as for Ansible. -See link:https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#prerequisites[Installing Ansible] in the Ansible Community Documentation. +If you enable repositories besides those mentioned above, the {PlatformName} installation could fail unexpectedly. ==== -.Additional notes for {PlatformName} requirements +The following are necessary for you to work with project updates and collections: + +* Ensure that the link:{URLPlanningGuide}/ref-network-ports-protocols_planning#ref-network-ports-protocols_planning[Network ports and protocols] listed in _Table 6.3. Automation Hub_ are available for successful connection and download of collections from {HubName} or {Galaxy} server. -* {PlatformName} depends on Ansible Playbooks and requires the installation of the latest stable version of ansible-core. You can download ansible-core manually or download it automatically as part of your installation of {PlatformName}. +.Additional notes for {PlatformName} requirements -* For new installations, {ControllerName} installs the latest release package of ansible-core. +* The {PlatformNameShort} database backups are staged on each node at `/var/backups/automation-platform` through the variable `backup_dir`. You might need to mount a new volume to `/var/backups` or change the staging location with the variable `backup_dir` to prevent issues with disk space before running the `./setup.sh -b` script. * If performing a bundled {PlatformNameShort} installation, the installation setup.sh script attempts to install ansible-core (and its dependencies) from the bundle for you. -* If you have installed Ansible manually, the {PlatformNameShort} installation setup.sh script detects that Ansible has been installed and does not attempt to reinstall it. +* If you have installed Ansible-core manually, the {PlatformNameShort} installation setup.sh script detects that Ansible has been installed and does not attempt to reinstall it. [NOTE] ==== -You must install Ansible using a package manager such as `dnf`, and the latest stable version of the package manager must be installed for {PlatformName} to work properly. -Ansible version 2.14 is required for versions {PlatformVers} and later. +You must use Ansible-core, which is installed via dnf. +Ansible-core version {CoreUseVers} is required for versions {PlatformVers} and later. 
==== diff --git a/downstream/modules/platform/ref-thycotic-devops-vault.adoc b/downstream/modules/platform/ref-thycotic-devops-vault.adoc index 5ecd78835b..eaf3dba70c 100644 --- a/downstream/modules/platform/ref-thycotic-devops-vault.adoc +++ b/downstream/modules/platform/ref-thycotic-devops-vault.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-thycotic-devops-vault"] = Thycotic DevOps Secrets Vault diff --git a/downstream/modules/platform/ref-thycotic-secret-server.adoc b/downstream/modules/platform/ref-thycotic-secret-server.adoc index 36d193aa79..441ae261fd 100644 --- a/downstream/modules/platform/ref-thycotic-secret-server.adoc +++ b/downstream/modules/platform/ref-thycotic-secret-server.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-thycotic-secret-server"] = Thycotic Secret Server diff --git a/downstream/modules/platform/ref-using-custom-receptor-signing-keys.adoc b/downstream/modules/platform/ref-using-custom-receptor-signing-keys.adoc index 1b83fff58f..fee4f5e6a1 100644 --- a/downstream/modules/platform/ref-using-custom-receptor-signing-keys.adoc +++ b/downstream/modules/platform/ref-using-custom-receptor-signing-keys.adoc @@ -7,9 +7,9 @@ = Using custom Receptor signing keys [role="_abstract"] -Receptor signing is now enabled by default unless `receptor_disable_signing=true` is set, and a RSA key pair (public/private) is generated by the installer. However, you can provide custom RSA public/private keys by setting the path variable. +Receptor signing is enabled by default unless `receptor_disable_signing=true` is set, and an RSA key pair (public and private) is generated by the installation program. However, you can set custom RSA public and private keys by using the following variables: ---- -receptor_signing_private_key=/full/path/to/private/key -receptor_signing_public_key=/full/path/to/public/key +receptor_signing_private_key= +receptor_signing_public_key= ---- diff --git a/downstream/modules/platform/ref-using-custom-tls-certificates.adoc b/downstream/modules/platform/ref-using-custom-tls-certificates.adoc deleted file mode 100644 index e8b4a03e82..0000000000 --- a/downstream/modules/platform/ref-using-custom-tls-certificates.adoc +++ /dev/null @@ -1,48 +0,0 @@ -:_newdoc-version: 2.15.1 -:_template-generated: 2024-01-12 - -:_mod-docs-content-type: REFERENCE - -[id="using-custom-tls-certificates_{context}"] -= Using custom TLS certificates - -[role="_abstract"] - -By default, the installer generates TLS certificates and keys for all services which are signed by a custom Certificate Authority (CA). You can provide a custom TLS certificate/key for each service. If that certificate is signed by a custom CA, you must provide the CA TLS certificate and key. 
- * Certificate Authority ---- -ca_tls_cert=/full/path/to/tls/certificate -ca_tls_key=/full/path/to/tls/key ---- - -* Automation Controller ---- -controller_tls_cert=/full/path/to/tls/certificate -controller_tls_key=/full/path/to/tls/key ---- - -* Automation Hub ---- -hub_tls_cert=/full/path/to/tls/certificate -hub_tls_key=/full/path/to/tls/key ---- - -* Automation EDA ---- -eda_tls_cert=/full/path/to/tls/certificate -eda_tls_key=/full/path/to/tls/key ---- - -* Postgresql ---- -postgresql_tls_cert=/full/path/to/tls/certificate -postgresql_tls_key=/full/path/to/tls/key ---- - -* Receptor ---- -receptor_tls_cert=/full/path/to/tls/certificate -receptor_tls_key=/full/path/to/tls/key ---- - diff --git a/downstream/modules/platform/ref-work-with-permissions.adoc b/downstream/modules/platform/ref-work-with-permissions.adoc index b77cfc09dd..78d993154d 100644 --- a/downstream/modules/platform/ref-work-with-permissions.adoc +++ b/downstream/modules/platform/ref-work-with-permissions.adoc @@ -1,3 +1,5 @@ +:_mod-docs-content-type: REFERENCE + [id="ref-work-with-permissions"] = Work with permissions @@ -5,7 +7,7 @@ The set of permissions assigned to a project (role-based access controls) that provide the ability to read, change, and administer projects, inventories, job templates, and other elements are privileges. -To access the project permissions, select the *Access* tab of the *Projects* page. +To access the project permissions, select the *User Access* or *Team Access* tab of the *Projects* page. This screen displays a list of users that currently have permissions to this project. You can sort and search this list by *Username*, *First Name*, or *Last Name*. diff --git a/downstream/modules/platform/ref_proxy-backends.adoc b/downstream/modules/platform/ref_proxy-backends.adoc new file mode 100644 index 0000000000..a1b6c47361 --- /dev/null +++ b/downstream/modules/platform/ref_proxy-backends.adoc @@ -0,0 +1,35 @@ +:_mod-docs-content-type: REFERENCE + +[id="ref_proxy-backends"] + += Proxy backends +For HTTP and HTTPS proxies, you can use a Squid server. +Squid is a forward proxy for the web that supports HTTP, HTTPS, and FTP, reducing bandwidth and improving response times by caching and reusing frequently requested web pages. +It is licensed under the GNU GPL. +Forward proxies are systems that intercept network traffic going to another network (typically the internet) and send it on behalf of the internal systems. +The Squid proxy enables all required communication to pass through it. + +Make sure that all the required {PlatformNameShort} control plane ports are open on the Squid proxy backend.
{PlatformNameShort}-specific ports: + +---- +acl Safe_ports port 81 +acl Safe_ports port 82 +acl Safe_ports port 389 +acl Safe_ports port 444 +acl Safe_ports port 445 +acl SSL_ports port 22 +---- +The following ports are for containerized installations: +---- +acl SSL_ports port 444 +acl SSL_ports port 445 +acl SSL_ports port 8443 +acl SSL_ports port 8444 +acl SSL_ports port 8445 +acl SSL_ports port 8446 +acl SSL_ports port 44321 +acl SSL_ports port 44322 + +http_access deny !Safe_ports +http_access deny CONNECT !SSL_ports +---- \ No newline at end of file diff --git a/downstream/modules/playbooks/ref-create-variables.adoc b/downstream/modules/playbooks/ref-create-variables.adoc index ad3955e543..15d9d514a4 100644 --- a/downstream/modules/playbooks/ref-create-variables.adoc +++ b/downstream/modules/playbooks/ref-create-variables.adoc @@ -29,4 +29,4 @@ webservers: vars: ansible_user: my_server_user ---- -For more information about inventories and Ansible inventory variables, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/about_the_installer_inventory_file[About the Installer Inventory file] and link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/appendix-inventory-files-vars[Inventory file variables]. +For more information about inventories and Ansible inventory variables, see link:{URLPlanningGuide}/about_the_installer_inventory_file[About the Installer Inventory file] and link:{URLInstallationGuide}/appendix-inventory-files-vars[Inventory file variables]. diff --git a/downstream/modules/playbooks/ref-playbook-execution.adoc b/downstream/modules/playbooks/ref-playbook-execution.adoc index 703cbb8392..c4466aaac8 100644 --- a/downstream/modules/playbooks/ref-playbook-execution.adoc +++ b/downstream/modules/playbooks/ref-playbook-execution.adoc @@ -11,7 +11,7 @@ At a minimum, each play defines two things: * the managed nodes to target, using a pattern * at least one task to execute -[Note] +[NOTE] ==== In Ansible 2.10 and later, use the fully-qualified collection name in your playbooks to ensure the correct module is selected, because multiple collections can contain modules with the same name (for example, `user`). ==== diff --git a/downstream/modules/terraform-aap/con-terraform-intro.adoc b/downstream/modules/terraform-aap/con-terraform-intro.adoc new file mode 100644 index 0000000000..7bb493abb2 --- /dev/null +++ b/downstream/modules/terraform-aap/con-terraform-intro.adoc @@ -0,0 +1,27 @@ +:_mod-docs-content-type: CONCEPT + +[id="introduction"] + += Introduction + +[role="_abstract"] + +Many organizations find themselves using both {PlatformNameShort} and {Terraform}, recognizing that these two open-source IT tools can work in harmony to create an improved experience for developers and operations teams. While {Terraform} excels at Infrastructure as Code (IaC) for provisioning and de-provisioning cloud resources, {PlatformNameShort} is a versatile, all-purpose automation solution ideal for configuration management, application deployment, and orchestrating complex IT workflows across diverse domains. + +This integration directly addresses common challenges such as managing disparate automation tools, ensuring consistent configuration across hybrid cloud environments, and accelerating deployment cycles.
By bringing together {Terraform}'s declarative approach to infrastructure provisioning with {PlatformNameShort}'s procedural approach to configuration and orchestration, users can achieve: + +* **Optimized costs:** Reduce cloud waste, minimize manual processes, and combat tool sprawl. This integration can lead to a significant reduction in infrastructure costs and a high return on investment. + +* **Reduced risk:** Lower the risk of breaches, enforce policies, and significantly decrease unplanned downtime. The ability to review {Terraform} plan output before applying it in a workflow, with approval steps, enhances security and compliance. + +* **Faster time to value:** Boost developer productivity and deploy new compute resources more rapidly, leading to a faster time to market. This is achieved through unified lifecycle management and automation for Day 0 (provisioning), Day 1 (configuration), and Day 2 (ongoing management) operations. + +This integration provides various implementation workflows to support diverse needs: + +* **Ansible-initiated workflow:** {PlatformNameShort} can directly call {Terraform} for provisioning within comprehensive, end-to-end automation workflows. This allows {PlatformNameShort} users to use {Terraform}'s provisioning capabilities while maintaining {PlatformNameShort} as their primary platform for configuration and lifecycle management. + +* **{Terraform} community-initiated workflow:** Community-edition users can migrate to {TerraformEnterpriseShortName} or {TerraformCloudShortName}, and then integrate the {PlatformNameShort} capabilities. + +* **{Terraform}-initiated workflow:** For existing {Terraform} users, {Terraform} can directly call {PlatformNameShort} at the end of provisioning for a more seamless and secure workflow. This enables {Terraform} users to enhance their immutable infrastructure automation with {PlatformNameShort} Day 2 automation capabilities and manage infrastructure updates and lifecycle events. + +By enabling direct calls between {PlatformNameShort} and {Terraform}, organizations can unlock time to value by creating combined workflows, reduce risk through enhanced product integrations, and enhance Infrastructure as Code with {PlatformNameShort} content and practices. This allows for unified lifecycle management, enabling tasks from initial provisioning and configuration to ongoing health checks, incident response, patching, and infrastructure optimization. \ No newline at end of file diff --git a/downstream/modules/terraform-aap/proc-terraform-building-execution-environment.adoc b/downstream/modules/terraform-aap/proc-terraform-building-execution-environment.adoc new file mode 100644 index 0000000000..b0f047c10f --- /dev/null +++ b/downstream/modules/terraform-aap/proc-terraform-building-execution-environment.adoc @@ -0,0 +1,36 @@ +:_mod-docs-content-type: PROCEDURE + +[id="terraform-building-execution-environment"] + += Building an execution environment in {PlatformNameShort} + +You must build an execution environment by using {ControllerName} so that {PlatformNameShort} can provide the credentials necessary for using its automation features. + +.Prerequisites + +* You need a pre-existing execution environment that includes the latest version of the `cloud.terraform` collection before you can add it by using {ControllerName}. You cannot use the default execution environment provided by {PlatformNameShort} because the default environment does not include the `terraform` CLI binary.
+ +[NOTE] +==== +If you have migrated from {TerraformCommunityName}, you can continue to use your existing execution environment and update it to the latest version of `cloud.terraform`. +==== ++ +* Install the `terraform` CLI binary in your pre-existing execution environment. See **Additional resources** below for a link to the binary. + +.Procedure + +. From the navigation panel, select **Infrastructure > Execution Environments**. +. Click btn:[Create execution environment]. ++ +image::ee-create-new.png[Create a new execution environment page] ++ +. For **Name**, enter a name for your {PlatformNameShort} execution environment. +. For **Image**, enter the repository link to the image for your pre-existing execution environment. +. Click btn:[Create execution environment]. Your newly added execution environment is ready to be used in a job template. + +.Additional resources + +* link:https://developer.hashicorp.com/terraform/install[`terraform` CLI binary] +* link:https://catalog.redhat.com/search?gs&q=execution%20environments&searchType=containers[Red Hat ecosystem catalog] +* link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/assembly-controller-execution-environments#proc-controller-use-an-exec-envi[Execution environments] +* link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/creating_and_using_execution_environments/index[Creating and using execution environments] diff --git a/downstream/modules/terraform-aap/proc-terraform-creating-credential.adoc b/downstream/modules/terraform-aap/proc-terraform-creating-credential.adoc new file mode 100644 index 0000000000..bfb1e0b396 --- /dev/null +++ b/downstream/modules/terraform-aap/proc-terraform-creating-credential.adoc @@ -0,0 +1,55 @@ +:_mod-docs-content-type: PROCEDURE + +[id="terraform-creating-credential"] + += Creating a credential + +You can set up credentials directly from the {PlatformNameShort} user interface. The credentials are provided to the execution environment, and {PlatformNameShort} reads them from there. This eliminates the need to manually update each playbook. + +.Prerequisite + +* You must have a {Terraform} API token set up. + +.Procedure + +. Log in to {PlatformNameShort}. +. From the navigation panel, select **Infrastructure > Credential Types**. +. Click btn:[Create a credential type]. The **Create Credential Type** page opens and displays the **Details** tab. +. For the **Credential Type**, enter a name. +. In the **Input configuration** field, enter the following YAML parameter and values: ++ +---- +fields: + - id: token + type: string + label: token + secret: true +---- ++ +. In the **Injector configuration** field, enter the environment hostname and {Terraform} token. ++ +* For {TerraformEnterpriseShortName}, the hostname is the location where you have deployed TFE: ++ +---- +env: +  TF_TOKEN_: '{{ token }}' +---- ++ +* For {TerraformCloudShortName}, use: ++ +---- +env: +  TF_TOKEN_app_terraform_io: '{{ token }}' +---- ++ +. To save your configuration, click btn:[Create Credential Type] again. The new credential type is created. +. To create an instance of your new credential type, go to the **Infrastructure > Credentials** page, and select btn:[Create credential]. +. From the **Credential type** drop-down list, select the name of the credential type you created earlier. +. In the **Token** field, enter the {Terraform} API token. +.
(Optional) Edit the **Description** and select the {Terraform} organization from the **Organization** drop-down list. +. Click btn:[Save credential]. + +.Additional resources + +* link:https://developer.hashicorp.com/terraform/cli/config/config-file#environment-variable-credentials[{Terraform} CLI configuration] +* link:https://developer.hashicorp.com/terraform/cloud-docs/users-teams-organizations/api-tokens#user-api-tokens[{Terraform} API tokens] diff --git a/downstream/modules/terraform-aap/proc-terraform-creating-launching-job-template.adoc b/downstream/modules/terraform-aap/proc-terraform-creating-launching-job-template.adoc new file mode 100644 index 0000000000..54c5f3b724 --- /dev/null +++ b/downstream/modules/terraform-aap/proc-terraform-creating-launching-job-template.adoc @@ -0,0 +1,28 @@ +:_mod-docs-content-type: PROCEDURE + +[id="terraform-creating-launching-job-template"] + += Creating and launching a job template + +Create and launch a job template to complete the integration and use the automation features in {PlatformNameShort}. + +.Procedure + +. From the navigation panel, select **Automation Execution > Templates**. +. Select **Create template > Create Job Template**. +. From the **Execution Environment** list, select the environment you created. +. From the **Credentials** drop-down list, select the credentials instance you created previously. If you do not see the credentials, select **Browse** to see more options in the list. +. Enter any additional information for the required fields. +. Click btn:[Create job template]. +. Click btn:[Launch template]. +. To launch the job, click btn:[Next] and btn:[Finish]. The job output shows that the job has run. + +.Verification + +To verify from the {Terraform} user interface that the job has run successfully, select **Workspaces > Ansible-Content-Integration > Run**. The run list shows the state of the *Triggered via CLI* job. You can see it move from the *Queued* state to the *Plan Finished* state. + +.Additional resources + +* link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/assembly-controller-execution-environments#proc-controller-use-an-exec-env[Adding an execution environment to a job template] +* link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/configuring_automation_execution/index[Configuring automation execution] +* link:https://developer.hashicorp.com/terraform/enterprise[HashiCorp {TerraformEnterpriseShortName} documentation] diff --git a/downstream/modules/terraform-aap/proc-terraform-migrating-from-community.adoc b/downstream/modules/terraform-aap/proc-terraform-migrating-from-community.adoc new file mode 100644 index 0000000000..d97156599c --- /dev/null +++ b/downstream/modules/terraform-aap/proc-terraform-migrating-from-community.adoc @@ -0,0 +1,55 @@ +:_mod-docs-content-type: PROCEDURE + +[id="terraform-migrating-from-community"] + += Migrating from the community version of {Terraform} + +When you migrate from {TerraformCommunityName} Edition (TCE) to {TerraformEnterpriseShortName} (TFE) or {TerraformCloudShortName}, you are not migrating the collection itself. Instead, you are adapting your existing usage to work with {TerraformEnterpriseShortName} or {TerraformCloudShortName}. After you migrate, you can use the automation features of {PlatformNameShort}. + +[NOTE] +==== +The `cloud.terraform` collection supports only the CLI-driven workflow in {TerraformCloudShortName}.
+==== + +.Prerequisites + +* Use the latest supported version of {Terraform} (1.11 or higher). +* Follow the `tf-migrate` CLI instructions under **Additional resources** below. +* Ensure that the {TerraformCloudShortName} or TFE workspace is not set to automatically apply plans. + +.Procedure + +. To prevent errors when running playbooks against TFE or {TerraformCloudShortName}, do the following actions before running a playbook: + +.. Confirm that the {Terraform} version in the execution environment is the same as the version stated in TFE or {TerraformCloudShortName}. +.. Perform an initialization in TFE or {TerraformCloudShortName}: ++ +---- +terraform init +---- ++ +.. If you have a local state file in your execution environment, delete the local state file. +.. Get a token from {TerraformCloudShortName} or {TerraformEnterpriseShortName}, which you will use to create the credential in a later step. Ensure that the token, whether it is a team or user token, has the permissions necessary to run the required operations in the playbook. +.. Remove the backend config and files from your playbook definition. +.. Add the workspace to the default setting in your {Terraform} config, or use an environment variable if you want to define the workspace without updating the playbook itself. ++ +[NOTE] +==== +You can add the workspace to your playbook to scale your workspace utilization. +==== ++ +. From the {PlatformNameShort} user interface: +.. Create a credential. +.. Build an execution environment. +.. Create and launch a job template. + +. After the migration is completed and verified, you can run the additional modules and plugins from the collection, listed under **Additional resources** below, in your execution environment. + +.Additional resources + +* link:https://console.redhat.com/ansible/automation-hub/repo/published/cloud/terraform/content/module/plan_stash/[Plan Stash module] +* link:https://console.redhat.com/ansible/automation-hub/repo/published/cloud/terraform/content/module/terraform/[{Terraform} module] +* link:https://console.redhat.com/ansible/automation-hub/repo/published/cloud/terraform/content/module/terraform_output/[Output plugin] +* link:https://console.redhat.com/ansible/automation-hub/repo/published/cloud/terraform/content/lookup/tf_output/[Output lookup plugin] +* link:https://console.redhat.com/ansible/automation-hub/repo/published/cloud/terraform/content/inventory/terraform_state/[State inventory plugin] +* link:https://developer.hashicorp.com/terraform/cloud-docs/migrate/tf-migrate[`tf-migrate` CLI instructions] diff --git a/downstream/modules/topologies/ref-cont-a-env-a.adoc b/downstream/modules/topologies/ref-cont-a-env-a.adoc new file mode 100644 index 0000000000..cc932ea004 --- /dev/null +++ b/downstream/modules/topologies/ref-cont-a-env-a.adoc @@ -0,0 +1,72 @@ +:_mod-docs-content-type: REFERENCE +[id="cont-a-env-a"] += Container {GrowthTopology} + +include::snippets/growth-topologies.adoc[] + +== Infrastructure topology +The following diagram outlines the infrastructure topology that Red{nbsp}Hat has tested with this deployment model that customers can use when self-managing {PlatformNameShort}: + +.Infrastructure topology diagram +image::cont-a-env-a.png[Container {GrowthTopology} diagram] + +A single VM has been tested with the following component requirements: + +include::snippets/cont-tested-vm-config.adoc[] + +[NOTE] +==== +If you perform a bundled installation of the {GrowthTopology} with `hub_seed_collections=true`, 32 GB RAM is recommended.
With this configuration, the installation time increases, and seeding the collections alone can take 45 minutes or more. +==== + +.Infrastructure topology +[options="header"] +|==== +| Purpose | Example group names +| All {PlatformNameShort} components +a| +* `automationgateway` +* `automationcontroller` +* `automationhub` +* `automationeda` +* `database` +|==== + +== Tested system configurations + +Red{nbsp}Hat has tested the following configurations to install and run {PlatformName}: + +include::snippets/cont-tested-system-config.adoc[] + + +== Network ports + +{PlatformName} uses several ports to communicate with its services. These ports must be open and available for incoming connections to the {PlatformName} server for it to work. Ensure that these ports are available and are not blocked by the server firewall. + +.Network ports and protocols +[options="header"] +|==== +| Port number | Protocol | Service | Source | Destination +| 80/443 | TCP | HTTP/HTTPS | {EDAName} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {EDAName} | {ControllerNameStart} +| 80/443 | TCP | HTTP/HTTPS | {ControllerNameStart} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {ControllerNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {EDAName} +| 5432 | TCP | PostgreSQL | {EDAName} | External database +| 5432 | TCP | PostgreSQL | {GatewayStart} | External database +| 5432 | TCP | PostgreSQL | {HubNameStart} | External database +| 5432 | TCP | PostgreSQL | {ControllerNameStart} | External database +| 6379 | TCP | Redis | {EDAName} | Redis container +| 6379 | TCP | Redis | {GatewayStart} | Redis container +| 8443 | TCP | HTTPS | {GatewayStart} | {GatewayStart} +| 27199 | TCP | Receptor | {ControllerNameStart} | Execution container +//| 50051 | TCP | gRPC | {GatewayStart} | {GatewayStart} +|==== + +== Example inventory file +Use the example inventory file to perform an installation for this topology: + +include::snippets/inventory-cont-a-env-a.adoc[] + +SSH keys are required only when installing on remote hosts. If you are doing a self-contained, local, VM-based installation, you can use `ansible_connection=local`.
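+
+For example, a hypothetical host entry for a local, single-VM installation might look like the following; the host name is illustrative, and the group is one of the example group names listed above:
+
+----
+[automationgateway]
+aap.example.org ansible_connection=local
+----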
diff --git a/downstream/modules/topologies/ref-cont-b-env-a.adoc b/downstream/modules/topologies/ref-cont-b-env-a.adoc new file mode 100644 index 0000000000..504cfc3377 --- /dev/null +++ b/downstream/modules/topologies/ref-cont-b-env-a.adoc @@ -0,0 +1,85 @@ +:_mod-docs-content-type: REFERENCE +[id="cont-b-env-a"] += Container {EnterpriseTopology} + +include::snippets/enterprise-topologies.adoc[] + +== Infrastructure topology +The following diagram outlines the infrastructure topology that Red{nbsp}Hat has tested with this deployment model that customers can use when self-managing {PlatformNameShort}: + +.Infrastructure topology diagram +image::cont-b-env-a.png[Container {EnterpriseTopology} diagram] + +Each VM has been tested with the following component requirements: + +include::snippets/cont-tested-vm-config.adoc[] + +.Infrastructure topology +[options="header"] +|==== +| VM count | Purpose | Example VM group names +| 2 | {GatewayStart} with colocated Redis | `automationgateway` +| 2 | {ControllerNameStart} | `automationcontroller` +| 2 | {PrivateHubNameStart} with colocated Redis | `automationhub` +| 2 | {EDAName} with colocated Redis | `automationeda` +| 1 | {AutomationMeshStart} hop node | `execution_nodes` +| 2 | {AutomationMeshStart} execution node | `execution_nodes` +| 1 | Externally managed database service | N/A +| 1 | HAProxy load balancer in front of {Gateway} (externally managed) | N/A +|==== + +[NOTE] +==== +include::snippets/redis-colocation-containerized.adoc[] +==== + +== Tested system configurations + +Red{nbsp}Hat has tested the following configurations to install and run {PlatformName}: + +include::snippets/cont-tested-system-config.adoc[] + +== Network ports + +{PlatformName} uses several ports to communicate with its services. These ports must be open and available for incoming connections to the {PlatformName} server for it to work. Ensure that these ports are available and are not blocked by the server firewall. 
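+
+For example, on a {RHEL} host you might open the Receptor port with `firewall-cmd` (a minimal sketch; apply the same pattern to the other ports in the following table):
+
+----
+firewall-cmd --permanent --add-port=27199/tcp
+firewall-cmd --reload
+----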
.Network ports and protocols +[options="header"] +|==== +| Port number | Protocol | Service | Source | Destination +| 80/443 | TCP | HTTP/HTTPS | {EDAName} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {EDAName} | {ControllerNameStart} +| 80/443 | TCP | HTTP/HTTPS | {ControllerNameStart} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | HAProxy load balancer | {GatewayStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {ControllerNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {EDAName} +| 5432 | TCP | PostgreSQL | {EDAName} | External database +| 5432 | TCP | PostgreSQL | {GatewayStart} | External database +| 5432 | TCP | PostgreSQL | {HubNameStart} | External database +| 5432 | TCP | PostgreSQL | {ControllerNameStart} | External database +| 6379 | TCP | Redis | {EDAName} | Redis node +| 6379 | TCP | Redis | {GatewayStart} | Redis node +| 8443 | TCP | HTTPS | {GatewayStart} | {GatewayStart} +| 16379 | TCP | Redis | Redis node | Redis node +| 27199 | TCP | Receptor | {ControllerNameStart} | Hop node and execution node +| 27199 | TCP | Receptor | Hop node | Execution node +//| 50051 | TCP | gRPC | {GatewayStart} | {GatewayStart} +|==== + +== Example inventory file +Use the example inventory file to perform an installation for this topology: + +include::snippets/inventory-cont-b-env-a.adoc[] + +// Michelle: Removing this section until documentation can be improved - See AAP-42854 +// == Storage requirements +// * Execution environments are pulled into {ControllerName} hybrid nodes and execution nodes that run jobs. The size of these containers influences the storage requirements for `$PATH_WHERE_PODMAN_PUTS_CONTAINER_IMAGES`. + +// * The primary determining factors for the size of the database and its storage volume, which defaults to `$POSTGRES_DEFAULT_DATA_DIR`, are: +// ** The quantity of job events (lines of output from {ControllerName} jobs) +// ** The quantity of days of job data that are retained + +// * On execution nodes and {ControllerName} control and hybrid nodes, job output is buffered to the disk in `$NAME_OF_RECEPTOR_DIR_VAR`, which defaults to `/tmp`. + +// * The size and quantity of collections synced to {HubName} influence the storage requirements of `$PATH_WHERE_PULP_STORES_COLLECTIONS`. diff --git a/downstream/modules/topologies/ref-installation-deployment-models.adoc b/downstream/modules/topologies/ref-installation-deployment-models.adoc new file mode 100644 index 0000000000..ffc550ddf2 --- /dev/null +++ b/downstream/modules/topologies/ref-installation-deployment-models.adoc @@ -0,0 +1,29 @@ +:_mod-docs-content-type: REFERENCE +[id="installation-and-deployment-models"] + += Installation and deployment models + +The following table outlines the different ways to install or deploy {PlatformNameShort}: + +.{PlatformNameShort} installation and deployment models +[options="header"] +|==== +| Mode | Infrastructure | Description | Tested topologies +| RPM | Virtual machines and bare metal | The RPM installer deploys {PlatformNameShort} on {RHEL} by using RPMs to install the platform on host machines. Customers manage the product and infrastructure lifecycle. +a| +* link:{URLTopologies}/rpm-topologies#rpm-a-env-a[RPM {GrowthTopology}] +* link:{URLTopologies}/rpm-topologies#rpm-b-env-a[RPM {EnterpriseTopology}] +| Containers +| Virtual machines and bare metal +| The containerized installer deploys {PlatformNameShort} on {RHEL} by using Podman, which runs the platform in containers on host machines.
Customers manage the product and infrastructure lifecycle. +a| +* link:{URLTopologies}/container-topologies#cont-a-env-a[Container {GrowthTopology}] +* link:{URLTopologies}/container-topologies#cont-b-env-a[Container {EnterpriseTopology}] + +| Operator +| Red Hat OpenShift +| The Operator uses Red Hat OpenShift Operators to deploy {PlatformNameShort} within Red Hat OpenShift. Customers manage the product and infrastructure lifecycle. +a| +* link:{URLTopologies}/ocp-topologies#ocp-a-env-a[Operator {GrowthTopology}] +* link:{URLTopologies}/ocp-topologies#ocp-b-env-a[Operator {EnterpriseTopology}] +|==== diff --git a/downstream/modules/topologies/ref-mesh-nodes.adoc b/downstream/modules/topologies/ref-mesh-nodes.adoc new file mode 100644 index 0000000000..0e3b47b64a --- /dev/null +++ b/downstream/modules/topologies/ref-mesh-nodes.adoc @@ -0,0 +1,20 @@ +:_mod-docs-content-type: REFERENCE +[id="mesh-nodes"] += {AutomationMeshStart} nodes + +{AutomationMeshStart} is an overlay network intended to ease the distribution of work across a large and dispersed collection of workers. This is done through nodes that establish peer-to-peer connections with each other by using existing networks. + +== Tested system configurations +Each {AutomationMesh} VM has been tested with the following component requirements: 16 GB RAM, 4 CPUs, 60 GB local disk, and 3000 IOPS. + +== Network ports +{AutomationMeshStart} uses several ports to communicate with its services. These ports must be open and available for incoming connections to the {PlatformName} server for it to work. Ensure that these ports are available and are not blocked by the server firewall. + +.Network ports and protocols +[options="header"] +|==== +| Port number | Protocol | Service | Source | Destination +| 80/443 | HTTP/HTTPS | Receptor | Execution node | {OCPShort} mesh ingress +| 80/443 | HTTP/HTTPS | Receptor | Hop node | {OCPShort} mesh ingress +| 27199 | TCP | Receptor | {OCPShort} cluster | Execution node +| 27199 | TCP | Receptor | {OCPShort} cluster | Hop node +|==== diff --git a/downstream/modules/topologies/ref-ocp-a-env-a.adoc b/downstream/modules/topologies/ref-ocp-a-env-a.adoc new file mode 100644 index 0000000000..56139bb13c --- /dev/null +++ b/downstream/modules/topologies/ref-ocp-a-env-a.adoc @@ -0,0 +1,103 @@ +:_mod-docs-content-type: REFERENCE +[id="ocp-a-env-a"] += Operator {GrowthTopology} + +include::snippets/growth-topologies.adoc[] + +== Infrastructure topology +The following diagram outlines the infrastructure topology that Red{nbsp}Hat has tested with this deployment model that customers can use when self-managing {PlatformNameShort}: + +.Infrastructure topology diagram +image::ocp-a-env-a.png[Operator {GrowthTopology} diagram] + +A Single Node OpenShift (SNO) cluster has been tested with the following requirements: 32 GB RAM, 16 CPUs, 128 GB local disk, and 3000 IOPS. + +.Infrastructure topology +[options="header"] +|==== +| Count | Component +| 1 | {ControllerNameStart} web pod +| 1 | {ControllerNameStart} task pod +| 1 | {HubNameStart} API pod +| 2 | {HubNameStart} content pod +| 2 | {HubNameStart} worker pod +| 1 | {HubNameStart} Redis pod +| 1 | {EDAName} API pod +| 1 | {EDAName} activation worker pod +| 1 | {EDAName} default worker pod +| 1 | {EDAName} event stream pod +| 1 | {EDAName} scheduler pod +| 1 | {GatewayStart} pod +| 1 | Database pod +| 1 | Redis pod +|==== + +[NOTE] +==== +You can deploy multiple isolated instances of {PlatformNameShort} into the same {OCP} cluster by using a namespace-scoped deployment model.
+This approach allows you to use the same cluster for several deployments. +==== + +== Tested system configurations + +Red{nbsp}Hat has tested the following configurations to install and run {PlatformName}: + +.Tested system configurations +[options="header"] +|==== +| Type | Description +| Subscription | Valid {PlatformName} subscription +| Operating system | {RHEL} 9.2 or later minor versions of {RHEL} 9 +| CPU architecture | x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power) +| Red Hat OpenShift +a| +* Version: 4.14 +* num_of_control_nodes: 1 +* num_of_worker_nodes: 1 +| Ansible-core | Ansible-core version {CoreUseVers} or later +| Browser | A currently supported version of Mozilla Firefox or Google Chrome. +| Database | {PostgresVers} +|==== + +== Example custom resource file + +Use the following example custom resource (CR) to add your {PlatformNameShort} instance to your project: + +---- +apiVersion: aap.ansible.com/v1alpha1 +kind: AnsibleAutomationPlatform +metadata: + name: +spec: + eda: + automation_server_ssl_verify: 'no' + hub: + storage_type: 's3' + object_storage_s3_secret: '' +---- + +== Nonfunctional requirements + +{PlatformNameShort}'s performance characteristics and capacity are impacted by its resource allocation and configuration. With OpenShift, each {PlatformNameShort} component is deployed as a pod. You can specify resource requests and limits for each pod. + +Use the {PlatformNameShort} Custom Resource (CR) to configure resource allocation for OpenShift installations. Each configurable item has default settings. These settings are the minimum requirements for an installation, but might not meet your production workload needs. + +By default, each component's deployments are set for minimum resource requests but no resource limits. OpenShift only schedules the pods with available resource requests, but the pods are allowed to consume unlimited RAM or CPU provided that the OpenShift worker node itself is not under node pressure. + +In the Operator {GrowthTopology}, {PlatformNameShort} is deployed on a Single Node OpenShift (SNO) with 32 GB RAM, 16 CPUs, 128 GB Local disk, and 3000 IOPS. This is not a shared environment, so {PlatformNameShort} pods have full access to all of the compute resources of the OpenShift SNO. In this scenario, the capacity calculation for the {ControllerName} task pods is derived from the underlying {OCPShort} node that runs the pod. It does not have access to the entire node. This capacity calculation influences how many concurrent jobs {ControllerName} can run. + +OpenShift manages storage distinctly from VMs. This impacts how {HubName} stores its artifacts. In the Operator {GrowthTopology}, we use S3 storage because {HubName} requires a `ReadWriteMany` type storage, which is not a default storage type in OpenShift. + +== Network ports + +{PlatformName} uses several ports to communicate with its services. These ports must be open and available for incoming connections to the {PlatformName} server for it to work. Ensure that these ports are available and are not blocked by the server firewall. 
.Network ports and protocols +[options="header"] +|==== +| Port number | Protocol | Service | Source | Destination +| 80/443 | HTTP/HTTPS | Receptor | Execution node | {OCPShort} ingress +| 80/443 | HTTP/HTTPS | Receptor | Hop node | {OCPShort} ingress +| 80/443 | HTTP/HTTPS | Platform | Customer clients | {OCPShort} ingress +| 27199 | TCP | Receptor | {OCPShort} cluster | Execution node +| 27199 | TCP | Receptor | {OCPShort} cluster | Hop node +|==== diff --git a/downstream/modules/topologies/ref-ocp-b-env-a.adoc b/downstream/modules/topologies/ref-ocp-b-env-a.adoc new file mode 100644 index 0000000000..3e44787345 --- /dev/null +++ b/downstream/modules/topologies/ref-ocp-b-env-a.adoc @@ -0,0 +1,116 @@ +:_mod-docs-content-type: REFERENCE +[id="ocp-b-env-a"] += Operator {EnterpriseTopology} + +include::snippets/enterprise-topologies.adoc[] + +== Infrastructure topology + +The following diagram outlines the infrastructure topology that Red{nbsp}Hat has tested with this deployment model that customers can use when self-managing {PlatformNameShort}: + +.Infrastructure topology diagram +image::ocp-b-env-a.png[Operator {EnterpriseTopology} diagram] + +The following infrastructure topology describes an OpenShift cluster with 3 primary nodes and 2 worker nodes. + +Each OpenShift worker node has been tested with the following component requirements: 16 GB RAM, 4 CPUs, 128 GB local disk, and 3000 IOPS. + +.Infrastructure topology +[options="header"] +|==== +| Count | Component +| 1 | {ControllerNameStart} web pod +| 1 | {ControllerNameStart} task pod +| 1 | {HubNameStart} API pod +| 2 | {HubNameStart} content pod +| 2 | {HubNameStart} worker pod +| 1 | {HubNameStart} Redis pod +| 1 | {EDAName} API pod +| 2 | {EDAName} activation worker pod +| 2 | {EDAName} default worker pod +| 2 | {EDAName} event stream pod +| 1 | {EDAName} scheduler pod +| 1 | {GatewayStart} pod +| 2 | Mesh ingress pod +| N/A | Externally managed database service +| N/A | Externally managed Redis +| N/A | Externally managed object storage service (for {HubName}) +|==== + +== Tested system configurations + +Red{nbsp}Hat has tested the following configurations to install and run {PlatformName}: + +.Tested system configurations +[options="header"] +|==== +| Type | Description +| Subscription | Valid {PlatformName} subscription +| Operating system | {RHEL} 9.2 or later minor versions of {RHEL} 9 +| CPU architecture | x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power) +| Red Hat OpenShift +a| +* Red Hat OpenShift on AWS Hosted Control Planes 4.15.16 +** 2 worker nodes in different availability zones (AZs) at t3.xlarge +| Ansible-core | Ansible-core version {CoreUseVers} or later +| Browser | A currently supported version of Mozilla Firefox or Google Chrome.
+| AWS RDS PostgreSQL service +a| +* engine: "postgres" +* engine_version: "15" +* parameter_group_name: "default.postgres15" +* allocated_storage: 20 +* max_allocated_storage: 1000 +* storage_type: "gp2" +* storage_encrypted: true +* instance_class: "db.t4g.small" +* multi_az: true +* backup_retention_period: 5 +* database: must have ICU support +| AWS ElastiCache (Redis) service +a| +* engine: "redis" +* engine_version: "6.2" +* auto_minor_version_upgrade: "false" +* node_type: "cache.t3.micro" +* parameter_group_name: "default.redis6.x.cluster.on" +* transit_encryption_enabled: "true" +* num_node_groups: 2 +* replicas_per_node_group: 1 +* automatic_failover_enabled: true +| S3 storage | HTTPS only, accessible through an AWS role assigned to the {HubName} service account at runtime by using AWS Pod Identity +|==== + +== Example custom resource file + +For example CR files for this topology, see the link:https://github.com/ansible/test-topologies/blob/aap-2.5/ocp-b.env-a/README.md[ocp-b.env-a] directory in the `test-topologies` GitHub repository. + +== Nonfunctional requirements + +{PlatformNameShort}'s performance characteristics and capacity are impacted by its resource allocation and configuration. With OpenShift, each {PlatformNameShort} component is deployed as a pod. You can specify resource requests and limits for each pod. + +Use the {PlatformNameShort} custom resource to configure resource allocation for OpenShift installations. Each configurable item has default settings. These settings are the exact configuration used within the context of this reference deployment architecture and presume that the environment is deployed and managed by an enterprise IT organization for production purposes. + +By default, each component's deployments are set for minimum resource requests but no resource limits. OpenShift only schedules the pods with available resource requests, but the pods are allowed to consume unlimited RAM or CPU provided that the OpenShift worker node itself is not under node pressure. + +In the Operator {EnterpriseTopology}, {PlatformNameShort} is deployed on a Red Hat OpenShift on AWS (ROSA) Hosted Control Plane (HCP) cluster with 2 t3.xlarge worker nodes spread across 2 AZs within a single AWS Region. This is not a shared environment, so {PlatformNameShort} pods have full access to all of the compute resources of the ROSA HCP cluster. In this scenario, the capacity calculation for the {ControllerName} task pods is derived from the underlying HCP worker node that runs the pod. It does not have access to the CPU or memory resources of the entire node. This capacity calculation influences how many concurrent jobs {ControllerName} can run. + +OpenShift manages storage distinctly from VMs. This impacts how {HubName} stores its artifacts. In the Operator {EnterpriseTopology}, we use S3 storage because {HubName} requires a `ReadWriteMany` type storage, which is not a default storage type in OpenShift. Externally provided Redis, PostgreSQL, and object storage for {HubName} are specified. This provides the {PlatformNameShort} deployment with additional scalability and reliability features, including specialized backup, restore, and replication services and scalable storage. + + +== Network ports + +{PlatformName} uses several ports to communicate with its services. These ports must be open and available for incoming connections to the {PlatformName} server for it to work. Ensure that these ports are available and are not blocked by the server firewall.
+ +.Network ports and protocols +[options="header"] +|==== +| Port number | Protocol | Service | Source | Destination +| 80/443 | HTTP/HTTPS | Object storage | {OCPShort} cluster | External object storage service +| 80/443 | HTTP/HTTPS | Receptor | Execution node | {OCPShort} ingress +| 80/443 | HTTP/HTTPS | Receptor | Hop node | {OCPShort} ingress +| 5432 | TCP | PostgreSQL | {OCPShort} cluster | External database service +| 6379 | TCP | Redis | {OCPShort} cluster | External Redis service +| 27199 | TCP | Receptor | {OCPShort} cluster | Execution node +| 27199 | TCP | Receptor | {OCPShort} cluster | Hop node + +|==== diff --git a/downstream/modules/topologies/ref-rpm-a-env-a.adoc b/downstream/modules/topologies/ref-rpm-a-env-a.adoc new file mode 100644 index 0000000000..41a943dfc5 --- /dev/null +++ b/downstream/modules/topologies/ref-rpm-a-env-a.adoc @@ -0,0 +1,63 @@ +:_mod-docs-content-type: REFERENCE +[id="rpm-a-env-a"] += RPM {GrowthTopology} + +include::snippets/growth-topologies.adoc[] + +== Infrastructure topology +The following diagram outlines the infrastructure topology that Red{nbsp}Hat has tested with this deployment model that customers can use when self-managing {PlatformNameShort}: + +.Infrastructure topology diagram +image::rpm-a-env-a.png[RPM {GrowthTopology} diagram] + +Each VM has been tested with the following component requirements: + +include::snippets/rpm-tested-vm-config.adoc[] + +.Infrastructure topology +[options="header"] +|==== +| VM count | Purpose | Example VM group names +| 1 | {GatewayStart} with colocated Redis | `automationgateway` +| 1 | {ControllerNameStart} | `automationcontroller` +| 1 | {PrivateHubNameStart} | `automationhub` +| 1 | {EDAName} | `automationedacontroller` +| 1 | {AutomationMeshStart} execution node | `execution_nodes` +| 1 | {PlatformNameShort} managed database | `database` +|==== + +== Tested system configurations + +Red{nbsp}Hat has tested the following configurations to install and run {PlatformName}: + +include::snippets/rpm-env-a-tested-system-config.adoc[] + +== Network ports + +{PlatformName} uses several ports to communicate with its services. These ports must be open and available for incoming connections to the {PlatformName} server for it to work. Ensure that these ports are available and are not blocked by the server firewall. 
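+
+For example, on {RHEL} hosts you can open a required port with a short playbook such as the following sketch. It assumes that the `ansible.posix` collection is installed and that `firewalld` is running on the hosts; the `execution_nodes` group name matches the example VM group names in the infrastructure topology table:
+
+----
+- name: Open the Receptor port on execution nodes
+  hosts: execution_nodes
+  become: true
+  tasks:
+    - name: Allow inbound TCP connections on port 27199
+      ansible.posix.firewalld:
+        port: 27199/tcp
+        permanent: true
+        immediate: true
+        state: enabled
+----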
+ +.Network ports and protocols +[options="header"] +|==== +| Port number | Protocol | Service | Source | Destination +| 80/443 | TCP | HTTP/HTTPS | {EDAName} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {EDAName} | {ControllerNameStart} +| 80/443 | TCP | HTTP/HTTPS | {ControllerNameStart} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {ControllerNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {EDAName} +| 5432 | TCP | PostgreSQL | {EDAName} | Database +| 5432 | TCP | PostgreSQL | {GatewayStart} | Database +| 5432 | TCP | PostgreSQL | {HubNameStart} | Database +| 5432 | TCP | PostgreSQL | {ControllerNameStart} | Database +| 6379 | TCP | Redis | {EDAName} | Redis node +| 6379 | TCP | Redis | {GatewayStart} | Redis node +| 8443 | TCP | HTTPS | {GatewayStart} | {GatewayStart} +| 27199 | TCP | Receptor | {ControllerNameStart} | Execution node +//| 50051 | TCP | gRPC | {GatewayStart} | {GatewayStart} +|==== + +== Example inventory file +Use the example inventory file to perform an installation for this topology: + +include::snippets/inventory-rpm-a-env-a.adoc[] diff --git a/downstream/modules/topologies/ref-rpm-b-env-a.adoc b/downstream/modules/topologies/ref-rpm-b-env-a.adoc new file mode 100644 index 0000000000..f0dfbe22ea --- /dev/null +++ b/downstream/modules/topologies/ref-rpm-b-env-a.adoc @@ -0,0 +1,73 @@ +:_mod-docs-content-type: REFERENCE +[id="rpm-b-env-a"] += RPM {EnterpriseTopology} + +include::snippets/enterprise-topologies.adoc[] + +== Infrastructure topology +The following diagram outlines the infrastructure topology that Red{nbsp}Hat has tested with this deployment model that customers can use when self-managing {PlatformNameShort}: + +.Infrastructure topology diagram +image::rpm-b-env-a.png[RPM {EnterpriseTopology} diagram] + +Each VM has been tested with the following component requirements: + +include::snippets/rpm-tested-vm-config.adoc[] + +.Infrastructure topology +[options="header"] +|==== +| VM count | Purpose | Example VM group names +| 2 | {GatewayStart} with colocated Redis | `automationgateway` +| 2 | {ControllerNameStart} | `automationcontroller` +| 2 | {PrivateHubNameStart} with colocated Redis | `automationhub` +| 2 | {EDAName} with colocated Redis | `automationedacontroller` +| 1 | {AutomationMeshStart} hop node | `execution_nodes` +| 2 | {AutomationMeshStart} execution node | `execution_nodes` +| 1 | Externally managed database service | N/A +| 1 | HAProxy load balancer in front of {Gateway} (externally managed) | N/A +|==== + +[NOTE] +==== +6 VMs are required for a Redis high availability (HA) compatible deployment. Redis can be colocated on each {PlatformNameShort} component VM except for {ControllerName}, execution nodes, or the PostgreSQL database. +==== + +== Tested system configurations + +Red{nbsp}Hat has tested the following configurations to install and run {PlatformName}: + +include::snippets/rpm-env-a-tested-system-config.adoc[] + +== Network ports + +{PlatformName} uses several ports to communicate with its services. These ports must be open and available for incoming connections to the {PlatformName} server for it to work. Ensure that these ports are available and are not blocked by the server firewall. 
+ +.Network ports and protocols +[options="header"] +|==== +| Port number | Protocol | Service | Source | Destination +| 80/443 | TCP | HTTP/HTTPS | {EDAName} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {EDAName} | {ControllerNameStart} +| 80/443 | TCP | HTTP/HTTPS | {ControllerNameStart} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | HAProxy load balancer | {GatewayStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {ControllerNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {HubNameStart} +| 80/443 | TCP | HTTP/HTTPS | {GatewayStart} | {EDAName} +| 5432 | TCP | PostgreSQL | {EDAName} | External database +| 5432 | TCP | PostgreSQL | {GatewayStart} | External database +| 5432 | TCP | PostgreSQL | {HubNameStart} | External database +| 5432 | TCP | PostgreSQL | {ControllerNameStart} | External database +| 6379 | TCP | Redis | {EDAName} | Redis node +| 6379 | TCP | Redis | {GatewayStart} | Redis node +| 8443 | TCP | HTTPS | {GatewayStart} | {GatewayStart} +| 16379 | TCP | Redis | Redis node | Redis node +| 27199 | TCP | Receptor | {ControllerNameStart} | Hop node and execution node +| 27199 | TCP | Receptor | Hop node | Execution node +//| 50051 | TCP | gRPC | {GatewayStart} | {GatewayStart} +|==== + +== Example inventory file +Use the example inventory file to perform an installation for this topology: + +include::snippets/inventory-rpm-b-env-a.adoc[] diff --git a/downstream/modules/topologies/snippets b/downstream/modules/topologies/snippets new file mode 120000 index 0000000000..7bf6da9a51 --- /dev/null +++ b/downstream/modules/topologies/snippets @@ -0,0 +1 @@ +../../snippets \ No newline at end of file diff --git a/downstream/modules/troubleshooting-aap/proc-troubleshoot-aap-packages.adoc b/downstream/modules/troubleshooting-aap/proc-troubleshoot-aap-packages.adoc index 4857536867..a2d21f4a5f 100644 --- a/downstream/modules/troubleshooting-aap/proc-troubleshoot-aap-packages.adoc +++ b/downstream/modules/troubleshooting-aap/proc-troubleshoot-aap-packages.adoc @@ -3,4 +3,4 @@ You cannot locate certain packages that come bundled with the {PlatformNameShort} installer, or you are seeing a "Repositories disabled by configuration" message. -To resolve this issue, enable the repository by using the `subscription-manager` command in the command line. For more information about resolving this issue, see the _Troubleshooting_ section of link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/proc-attaching-subscriptions_planning[Attaching your {PlatformName} subscription] in the {PlatformName} Planning Guide. \ No newline at end of file +To resolve this issue, enable the repository by using the `subscription-manager` command in the command line. For more information about resolving this issue, see the _Troubleshooting_ section of link:{URLCentralAuth}/assembly-gateway-licensing#proc-attaching-subscriptions[Attaching your {PlatformName} subscription] in _{TitleCentralAuth}_. \ No newline at end of file diff --git a/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-pending.adoc b/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-pending.adoc index a949e9a010..e393e55b46 100644 --- a/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-pending.adoc +++ b/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-pending.adoc @@ -3,7 +3,7 @@ After launching jobs in {ControllerName}, the jobs stay in a pending state and do not start. 
-There are a few reasons jobs can become stuck in a pending state. For more information about troubleshooting this issue, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_administration_guide/index#controller-playbook-pending[Playbook stays in pending] in the Automation Controller Administration Guide.
+There are a few reasons jobs can become stuck in a pending state. For more information about troubleshooting this issue, see link:{URLControllerAdminGuide}/controller-troubleshooting#controller-playbook-pending[Playbook stays in pending] in _{TitleControllerAdminGuide}_.

*Cancel all pending jobs*

diff --git a/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-permissions.adoc b/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-permissions.adoc
index 9cf73e5c63..a851de36b1 100644
--- a/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-permissions.adoc
+++ b/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-permissions.adoc
@@ -1,3 +1,4 @@
+:_mod-docs-content-type: PROCEDURE
[id="troubleshoot-job-permissions"]
= Issue - Jobs in {PrivateHubName} are failing with "denied: requested access to the resource is denied, unauthorized: Insufficient permissions" error message
@@ -16,5 +17,4 @@ This issue happens when your {PrivateHubName} is protected with a password or to
[role="_additional-resources"]
.Additional resources
-* For information about creating new credentials in {ControllerName}, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/automation_controller_user_guide/index#controller-getting-started-create-credential[Creating new credentials] in the Automation Controller User Guide.
-
+* link:{URLControllerUserGuide}/controller-credentials#controller-create-credential[Creating new credentials]
diff --git a/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-resolve-module.adoc b/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-resolve-module.adoc
index 95b8ed7d30..13f6316387 100644
--- a/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-resolve-module.adoc
+++ b/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-resolve-module.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="troubleshoot-job-resolve-module"]
= Issue - Jobs are failing with “ERROR! couldn’t resolve module/action” error message
@@ -5,7 +7,7 @@ Jobs are failing with the error message “ERROR! couldn't resolve module/action
This error can happen when the collection associated with the module is missing from the {ExecEnvShort}.
-The recommended resolution is to create a custom {ExecEnvShort} and add the required collections inside of that {ExecEnvShort}. For more information about creating an {ExecEnvShort}, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/creating_and_consuming_execution_environments/assembly-using-builder[Using {Builder}] in Creating and Consuming Execution Environments.
+The recommended resolution is to create a custom {ExecEnvShort} and add the required collections in that {ExecEnvShort}. For more information about creating an {ExecEnvShort}, see link:{URLBuilder}/assembly-using-builder[Using {Builder}] in _{TitleBuilder}_.
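+
+For example, a minimal {ExecEnvShort} definition file for {Builder} might look like the following sketch. The base image and the collection shown are placeholders; replace them with the image and collections that your jobs require:
+
+----
+version: 3
+images:
+  base_image:
+    name: registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel8:latest # Placeholder base image
+dependencies:
+  galaxy:
+    collections:
+      - community.general # Placeholder collection
+----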
Alternatively, you can complete the following steps:
@@ -21,4 +23,3 @@ collections:
- __
----
+
-
diff --git a/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-timeout.adoc b/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-timeout.adoc
index 8a038d1d28..e8a6d7a3d9 100644
--- a/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-timeout.adoc
+++ b/downstream/modules/troubleshooting-aap/proc-troubleshoot-job-timeout.adoc
@@ -1,9 +1,11 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="troubleshoot-job-timeout"]
= Issue - Jobs are failing with “Timeout (12s) waiting for privilege escalation prompt” error message

This error can happen when the timeout value is too small, causing the job to stop before completion. The default timeout value for connection plugins is `10`.

-To resolve the issue, increase the timeout value by completing one of the following procedures.
+To resolve the issue, increase the timeout value by using one of the following methods.

[NOTE]
====
@@ -49,6 +51,4 @@ timeout = 60

[role="_additional-resources"]
.Additional resources
-* For more information about the `DEFAULT_TIMEOUT` configuration setting, see link:https://docs.ansible.com/ansible/latest/reference_appendices/config.html#default-timeout[DEFAULT_TIMEOUT] in the Ansible Community Documentation.
-
-
+* link:https://docs.ansible.com/ansible/latest/reference_appendices/config.html#default-timeout[DEFAULT_TIMEOUT]
diff --git a/downstream/modules/troubleshooting-aap/proc-troubleshoot-must-gather.adoc b/downstream/modules/troubleshooting-aap/proc-troubleshoot-must-gather.adoc
index 71be3dcfcb..1f6ab25d0d 100644
--- a/downstream/modules/troubleshooting-aap/proc-troubleshoot-must-gather.adoc
+++ b/downstream/modules/troubleshooting-aap/proc-troubleshoot-must-gather.adoc
@@ -1,3 +1,5 @@
+:_mod-docs-content-type: PROCEDURE
+
[id="troubleshoot-must-gather"]
= Troubleshooting {PlatformNameShort} on {OCPShort} by using the must-gather command
@@ -27,7 +29,7 @@ oc login __
+
[subs="+quotes"]
----
-oc adm must-gather --image=registry.redhat.io/ansible-automation-platform-24/aap-must-gather-rhel8 --dest-dir __
+oc adm must-gather --image=registry.redhat.io/ansible-automation-platform-25/aap-must-gather-rhel8 --dest-dir __
----
+
** `--image` specifies the image that gathers data
@@ -37,7 +39,7 @@
+
[subs="+quotes"]
----
-oc adm must-gather --image=registry.redhat.io/ansible-automation-platform-24/aap-must-gather-rhel8 --dest-dir __ – /usr/bin/ns-gather __
+oc adm must-gather --image=registry.redhat.io/ansible-automation-platform-25/aap-must-gather-rhel8 --dest-dir __ -- /usr/bin/ns-gather __
----
+
** `-- /usr/bin/ns-gather` limits the `must-gather` data collection to a specified namespace
+
. Create a compressed file from the must-gather directory
+
[subs="+quotes"]
----
$ tar cvaf must-gather.tar.gz __
----

[role="_additional-resources"]
.Additional resources
-* For information about installing the OpenShift CLI (`oc`), see link:https://docs.openshift.com/container-platform/{OCPLatest}/cli_reference/openshift_cli/getting-started-cli.html[Installing the OpenShift CLI] in the {OCPShort} Documentation.
+* link:https://docs.openshift.com/container-platform/{OCPLatest}/cli_reference/openshift_cli/getting-started-cli.html[Installing the OpenShift CLI]

-* For information about running the `oc adm inspect` command, see the link:https://docs.openshift.com/container-platform/{OCPLatest}/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect[ocm adm inspect] section in the {OCPShort} Documentation.
+* link:https://docs.openshift.com/container-platform/{OCPLatest}/cli_reference/openshift_cli/administrator-cli-commands.html#oc-adm-inspect[oc adm inspect]
diff --git a/downstream/modules/troubleshooting-aap/proc-troubleshoot-ssl-tls-issues.adoc b/downstream/modules/troubleshooting-aap/proc-troubleshoot-ssl-tls-issues.adoc
new file mode 100644
index 0000000000..8d64308bbf
--- /dev/null
+++ b/downstream/modules/troubleshooting-aap/proc-troubleshoot-ssl-tls-issues.adoc
@@ -0,0 +1,42 @@
+:_mod-docs-content-type: PROCEDURE
+
+[id="troubleshooting-ssl-tls-issues"]
+
+= Troubleshooting SSL/TLS issues
+
+To troubleshoot issues with SSL/TLS, verify the certificate chain, use the correct certificates, and confirm that a trusted Certificate Authority (CA) signed the certificate.
+
+.Procedure
+
+. Check whether the server is reachable over SSL/TLS.
+.. Run the following command to confirm whether the server is reachable over SSL/TLS and to see the full certificate chain:
++
+----
+# true | openssl s_client -showcerts -connect <hostname>:<port>
+----
++
+.. Replace `<hostname>` and `<port>` with suitable values.
+. Verify the certificate details.
+.. Run the following command to view the details of a certificate:
++
+----
+# openssl x509 -in <certificate_file> -noout -text
+----
++
+.. Replace `<certificate_file>` with the path to the certificate file that you want to inspect.
++
+The result of the command shows information such as:
+
+* Subject - The entity the certificate has been issued to.
+* Issuer - The CA that issued the certificate.
+* Validity "Not Before" - The date the certificate was issued.
+* Validity "Not After" - The date the certificate expires.
++
+. Verify that a trusted CA signed the certificate.
+.. Run the following command to verify that a specific certificate is valid and was signed by a trusted CA:
++
+----
+# openssl verify -CAfile <CA_bundle_file> <certificate_file>
+----
++
+.. If the command returns `OK`, the certificate is valid and signed by a trusted CA.
diff --git a/downstream/modules/troubleshooting-aap/proc-troubleshoot-upgrade-issues.adoc b/downstream/modules/troubleshooting-aap/proc-troubleshoot-upgrade-issues.adoc
new file mode 100644
index 0000000000..ad33828ab3
--- /dev/null
+++ b/downstream/modules/troubleshooting-aap/proc-troubleshoot-upgrade-issues.adoc
@@ -0,0 +1,18 @@
+[id="troubleshoot-upgrade-issues"]
+= Issue - When upgrading from {PlatformNameShort} 2.4 to {PlatformVers}, connections to the {ControllerName} API fail if the {ControllerName} is behind a load balancer
+
+When upgrading from {PlatformNameShort} 2.4 to {PlatformVers}, the upgrade completes; however, connections to the {ControllerName} API fail in the {Gateway} UI if you are using the {ControllerName} behind a load balancer. The following error message is displayed:
+
+`Error connecting to Controller API`
+
+To resolve this issue, perform the following tasks for all controller hosts:
+
+. For each controller host, add the {Gateway} URL as a trusted source in the `CSRF_TRUSTED_ORIGINS` setting in the *settings.py* file.
++ +For example, if you configured the {Gateway} URL as `https://www.example.com`, you must add that URL in the *settings.py* file too as shown below: ++ +---- +CSRF_TRUSTED_ORIGINS = ['https://appX.example.com:8443','https://www.example.com'] +---- + +. Restart each controller host by using the `automation-controller-service restart` command so that the URL changes are implemented. For the procedure, see link:{URLControllerAdminGuide}/controller-start-stop-controller[Start, stop, and restart {ControllerName}] in _{TitleControllerAdminGuide}_. \ No newline at end of file diff --git a/downstream/quick-start-yamls/Ansible-lightspeed.yaml b/downstream/quick-start-yamls/Ansible-lightspeed.yaml new file mode 100644 index 0000000000..d5b5ced927 --- /dev/null +++ b/downstream/quick-start-yamls/Ansible-lightspeed.yaml @@ -0,0 +1,38 @@ +metadata: + name: ansible-lightspeed + # you can add additional metadata here + instructional: true +spec: + displayName: Setting up Ansible Lightspeed + durationMinutes: 5 + # Optional type section, will display as a tile on the card + type: + text: Gateway + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU
5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + + description: |- + Set up Ansible Lightspeed with IBM watsonx Code Assistant + introduction: |- + Red Hat Ansible Lightspeed with IBM watsonx Code Assistant is a generative AI service that helps automation teams create, adopt, and maintain Ansible content more efficiently. + It uses natural language prompts to generate code recommendations for automation tasks based on Ansible best practices. + + tasks: + - title: Set up Lightspeed + description: |- + ## To set up Lightspeed: + + For more information about setting up Lightspeed, see the [Red Hat Ansible Lightspeed with IBM watsonx Code Assistant User Guide](https://access.redhat.com/documentation/en-us/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_user_guide/index). + + conclusion: You successfully completed the set up of Ansible Lightspeed! If you + want to learn how to set up automation mesh, take the **Automation mesh** quick start. 
+ + nextQuickStart: [automation-mesh] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Automation Developer/Getting-started-with-AAP-Automation-Developer.yaml b/downstream/quick-start-yamls/Automation Developer/Getting-started-with-AAP-Automation-Developer.yaml new file mode 100644 index 0000000000..06c8c728a4 --- /dev/null +++ b/downstream/quick-start-yamls/Automation Developer/Getting-started-with-AAP-Automation-Developer.yaml @@ -0,0 +1,157 @@ +metadata: + name: getting started with Ansible Automation Platform + # you can add additional metadata here + instructional: true +spec: + displayName: Getting started with Ansible Automation Platform + durationMinutes: 15 + # Optional type section, will display as a tile on the card + type: + text: Automation developer + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4L
DAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + prerequisites: + - You have completed the Ansible Automation Platform installation. + - You have a valid Ansible Automation Platform subscription. + - You have created a [project folder](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/getting_started_with_playbooks/index#proc-starting-automation) on your filesystem. + description: |- + Learn how to get started with Ansible Automation Platform. + introduction: |- + Get started with Ansible Automation Platform as an automation developer. + As an automation developer, review [Developing automation content](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/developing_automation_content/index) before beginning your Ansible development project. + tasks: + - title: Download and install tools + description: |- + ##To download and install tools + + - For more information about Red Hat Ansible Automation Platform tools and components you will use in creating automation execution environments, see [Ansible development tools](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/developing_automation_content/index#devtools-intro). + + - For more information about installing tools, see [Installing Ansible development tools](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/developing_automation_content/index#installing-devtools). + + - title: Create a playbook + description: |- + ##To create a playbook: + + Ansible playbooks are blueprints that tell Ansible Automation Platform what tasks to perform with which devices. 
You can use a playbook to define the automation tasks that you want the platform to run.
+
+        For more information, see [Playbook execution](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/getting_started_with_playbooks/index#ref-playbook-execution).
+
+        A playbook contains one or more plays. A basic play contains the following parameters:
+
+        - **Name**: a brief description of the overall function of the playbook, which assists in keeping it readable and organized for all users.
+        - **Hosts**: identifies the target or targets for Ansible to run against.
+        - **Become statements**: this optional statement can be set to `true` or `yes` to enable privilege escalation using a become plugin, such as `sudo`, `su`, `pfexec`, `doas`, `pbrun`, `dzdo`, or `ksu`.
+        - **Tasks**: the list of actions that are executed against each host in the play.
+
+        Here is an example of a play in a playbook. You can see the name of the play, the host, and the list of tasks included in the play:
+        ```
+        - name: Set Up a Project and Job Template
+          hosts: host.name.ip
+          become: true
+
+          tasks:
+            - name: Create a Project
+              ansible.controller.project:
+                name: Job Template Test Project
+                state: present
+                scm_type: git
+                scm_url: https://github.com/ansible/ansible-tower-samples.git
+
+            - name: Create a Job Template
+              ansible.controller.job_template:
+                name: my-job-1
+                project: Job Template Test Project
+                inventory: Demo Inventory
+                playbook: hello_world.yml
+                job_type: run
+                state: present
+        ```
+
+        For more information about playbooks, see [Getting Started with Ansible Playbooks.](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/getting_started_with_playbooks/index)
+        For help writing a playbook, see [Ansible Lightspeed.](https://developers.redhat.com/products/ansible/lightspeed?source=sso)
+      review:
+        instructions: |-
+          #### To verify that you've written a functional playbook:
+          Did your playbook execute correctly?
+        failedTaskHelp: Try the steps again or read more about this topic at [Run Your First Command and Playbook](https://docs.ansible.com/ansible/latest/network/getting_started/first_playbook.html).
+      summary:
+        success: You have viewed the details of your playbook!
+        failed:
+
+    - title: Create a role
+      description: |-
+        ##To create a role:
+
+        Roles are units of organization in the Ansible Automation Platform.
+        When you assign a role to a team or user, you are granting access to use, read, or write credentials.
+        Because of the file structure associated with a role, roles become redistributable units that enable you to share behavior among resources, or with other users.
+        All access that is granted to use, read, or write credentials is handled through roles, and roles are defined for a resource.
+
+        Roles are separated out by service through automation controller, Event-Driven Ansible, and automation hub.
+        For more information, see [Creating a role](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/access_management_and_authentication/assembly-gw-roles#proc-gw-create-roles).
+
+    - title: Upload a collection to automation hub
+      description: |-
+        ##To upload a collection to automation hub:
+
+        As a content creator, you can use namespaces in automation hub to curate and manage collections for the following purposes:
+
+        - Create groups with permissions to curate namespaces and upload collections to private automation hub.
+ - Add information and resources to the namespace to help end users of the collection in their automation tasks. + - Upload collections to the namespace. + - Review the namespace import logs to determine the success or failure of uploading the collection and its current approval status. + + For more information about collections, see [Managing collections in automation hub](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/managing_automation_content/index#managing-collections-hub). + + You can upload the collection by using either the automation hub user interface or the `ansible-galaxy` client. + + ###Prerequisites + + - You have configured the `ansible-galaxy` client for automation hub. + - You have at least one namespace. + - You have run all content through ansible-test sanity. + - You are a Red Hat Connect Partner. Learn more at [Red Hat Partner Connect](https://connect.redhat.com/). + + ###Procedure + + 1. From the navigation panel, select **Automation Content** > **Namespaces**. + 2. On the **My namespaces** tab, locate the namespace to which you want to upload a collection. + 3. Click **View collections** and click **Upload collection**. + 4. In the **New collection modal**, click **Select file**. Locate the file on your system. + 5. Click **Upload**. + 6. Using the ansible-galaxy client, enter the following command: + + ``` + ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz --api-key=SECRET + ``` + + review: + instructions: |- + #### To verify that you've uploaded a collection: + Does the collection appear in the **Collections** list? + failedTaskHelp: This task is not verified yet. Try the task again. + summary: + success: You have viewed the details of your collection! + failed: Try the steps again. + + - title: Add an inventory plugin (optional) + description: |- + ##To add an inventory plugin: + + Inventory updates use dynamically-generated YAML files which are parsed by their inventory plugin. + In automation controller v4.4, you can give the inventory plugin configuration directly to automation controller using the inventory source `source_vars`. + + For more information about adding inventory plugins, see [Inventory Plugins](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#ref-controller-inventory-plugins). + + conclusion: You successfully completed the getting started steps for Ansible Automation Platform! If you + want to learn how to find content, take the **Finding content in ansible automation platform** quick start. + + nextQuickStart: [finding-content-in-ansible-automation-platform] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Automation-Operator/Environments-automation-operator.yaml b/downstream/quick-start-yamls/Automation-Operator/Environments-automation-operator.yaml new file mode 100644 index 0000000000..8e323b0188 --- /dev/null +++ b/downstream/quick-start-yamls/Automation-Operator/Environments-automation-operator.yaml @@ -0,0 +1,60 @@ +metadata: + name: view-environment + # you can add additional metadata here + instructional: true +spec: + displayName: Environments + durationMinutes: 5 + # Optional type section, will display as a tile on the card + type: + text: Automation operator + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. 
Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PCEtLSBHZW5lcmF0ZWQgYnkgSWNvTW9vbi5pbyAtLT4KPHN2ZyB2ZXJzaW9uPSIxLjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgd2lkdGg9IjUxMiIgaGVpZ2h0PSI1MTIiIHZpZXdCb3g9IjAgMCA1MTIgNTEyIj4KPHRpdGxlPjwvdGl0bGU+CjxnIGlkPSJpY29tb29uLWlnbm9yZSI+CjwvZz4KPHBhdGggZD0iTTQ0OCA2NHY0MTZoLTMzNmMtMjYuNTEzIDAtNDgtMjEuNDktNDgtNDhzMjEuNDg3LTQ4IDQ4LTQ4aDMwNHYtMzg0aC0zMjBjLTM1LjE5OSAwLTY0IDI4LjgtNjQgNjR2Mzg0YzAgMzUuMiAyOC44MDEgNjQgNjQgNjRoMzg0di00NDhoLTMyeiI+PC9wYXRoPgo8cGF0aCBkPSJNMTEyLjAyOCA0MTZ2MGMtMC4wMDkgMC4wMDEtMC4wMTkgMC0wLjAyOCAwLTguODM2IDAtMTYgNy4xNjMtMTYgMTZzNy4xNjQgMTYgMTYgMTZjMC4wMDkgMCAwLjAxOS0wLjAwMSAwLjAyOC0wLjAwMXYwLjAwMWgzMDMuOTQ1di0zMmgtMzAzLjk0NXoiPjwvcGF0aD4KPC9zdmc+Cg== + + description: |- + Viewing execution and decision environments. + introduction: |- + View your execution and decision environments, and their details. + + Platform administrators and automation developers have the permissions to create environments. + As an automation operator you can view environments and their details. + + tasks: + - title: View an execution environment + description: |- + ##To view an execution environment: + + Automation execution environments create a common language for communicating automation dependencies, and offer a standard way to build and distribute the automation environment. + + 1. From the navigation panel, select **Automation Execution** > **Infrastructure** > **Execution Environments**. + 2. Click an execution environment to view its details. + As part of the initial setup, a **Control Plane Execution Environment**, a **Default execution environment**, and a **Minimal execution environment** are created to help you get started. + + For more information, see [Execution environments](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#assembly-controller-execution-environments). + + - title: View a decision environment + description: |- + ##To view a decision environment: + + Decision environments are a container image to run Ansible rulebooks. + They create a common language for communicating automation dependencies, and give a standard way to build and distribute the automation environment. + The default decision environment is found in the Ansible-Rulebook. + + 1. From the navigation panel, select **Automation Decisions** > **Decision Environments**. + 2. Click a decision environment to view its details. + As part of the initial setup a **Default Decision Environment** is created. + + For more information, see [Decision environments](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html/using_automation_decisions/eda-decision-environments#eda-build-a-custom-decision-environment). + + conclusion: You successfully completed the viewing an environment steps! If you + want to learn how to execute an inventory, take the **Inventories** quick start. 
+ + nextQuickStart: [execute-an-inventory] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Automation-Operator/Getting-started-with-AAP-automation-operator.yaml b/downstream/quick-start-yamls/Automation-Operator/Getting-started-with-AAP-automation-operator.yaml new file mode 100644 index 0000000000..165b398a69 --- /dev/null +++ b/downstream/quick-start-yamls/Automation-Operator/Getting-started-with-AAP-automation-operator.yaml @@ -0,0 +1,180 @@ +metadata: + name: getting started with Ansible Automation Platform + # you can add additional metadata here + instructional: true +spec: + displayName: Getting started with Ansible Automation Platform + durationMinutes: 20 + # Optional type section, will display as a tile on the card + type: + text: Automation operator + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDA
sLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4=
+  prerequisites:
+    - You have a valid Ansible Automation Platform subscription.
+  description: |-
+    Learn how to get started with Ansible Automation Platform.
+  introduction: |-
+    Get started with Ansible Automation Platform as an automation operator.
+
+  tasks:
+
+    - title: Get started with playbooks
+      description: |-
+        ##To get started with playbooks:
+
+        A playbook runs in order from top to bottom.
+        Within each play, tasks also run in order from top to bottom.
+        Playbooks with multiple "plays" can orchestrate multi-machine deployments, running one play on your webservers, then another play on your database servers, then a third play on your network infrastructure.
+
+        For more information, see [Playbook execution](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/getting_started_with_playbooks/index#ref-playbook-execution).
+
+    - title: Write a playbook
+      description: |-
+        ##To write a playbook:
+
+        Create a playbook that pings your hosts and prints a "Hello world" message:
+
+        1. Create a file named `playbook.yaml` in your `ansible_quickstart` directory, with the following content:
+
+        ```
+        - name: My first play
+          hosts: myhosts
+          tasks:
+            - name: Ping my hosts
+              ansible.builtin.ping:
+
+            - name: Print message
+              ansible.builtin.debug:
+                msg: Hello world
+        ```
+
+        2. 
Run your playbook: + + ``` + ansible-playbook -i inventory.ini playbook.yaml + ``` + + - Ansible returns the following output: + ``` + PLAY [My first play] **************************************************************************** + + TASK [Gathering Facts] ************************************************************************** + ok: [192.0.2.50] + ok: [192.0.2.51] + ok: [192.0.2.52] + + TASK [Ping my hosts] **************************************************************************** + ok: [192.0.2.50] + ok: [192.0.2.51] + ok: [192.0.2.52] + + TASK [Print message] **************************************************************************** + ok: [192.0.2.50] => { + "msg": "Hello world" + } + ok: [192.0.2.51] => { + "msg": "Hello world" + } + ok: [192.0.2.52] => { + "msg": "Hello world" + } + + PLAY RECAP ************************************************************************************** + 192.0.2.50: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + 192.0.2.51: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + 192.0.2.52: ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 + ``` + + For more information about playbooks, see [Getting started with Ansible Playbooks.](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/getting_started_with_ansible_playbooks/index#doc-wrapper) + If you need help writing a playbook, see [Ansible Lightspeed.](https://developers.redhat.com/products/ansible/lightspeed?source=sso) + review: + instructions: |- + #### To verify that you've written a playbook: + Did your playbook execute correctly? + failedTaskHelp: Try the steps again or read more about this topic at [Run Your First Command and Playbook](https://docs.ansible.com/ansible/latest/network/getting_started/first_playbook.html). + summary: + success: You have viewed the details of your playbook! + failed: + + - title: Roles + description: |- + ##To learn about roles: + + Roles are units of organization in the Ansible Automation Platform. + When you assign a role to a team or user, you are granting access to use, read, or write credentials. + Because of the file structure associated with a role, roles become redistributable units that enable you to share behavior among resources, or with other users. All access that is granted to use, read, or write credentials is handled through roles, and roles are defined for a resource. + + Roles are separated out by service through automation controller, Event-Driven Ansible, and automation hub. + + - title: Create a plugin (optional) + description: |- + ##To create a plugin: + + Inventory updates use dynamically-generated YAML files which are parsed by their inventory plugin. + In automation controller v4.4, you can give the inventory plugin configuration directly to automation controller using the inventory source `source_vars`. + + For more information about creating inventory plugins, see [Inventory Plugins](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/automation_controller_user_guide/index#ref-controller-inventory-plugins). + + - title: Publish a collection + description: |- + ##To publish to a collection: + + As a content creator, you can use namespaces in automation hub to curate and manage collections for the following purposes: + + - Create groups with permissions to curate namespaces and upload collections to private automation hub. 
+ - Add information and resources to the namespace to help end users of the collection in their automation tasks. + - Upload collections to the namespace. + - Review the namespace import logs to determine the success or failure of uploading the collection and its current approval status. + + For more information about collections, see [Managing collections](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/managing_content_in_automation_hub/managing-collections-hub) in automation hub. + + - title: Upload a collection to automation hub + description: |- + ##To upload a collection to automation hub: + + You can upload the collection by using either the automation hub user interface or the `ansible-galaxy` client. + + ###Prerequisites + + - You have configured the `ansible-galaxy` client for automation hub. + - You have at least one namespace. + - You have run all content through ansible-test sanity. + - You are a Red Hat Connect Partner. Learn more at [Red Hat Partner Connect](https://connect.redhat.com/). + + ###Procedure + + 1. From the navigation panel, select **Automation Content**. + 2. From the automation hub navigation panel, select **Collections** > **Namespaces**. + 3. On the **My namespaces** tab, locate the namespace to which you want to upload a collection. + 4. Click **View collections** and click **Upload collection**. + 5. In the **New collection modal**, click **Select file**. Locate the file on your system. + 6. Click **Upload**. + 7. Using the ansible-galaxy client, enter the following command: + + ``` + ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz --api-key=SECRET + ``` + + review: + instructions: |- + #### To verify that you've uploaded a collection: + Does the collection appear in the **Collections** list? + failedTaskHelp: This task is not verified yet. Try the task again. + summary: + success: You have viewed the details of your collection! + failed: Try the steps again. + + conclusion: You successfully completed the getting started steps for Ansible Automation Platform! If you + want to learn how to find content, take the **Finding content in ansible automation platform** quick start. + + nextQuickStart: [finding-content-in-ansible-automation-platform] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Automation-Operator/Inventories-automation-operator.yaml b/downstream/quick-start-yamls/Automation-Operator/Inventories-automation-operator.yaml new file mode 100644 index 0000000000..5d0c61ab9e --- /dev/null +++ b/downstream/quick-start-yamls/Automation-Operator/Inventories-automation-operator.yaml @@ -0,0 +1,58 @@ +metadata: + name: execute-an-inventory + # you can add additional metadata here + instructional: true +spec: + displayName: Inventories + durationMinutes: 5 + # Optional type section, will display as a tile on the card + type: + text: Automation operator + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. 
compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS
0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + + description: |- + Executing inventories. + introduction: |- + Red Hat Ansible Automation Platform works against a list of managed nodes or hosts in your infrastructure that are logically organized, using an inventory file. + Platform administrators can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario and describe host deployments to Ansible. + + By using an inventory file, Ansible can manage a large number of hosts with a single command. + + Inventories also help you use Ansible more efficiently by reducing the number of command line options you have to specify. + Inventories are divided into groups, and these groups contain the hosts, as shown in the example sketched after this quick start. + + Platform administrators and automation developers have the permissions to create inventories. + As an automation operator you can view inventories and their details. + + tasks: + - title: Execute an inventory + description: |- + ## To execute an inventory: + + - From the navigation panel, select **Automation Execution** > **Infrastructure** > **Inventories**. + The **Inventories** window displays a list of inventories that are currently available, along with the following information: + - **Name**: The inventory name. + - **Status**: The statuses are: + - **Success**: When the inventory source sync completed successfully. + - **Disabled**: No inventory source added to the inventory. + - **Error**: When the inventory source sync completed with an error. + - **Type**: Identifies whether it is a standard inventory, a smart inventory, or a constructed inventory. + - **Organization**: The organization to which the inventory belongs. + + Click the inventory name to display the **Details** page for the selected inventory, which shows the inventory's groups and hosts. + + For more information about inventories, see the [Inventories](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#controller-inventories) section of the Using Automation Execution guide. + + conclusion: You successfully completed the steps for executing an inventory! If you + want to learn how to execute a project, take the **Projects** quick start.
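To make the group-and-host structure described in this quick start concrete, here is a minimal sketch of an Ansible inventory file in YAML format. It is illustrative only: the group names (`webservers`, `dbservers`) and hostnames are assumptions, not values taken from this repository.

```yaml
# Minimal illustrative inventory: groups contain hosts, and a group
# can carry variables shared by every host in it.
webservers:
  hosts:
    web1.example.com:
    web2.example.com:
  vars:
    http_port: 80
dbservers:
  hosts:
    db1.example.com:
```

With an inventory like this, a single command such as `ansible webservers -i inventory.yml -m ping` would address every host in the `webservers` group at once.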
+ + nextQuickStart: [execute-project] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Automation-Operator/Projects-automation-operator.yaml b/downstream/quick-start-yamls/Automation-Operator/Projects-automation-operator.yaml new file mode 100644 index 0000000000..74e1ae35b1 --- /dev/null +++ b/downstream/quick-start-yamls/Automation-Operator/Projects-automation-operator.yaml @@ -0,0 +1,65 @@ +metadata: + name: execute-project + # you can add additional metadata here + instructional: true +spec: + displayName: Projects + durationMinutes: 5 + # Optional type section, will display as a tile on the card + type: + text: Automation operator + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjI
zMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + + description: |- + Executing projects. + introduction: |- + A project is a logical collection of Ansible playbooks, represented in automation controller. + + Platform administrators and automation developers have the permissions to create projects. + As an automation operator you can view and sync projects. + + tasks: + - title: Execute a project + description: |- + ## To execute a project: + + 1. From the navigation panel, select **Automation Execution** > **Projects**. + 2. Click a project to view its details. + [As part of the initial setup, a default project is created to help you get started.]{{admonition tip}} + 3. For each project listed, you can sync the latest SCM revision, edit the project, or copy the project attributes, using the icons next to each project. + + ### Additional information + The **Projects** window displays a list of the projects that are currently available. + + You are provided with a default project that you can work with initially. + + Status indicates the state of the project and might be one of the following (note that you can also filter your view by specific status types): + - **Pending** - The source control update has been created, but not queued or started yet. Any job (not just source control updates) stays pending until it is ready to be run by the system. Possible reasons for it not being ready are: + - It has dependencies that are currently running, so it has to wait until they are done. + - There is not enough capacity to run in the locations where it is configured to run. + - **Waiting** - The source control update is in the queue waiting to be executed. + - **Running** - The source control update is currently in progress. + - **Success** - The last source control update for this project succeeded.
+ - **Failed** - The last source control update for this project failed. + - **Error** - The last source control update job failed to run at all. + - **Canceled** - The last source control update for the project was canceled. + - **Never updated** - The project is configured for source control, but has never been updated. + - **OK** - The project is not configured for source control, and is correctly in place. + - **Missing** - Projects are absent from the project base path of `/var/lib/awx/projects`. + This is applicable to manual or source control managed projects. + + For more information about projects, see [Projects](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/controller-projects). + + conclusion: You successfully completed the steps for executing a project! If you + want to learn how to launch a template, take the **Templates** quick start. + + nextQuickStart: [launch-a-job-template] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Automation-Operator/Rulebook-activations-automation-operator.yaml b/downstream/quick-start-yamls/Automation-Operator/Rulebook-activations-automation-operator.yaml new file mode 100644 index 0000000000..d0565ac0df --- /dev/null +++ b/downstream/quick-start-yamls/Automation-Operator/Rulebook-activations-automation-operator.yaml @@ -0,0 +1,50 @@ +metadata: + name: viewing-a-rulebook-activation + # you can add additional metadata here + instructional: true +spec: + displayName: Rulebook activations + durationMinutes: 5 + # Optional type section, will display as a tile on the card + type: + text: Automation operator + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. 
compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS
0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + prerequisites: + - You have a valid Ansible Automation Platform subscription. + description: |- + Executing rulebook activations. + introduction: |- + A rulebook activation is a process running in the background defined by a decision environment executing a specific rulebook. + + Platform administrators and automation developers have the permissions to create rulebook activations. + As an automation operator you can view rulebook activations and their details. + + tasks: + - title: Execute a rulebook activation + description: |- + ## To execute a rulebook activation: + + - From the navigation panel, select **Automation Decisions** > **Rulebook Activations**. + - On the **Rulebook Activations** page, you can view the rulebook activations that have been created along with the **Activation** status, **Number of rules associated** with the rulebook, the **Fire count**, and **Restart count**. + + ### Additional information: + + If the **Activation Status** is **Running**, it means that the rulebook activation is running in the background and executing the required actions according to the rules declared in the rulebook. + You can view more details by selecting the activation from the **Rulebook Activations** list view. + For all activations that have run, you can view the **Details** and **History** tabs to get more information about what happened. + + For more information about viewing rulebook activations, see the [Rulebook activations](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/event-driven_ansible_controller_user_guide/eda-rulebook-activations) section of the Event-Driven Ansible controller user guide. + + conclusion: You successfully completed the steps for executing a rulebook activation! If you + want to learn how to set up Ansible Lightspeed, take the **Setting up Ansible Lightspeed** quick start. + + nextQuickStart: [ansible-lightspeed] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Automation-Operator/Templates-automation-operator.yaml b/downstream/quick-start-yamls/Automation-Operator/Templates-automation-operator.yaml new file mode 100644 index 0000000000..5e078d2c76 --- /dev/null +++ b/downstream/quick-start-yamls/Automation-Operator/Templates-automation-operator.yaml @@ -0,0 +1,54 @@ +metadata: + name: launch-a-job-template + # you can add additional metadata here + instructional: true +spec: + displayName: Templates + durationMinutes: 5 + # Optional type section, will display as a tile on the card + type: + text: Automation operator + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. 
Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE
1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + prerequisites: + - You have a valid Ansible Automation Platform subscription. + description: |- + Launching a job template. + introduction: |- + Launch a job template, which combines an Ansible playbook from a project and the settings required to launch it. + Use job templates to execute the same job many times and encourage the reuse of Ansible playbook content (a minimal playbook sketch follows this quick start). + + Platform administrators and automation developers have the permissions to create templates. + As an automation operator you can view and launch templates. + tasks: + - title: Launch a job template + description: |- + ## To launch a job template: + + 1. From the navigation panel, select **Automation Execution** > **Templates**. + 2. Click a template to view its details. + [As part of the initial setup, a default job template is created to help you get started.]{{admonition tip}} + 3. From the **Templates** page, click the launch icon to run your job template. + + ### Additional information: + + The **Templates** list view shows job templates that are currently available. + The default view is collapsed (Compact), showing the template name, template type, and the timestamp of the last job that ran using that template. + You can click the arrow icon next to each entry to expand and view more information. + This list is sorted alphabetically by name, but you can sort by other criteria, or search by various fields and attributes of a template. + From this screen you can launch, edit, and copy a job template. + + For more information about templates, see the [Job templates](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#controller-job-templates) and [Workflow job templates](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#controller-workflow-job-templates) sections. + + conclusion: You successfully completed the steps for launching a template! If you + want to learn how to view rulebook activations, take the **Rulebook activations** quick start.
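Because a job template is a saved combination of a playbook and the settings needed to launch it, a minimal playbook sketch may help show what a template actually runs. This is an illustrative example only: the play name, the `webservers` group, and the `httpd` package are assumptions, not content from this repository.

```yaml
# Illustrative playbook that a job template could reference from a project.
# The job template pairs a playbook like this with an inventory, credentials,
# and an execution environment so that it can be launched repeatedly.
- name: Ensure web servers are configured
  hosts: webservers
  become: true
  tasks:
    - name: Install the Apache package
      ansible.builtin.package:
        name: httpd
        state: present
```

Launching the same template twice runs this playbook twice with identical settings, which is what makes templates a unit of reuse.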
+ + nextQuickStart: [viewing-a-rulebook-activation] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Automation-mesh.yaml b/downstream/quick-start-yamls/Automation-mesh.yaml new file mode 100644 index 0000000000..fcc6c3f63a --- /dev/null +++ b/downstream/quick-start-yamls/Automation-mesh.yaml @@ -0,0 +1,36 @@ +metadata: + name: automation-mesh + # you can add additional metadata here + instructional: true +spec: + displayName: Setting up automation mesh + durationMinutes: 5 + # Optional type section, will display as a tile on the card + type: + text: Gateway + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMz
M5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + + description: |- + Automate at scale in a cloud-native way. + introduction: |- + Deploy automation mesh as part of your operator or VM-based Ansible Automation Platform environment. + tasks: + - title: Set up automation mesh + description: |- + ## To set up automation mesh: + + See the installation guide that matches your installation type: + + - [Red Hat Ansible Automation Platform Automation Mesh for operator-based installations](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_mesh_for_managed_cloud_or_operator_environments) + - [Red Hat Ansible Automation Platform Automation Mesh Guide for VM-based installations](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/automation_mesh_for_vm_environments) + + conclusion: You successfully completed the setup of automation mesh! + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Build-decision-environment.yaml b/downstream/quick-start-yamls/Build-decision-environment.yaml new file mode 100644 index 0000000000..26d4b44b11 --- /dev/null +++ b/downstream/quick-start-yamls/Build-decision-environment.yaml @@ -0,0 +1,48 @@ +metadata: + name: build-decision-environment + # you can add additional metadata here + instructional: true +spec: + displayName: Building a decision environment + durationMinutes: 5 + # Optional type section, will display as a tile on the card + type: + text: Automation content + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. 
compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PCEtLSBHZW5lcmF0ZWQgYnkgSWNvTW9vbi5pbyAtLT4KPHN2ZyB2ZXJzaW9uPSIxLjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgd2lkdGg9IjUxMiIgaGVpZ2h0PSI1MTIiIHZpZXdCb3g9IjAgMCA1MTIgNTEyIj4KPHRpdGxlPjwvdGl0bGU+CjxnIGlkPSJpY29tb29uLWlnbm9yZSI+CjwvZz4KPHBhdGggZD0iTTQ0OCA2NHY0MTZoLTMzNmMtMjYuNTEzIDAtNDgtMjEuNDktNDgtNDhzMjEuNDg3LTQ4IDQ4LTQ4aDMwNHYtMzg0aC0zMjBjLTM1LjE5OSAwLTY0IDI4LjgtNjQgNjR2Mzg0YzAgMzUuMiAyOC44MDEgNjQgNjQgNjRoMzg0di00NDhoLTMyeiI+PC9wYXRoPgo8cGF0aCBkPSJNMTEyLjAyOCA0MTZ2MGMtMC4wMDkgMC4wMDEtMC4wMTkgMC0wLjAyOCAwLTguODM2IDAtMTYgNy4xNjMtMTYgMTZzNy4xNjQgMTYgMTYgMTZjMC4wMDkgMCAwLjAxOS0wLjAwMSAwLjAyOC0wLjAwMXYwLjAwMWgzMDMuOTQ1di0zMmgtMzAzLjk0NXoiPjwvcGF0aD4KPC9zdmc+Cg== + + description: |- + Build a decision environment. + + Persona: Platform administrator, Automation developer + introduction: |- + + Decision environments are container images used to run Ansible rulebooks. + They create a common language for communicating automation dependencies, and provide a standard way to build and distribute the automation environment. + The default decision environment is found in the Ansible-Rulebook. + + tasks: + + - title: Build a decision environment + description: |- + ## To build a decision environment: + + 1. From the navigation panel, select **Automation Decisions** > **Decision Environments**. + 2. Click **Create decision environment** to add a decision environment. + 3. Enter the appropriate details into the required fields. + 4. Select **Create decision environment**. + + For more information, see [Decision environments](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html/using_automation_decisions/eda-decision-environments#doc-wrapper). + + conclusion: You successfully completed the steps for building a decision environment! If you + want to learn how to create an inventory, take the **Creating an inventory** quick start. + + nextQuickStart: [create-inventory] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Build-execution-environment.yaml b/downstream/quick-start-yamls/Build-execution-environment.yaml new file mode 100644 index 0000000000..960a1ecc47 --- /dev/null +++ b/downstream/quick-start-yamls/Build-execution-environment.yaml @@ -0,0 +1,89 @@ +metadata: + name: build-execution-environment + # you can add additional metadata here + instructional: true +spec: + displayName: Building an automation execution environment + durationMinutes: 10 + # Optional type section, will display as a tile on the card + type: + text: Automation Content + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. 
compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PCEtLSBHZW5lcmF0ZWQgYnkgSWNvTW9vbi5pbyAtLT4KPHN2ZyB2ZXJzaW9uPSIxLjEiIHhtbG5zPSJodHRwOi8vd3d3LnczLm9yZy8yMDAwL3N2ZyIgd2lkdGg9IjUxMiIgaGVpZ2h0PSI1MTIiIHZpZXdCb3g9IjAgMCA1MTIgNTEyIj4KPHRpdGxlPjwvdGl0bGU+CjxnIGlkPSJpY29tb29uLWlnbm9yZSI+CjwvZz4KPHBhdGggZD0iTTQ0OCA2NHY0MTZoLTMzNmMtMjYuNTEzIDAtNDgtMjEuNDktNDgtNDhzMjEuNDg3LTQ4IDQ4LTQ4aDMwNHYtMzg0aC0zMjBjLTM1LjE5OSAwLTY0IDI4LjgtNjQgNjR2Mzg0YzAgMzUuMiAyOC44MDEgNjQgNjQgNjRoMzg0di00NDhoLTMyeiI+PC9wYXRoPgo8cGF0aCBkPSJNMTEyLjAyOCA0MTZ2MGMtMC4wMDkgMC4wMDEtMC4wMTkgMC0wLjAyOCAwLTguODM2IDAtMTYgNy4xNjMtMTYgMTZzNy4xNjQgMTYgMTYgMTZjMC4wMDkgMCAwLjAxOS0wLjAwMSAwLjAyOC0wLjAwMXYwLjAwMWgzMDMuOTQ1di0zMmgtMzAzLjk0NXoiPjwvcGF0aD4KPC9zdmc+Cg== + + description: |- + Build, view, and sync an environment. + + Persona: Platform administrator, Automation developer + introduction: |- + All automation in Red Hat Ansible Automation Platform runs on container images called automation execution environments. + Automation execution environments create a common language for communicating automation dependencies, and offer a standard way to build and distribute the automation environment. + + tasks: + - title: Build an execution environment + description: |- + ## To build an execution environment: + + An automation execution environment must contain the following: + - Ansible Core 2.16 or later + - Python 3.10 or later + - Ansible Runner + - Ansible content collections and their dependencies + - System dependencies + + Ansible Builder is a command line tool that automates the process of building automation execution environments by using metadata defined in various Ansible Collections or created by the user. + + For more information about Ansible Builder and execution environments, see: + - [Using Ansible Builder](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/creating_and_using_execution_environments/index#assembly-using-builder) + - [Creating and Consuming Execution Environments](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/creating_and_using_execution_environments/index) + + - title: View an execution environment + description: |- + ## To view an execution environment: + + 1. From the navigation panel, select **Automation Execution** > **Infrastructure** > **Execution Environments**. + 2. Click an execution environment to view its details. + As part of the initial setup, a **Control Plane Execution Environment**, a **Default execution environment**, and a **Minimal execution environment** are created to help you get started, but you can also create your own. + + - title: Add an execution environment to a job template + description: |- + ## To add an execution environment to a job template: + + ### Prerequisites: + - You have access to an execution environment created using ansible-builder as described in [Building an execution environment](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#ref-controller-build-exec-envs). + Use the automation controller UI to specify the execution environment to use in your job templates. + - You have the appropriate permissions to use an execution environment in a job.
+ - For jobs or job templates that use an execution environment with an assigned credential, ensure that the credential contains a username, host, and password. + + ### Procedure: + 1. From the navigation panel, select **Automation Execution** > **Infrastructure** > **Execution Environments**. + 2. Click **Create execution environment**. + 3. Enter the appropriate details into the required fields. + 4. Click **Create execution environment**. + 5. To add an execution environment to a job template, specify it in the **Execution environment** field of the job template. + + After you add an execution environment to job templates, those templates are listed on the **Templates** tab of the execution environment. + + For more information, see [Execution environments](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#assembly-controller-execution-environments). + + review: + instructions: |- + #### To verify that you've added an execution environment: + Is the execution environment listed on the **Execution Environments** list view? + failedTaskHelp: Try the steps again. + summary: + success: You have viewed the details of your environment! + failed: + + conclusion: You successfully completed the steps for building an execution environment! If you + want to learn how to build a decision environment, take the **Building a decision environment** quick start. + + nextQuickStart: [build-decision-environment] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Create-inventory.yaml b/downstream/quick-start-yamls/Create-inventory.yaml new file mode 100644 index 0000000000..effdf07648 --- /dev/null +++ b/downstream/quick-start-yamls/Create-inventory.yaml @@ -0,0 +1,81 @@ +metadata: + name: create-inventory + # you can add additional metadata here + instructional: true +spec: + displayName: Creating an inventory + durationMinutes: 10 + # Optional type section, will display as a tile on the card + type: + text: Platform + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. 
compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS
0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + prerequisites: + - You have completed the initial setup of Ansible Automation Platform. + - You have a valid Ansible Automation Platform subscription. + + description: |- + Create or view an inventory. + + Persona: Platform administrator, Automation developer + introduction: |- + View the default inventory that was created during initial setup, and create a new inventory. + + tasks: + - title: Get started with inventories + description: |- + ## To view the default inventory: + + 1. From the navigation panel, select **Automation Execution** > **Infrastructure** > **Inventories**. + + 2. Click **Demo Inventory**. + As part of the initial setup, the **Demo Inventory** is created to help you get started, but you can also create your own. + + - title: Create an inventory + description: |- + ## To create an inventory: + + 1. From the navigation panel, select **Automation Execution** > **Infrastructure** > **Inventories**. + The **Inventories** window displays a list of the inventories that are currently available. + 2. Click **Create inventory**, and select **Create inventory** from the list. + 3. In the Create inventory wizard, complete the required fields: + - **Name**: Give your inventory a unique name. + - **Description**: Optionally, write a description for your inventory. + - **Organization**: Select an organization to associate with the inventory. + - **Instance Groups**: Optionally, select the instance groups for this inventory to run on. + - **Labels**: Optional labels that describe this inventory, such as 'dev' or 'test'. Labels can be used to group and filter inventories and completed jobs. + - **Variables**: Optionally, enter variables. + They must be in JSON or YAML syntax. Use the radio button to toggle between the two. A small YAML sketch follows this procedure. + 4. Click **Create inventory**. + + After saving the new inventory, you can proceed with configuring permissions, groups, hosts, and sources, and viewing completed jobs, if applicable to your inventory type.
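As a small, hedged illustration of the **Variables** field in step 3, inventory variables can be entered directly in YAML; the keys shown here (`ansible_user`, `ntp_server`) are example names, not values the procedure requires.

```yaml
# Example inventory variables in YAML syntax. The same data could be
# entered as JSON by toggling the radio button in the form.
ansible_user: cloud-user
ntp_server: ntp.example.com
```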
+ + For more information about these configurations and tasks, see the following documentation: + + - [Adding permissions to inventories](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#proc-controller-adding-inv-permissions) + - [Adding groups to inventories](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#proc-controller-add-groups) + - [Adding a host](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#proc-controller-add-hosts) + - [Adding a source](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#proc-controller-add-source) + - [View completed jobs](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#ref-controller-view-completed-jobs) + + review: + instructions: |- + #### To verify that you've created an inventory: + Is the inventory listed on the **Inventories** list view? + failedTaskHelp: Try the steps again. For more information about inventories, see [Inventories](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#controller-inventories). + summary: + success: You have viewed the details of your inventory! + failed: + + conclusion: You successfully completed the steps for creating an inventory! If you + want to learn how to create a project, take the **Creating a project** quick start. + + nextQuickStart: [create-project] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Create-project.yaml b/downstream/quick-start-yamls/Create-project.yaml new file mode 100644 index 0000000000..02682ca09a --- /dev/null +++ b/downstream/quick-start-yamls/Create-project.yaml @@ -0,0 +1,66 @@ +metadata: + name: create-project + # you can add additional metadata here + instructional: true +spec: + displayName: Creating a project + durationMinutes: 5 + # Optional type section, will display as a tile on the card + type: + text: Platform + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. 
compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS
0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + + description: |- + Create a project. + + Persona: Platform administrator, Automation developer + introduction: |- + Create a project, which is a logical collection of playbooks, in automation controller. + You can manage playbooks and playbook directories in the following ways: + + - By placing them manually under the Project Base Path on your automation controller server. + - By placing your playbooks into a source code management (SCM) system supported by automation controller. + These include Git, Subversion, Mercurial, and Red Hat Insights. + + tasks: + - title: Creating a project + description: |- + ## To create a project: + + 1. From the navigation panel, select **Automation Execution** > **Projects**. + 2. On the **Projects** page, click **Create project**. + 3. In the **Create Project** form, complete the required fields: + - **Name**: Give your project a unique name. + - **Description**: Optionally, write a description for your project. + - **Organization**: Associate an organization with this project. + - **Execution environment**: Optionally, enter the name of the execution environment or search from a list of existing ones to run this project. + - **Source Control Type**: Select a source code management (SCM) type associated with this project from the menu. + - **Content Signature Validation Credential**: Optionally, use this field to enable content verification. + Specify the GPG key to use for validating content signature during project synchronization. + If the content has been tampered with, the job does not run. + 4. Click **Create project**. + + For more information about projects, see [Projects](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/controller-projects). + + review: + instructions: |- + #### To verify that you've created a project: + Is the project listed on the **Projects** list view? + failedTaskHelp: Try the steps again. + summary: + success: You have viewed the details of your project! + failed: + + conclusion: You successfully completed the steps for creating a project! If you + want to learn how to create a job template or workflow job template, take the **Creating and running a job or workflow template** quick start.
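The rulebook activation quick starts in this patch, including the Create-rulebook-activation.yaml file added next, describe an activation as a decision environment executing a specific rulebook. A minimal sketch of such a rulebook may make that relationship clearer; the webhook source, condition, and job template name below are illustrative assumptions, not content from this repository.

```yaml
# Illustrative Ansible rulebook: a rulebook activation runs a rulebook
# like this inside a decision environment, matching incoming events
# against conditions and triggering actions.
- name: Respond to webhook events
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000
  rules:
    - name: Start remediation when an alert is critical
      condition: event.payload.status == "critical"
      action:
        run_job_template:
          name: Demo Job Template
          organization: Default
```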
+ + nextQuickStart: [creating-a-job-template] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Create-rulebook-activation.yaml b/downstream/quick-start-yamls/Create-rulebook-activation.yaml new file mode 100644 index 0000000000..9bbbc588ad --- /dev/null +++ b/downstream/quick-start-yamls/Create-rulebook-activation.yaml @@ -0,0 +1,68 @@ +metadata: + name: creating-a-rulebook-activation + # you can add additional metadata here + instructional: true +spec: + displayName: Creating a rulebook activation + durationMinutes: 5 + # Optional type section, will display as a tile on the card + type: + text: Platform + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc
4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + prerequisites: + - You have set up a project. + - You have set up a decision environment. + - You have set up an automation controller token. + + description: |- + Create a rulebook activation. + + Persona: Platform administrator, Automation developer + introduction: |- + Create a rulebook activation, which is a process running in the background defined by a decision environment executing a specific rulebook. + + tasks: + - title: Create a rulebook activation + description: |- + ## To create a rulebook activation: + + 1. Log in to Red Hat Ansible Automation Platform. + 2. From the navigation panel, select **Automation Decisions** > **Rulebook Activations**. + 3. Click **Create rulebook activation**. + 4. Complete the required fields: + + - **Name**: Give your rulebook activation a unique name. + - **Project**: Select a project to choose one of its rulebooks for use with this rulebook activation. + - **Rulebook**: Select the rulebook that this rulebook activation will work with. + - **Decision environment**: Select a decision environment, which is a container image used to run Ansible rulebooks. + - **Restart policy**: Select a policy that determines when to restart your rulebook. + - **Log level**: + - **Rulebook activation enabled**: Choose whether or not to automatically run your rulebook activation after creation. + - **Variables**: The variables for the rulebook are in a JSON/YAML format. The content is equivalent to the file passed through the `--vars` flag of ansible-rulebook command. + + 5. Click **Create rulebook activation**. + + Your rulebook activation is now created and can be managed in the **Rulebook Activations** screen. + + review: + instructions: |- + #### To verify that you've created a rulebook activation: + Do you see the details page of your new rulebook activation? 
+ failedTaskHelp: Try the steps again. For more information, see [Rulebook activations](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html/using_automation_decisions/eda-rulebook-activations). + summary: + success: You have viewed the details of your user! + failed: + + conclusion: You successfully completed the creating a rulebook activation steps! If you + want to learn how to set up Ansible Lightspeed, take the **Setting up Ansible Lightspeed** quick start. + + nextQuickStart: [ansible-lightspeed] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Creating-a-job-template.yaml b/downstream/quick-start-yamls/Creating-a-job-template.yaml new file mode 100644 index 0000000000..ed802bc0c6 --- /dev/null +++ b/downstream/quick-start-yamls/Creating-a-job-template.yaml @@ -0,0 +1,160 @@ +metadata: + name: creating-a-job-template + # you can add additional metadata here + instructional: true +spec: + displayName: Creating and running a job or workflow template + durationMinutes: 10 + # Optional type section, will display as a tile on the card + type: + text: Platform + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC
41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + prerequisites: + - You have a valid Ansible Automation Platform subscription. + description: |- + Create and run a job or workflow template. + + Persona: Platform administrator, Automation developer + introduction: |- + A job template combines an Ansible playbook from a project and the settings required to launch it. + Job templates are used for executing the same job many times and encouraging the reuse of Ansible playbook content. + For more information, see [Creating a job template](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#controller-create-job-template). + tasks: + - title: Get started with job templates + description: |- + ## To view a demo job template: + + 1. From the navigation panel, select **Automation Execution** > **Templates**. + + 2. Click **Demo Job Template**. + As part of the initial setup, a **Demo Job Template** is created to help you get started, but you can also create your own. + + - title: Create a job template + description: |- + ##To create a job template: + + 1. From the navigation panel, select **Automation Execution** > **Templates**. + + 2. On the **Templates** list view, click **Create template → Create job template**. + + 3. 
Complete the following mandatory fields: + [If a field has the **Prompt on launch** checkbox selected, you are prompted for the value of that field when you launch the job. Most prompted values override any values set in the job template.]{{admonition tip}} + + - **Name**: Give a unique name for your job template. + - **Job Type**: Select a job template type. + - **Inventory**: Select the inventory that the job template runs against. + A system administrator must grant you or your team permissions to be able to use certain inventories in a job template. + - **Project**: Select the project containing your Ansible playbooks. + - **Playbook**: Specify the playbook that the job template executes. + + Optional: Complete the following fields: + + - **Description**: Enter a description as appropriate. + - **Execution environment**: Select the container image to use for this job. + - **Credentials**: Select credentials for accessing the nodes this job runs against. + - **Labels**: Optional labels that describe this job template, such as `dev` or `test`. + - **Forks**: The number of parallel or simultaneous processes to use while executing the playbook. + - **Limit**: A host pattern to further constrain the list of hosts managed or affected by the playbook. + - **Verbosity**: Control the level of output Ansible produces as the playbook executes. + - **Job slicing**: Specify the number of slices you want this job template to run. + - **Timeout**: The length of time (in seconds) that the job can run before it is canceled. + - **Show changes**: If enabled, show the changes made by Ansible tasks, where supported. + - **Instance groups**: Select the instance groups for this job template to run on. + - **Job tags**: Tags are useful when you have a large playbook, and you want to run a specific part of a play or task. + - **Skip tags**: Skip tags are useful when you have a large playbook, and you want to skip specific parts of a play or task. + - **Extra variables**: Optional extra variables to apply to the job template, as shown in the sketch after this list. +
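The **Extra variables** field accepts standard Ansible variables in YAML or JSON. A minimal sketch, assuming a playbook that actually references these (hypothetical) variable names:

```yaml
---
# Hypothetical extra variables applied to every run of this job template.
# They only take effect if the selected playbook references them.
ntp_server: time.example.com
package_state: present
enable_firewall: true
```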

 

+ 4. Optional: Configure the following options: + + - **Privilege escalation**: If checked, you enable this playbook to run as an administrator. + This is the equivalent of passing the `--become` option to the `ansible-playbook` command. + - **Provisioning callback**: If checked, you enable a host to call back to the automation controller through the REST API and start the launch of a job from this job template. + - **Enable webhook**: If checked, you turn on the ability to interface with a predefined SCM system web service that is used to launch a job template. + - **Concurrent jobs**: If checked, you allow jobs in the queue to run simultaneously if they are not dependent on one another. Check this box if you want to run job slices simultaneously. + - **Enable fact storage**: If checked, automation controller stores gathered facts for all hosts in an inventory related to the job running. + - **Prevent instance group fallback**: Check this option to allow only the instance groups listed in the **Instance groups** field to execute the job. + If unchecked, all available instances in the execution pool are used based on the hierarchy. + 5. Click **Create job template**. + review: + instructions: |- + #### To verify that you have created a job template: + Is the job template listed on the **Templates** list view? + failedTaskHelp: This task is not verified yet. Try the task again. For more information about job templates, see [Job templates](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#controller-job-templates). + summary: + success: You have viewed the details of your job template! + failed: Try the steps again. + + - title: Create a workflow job template + description: |- + ## To create a workflow job template: + + 1. From the navigation panel, select **Automation Execution** > **Templates**. + + 2. On the **Templates** list view, click **Create template → Create workflow job template**. + + 3. Complete the following fields: + [If a field has the **Prompt on launch** checkbox selected, you are prompted for the value of that field when you launch the job. Most prompted values override any values set in the job template.]{{admonition tip}} + + - **Name**: Give a unique name for your workflow job template. + - **Description**: Optionally, enter an arbitrary description as appropriate. + - **Organization**: Choose the organization to use with this template from the organizations available to the logged-in user. + - **Inventory**: Optionally, select the inventory to use with this template from the inventories available to the logged-in user. + - **Limit**: Give a host pattern to further constrain the list of hosts that are managed or affected by the playbook. + Multiple patterns are allowed. Refer to the Ansible documentation for more information and examples on patterns. + - **Source control branch**: Select a branch for the workflow. This branch is applied to all workflow job template nodes that prompt for a branch. + - **Labels**: Optional labels that describe this job template, such as 'dev' or 'test'. Use labels to group and filter job templates and completed jobs. + - **Job tags**: Tags are useful when you have a large playbook, and you want to run a specific part of a play or task. + Use commas to separate tags. + - **Skip tags**: Skip tags are useful when you have a large playbook, and you want to skip specific parts of a play or task. Use commas to separate multiple tags.
+ - **Extra variables**: Optional extra variables to apply to the job template. + + 4. Specify the following **Options** for launching this template, if necessary: + - Check **Enable webhook** to turn on the ability to interface with a predefined source code management (SCM) system web service that is used to launch a workflow job template. + GitHub and GitLab are the supported SCM systems. + - If you enable webhooks, other fields display, prompting for additional information: + - **Webhook service**: Select which service to listen for webhooks from. + - **Webhook URL**: Automatically populated with the URL for the webhook service to POST requests to. + - **Webhook key**: Generated shared secret to be used by the webhook service to sign payloads sent to automation controller. + You must configure this in the settings on the webhook service so that webhooks from this service are accepted in automation controller. + - Check **Enable concurrent jobs** to allow simultaneous runs of this workflow. + + 5. Click **Create workflow job template**. + + review: + instructions: |- + #### To verify that you have created a workflow job template: + Is the workflow job template listed on the **Templates** list view? + failedTaskHelp: This task is not verified yet. Try the task again. For more information about workflow job templates, see [Workflow job templates](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#controller-workflow-job-templates). + summary: + success: You have viewed the details of your workflow job template! + failed: Try the steps again. + + - title: Launch a job template + description: |- + ## To launch a job template: + + A benefit of automation controller is the push-button deployment of Ansible playbooks. + You can configure a template to store all the parameters that you would normally pass to the Ansible playbook on the command line. + In addition to the playbooks, the template passes the inventory, credentials, extra variables, and all options and settings that you can specify on the command line. + + Easier deployments drive consistency by running your playbooks the same way each time and allowing you to delegate responsibilities. + + Run a job template by using one of these methods: + + - From the navigation panel, select **Automation Execution** > **Templates**, then click **Launch template** for the template you want to run. + - In the **Details** view of the job template you want to launch, click **Launch template**. + + For more information, see [Launching a job template](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#controller-launch-job-template). + + conclusion: You successfully completed the creating and running a job or workflow template steps! If you + want to learn how to create a rulebook activation, take the **Creating a rulebook activation** quick start.
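The **Job tags** and **Skip tags** fields described in the previous tasks map directly to Ansible task tags. A minimal sketch, with hypothetical task names, files, and tag names: launching a template with the job tag `configuration` runs only the first task, while the skip tag `packages` skips the second:

```yaml
---
# Hypothetical play showing how task tags interact with the
# Job tags and Skip tags fields of a template.
- name: Demonstrate task tags
  hosts: all
  become: true
  tasks:
    - name: Render application configuration
      ansible.builtin.template:
        src: app.conf.j2
        dest: /etc/app.conf
      tags: [configuration]

    - name: Install application package
      ansible.builtin.package:
        name: app
        state: present
      tags: [packages]
```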
+ + nextQuickStart: [creating-a-rulebook-activation] diff --git a/downstream/quick-start-yamls/Platform Admin/Create-organization-platform-admin.yaml b/downstream/quick-start-yamls/Platform Admin/Create-organization-platform-admin.yaml new file mode 100644 index 0000000000..50d4fcd8b5 --- /dev/null +++ b/downstream/quick-start-yamls/Platform Admin/Create-organization-platform-admin.yaml @@ -0,0 +1,64 @@ +metadata: + name: create-organization + # you can add additional metadata here + instructional: true +spec: + displayName: Create organization + durationMinutes: 5 + # Optional type section, will display as a tile on the card + type: + text: Platform administrator + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxN
jgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + + description: |- + Create an organization. + introduction: |- + You'll learn how to create an organization in this quick start. + tasks: + - title: Create an organization + description: |- + ## To create an organization: + + Ansible Automation Platform automatically creates a default organization. + If you have a Self-support level license, you have only the default organization available and must not delete it. + + You can use the default organization as it is initially set up and edit it later. + + 1. From the navigation panel, select **Access Management** > **Organizations**. + 2. Click **Create organization**. + 3. Enter the **Name** and optionally provide a **Description** for your organization. + [If automation controller is enabled on the platform, continue with Step 4. Otherwise, proceed to Step 6.]{{admonition note}} + 4. Select the name of the **Execution environment**, or search for an existing one, that members of this organization can use to run automation. + 5. Enter the name of the **Instance Groups** on which to run this organization. + 6. Enter the **Galaxy credentials** or search from a list of existing ones. + 7. Select the **Max hosts** for this organization. + The default is 0. + 8. Click **Next**. + 9. If you include more than one instance group, you can manage the instance group order by dragging and dropping the instance group up or down in the list. + [The execution precedence is determined by the order in which the instance groups are listed.]{{admonition note}} + 10. Click **Next** and verify the organization settings. + 11. Click **Finish**. + + review: + instructions: |- + #### To verify that you've added an organization: + Did the **Details** page open after creating the organization? + Is the organization listed on the **Organizations** list view?
+ failedTaskHelp: Try the steps again. For more information, see [Organizations](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/automation_controller_user_guide/index#assembly-controller-organizations). + summary: + success: You have viewed the details of your organization! + failed: + + conclusion: You successfully completed the creating an organization steps! If you + want to learn how to create a team, take the **Create teams** quick start. + + nextQuickStart: [creating-a-team] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Platform Admin/Create-teams-platform-admin.yaml b/downstream/quick-start-yamls/Platform Admin/Create-teams-platform-admin.yaml new file mode 100644 index 0000000000..014f85966d --- /dev/null +++ b/downstream/quick-start-yamls/Platform Admin/Create-teams-platform-admin.yaml @@ -0,0 +1,57 @@ +metadata: + name: creating-a-team + # you can add additional metadata here + instructional: true +spec: + displayName: Create teams + durationMinutes: 10 + # Optional type section, will display as a tile on the card + type: + text: Platform administrator + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE
5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + prerequisites: + - You have completed the Ansible Automation Platform installation. + - You have a valid Ansible Automation Platform subscription. + description: |- + Create a team and associate organizations and roles to that team. + introduction: |- + You'll create a team and associate organizations and roles as needed in this quick start. + tasks: + - title: Create teams + description: |- + ##To create teams: + + You can create new teams, assign an organization to the team, and manage the users and administrators associated with each team. + Users associated with a team inherit the permissions associated with the team and any organization permissions to which the team has membership. + To add a user to a team, the user must have already been created. + + 1. From the navigational panel, select **Access Management** > **Teams**. + 2. Click **Create team**. + 3. Enter the appropriate details into the required fields. + 4. Select an organization from the **Organization** list to which you want to associate this team. + [Each team can only be assigned to one organization.]{{admonition note}} + 5. Click **Create team**. The **Details** page opens, where you can review and edit your team information. 
+ + review: + instructions: |- + #### To verify that you've added a team: + Did the **Details** page open after creating the team? + Is the team listed on the **Teams** list view? + failedTaskHelp: Try the steps again. For more information, see [Managing teams](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/access_management_and_authentication/index#assembly-controller-teams). + summary: + success: You have viewed the details of your team! + failed: + + conclusion: You successfully completed the creating a team steps! If you + want to learn how to create a user, take the **Create users** quick start. + + nextQuickStart: [create-users] + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Platform Admin/Create-users-platform-admin.yaml b/downstream/quick-start-yamls/Platform Admin/Create-users-platform-admin.yaml new file mode 100644 index 0000000000..091a30e5e8 --- /dev/null +++ b/downstream/quick-start-yamls/Platform Admin/Create-users-platform-admin.yaml @@ -0,0 +1,61 @@ +metadata: + name: create-users + # you can add additional metadata here + instructional: true +spec: + displayName: Create users + durationMinutes: 5 + # Optional type section, will display as a tile on the card + type: + text: Platform administrator + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon:
data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1M
i0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + prerequisites: + - You have completed the Ansible Automation Platform installation. + - You have a valid Ansible Automation Platform subscription. + description: |- + Create a user and associate organizations, teams, and roles to that user. + introduction: |- + You'll create a user and associate it with organizations, teams, and roles as needed in this quick start. + tasks: + - title: Create users + description: |- + ## To create users: + + 1. From the navigation panel, select **Access Management** > **Users**. + 2. Click **Create user**. + 3. Enter the appropriate details into the required fields. + + [If you are modifying your own password, log out and log back in again for it to take effect.]{{admonition note}} + + 4. Select the **User type**. You can select the following options: + + - **Ansible Automation Platform Administrator**: An Administrator has full access to services and can manage other users. + - **Ansible Automation Platform Auditor**: An Auditor has view-only permissions on all objects. + + 5. Optional: Select the **Organizations** to assign to this user. + 6. Click **Create user**. + When the user is successfully created, the User dialog opens. + From here, you can review and modify the user’s **Teams**, **Roles**, and other membership details. + + [If the user is not newly created, the details screen displays the last login activity of that user.]{{admonition note}} + + review: + instructions: |- + #### To verify that you've added a user: + Did the **Details** page open after creating the user? + Is the user listed on the **Users** list view? + failedTaskHelp: Try the steps again. For more information, see [Managing users in automation controller](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/access_management_and_authentication/index#assembly-controller-users). + summary: + success: You have viewed the details of your user! + failed: + + conclusion: You successfully completed the creating a user steps! + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Platform Admin/Dynamic-inventory-platform-admin.yaml b/downstream/quick-start-yamls/Platform Admin/Dynamic-inventory-platform-admin.yaml new file mode 100644 index 0000000000..8c1de7c6c2 --- /dev/null +++ b/downstream/quick-start-yamls/Platform Admin/Dynamic-inventory-platform-admin.yaml @@ -0,0 +1,95 @@ +metadata: + name: dynamic-inventory + # you can add additional metadata here + instructional: true +spec: + displayName: Creating a dynamic inventory + durationMinutes: 10 + # Optional type section, will display as a tile on the card + type: + text: Platform + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2.
Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE
1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + prerequisites: + - You have completed the initial setup of Ansible Automation Platform. + - You have a valid Ansible Automation Platform subscription. + + description: |- + Create or view a dynamic inventory. + + Persona: Platform administrator + introduction: |- + Use built-in Automation Execution inventory plugins to source your existing resources, such as Amazon Web Services (AWS), Microsoft Azure, or Google Compute Engine (GCE) resources, into a dynamic inventory and get to automation quickly. + + tasks: + - title: Create a credential to connect your resources to Automation Execution + description: |- + ## To create a credential to connect your resources to Automation Execution (automation controller): + + 1. From the navigation panel, select **Automation Execution** > **Infrastructure** > **Credentials**. + 2. Click **Create credential**. + 3. Enter the appropriate details into the following fields: + - **Name**: Give your credential a unique name. + - Optional: **Description** + - Optional: **Organization** + - **Credential type**: Select your chosen source. + - Enter the relevant information for your chosen source. + For more information on your chosen credential type, see [Credential types](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#ref-controller-credential-types). + 4. Click **Create credential**. + + review: + instructions: |- + #### To verify that you've created a credential: + Does your credential appear in the **Credentials** list view? + failedTaskHelp: Try the steps again or read more about this topic at [Creating new credentials](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/using_automation_execution/index#creating-credentials). + summary: + success: You have viewed the details of your credential! + failed: + + - title: Set up an inventory source under a newly created inventory + description: |- + ## To set up an inventory source under a newly created inventory: + + After you've created your credential, you're ready to create your dynamic inventory. + + 1. From the navigation panel, select **Automation Execution** > **Infrastructure** > **Inventories**. + The **Inventories** window displays a list of the inventories that are currently available. + 2. Click **Create inventory**, and select **Create inventory** from the list. + 3. Enter the appropriate details into the following fields: + - **Name**: Give a unique name for your inventory. + - **Description**: Optionally, write a description for your inventory. + - **Organization**: Select an organization to associate with the inventory.
+ - **Instance Groups**: Optionally, select the instance groups for this inventory to run on. + - **Labels**: Optional labels that describe this inventory, such as 'dev' or 'test'. + Use labels to group and filter inventories and completed jobs. + - **Variables**: Optionally, enter variables. They must be in JSON or YAML syntax. Use the radio button to toggle between the two. + 4. Click **Create inventory**. + 5. In your newly created inventory, click the **Sources** tab. + 6. Click **Add source**. + 7. In the **Add new source** form, complete the required fields: + - **Name**: Give a unique name for your source. + - **Source**: Select your existing source, an [Amazon EC2](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/automation_controller_user_guide/index#ref-controller-inventory-sources), [Google Compute Engine](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/automation_controller_user_guide/index#ref-controller-inventory-sources), or a [Microsoft Azure Resource Manager](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/automation_controller_user_guide/index#proc-controller-azure-resource-manager) source. + - **Credential**: Select the credential you created in task 1. + - **Source Variables**: The source for your resources has additional parameters that you can add in the **Source Variables** section. + See your source documentation for more information. + 8. Click **Save**. + 9. Click **Launch inventory update** in the inventory **Details** tab to sync your instances into this inventory. + + review: + instructions: |- + #### To verify that your inventory has synced correctly: + Check the status of your inventory sync in the **Inventories** list view. + Does the status say **Success**? + failedTaskHelp: Try the steps again or read more about this topic at [Add a new inventory](https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/automation_controller_user_guide/index#proc-controller-adding-new-inventory). + summary: + success: You have viewed the details of your inventory! + failed: + + conclusion: You successfully completed the creating a dynamic inventory steps for Ansible Automation Platform! + \ No newline at end of file diff --git a/downstream/quick-start-yamls/Platform Admin/Getting-started-with-AAP-platform-admin.yaml b/downstream/quick-start-yamls/Platform Admin/Getting-started-with-AAP-platform-admin.yaml new file mode 100644 index 0000000000..38f23f9398 --- /dev/null +++ b/downstream/quick-start-yamls/Platform Admin/Getting-started-with-AAP-platform-admin.yaml @@ -0,0 +1,151 @@ +metadata: + name: getting-started-with-ansible-automation-platform + # you can add additional metadata here + instructional: true +spec: + displayName: Getting started with Ansible Automation Platform + durationMinutes: 20 + # Optional type section, will display as a tile on the card + type: + text: Platform administrator + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3.
compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS
0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + prerequisites: + - You have completed the Ansible Automation Platform installation. + - You have a valid Ansible Automation Platform subscription. + description: |- + Learn how to get started with Ansible Automation Platform + introduction: |- + Get started with Ansible Automation Platform as a platform administrator. + + tasks: + - title: Configure authentication + description: |- + ##To configure authentication: + + After you have logged in and configured your administrator credentials, you must configure authentication for your users. + Depending on your organization's needs and resources, you can either: + + - Add users and teams manually. + - Add users and teams through social authentication, using identity providers such as LDAP. + + - title: Review roles + description: |- + ##To review roles: + + Roles are units of organization in the Ansible Automation Platform. + When you assign a role to a team or user, you are granting access to use, read, or write credentials. + Because of the file structure associated with a role, roles become redistributable units that enable you to share behavior among resources, or with other users. All access that is granted to use, read, or write credentials is handled through roles, and roles are defined for a resource. + + Roles are separated out by service through automation controller, Event-Driven Ansible, and automation hub. + + For more information, see the quick start **Review roles**. + + - title: Create an organization + description: |- + ##To create organizations: + + Ansible Automation Platform automatically creates a default organization. + If you have a Self-support level license, you have only the default organization available and must not delete it. + + You can use the default organization as it is initially set up and edit it later. + + 1. From the navigation panel, select **Access Management** > **Organizations**. + 2. Click **Create organization**. + 3. Enter the appropriate details into the required fields. + 4. Click **Next**. + You can review and edit your organization information. + + review: + instructions: |- + #### To verify that you've added an organization: + Did the **Details** page open after creating the organization? + Is the organization listed on the **Organizations** list view? + failedTaskHelp: Try the steps again. For more information, see [Organizations](https://docs.redhat.com/documentation/en-us/red_hat_ansible_automation_platform/2.5/html-single/access_management_and_authentication/index#assembly-controller-organizations). + summary: + success: You have viewed the details of your organization! + failed: + + - title: Create teams + description: |- + ##To create teams: + + You can create new teams and manage the users and organizations associated with each team. 
+ Users associated with a team inherit the permissions associated with the team and any organization permissions to which the team has membership. + To add a user to a team, the user must have already been created. + + 1. From the navigation panel, select **Access Management** > **Teams**. + 2. Click **Create team**. + 3. Enter the appropriate details into the required fields. + 4. Select an organization from the **Organization** list to which you want to associate this team. + [Each team can only be assigned to one organization.]{{admonition note}} + 5. Click **Create team**. The **Details** page opens, where you can review and edit your team information. + + review: + instructions: |- + #### To verify that you've added a team: + Did the **Details** page open after creating the team? + Is the team listed on the **Teams** list view? + failedTaskHelp: Try the steps again. + summary: + success: You have viewed the details of your team! + failed: + + - title: Create users + description: |- + ##To create users: + + 1. From the navigation panel, select **Access Management** > **Users**. + 2. Click **Create user**. + 3. Enter the appropriate details into the required fields. + + [If you are modifying your own password, log out and log back in again for it to take effect.]{{admonition note}} + + 4. Select the **User type**. You can select the following options: + + - **Ansible Automation Platform Administrator**: An Administrator has full access to services and can manage other users. + - **Ansible Automation Platform Auditor**: An Auditor has view-only permissions on all objects. + + 5. Optional: Select the **Organizations** to be assigned for this user. + 6. Click **Create user**. + When the user is successfully created, the User dialog opens. + From here, you can review and modify the user’s **Teams**, **Roles**, and other membership details. + + [If the user is not newly created, the details screen displays the last login activity of that user.]{{admonition note}} + + review: + instructions: |- + #### To verify that you've added a user: + Did the **Details** page open after creating the user? + Is the user listed on the **Users** list view? + failedTaskHelp: Try the steps again. + summary: + success: You have viewed the details of your user! + failed: + + - title: Add a user, administrator, or team to an organization + description: |- + ##To add a user, administrator, or team to an organization: + + 1. From the **Organizations** list view, select the organization to which you want to add a user, administrator, or team. + 2. Click the **Users** tab to add users, click the **Administrators** tab to add administrators, or click the **Teams** tab to add teams. + 3. Select one or more users, administrators, or teams from the list by clicking the checkbox next to the name to add them as members. + 4. Click **Next**. + 5. Select the role you want the selected user, administrator, or team to have. + 6. Click **Save** on the **Review** screen to apply the roles to the selected user, administrator, or team, and to add them as members. + The **Add Users**, **Add Administrators**, or **Add Teams** window displays the updated roles assigned for each user and team. + + [A user, administrator, or team with associated roles retains them if they are reassigned to another organization.]{{admonition note}} + + conclusion: You successfully completed the getting started steps for Ansible Automation Platform! 
If you + want to learn how to find content, take the **Finding content in Ansible Automation Platform** quick start. + + nextQuickStart: [finding-content-in-ansible-automation-platform] + diff --git a/downstream/quick-start-yamls/Platform Admin/Review-roles-platform-admin.yaml b/downstream/quick-start-yamls/Platform Admin/Review-roles-platform-admin.yaml new file mode 100644 index 0000000000..d9cb7d1725 --- /dev/null +++ b/downstream/quick-start-yamls/Platform Admin/Review-roles-platform-admin.yaml @@ -0,0 +1,137 @@ +metadata: + name: review-roles + # you can add additional metadata here + instructional: true +spec: + displayName: Review roles + durationMinutes: 10 + # Optional type section, will display as a tile on the card + type: + text: Platform administrator + # 'blue' | 'cyan' | 'green' | 'orange' | 'purple' | 'red' | 'grey' + color: grey + # - The icon defined as a base64 value. Example flow: + # 1. Find an .svg you want to use, like from here: https://www.patternfly.org/v4/guidelines/icons/#all-icons + # 2. Upload the file here and encode it (output format - plain text): https://base64.guru/converter/encode/image + # 3. compose - `icon: data:image/svg+xml;base64,` + # - If empty string (icon: ''), will use a default rocket icon + # - If set to null (icon: ~) will not show an icon + icon: data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c3ZnIGlkPSJ1dWlkLWFlMzcyZWFiLWE3YjktNDU4ZC04MzkwLWI5OWZiNzhmYzFlNiIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIiB2aWV3Qm94PSIwIDAgMzggMzgiPjxwYXRoIGQ9Im0yOCwxSDEwQzUuMDI5NDIsMSwxLDUuMDI5NDIsMSwxMHYxOGMwLDQuOTcwNTgsNC4wMjk0Miw5LDksOWgxOGM0Ljk3MDU4LDAsOS00LjAyOTQyLDktOVYxMGMwLTQuOTcwNTgtNC4wMjk0Mi05LTktOVptNy43NSwyN2MwLDQuMjczMzgtMy40NzY2OCw3Ljc1LTcuNzUsNy43NUgxMGMtNC4yNzMzMiwwLTcuNzUtMy40NzY2Mi03Ljc1LTcuNzVWMTBjMC00LjI3MzM4LDMuNDc2NjgtNy43NSw3Ljc1LTcuNzVoMThjNC4yNzMzMiwwLDcuNzUsMy40NzY2Miw3Ljc1LDcuNzV2MThaIi8+PHBhdGggZD0ibTE0LDEwLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yNywxMC40NzU1OWgtM2MtLjM0NDczLDAtLjYyNS4yODAyNy0uNjI1LjYyNXYzYzAsLjM0NDczLjI4MDI3LjYyNS42MjUuNjI1aDNjLjM0NDczLDAsLjYyNS0uMjgwMjcuNjI1LS42MjV2LTNjMC0uMzQ0NzMtLjI4MDI3LS42MjUtLjYyNS0uNjI1Wm0tLjYyNSwzaC0xLjc1di0xLjc1aDEuNzV2MS43NVoiLz48cGF0aCBkPSJtMjcsMjMuNDc1NTloLTNjLS4zNDQ3MywwLS42MjUuMjgwMjctLjYyNS42MjV2M2MwLC4zNDQ3My4yODAyNy42MjUuNjI1LjYyNWgzYy4zNDQ3MywwLC42MjUtLjI4MDI3LjYyNS0uNjI1di0zYzAtLjM0NDczLS4yODAyNy0uNjI1LS42MjUtLjYyNVptLS42MjUsM2gtMS43NXYtMS43NWgxLjc1djEuNzVaIi8+PHBhdGggZD0ibTE0LDIzLjQ3NTU5aC0zYy0uMzQ0NzMsMC0uNjI1LjI4MDI3LS42MjUuNjI1djNjMCwuMzQ0NzMuMjgwMjcuNjI1LjYyNS42MjVoM2MuMzQ0NzMsMCwuNjI1LS4yODAyNy42MjUtLjYyNXYtM2MwLS4zNDQ3My0uMjgwMjctLjYyNS0uNjI1LS42MjVabS0uNjI1LDNoLTEuNzV2LTEuNzVoMS43NXYxLjc1WiIvPjxwYXRoIGQ9Im0yMS4xNTUyNywxMC43NDcwN2MtMS40MDQzLS4zNTkzOC0yLjkwNjI1LS4zNTkzOC00LjMxMDU1LDAtLjMzMzk4LjA4NTk0LS41MzYxMy40MjY3Ni0uNDUwMi43NjA3NC4wODQ5Ni4zMzMwMS40MjI4NS41MzkwNi43NjA3NC40NTAyLDEuMjAzMTItLjMwODU5LDIuNDg2MzMtLjMwODU5LDMuNjg5NDUsMCwuMDUxNzYuMDEzNjcuMTA0NDkuMDE5NTMuMTU1MjcuMDE5NTMuMjc5MywwLC41MzMyLS4xODc1LjYwNTQ3LS40Njk3My4wODU5NC0uMzMzOTgtLjExNjIxLS42NzQ4LS40NTAyLS43NjA3NFoiLz48cGF0aCBkPSJtMTEuNDA3MjMsMTYuNDk1MTJjLS4zMzY5MS0uMDg5ODQtLjY3NDguMTE3MTktLjc2MDc0LjQ1MDItLjE3OTY5LjcwMTE3LS4yNzE0OCwxLjQyNjc2LS4yNzE0OCwyLjE1NTI3LDAsLjcyNzU0LjA5MTgsMS40NTMxMi4yNzE0OCwyLjE1NTI3LjA3MjI3LjI4MjIzLjMyNjE3LjQ2OTczLjYwNTQ3LjQ2OTczLjA1MDc4LDAsLjEwMzUyLS4wMDU4Ni4
xNTUyNy0uMDE5NTMuMzMzOTgtLjA4NTk0LjUzNjEzLS40MjY3Ni40NTAyLS43NjA3NC0uMTU0My0uNjAxNTYtLjIzMjQyLTEuMjIxNjgtLjIzMjQyLTEuODQ0NzMsMC0uNjI0MDIuMDc4MTItMS4yNDQxNC4yMzI0Mi0xLjg0NDczLjA4NTk0LS4zMzM5OC0uMTE1MjMtLjY3NDgtLjQ1MDItLjc2MDc0WiIvPjxwYXRoIGQ9Im0yMC44NDQ3MywyNi4yNDMxNmMtMS4yMDMxMi4zMDg1OS0yLjQ4NjMzLjMwODU5LTMuNjg5NDUsMC0uMzM2OTEtLjA4Nzg5LS42NzQ4LjExNjIxLS43NjA3NC40NTAycy4xMTYyMS42NzQ4LjQ1MDIuNzYwNzRjLjcwMjE1LjE3OTY5LDEuNDI3NzMuMjcxNDgsMi4xNTUyNy4yNzE0OHMxLjQ1MzEyLS4wOTE4LDIuMTU1MjctLjI3MTQ4Yy4zMzM5OC0uMDg1OTQuNTM2MTMtLjQyNjc2LjQ1MDItLjc2MDc0LS4wODQ5Ni0uMzMzOTgtLjQyMzgzLS41MzkwNi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTI2LjU5Mjc3LDE2LjQ5NTEyYy0uMzM0OTYuMDg1OTQtLjUzNjEzLjQyNjc2LS40NTAyLjc2MDc0LjE1NDMuNjAwNTkuMjMyNDIsMS4yMjA3LjIzMjQyLDEuODQ0NzMsMCwuNjIzMDUtLjA3ODEyLDEuMjQzMTYtLjIzMjQyLDEuODQ0NzMtLjA4NTk0LjMzMzk4LjExNjIxLjY3NDguNDUwMi43NjA3NC4wNTE3Ni4wMTM2Ny4xMDQ0OS4wMTk1My4xNTUyNy4wMTk1My4yNzkzLDAsLjUzMzItLjE4NzUuNjA1NDctLjQ2OTczLjE3OTY5LS43MDIxNS4yNzE0OC0xLjQyNzczLjI3MTQ4LTIuMTU1MjcsMC0uNzI4NTItLjA5MTgtMS40NTQxLS4yNzE0OC0yLjE1NTI3LS4wODU5NC0uMzMzMDEtLjQyMzgzLS41NDEwMi0uNzYwNzQtLjQ1MDJaIi8+PHBhdGggZD0ibTIwLjkxMTEzLDE3LjM4NTc0YzAtMS4wNTM3MS0uODU3NDItMS45MTAxNi0xLjkxMTEzLTEuOTEwMTZzLTEuOTExMTMuODU2NDUtMS45MTExMywxLjkxMDE2YzAsLjYyNzc1LjMwODM1LDEuMTgxMDMuNzc3MjIsMS41Mjk2NmwtLjc1ODY3LDMuMDMzODFjLS4wNDY4OC4xODY1Mi0uMDA0ODguMzg0NzcuMTE0MjYuNTM2MTMuMTE4MTYuMTUxMzcuMjk5OC4yNDAyMy40OTIxOS4yNDAyM2gyLjU3MjI3Yy4xOTIzOCwwLC4zNzQwMi0uMDg4ODcuNDkyMTktLjI0MDIzLjExOTE0LS4xNTEzNy4xNjExMy0uMzQ5NjEuMTE0MjYtLjUzNjEzbC0uNzU4NjctMy4wMzM4MWMuNDY4ODctLjM0ODYzLjc3NzIyLS45MDE5Mi43NzcyMi0xLjUyOTY2Wm0tMi4zOTY0OCw0LjA4OTg0bC40ODUzNS0xLjk0MTQxLjQ4NTM1LDEuOTQxNDFoLS45NzA3Wm0uNDg1MzUtMy40Mjg3MWMtLjM2NDI2LDAtLjY2MTEzLS4yOTY4OC0uNjYxMTMtLjY2MTEzcy4yOTY4OC0uNjYwMTYuNjYxMTMtLjY2MDE2LjY2MTEzLjI5NTkuNjYxMTMuNjYwMTYtLjI5Njg4LjY2MTEzLS42NjExMy42NjExM1oiLz48L3N2Zz4= + prerequisites: + - You have completed the Ansible Automation Platform installation. + - You have a valid Ansible Automation Platform subscription. + description: |- + Review roles and create new roles as needed by your organization. + introduction: |- + Roles are units of organization in the Ansible Automation Platform. When you assign a role to a team or user, you are granting access to use, read, or write credentials. Because of the file structure associated with a role, roles become redistributable units that enable you to share behavior among resources, or with other users. All access that is granted to use, read, or write credentials is handled through roles, and roles are defined for a resource. + tasks: + - title: Review an Automation Execution role + description: |- + ##To review Automation Execution roles: + + You can display the set of roles assigned for automation execution resources by using **Access Management**. + From here, you can also sort or search the roles list, and create, edit, or delete automation execution roles. + + 1. From the navigation panel, select **Access Management** > **Roles**. + 2. Select the **Automation Execution** tab. + 3. From the table header, you can sort the list of roles by using the arrows for **Name**, **Description**, **Created** and **Editable** or by making sort selections in the Sort list. + 4. You can filter the list of roles by selecting **Name** or **Editable** from the filter list and clicking the arrow. + + - title: Review an Automation Decision role + description: |- + ##To review an Automation Decision role: + + You can display the set of roles assigned for automation decision resources by using **Access Management**. 
+ From here, you can also sort or search the roles list, and create, edit, or delete automation decision roles. + + 1. From the navigation panel, select **Access Management** > **Roles**. + 2. Select the **Automation Decisions** tab. + 3. From the table header, you can sort the list of roles by using the arrows for **Name**, **Description**, **Created** and **Editable** or by making sort selections in the Sort list. + 4. You can filter the list of roles by selecting **Name** or **Editable** from the filter list and clicking the arrow. + + - title: Review an Automation Content role + description: |- + ##To review an Automation Content role: + + You can display the set of roles assigned for automation content resources by using **Access Management**. + From here, you can also sort or search the roles list, and create, edit, or delete automation content roles. + + 1. From the navigation panel, select **Access Management** > **Roles**. + 2. Select the **Automation Content** tab. + 3. From the table header, you can sort the list of roles by using the arrows for **Name**, **Description**, **Created** and **Editable** or by making sort selections in the Sort list. + 4. You can filter the list of roles by selecting **Name** or **Editable** from the filter list and clicking the arrow. + **Role type** can be filtered for Galaxy-only roles or All roles. + + - title: Create an Automation Execution role + description: |- + ##To create Automation Execution roles: + + Automation controller provides a set of predefined roles with permissions sufficient for standard automation execution tasks. + It is also possible to configure custom roles and assign one or more permission filters to them. + Permission filters define the actions allowed for a specific resource type. + + 1. From the navigation panel, select **Access Management** > **Roles**. + 2. Select the **Automation Execution** tab and click **Create role**. + 3. Provide a name and optionally include a description for the role. + 4. Select a **Content** type. + 5. Select the **Permissions** you want assigned to this role. + 6. Click **Create role**. + + review: + instructions: |- + #### To verify that you've added an Automation Execution role: + Is the role listed on the **Roles** list view? + failedTaskHelp: Try the steps again. + summary: + success: You have viewed the details of your role! + failed: + + - title: Create an Automation Decision role + description: |- + ##To create Automation Decision roles: + + Event-Driven Ansible provides a set of predefined roles with permissions sufficient for standard automation decision tasks. + It is also possible to configure custom roles and assign one or more permission filters to them. + Permission filters define the actions allowed for a specific resource type. + + 1. From the navigation panel, select **Access Management** > **Roles**. + 2. Select the **Automation Decisions** tab and click **Create role**. + 3. Provide a name and optionally include a description for the role. + 4. Select a **Content** type. + 5. Select the **Permissions** you want assigned to this role. + 6. Click **Create role**. + + review: + instructions: |- + #### To verify that you've added an Automation Decision role: + Is the role listed on the **Roles** list view? + failedTaskHelp: Try the steps again. + summary: + success: You have viewed the details of your role! 
+ failed: + + - title: Create an Automation Content role + description: |- + ##To create Automation Content roles: + + Automation hub provides a set of predefined roles with permissions sufficient for standard automation content tasks. + It is also possible to configure custom roles and assign one or more permission filters to them. + Permission filters define the actions allowed for a specific resource type. + + 1. From the navigation panel, select **Access Management** > **Roles**. + 2. Select the **Automation Content** tab and click **Create role**. + 3. Provide a name and optionally include a description for the role. + 4. Select a **Content** type. + 5. Select the **Permissions** you want assigned to this role. + 6. Click **Create role**. + + review: + instructions: |- + #### To verify that you've added an Automation Content role: + Is the role listed on the **Roles** list view? + failedTaskHelp: Try the steps again. + summary: + success: You have viewed the details of your role! + failed: + + conclusion: You successfully completed the review roles steps for Ansible Automation Platform! diff --git a/downstream/quick-start-yamls/readme-quickstarts.md b/downstream/quick-start-yamls/readme-quickstarts.md new file mode 100644 index 0000000000..66de86fc01 --- /dev/null +++ b/downstream/quick-start-yamls/readme-quickstarts.md @@ -0,0 +1,3 @@ +This directory contains YAML files for quick starts in the AAP UI. + +They are not used in our doc set, but we keep them here for reference and backup. \ No newline at end of file diff --git a/downstream/snippets/cont-tested-system-config.adoc b/downstream/snippets/cont-tested-system-config.adoc new file mode 100644 index 0000000000..4e7a812769 --- /dev/null +++ b/downstream/snippets/cont-tested-system-config.adoc @@ -0,0 +1,39 @@ +//Tested system configuration snippet for container (CONT) topologies +.System configuration +[options="header"] +|==== +| Type | Description | Notes +| Subscription +a| +* Valid {PlatformName} subscription +* Valid {RHEL} subscription (to consume the BaseOS and AppStream repositories) +| + +| Operating system + +a| +* {RHEL} 9.2 or later minor versions of {RHEL} 9. +* {RHEL} 10 or later minor versions of {RHEL} 10 for enterprise topologies. +| + +| CPU architecture +| x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power) +| + +| `ansible-core` +a| +* For the installation: `ansible-core` version {CoreInstVers}. +* For {PlatformNameShort} operation: `ansible-core` version {CoreUseVers}. +a| +* The installation program uses the `ansible-core` {CoreInstVers} package from the RHEL 9 AppStream repository. +* {PlatformNameShort} bundles `ansible-core` version {CoreUseVers} for its operation, so you do not need to install it manually. + +| Browser +| A currently supported version of Mozilla Firefox or Google Chrome. +| + +| Database +| {PostgresVers} +| External (customer supported) databases require ICU support. 
+ +|==== diff --git a/downstream/snippets/cont-tested-vm-config.adoc b/downstream/snippets/cont-tested-vm-config.adoc new file mode 100644 index 0000000000..30a79381a4 --- /dev/null +++ b/downstream/snippets/cont-tested-vm-config.adoc @@ -0,0 +1,22 @@ +//Tested VM configuration snippet for container (CONT) topologies +.Virtual machine requirements +[cols=2,options="header"] +|==== +| Requirement | Minimum requirement +| RAM +| 16 GB + +| CPUs +| 4 + +| Local disk +a| +* Total available disk space: 60 GB +* Installation directory: 15 GB (if on a dedicated partition) +* `/var/tmp` for online installations: 1 GB +* `/var/tmp` for offline or bundled installations: 3 GB +* Temporary directory (defaults to `/tmp`) for offline or bundled installations: 10 GB + +| Disk IOPS +| 3000 +|==== \ No newline at end of file diff --git a/downstream/snippets/container-upgrades.adoc b/downstream/snippets/container-upgrades.adoc new file mode 100644 index 0000000000..821d6d878f --- /dev/null +++ b/downstream/snippets/container-upgrades.adoc @@ -0,0 +1 @@ +Upgrades from 2.4 Containerized {PlatformNameShort} Tech Preview to 2.5 Containerized {PlatformNameShort} are not supported. \ No newline at end of file diff --git a/downstream/snippets/deprecated-features.adoc b/downstream/snippets/deprecated-features.adoc index 4fa07cbb4b..752d75b62d 100644 --- a/downstream/snippets/deprecated-features.adoc +++ b/downstream/snippets/deprecated-features.adoc @@ -1 +1 @@ -Deprecated functionality is still included in {PlatformNameShort} and continues to be supported. However, the functionality will be removed in a future release of {PlatformNameShort} and is not recommended for new deployments. \ No newline at end of file +Deprecated functionality is still included in {PlatformNameShort} and continues to be supported during this version's support cycle. However, the functionality will be removed in a future release of {PlatformNameShort} and is not recommended for new deployments. \ No newline at end of file diff --git a/downstream/snippets/docker-devcontainer.json b/downstream/snippets/docker-devcontainer.json new file mode 100644 index 0000000000..f79edca4bb --- /dev/null +++ b/downstream/snippets/docker-devcontainer.json @@ -0,0 +1,20 @@ +---- +{ + "name": "ansible-dev-container-docker", + "image": "registry.redhat.io/ansible-automation-platform-25/ansible-dev-tools-rhel8:latest", + "containerUser": "root", + "runArgs": [ + "--privileged", + "--device", + "/dev/fuse", + "--hostname=ansible-dev-container" + ], + "updateRemoteUserUID": true, + "customizations": { + "vscode": { + "extensions": ["redhat.ansible"] + } + } +} +---- +// From https://github.com/ansible/ansible-dev-tools/blob/main/.devcontainer/docker/devcontainer.json diff --git a/downstream/snippets/enterprise-topologies.adoc b/downstream/snippets/enterprise-topologies.adoc new file mode 100644 index 0000000000..bc5f39d5e1 --- /dev/null +++ b/downstream/snippets/enterprise-topologies.adoc @@ -0,0 +1,2 @@ +// Snippet that describes enterprise topology +The {EnterpriseTopology} is intended for organizations that require {PlatformNameShort} to be deployed with redundancy or higher compute for large volumes of automation. 
\ No newline at end of file diff --git a/downstream/snippets/growth-topologies.adoc b/downstream/snippets/growth-topologies.adoc new file mode 100644 index 0000000000..31afefc7fb --- /dev/null +++ b/downstream/snippets/growth-topologies.adoc @@ -0,0 +1,2 @@ +//Snippet that describes growth topology +The {GrowthTopology} is intended for organizations that are getting started with {PlatformNameShort} and do not require redundancy or higher compute for large volumes of automation. This topology allows for smaller footprint deployments. \ No newline at end of file diff --git a/downstream/snippets/inventory-cont-a-env-a.adoc b/downstream/snippets/inventory-cont-a-env-a.adoc new file mode 100644 index 0000000000..d2b52af0ca --- /dev/null +++ b/downstream/snippets/inventory-cont-a-env-a.adoc @@ -0,0 +1,84 @@ +//Inventory file for CONT A ENV A topology + +[source,yaml,subs="+attributes"] +---- +# This is the {PlatformNameShort} installer inventory file intended for the container growth deployment topology. +# This inventory file expects to be run from the host where {PlatformNameShort} will be installed. +# Consult the {PlatformNameShort} product documentation about this topology's tested hardware configuration. +# {URLTopologies}/container-topologies +# +# Consult the docs if you are unsure what to add +# For all optional variables consult the included README.md +# or the {PlatformNameShort} documentation: +# {URLContainerizedInstall} + +# This section is for your {Gateway} hosts +# ----------------------------------------------------- +[automationgateway] +aap.example.org + +# This section is for your {ControllerName} hosts +# ----------------------------------------------------- +[automationcontroller] +aap.example.org + +# This section is for your {HubName} hosts +# ----------------------------------------------------- +[automationhub] +aap.example.org + +# This section is for your {EDAcontroller} hosts +# ----------------------------------------------------- +[automationeda] +aap.example.org + +# This section is for the {PlatformNameShort} database +# ----------------------------------------------------- +[database] +aap.example.org + +[all:vars] +# Ansible +ansible_connection=local + +# Common variables +# {URLContainerizedInstall}/appendix-inventory-files-vars#general-variables +# ----------------------------------------------------- +postgresql_admin_username=postgres +postgresql_admin_password= + +registry_username= +registry_password= + +redis_mode=standalone + +# {GatewayStart} +# {URLContainerizedInstall}/appendix-inventory-files-vars#platform-gateway-variables +# ----------------------------------------------------- +gateway_admin_password= +gateway_pg_host=aap.example.org +gateway_pg_password= + +# {ControllerNameStart} +# {URLContainerizedInstall}/appendix-inventory-files-vars#controller-variables +# ----------------------------------------------------- +controller_admin_password= +controller_pg_host=aap.example.org +controller_pg_password= +controller_percent_memory_capacity=0.5 + +# {HubNameStart} +# {URLContainerizedInstall}/appendix-inventory-files-vars#hub-variables +# ----------------------------------------------------- +hub_admin_password= +hub_pg_host=aap.example.org +hub_pg_password= +hub_seed_collections=false + +# {EDAcontroller} +# {URLContainerizedInstall}/appendix-inventory-files-vars#event-driven-ansible-variables +# ----------------------------------------------------- +eda_admin_password= +eda_pg_host=aap.example.org +eda_pg_password= +---- \ No newline at end of 
file diff --git a/downstream/snippets/inventory-cont-b-env-a.adoc b/downstream/snippets/inventory-cont-b-env-a.adoc new file mode 100644 index 0000000000..fe8c9c8ff5 --- /dev/null +++ b/downstream/snippets/inventory-cont-b-env-a.adoc @@ -0,0 +1,95 @@ +//Inventory file for CONT B ENV A topology + +[source,yaml,subs="+attributes"] +---- +# This is the {PlatformNameShort} enterprise installer inventory file +# Consult the docs if you are unsure what to add +# For all optional variables consult the included README.md +# or the Red Hat documentation: +# {URLContainerizedInstall} + +# This section is for your {Gateway} hosts +# ----------------------------------------------------- +[automationgateway] +gateway1.example.org +gateway2.example.org + +# This section is for your {ControllerName} hosts +# ----------------------------------------------------- +[automationcontroller] +controller1.example.org +controller2.example.org + +# This section is for your {PlatformNameShort} execution hosts +# ----------------------------------------------------- +[execution_nodes] +hop1.example.org receptor_type='hop' +exec1.example.org +exec2.example.org + +# This section is for your {HubName} hosts +# ----------------------------------------------------- +[automationhub] +hub1.example.org +hub2.example.org + +# This section is for your {EDAcontroller} hosts +# ----------------------------------------------------- +[automationeda] +eda1.example.org +eda2.example.org + +[redis] +gateway1.example.org +gateway2.example.org +hub1.example.org +hub2.example.org +eda1.example.org +eda2.example.org + +[all:vars] + +# Common variables +# {URLContainerizedInstall}/appendix-inventory-files-vars#general-variables +# ----------------------------------------------------- +postgresql_admin_username= +postgresql_admin_password= +registry_username= +registry_password= + +# {GatewayStart} +# {URLContainerizedInstall}/appendix-inventory-files-vars#platform-gateway-variables +# ----------------------------------------------------- +gateway_admin_password= +gateway_pg_host=externaldb.example.org +gateway_pg_database= +gateway_pg_username= +gateway_pg_password= + +# {ControllerNameStart} +# {URLContainerizedInstall}/appendix-inventory-files-vars#controller-variables +# ----------------------------------------------------- +controller_admin_password= +controller_pg_host=externaldb.example.org +controller_pg_database= +controller_pg_username= +controller_pg_password= + +# {HubNameStart} +# {URLContainerizedInstall}/appendix-inventory-files-vars#hub-variables +# ----------------------------------------------------- +hub_admin_password= +hub_pg_host=externaldb.example.org +hub_pg_database= +hub_pg_username= +hub_pg_password= + +# {EDAcontroller} +# {URLContainerizedInstall}/appendix-inventory-files-vars#event-driven-ansible-variables +# ----------------------------------------------------- +eda_admin_password= +eda_pg_host=externaldb.example.org +eda_pg_database= +eda_pg_username= +eda_pg_password= +---- \ No newline at end of file diff --git a/downstream/snippets/inventory-rpm-a-env-a.adoc b/downstream/snippets/inventory-rpm-a-env-a.adoc new file mode 100644 index 0000000000..0b3d680733 --- /dev/null +++ b/downstream/snippets/inventory-rpm-a-env-a.adoc @@ -0,0 +1,84 @@ +//Inventory file for RPM A ENV A topology + +[source,yaml,subs="+attributes"] +---- +# This is the {PlatformNameShort} installer inventory file intended for the RPM growth deployment topology. 
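+# Illustrative usage sketch (assumed invocation, shown for orientation only): after you set the required variables below, run the installer from the directory that contains this inventory file, for example: +#   ./setup.sh -i <path to this inventory file> +# The setup.sh command and -i option shown here are an example; see {URLInstallationGuide} for the supported installation procedure.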
+# Consult the {PlatformNameShort} product documentation about this topology's tested hardware configuration. +# {URLTopologies}/rpm-topologies +# +# Consult the docs if you are unsure what to add +# For all optional variables consult the {PlatformNameShort} documentation: +# {URLInstallationGuide} + + +# This section is for your {Gateway} hosts +# ----------------------------------------------------- +[automationgateway] +gateway.example.org + +# This section is for your {ControllerName} hosts +# ----------------------------------------------------- +[automationcontroller] +controller.example.org + +[automationcontroller:vars] +peers=execution_nodes + +# This section is for your {PlatformNameShort} execution hosts +# ----------------------------------------------------- +[execution_nodes] +exec.example.org + +# This section is for your {HubName} hosts +# ----------------------------------------------------- +[automationhub] +hub.example.org + +# This section is for your {EDAcontroller} hosts +# ----------------------------------------------------- +[automationedacontroller] +eda.example.org + +# This section is for the {PlatformNameShort} database +# ----------------------------------------------------- +[database] +db.example.org + +[all:vars] + +# Common variables +# {URLInstallationGuide}/appendix-inventory-files-vars#general-variables +# ----------------------------------------------------- +registry_username= +registry_password= + +redis_mode=standalone + +# {GatewayStart} +# {URLInstallationGuide}/appendix-inventory-files-vars#platform-gateway-variables +# ----------------------------------------------------- +automationgateway_admin_password= +automationgateway_pg_host=db.example.org +automationgateway_pg_password= + +# {ControllerNameStart} +# {URLInstallationGuide}/appendix-inventory-files-vars#controller-variables +# ----------------------------------------------------- +admin_password= +pg_host=db.example.org +pg_password= + +# {HubNameStart} +# {URLInstallationGuide}/appendix-inventory-files-vars#hub-variables +# ----------------------------------------------------- +automationhub_admin_password= +automationhub_pg_host=db.example.org +automationhub_pg_password= + +# {EDAcontroller} +# {URLInstallationGuide}/appendix-inventory-files-vars#event-driven-ansible-variables +# ----------------------------------------------------- +automationedacontroller_admin_password= +automationedacontroller_pg_host=db.example.org +automationedacontroller_pg_password= +---- \ No newline at end of file diff --git a/downstream/snippets/inventory-rpm-b-env-a.adoc b/downstream/snippets/inventory-rpm-b-env-a.adoc new file mode 100644 index 0000000000..fce25e63de --- /dev/null +++ b/downstream/snippets/inventory-rpm-b-env-a.adoc @@ -0,0 +1,94 @@ +//Inventory file for RPM B ENV A topology + +[source,yaml,subs="+attributes"] +---- +# This is the {PlatformNameShort} enterprise installer inventory file +# Consult the docs if you are unsure what to add +# For all optional variables consult the Red Hat documentation: +# {URLInstallationGuide} + +# This section is for your {Gateway} hosts +# ----------------------------------------------------- +[automationgateway] +gateway1.example.org +gateway2.example.org + +# This section is for your {ControllerName} hosts +# ----------------------------------------------------- +[automationcontroller] +controller1.example.org +controller2.example.org + +[automationcontroller:vars] +peers=execution_nodes + +# This section is for your {PlatformNameShort} execution hosts +# 
----------------------------------------------------- +[execution_nodes] +hop1.example.org node_type='hop' +exec1.example.org +exec2.example.org + +# This section is for your {HubName} hosts +# ----------------------------------------------------- +[automationhub] +hub1.example.org +hub2.example.org + +# This section is for your {EDAcontroller} hosts +# ----------------------------------------------------- +[automationedacontroller] +eda1.example.org +eda2.example.org + +[redis] +gateway1.example.org +gateway2.example.org +hub1.example.org +hub2.example.org +eda1.example.org +eda2.example.org + +[all:vars] +# Common variables +# {URLInstallationGuide}/appendix-inventory-files-vars#general-variables +# ----------------------------------------------------- +registry_username= +registry_password= + +# {GatewayStart} +# {URLInstallationGuide}/appendix-inventory-files-vars#platform-gateway-variables +# ----------------------------------------------------- +automationgateway_admin_password= +automationgateway_pg_host= +automationgateway_pg_database= +automationgateway_pg_username= +automationgateway_pg_password= + +# {ControllerNameStart} +# {URLInstallationGuide}/appendix-inventory-files-vars#controller-variables +# ----------------------------------------------------- +admin_password= +pg_host= +pg_database= +pg_username= +pg_password= + +# {HubNameStart} +# {URLInstallationGuide}/appendix-inventory-files-vars#hub-variables +# ----------------------------------------------------- +automationhub_admin_password= +automationhub_pg_host= +automationhub_pg_database= +automationhub_pg_username= +automationhub_pg_password= + +# {EDAcontroller} +# {URLInstallationGuide}/appendix-inventory-files-vars#event-driven-ansible-variables +# ----------------------------------------------------- +automationedacontroller_admin_password= +automationedacontroller_pg_host= +automationedacontroller_pg_database= +automationedacontroller_pg_username= +automationedacontroller_pg_password= +---- \ No newline at end of file diff --git a/downstream/snippets/podman-devcontainer.json b/downstream/snippets/podman-devcontainer.json new file mode 100644 index 0000000000..749dbfd616 --- /dev/null +++ b/downstream/snippets/podman-devcontainer.json @@ -0,0 +1,31 @@ +---- +{ + "name": "ansible-dev-container-podman", + "image": "registry.redhat.io/ansible-automation-platform-25/ansible-dev-tools-rhel8:latest", + "containerUser": "root", + "runArgs": [ + "--cap-add=CAP_MKNOD", + "--cap-add=NET_ADMIN", + "--cap-add=SYS_ADMIN", + "--cap-add=SYS_RESOURCE", + "--device", + "/dev/fuse", + "--security-opt", + "seccomp=unconfined", + "--security-opt", + "label=disable", + "--security-opt", + "apparmor=unconfined", + "--security-opt", + "unmask=/sys/fs/cgroup", + "--userns=host", + "--hostname=ansible-dev-container" + ], + "customizations": { + "vscode": { + "extensions": ["redhat.ansible"] + } + } +} +---- +// From https://github.com/ansible/ansible-dev-tools/blob/main/.devcontainer/podman/devcontainer.json diff --git a/downstream/snippets/redis-colocation-containerized.adoc b/downstream/snippets/redis-colocation-containerized.adoc new file mode 100644 index 0000000000..d4de91f5d0 --- /dev/null +++ b/downstream/snippets/redis-colocation-containerized.adoc @@ -0,0 +1,2 @@ +//This snippet details the colocation configuration for a containerized install of AAP - note that it can be colocated with controller. +* 6 VMs are required for a Redis high availability (HA) compatible deployment. 
When installing {PlatformNameShort} with the containerized installer, Redis can be colocated on any {PlatformNameShort} component VMs of your choice except for execution nodes or the PostgreSQL database. Alternatively, you can assign dedicated VMs specifically for Redis use. diff --git a/downstream/snippets/rpm-env-a-tested-system-config.adoc b/downstream/snippets/rpm-env-a-tested-system-config.adoc new file mode 100644 index 0000000000..4593c2ffdb --- /dev/null +++ b/downstream/snippets/rpm-env-a-tested-system-config.adoc @@ -0,0 +1,15 @@ +//Tested system configuration snippet for RPM ENV A topologies +.Tested system configurations +[options="header"] +|==== +| Type | Description | Notes +| Subscription | Valid {PlatformName} subscription | +| Operating system +a| +* {RHEL} 8.8 or later minor versions of {RHEL} 8. +* {RHEL} 9.2 or later minor versions of {RHEL} 9. | +| CPU architecture | x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power) | +| `ansible-core` | `ansible-core` version {CoreUseVers} or later | {PlatformNameShort} uses the system-wide `ansible-core` package to install the platform, but uses `ansible-core` 2.16 for both its control plane and built-in execution environments. +| Browser | A currently supported version of Mozilla Firefox or Google Chrome | +| Database | {PostgresVers} | External (customer supported) databases require ICU support. +|==== \ No newline at end of file diff --git a/downstream/snippets/rpm-tested-vm-config.adoc b/downstream/snippets/rpm-tested-vm-config.adoc new file mode 100644 index 0000000000..d23a9ed8fb --- /dev/null +++ b/downstream/snippets/rpm-tested-vm-config.adoc @@ -0,0 +1,10 @@ +//Tested VM configuration snippet for RPM topologies +.Virtual machine requirements +[cols=2,options="header"] +|==== +| Requirement | Minimum requirement +| RAM | 16 GB +| CPUs | 4 +| Local disk | 60 GB +| Disk IOPS | 3000 +|==== \ No newline at end of file diff --git a/downstream/snippets/snip-gateway-component-description.adoc b/downstream/snippets/snip-gateway-component-description.adoc new file mode 100644 index 0000000000..5f55154a32 --- /dev/null +++ b/downstream/snippets/snip-gateway-component-description.adoc @@ -0,0 +1 @@ +{GatewayStart} is the service that handles authentication and authorization for {PlatformNameShort}. It provides a single entry point into {PlatformNameShort} and serves the platform user interface so you can authenticate and access all of the {PlatformNameShort} services from a single location. diff --git a/downstream/snippets/snip-gw-authentication-additional-auth-fields.adoc b/downstream/snippets/snip-gw-authentication-additional-auth-fields.adoc new file mode 100644 index 0000000000..fe550d0ef1 --- /dev/null +++ b/downstream/snippets/snip-gw-authentication-additional-auth-fields.adoc @@ -0,0 +1,6 @@ +. Optional: Enter any *Additional Authenticator Fields* that this authenticator can take. These fields are not validated and are passed directly back to the authenticator. ++ +[NOTE] +==== +Values defined in this field override the dedicated fields provided in the UI. Any values not defined here are not provided to the authenticator. +==== \ No newline at end of file diff --git a/downstream/snippets/snip-gw-authentication-auto-migrate.adoc b/downstream/snippets/snip-gw-authentication-auto-migrate.adoc new file mode 100644 index 0000000000..08566167c2 --- /dev/null +++ b/downstream/snippets/snip-gw-authentication-auto-migrate.adoc @@ -0,0 +1 @@ +. Select a legacy authenticator method from the *Auto migrate users from* list. 
After upgrading from 2.4 to 2.5, this is the legacy authenticator from which to automatically migrate users to this new authentication configuration. Refer to {URLUpgrade}/aap-post-upgrade[{PlatformNameShort} post-upgrade steps] in the {TitleUpgrade} guide for important information about migrating users. diff --git a/downstream/snippets/snip-gw-authentication-common-checkboxes.adoc b/downstream/snippets/snip-gw-authentication-common-checkboxes.adoc new file mode 100644 index 0000000000..d1c8921e0f --- /dev/null +++ b/downstream/snippets/snip-gw-authentication-common-checkboxes.adoc @@ -0,0 +1,3 @@ +. To automatically create organizations, users, and teams upon successful login, select *Create objects*. +. To enable this authentication method upon creation, select *Enabled*. +. To remove a user from any groups they were previously added to when they authenticate from this source, select *Remove users*. diff --git a/downstream/snippets/snip-gw-authentication-next-steps.adoc b/downstream/snippets/snip-gw-authentication-next-steps.adoc new file mode 100644 index 0000000000..02760dd5cb --- /dev/null +++ b/downstream/snippets/snip-gw-authentication-next-steps.adoc @@ -0,0 +1 @@ +To control which users are allowed into the {PlatformNameShort} server, and which {PlatformNameShort} organizations or teams they are placed into, based on their attributes (such as username and email address) or the groups they belong to, continue to link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/access_management_and_authentication/index#gw-mapping[Mapping]. diff --git a/downstream/snippets/snip-gw-authentication-verification.adoc b/downstream/snippets/snip-gw-authentication-verification.adoc new file mode 100644 index 0000000000..a19b861830 --- /dev/null +++ b/downstream/snippets/snip-gw-authentication-verification.adoc @@ -0,0 +1,3 @@ +.Verification + +To verify that the authentication is configured correctly, log out of {PlatformNameShort} and check that the login screen displays the logo of your chosen authentication method, enabling you to log in with those credentials. \ No newline at end of file diff --git a/downstream/snippets/snip-gw-mapping-next-steps.adoc b/downstream/snippets/snip-gw-mapping-next-steps.adoc new file mode 100644 index 0000000000..bbfb038fd4 --- /dev/null +++ b/downstream/snippets/snip-gw-mapping-next-steps.adoc @@ -0,0 +1,9 @@ +. You can manage the order of the authentication mappings by clicking btn:[Manage mappings]. +. Drag and drop the mapping up or down in the list. ++ +[NOTE] +==== +The mapping precedence is determined by the order in which the mappings are listed. +==== ++ +. Click btn:[Apply]. diff --git a/downstream/snippets/snip-gw-roles-note-multiple-components.adoc b/downstream/snippets/snip-gw-roles-note-multiple-components.adoc new file mode 100644 index 0000000000..81796a1ec4 --- /dev/null +++ b/downstream/snippets/snip-gw-roles-note-multiple-components.adoc @@ -0,0 +1,4 @@ +[NOTE] +==== +If you have multiple {PlatformNameShort} components installed, the *Roles* menu bar displays selections for the roles associated with each component. For example, Automation Execution for {ControllerName} roles, and Automation Decisions for {EDAName} roles. 
+==== \ No newline at end of file diff --git a/downstream/snippets/technology-preview.adoc b/downstream/snippets/technology-preview.adoc index 4f60f2f7a7..62fbd6aaf7 100644 --- a/downstream/snippets/technology-preview.adoc +++ b/downstream/snippets/technology-preview.adoc @@ -1,3 +1,5 @@ -Technology Preview features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. +Technology Preview features are not supported with Red{nbsp}Hat production service level agreements (SLAs) and might not be functionally complete. +Red{nbsp}Hat does not recommend using them in production. +These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. -For more information about the support scope of Red Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview Features Support Scope]. \ No newline at end of file +For more information about the support scope of Red{nbsp}Hat Technology Preview features, see link:https://access.redhat.com/support/offerings/techpreview/[Technology Preview Features Support Scope]. diff --git a/downstream/titles/aap-containerized-install/docinfo.xml b/downstream/titles/aap-containerized-install/docinfo.xml index 3fff7e069d..47f1005992 100644 --- a/downstream/titles/aap-containerized-install/docinfo.xml +++ b/downstream/titles/aap-containerized-install/docinfo.xml @@ -1,9 +1,9 @@ -Containerized Ansible Automation Platform installation guide +Containerized installation Red Hat Ansible Automation Platform 2.5 -Containerized Ansible Automation Platform Installation Guide +Install the containerized version of Ansible Automation Platform -Containerized Ansible Automation Platform Installation Guide +This guide helps you to understand the installation requirements and processes behind the containerized version of Ansible Automation Platform. 
Red Hat Customer Content Services diff --git a/downstream/titles/aap-containerized-install/master.adoc b/downstream/titles/aap-containerized-install/master.adoc index 1a5cf65b27..bd7e1671d3 100644 --- a/downstream/titles/aap-containerized-install/master.adoc +++ b/downstream/titles/aap-containerized-install/master.adoc @@ -1,13 +1,23 @@ :imagesdir: images :toclevels: 4 :experimental: - +:container-install: include::attributes/attributes.adoc[] // Book Title -= Containerized Ansible Automation Platform installation guide += Containerized installation include::{Boilerplate}[] +include::platform/assembly-gateway-licensing.adoc[leveloffset=+1] + include::platform/assembly-aap-containerized-installation.adoc[leveloffset=+1] + +include::platform/assembly-horizontal-scaling.adoc[leveloffset=+1] + +[appendix] +include::platform/assembly-appendix-troubleshoot-containerized-aap.adoc[leveloffset=1] + +[appendix] +include::platform/assembly-appendix-inventory-file-vars.adoc[leveloffset=1] diff --git a/downstream/titles/aap-hardening/docinfo.xml b/downstream/titles/aap-hardening/docinfo.xml index e854befb03..a5430ad33c 100644 --- a/downstream/titles/aap-hardening/docinfo.xml +++ b/downstream/titles/aap-hardening/docinfo.xml @@ -1,7 +1,7 @@ -Red Hat Ansible Automation Platform hardening guide +Hardening and compliance Red Hat Ansible Automation Platform 2.5 -Install, configure, and maintain Ansible Automation Platform running on Red Hat Enterprise Linux in a secure manner. +Install, configure, and maintain Ansible Automation Platform running on Red Hat Enterprise Linux in a secure manner This guide provides recommended practices for various processes needed to install, configure, and maintain {PlatformNameShort} on Red Hat Enterprise Linux in a secure manner. diff --git a/downstream/titles/aap-hardening/master.adoc b/downstream/titles/aap-hardening/master.adoc index 9c7a628e47..6dd81e67b9 100644 --- a/downstream/titles/aap-hardening/master.adoc +++ b/downstream/titles/aap-hardening/master.adoc @@ -3,17 +3,21 @@ :toclevels: 1 :experimental: +:hardening: include::attributes/attributes.adoc[] // Book Title -= Red Hat Ansible Automation Platform hardening guide += Hardening and compliance This guide provides recommended practices for various processes needed to install, configure, and maintain {PlatformNameShort} on Red Hat Enterprise Linux in a secure manner. 
include::{Boilerplate}[] include::aap-hardening/assembly-intro-to-aap-hardening.adoc[leveloffset=+1] include::aap-hardening/assembly-hardening-aap.adoc[leveloffset=+1] -// include::aap-hardening/assembly-aap-compliance.adoc[leveloffset=+1] -// include::aap-hardening/assembly-aap-security-enabling.adoc[leveloffset=+1] +//include::aap-hardening/assembly-aap-compliance.adoc[leveloffset=+1] +//Move this to Security automation Guide +//include::aap-hardening/assembly-aap-security-use-cases.adoc[leveloffset=+1] +//Move this to Security automation Guide +//include::platform/assembly-eda-for-security-related-events.adoc[leveloffset=+1] diff --git a/downstream/titles/aap-hardening/platform b/downstream/titles/aap-hardening/platform new file mode 120000 index 0000000000..9a3ca429a1 --- /dev/null +++ b/downstream/titles/aap-hardening/platform @@ -0,0 +1 @@ +../../modules/platform/ \ No newline at end of file diff --git a/downstream/titles/aap-installation-guide/docinfo.xml b/downstream/titles/aap-installation-guide/docinfo.xml index 4cc3491516..1153b284cc 100644 --- a/downstream/titles/aap-installation-guide/docinfo.xml +++ b/downstream/titles/aap-installation-guide/docinfo.xml @@ -1,7 +1,7 @@ -Red Hat Ansible Automation Platform installation guide +RPM installation Red Hat Ansible Automation Platform 2.5 -Install Ansible Automation Platform +Install the RPM version of Ansible Automation Platform This guide shows you how to install Red Hat Ansible Automation Platform based on supported installation scenarios. diff --git a/downstream/titles/aap-installation-guide/master.adoc b/downstream/titles/aap-installation-guide/master.adoc index c118a659c2..56acf6e6c8 100644 --- a/downstream/titles/aap-installation-guide/master.adoc +++ b/downstream/titles/aap-installation-guide/master.adoc @@ -3,11 +3,12 @@ :toclevels: 1 :experimental: +:aap-install: include::attributes/attributes.adoc[] // Book Title -= Red Hat Ansible Automation Platform installation guide += RPM installation Thank you for your interest in {PlatformName}. {PlatformNameShort} is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. 
@@ -18,9 +19,11 @@ include::{Boilerplate}[] //12/2/22 [dcd: moved following assembly to new planning guide] //include::platform/assembly-planning-installation.adoc[leveloffset=+1] include::platform/assembly-platform-install-overview.adoc[leveloffset=+1] +include::platform/assembly-gateway-licensing.adoc[leveloffset=+2] include::platform/assembly-system-requirements.adoc[leveloffset=+1] include::platform/assembly-platform-install-scenario.adoc[leveloffset=+1] //[dcdacosta]Removing this assembly because modules are now included in assembly-platform-install-scenario] include::platform/assembly-deploy-high-availability-hub.adoc[leveloffset=+1] +include::platform/assembly-horizontal-scaling.adoc[leveloffset=+1] include::platform/assembly-disconnected-installation.adoc[leveloffset=+1] //12/2/22 [dcd: moved following assemblies to new operations guide] //include::platform/assembly-configuring-proxy-support.adoc[leveloffset=+1] @@ -29,6 +32,7 @@ include::platform/assembly-disconnected-installation.adoc[leveloffset=+1] //12/2/22 [dcd: removed the following assemblies] //include::platform/assembly-supported-inventory-plugins-template.adoc[leveloffset=+1] //include::platform/assembly-supported-attributes-custom-notifications.adoc[leveloffset=+1] +include::platform/assembly-appendix-troubleshoot-rpm-aap.adoc[leveloffset=+1] [appendix] include::platform/assembly-appendix-inventory-file-vars.adoc[leveloffset=1] diff --git a/downstream/titles/aap-plugin-rhdh/aap-common b/downstream/titles/aap-migration/aap-common similarity index 100% rename from downstream/titles/aap-plugin-rhdh/aap-common rename to downstream/titles/aap-migration/aap-common diff --git a/downstream/titles/aap-migration/aap-migration b/downstream/titles/aap-migration/aap-migration new file mode 120000 index 0000000000..cc54fc71b6 --- /dev/null +++ b/downstream/titles/aap-migration/aap-migration @@ -0,0 +1 @@ +../../assemblies/aap-migration \ No newline at end of file diff --git a/downstream/titles/aap-plugin-rhdh/attributes b/downstream/titles/aap-migration/attributes similarity index 100% rename from downstream/titles/aap-plugin-rhdh/attributes rename to downstream/titles/aap-migration/attributes diff --git a/downstream/titles/aap-migration/docinfo.xml b/downstream/titles/aap-migration/docinfo.xml new file mode 100644 index 0000000000..b307c93a17 --- /dev/null +++ b/downstream/titles/aap-migration/docinfo.xml @@ -0,0 +1,13 @@ +Ansible Automation Platform migration +Red Hat Ansible Automation Platform +2.5 +Migrate your deployment of Ansible Automation Platform from one installation type to another + + +This guide provides instructions for migrating your Red Hat Ansible Automation Platform deployment from one installation type to another. + + + + Red Hat Customer Content Services + + diff --git a/downstream/titles/aap-plugin-rhdh/images b/downstream/titles/aap-migration/images similarity index 100% rename from downstream/titles/aap-plugin-rhdh/images rename to downstream/titles/aap-migration/images diff --git a/downstream/titles/aap-migration/master.adoc b/downstream/titles/aap-migration/master.adoc new file mode 100644 index 0000000000..7c0f6fe2aa --- /dev/null +++ b/downstream/titles/aap-migration/master.adoc @@ -0,0 +1,45 @@ +:imagesdir: images +:toclevels: 4 +:experimental: +:aap-migration: + +include::attributes/attributes.adoc[] + +// Book Title += Ansible Automation Platform migration + +include::{Boilerplate}[] + +[IMPORTANT] +==== +include::snippets/technology-preview.adoc[] +==== + 
+include::aap-migration/aap-migration/con-introduction-and-objectives.adoc[leveloffset=+1] + +include::aap-migration/aap-migration/con-out-of-scope.adoc[leveloffset=+1] + +include::aap-migration/aap-migration/con-migration-process-overview.adoc[leveloffset=+1] + +include::aap-migration/assembly-migration-prerequisites.adoc[leveloffset=+1] + +include::aap-migration/assembly-migration-artifact.adoc[leveloffset=+1] + +== Source environment + +Prepare and export data from your existing {PlatformNameShort} deployment. The exported data forms a critical migration artifact, which you use to configure your new environment. + +include::aap-migration/assembly-source-rpm.adoc[leveloffset=+2] + +include::aap-migration/assembly-source-containerized.adoc[leveloffset=+2] + + +== Target environment + +Prepare, configure, and validate your target {PlatformNameShort} environment. + +include::aap-migration/assembly-target-containerized.adoc[leveloffset=+2] + +include::aap-migration/assembly-target-ocp.adoc[leveloffset=+2] + +include::aap-migration/assembly-target-managed-aap.adoc[leveloffset=+2] diff --git a/downstream/titles/aap-migration/snippets b/downstream/titles/aap-migration/snippets new file mode 120000 index 0000000000..7bf6da9a51 --- /dev/null +++ b/downstream/titles/aap-migration/snippets @@ -0,0 +1 @@ +../../snippets \ No newline at end of file diff --git a/downstream/titles/aap-operations-guide/docinfo.xml b/downstream/titles/aap-operations-guide/docinfo.xml index d018690328..5ff53ae44c 100644 --- a/downstream/titles/aap-operations-guide/docinfo.xml +++ b/downstream/titles/aap-operations-guide/docinfo.xml @@ -1,4 +1,4 @@ -Red Hat Ansible Automation Platform operations guide +Operating Ansible Automation Platform Red Hat Ansible Automation Platform 2.5 Post installation configurations to ensure a smooth deployment of Ansible Automation Platform installation diff --git a/downstream/titles/aap-operations-guide/master.adoc b/downstream/titles/aap-operations-guide/master.adoc index 74ee76e4cf..d425ac1faf 100644 --- a/downstream/titles/aap-operations-guide/master.adoc +++ b/downstream/titles/aap-operations-guide/master.adoc @@ -3,22 +3,30 @@ :toclevels: 1 :experimental: +:operationG: include::attributes/attributes.adoc[] // Book Title -= Red Hat Ansible Automation Platform operations guide += Operating Ansible Automation Platform After installing Red Hat Ansible Automation Platform, your system might need extra configuration to ensure your deployment runs smoothly. This guide provides procedures for configuration tasks that you can perform after installing {PlatformName}. +include::aap-common/external-site-disclaimer.adoc[] + include::{Boilerplate}[] + +// ddacosta - removed to avoid duplication with access management guide +// ifowler - added assembly back in but with links to access management include::platform/assembly-aap-activate.adoc[leveloffset=+1] -include::platform/assembly-aap-manifest-files.adoc[leveloffset=+1] +// emurtoug removed this assembly to avoid duplication within Access management and authentication include::platform/assembly-aap-manifest-files.adoc[leveloffset=+1] //ifowler assembly transferred from installation guide as part of AAP-18700 -include::platform/assembly-platform-whats-next.adoc[leveloffset=+1] +//ifowlere -assembly removed. Info on execution environments moved to execution environments doc, info on automation mesh removed. 
+//include::platform/assembly-platform-whats-next.adoc[leveloffset=+1] include::platform/assembly-configuring-proxy-support.adoc[leveloffset=+1] -include::platform/assembly-configuring-websockets.adoc[leveloffset=+1] +include::platform/assembly-configure-egress-proxy.adoc[leveloffset=+1] +include::platform/assembly-aap-advanced-config.adoc[leveloffset=+1] include::platform/assembly-controlling-data-collection.adoc[leveloffset=+1] -include::platform/assembly-encrypting-plaintext-passwords.adoc[leveloffset=+1] +//include::platform/assembly-encrypting-plaintext-passwords.adoc[leveloffset=+1] include::platform/assembly-changing-ssl-certs-keys.adoc[leveloffset=+1] diff --git a/downstream/titles/aap-operator-backup/docinfo.xml b/downstream/titles/aap-operator-backup/docinfo.xml index 94391e3c32..8b76b1f66a 100644 --- a/downstream/titles/aap-operator-backup/docinfo.xml +++ b/downstream/titles/aap-operator-backup/docinfo.xml @@ -1,4 +1,4 @@ -Red Hat Ansible Automation Platform operator backup and recovery guide +Backup and recovery for operator environments Red Hat Ansible Automation Platform 2.5 diff --git a/downstream/titles/aap-operator-backup/master.adoc b/downstream/titles/aap-operator-backup/master.adoc index 8604ad7c07..b259c2d0ad 100644 --- a/downstream/titles/aap-operator-backup/master.adoc +++ b/downstream/titles/aap-operator-backup/master.adoc @@ -8,7 +8,7 @@ include::attributes/attributes.adoc[] // Book Title -= Red Hat Ansible Automation Platform operator backup and recovery guide += Backup and recovery for operator environments Thank you for your interest in {PlatformName}. {PlatformNameShort} is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. diff --git a/downstream/titles/aap-operator-installation/docinfo.xml b/downstream/titles/aap-operator-installation/docinfo.xml index 8cd75da97f..2b16dbedcf 100644 --- a/downstream/titles/aap-operator-installation/docinfo.xml +++ b/downstream/titles/aap-operator-installation/docinfo.xml @@ -1,4 +1,4 @@ -Deploying the Red Hat Ansible Automation Platform operator on OpenShift Container Platform +Installing on OpenShift Container Platform Red Hat Ansible Automation Platform 2.5 Install and configure Ansible Automation Platform operator on OpenShift Container Platform diff --git a/downstream/titles/aap-operator-installation/master.adoc b/downstream/titles/aap-operator-installation/master.adoc index 1a6283a893..65a06617a0 100644 --- a/downstream/titles/aap-operator-installation/master.adoc +++ b/downstream/titles/aap-operator-installation/master.adoc @@ -9,39 +9,49 @@ include::attributes/attributes.adoc[] // Book Title -= Deploying the Red Hat Ansible Automation Platform operator on OpenShift Container Platform += Installing on OpenShift Container Platform Thank you for your interest in {PlatformName}. {PlatformNameShort} is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. -This guide helps you to understand the installation, migration and upgrade requirements for deploying the {OperatorPlatform} on {OCPShort}. +This guide helps you to understand the installation, migration and upgrade requirements for deploying the {OperatorPlatformNameShort} on {OCPShort}. 
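For reference while reading the next hunk, which re-nests several assemblies from `leveloffset=+1` to `leveloffset=+2`: `leveloffset` shifts every heading in the included file by the given amount, so the same assembly can render as a chapter or as a subsection depending on the include. A sketch with a hypothetical module:

-----
// planning.adoc begins with a level-0 title: = Planning
include::planning.adoc[leveloffset=+1]
// renders as == Planning, a chapter

include::planning.adoc[leveloffset=+2]
// renders as === Planning, a section nested one level deeper
-----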
include::{Boilerplate}[] -include::platform/assembly-operator-install-planning.adoc[leveloffset=+1] +include::platform/assembly-operator-install-operator.adoc[leveloffset=+1] -include::platform/assembly-install-aap-operator.adoc[leveloffset=+1] +include::platform/assembly-operator-install-planning.adoc[leveloffset=+2] -// Part of the 2.5 release commenting out until live -include::platform/assembly-configure-aap-operator.adoc[leveloffset=+1] +include::platform/assembly-gateway-licensing.adoc[leveloffset=+2] -include::platform/assembly-installing-controller-operator.adoc[leveloffset=+1] +include::platform/assembly-install-aap-operator.adoc[leveloffset=+2] -include::platform/assembly-installing-hub-operator.adoc[leveloffset=+1] +include::platform/assembly-installing-aap-operator-cli.adoc[leveloffset=+2] -// include::platform/assembly-installing-hub-operator-local-db.adoc[leveloffset=+1] +include::platform/assembly-install-aap-gateway.adoc[leveloffset=+1] -// include::platform/assembly-installing-controller-operator-local-db.adoc[leveloffset=+1] +include::platform/assembly-operator-configure-aap-components.adoc[leveloffset=+1] -include::platform/assembly-installing-aap-operator-cli.adoc[leveloffset=+1] +include::platform/assembly-operator-configure-gateway.adoc[leveloffset=+2] -include::platform/assembly-deploy-eda-controller-on-aap-operator.adoc[leveloffset=+1] +include::platform/assembly-installing-controller-operator.adoc[leveloffset=+2] -include::platform/assembly-using-rhsso-operator-with-automation-hub.adoc[leveloffset=+1] +include::platform/assembly-installing-hub-operator.adoc[leveloffset=+2] + +include::platform/platform/proc-operator-deploy-redis.adoc[leveloffset=+2] + +// [sayjadha]Added the chatbot deployment info. in OCP install guide. +include::platform/assembly-deploying-chatbot-operator.adoc[leveloffset=+1] + +include::platform/platform/proc-operator-scaling-down-aap.adoc[leveloffset=+1] include::platform/assembly-aap-migration.adoc[leveloffset=+1] -// [gmurray] Commenting out this module as part of AAP-22627. Upgrade is not supported in the initial 2.5 release. 
-// include::platform/assembly-operator-upgrade.adoc[leveloffset=+1] +include::platform/assembly-operator-upgrade.adoc[leveloffset=+1] + +include::platform/assembly-update-ocp.adoc[leveloffset=+1] include::platform/assembly-operator-add-execution-nodes.adoc[leveloffset=+1] + include::platform/assembly-controller-resource-operator.adoc[leveloffset=+1] + +include::platform/assembly-appendix-operator-crs.adoc[leveloffset=+1] diff --git a/downstream/titles/analytics/job-explorer/.gitkeep b/downstream/titles/aap-operator-installation/my_module_files.txt similarity index 100% rename from downstream/titles/analytics/job-explorer/.gitkeep rename to downstream/titles/aap-operator-installation/my_module_files.txt diff --git a/downstream/titles/aap-planning-guide/docinfo.xml b/downstream/titles/aap-planning-guide/docinfo.xml index 71a8b37b09..1d7ad68203 100644 --- a/downstream/titles/aap-planning-guide/docinfo.xml +++ b/downstream/titles/aap-planning-guide/docinfo.xml @@ -1,4 +1,4 @@ -Red Hat Ansible Automation Platform planning guide +Planning your installation Red Hat Ansible Automation Platform 2.5 Plan for installation of Ansible Automation Platform diff --git a/downstream/titles/aap-planning-guide/master.adoc b/downstream/titles/aap-planning-guide/master.adoc index 8a8424685b..609610eca1 100644 --- a/downstream/titles/aap-planning-guide/master.adoc +++ b/downstream/titles/aap-planning-guide/master.adoc @@ -8,7 +8,7 @@ include::attributes/attributes.adoc[] // Book Title -= Red Hat Ansible Automation Platform planning guide += Planning your installation Thank you for your interest in {PlatformName}. {PlatformNameShort} is a commercial offering that helps teams manage complex multitiered deployments by adding control, knowledge, and delegation to Ansible-powered environments. @@ -16,11 +16,15 @@ Use the information in this guide to plan your {PlatformName} installation. 
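The planning-guide hunk below brackets the shared topologies assembly with `:aap-plan:` and `:!aap-plan:`. Setting an attribute in the document body and unsetting it again scopes the flag to the includes between the two lines, so a module shared across titles can carry planning-guide-only text. A minimal sketch; the include is taken from the hunk, the conditional content is hypothetical:

-----
:aap-plan:
include::topologies/assembly-overview-tested-deployment-models.adoc[leveloffset=+1]
:!aap-plan:

// inside the shared module:
ifdef::aap-plan[]
This sentence renders only when the planning guide is built.
endif::aap-plan[]
-----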
include::{Boilerplate}[] include::platform/assembly-planning-installation.adoc[leveloffset=+1] -include::platform/assembly-aap-architecture.adoc[leveloffset=+1] +// emurtough removed to avoid duplication with topologies chapter include::platform/assembly-aap-architecture.adoc[leveloffset=+1] include::platform/assembly-aap-platform-components.adoc[leveloffset=+1] +include::platform/assembly-HA-redis.adoc[leveloffset=+1] +:aap-plan: +include::topologies/assembly-overview-tested-deployment-models.adoc[leveloffset=+1] include::platform/assembly-system-requirements.adoc[leveloffset=+1] +:!aap-plan: include::platform/assembly-network-ports-protocols.adoc[leveloffset=+1] -include::platform/assembly-attaching-subscriptions.adoc[leveloffset=+1] +// emurtough removed subscription info to avoid duplication within Access management and authentication include::platform/assembly-attaching-subscriptions.adoc[leveloffset=+1] include::platform/assembly-choosing-obtaining-installer.adoc[leveloffset=+1] include::platform/assembly-inventory-introduction.adoc[leveloffset=+1] -include::platform/assembly-supported-installation-scenarios.adoc[leveloffset=+1] +// emurtough removed to avoid duplication with topologies docs include::platform/assembly-supported-installation-scenarios.adoc[leveloffset=+1] diff --git a/downstream/titles/aap-planning-guide/topologies b/downstream/titles/aap-planning-guide/topologies new file mode 120000 index 0000000000..760101fd3c --- /dev/null +++ b/downstream/titles/aap-planning-guide/topologies @@ -0,0 +1 @@ +../../assemblies/topologies \ No newline at end of file diff --git a/downstream/titles/aap-plugin-rhdh-install/aap-common b/downstream/titles/aap-plugin-rhdh-install/aap-common new file mode 120000 index 0000000000..472eeb4dac --- /dev/null +++ b/downstream/titles/aap-plugin-rhdh-install/aap-common @@ -0,0 +1 @@ +../../aap-common \ No newline at end of file diff --git a/downstream/titles/aap-plugin-rhdh-install/attributes b/downstream/titles/aap-plugin-rhdh-install/attributes new file mode 120000 index 0000000000..a5caaa73a5 --- /dev/null +++ b/downstream/titles/aap-plugin-rhdh-install/attributes @@ -0,0 +1 @@ +../../attributes \ No newline at end of file diff --git a/downstream/titles/aap-plugin-rhdh/devtools b/downstream/titles/aap-plugin-rhdh-install/devtools similarity index 100% rename from downstream/titles/aap-plugin-rhdh/devtools rename to downstream/titles/aap-plugin-rhdh-install/devtools diff --git a/downstream/titles/aap-plugin-rhdh-install/docinfo.xml b/downstream/titles/aap-plugin-rhdh-install/docinfo.xml new file mode 100644 index 0000000000..8445ef71f4 --- /dev/null +++ b/downstream/titles/aap-plugin-rhdh-install/docinfo.xml @@ -0,0 +1,11 @@ +Installing Ansible plug-ins for Red Hat Developer Hub +Red Hat Ansible Automation Platform +2.5 +Install and configure Ansible plug-ins for Red Hat Developer Hub + + This guide describes how to install and configure Ansible plug-ins for Red Hat Developer Hub so that users can learn about Ansible, explore curated collections, and develop automation projects. 
+ + + Red Hat Customer Content Services + + diff --git a/downstream/titles/aap-plugin-rhdh-install/images b/downstream/titles/aap-plugin-rhdh-install/images new file mode 120000 index 0000000000..5fa6987088 --- /dev/null +++ b/downstream/titles/aap-plugin-rhdh-install/images @@ -0,0 +1 @@ +../../images \ No newline at end of file diff --git a/downstream/titles/aap-plugin-rhdh-install/master.adoc b/downstream/titles/aap-plugin-rhdh-install/master.adoc new file mode 100644 index 0000000000..bead6d5f46 --- /dev/null +++ b/downstream/titles/aap-plugin-rhdh-install/master.adoc @@ -0,0 +1,53 @@ +:imagesdir: images +:numbered: +:toclevels: 4 +:experimental: +:context: aap-plugin-rhdh-installing + +include::attributes/attributes.adoc[] + +// Book Title += Installing Ansible plug-ins for Red Hat Developer Hub + +Thank you for your interest in {PlatformName}. {PlatformNameShort} is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. + +This guide describes how to install {AAPRHDH}. +This document has been updated to include information for the latest release of {PlatformNameShort}. + +include::{Boilerplate}[] + +// [IMPORTANT] +// ==== +// {AAPRHDH} is a Technology Preview feature only. +// include::snippets/technology-preview.adoc[] + +// ==== + +include::devtools/assembly-rhdh-intro.adoc[leveloffset=+1] + + +// Installation +include::devtools/assembly-rhdh-install-ocp-helm.adoc[leveloffset=+1] + +include::devtools/assembly-rhdh-install-ocp-operator.adoc[leveloffset=+1] + +// +// Subscription warnings +include::devtools/assembly-rhdh-subscription-warnings.adoc[leveloffset=+1] + +// +// Upgrade +include::devtools/assembly-rhdh-upgrade-ocp-helm.adoc[leveloffset=+1] + +include::devtools/assembly-rhdh-upgrade-ocp-operator.adoc[leveloffset=+1] + +// +// Uninstall +include::devtools/assembly-rhdh-uninstall-ocp-helm.adoc[leveloffset=+1] + +include::devtools/assembly-rhdh-uninstall-ocp-operator.adoc[leveloffset=+1] + +// +// Telemetry +include::devtools/assembly-rhdh-telemetry-capturing.adoc[leveloffset=+1] + diff --git a/downstream/titles/aap-plugin-rhdh-install/snippets b/downstream/titles/aap-plugin-rhdh-install/snippets new file mode 120000 index 0000000000..7bf6da9a51 --- /dev/null +++ b/downstream/titles/aap-plugin-rhdh-install/snippets @@ -0,0 +1 @@ +../../snippets \ No newline at end of file diff --git a/downstream/titles/aap-plugin-rhdh-using/aap-common b/downstream/titles/aap-plugin-rhdh-using/aap-common new file mode 120000 index 0000000000..472eeb4dac --- /dev/null +++ b/downstream/titles/aap-plugin-rhdh-using/aap-common @@ -0,0 +1 @@ +../../aap-common \ No newline at end of file diff --git a/downstream/titles/aap-plugin-rhdh-using/attributes b/downstream/titles/aap-plugin-rhdh-using/attributes new file mode 120000 index 0000000000..a5caaa73a5 --- /dev/null +++ b/downstream/titles/aap-plugin-rhdh-using/attributes @@ -0,0 +1 @@ +../../attributes \ No newline at end of file diff --git a/downstream/titles/aap-plugin-rhdh-using/devtools b/downstream/titles/aap-plugin-rhdh-using/devtools new file mode 120000 index 0000000000..dc79f7e1fa --- /dev/null +++ b/downstream/titles/aap-plugin-rhdh-using/devtools @@ -0,0 +1 @@ +../../assemblies/devtools \ No newline at end of file diff --git a/downstream/titles/aap-plugin-rhdh-using/docinfo.xml b/downstream/titles/aap-plugin-rhdh-using/docinfo.xml new file mode 100644 index 0000000000..84f78c8be9 --- /dev/null +++ 
b/downstream/titles/aap-plugin-rhdh-using/docinfo.xml @@ -0,0 +1,11 @@ +Using Ansible plug-ins for Red Hat Developer Hub +Red Hat Ansible Automation Platform +2.5 +Use Ansible plug-ins for Red Hat Developer Hub + + This guide describes how to use Ansible plug-ins for Red Hat Developer Hub to learn about Ansible, explore curated collections, and create playbook projects. + + + Red Hat Customer Content Services + + diff --git a/downstream/titles/aap-plugin-rhdh-using/images b/downstream/titles/aap-plugin-rhdh-using/images new file mode 120000 index 0000000000..5fa6987088 --- /dev/null +++ b/downstream/titles/aap-plugin-rhdh-using/images @@ -0,0 +1 @@ +../../images \ No newline at end of file diff --git a/downstream/titles/aap-plugin-rhdh/master.adoc b/downstream/titles/aap-plugin-rhdh-using/master.adoc similarity index 54% rename from downstream/titles/aap-plugin-rhdh/master.adoc rename to downstream/titles/aap-plugin-rhdh-using/master.adoc index 6e4a7cc2c6..2a85d1a635 100644 --- a/downstream/titles/aap-plugin-rhdh/master.adoc +++ b/downstream/titles/aap-plugin-rhdh-using/master.adoc @@ -2,23 +2,30 @@ :numbered: :toclevels: 4 :experimental: -:context: aap-plugin-rhdh +:context: aap-plugin-rhdh-using include::attributes/attributes.adoc[] // Book Title -= Ansible plug-ins for Red Hat Developer Hub += Using Ansible plug-ins for Red Hat Developer Hub Thank you for your interest in {PlatformName}. {PlatformNameShort} is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. -This guide describes how to install and use {AAPRHDH}. +This guide describes how to use {AAPRHDH}. This document has been updated to include information for the latest release of {PlatformNameShort}. include::{Boilerplate}[] -include::devtools/assembly-rhdh-intro.adoc[leveloffset=+1] -include::devtools/assembly-rhdh-planning.adoc[leveloffset=+1] -include::devtools/assembly-rhdh-install.adoc[leveloffset=+1] -include::devtools/assembly-rhdh-upgrading-uninstalling.adoc[leveloffset=+1] -include::devtools/assembly-rhdh-configure.adoc[leveloffset=+1] +// [IMPORTANT] +// ==== +// {AAPRHDH} is a Technology Preview feature only.
+// include::snippets/technology-preview.adoc[] + +// ==== + include::devtools/assembly-rhdh-using.adoc[leveloffset=+1] + +include::devtools/assembly-rhdh-feedback.adoc[leveloffset=+1] + +include::devtools/assembly-rhdh-example.adoc[leveloffset=+1] + diff --git a/downstream/titles/aap-plugin-rhdh-using/snippets b/downstream/titles/aap-plugin-rhdh-using/snippets new file mode 120000 index 0000000000..7bf6da9a51 --- /dev/null +++ b/downstream/titles/aap-plugin-rhdh-using/snippets @@ -0,0 +1 @@ +../../snippets \ No newline at end of file diff --git a/downstream/titles/analytics/aap-common b/downstream/titles/analytics/aap-common new file mode 120000 index 0000000000..472eeb4dac --- /dev/null +++ b/downstream/titles/analytics/aap-common @@ -0,0 +1 @@ +../../aap-common \ No newline at end of file diff --git a/downstream/titles/analytics/analytics b/downstream/titles/analytics/analytics new file mode 120000 index 0000000000..20840e99de --- /dev/null +++ b/downstream/titles/analytics/analytics @@ -0,0 +1 @@ +../../assemblies/analytics \ No newline at end of file diff --git a/downstream/titles/analytics/attributes b/downstream/titles/analytics/attributes new file mode 120000 index 0000000000..a5caaa73a5 --- /dev/null +++ b/downstream/titles/analytics/attributes @@ -0,0 +1 @@ +../../attributes \ No newline at end of file diff --git a/downstream/titles/analytics/automation-savings-planner/aap-common b/downstream/titles/analytics/automation-savings-planner/aap-common deleted file mode 120000 index ab3cbbd419..0000000000 --- a/downstream/titles/analytics/automation-savings-planner/aap-common +++ /dev/null @@ -1 +0,0 @@ -../../../aap-common/ \ No newline at end of file diff --git a/downstream/titles/analytics/automation-savings-planner/analytics b/downstream/titles/analytics/automation-savings-planner/analytics deleted file mode 120000 index 150b501734..0000000000 --- a/downstream/titles/analytics/automation-savings-planner/analytics +++ /dev/null @@ -1 +0,0 @@ -../../../assemblies/analytics \ No newline at end of file diff --git a/downstream/titles/analytics/automation-savings/aap-common b/downstream/titles/analytics/automation-savings/aap-common deleted file mode 120000 index ab3cbbd419..0000000000 --- a/downstream/titles/analytics/automation-savings/aap-common +++ /dev/null @@ -1 +0,0 @@ -../../../aap-common/ \ No newline at end of file diff --git a/downstream/titles/analytics/automation-savings/analytics b/downstream/titles/analytics/automation-savings/analytics deleted file mode 120000 index 4d9cc94a9d..0000000000 --- a/downstream/titles/analytics/automation-savings/analytics +++ /dev/null @@ -1 +0,0 @@ -../../../assemblies/analytics/ \ No newline at end of file diff --git a/downstream/titles/analytics/automation-savings/attributes b/downstream/titles/analytics/automation-savings/attributes deleted file mode 120000 index 8615cf3107..0000000000 --- a/downstream/titles/analytics/automation-savings/attributes +++ /dev/null @@ -1 +0,0 @@ -../../../attributes/ \ No newline at end of file diff --git a/downstream/titles/analytics/docinfo.xml b/downstream/titles/analytics/docinfo.xml new file mode 100644 index 0000000000..a15b8b438d --- /dev/null +++ b/downstream/titles/analytics/docinfo.xml @@ -0,0 +1,11 @@ +Using automation analytics +Red Hat Ansible Automation Platform +2.5 +Evaluate the cost savings associated with automated processes + +This guide shows how to use the features of automation analytics to evaluate how automation is deployed across your environments and the savings associated with it. 
+ + + Red Hat Customer Content Services + + diff --git a/downstream/titles/analytics/images b/downstream/titles/analytics/images new file mode 120000 index 0000000000..5fa6987088 --- /dev/null +++ b/downstream/titles/analytics/images @@ -0,0 +1 @@ +../../images \ No newline at end of file diff --git a/downstream/titles/analytics/job-explorer/aap-common b/downstream/titles/analytics/job-explorer/aap-common deleted file mode 120000 index ab3cbbd419..0000000000 --- a/downstream/titles/analytics/job-explorer/aap-common +++ /dev/null @@ -1 +0,0 @@ -../../../aap-common/ \ No newline at end of file diff --git a/downstream/titles/analytics/job-explorer/analytics b/downstream/titles/analytics/job-explorer/analytics deleted file mode 120000 index 4d9cc94a9d..0000000000 --- a/downstream/titles/analytics/job-explorer/analytics +++ /dev/null @@ -1 +0,0 @@ -../../../assemblies/analytics/ \ No newline at end of file diff --git a/downstream/titles/analytics/job-explorer/attributes b/downstream/titles/analytics/job-explorer/attributes deleted file mode 120000 index 8615cf3107..0000000000 --- a/downstream/titles/analytics/job-explorer/attributes +++ /dev/null @@ -1 +0,0 @@ -../../../attributes/ \ No newline at end of file diff --git a/downstream/titles/analytics/master.adoc b/downstream/titles/analytics/master.adoc new file mode 100644 index 0000000000..c7da4a1c0b --- /dev/null +++ b/downstream/titles/analytics/master.adoc @@ -0,0 +1,27 @@ +:imagesdir: images +:numbered: +:toclevels: 1 + +:experimental: + +include::attributes/attributes.adoc[] + + +// Book Title += Using automation analytics + +This guide shows how to use the features of automation analytics to evaluate how automation is deployed across your environments and the savings associated with it. + +// Downstream content only +include::{Boilerplate}[] + +// Contents +include::analytics/assembly-evaluating-automation-return.adoc[leveloffset=+1] + +include::analytics/assembly-automation-savings-planner.adoc[leveloffset=+1] + +include::analytics/assembly-insights-reports.adoc[leveloffset=+1] + +include::analytics/assembly-using-job-explorer.adoc[leveloffset=+1] + +include::analytics/assembly-data-dictionary.adoc[leveloffset=+1] diff --git a/downstream/titles/analytics/reports/aap-common b/downstream/titles/analytics/reports/aap-common deleted file mode 120000 index ab3cbbd419..0000000000 --- a/downstream/titles/analytics/reports/aap-common +++ /dev/null @@ -1 +0,0 @@ -../../../aap-common/ \ No newline at end of file diff --git a/downstream/titles/analytics/reports/analytics b/downstream/titles/analytics/reports/analytics deleted file mode 120000 index 150b501734..0000000000 --- a/downstream/titles/analytics/reports/analytics +++ /dev/null @@ -1 +0,0 @@ -../../../assemblies/analytics \ No newline at end of file diff --git a/downstream/titles/analytics/snippets b/downstream/titles/analytics/snippets new file mode 120000 index 0000000000..7bf6da9a51 --- /dev/null +++ b/downstream/titles/analytics/snippets @@ -0,0 +1 @@ +../../snippets \ No newline at end of file diff --git a/downstream/titles/automation-mesh/docinfo.xml b/downstream/titles/automation-mesh/docinfo.xml index 9aedd47fd6..21d82d8834 100644 --- a/downstream/titles/automation-mesh/docinfo.xml +++ b/downstream/titles/automation-mesh/docinfo.xml @@ -1,4 +1,4 @@ -Red Hat Ansible Automation Platform automation mesh guide for VM-based installations +Automation mesh for VM environments Red Hat Ansible Automation Platform 2.5 Automate at scale in a cloud-native way diff --git 
a/downstream/titles/automation-mesh/master.adoc b/downstream/titles/automation-mesh/master.adoc index 81af89fcd7..b037bc6122 100644 --- a/downstream/titles/automation-mesh/master.adoc +++ b/downstream/titles/automation-mesh/master.adoc @@ -9,7 +9,7 @@ include::attributes/attributes.adoc[] // Book Title -= Red Hat Ansible Automation Platform automation mesh guide for VM-based installations += Automation mesh for VM environments Thank you for your interest in {PlatformName}. {PlatformNameShort} is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. diff --git a/downstream/titles/builder/docinfo.xml b/downstream/titles/builder/docinfo.xml index c31a36f9fb..55235fd4a1 100644 --- a/downstream/titles/builder/docinfo.xml +++ b/downstream/titles/builder/docinfo.xml @@ -1,9 +1,10 @@ -Creating and consuming execution environments +Creating and using execution environments Red Hat Ansible Automation Platform 2.5 - Create and use execution environments with Ansible Builder + Create and use execution environment containers This guide shows how to create consistent and reproducible automation execution environments for your Red Hat Ansible Automation Platform. +This document includes content from the upstream docs.ansible.com documentation, which is covered by the Apache 2.0 license. Red Hat Customer Content Services diff --git a/downstream/titles/builder/master.adoc b/downstream/titles/builder/master.adoc index 7a526eadc5..18ba5375a0 100644 --- a/downstream/titles/builder/master.adoc +++ b/downstream/titles/builder/master.adoc @@ -1,26 +1,35 @@ :imagesdir: images :numbered: :toclevels: 1 - +:context: builder :experimental: include::attributes/attributes.adoc[] // Book Title -= Creating and consuming execution environments += Creating and using execution environments -Use {Builder} to create consistent and reproducible {ExecEnvName} for your {PlatformName} needs. +Use {ExecEnvshort} builder to create consistent and reproducible containers for your {PlatformName} needs. 
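Names such as `{PlatformName}` and `{ExecEnvshort}` in the lines above resolve through the shared attributes file that every `master.adoc` pulls in with `include::attributes/attributes.adoc[]`, reached through each title's `attributes` symlink. A sketch of the kind of definitions involved; the values shown here are assumptions, not quotations from the file:

-----
// attributes/attributes.adoc (illustrative values)
:PlatformName: Red Hat Ansible Automation Platform
:PlatformNameShort: Ansible Automation Platform
:ExecEnvshort: execution environment
-----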
include::{Boilerplate}[] include::builder/assembly-intro-to-builder.adoc[leveloffset=+1] + include::builder/assembly-using-builder.adoc[leveloffset=+1] + include::builder/assembly-common-ee-scenarios.adoc[leveloffset=+1] + include::builder/assembly-publishing-exec-env.adoc[leveloffset=+1] + //include::builder/assembly-building-off-existing-ee.adoc[leveloffset=+1] -include::hub/assembly-populate-container-registry.adoc[leveloffset=+1] -include::hub/assembly-setup-container-repository.adoc[leveloffset=+1] -include::hub/assembly-pull-image.adoc[leveloffset=+1] + +include::builder/assembly-populate-container-registry.adoc[leveloffset=+1] [appendix] include::builder/builder/con-ee-precedence.adoc[leveloffset=+1] + +//include::builder/assembly-open-source-license.adoc[leveloffset=+1] + +include::{OpenSourceA}[] + + \ No newline at end of file diff --git a/downstream/titles/builder/platform b/downstream/titles/builder/platform new file mode 120000 index 0000000000..06b49528ee --- /dev/null +++ b/downstream/titles/builder/platform @@ -0,0 +1 @@ +../../assemblies/platform \ No newline at end of file diff --git a/downstream/titles/central-auth/docinfo.xml b/downstream/titles/central-auth/docinfo.xml index 63edd7f066..8a66af0c12 100644 --- a/downstream/titles/central-auth/docinfo.xml +++ b/downstream/titles/central-auth/docinfo.xml @@ -1,10 +1,10 @@ -Installing and configuring central authentication for the Ansible Automation Platform +Access management and authentication Red Hat Ansible Automation Platform 2.5 -Enable central authentication functions for your Ansible Automation Platform +Configure role-based access control, authenticators, and authenticator maps in Ansible Automation Platform -This guide provides platform administrators with the information and procedures required to enable and configure central authentication on Ansible Automation Platform. +This guide provides requirements, options, and recommendations for controlling access to Red Hat Ansible Automation Platform resources. Red Hat Customer Content Services diff --git a/downstream/titles/central-auth/images b/downstream/titles/central-auth/images new file mode 120000 index 0000000000..5fa6987088 --- /dev/null +++ b/downstream/titles/central-auth/images @@ -0,0 +1 @@ +../../images \ No newline at end of file diff --git a/downstream/titles/central-auth/master.adoc b/downstream/titles/central-auth/master.adoc index 07ac6a359c..41ac18dbf4 100644 --- a/downstream/titles/central-auth/master.adoc +++ b/downstream/titles/central-auth/master.adoc @@ -7,15 +7,26 @@ include::attributes/attributes.adoc[] // Book Title -= Installing and configuring central authentication for the Ansible Automation Platform - -{AAPCentralAuth} is a third-party identity provider (idP) solution, allowing for a simplified single sign-on solution that can be used across the {PlatformNameShort}. Platform administrators can utilize {CentralAuth} to test connectivity and authentication, as well as onboard new users and manage user permissions by configuring and assigning them to groups. Along with OpenID Connect-based and LDAP support, {CentralAuth} also provides a supported REST API which can be used to bootstrap customer usage.
+= Access management and authentication include::{Boilerplate}[] -include::central-auth/assembly-central-auth-hub.adoc[leveloffset=+1] -include::central-auth/assembly-central-auth-add-user-storage.adoc[leveloffset=+1] -include::central-auth/assembly-assign-hub-admin-permissions.adoc[leveloffset=+1] -include::central-auth/assembly-central-auth-identity-broker.adoc[leveloffset=+1] -include::central-auth/assembly-central-auth-group-perms.adoc[leveloffset=+1] -include::central-auth/assembly-configuring-central-auth-generic-oidc-settings.adoc[leveloffset=+1] +include::platform/platform/con-gw-overview-access-auth.adoc[leveloffset=+1] + +include::platform/assembly-gateway-licensing.adoc[leveloffset=+1] + +include::platform/assembly-gw-configure-authentication.adoc[leveloffset=+1] + +include::platform/assembly-gw-config-authentication-type.adoc[leveloffset=+2] + +include::platform/assembly-gw-mapping.adoc[leveloffset=+2] + +include::platform/assembly-gw-managing-authentication.adoc[leveloffset=+2] + +include::platform/assembly-gw-token-based-authentication.adoc[leveloffset=+1] + +include::platform/assembly-gw-managing-access.adoc[leveloffset=+1] + +include::platform/assembly-gw-roles.adoc[leveloffset=+1] + +include::platform/assembly-gw-settings.adoc[leveloffset=+1] diff --git a/downstream/titles/central-auth/platform b/downstream/titles/central-auth/platform new file mode 120000 index 0000000000..06b49528ee --- /dev/null +++ b/downstream/titles/central-auth/platform @@ -0,0 +1 @@ +../../assemblies/platform \ No newline at end of file diff --git a/downstream/titles/central-auth/snippets b/downstream/titles/central-auth/snippets new file mode 120000 index 0000000000..7bf6da9a51 --- /dev/null +++ b/downstream/titles/central-auth/snippets @@ -0,0 +1 @@ +../../snippets \ No newline at end of file diff --git a/downstream/titles/controller/controller-admin-guide/docinfo.xml b/downstream/titles/controller/controller-admin-guide/docinfo.xml index a22b7974ff..6e76c749b6 100644 --- a/downstream/titles/controller/controller-admin-guide/docinfo.xml +++ b/downstream/titles/controller/controller-admin-guide/docinfo.xml @@ -1,9 +1,9 @@ -Automation controller administration guide +Configuring automation execution Red Hat Ansible Automation Platform 2.5 -Administrator Guide for Automation Controller +Learn how to manage, monitor, and use automation controller - Learn how to manage automation controller through custom scripts, management jobs, and more. + This guide shows how to manage automation controller with custom scripts, management jobs, and more. Red Hat Customer Content Services diff --git a/downstream/titles/controller/controller-admin-guide/master.adoc b/downstream/titles/controller/controller-admin-guide/master.adoc index 0e285bfe8f..cdfd712c08 100644 --- a/downstream/titles/controller/controller-admin-guide/master.adoc +++ b/downstream/titles/controller/controller-admin-guide/master.adoc @@ -9,35 +9,57 @@ include::attributes/attributes.adoc[] // Book Title -= Automation controller administration guide += Configuring automation execution -The {ControllerName} Administration Guide describes the administration of {ControllerName} through custom scripts, management jobs, and more. -Written for DevOps engineers and administrators, the {ControllerName} Administration Guide assumes a basic understanding of the systems requiring management with {ControllerName}s easy-to-use graphical interface. +This guide describes the administration of {ControllerName} through custom scripts, management jobs, and more. 
+Written for DevOps engineers and administrators, the Configuring automation execution guide assumes a basic understanding of the systems requiring management with {ControllerName}'s easy-to-use graphical interface. include::{Boilerplate}[] //include::platform/assembly-controller-licensing.adoc[leveloffset=+1] include::platform/assembly-ag-controller-start-stop-controller.adoc[leveloffset=+1] + +//Uses Settings menu. Which may be separate documentation. +include::platform/assembly-ag-controller-config.adoc[leveloffset=+1] + +include::platform/assembly-controller-improving-performance.adoc[leveloffset=+1] + +include::platform/assembly-controller-management-jobs.adoc[leveloffset=+1] //Deprecated //include::platform/assembly-custom-inventory-scripts.adoc[leveloffset=+1] + include::platform/assembly-inventory-file-importing.adoc[leveloffset=+1] + //include::platform/assembly-multi-credential-assignment.adoc[leveloffset=+1] -include::platform/assembly-controller-management-jobs.adoc[leveloffset=+1] include::platform/assembly-ag-controller-clustering.adoc[leveloffset=+1] -include::platform/assembly-ag-instance-and-container-groups.adoc[leveloffset=+1] -include::platform/assembly-controller-instances.adoc[leveloffset=+1] -include::platform/assembly-controller-topology-viewer.adoc[leveloffset=+1] + +include::platform/assembly-controller-pac.adoc[leveloffset=+1] +//Removed to user Guide +//include::platform/assembly-controller-instances.adoc[leveloffset=+1] +//include::platform/assembly-ag-instance-and-container-groups.adoc[leveloffset=+1] +//Removed to User Guide +//include::platform/assembly-controller-topology-viewer.adoc[leveloffset=+1] + include::platform/assembly-controller-log-files.adoc[leveloffset=+1] + //Lizzi's work: Logging removed at 2.5-next include::platform/assembly-controller-logging-aggregation.adoc[leveloffset=+1] + include::platform/assembly-controller-metrics.adoc[leveloffset=+1] -include::platform/assembly-controller-improving-performance.adoc[leveloffset=+1] + +include::platform/assembly-controller-subscription-management.adoc[leveloffset=+1] + +include::platform/assembly-metrics-utility.adoc[leveloffset=+1] + +include::platform/assembly-controller-secret-management.adoc[leveloffset=+1] + include::platform/assembly-ag-controller-secret-handling.adoc[leveloffset=+1] + include::platform/assembly-ag-controller-security-best-practices.adoc[leveloffset=+1] + include::platform/assembly-controller-awx-manage-utility.adoc[leveloffset=+1] -//Uses Settings menu. Which may be separate documentation. -include::platform/assembly-ag-controller-config.adoc[leveloffset=+1] -include::platform/assembly-controller-isolation-function-variables.adoc[leveloffset=+1] +//Duplicate of content in security and now moved to jobs +//include::platform/assembly-controller-isolation-function-variables.adoc[leveloffset=+1] //Donna's work //include::platform/assembly-controller-token-based-authentication.adoc[leveloffset=+1] //include::platform/assembly-controller-set-up-social-authentication.adoc[leveloffset=+1] @@ -47,9 +69,10 @@ include::platform/assembly-controller-isolation-function-variables.adoc[leveloff //include::platform/assembly-controller-kerberos-authentication.adoc[leveloffset=+1] //include::platform/assembly-ag-controller-session-limits.adoc[leveloffset=+1] include::platform/assembly-ag-controller-backup-and-restore.adoc[leveloffset=+1] + //section 28 is a replica of section 18.4.2, so removing it -//Uses Settings menu. Which may be separate documentation.
-//Usability analytics is no longer supported. -//include::platform/assembly-ag-controller-usability-analytics.adoc[leveloffset=+1] +include::platform/assembly-ag-controller-usability-analytics.adoc[leveloffset=+1] + include::platform/assembly-ag-controller-troubleshooting.adoc[leveloffset=+1] + include::platform/assembly-ag-controller-tips-and-tricks.adoc[leveloffset=+1] diff --git a/downstream/titles/controller/controller-api-overview/docinfo.xml b/downstream/titles/controller/controller-api-overview/docinfo.xml index a866e685ea..7047335f62 100644 --- a/downstream/titles/controller/controller-api-overview/docinfo.xml +++ b/downstream/titles/controller/controller-api-overview/docinfo.xml @@ -1,4 +1,4 @@ -Automation controller API overview +Automation execution API overview Red Hat Ansible Automation Platform 2.5 Developer overview for the {ControllerName} API diff --git a/downstream/titles/controller/controller-api-overview/master.adoc b/downstream/titles/controller/controller-api-overview/master.adoc index cb57796ed5..e815782687 100644 --- a/downstream/titles/controller/controller-api-overview/master.adoc +++ b/downstream/titles/controller/controller-api-overview/master.adoc @@ -8,7 +8,7 @@ include::attributes/attributes.adoc[] // Book Title -= Automation controller API overview += Automation execution API overview Thank you for your interest in {PlatformName}. {PlatformNameShort} helps teams manage complex multitiered deployments by adding control, knowledge, and delegation to Ansible-powered environments. @@ -18,12 +18,21 @@ The {ControllerName} API Overview focuses on helping you understand the {Control include::{Boilerplate}[] include::platform/assembly-controller-api-tools.adoc[leveloffset=+1] + include::platform/assembly-controller-api-browsing-api.adoc[leveloffset=+1] + include::platform/assembly-controller-api-conventions.adoc[leveloffset=+1] + include::platform/assembly-controller-api-sorting.adoc[leveloffset=+1] + include::platform/assembly-controller-api-search.adoc[leveloffset=+1] + include::platform/assembly-controller-api-filter.adoc[leveloffset=+1] + include::platform/assembly-controller-api-pagination.adoc[leveloffset=+1] + include::platform/assembly-controller-api-access-resources.adoc[leveloffset=+1] + include::platform/assembly-controller-api-readonly-fields.adoc[leveloffset=+1] + include::platform/assembly-controller-api-auth-methods.adoc[leveloffset=+1] diff --git a/downstream/titles/controller/controller-getting-started/attributes b/downstream/titles/controller/controller-getting-started/attributes deleted file mode 120000 index 0d100da61c..0000000000 --- a/downstream/titles/controller/controller-getting-started/attributes +++ /dev/null @@ -1 +0,0 @@ -../../../attributes \ No newline at end of file diff --git a/downstream/titles/controller/controller-user-guide/docinfo.xml b/downstream/titles/controller/controller-user-guide/docinfo.xml index ab2fc6ca91..0138758634 100644 --- a/downstream/titles/controller/controller-user-guide/docinfo.xml +++ b/downstream/titles/controller/controller-user-guide/docinfo.xml @@ -1,9 +1,9 @@ -Automation controller user guide +Using automation execution Red Hat Ansible Automation Platform 2.5 -User Guide for Automation Controller +Use automation execution to deploy, define, operate, scale and delegate automation - This guide describes the use of the Red Hat Ansible Automation Platform Controller (automation controller). 
+ This guide shows you how to use automation controller to define, operate, scale and delegate automation across your enterprise. Red Hat Customer Content Services diff --git a/downstream/titles/controller/controller-user-guide/master.adoc b/downstream/titles/controller/controller-user-guide/master.adoc index e530de6e13..baf5ca5905 100644 --- a/downstream/titles/controller/controller-user-guide/master.adoc +++ b/downstream/titles/controller/controller-user-guide/master.adoc @@ -5,52 +5,105 @@ :experimental: :controller-UG: +[id="assembly-using-automation-execution"] include::attributes/attributes.adoc[] // Book Title -= Automation controller user guide += Using automation execution Thank you for your interest in {PlatformName} {ControllerName}. {ControllerNameStart} helps teams manage complex multitiered deployments by adding control, knowledge, and delegation to Ansible-powered environments. -The {ControllerNameStart} User Guide describes all of the functionality available in {ControllerName}. +Using {ControllerName} describes all of the functionality available in {ControllerName}. It assumes moderate familiarity with Ansible, including concepts such as playbooks, variables, and tags. For more information about these and other Ansible concepts, see the link:https://docs.ansible.com/[Ansible documentation]. include::{Boilerplate}[] include::platform/assembly-UG-overview.adoc[leveloffset=+1] -include::platform/assembly-controller-licensing.adoc[leveloffset=+1] + +//Moved to Access management doc +//include::platform/assembly-controller-licensing.adoc[leveloffset=+1] include::platform/assembly-controller-login.adoc[leveloffset=+1] -include::platform/assembly-controller-managing-subscriptions.adoc[leveloffset=+1] + +//Moved to Access management doc +//include::platform/assembly-controller-managing-subscriptions.adoc[leveloffset=+1] +//Rewritten for 2.5 include::platform/assembly-controller-user-interface.adoc[leveloffset=+1] + include::platform/assembly-controller-search.adoc[leveloffset=+1] -include::platform/assembly-controller-organizations.adoc[leveloffset=+1] -include::platform/assembly-controller-users.adoc[leveloffset=+1] -include::platform/assembly-controller-teams.adoc[leveloffset=+1] -include::platform/assembly-controller-credentials.adoc[leveloffset=+1] -//In the new UI, Credential types is part of Credentials. -include::platform/assembly-controller-custom-credentials.adoc[leveloffset=+1] -include::platform/assembly-controller-secret-management.adoc[leveloffset=+1] -include::platform/assembly-controller-applications.adoc[leveloffset=+1] -include::platform/assembly-controller-execution-environments.adoc[leveloffset=+1] -include::platform/assembly-controller-ee-setup-reference.adoc[leveloffset=+1] + +//Jobs +include::platform/assembly-ug-controller-jobs.adoc[leveloffset=+1] + +//Templates +include::platform/assembly-ug-controller-job-templates.adoc[leveloffset=+1] + +include::platform/assembly-ug-controller-job-slicing.adoc[leveloffset=+1] + +//This includes workflow approvals. 
+include::platform/assembly-ug-controller-workflow-job-templates.adoc[leveloffset=+1] + +include::platform/assembly-ug-controller-workflows.adoc[leveloffset=+1] + +//Schedules +include::platform/assembly-ug-controller-schedules.adoc[leveloffset=+1] + +//Projects include::platform/assembly-controller-projects.adoc[leveloffset=+1] + include::platform/assembly-controller-project-signing.adoc[leveloffset=+1] + +//Infrastructure-Topology View +include::platform/assembly-controller-topology-viewer.adoc[leveloffset=+1] + +//Infrastructure-Inventories include::platform/assembly-controller-inventories.adoc[leveloffset=+1] + include::platform/assembly-controller-inventory-templates.adoc[leveloffset=+1] -include::platform/assembly-ug-controller-job-templates.adoc[leveloffset=+1] -include::platform/assembly-ug-controller-job-slicing.adoc[leveloffset=+1] -include::platform/assembly-ug-controller-workflows.adoc[leveloffset=+1] -include::platform/assembly-ug-controller-workflow-job-templates.adoc[leveloffset=+1] + +//Adding short Hosts assembly +include::platform/assembly-controller-hosts.adoc[leveloffset=+1] + +//Infrastructure-Instance Groups include::platform/assembly-ug-controller-instance-groups.adoc[leveloffset=+1] -include::platform/assembly-ug-controller-jobs.adoc[leveloffset=+1] -include::platform/assembly-ug-controller-work-with-webhooks.adoc[leveloffset=+1] + +include::platform/assembly-ag-instance-and-container-groups.adoc[leveloffset=+1] + +//Infrastructure-Instances +include::platform/assembly-controller-instances.adoc[leveloffset=+1] + +//Infrastructure-Execution environments +include::platform/assembly-controller-execution-environments.adoc[leveloffset=+1] + +include::platform/assembly-controller-ee-setup-reference.adoc[leveloffset=+1] + +//Moved to Donna's Access management document +//include::platform/assembly-controller-organizations.adoc[leveloffset=+1] +//include::platform/assembly-controller-users.adoc[leveloffset=+1] +//include::platform/assembly-controller-teams.adoc[leveloffset=+1] +//Possibly in Donna's credentials document +include::platform/assembly-controller-credentials.adoc[leveloffset=+1] + +include::platform/assembly-controller-custom-credentials.adoc[leveloffset=+1] + +include::platform/assembly-controller-activity-stream.adoc[leveloffset=+1] + +//Moved to admin guide +//include::platform/assembly-controller-secret-management.adoc[leveloffset=+1] +//include::platform/assembly-controller-applications.adoc[leveloffset=+1] include::platform/assembly-ug-controller-notifications.adoc[leveloffset=+1] + include::platform/assembly-ug-controller-attributes-custom-notifications.adoc[leveloffset=+1] -include::platform/assembly-ug-controller-schedules.adoc[leveloffset=+1] + +include::platform/assembly-ug-controller-work-with-webhooks.adoc[leveloffset=+1] + include::platform/assembly-ug-controller-setting-up-insights.adoc[leveloffset=+1] + include::platform/assembly-controller-best-practices.adoc[leveloffset=+1] -include::platform/assembly-controller-security.adoc[leveloffset=+1] + +//RBAC contents to Donna's document, Jobs info to Jobs. +//Moved to admin guide. 
+//include::platform/assembly-controller-security.adoc[leveloffset=+1] include::platform/assembly-controller-glossary.adoc[leveloffset=+1] diff --git a/downstream/titles/dev-guide/aap-common b/downstream/titles/dev-guide/aap-common deleted file mode 120000 index 0034872719..0000000000 --- a/downstream/titles/dev-guide/aap-common +++ /dev/null @@ -1 +0,0 @@ -../../aap-common/ \ No newline at end of file diff --git a/downstream/titles/dev-guide/attributes b/downstream/titles/dev-guide/attributes deleted file mode 120000 index 53222966c5..0000000000 --- a/downstream/titles/dev-guide/attributes +++ /dev/null @@ -1 +0,0 @@ -../../attributes/ \ No newline at end of file diff --git a/downstream/titles/dev-guide/core b/downstream/titles/dev-guide/core deleted file mode 120000 index 582f928693..0000000000 --- a/downstream/titles/dev-guide/core +++ /dev/null @@ -1 +0,0 @@ -../../assemblies/core/ \ No newline at end of file diff --git a/downstream/titles/dev-guide/dev-guide b/downstream/titles/dev-guide/dev-guide deleted file mode 120000 index ff0ddce961..0000000000 --- a/downstream/titles/dev-guide/dev-guide +++ /dev/null @@ -1 +0,0 @@ -../../assemblies/dev-guide/ \ No newline at end of file diff --git a/downstream/titles/dev-guide/images b/downstream/titles/dev-guide/images deleted file mode 120000 index 847b03ed05..0000000000 --- a/downstream/titles/dev-guide/images +++ /dev/null @@ -1 +0,0 @@ -../../images/ \ No newline at end of file diff --git a/downstream/titles/dev-guide/navigator b/downstream/titles/dev-guide/navigator deleted file mode 120000 index 998c17ff10..0000000000 --- a/downstream/titles/dev-guide/navigator +++ /dev/null @@ -1 +0,0 @@ -../../assemblies/navigator/ \ No newline at end of file diff --git a/downstream/titles/develop-automation-content/docinfo.xml b/downstream/titles/develop-automation-content/docinfo.xml index 5c04d7e7de..1473821ead 100644 --- a/downstream/titles/develop-automation-content/docinfo.xml +++ b/downstream/titles/develop-automation-content/docinfo.xml @@ -1,7 +1,7 @@ -Developing Ansible automation content +Developing automation content Red Hat Ansible Automation Platform 2.5 -Install Ansible Automation Platform +Develop Ansible automation content to run automation jobs This guide describes how to develop Ansible automation content and how to use it to run automation jobs from Red Hat Ansible Automation Platforms. diff --git a/downstream/titles/develop-automation-content/master.adoc b/downstream/titles/develop-automation-content/master.adoc index 23b3d8ab91..757955aba3 100644 --- a/downstream/titles/develop-automation-content/master.adoc +++ b/downstream/titles/develop-automation-content/master.adoc @@ -7,7 +7,7 @@ include::attributes/attributes.adoc[] // Book Title -= Developing Ansible automation content += Developing automation content Thank you for your interest in {PlatformName}. {PlatformNameShort} is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. 
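Several of the master.adoc files in this patch set a `:context:` attribute, for example `:context: builder` and `:context: aap-plugin-rhdh-using`. In modular docs, reusable modules append `{context}` to their anchors so that one file can be included in several titles without producing duplicate IDs. A sketch with a hypothetical module heading:

-----
// in a shared module; the anchor becomes unique per including title
[id="creating-a-playbook-project_{context}"]
= Creating a playbook project
-----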
@@ -17,10 +17,25 @@ This document has been updated to include information for the latest release of include::{Boilerplate}[] include::devtools/assembly-devtools-intro.adoc[leveloffset=+1] + include::devtools/assembly-developer-workflow.adoc[leveloffset=+1] + include::devtools/assembly-devtools-install.adoc[leveloffset=+1] -include::devtools/assembly-devtools-setup.adoc[leveloffset=+1] + +// ----- include::devtools/assembly-devtools-setup.adoc[leveloffset=+1] + include::devtools/assembly-creating-playbook-project.adoc[leveloffset=+1] + include::devtools/assembly-writing-running-playbook.adoc[leveloffset=+1] -// include::devtools/assembly-testing-playbooks.adoc[leveloffset=+1] + +// ----- include::devtools/assembly-testing-playbooks.adoc[leveloffset=+1] + +include::devtools/assembly-publishing-playbook-collection-aap.adoc[leveloffset=+1] + + +// Roles collections + +include::devtools/assembly-devtools-develop-collections.adoc[leveloffset=+1] + +include::devtools/assembly-devtools-create-roles-collection.adoc[leveloffset=+1] diff --git a/downstream/titles/develop-automation-content/snippets b/downstream/titles/develop-automation-content/snippets new file mode 120000 index 0000000000..7bf6da9a51 --- /dev/null +++ b/downstream/titles/develop-automation-content/snippets @@ -0,0 +1 @@ +../../snippets \ No newline at end of file diff --git a/downstream/titles/eda/eda-getting-started-guide/attributes b/downstream/titles/eda/eda-getting-started-guide/attributes deleted file mode 120000 index 0d100da61c..0000000000 --- a/downstream/titles/eda/eda-getting-started-guide/attributes +++ /dev/null @@ -1 +0,0 @@ -../../../attributes \ No newline at end of file diff --git a/downstream/titles/eda/eda-getting-started-guide/eda b/downstream/titles/eda/eda-getting-started-guide/eda deleted file mode 120000 index 2a8c0ea9aa..0000000000 --- a/downstream/titles/eda/eda-getting-started-guide/eda +++ /dev/null @@ -1 +0,0 @@ -../../../assemblies/eda \ No newline at end of file diff --git a/downstream/titles/eda/eda-getting-started-guide/images b/downstream/titles/eda/eda-getting-started-guide/images deleted file mode 120000 index 4dd3347de1..0000000000 --- a/downstream/titles/eda/eda-getting-started-guide/images +++ /dev/null @@ -1 +0,0 @@ -../../../images \ No newline at end of file diff --git a/downstream/titles/eda/eda-user-guide/docinfo.xml b/downstream/titles/eda/eda-user-guide/docinfo.xml index 4d49b33a80..4a544938ec 100644 --- a/downstream/titles/eda/eda-user-guide/docinfo.xml +++ b/downstream/titles/eda/eda-user-guide/docinfo.xml @@ -1,7 +1,7 @@ -Event-Driven Ansible controller user guide +Using automation decisions Red Hat Ansible Automation Platform 2.5 -Learn to configure and use {EDAcontroller} to enhance and expand automation +Configure and use {EDAcontroller} to enhance and expand automation Learn how to configure your {EDAcontroller} to set up credentials, new projects, decision environments, tokens to authenticate to Ansible Automation Platform Controller, and rulebook activation. 
diff --git a/downstream/titles/eda/eda-user-guide/master.adoc b/downstream/titles/eda/eda-user-guide/master.adoc index 087b29f56d..95ab8c84cc 100644 --- a/downstream/titles/eda/eda-user-guide/master.adoc +++ b/downstream/titles/eda/eda-user-guide/master.adoc @@ -6,16 +6,23 @@ include::attributes/attributes.adoc[] // Book Title -= Event-Driven Ansible controller user guide += Using automation decisions -{EDAcontroller} is a new way to enhance and expand automation by improving IT speed and agility while enabling consistency and resilience. +{EDAcontroller} is a new way to enhance and expand automation by improving IT speed and agility while enabling consistency and resilience. Developed by Red Hat, this feature is designed for simplicity and flexibility. include::{Boilerplate}[] include::eda/assembly-eda-user-guide-overview.adoc[leveloffset=+1] include::eda/assembly-eda-credentials.adoc[leveloffset=+1] +include::eda/assembly-eda-credential-types.adoc[leveloffset=+1] include::eda/assembly-eda-projects.adoc[leveloffset=+1] include::eda/assembly-eda-decision-environments.adoc[leveloffset=+1] -include::eda/assembly-eda-set-up-token.adoc[leveloffset=+1] +include::eda/assembly-eda-set-up-rhaap-credential.adoc[leveloffset=+1] +//include::eda/assembly-eda-set-up-token.adoc[leveloffset=+1] include::eda/assembly-eda-rulebook-activations.adoc[leveloffset=+1] +include::eda/assembly-eda-rulebook-troubleshooting.adoc[leveloffset=+1] include::eda/assembly-eda-rule-audit.adoc[leveloffset=+1] +include::eda/assembly-simplified-event-routing.adoc[leveloffset=+1] +include::eda/assembly-eda-performance-tuning.adoc[leveloffset=+1] +include::eda/assembly-eda-event-filter-plugins.adoc[leveloffset=+1] +include::eda/assembly-eda-logging-strategy.adoc[leveloffset=+1] diff --git a/downstream/titles/eda/eda-getting-started-guide/aap-common b/downstream/titles/edge-manager/edge-manager-user-guide/aap-common similarity index 100% rename from downstream/titles/eda/eda-getting-started-guide/aap-common rename to downstream/titles/edge-manager/edge-manager-user-guide/aap-common diff --git a/downstream/titles/analytics/automation-savings-planner/attributes b/downstream/titles/edge-manager/edge-manager-user-guide/attributes similarity index 100% rename from downstream/titles/analytics/automation-savings-planner/attributes rename to downstream/titles/edge-manager/edge-manager-user-guide/attributes diff --git a/downstream/titles/edge-manager/edge-manager-user-guide/docinfo.xml b/downstream/titles/edge-manager/edge-manager-user-guide/docinfo.xml new file mode 100644 index 0000000000..d7492d723a --- /dev/null +++ b/downstream/titles/edge-manager/edge-manager-user-guide/docinfo.xml @@ -0,0 +1,11 @@ +Managing device fleets with the Red Hat Edge Manager +Red Hat Ansible Automation Platform +2.5 +Install, configure, and use the Red Hat Edge Manager to manage individual and fleets of devices + + Learn about components that you can use for scalable and secure edge management. 
+ + + Red Hat Customer Content Services + + diff --git a/downstream/titles/analytics/automation-savings-planner/images b/downstream/titles/edge-manager/edge-manager-user-guide/images similarity index 100% rename from downstream/titles/analytics/automation-savings-planner/images rename to downstream/titles/edge-manager/edge-manager-user-guide/images diff --git a/downstream/titles/edge-manager/edge-manager-user-guide/master.adoc b/downstream/titles/edge-manager/edge-manager-user-guide/master.adoc new file mode 100644 index 0000000000..97bbf5cb0e --- /dev/null +++ b/downstream/titles/edge-manager/edge-manager-user-guide/master.adoc @@ -0,0 +1,39 @@ +:imagesdir: images +:numbered: +:toclevels: 1 +:experimental: + +include::attributes/attributes.adoc[] + +// Book Title += Managing device fleets with the Red Hat Edge Manager + +The {RedHatEdge} aims to provide simple, scalable, and secure management of edge devices and applications. +You can declare the operating system version, host configuration, and set of applications that you want to run on an individual device or a whole fleet of devices. +The {RedHatEdge} rolls out the target configuration to devices, where a device agent automatically applies it and reports progress and health status back. + +[IMPORTANT] +==== +The {RedHatEdge} is a Technology Preview feature only. +include::platform/snippets/technology-preview.adoc[] +==== + +include::{Boilerplate}[] + +include::platform/assembly-edge-manager-intro.adoc[leveloffset=+1] + +include::platform/assembly-edge-manager-architecture.adoc[leveloffset=+1] + +include::platform/assembly-edge-manager-install.adoc[leveloffset=+1] + +include::platform/assembly-edge-manager-images.adoc[leveloffset=+1] + +include::platform/assembly-edge-manager-provisioning-devices.adoc[leveloffset=+1] + +include::platform/assembly-edge-manager-manage-devices.adoc[leveloffset=+1] + +include::platform/assembly-edge-manager-manage-apps.adoc[leveloffset=+1] + +include::platform/assembly-edge-manager-device-fleets.adoc[leveloffset=+1] + +include::platform/assembly-edge-manager-troubleshooting.adoc[leveloffset=+1] diff --git a/downstream/titles/controller/controller-getting-started/platform b/downstream/titles/edge-manager/edge-manager-user-guide/platform similarity index 100% rename from downstream/titles/controller/controller-getting-started/platform rename to downstream/titles/edge-manager/edge-manager-user-guide/platform diff --git a/downstream/titles/getting-started/aap-common b/downstream/titles/getting-started/aap-common new file mode 120000 index 0000000000..472eeb4dac --- /dev/null +++ b/downstream/titles/getting-started/aap-common @@ -0,0 +1 @@ +../../aap-common \ No newline at end of file diff --git a/downstream/titles/getting-started/attributes b/downstream/titles/getting-started/attributes new file mode 120000 index 0000000000..a5caaa73a5 --- /dev/null +++ b/downstream/titles/getting-started/attributes @@ -0,0 +1 @@ +../../attributes \ No newline at end of file diff --git a/downstream/titles/aap-plugin-rhdh/docinfo.xml b/downstream/titles/getting-started/docinfo.xml similarity index 55% rename from downstream/titles/aap-plugin-rhdh/docinfo.xml rename to downstream/titles/getting-started/docinfo.xml index f3f147268e..e1b8436be6 100644 --- a/downstream/titles/aap-plugin-rhdh/docinfo.xml +++ b/downstream/titles/getting-started/docinfo.xml @@ -1,9 +1,9 @@ -Ansible plug-ins for Red Hat Developer Hub +Getting started with Ansible Automation Platform Red Hat Ansible Automation Platform 2.5 -Install and use Ansible
+Get started with Ansible Automation Platform
- This guide describes how to install and use Ansible plug-ins for Red Hat Developer Hub.
+ This guide shows how to get started with Ansible Automation Platform.
 Red Hat Customer Content Services
diff --git a/downstream/titles/getting-started/eda b/downstream/titles/getting-started/eda
new file mode 120000
index 0000000000..4f3e9af334
--- /dev/null
+++ b/downstream/titles/getting-started/eda
@@ -0,0 +1 @@
+../../assemblies/eda
\ No newline at end of file
diff --git a/downstream/titles/getting-started/images b/downstream/titles/getting-started/images
new file mode 120000
index 0000000000..5fa6987088
--- /dev/null
+++ b/downstream/titles/getting-started/images
@@ -0,0 +1 @@
+../../images
\ No newline at end of file
diff --git a/downstream/titles/getting-started/master.adoc b/downstream/titles/getting-started/master.adoc
new file mode 100644
index 0000000000..5f5956a8ca
--- /dev/null
+++ b/downstream/titles/getting-started/master.adoc
@@ -0,0 +1,26 @@
+:imagesdir: images
+:numbered:
+:toclevels: 1
+
+:experimental:
+
+:controller-GS:
+
+include::attributes/attributes.adoc[]
+
+
+// Book Title
+= Getting started with Ansible Automation Platform
+
+{PlatformName} is a unified automation solution that automates a variety of IT processes, including provisioning, configuration management, application deployment, orchestration, and security and compliance changes (such as patching systems).
+
+{PlatformNameShort} features a platform interface where you can set up centralized authentication, configure access management, and execute automation tasks from a single location.
+
+This guide helps you get started with {PlatformNameShort} by introducing three central concepts: automation execution, automation decisions, and automation content.
+ +include::{Boilerplate}[] + +include::platform/assembly-gs-key-functionality.adoc[leveloffset=+1] +include::platform/assembly-gs-platform-admin.adoc[leveloffset=+1] +include::platform/assembly-gs-auto-dev.adoc[leveloffset=+1] +include::platform/assembly-gs-auto-op.adoc[leveloffset=+1] diff --git a/downstream/titles/getting-started/platform b/downstream/titles/getting-started/platform new file mode 120000 index 0000000000..06b49528ee --- /dev/null +++ b/downstream/titles/getting-started/platform @@ -0,0 +1 @@ +../../assemblies/platform \ No newline at end of file diff --git a/downstream/titles/getting-started/snippets b/downstream/titles/getting-started/snippets new file mode 120000 index 0000000000..7bf6da9a51 --- /dev/null +++ b/downstream/titles/getting-started/snippets @@ -0,0 +1 @@ +../../snippets \ No newline at end of file diff --git a/downstream/titles/hub/getting-started/attributes b/downstream/titles/hub/getting-started/attributes deleted file mode 120000 index 8615cf3107..0000000000 --- a/downstream/titles/hub/getting-started/attributes +++ /dev/null @@ -1 +0,0 @@ -../../../attributes/ \ No newline at end of file diff --git a/downstream/titles/hub/getting-started/hub b/downstream/titles/hub/getting-started/hub deleted file mode 120000 index 8185591f40..0000000000 --- a/downstream/titles/hub/getting-started/hub +++ /dev/null @@ -1 +0,0 @@ -../../../assemblies/hub \ No newline at end of file diff --git a/downstream/titles/hub/managing-content/docinfo.xml b/downstream/titles/hub/managing-content/docinfo.xml index 1c27ab4354..26b6154d33 100644 --- a/downstream/titles/hub/managing-content/docinfo.xml +++ b/downstream/titles/hub/managing-content/docinfo.xml @@ -1,4 +1,4 @@ -Managing content in automation hub +Managing automation content Red Hat Ansible Automation Platform 2.5 Create and manage collections, content and repositories in automation hub diff --git a/downstream/titles/hub/managing-content/master.adoc b/downstream/titles/hub/managing-content/master.adoc index 943dd4c74f..ccc26c42c3 100644 --- a/downstream/titles/hub/managing-content/master.adoc +++ b/downstream/titles/hub/managing-content/master.adoc @@ -4,13 +4,31 @@ :experimental: include::attributes/attributes.adoc[] -= Managing content in automation hub += Managing automation content include::{Boilerplate}[] include::hub/assembly-managing-cert-valid-content.adoc[leveloffset=+1] +include::hub/assembly-syncing-to-cloud-repo.adoc[leveloffset=+2] +include::hub/assembly-synclists.adoc[leveloffset=+2] +include::hub/assembly-collections-and-content-signing-in-pah.adoc[leveloffset=+2] +//include::hub/assembly-faq.adoc[leveloffset=+2] +include::hub/assembly-validated-content.adoc[leveloffset=+2] include::hub/assembly-managing-collections-hub.adoc[leveloffset=+1] +include::hub/assembly-working-with-namespaces.adoc[leveloffset=+2] +include::hub/assembly-managing-private-collections.adoc[leveloffset=+2] +include::hub/assembly-repo-management.adoc[leveloffset=+2] +include::hub/assembly-remote-management.adoc[leveloffset=+2] +include::hub/assembly-repo-sync.adoc[leveloffset=+2] +include::hub/assembly-collection-import-export.adoc[leveloffset=+2] include::hub/assembly-managing-containers-hub.adoc[leveloffset=+1] +include::hub/assembly-managing-container-registry.adoc[leveloffset=+2] +include::hub/assembly-container-user-access.adoc[leveloffset=+2] +include::hub/assembly-populate-container-registry.adoc[leveloffset=+2] +include::hub/assembly-setup-container-repository.adoc[leveloffset=+2] 
+include::hub/assembly-pull-image.adoc[leveloffset=+2] +include::hub/assembly-working-with-signed-containers.adoc[leveloffset=+2] +include::hub/assembly-delete-container.adoc[leveloffset=+2] diff --git a/downstream/titles/navigator-guide/devtools b/downstream/titles/navigator-guide/devtools new file mode 120000 index 0000000000..dc79f7e1fa --- /dev/null +++ b/downstream/titles/navigator-guide/devtools @@ -0,0 +1 @@ +../../assemblies/devtools \ No newline at end of file diff --git a/downstream/titles/navigator-guide/docinfo.xml b/downstream/titles/navigator-guide/docinfo.xml index 90c42afc4e..4d80c41c38 100644 --- a/downstream/titles/navigator-guide/docinfo.xml +++ b/downstream/titles/navigator-guide/docinfo.xml @@ -1,4 +1,4 @@ -Automation content navigator creator guide +Using content navigator Red Hat Ansible Automation Platform 2.5 Develop content that is compatible with Ansible Automation Platform diff --git a/downstream/titles/navigator-guide/master.adoc b/downstream/titles/navigator-guide/master.adoc index 7cf601b9b4..ee37645c98 100644 --- a/downstream/titles/navigator-guide/master.adoc +++ b/downstream/titles/navigator-guide/master.adoc @@ -5,18 +5,28 @@ include::attributes/attributes.adoc[] - // Book Title -= Automation content navigator creator guide += Using content navigator include::{Boilerplate}[] include::navigator/assembly-intro-navigator.adoc[leveloffset=+1] -include::navigator/assembly-installing-on-rhel.adoc[leveloffset=+1] + +// include::navigator/assembly-installing-on-rhel.adoc[leveloffset=+1] + +include::devtools/assembly-devtools-install.adoc[leveloffset=+1] + include::navigator/assembly-review-ee-navigator.adoc[leveloffset=+1] + include::navigator/assembly-review-inventory-navigator.adoc[leveloffset=+1] + include::navigator/assembly-browse-collections-navigator.adoc[leveloffset=+1] + include::navigator/assembly-execute-playbook-navigator.adoc[leveloffset=+1] + include::navigator/assembly-review-ansible-config-navigator.adoc[leveloffset=+1] + include::navigator/assembly-settings-navigator.adoc[leveloffset=+1] + include::navigator/assembly-troubleshooting-navigator.adoc[leveloffset=+1] + diff --git a/downstream/titles/ocp_performance_guide/docinfo.xml b/downstream/titles/ocp_performance_guide/docinfo.xml index 9f9f964eae..0771bd481b 100644 --- a/downstream/titles/ocp_performance_guide/docinfo.xml +++ b/downstream/titles/ocp_performance_guide/docinfo.xml @@ -1,4 +1,4 @@ -Red Hat Ansible Automation Platform performance considerations for operator based installations +Performance considerations for operator environments Red Hat Ansible Automation Platform 2.5 diff --git a/downstream/titles/ocp_performance_guide/master.adoc b/downstream/titles/ocp_performance_guide/master.adoc index 4cdc6cdc3b..f2b9405a3f 100644 --- a/downstream/titles/ocp_performance_guide/master.adoc +++ b/downstream/titles/ocp_performance_guide/master.adoc @@ -8,7 +8,7 @@ include::attributes/attributes.adoc[] :context: ocp-performance // Book Title -= Red Hat Ansible Automation Platform performance considerations for operator based installations += Performance considerations for operator environments Deploying applications to a container orchestration platform such as {OCP} provides a number of advantages from an operational perspective. For example, an update to the base image of an application can be made through a simple in-place upgrade with little to no disruption. 
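As a hedged illustration of the pod specification tuning this performance guide goes on to cover, the sketch below shows how node placement settings might be nested under a component section of an {PlatformNameShort} custom resource. The `node_selector` and `tolerations` field names follow the operator enhancement recorded in the patch release notes later in this document; the exact `apiVersion`, nesting, and values shown here are assumptions to verify against your installed operator's CRD, not a definitive schema.

[source,yaml]
----
# Hypothetical sketch: pin controller pods to dedicated worker nodes.
# Verify field names and nesting against your operator version.
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: aap
  namespace: aap
spec:
  controller:
    # The operator accepts these as YAML-formatted strings.
    node_selector: |
      node-role.kubernetes.io/aap-worker: ""
    tolerations: |
      - key: dedicated
        operator: Equal
        value: aap
        effect: NoSchedule
----

Keeping placement rules in the custom resource, rather than patching deployments directly, lets the operator reconcile them on every upgrade instead of overwriting manual changes.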
@@ -22,6 +22,10 @@ This type of configuration must be provided by the user as the {PlatformNameShor include::{Boilerplate}[] include::platform/assembly-pod-spec-modifications.adoc[leveloffset=+1] + include::platform/assembly-control-plane-adjustments.adoc[leveloffset=+1] + include::platform/assembly-specify-dedicated-nodes.adoc[leveloffset=+1] + include::platform/assembly-configure-controller-OCP.adoc[leveloffset=+1] + diff --git a/downstream/titles/operator-mesh/docinfo.xml b/downstream/titles/operator-mesh/docinfo.xml index 9e7f41bea0..d7f34fed3b 100644 --- a/downstream/titles/operator-mesh/docinfo.xml +++ b/downstream/titles/operator-mesh/docinfo.xml @@ -1,4 +1,4 @@ -Red Hat Ansible Automation Platform automation mesh for operator-based installations +Automation mesh for managed cloud or operator environments Red Hat Ansible Automation Platform 2.5 Automate at scale in a cloud-native way diff --git a/downstream/titles/operator-mesh/master.adoc b/downstream/titles/operator-mesh/master.adoc index aefdfd47a8..28c57b522c 100644 --- a/downstream/titles/operator-mesh/master.adoc +++ b/downstream/titles/operator-mesh/master.adoc @@ -9,7 +9,7 @@ include::attributes/attributes.adoc[] // Book Title -= Red Hat Ansible Automation Platform automation mesh for operator-based installations += Automation mesh for managed cloud or operator environments Thank you for your interest in {PlatformName}. {PlatformNameShort} is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. diff --git a/downstream/titles/playbooks/playbooks-getting-started/docinfo.xml b/downstream/titles/playbooks/playbooks-getting-started/docinfo.xml index c6bc31a01b..5dc012241f 100644 --- a/downstream/titles/playbooks/playbooks-getting-started/docinfo.xml +++ b/downstream/titles/playbooks/playbooks-getting-started/docinfo.xml @@ -1,9 +1,10 @@ -Getting started with Ansible Playbooks +Getting started with playbooks Red Hat Ansible Automation Platform 2.5 -Getting started with ansible playbooks +Get started with Ansible Playbooks - Learn how to set up an ansible playbook. + This guide shows how to create and use playbooks to address your automation requirements. + This document includes content from the upstream docs.ansible.com documentation, which is covered by the GNU GENERAL PUBLIC LICENSE v3.0. Red Hat Customer Content Services diff --git a/downstream/titles/playbooks/playbooks-getting-started/master.adoc b/downstream/titles/playbooks/playbooks-getting-started/master.adoc index a98fa51619..8a71e75144 100644 --- a/downstream/titles/playbooks/playbooks-getting-started/master.adoc +++ b/downstream/titles/playbooks/playbooks-getting-started/master.adoc @@ -8,7 +8,7 @@ include::attributes/attributes.adoc[] // Book Title -= Getting started with Ansible Playbooks += Getting started with playbooks Thank you for your interest in {PlatformName}. {PlatformNameShort} is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. @@ -16,10 +16,7 @@ This guide provides an introduction to the use of Ansible Playbooks.. 
 include::{Boilerplate}[]
-//include::playbooks/assembly-playbook-gs.adoc[leveloffset=+1]
 include::playbooks/assembly-intro-to-playbooks.adoc[leveloffset=+1]
 include::playbooks/assembly-networking-playbook.adoc[leveloffset=+1]
 include::playbooks/assembly-playbook-practical-example.adoc[leveloffset=+1]
-
-
-
+include::playbooks/assembly-open-source-license.adoc[leveloffset=+1]
diff --git a/downstream/titles/playbooks/playbooks-reference/docinfo.xml b/downstream/titles/playbooks/playbooks-reference/docinfo.xml
index 34e0e320f9..58966318a4 100644
--- a/downstream/titles/playbooks/playbooks-reference/docinfo.xml
+++ b/downstream/titles/playbooks/playbooks-reference/docinfo.xml
@@ -1,11 +1,11 @@
-Reference Guide to Ansible Playbooks
+Reference guide to Ansible Playbooks
 Red Hat Ansible Automation Platform
 2.5
-Reference Guide to Ansible Playbooks
+Learn about the different approaches for creating playbooks
 This guide provides a reference for the differing approaches to the creating of Ansible playbooks.
 Red Hat Customer Content Services
-
\ No newline at end of file
+
diff --git a/downstream/titles/playbooks/playbooks-reference/master.adoc b/downstream/titles/playbooks/playbooks-reference/master.adoc
index 40f9d25164..251e74881b 100644
--- a/downstream/titles/playbooks/playbooks-reference/master.adoc
+++ b/downstream/titles/playbooks/playbooks-reference/master.adoc
@@ -8,7 +8,7 @@ include::attributes/attributes.adoc[]
 // Book Title
-= Reference Guide for Ansible Playbooks
+= Reference guide to Ansible Playbooks
 Thank you for your interest in {PlatformName}. {PlatformNameShort} is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.
@@ -16,4 +16,4 @@ This guide provides a reference for the differing approaches to the creating of
 include::{Boilerplate}[]
-include::playbooks/assembly-reference-test.adoc[leveloffset=+1]
\ No newline at end of file
+include::playbooks/assembly-reference-test.adoc[leveloffset=+1]
diff --git a/downstream/titles/release-notes/async/aap-25-1-7-oct.adoc b/downstream/titles/release-notes/async/aap-25-1-7-oct.adoc
new file mode 100644
index 0000000000..17443ba194
--- /dev/null
+++ b/downstream/titles/release-notes/async/aap-25-1-7-oct.adoc
@@ -0,0 +1,48 @@
+//This is the working version of the patch release notes document.
+
+[[aap-25-1-7-oct]]
+
+
+= {PlatformNameShort} patch release October 7, 2024
+
+The following enhancements and fixes have been implemented in this release of {PlatformName}.
+
+== Enhancements
+
+* {EDAName} workers and the scheduler now implement timeout and retry resilience when communicating with a Redis cluster. (AAP-32139)
+
+* Removed the *MTLS* credential type that was incorrectly added. (AAP-31848)
+
+== Fixed issues
+
+=== {PlatformNameShort}
+
+* Fixed a conditional that skipped necessary tasks in the restore role, which prevented restores from finishing reconciliation. (AAP-30437)
+
+* Systemd services in the containerized installer now have their restart policy set to *always* by default. (AAP-31824)
+
+* *FLUSHDB* is now modified to account for shared usage of a Redis database. It respects access limitations by removing only those keys that the client has permission to access. (AAP-32138)
+
+* Added a fix to ensure default *extra_vars* values are rendered in the *Prompt on launch* wizard. (AAP-30585)
+
+* Filtered out the unused *ANSIBLE_BASE_* settings from the environment variables used in job execution. (AAP-32208)
+
+
+=== {EDAName}
+
+* Configured the setting *EVENT_STREAM_MTLS_BASE_URL* to the correct default to ensure MTLS is disallowed in the RPM installer. (AAP-32027)
+
+* Configured the setting *EVENT_STREAM_MTLS_BASE_URL* to the correct default to ensure MTLS is disallowed in the containerized installer. (AAP-31851)
+
+* Fixed a bug where the {EDAName} workers and scheduler were unable to reconnect to the Redis cluster if a primary Redis node entered a *failed* state and a new primary node was promoted. See the KCS article link:https://access.redhat.com/articles/7088545[Redis failover causes {EDAName} activation failures], which includes the steps that were necessary before this bug was fixed. (AAP-30722)
+
+== Advisories
+The following errata advisories are included in this release:
+
+* link:https://access.redhat.com/errata/RHBA-2024:7756[RHBA-2024:7756 - Product Release Update]
+
+* link:https://access.redhat.com/errata/RHBA-2024:7760[RHBA-2024:7760 - Container Release Update]
+
+* link:https://access.redhat.com/errata/RHBA-2024:7766[RHBA-2024:7766 - Cluster Scoped Container Release Update]
+
+* link:https://access.redhat.com/errata/RHBA-2024:7810[RHBA-2024:7810 - Setup Bundle Release Update]
diff --git a/downstream/titles/release-notes/async/aap-25-12-18-dec.adoc b/downstream/titles/release-notes/async/aap-25-12-18-dec.adoc
new file mode 100644
index 0000000000..7f59521e4e
--- /dev/null
+++ b/downstream/titles/release-notes/async/aap-25-12-18-dec.adoc
@@ -0,0 +1,171 @@
+[[aap-25-12-18-dec]]
+
+= {PlatformNameShort} patch release December 18, 2024
+
+The following enhancements and bug fixes have been implemented in this release of {PlatformNameShort}.
+
+== Enhancements
+
+=== {PlatformNameShort}
+
+* Added help text to all missing fields in {PlatformNameShort} gateway and `django-ansible-base`. (AAP-37068)
+
+* Consistently formatted sentence structure for `help_text`, and provided more context in the help text where it was vague. (AAP-37016)
+
+* Added dynamic preferences for use by {Analytics}. (AAP-36710)
+
+** `INSIGHTS_TRACKING_STATE`: Enables the service to gather data on automation and send it to {Analytics}.
+
+** `RED_HAT_CONSOLE_URL`: Configures the upload URL for data collection for {Analytics}.
+
+** `REDHAT_USERNAME`: Username used to send data to {Analytics}.
+
+** `REDHAT_PASSWORD`: Password for the account used to send data to {Analytics}.
+
+** `SUBSCRIPTIONS_USERNAME`: Username used to retrieve subscription and content information.
+
+** `SUBSCRIPTIONS_PASSWORD`: Password used to retrieve subscription and content information.
+
+** `AUTOMATION_ANALYTICS_GATHER_INTERVAL`: Interval in seconds at which {Analytics} gathers data.
+
+* Added an enabled flag for turning authenticator maps on or off. (AAP-36709)
+
+* `aap-metrics-utility` has been updated to 0.4.1. (AAP-36393)
+
+* Added the setting `trusted_header_timeout_in_ns` to timegate `X_TRUSTED_PROXY_HEADER` validation in the `django-ansible-base` libraries used by {PlatformNameShort} components. (AAP-36712)
+
+
+=== Documentation updates
+
+* With this update, the {OperatorPlatformNameShort} growth topology and {OperatorPlatformNameShort} enterprise topology have been updated to include s390x (IBM Z) architecture test support.
+
+
+=== {EDAName}
+
+* Extended the scope of the `log_level` and debug settings. (AAP-33669)
+
+* A project can now be synced with the {EDAName} collection modules. (AAP-32264)
+
+* In the Rulebook activation create form, selecting a project is now required before selecting a rulebook. (AAP-28082)
+
+* The btn:[Create credentials] button is now visible regardless of whether any credentials already exist. (AAP-23707)
+
+
+== Bug fixes
+
+=== General
+
+* Fixed an issue where the `django-ansible-base` fallback cache kept creating a *tmp* file even if the *LOCATION* was set to another path. (AAP-36869)
+
+* Fixed an issue where the OIDC authenticator was not allowed to use the JSON key to extract user groups, or to modify a user, via the new `GROUPS_CLAIM` configuration setting. (AAP-36716)
+
+
+With this update, the following CVEs have been addressed:
+
+* link:https://access.redhat.com/security/cve/cve-2024-11079[CVE-2024-11079] `ansible-core`: Unsafe Tagging Bypass via `hostvars` Object in Ansible-Core. (AAP-35563)
+
+* link:https://access.redhat.com/security/cve/cve-2024-53908[CVE-2024-53908] `ansible-lightspeed-container`: Potential SQL injection in `HasKey(lhs, rhs)` on Oracle. (AAP-36767)
+
+* link:https://access.redhat.com/security/cve/cve-2024-53907[CVE-2024-53907] `ansible-lightspeed-container`: Potential denial-of-service in `django.utils.html.strip_tags()`. (AAP-36755)
+
+* link:https://access.redhat.com/security/cve/cve-2024-11483[CVE-2024-11483], which allowed users to escape the scope of their personal access *OAuth2* tokens, from read-scoped to read-write-scoped, in the gateway. (AAP-36261)
+
+
+=== {PlatformName}
+
+* Fixed an issue where queries for role user assignments in the platform UI succeeded only about 75% of the time. (AAP-36872)
+
+* Fixed an issue where the user was unable to filter job templates by *label* in {PlatformNameShort} 2.5. (AAP-36540)
+
+* Fixed an issue where it was not possible to open a job template after removing the user that created the template. (AAP-35820)
+
+* Fixed an issue where the inventory source update failed, and did not allow selection of the inventory file. (AAP-35246)
+
+* Fixed an issue where the *Login Redirect Override* setting was missing and not functioning as expected in {PlatformNameShort} 2.5. (AAP-33295)
+
+* Fixed an issue where users were able to select a credential that required a password when defining a schedule. (AAP-32821)
+
+* Fixed an issue where the job output did not show unless you switched tabs. This also fixed other display issues. (AAP-31125)
+
+* Fixed an issue where adding a new Automation Decision role to a team did not work from the {MenuAMTeams} navigation path. (AAP-31873)
+
+* Fixed an issue where a migration was missing from {PlatformNameShort}. (AAP-37015)
+
+* Fixed an issue where the gateway *OAuth* token was not encrypted at rest. (AAP-36715)
+
+* Fixed an issue where the API forced the user to save a service with an API port even if one did not exist. (AAP-36714)
+
+* Fixed an issue where the Gateway did not properly interpret SAML attributes for mappings. (AAP-36713)
+
+* Fixed an issue where non-self-signed *certificate+key* pairs were allowed to be used in SAML authenticator configurations. (AAP-36707)
+
+* Fixed an issue where the login page was not redirecting to `/api/gateway/v1` if a user was already logged in. (AAP-36638)
+
+
+=== {HubNameMain}
+
+* When configuring an *Ansible Remote* to sync collections from other servers, a requirements file is required only for syncs from Galaxy, and is optional otherwise.
Without a requirements file, all collections are synced. (AAP-31238)
+
+
+=== Container-based {PlatformNameShort}
+
+* Fixed an issue that allowed {ControllerName} nodes to override the `receptor_peers` variable. (AAP-37085)
+
+* Fixed an issue where the containerized installer ignored `receptor_type` for {ControllerName} hosts and always installed them as hybrid. (AAP-37012)
+
+* Fixed an issue where Podman was not present in the task container, and the cleanup image task failed. (AAP-37011)
+
+* Fixed an issue where only one {ControllerName} node was configured with Execution/Hop node peers rather than all {ControllerName} nodes. (AAP-36851)
+
+* Fixed an issue where the {ControllerName} services lost connection to the database, the containers were stopped, and the `systemd` unit did not try to restart them. (AAP-36850)
+
+* Fixed an issue where the `receptor_type` and `receptor_protocol` variable validation checks were skipped during the preflight role execution. (AAP-36857)
+
+
+=== {EDAName}
+
+* Fixed an issue where the URL field of the event stream was not updated if the `EDA_EVENT_STREAM_BASE_URL` setting changed. (AAP-33819)
+
+* Fixed an issue where {EDAName} and {ControllerName} fields were pre-populated with gateway credentials when `secret: true` was set on custom credentials. (AAP-33188)
+
+* Fixed an issue where the bulk removal of selected role permissions disappeared when more than 4 permissions were selected. (AAP-28030)
+
+* Fixed an issue where *Enabled options* had its own scrollbar on the *Rulebook Activation Details* page. (AAP-31130)
+
+* Fixed an issue where the status of an activation was occasionally inconsistent with the status of the latest instance after a restart. (AAP-29755)
+
+* Fixed an issue where importing a project from a non-existing branch resulted in a Completed state instead of a Failed status. (AAP-29144)
+
+* Fixed an issue with custom credential types where clicking *Generate extra vars* before defining the `fields` key in the input configuration created an empty line that was uneditable. (AAP-28084)
+
+* Fixed an issue where the project sync would not fail on an empty or unstructured git repository. (AAP-35777)
+
+* Fixed an issue where rulebook validation during import/sync failed when a rulebook had a duplicated rule name. (AAP-35164)
+
+* Fixed an issue where the {EDAName} API allowed a credential's type to be changed. (AAP-34968)
+
+* Fixed an issue where a previously failed project could be accidentally changed to *completed* after a resync. (AAP-34744)
+
+* Fixed an issue where no message was recorded when a project did not contain any rulebooks. (AAP-34555)
+
+* Fixed an issue where the name for credentials in the rulebook activation form field was not updated. (AAP-34123)
+
+* Updated the message for the rulebook activation/event streams for better clarity. (AAP-33485)
+
+* Fixed an issue where the source plugin was not able to use the `env vars` to establish a successful connection to the remote source. (AAP-35597)
+
+* Fixed an issue in the collection where the activation module failed with a misleading error message if the rulebook, project, decision environment, or organization could not be found. (AAP-35360)
+
+* Fixed an issue where the validation of a host specified as part of a container registry credential did not conform to container registry standards. Previously, a syntactically invalid host (name or net address) and optional port value `([:])` could be used.
The validation is now applied when creating a credential and when modifying an existing credential, regardless of which fields are modified. (AAP-34969)
+
+* Fixed an issue where multiple {PlatformName} credentials were being attached to activations. (AAP-34025)
+
+* Fixed an issue where there was an erroneous dependency on the existence of an organization named *Default*. (AAP-33551)
+
+* Fixed an issue where occasionally an activation was reported as running before it was ready to receive events. (AAP-31225)
+
+* Fixed an issue where the user could not edit auto-generated *injector vars* while creating {EDAName} custom credentials. (AAP-29752)
+
+* Fixed an issue where in some cases the `file_watch` source plugin in an {EDAName} collection raised the *QueueFull* exception. (AAP-29139)
+
+* Fixed an issue where the {EDAName} database increased in size continuously, even if the database was unused. Added the `purge_record` script to clean up outdated database records. (AAP-30684)
\ No newline at end of file
diff --git a/downstream/titles/release-notes/async/aap-25-2-14-oct.adoc b/downstream/titles/release-notes/async/aap-25-2-14-oct.adoc
new file mode 100644
index 0000000000..f388c316f1
--- /dev/null
+++ b/downstream/titles/release-notes/async/aap-25-2-14-oct.adoc
@@ -0,0 +1,39 @@
+[[aap-25-1-14-oct]]
+
+= {PlatformNameShort} patch release October 14, 2024
+
+The following fixes have been implemented in this release of {PlatformName}.
+
+== Fixed issues
+
+=== {PlatformNameShort}
+
+* Fixed an issue in {Gateway} where examining output logs for UWSGI showed a message that could be viewed as insensitive. (AAP-33213)
+
+* Fixed an external Redis port configuration issue, which resulted in a `cluster_host` error when trying to connect to Redis. (AAP-32691)
+
+* Fixed a faulty conditional that caused managed Redis to be deployed even if an external Redis was being configured. (AAP-31607)
+
+* After the initial deployment of {PlatformNameShort}, if you make changes to the {ControllerName}, {HubName}, or {EDAName} sections of the {PlatformNameShort} CR specification, those changes are now propagated to the component custom resources. (AAP-32350)
+
+* Fixed an issue where, when the `keep_keys` filter was used, all keys were removed from the dictionary. The `keep_keys` fix is available in the updated `ansible.utils` collection. (AAP-32960)
+
+* Fixed an issue in `cisco.ios.ios_static_routes` where the metric distance was populated in the `forward_router_address` attribute. (AAP-32960)
+
+* Fixed an issue where {OperatorPlatformNameShort} was not transferring metric settings to the controller. (AAP-32073)
+
+* Fixed an issue where, if you had a schedule on a resource (such as a job template) that prompts for credentials, and you updated the credential to be different from what is on the resource by default, the new credential was not submitted to the API and did not get updated. (AAP-31957)
+
+* Fixed an issue where setting `pg_host=` without any other context resulted in an empty HOST section of `settings.py` in controller. (AAP-32440)
+
+== Advisories
+The following errata advisories are included in this release:
+
+* link:https://access.redhat.com/errata/RHBA-2024:8079[RHBA-2024:8079 - Product Release Update]
+
+* link:https://access.redhat.com/errata/RHBA-2024:8084[RHBA-2024:8084 - Container Release Update]
+
+* link:https://access.redhat.com/errata/RHBA-2024:8096[RHBA-2024:8096 - Cluster Scoped Container Release Update]
+
+* link:https://access.redhat.com/errata/RHBA-2024:8141[RHBA-2024:8141 - Setup Bundle Release Update]
+
diff --git a/downstream/titles/release-notes/async/aap-25-20250115.adoc b/downstream/titles/release-notes/async/aap-25-20250115.adoc
new file mode 100644
index 0000000000..475083431c
--- /dev/null
+++ b/downstream/titles/release-notes/async/aap-25-20250115.adoc
@@ -0,0 +1,106 @@
+[[aap-25-20250115]]
+
+= {PlatformNameShort} patch release January 15, 2025
+
+The following enhancements and bug fixes have been implemented in this release of {PlatformNameShort}.
+
+== Enhancements
+
+=== {PlatformNameShort}
+
+* With this update, the `ansible.controller` collection has been updated to 4.6.6. (AAP-38443)
+
+* Enhanced the *status API*, `/api/gateway/v1/status/`, by changing the *services* property within the JSON to an array. Consumers of this API can still request the previous format with the URL query parameter `service_keys=true`. (AAP-37903)
+
+
+=== {OperatorPlatformNameShort}
+
+* Added the ability to configure `topology_spread_constraints`, `node_selector`, and `tolerations` for gateway deployments. (AAP-37193)
+
+=== Container-based {PlatformNameShort}
+
+* TLS certificate and key files are now validated during the preflight role execution.
+
+** If the TLS certificate file is provided, then the TLS key file must be provided.
+
+** If the TLS key file is provided, then the TLS certificate file must be provided.
+
+** The moduli of the TLS certificate and key must match. (AAP-37845)
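The modulus comparison described in the list above is a standard OpenSSL check. The following is a minimal sketch of how you might verify a certificate/key pair yourself before running the installer, assuming an RSA key; the file paths are placeholders, and this is an illustration rather than the installer's actual preflight implementation.

[source,yaml]
----
# Hypothetical pre-install check: confirm an RSA private key matches its
# certificate by comparing modulus output. Paths are placeholders.
- name: Verify that the TLS certificate and key moduli match
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Read the certificate modulus
      ansible.builtin.command: openssl x509 -noout -modulus -in /etc/pki/tls/certs/aap.crt
      register: cert_modulus
      changed_when: false

    - name: Read the private key modulus
      ansible.builtin.command: openssl rsa -noout -modulus -in /etc/pki/tls/private/aap.key
      register: key_modulus
      changed_when: false

    - name: Fail if the moduli differ
      ansible.builtin.assert:
        that:
          - cert_modulus.stdout == key_modulus.stdout
        fail_msg: The TLS certificate and key do not belong together.
----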
+
+
+
+== Bug fixes
+
+=== CVE
+
+With this update, the following CVEs have been addressed:
+
+* link:https://access.redhat.com/security/cve/cve-2024-52304[CVE-2024-52304] `python3.11-aiohttp`: `aiohttp` vulnerable to request smuggling due to incorrect parsing of chunk extensions. (AAP-36192)
+
+* link:https://access.redhat.com/security/cve/cve-2024-55565[CVE-2024-55565] `automation-gateway`: `nanoid` mishandles non-integer values. (AAP-37168)
+
+* link:https://access.redhat.com/security/cve/cve-2024-53908[CVE-2024-53908] `automation-controller`: Potential SQL injection in `HasKey(lhs, rhs)` on Oracle. (AAP-36769)
+
+* link:https://access.redhat.com/security/cve/cve-2024-53907[CVE-2024-53907] `automation-controller`: Potential denial-of-service in `django.utils.html.strip_tags()`. (AAP-36756)
+
+* link:https://access.redhat.com/security/cve/cve-2024-11407[CVE-2024-11407] `automation-controller`: Denial-of-service through data corruption in `gRPC-C++`. (AAP-36745)
+
+* link:https://access.redhat.com/security/cve/cve-2024-52304[CVE-2024-52304] `ansible-lightspeed-container`: `aiohttp` vulnerable to request smuggling due to incorrect parsing of chunk extensions. (AAP-36185)
+
+* link:https://access.redhat.com/security/cve/cve-2024-56201[CVE-2024-56201] `ansible-lightspeed-container`: Jinja has a sandbox breakout through malicious filenames. (AAP-38079)
+
+* link:https://access.redhat.com/security/cve/cve-2024-56326[CVE-2024-56326] `ansible-lightspeed-container`: Jinja has a sandbox breakout through indirect reference to format method. (AAP-38056)
+
+* link:https://access.redhat.com/security/cve/cve-2024-11407[CVE-2024-11407] `ansible-lightspeed-container`: Denial-of-service through data corruption in `gRPC-C++`. (AAP-36744)
+
+
+=== {PlatformName}
+
+* Fixed a *not found* error that occurred occasionally when navigating through the form wizards. (AAP-37495)
+
+* Fixed an issue where installing `ansible-core` no longer installed `python3-jmespath` on {RHEL} 8. (AAP-18251)
+
+* Fixed an issue where the `ID_KEY` attribute was improperly used to determine the username field in social auth pipelines. (AAP-38300)
+
+* Fixed an issue where an authenticator could create a *userid* and return a non-viable *authenticator_uid*. (AAP-38021)
+
+* Fixed an issue where a private key was displayed in plain text when downloading the OpenAPI schema file. This was not the private key used by the gateway, but a random default key. (AAP-37843)
+
+
+=== {ControllerNameStart}
+
+* Fixed an issue that did not allow sending `job_lifecycle` logs to external aggregators. (AAP-37537)
+
+* Fixed an issue where a date comparison mismatch caused a traceback from the `host_metric_summary_monthly` task. (AAP-37487)
+
+* Fixed an issue where scheduled jobs with count set to a *non-zero* value would run unexpectedly. (AAP-37290)
+
+* Fixed an issue where a project's `requirements.yml` could revert to a prior state in a cluster. (AAP-37228)
+
+* Fixed an issue where there would be an occasional error creating the event partition table before starting a job, when a large number of jobs were launched quickly. (AAP-37227)
+
+* Fixed an issue where temporary receptor files were not cleaned up after a job completed on nodes. (AAP-36904)
+
+* Fixed an issue where a *POST* to `/api/controller/login/` via the gateway resulted in a fatal response. (AAP-33911)
+
+* Fixed an issue where, when a job template was launched, the named URL returned a *404* error code. (AAP-37025)
+
+
+=== Container-based {PlatformNameShort}
+
+* Fixed an issue where the receptor TLS certificate content was not validated during the preflight role execution, ensuring that the *x509 Subject Alt Name* (SAN) field contains the required ISO Object Identifier (OID) 1.3.6.1.4.1.2312.19.1. (AAP-37880)
+
+* Fixed an issue where the *PostgreSQL SSL* mode variables for controller, {EDAName}, gateway, and {HubName} were not validated during the preflight role execution. (AAP-37352)
+
+* Fixed an issue where the {PlatformNameShort} containerized setup installation would upload collections when inventory growth in the AIO installation was used. (AAP-38372)
+
+* Fixed an issue where the throttle capacity of controller in an AIO installation would allow for performance degradation. (AAP-38207)
+
+
+=== RPM-based {PlatformNameShort}
+
+* Fixed an issue where adding a new {HubName} host to an upgraded environment caused the installation to fail. (AAP-38204)
+
+* Fixed an issue where the link to the documents in the installer *README.md* was broken. (AAP-37627)
+
+* Fixed an issue where the Gateway API status on the {EDAName} proxy component returned *404* errors. (AAP-32816)
diff --git a/downstream/titles/release-notes/async/aap-25-20250122.adoc b/downstream/titles/release-notes/async/aap-25-20250122.adoc
new file mode 100644
index 0000000000..e0260a5685
--- /dev/null
+++ b/downstream/titles/release-notes/async/aap-25-20250122.adoc
@@ -0,0 +1,23 @@
+[[aap-25-20250122]]
+
+= {PlatformNameShort} patch release January 22, 2025
+
+The following enhancements and bug fixes have been implemented in this release of {PlatformNameShort}.
+
+== Enhancements
+
+=== {PlatformNameShort}
+
+* Legacy *Auth SSO URL* settings are now customizable if needed for gateway, controller, and hub overrides passed on the {PlatformNameShort} CR, if provided. This is mainly useful if you are using a custom ingress controller. (AAP-37364)
+
+
+== Bug fixes
+
+=== {PlatformNameShort}
+
+* Fixed an issue where there was a `service_id` mismatch between gateway and {EDAName}, which was causing activation rulebooks to fail. (AAP-38172)
+
+[NOTE]
+====
+This fix applies to {OCPShort} only.
+====
diff --git a/downstream/titles/release-notes/async/aap-25-20250129.adoc b/downstream/titles/release-notes/async/aap-25-20250129.adoc
new file mode 100644
index 0000000000..36d29cddf6
--- /dev/null
+++ b/downstream/titles/release-notes/async/aap-25-20250129.adoc
@@ -0,0 +1,84 @@
+[[aap-25-20250129]]
+
+= {PlatformNameShort} patch release January 29, 2025
+
+The following enhancements and bug fixes have been implemented in this release of {PlatformNameShort}.
+
+== Enhancements
+
+=== {PlatformNameShort}
+
+* Using PostgreSQL TLS certificate authentication with an external database is now available. (AAP-38400)
+
+
+=== {EDAName}
+
+* The `ansible.eda` collection has been updated to 2.3.1. (AAP-39057)
+* Users are now able to create a new {EDAName} credential by copying an existing one. (AAP-39249)
+* Added support for *file* and *env* injectors for credentials. (AAP-39091)
+
+
+=== RPM-based {PlatformNameShort}
+
+* Implemented certificate authentication support (mTLS) for external databases.
+** PostgreSQL TLS certificate authentication is available for external databases.
+** PostgreSQL TLS certificate authentication can be turned on or off (off by default for backward compatibility).
+** Each component ({ControllerName}, {EDAName}, {Gateway}, and {HubName}) now provides off-the-shelf (OTS) TLS certificate and key files (mandatory). (AAP-38400)
+
+
+== Bug fixes
+
+=== CVE
+
+With this update, the following CVEs have been addressed:
+
+* link:https://access.redhat.com/security/cve/cve-2024-56326[CVE-2024-56326] `python3.11-jinja2`: Jinja has a sandbox breakout through indirect reference to format method. (AAP-38852)
+
+* link:https://access.redhat.com/security/cve/CVE-2024-56374[CVE-2024-56374] `ansible-lightspeed-container`: Potential denial-of-service vulnerability in IPv6 validation. (AAP-38647)
+
+* link:https://access.redhat.com/security/cve/CVE-2024-56374[CVE-2024-56374] `python3.11-django`: Potential denial-of-service vulnerability in IPv6 validation. (AAP-38630)
+
+* link:https://access.redhat.com/security/cve/cve-2024-53907[CVE-2024-53907] `python3.11-django`: Potential denial-of-service in `django.utils.html.strip_tags()`. (AAP-38486)
+
+* link:https://access.redhat.com/security/cve/cve-2024-56201[CVE-2024-56201] `python3.11-jinja2`: Jinja has a sandbox breakout through malicious filenames. (AAP-38331)
+
+* link:https://access.redhat.com/security/cve/CVE-2024-56374[CVE-2024-56374] `automation-controller`: Potential denial-of-service vulnerability in IPv6 validation. (AAP-38648)
+
+* link:https://access.redhat.com/security/cve/cve-2024-56201[CVE-2024-56201] `automation-controller`: Jinja has a sandbox breakout through malicious filenames. (AAP-38081)
+
+* link:https://access.redhat.com/security/cve/cve-2024-56326[CVE-2024-56326] `automation-controller`: Jinja has a sandbox breakout through indirect reference to format method. (AAP-38058)
+
+
+
+=== {ControllerNameStart}
+
+* Fixed an issue where the order of source inventories was not respected by the `ansible.controller` collection. (AAP-38524)
+
+* Fixed an issue where an actively running job on an execution node may have had its folder deleted by a system task. This fix addresses some *Failed to JSON parse a line from worker stream* type errors. (AAP-38137)
+
+
+
+=== Container-based {PlatformNameShort}
+
+* The inventory file variable `postgresql_admin_username` is no longer required when using an external database. If you do not have database administrator credentials, you can supply the database credentials for each component in the inventory file instead. (AAP-39077)
+
+
+=== {EDAName}
+
+* Fixed an issue where the application version in the *openapi* spec was incorrectly set. (AAP-38392)
+
+* Fixed an issue where activations were not properly updated in some scenarios under a high load on the system. (AAP-38374)
+
+* Fixed an issue where users were unable to filter *Rule Audits* by rulebook activation name. (AAP-39253)
+
+* Fixed an issue where the input field of the injector configuration could not be empty. (AAP-39086)
+
+
+=== RPM-based {PlatformNameShort}
+
+* Fixed an issue where setting `automationedacontroller_max_running_activations` could cause the installer to fail. (AAP-38708)
+
+* Fixed an issue where the {Gateway} services were not restarted when a dependency changed. (AAP-38918)
+
+* Fixed an issue where the {Gateway} could not be set up with custom SSL certificates. (AAP-38985)
+
diff --git a/downstream/titles/release-notes/async/aap-25-20250213.adoc b/downstream/titles/release-notes/async/aap-25-20250213.adoc
new file mode 100644
index 0000000000..6d694e9de7
--- /dev/null
+++ b/downstream/titles/release-notes/async/aap-25-20250213.adoc
@@ -0,0 +1,133 @@
+[[aap-25-20250213]]
+
+= {PlatformNameShort} patch release February 13, 2025
+
+This release includes the following components and versions:
+
+[cols="1a,3a", options="header"]
+|===
+| Release date | Component versions
+
+| February 13, 2025 |
+* {ControllerNameStart} 4.6.8
+* {HubNameStart} 4.10.1
+* {EDAName} 1.1.4
+* Container-based installer {PlatformNameShort} (bundle) 2.5-10
+* Container-based installer {PlatformNameShort} (online) 2.5-10
+* Receptor 1.5.1
+* RPM-based installer {PlatformNameShort} (bundle) 2.5-8.1
+* RPM-based installer {PlatformNameShort} (online) 2.5-8
+
+|===
+
+CSV versions in this release:
+
+* Namespace-scoped bundle: `aap-operator.v2.5.0-0.1738808953`
+
+* Cluster-scoped bundle: `aap-operator.v2.5.0-0.1738809624`
+
+The following enhancements and bug fixes have been implemented in this release of {PlatformNameShort}.
+
+
+== New features
+
+=== {PlatformNameShort}
+
+* Keycloak now allows for the configuration of the claim key/name for the field containing a user's group membership returned in the ID token and/or user info data. This can be configured by setting the `GROUPS_CLAIM` configuration value on a per-authenticator plugin basis, as was done for the OIDC plugin. (AAP-38720)
+
+== Enhancements
+
+=== General
+
+* The `ansible.controller` collection has been updated to 4.6.8. (AAP-39848)
+
+* The `ansible.platform` collection has been updated to 2.5.20250213. (AAP-39740)
+
+* The `ansible.eda` collection has been updated to 2.4.0. (AAP-39577)
+
+=== {PlatformNameShort}
+
+* It is now possible to configure {HubName} without a Redis PVC. (AAP-39600)
+
+
+=== {ControllerNameStart}
+
+* This release adds `client_id` and `client_secret` fields to the Insights credential to support service accounts via console.redhat.com. (AAP-36565)
+
+* You are now able to specify the input for the `client_id` and `client_secret` for the Insights credential via the `awx.awx.credential_type` module. (AAP-37441)
+
+* Updated `awxkit` by adding service account support for the Insights credential type, specifically adding the fields `client_id` and `client_secret` to `credential_input_fields`. (AAP-39352)
+
+=== {ExecEnvNameStart}
+
+* The *file* command has been added to the *ee-minimal* and *ee-supported* container images. (AAP-40009)
+
+== Bug fixes
+
+=== Migration
+
+* Fixed an issue where, after upgrading {PlatformNameShort} from 2.4 to 2.5, many of the surveys that had multiple choice options displayed a blank space in the drop-down menu. (AAP-35093)
+
+=== {PlatformNameShort}
+
+* Fixed a bug in the collection's token module where it was unable to find an application if multiple organizations had the same application name. (AAP-38625)
+
+* Fixed an issue where upgrading {PlatformNameShort} 2.5 caused an occasional internal server error for all users with {EDAName} and {HubNameStart} post upgrade. (AAP-39293)
+
+* Fixed an issue where the administrator was not allowed to configure auto migration of legacy authenticators. (AAP-39949)
+
+* Fixed an issue where there were two launch/relaunch icons displayed in the jobs list for failed jobs. (AAP-38483)
+
+* Fixed an issue where the *Schedules Add* wizard returned a `RequestError` *Not Found*. (AAP-37909)
+
+* Fixed an issue where the *EC2 Inventory Source* type required credentials, which are not necessary when using IAM instance profiles. (AAP-37346)
+
+* Fixed an issue where attempting to assign the *Automation Decisions - Organization Admin* role to a user in an organization resulted in the error *Not managed locally, use the resource server instead*. Administrators can now be added by using the *Organization -> Administrators* tab. (AAP-37106)
+
+* Fixed an issue where, when updating a workflow node, the Job Tags were lost and Skip Tags were not saved. (AAP-35956)
+
+* Fixed an issue where new users who logged in with legacy authentication were not merged when switching to Gateway authentication. (AAP-40120)
+
+* Fixed an issue where the user was unable to link legacy SSO accounts to Gateway. (AAP-40050)
+
+* Fixed an issue where updating {PlatformNameShort} to 2.5 caused an Internal Server Error for all users with {EDAName} and {HubNameStart} post upgrade. The migration process now detects and fixes users who were created in services via JWT auth and improperly linked to the service instead of the {Gateway}. (AAP-39914)
+
+
+=== {OperatorPlatformNameShort}
+
+* Fixed an issue where `AnsibleWorkflow` custom resources would not parse and utilize `extra_vars` if specified. (AAP-39005)
+
+=== {ControllerNameStart}
+
+* Fixed an issue where, when an Azure credential was created using `awxkit`, the creation failed because the parameter `client_id` was added to the input fields while the API was not expecting it. (AAP-39846)
+
+* Fixed an issue where job schedules were running at incorrect times when the schedule's start time fell within a Daylight Saving Time period. (AAP-39826)
+
+
+=== {HubNameStart}
+
+* Fixed an issue where the use of empty usernames and passwords when creating a remote registry was not allowed. (AAP-26462)
+
+
+=== Container-based {PlatformNameShort}
+
+* Fixed an issue where the containerized installer had no preflight check for the Postgres version of an external database. (AAP-39727)
+
+* Fixed an issue where the containerized installer could not register other peers in the database. (AAP-39470)
+
+* Fixed an issue where there was a missing installation user UID check. (AAP-39393)
+
+* Fixed an issue where PostgreSQL connection errors would be hidden during PostgreSQL configuration. (AAP-39389)
+
+* Fixed a regression in the preflight check that occurred when the provided TLS private key was not an RSA type. (AAP-39816)
+
+
+=== {EDAName}
+
+* Fixed an issue where the btn:[Generate extra vars] button did not handle file/env injected credentials. (AAP-36003)
+
+=== Known issues
+
+* In the {Gateway}, the tooltip for *Projects -> Create Project - Project Base Path* is undefined. (AAP-27631)
+
+* Deploying the {Gateway} on FIPS-enabled RHEL 9 is currently not supported. (AAP-39146)
diff --git a/downstream/titles/release-notes/async/aap-25-20250225.adoc b/downstream/titles/release-notes/async/aap-25-20250225.adoc
new file mode 100644
index 0000000000..908ec1b4e6
--- /dev/null
+++ b/downstream/titles/release-notes/async/aap-25-20250225.adoc
@@ -0,0 +1,105 @@
+[[aap-25-20250225]]
+
+= {PlatformNameShort} patch release February 25, 2025
+
+This release includes the following components and versions:
+
+[cols="1a,3a", options="header"]
+|===
+| Release date | Component versions
+
+| February 25, 2025 |
+* {ControllerNameStart} 4.6.8
+* {HubNameStart} 4.10.1
+* {EDAName} 1.1.4
+* Container-based installer {PlatformNameShort} (bundle) 2.5-10.1
+* Container-based installer {PlatformNameShort} (online) 2.5-10
+* Receptor 1.5.1
+* RPM-based installer {PlatformNameShort} (bundle) 2.5-8.2
+* RPM-based installer {PlatformNameShort} (online) 2.5-8
+
+|===
+
+CSV versions in this release:
+
+* Namespace-scoped bundle: `aap-operator.v2.5.0-0.1740093573`
+
+* Cluster-scoped bundle: `aap-operator.v2.5.0-0.1740094176`
+
+
+The following enhancements and bug fixes have been implemented in this release of {PlatformNameShort}.
+
+
+== Enhancements
+
+=== {GatewayStart}
+
+* Previously, `gateway_proxy_url` was used for the proxy health check; it is no longer used, in favor of the `ENVOY_HOSTNAME` setting. (AAP-39907)
+
+
+=== {EDAName}
+
+* In the credential type schema, the `format` field can be set to `binary_base64` to specify that a file should be loaded as a binary file. (AAP-36581)
+
+Sample credential type schema (inputs configuration):
+
+[source,yaml]
+----
+fields:
+  - id: keytab
+    type: string
+    label: Kerberos Keytab file
+    format: binary_base64
+    secret: true
+    help_text: Please select a Kerberos Keytab file
+    multiline: true
+----
+
+
+== Bug fixes
+
+=== {PlatformNameShort}
+
+* Fixed an issue where the subscription entitlement expiration notification was visible even when the subscription was active. (AAP-39982)
+
+* Fixed an issue where, upon UI reload/refresh, logs of a job that was running before the refresh would not appear until new logs were generated from the playbook. (AAP-38924)
+
+* Fixed an issue where the customer was unable to scale down replicas to put {PlatformNameShort} into idle mode. (AAP-39492)
+
+* After launching the *Workflow Job Template*, the launched job for a job template node in the workflow now contains the `job_tags` and `skip_tags` that were specified in the *launch prompt* step. (AAP-40395)
+
+* Fixed an issue where the user was not able to create a members role in {PlatformNameShort} 2.5. (AAP-37626)
+
+* Fixed an issue where a custom image showed Base64-encoded data. (AAP-26984)
+
+* Fixed an issue where a custom logo showed Base64-encoded data. (AAP-26909)
+
+* Fixed an issue that restricted users from executing jobs for which they had the correct permissions. (AAP-40398)
+
+* Fixed an issue where the workflow job template node extra vars were not saved. (AAP-40396)
+
+* Fixed an issue where the {TitleBuilder} guide had the incorrect ansible-core version. (AAP-40390)
+
+* Fixed an issue where you were not able to create a members role in {PlatformNameShort} 2.5. (AAP-40698)
+
+* Fixed an issue where the initial login to any of the services from {Gateway} could result in the user being given access to the wrong account. (AAP-40617)
+
+* Fixed an issue where the service-owned resources were not kept in sync with the {Gateway}, allowing duplicate name values on user login. (AAP-40616)
+
+* Fixed an issue where users, organizations, and teams became permanently out of sync if any user, organization, or team was deleted from the {Gateway}. (AAP-40615)
+
+* Fixed an issue where {HubName} would fail to run the sync task if any users were deleted from the system. (AAP-40613)
+
+
+=== {GatewayStart}
+
+* Fixed an issue where ping and status checks with resolvable, but nonresponding, URLs could cause all {Gateway} `uwsgi` workers to hang until all were exhausted.
The new settings are `PING_PAGE_CHECK_TIMEOUT` and `PING_PAGE_CHECK_IGNORE_CERT`. (AAP-39907)
+
+
+=== {EDAName}
+
+* Fixed an issue where credentials could be copied in {PlatformNameShort} but could not be copied in {EDAName}. (AAP-35875)
+
+
+=== Known issues
+
+* In the {Gateway}, the tooltip for *Projects -> Create Project - Project Base Path* is undefined. (AAP-27631)
+
+* Deploying the {Gateway} on FIPS-enabled RHEL 9 is currently not supported. (AAP-39146)
diff --git a/downstream/titles/release-notes/async/aap-25-20250305.adoc b/downstream/titles/release-notes/async/aap-25-20250305.adoc
new file mode 100644
index 0000000000..0bac9f8a7b
--- /dev/null
+++ b/downstream/titles/release-notes/async/aap-25-20250305.adoc
@@ -0,0 +1,47 @@
+[[aap-25-20250305]]
+
+= {PlatformNameShort} patch release March 1, 2025
+
+This release includes the following components and versions:
+
+[cols="1a,3a", options="header"]
+|===
+| Release date | Component versions
+
+| March 1, 2025 |
+* {ControllerNameStart} 4.6.8
+* {HubNameStart} 4.10.1
+* {EDAName} 1.1.4
+* Container-based installer {PlatformNameShort} (bundle) 2.5-10.2
+* Container-based installer {PlatformNameShort} (online) 2.5-10
+* Receptor 1.5.1
+* RPM-based installer {PlatformNameShort} (bundle) 2.5-8.3
+* RPM-based installer {PlatformNameShort} (online) 2.5-8
+
+|===
+
+CSV versions in this release:
+
+* Namespace-scoped bundle: `aap-operator.v2.5.0-0.1740773472`
+
+* Cluster-scoped bundle: `aap-operator.v2.5.0-0.1740774104`
+
+[IMPORTANT]
+====
+An issue was found in affected versions of {PlatformNameShort} that enabled a less privileged user (even an unauthenticated one) to be promoted to a more privileged user. All {PlatformNameShort} {PlatformVers} customers should upgrade their environments to the latest version as soon as possible to resolve this issue.
+{AAPonAzureNameShort} and {SaaSonAWSShort} environments are already patched by Red Hat.
+====
+
+The following bug fixes have been implemented in this release of {PlatformNameShort}:
+
+== Bug fixes
+
+=== CVE
+
+With this update, the following CVE has been addressed:
+
+* link:https://access.redhat.com/security/cve/CVE-2025-1801[CVE-2025-1801] `automation-gateway`: `aap-gateway` privilege escalation. (AAP-41180)
+
+=== {GatewayStart}
+
+* Fixed an issue that caused the API to randomly return 401 errors. (AAP-41054)
diff --git a/downstream/titles/release-notes/async/aap-25-20250312.adoc b/downstream/titles/release-notes/async/aap-25-20250312.adoc
new file mode 100644
index 0000000000..447679eb4a
--- /dev/null
+++ b/downstream/titles/release-notes/async/aap-25-20250312.adoc
@@ -0,0 +1,177 @@
+[[aap-25-20250312]]
+
+= {PlatformNameShort} patch release March 12, 2025
+
+This release includes the following components and versions:
+
+[cols="1a,3a", options="header"]
+|===
+| Release date | Component versions
+
+| March 12, 2025 |
+* {ControllerNameStart} 4.6.9
+* {HubNameStart} 4.10.2
+* {EDAName} 1.1.6
+* Container-based installer {PlatformNameShort} (bundle) 2.5-11
+* Container-based installer {PlatformNameShort} (online) 2.5-11
+* Receptor 1.5.3
+* RPM-based installer {PlatformNameShort} (bundle) 2.5-9
+* RPM-based installer {PlatformNameShort} (online) 2.5-9
+
+|===
+
+CSV versions in this release:
+
+* Namespace-scoped bundle: `aap-operator.v2.5.0-0.1740093573`
+
+* Cluster-scoped bundle: `aap-operator.v2.5.0-0.1740094176`
+
+
+The following enhancements and bug fixes have been implemented in this release of {PlatformNameShort}.
+
+
+== General
+
+* The `ansible.controller` collection has been updated to 4.6.9. (AAP-41400)
+
+* `ansible-lint` has been updated to 25.1.2. (AAP-38116)
+
+* Fixed an issue where the bundle installer and ee-supported image did not contain the latest collection versions. The following collections have been updated in the ee-supported image and the bundle installer:
+** amazon.aws 9.2.0
+** ansible.windows 2.7.0
+** arista.eos 10.0.1
+** cisco.ios 9.1.1
+** cisco.iosxr 10.3.0
+** cisco.nxos 9.3.0
+** cloud.common 4.0.0
+** cloud.terraform 3.0.0
+** kubernetes.core 5.1.0
+** microsoft.ad 1.8.0
+** redhat.openshift 4.0.1
+** vmware.vmware 1.10.1
+** vmware.vmware_rest 4.6.0 (AAP-39960)
+
+* Fixed an issue where `ansible-rulebook` did not support third-party Python libraries by default. (AAP-41341)
+
+
+== Features
+
+=== {EDAName}
+
+* Adopts the new credential copy endpoint from the API. (AAP-41384)
+
+
+== Enhancements
+
+=== {EDAName}
+
+* {EDAName} activation logging is now provided via the `journald` driver. (AAP-39745)
+
+* Rulebook activations' log message field is now separated into timestamp and message fields. (AAP-39743)
+
+* Moved the `ansible.eda` collection from de-supported to de-minimal, as elements of the collection are required for all {EDAName} images. (AAP-39749)
+
+
+=== RPM-based {PlatformNameShort}
+
+* The `setup.sh` script now has an option to collect `sosreport`. (AAP-40085)
+
+
+== Deprecated
+
+* Deprecated the variables `eda_main_url` and `hub_main_url` in favor of the {Gateway} proxy URL. {HubNameStart} now uses the {Gateway} proxy URL. (AAP-41306)
+
+
+== Bug fixes
+
+With this update, the following CVE has been addressed:
+
+* link:https://access.redhat.com/security/cve/cve-2025-26791[CVE-2025-26791] `automation-gateway`: Mutation XSS in `DOMPurify` due to improper template literal handling. (AAP-40402)
+
+=== {PlatformNameShort}
+
+* Fixed an issue in the user collection module where running with `state: present` would cause a stack trace. (AAP-40887)
+
+* Fixed an issue that caused updates to SAML authenticators to ignore an updated public certificate provided via the UI or API and then fail with the message *The certificate and private key do not match*. (AAP-40767)
+
+* Fixed an issue with the `ServiceAuthToken` destroy method to allow HTTP delete via `ServiceAuth` to work properly. (AAP-37630)
+
+=== {GatewayStart}
+
+* Fixed an issue that would prevent some types of resources from getting synced if there was a naming conflict. (AAP-41241)
+
+* Fixed an issue where the login failed for users who were members of a team or organization that had a naming conflict. (AAP-41240)
+
+* Fixed an issue where there would be *401 unauthorized* errors thrown at random in the {Gateway} UI. (AAP-41165)
+
+* Fixed an issue where services could not request `cloud.redhat.com` settings from the {Gateway} using `ServiceTokenAuth`. (AAP-39649)
+
+=== {ControllerNameStart}
+
+* Fixed an issue where upgrading was preventing the {ControllerName} administrator password from being set for the {Gateway} administrator account. (AAP-40839)
+
+* Fixed an issue where indirect host counting recorded the hostname instead of the name from the query result. (AAP-41033)
+
+* Fixed an issue where the `OpaClient` was not initializing properly after timeouts and retries. (AAP-40997)
+
+* Fixed an issue where {ControllerName} was missing the service account credentials for analytics. (AAP-40769)
+
+* Fixed an issue where feature flags could not be enabled via the corresponding setting of the same name. (AAP-39783)
+
+* Fixed an issue where the DAB feature flags endpoints were not registered in the {ControllerName} API. (AAP-39778)
* Fixed an issue where the DAB feature flags endpoints were not registered in the {ControllerName} API.(AAP-39778) + +* Fixed an issue where the API was missing a helper method for fetching the service account token from `sso.redhat.com`.(AAP-39637) + +=== Container-based {PlatformNameShort} + +* Fixed an issue where the containerized installer was not creating receptor mesh connections between all {ControllerName} nodes.(AAP-41102) + +* Fixed an issue where a default installation of the containerized {PlatformNameShort} was unable to use container groups.(AAP-40431) + +* Fixed an issue where errors would be hidden during {EDAName} status validation.(AAP-40021) + +* Fixed an issue where the `polkit` RPM package was not installed, which prevented user lingering from being enabled.(AAP-39860) + +=== {EDAName} + +* Fixed an issue where the `EDA_ACTIVATION_DB_HOST` environment variable in the `eda-initial-data` container was missing.(AAP-41270) + +* Fixed an issue with the behavior of `ansible-rulebook` and {EDAcontroller} where an activation that had started correctly was considered unresponsive and was scheduled for a restart.(AAP-41070) + +* Fixed an issue where editing and copying of rulebook activations in the API were not allowed.(AAP-40254) + +* Fixed an issue where the activation was incorrectly restarted with the error message *Missing container for running activation*.(AAP-39545) + +* Fixed an issue where the {EDAName} server did not support `PG Notify` using certificates.(AAP-39294) + +* Fixed an issue where the user was not required to give a unique user-defined name when copying a credential.(AAP-39079) + +* Fixed an issue where the image URL in the collection `decision_environment` testing was not OCI compliant.(AAP-39064) + +* Fixed an issue where creating a new team with the same name did not propagate an `IntegrityError`.(AAP-38941) + +* Fixed an issue where decision environment URLs were not validated against the OCI specification to ensure successful authentication to the container registry when pulling the image.(AAP-38822) + +* Fixed an issue where the *Activation* module did not support the `copy` operation from other activations.(AAP-37306) + +=== Receptor + +* Fixed an issue where {MeshReceptor} was creating too many `inotify` processes, and where the user would encounter a *too many open files* error.(AAP-22605) + +=== RPM-based {PlatformNameShort} + +* Fixed an issue where the activation instance logs were missing in RPM deployments.(AAP-40886) + +* Fixed an issue where the managed CA would not correctly assign eligible groups during discovery, installation, and backup and restore.(AAP-40277) + +* Fixed an issue where, during an installation or upgrade, SELinux relabeling did not occur even if new `fcontext` rules were added.(AAP-40489) + +* Fixed an issue where the credentials for {ExecEnvShort}s and decision environments hosted in {HubName} were incorrectly configured.(AAP-40419) + +* Fixed an issue where projects failed to sync due to incorrectly configured credentials for {PlatformNameShort} collections hosted in {HubName}.(AAP-40418) + + +== Known Issues + +* In the {Gateway}, the tooltip for *Projects -> Create Project - Project Base Path* is undefined.(AAP-27631) + +* Deploying {Gateway} on FIPS-enabled RHEL 9 is currently not supported.(AAP-39146) diff --git a/downstream/titles/release-notes/async/aap-25-20250326.adoc b/downstream/titles/release-notes/async/aap-25-20250326.adoc new file mode 100644 index 0000000000..6ad7ec1621 --- /dev/null +++
b/downstream/titles/release-notes/async/aap-25-20250326.adoc @@ -0,0 +1,94 @@ +[[aap-25-20250326]] + += {PlatformNameShort} patch release March 26, 2025 + +This release includes the following components and versions: + +[cols="1a,3a", options="header"] +|=== +| Release date | Component versions + +| March 26, 2025 | +* {GatewayStart} 2.5.20250326 +* {ControllerNameStart} 4.6.10 +* {HubNameStart} 4.10.3 +* {EDAName} 1.1.6 +* Container-based installer {PlatformNameShort} (bundle) 2.5-11.1 +* Container-based installer {PlatformNameShort} (online) 2.5-11 +* Receptor 1.5.3 +* RPM-based installer {PlatformNameShort} (bundle) 2.5-10 +* RPM-based installer {PlatformNameShort} (online) 2.5-10 + +|=== + +CSV Versions in this release: + +* Namespace-scoped Bundle: aap-operator.v2.5.0-0.1742434024 + +* Cluster-scoped Bundle: aap-operator.v2.5.0-0.1742434756 + +== General + +* The `ansible.controller` collection has been updated to 4.6.10.(AAP-42242) + +* Service account support has been integrated into {PlatformNameShort} Analytics; service account credentials have replaced basic auth credentials when linking to Analytics.(AAP-39472) + +** For more information, see the KCS article link:https://access.redhat.com/articles/7112649[Configure {PlatformNameShort} to use service account credentials for authentication]. + +=== Deprecated + +* Deprecated and suppressed the warning about `ANSIBLE_COLLECTIONS_PATHS` in the job output.(AAP-41566) + +== Bug fixes + +With this update, the following CVEs have been addressed: + +* link:https://access.redhat.com/security/cve/cve-2025-27516[CVE-2025-27516] `python3.11-jinja2`: Jinja sandbox breakout through attr filter selecting format method.(AAP-42104) + +* link:https://access.redhat.com/security/cve/CVE-2025-26699[CVE-2025-26699] `python3.11-django`: Potential denial-of-service vulnerability in `django.utils.text.wrap()`.(AAP-42107) + +* link:https://access.redhat.com/security/cve/CVE-2025-26699[CVE-2025-26699] `ansible-lightspeed-container`: Potential denial-of-service vulnerability in `django.utils.text.wrap()`.(AAP-41138) + +* link:https://access.redhat.com/security/cve/cve-2025-27516[CVE-2025-27516] `automation-controller`: Jinja sandbox breakout through attr filter selecting format method.(AAP-41692) + +* link:https://access.redhat.com/security/cve/cve-2025-27516[CVE-2025-27516] `ansible-lightspeed-container`: Jinja sandbox breakout through attr filter selecting format method.(AAP-41690) + +=== {PlatformNameShort} + +* Fixed an issue when migrating user accounts with invalid email addresses; the process now prints a message showing the user name of the user whose email address has been removed.(AAP-41675) + +* Fixed an issue that occurred after enabling `automigration` of user accounts from the previous SSO authenticator to a new authenticator, where user accounts from other {PlatformNameShort} services, such as {ControllerName} or {HubName}, were not properly merged into one account and deleted on those services.(AAP-42146) + +=== {OperatorPlatformNameShort} + +* Fixed an issue where the legacy {ControllerName} API information link on the {ControllerName} redirect page was broken.(AAP-41510) + +* Fixed an issue where {PlatformNameShort} backups would fail when writing `yaml` to the PVC on {OCPShort} clusters with {OCPShort} Virtualization installed.(AAP-28609) + +=== {ControllerNameStart} + +* Fixed an issue where Insights projects were failing on {OCPShort} on {PlatformNameShort} due to incorrectly specifying the extra `vars` path.(AAP-41874)
* Fixed an issue where the host metrics for dark, unreachable hosts were being collected.(AAP-41567) + +* Fixed an issue where the system auditor could download the execution node install bundle.(AAP-37922) + +* Fixed an issue where the host record was added to `HostMetric` when the host had only failed or unreachable tasks.(AAP-32094) + +=== {HubNameStart} + +* Fixed an issue where the user could not delete {HubName} teams on the resource API.(AAP-42158) + +* Fixed an issue where the `retain_repo_versions` was null for the validated repos.(AAP-42005) + +=== RPM-based {PlatformNameShort} + +* Fixed an issue where preflight was not accounting for `automationgateway` being a CA server node.(AAP-41817) + +* Fixed an issue where {Gateway} installations resulted in failures in environments with IPv6 due to `nginx` configuration timing.(AAP-41816) + +== Known Issues + +* In the {Gateway}, the tooltip for *Projects -> Create Project - Project Base Path* is undefined.(AAP-27631) + +* Deploying {Gateway} on FIPS-enabled RHEL 9 is currently not supported.(AAP-39146) diff --git a/downstream/titles/release-notes/async/aap-25-20250409.adoc b/downstream/titles/release-notes/async/aap-25-20250409.adoc new file mode 100644 index 0000000000..e028310664 --- /dev/null +++ b/downstream/titles/release-notes/async/aap-25-20250409.adoc @@ -0,0 +1,100 @@ +[[aap-25-20250409]] + += {PlatformNameShort} patch release April 9, 2025 + +This release includes the following components and versions: + +[cols="1a,3a", options="header"] +|=== +| Release date | Component versions + +| April 9, 2025 | +* {ControllerNameStart} 4.6.11 +* {HubNameStart} 4.10.3 +* {EDAName} 1.1.7 +* Container-based installer {PlatformNameShort} (bundle) 2.5-12 +* Container-based installer {PlatformNameShort} (online) 2.5-12 +* Receptor 1.5.3 +* RPM-based installer {PlatformNameShort} (bundle) 2.5-11 +* RPM-based installer {PlatformNameShort} (online) 2.5-11 + +|=== + +CSV Versions in this release: + +* Namespace-scoped Bundle: aap-operator.v2.5.0-0.1743660124 + +* Cluster-scoped Bundle: aap-operator.v2.5.0-0.1743660958 + +== General + +* The `ansible.controller` collection has been updated to 4.6.11.(AAP-43126) + +* Fixed an issue where authentication configuration for *AzureAD/EntraId* groups could not be used in authentication mapping.(AAP-42890) + + +== Enhancements + + +=== Container-based {PlatformNameShort} + +* Implemented variables for applying `extra_settings` for {ControllerName}, {EDAName}, {Gateway}, and {HubName} during installation, as shown in the sketch below.(AAP-42932)
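A minimal sketch of what such inventory variables might look like. The variable names (`controller_extra_settings`, `gateway_extra_settings`) and the `setting`/`value` list structure are assumptions for illustration and are not confirmed by this note; the setting names shown (`MAX_UI_JOB_EVENTS`, `SESSION_COOKIE_AGE`) are existing application settings used only as examples:

----
# Hypothetical inventory snippet (variable names and structure assumed):
# apply arbitrary application settings at installation time.
controller_extra_settings:
  - setting: MAX_UI_JOB_EVENTS
    value: 4000
gateway_extra_settings:
  - setting: SESSION_COOKIE_AGE
    value: 1800
----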
+ + +== Bug fixes + +With this update, the following CVE has been addressed: + +* link:https://access.redhat.com/security/cve/CVE-2025-2877[CVE-2025-2877] `ansible-rulebook`: exposure of inventory passwords in plain text when starting a rulebook activation with verbosity set to debug in {EDAName}.(AAP-42817) + + +=== {PlatformNameShort} + +* Fixed an issue where job workflow templates failed with limits.(AAP-33726) + +* Fixed an issue where there was non-viable information disclosure for pen testing.(AAP-39977) + + +=== {OperatorPlatformNameShort} + +* Fixed an issue where the {OCPShort} Route TLS termination was always configured with the edge value.(AAP-42051) + + +=== Container-based {PlatformNameShort} + +* Fixed an issue where backup and restore jobs would fail to restore on `CONT` jobs. Implemented validation and cleanup for service nodes on a restore to a new cluster.(AAP-42781) + +* Fixed an issue where Podman container logs did not show any log messages if the user, including a remote user, was not part of the local *administrator* or `systemd-journal` group.(AAP-42755) + +* Fixed an issue where the {PlatformNameShort} 2.5 containerized installer was unable to read custom configurations.(AAP-40798) + + +=== {ExecEnvNameStart} + +* Fixed an issue where there was a Python 3.11 incompatibility by updating `pykerberos` to 1.2.4 in `ee-minimal` and `ee-supported` container images.(AAP-42428) + + +=== {EDAName} + +* Fixed an issue where activations attached to some event streams could not be created in deployments configured with *PostgreSQL* with *mTLS*.(AAP-42268) + + +=== RPM-based {PlatformNameShort} + +* Fixed an issue where the token refresh prevented {EDAName} worker nodes from re-authenticating tokens.(AAP-42981) + +* Fixed an issue where the bundle installer failed to update {ControllerName} and `aap-metrics-utility` in the same run.(AAP-42632) + +* Fixed an issue where the platform UI was not loading when the {Gateway} was on a *FIPS*-enabled {RHEL} 9.(AAP-39146) + + +== Known Issues + +This section provides information about known issues in {PlatformNameShort} 2.5, including upgrade issues with the RPM installer. + +* Upgrading from {RHEL} 9.4 to {RHEL} 9.5 or later fails when running {Gateway} version 2.5.20250409 or later. To upgrade to {RHEL} 9.5 or later, follow the steps in this link:https://access.redhat.com/solutions/7112819[KCS article]. + +* When upgrading {PlatformNameShort} 2.5, you must use the RPM installer version 2.5-11 or later. If you use an older installer, the installation might fail. If you encounter a failed installation using an earlier version of the installer, rerun the installation with version 2.5-11 or later. diff --git a/downstream/titles/release-notes/async/aap-25-20250507.adoc b/downstream/titles/release-notes/async/aap-25-20250507.adoc new file mode 100644 index 0000000000..bcca6caf36 --- /dev/null +++ b/downstream/titles/release-notes/async/aap-25-20250507.adoc @@ -0,0 +1,317 @@ +[[aap-25-20250507]] + += {PlatformNameShort} patch release May 7, 2025 + +This release includes the following components and versions: + +[cols="1a,3a", options="header"] +|=== +| Release date | Component versions + +| May 7, 2025 | +* {ControllerNameStart} 4.6.12 +* {HubNameStart} 4.10.4 +* {EDAName} 1.1.8 +* Container-based installer {PlatformNameShort} (bundle) 2.5-13 +* Container-based installer {PlatformNameShort} (online) 2.5-13 +* Receptor 1.5.5 +* RPM-based installer {PlatformNameShort} (bundle) 2.5-12 +* RPM-based installer {PlatformNameShort} (online) 2.5-12 + +|=== + +CSV Versions in this release: + +* Namespace-scoped Bundle: aap-operator.v2.5.0-0.1746137767 + +* Cluster-scoped Bundle: aap-operator.v2.5.0-0.1746138413 + + +== General + +* Implemented GitHub application credential type.(AAP-38589) + +* The `ansible.platform` collection has been updated to 2.5.20250507.(AAP-44992) + +* The `ansible.controller` collection has been updated to 4.6.12. + +* The `ansible.eda` collection has been updated to 2.7.0. + + +== Technology Preview + +=== Policy as Code + +Policy enforcement is available in tech preview, behind a feature flag.
See the link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/configuring_automation_execution/controller-pac#enable-pac_controller-pac[product documentation] and the Knowledgebase article link:https://access.redhat.com/articles/7109282[How to set feature flags for {PlatformName}] for information on working with feature flags. + +== Features + +=== {PlatformNameShort} + +* Added an enhanced log viewer for rulebook activation instances similar to the job output logger.(AAP-43337) + +=== Container-based {PlatformNameShort} + +* Implemented a playbook to collect sos reports using the inventory file.(AAP-42606) + + +=== {EDAName} + +* {EDAName} now submits analytics data.(AAP-40881) + +* Enabled {EDAName} analytics data to be uploaded to the cloud. This feature is guarded by a feature flag.(AAP-42468) + +* Added a log tracking id to each log message labelled as `[tid: uuid-pattern]`.(AAP-42270) + +* Improved the user experience of managing rulebook activations in {EDAName} by introducing an edit capability.(AAP-33067) + +* {EDAName} now collects the following data points for analytics: + +** Event sources used in {EDAName}. + +** Event streams used in {EDAName}. + +** Version of {EDAName} installed. + +** Installation type (container/OCP/VM). + +** Platform organizations in {EDAName}. + +** Which {ControllerName} job template was launched from a rulebook activation.(AAP-31458) + +* {EDAName} `gather_analytics` command now runs on schedule as an internal task.(AAP-30063) + +* {EDAName} now includes an analytics data collector that sends payloads to *console.redhat.com*.(AAP-30055) + +* Added `x-request-id` to each log message labelled as `[rid:uuid-pattern]`.(AAP-42269) + + +== Enhancements + + +=== {PlatformNameShort} + +* Updated {Gateway} to adopt the selected standard component for the settings mechanism.(AAP-34939) + +* Refactored the `authenticate()` method inside the `AuthenticatorPlugin` class in `legacy_password.py` and `legacy_sso.py` to their common parent `LegacyMixin`. Added comments to classes and their methods for code clarity.(AAP-44460) + + +=== {OperatorPlatformNameShort} + +* Fixed an issue where the Lightspeed Operator would not use the `ANSIBLE_AI_MODEL_MESH_CONFIG`.(AAP-41335) + +* Extended CCSP and renewal guidance reports to include inventory scope and node/host details.(AAP-38802) + + +=== {ControllerNameStart} + +* Updated the pinned version of `receptorctl` in {ControllerName} to 1.5.5.(AAP-44823) + +* Updated the pinned version for `ansible-runner` in {ControllerName}.(AAP-43357) + + +=== Container-based {PlatformNameShort} + +* Added a new `use_archive_compression` variable with a default value of `true`, and a new `<component_name>_use_archive_compression` variable for each component, also defaulting to `true`, as shown in the sketch below.(AAP-41242)
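A minimal inventory sketch of these variables, assuming the per-component variables follow the `<component_name>_use_archive_compression` pattern described in the note above (the exact component prefix is an assumption):

----
# Hypothetical inventory snippet (per-component variable name assumed):
# disable archive compression globally, then re-enable it for one component.
use_archive_compression: false
controller_use_archive_compression: true
----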
+ + +=== {EDAName} + +* {EDAName} collection standardization enhancements.(AAP-41402) + +* Relevant settings and versions are emitted in logs when `ansible-rulebook` starts in worker mode.(AAP-40781) + +* Enhanced the {PlatformNameShort} injectors for `eda-server` to include common platform variables as `extra_vars` or environment variables if they are specified.(AAP-43029) + +* {EDAName} decision environment validation errors now display under the decision environment text box in the decision environment UI page.(AAP-42147) + +* Added a {ControllerName} URL check for the CLI.(AAP-41575) + +* If a source plugin terminates, you can now see the stack trace with the source file name, function name, and line number.(AAP-41774) + + +=== RPM-based {PlatformNameShort} + +* Added compression for archive and database artifacts used in backup/restore: + +** Updated database filename used for {ControllerName} `pg_dump` from tower to {ControllerName} while maintaining backward compatibility for backups using `tower.db` filename.(AAP-42055) + + +== Bug fixes + +With this update, the following CVE has been addressed: + +* link:https://access.redhat.com/security/cve/cve-2025-26699[CVE-2025-26699] `automation-controller`: Potential denial-of-service vulnerability in `django.utils.text.wrap()`.(AAP-41139) + + +=== {PlatformNameShort} + +* Fixed an issue in AAP 2.5 where the user needed to press Ctrl+Enter to start a new line.(AAP-43499) + +* Fixed an issue where the anchor tag on the API HTML view violated semantic rules.(AAP-43802) + +* The LDAP authenticator `USER_SEARCH` field now properly supports LDAP unions. Previously you could only define one search in the field, like: + +---- +[ + "ou=users,dc=example,dc=com", + "SCOPE_SUBTREE", + "uid=%(user)s" +] +---- + +Now you can define a union of multiple searches, for example: + +---- +[ + [ + "ou=users,dc=example,dc=com", + "SCOPE_SUBTREE", + "uid=%(user)s" + ], + [ + "ou=users,dc=example,dc=com", + "SCOPE_SUBTREE", + "uid=%(user)s" + ] +] +---- + +* `USER_DN_TEMPLATE` will still take precedence over the `USER_SEARCH` field. If non-unique users are found when performing multiple searches, those users will be unable to log in to {PlatformNameShort}.(AAP-42883) + +* Fixed an issue where there was a file not found error with Dynaconf.(AAP-43144) + +* Fixed an issue where Dynaconf mishandled the OpenAPI schema.(AAP-43143) + +* Fixed an issue where editing an authenticator with a large number of organization/team mappings in the {Gateway} would affect the loading time of the web page, potentially making the page unresponsive.(AAP-40963) + +* Fixed an issue where unreachable hosts were not being filtered out of CCSP usage reports.(AAP-38735) + +* Fixed an issue where the `X-DAB-JW-TOKEN` header message would flood logs.(AAP-38169) + +* Fixed an issue where, after upgrading to {PlatformNameShort} 2.5 managed on Azure, the ability to see job output while the job was running was lost.(AAP-43894)
+ +* Fixed an issue where customers were not allowed to view output details for filtered job outputs.(AAP-38925) + +* Fixed an issue where indirect hosts were being counted in the first tab as quantity.(AAP-44676) + +* Fixed an issue where the platform-gateway could not be installed with a different name for the admin user.(AAP-44180) + +* Fixed an issue where an {PlatformNameShort} UI session was being logged out even if the user was actively working.(AAP-43622) + +* Fixed an issue where exceptions handled on SSO login were not allowing for error messages to be properly captured.(AAP-43369) + +* Fixed an issue where the job output was slow and hard to read due to missing parts of the output.(AAP-41434) + +* Fixed an issue where the user was unable to edit an existing rulebook activation.(AAP-37299) + + + +=== {OperatorPlatformNameShort} + +* Fixed an issue where the pod affinity/anti-affinity was not configurable for the aap-gateway-operator to allow for pod placement on unique nodes.(AAP-42983) + +* Fixed an issue where {LightspeedShortName} was incorrectly passing DAB settings.(AAP-43542) + +* Fixed an issue where the Lightspeed Operator WCA configuration was not optional.(AAP-42370) + +* Fixed an issue where `status.conditions` validation would not allow auto-reporting errors on CR statuses.(AAP-44081) + +* Fixed an issue where the {PlatformNameShort} gateway had the incorrect Lightspeed deployment name.(AAP-43837) + +* Fixed an issue where Lightspeed devel CRD was incompatible with 2.5 CRD.(AAP-43657) + +* Fixed an issue where `status.conditions` validation was not allowing auto-reporting errors on the CR statuses.(AAP-44083) + +* Fixed an issue where migrating data between {OCPShort} Operator deployments on AAP 2.5 failed because of a PostgreSQL permission issue. The {ControllerName} operator now grants permission to the {ControllerName} user to avoid permissions errors when migrating the data.(AAP-44846)
+ +* Fixed an issue where there was an intermittent *502 Bad Gateway* error on {PlatformNameShort} 2.5 operator deployment.(AAP-44176) + + + +=== {ControllerNameStart} + +* Fixed usage of Django password validator `UserAttributeSimilarityValidator`.(AAP-43046) + +* Fixed an issue where there was no lookup credential without user inputs, and where credential defaults were not passed between `awx-plugins` and AWX.(AAP-38589) + +* Fixed an issue where there was an incorrect deprecation warning for `awx.awx.schedule_rrule`.(AAP-43474) + +* Fixed an issue where facts were unintentionally deleted when an inventory was modified during a job execution.(AAP-39365) + + + +=== Container-based {PlatformNameShort} + +* Fixed an issue where the paths to expose isolated jobs' settings did not work.(AAP-37599) + +* The `ansible.gateway_configuration` collection was replaced by `ansible.platform`.(AAP-44230) + +* Fixed an issue where the automation hub would fail to upload collections due to a missing worker temporary directory.(AAP-44166) + + + +=== {EDAName} + +* Fixed an issue where the log messages were not using the correct log level.(AAP-43607) + +* Fixed an issue where the *ansible-rulebook* logs were not logged into the activation-worker log.(AAP-43549) + +* Fixed an issue where the container was not always deleted correctly, or it missed the last output entries in VM-based installations.(AAP-42935) + +* Fixed an issue where {EDAName} logging did not allow searching.(AAP-43338) + +* Fixed an issue where rulebook activations and event streams were removed by a cascading delete after the user who created them was deleted.(AAP-41769) + +* Fixed an issue where the decision environment could not authenticate and pull the image successfully when using an image registry with a custom port.(AAP-41281) + +* Fixed an issue where timestamps were not formatted to the local timezone of the user.(AAP-38396) + +* Fixed an issue where the activation failed with the message *It will attempt to restart (1/5) in 60 seconds according to the restart policy always*, but it did not restart.(AAP-43969) + +* Fixed an issue where a race condition would occur while cleaning up an activation in {OCPShort}, causing unexpected behavior.(AAP-44108) + +* Fixed an issue where the {EDAName} logs showed no information about an internal server error.(AAP-42271) + +* Fixed an issue where there was a duplicate error message in the CLI.(AAP-41745) + +* Fixed an issue where Envoy was stripping the `Authorization` header from client requests.(AAP-44700) + +* Fixed an issue where {EDAName} had not selected a standard component for the settings mechanism.(AAP-41684) + +* Fixed an issue where documentation was missing for {EDAName} source plugins.(AAP-8630) + +* Fixed an issue where there was a memory leak in {EDAName} using the *ansible-rulebook* `sqs` plugin.(AAP-42623) + +* Fixed an issue where rulebook activations were not editable or copyable either through the UI or API.(AAP-37294) + +* Fixed an issue where the rule engine used in *ansible-rulebook* was keeping events that did not match in memory for the `default_events_ttl` of two hours, causing a memory leak.(AAP-44899) + +* Fixed an issue where the rulebook activation module in the {EDAName} collection lacked support for restarting the activation.(AAP-42542)
+ +* Fixed an issue where AAP aliases could not be used to specify {EDAName} collection variables.(AAP-42280) + + + +=== {LightspeedShortName} Operator + +* Fixed an issue where the `auth_config_secret_name` configuration in Lightspeed Operator was not optional in the {ControllerName}.(AAP-44203) + + +=== Receptor + +* Fixed an issue where the kube API would lock up on every call by moving `kubeAPIWapperInstance` inside each `kubeUnit` and removing `kubeAPIWapperlocks`.(AAP-43111) + + +=== RPM-based {PlatformNameShort} + +* Fixed an issue where {Gateway} services were not aligned with the target environment after restore. + +** Fixed an issue where old instance nodes were still registered in {ControllerName} post restore. + +** Fixed an issue where *nginx* would attempt to reload before the configuration was finalized.(AAP-44231) + + + + diff --git a/downstream/titles/release-notes/async/aap-25-20250528.adoc b/downstream/titles/release-notes/async/aap-25-20250528.adoc new file mode 100644 index 0000000000..3594e3a260 --- /dev/null +++ b/downstream/titles/release-notes/async/aap-25-20250528.adoc @@ -0,0 +1,144 @@ +[[aap-25-20250528]] + += {PlatformNameShort} patch release May 28, 2025 + +This release includes the following components and versions: + +[cols="1a,3a", options="header"] +|=== +| Release date | Component versions + +| May 28, 2025 | +* {ControllerNameStart} 4.6.13 +* {HubNameStart} 4.10.4 +* {EDAName} 1.1.8 +* Container-based installer {PlatformNameShort} (bundle) 2.5-14 +* Container-based installer {PlatformNameShort} (online) 2.5-14 +* Receptor 1.5.5 +* RPM-based installer {PlatformNameShort} (bundle) 2.5-13 +* RPM-based installer {PlatformNameShort} (online) 2.5-13 + +|=== + +CSV Versions in this release: + +* Namespace-scoped Bundle: aap-operator.v2.5.0-0.1747343762 + +* Cluster-scoped Bundle: aap-operator.v2.5.0-0.1747345055 + + + +== General + +* The `ansible.platform` collection has been updated to 2.5.20250528.(AAP-45823) + +* The `ansible.controller` collection has been updated to 4.6.13.(AAP-45885) + + + +== Features + +=== {PlatformNameShort} + +* {PlatformNameShort} now supports service account-based authentication for integration with services available through the Hybrid Cloud Console, including automation analytics, {InsightsShort}, and subscription management. See this link:https://access.redhat.com/articles/7112649[Knowledgebase article] for more information on the required changes.
+ +* Replaced basic authentication with service account authentication for {PlatformNameShort} subscription management.(AAP-44643) + +* Updated the subscription wizard to accommodate fetching subscription information using service account credentials.(AAP-37077) + +* Added `ansible_base.lib.utils.address.classify_address`, providing common recognition and parsing of machine addresses (hostname, IPv4, and IPv6) with and without an appended `:`.(AAP-45287) + + +== Enhancements + + +=== {PlatformNameShort} + +* Reduced the cognitive complexity level of the `validate_password()` method and reorganized the `validate_authenticate_uid()` method to increase code readability.(AAP-45346) + +* For clarity and to prevent misconfiguration, the SAML authenticator now requires both a permanent user ID and a username.(AAP-45333) + +* Updated field names and help text in the System Settings UI to indicate client ID and client secret for service accounts, as well as client ID and client secret for analytics.(AAP-43119) + +* Removed validation and enforcement of expected service types because service types are now dynamic.(AAP-40130) + +* Enabled configuration of control plane authentication for custom services. You should not modify it for pre-defined services.(AAP-40131) + +* Added custom service type support. Arbitrary service types and services can now be created rather than being limited to a fixed list.(AAP-39812) + + +=== {LightspeedShortName} + +* It is now possible to disable SSL verification for {LightspeedShortName} <-> Model Server communication.(AAP-45337) + + +=== {ControllerNameStart} + +* Updated Azure Key Vault plugin to use managed identity when creating credentials.(AAP-43461) + + +== Bug fixes + +With this update, the following CVEs have been addressed: + +* link:https://access.redhat.com/security/cve/CVE-2025-43859[CVE-2025-43859] `ee-supported-container`: h11 accepts some malformed Chunked-Encoding bodies.(AAP-44783) + +* link:https://access.redhat.com/security/cve/CVE-2025-43859[CVE-2025-43859] `ee-cloud-services-container`: h11 accepts some malformed Chunked-Encoding bodies.(AAP-44781) + +* link:https://access.redhat.com/security/cve/CVE-2025-43859[CVE-2025-43859] `ansible-lightspeed-container`: h11 accepts some malformed Chunked-Encoding bodies.(AAP-44779) + + + +=== {PlatformNameShort} + +* Fixed an issue found in SaaS deployments where the authentication proxy would use old, invalid database connections after an RDS database reboot.(AAP-44178) + +* Fixed an issue where administrators were not allowed to configure auto migration of legacy authenticators.(AAP-36841) + +* Fixed an issue where the usernames from LDAP were not case-insensitive. LDAP is case-insensitive, so logging in with the same username in different cases would result in two different users in {Gateway} even though they are the same user in LDAP. With this change, both logins are authenticated as the lowercase username.(AAP-44177)
+ + +=== {OperatorPlatformNameShort} + +* Fixed a broken document link to {OperatorPlatformNameShort} installation documents in the {OCPShort} UI.(AAP-45199) + +* Fixed an issue where the user was unable to configure `kind: AnsibleInstanceGroup`, and it failed with an error *policy_spec_override is undefined*.(AAP-45351) + + +=== {LightspeedShortName} + +* Fixed an issue where it was not possible to disable SSL verification between Model Server and {LightspeedShortName}.(AAP-45269) + +* Fixed an issue where the provider type and context window size were not configurable in {LightspeedShortName} Operator.(AAP-45166) + + +=== {ControllerNameStart} + +* Fixed an issue where the VMware credential was not applying to the source correctly.(AAP-45169) + +* Fixed an issue where the workflow job template did not have job access parity with `UnifiedJobAccess`.(AAP-45057) + +* Fixed an issue where error handling did not allow event processing to continue even if one event contained invalid data that could not be parsed by `jq`.(AAP-44876) + + +=== {GatewayStart} + +* Fixed `AttributeError` errors around the `legacy_base` authenticator which were harmless, but were showing in logs leading to customer and engineer confusion.(AAP-40159) + +* Fixed an issue where customized proxy authentication on a per-service-cluster basis was not allowed.(AAP-35601) + +* Fixed an issue where there was a server error on migrating an LDAP user in a freshly upgraded 2.4 -> 2.5 instance. The fix prevents the 500 error during LDAP user legacy authentication and migration following an upgrade.(AAP-44958) + + + +=== RPM-based {PlatformNameShort} + +* Fixed an issue where the `max keyrings` sysctl setting would produce failures when running more than 200 containers on a node.(AAP-45260) + +* Fixed an issue where the {Gateway} proxy (Envoy) ports were not opened in the firewall.(AAP-45489) + + +== Known Issues + +* {LightspeedShortName}-enabled deployments must apply a workaround to avoid problems during upgrade from release 2.5.20250507. The service cluster and related objects must be removed before upgrade and re-created after upgrade. For more information, see this link:https://access.redhat.com/articles/7122651[KCS article].(AAP-46154)
diff --git a/downstream/titles/release-notes/async/aap-25-20250609.adoc b/downstream/titles/release-notes/async/aap-25-20250609.adoc new file mode 100644 index 0000000000..6322b235c4 --- /dev/null +++ b/downstream/titles/release-notes/async/aap-25-20250609.adoc @@ -0,0 +1,127 @@ +[[aap-25-20250609]] + += {PlatformNameShort} patch release June 9, 2025 + +This release includes the following components and versions: + +[cols="1a,3a", options="header"] +|=== +| Release date | Component versions + +| June 9, 2025| +* {ControllerNameStart} 4.6.14 +* {HubNameStart} 4.10.4 +* {EDAName} 1.1.9 +* Container-based installer {PlatformNameShort} (bundle) 2.5-15 +* Container-based installer {PlatformNameShort} (online) 2.5-15 +* Receptor 1.5.5 +* RPM-based installer {PlatformNameShort} (bundle) 2.5-14 +* RPM-based installer {PlatformNameShort} (online) 2.5-14 + +|=== + +CSV Versions in this release: + +* Namespace-scoped Bundle: aap-operator.v2.5.0-0.1749074128 + +* Cluster-scoped Bundle: aap-operator.v2.5.0-0.1749074612 + + +== General + +* The `ansible.controller` collection has been updated to 4.6.14.(AAP-46562) + +* The `ansible.platform` collection has been updated to 2.5.20250604.(AAP-46552) + + +== {PlatformNameShort} + +=== Features + +* Added `ansible_base.lib.utils.address.classify_address`, providing common recognition and parsing of machine addresses (hostname, IPv4, and IPv6) with and without an appended `:`.(AAP-45910) + + +=== Enhancements + +* LDAP filter validation has been improved so that all filters that meet LDAP standards, including and/or filters, are properly validated.(AAP-46249) + +* Completely updated the interface for managing authentication methods and mappings.(AAP-45750) + +* The default validity period for *OAuth* tokens has been reduced from 1000 years to 1 year. Existing tokens will NOT be updated. If you wish to reduce the validity period of existing tokens, remove and re-issue them. The default validity period for *OAuth* tokens can be modified via the Django setting `ACCESS_TOKEN_EXPIRE_SECONDS` in `OAUTH2_PROVIDER`, as sketched below.(AAP-46187)
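A minimal sketch of that override in YAML form, assuming your deployment exposes a way to inject Django settings (the exact mechanism varies by installation type and is not specified by this note):

----
# Hypothetical sketch: shorten the validity of newly issued tokens to 30 days.
# Overriding the whole OAUTH2_PROVIDER dict may drop its other keys; this only
# illustrates where ACCESS_TOKEN_EXPIRE_SECONDS lives.
OAUTH2_PROVIDER:
  ACCESS_TOKEN_EXPIRE_SECONDS: 2592000  # 30 days, in seconds
----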
+ + +=== Bug fixes + +* The degraded logging performance notice on the job output page has been removed. Polling fallback functionality still exists.(AAP-46120) + +* Fixed an issue where the gateway proxy was not properly ejecting nodes failing health checks.(AAP-43931) + +* Fixed an issue where installations with {LightspeedShortName} enabled were not handled properly during upgrade.(AAP-46154) + + +== {ControllerNameStart} + + +=== Enhancements + +* Updated the license mechanism to allow users to provide a username and password when fetching subscriptions via the API and {PlatformNameShort} user interface.(AAP-46797) + + +=== Bug Fixes + +* Fixed an issue where idle dispatch workers were not recycled based upon age or after completing their last task. The default maximum age is 4 hours, controlled by the `WORKER_MAX_LIFETIME_SECONDS` setting. Set it to `None` to disable worker recycling.(AAP-45947) + +* Fixed an issue where the analytics collector failed to clean up temporary files after a failed upload to the Hybrid Cloud Console.(AAP-45574) + +* Fixed an issue where inventory variables pulled in by update from a source with the option *Overwrite Variables* checked were not deleted on subsequent updates from the same source when the source no longer contained the variable.(AAP-45571) + + +== Container-based {PlatformNameShort} + + +=== Enhancements + +* Users can now skip {ControllerName} demo data creation.(AAP-46482) + +* The {HubNameStart} NFS share path format is now validated during preflight role execution.(AAP-46306) + + +=== Bug Fixes + +* Fixed an issue where the custom Certificate Authority (CA) TLS certificate was not passed to the external database validation during the preflight role execution.(AAP-46480) + +* Fixed a log redirection error for the {HubNameMain}, {EDAName}, and Unified UI containers.(AAP-46478) + +* Fixed an issue where the `~/.local/bin` path was not added to the user's `$PATH` environment variable during PostgreSQL database dump and restore.(AAP-46209) + +* Fixed the order of operations for handling service nodes to ensure only valid nodes are configured.(AAP-45551) + + +== {EDAName} + +=== Enhancements + +* Renamed the environment variable `EDA_OIDC_TOKEN_URL` to `DA_AUTOMATION_ANALYTICS_OIDC_TOKEN_URL`.(AAP-44862) + + +=== Bug Fixes + +* Fixed an issue where the activation containers were not removed after a node went offline.(AAP-45831) + +* Fixed an issue where the error reminding the user to remap a source with an event stream was not under the `source_mapping` key in the API response.(AAP-45105) + +* Fixed an issue where special characters such as `[]` were not allowed in the activation name on {OCPShort} deployments.(AAP-44691) + + +== RPM-based {PlatformNameShort} + +=== Enhancements + +* Setup will now retry {Gateway} data migration attempts in case services take longer than expected to start.(AAP-46208) + + +=== Bug Fixes + +* Fixed an issue where the event stream worker would not restart like other workers when running `setup.sh`.(AAP-46205) + +* Fixed an issue where setup would not restart the podman socket whenever podman was reset.(AAP-46191) diff --git a/downstream/titles/release-notes/async/aap-25-20250611.adoc b/downstream/titles/release-notes/async/aap-25-20250611.adoc new file mode 100644 index 0000000000..84951b93a7 --- /dev/null +++ b/downstream/titles/release-notes/async/aap-25-20250611.adoc @@ -0,0 +1,37 @@ +[[aap-25-20250611]] + += {PlatformNameShort} patch release June 11, 2025 + +This release includes the following components and versions: + +[cols="1a,3a", options="header"] +|=== +| Release date | Component versions + +| June 11, 2025| +* {ControllerNameStart} 4.6.15 +* {HubNameStart} 4.10.4 +* {EDAName} 1.1.9 +* Container-based installer {PlatformNameShort} (bundle) 2.5-15.1 +* Container-based installer {PlatformNameShort} (online) 2.5-15 +* Receptor 1.5.5 +* RPM-based installer {PlatformNameShort} (bundle) 2.5-14.1 +* RPM-based installer {PlatformNameShort} (online) 2.5-14 + +|=== + +CSV Versions in this release: + +* Namespace-scoped Bundle: aap-operator.v2.5.0-0.1749604727 + +* Cluster-scoped Bundle: aap-operator.v2.5.0-0.1749607543 + + + + +== {ControllerNameStart} + + +=== Bug Fixes + +* Fixed an issue where using or creating Azure Key Vault credentials was failing with a *TypeError*.(AAP-47413) diff --git a/downstream/titles/release-notes/async/aap-25-20250702.adoc b/downstream/titles/release-notes/async/aap-25-20250702.adoc new file mode
100644 index 0000000000..8cc82ecb53 --- /dev/null +++ b/downstream/titles/release-notes/async/aap-25-20250702.adoc @@ -0,0 +1,228 @@ +[[aap-25-20250702]] + += {PlatformNameShort} patch release July 2, 2025 + +This release includes the following components and versions: + +[cols="1a,3a", options="header"] +|=== +| Release date | Component versions + +| July 2, 2025| +* {ControllerNameStart} 4.6.16 +* {HubNameStart} 4.10.5 +* {EDAName} 1.1.11 +* Container-based installer {PlatformNameShort} (bundle) 2.5-16 +* Container-based installer {PlatformNameShort} (online) 2.5-16 +* Receptor 1.5.7 +* RPM-based installer {PlatformNameShort} (bundle) 2.5-15 +* RPM-based installer {PlatformNameShort} (online) 2.5-15 + +|=== + +CSV Versions in this release: + +* Namespace-scoped Bundle: aap-operator.v2.5.0-0.1750901111 + +* Cluster-scoped Bundle: aap-operator.v2.5.0-0.1750901870 + + +== General + +* Allows running `ansible.platform` collection modules in `check_mode`; see the sketch after this list.(AAP-45246) + +* The `ansible.eda` collection has been updated to 2.8.1.(AAP-48324) + +* The `ansible.platform` collection has been updated to 2.5.20250702.(AAP-48344) + +* The `ansible.controller` collection has been updated to 4.6.16.(AAP-48347)
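A minimal sketch of a task run in check mode. The `ansible.platform.organization` module name and its `name` parameter are assumptions used for illustration; only the `check_mode` task keyword is the subject of this note:

----
- name: Preview an organization change without applying it
  ansible.platform.organization:  # module name assumed for illustration
    name: Example Org
  check_mode: true                # report what would change, make no changes
----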
+ + +== CVE + +With this update, the following CVEs have been addressed: + +* link:https://access.redhat.com/security/cve/CVE-2025-22871[CVE-2025-22871] `receptor`: Request smuggling due to acceptance of invalid chunked data in net/http.(AAP-45132) + +* link:https://access.redhat.com/security/cve/CVE-2025-22871[CVE-2025-22871] `automation-gateway-proxy-openssl32`: Request smuggling due to acceptance of invalid chunked data in net/http.(AAP-45130) + +* link:https://access.redhat.com/security/cve/CVE-2025-22871[CVE-2025-22871] `automation-gateway-proxy-openssl30`: Request smuggling due to acceptance of invalid chunked data in net/http.(AAP-45129) + +* link:https://access.redhat.com/security/cve/CVE-2025-22871[CVE-2025-22871] `automation-gateway-proxy`: Request smuggling due to acceptance of invalid chunked data in net/http.(AAP-45128) + + +== {PlatformNameShort} + +=== Enhancements + +* Refactored `V1RootView.get()` and improved reverse lookup logic.(AAP-47366) + +* Refactored the `process_statuses()` method to reduce its cognitive complexity.(AAP-47341) + +* All UI elements related to policy enforcement are visible to all users. See the link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/configuring_automation_execution/controller-pac[policy enforcement documentation] for more information.(AAP-47006) + +* On the inventory source form, for a source type of *VMware ESXi*, the user can select credentials of type *VMware vCenter*.(AAP-46784) + +* Reduced the cognitive complexity of the `migrate_resource()` method in `migrate_service_data.py` from 56 to <= 15.(AAP-45822) + +* Reduced the cognitive complexity of the `process_fields()` method in the `serializers/preference.py` file.(AAP-45820) + +* Reduced the cognitive complexity of the `unique_fields_for_model()` method to below 15.(AAP-45819) + +=== Bug fixes + +* Fixed an issue that did not allow role assignments using `object_ansible_id` in the `role_user_assignment` module.(AAP-48042) + +* Fixed an issue that did not allow the `object_id` field in the `role_user_assignment` module to accept a list of items; see the sketch after this list.(AAP-47979) + +* Fixed an example task in the `ansible.platform.token` module.(AAP-47976) + +* Fixed an issue with the `aap_*` parameters in the `ansible.platform.token` module that resulted in user reminders not being sent out.(AAP-47975) + +* Fixed API error messaging in the event a user logs in as the admin user via legacy *auth* on one component and then tries to do so via the other component.(AAP-47541) + +* Fixed an issue where API records could be missing or duplicated across pages.(AAP-47504) + +* Fixed a bug that was causing the UI to throw an error when launching a workflow job template with both *Prompt on Launch* and *Survey* enabled.(AAP-46813) + +* Fixed an issue where the {Gateway} *OpenAPI* schema file was not being generated correctly.(AAP-46639) + +* Fixed an issue where modules in the `ansible.platform` collection did not accept `AAP_*` variables for authentication.(AAP-45363) + +* Fixed an issue where there was a missing option in the `ansible.platform.user` module to allow setting the `is_platform_auditor` flag on a user.(AAP-45244) + +* Fixed an issue where an extra validation to handle incorrect user input in the variables field was needed, as the API did not return an error for it.(AAP-42563) + +* Fixed an issue with the *Hosts* links in the *Resource Counts* section of the overview page to redirect to the *Hosts* page, filtered by either *Show only ready hosts* or *Show only failed hosts* depending on which count was clicked on.(AAP-42288) + +* Fixed an issue where API records could be missing or duplicated across pages.(AAP-41842)
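Taken together with the `AAP_*` authentication fix above, a minimal sketch of a role assignment. The parameter names (`role_definition`, `user`, `object_id`) and the environment variable names are assumptions pieced together from the fields named in these notes, not a confirmed module signature:

----
- name: Assign a role to a user across several objects at once
  ansible.platform.role_user_assignment:
    role_definition: Organization Member   # role name is illustrative
    user: alice
    object_id: [42, 43]                    # a list is now accepted per AAP-47979
  environment:
    AAP_HOSTNAME: https://aap.example.com  # AAP_* variables per AAP-45363
    AAP_TOKEN: "{{ my_api_token }}"        # hypothetical vaulted token
----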
+ + +== {LightspeedShortName} + +=== Enhancements + +* {AAPchatbot} now supports third-party LLM providers such as {AzureOpenAI}, {OpenAI}, and {IBMwatsonxai}. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/installing_on_openshift_container_platform/deploying-chatbot-operator[Deploying the {AAPchatbot} on {OCPShort}].(AAP-44011) + + +== {OperatorPlatformNameShort} + +=== Enhancements + +* Annotations can now be added to the route by specifying *spec.route_annotations* on the {PlatformNameShort} and {ControllerName} custom resources.(AAP-45952) + +* New installations of {LightspeedShortName} using the {PlatformNameShort} Custom Resource will automatically integrate with {PlatformNameShort}'s *OAuth* mechanism. The `auth_config_secret_name` setting is optional.(AAP-45686) + +=== Bug fixes + +* Fixed an issue where the `jquery` version included in the redirect page did not match the version from the rest framework directory.(AAP-47160) + +* Fixed an issue where the ingress class name could not be configured on the {HubName} CR.(AAP-47054) + +* Fixed an issue where there was a missing resources limit on {HubName} API `init` containers.(AAP-47053) + +* Fixed an issue where the resources limit on worker pods could not be configured.(AAP-47045) + +* Fixed an issue where there was no `readinessProbe` configuration in the PostgreSQL `statefulset` definition.(AAP-47043) + + +== {ControllerNameStart} + +=== Features + +* Added AWX `dispatcherd` integration.(AAP-45800) + +=== Bug Fixes + +* Fixed a race condition where job templates with duplicate names in the same organization could be created.(AAP-45968) + +* Fixed an issue where `role_user_assignments` failed to query for `object_ansible_id`. Enabled query filtering for fields `user_ansible_id`, `team_ansible_id`, and `object_ansible_id` on the role assignment API endpoints.(AAP-45443) + +* Fixed an issue where some credential types were not populated after upgrading. A new migration was added to accomplish this.(AAP-44233) + +* Fixed an issue where there were large numbers of jobs queued that were stuck in waiting status.(AAP-44143) + + +== {HubNameStart} + +=== Enhancements + +* Any user can search and filter using AI keywords to find AI-related collections in {HubName}.(AAP-43138) + +=== Bug Fixes + +* Fixed an issue where there was an error when installing collections that exist in both rh-certified and community.(AAP-24271) + + +== Container-based {PlatformNameShort} + +=== Enhancements + +* Validates that nodes are configured with at least 16 GB of RAM.(AAP-47542) + +* Containerized {PlatformNameShort} now supports RHEL 10 for enterprise topologies.(AAP-47083) + +=== Bug Fixes + +* Fixed an issue with the TLS Certificate Authority (CA) certificate for the Receptor mesh configuration when the provided TLS certificates were not signed by the internal CA.(AAP-48065) + +* Fixed a missing user parameter for the sos report command on the `log_gathering` playbook.(AAP-47718) + +* Fixed an issue where the `jquery` version included in the redirect page did not match the version from the rest framework directory.(AAP-47074) + + +== {EDAName} + +=== Features + +* The REST API now supports editing the project URL.(AAP-47459) + +* Prior to this release, we suggested utilizing `ansible.builtin.set_fact` within playbooks. We now advise using `ansible.builtin.set_stats` as it enables seamless integration with job templates. We encourage migrating from `ansible.builtin.set_fact` to `ansible.builtin.set_stats` for optimal results, although `ansible.builtin.set_fact` will continue to be supported; a sketch follows this list.(AAP-46841)
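A minimal sketch of the recommended pattern, exposing a value through `ansible.builtin.set_stats` rather than `ansible.builtin.set_fact`. The variable name and host are illustrative:

----
- name: Publish data for downstream job templates
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Share a value via set_stats so later job templates can read it
      ansible.builtin.set_stats:
        data:
          remediation_target: web01.example.com  # illustrative variable
----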
+ +=== Enhancements + +* Previously, when a project `url/branch/scm_refspec` was edited, users had to manually trigger a project resync through either the UI or API. Now, {EDAName} automatically resyncs when one of `url`, `branch`, or `scm_refspec` is modified.(AAP-46254) + +* Relevant settings and versions are emitted in logs when the worker starts.(AAP-40984) + +=== Bug Fixes + +* Fixed an issue where, when using `gather_facts` in a rulebook, a user had to provide an inventory. This is only available when running `ansible-rulebook` as a CLI. When a rulebook with `gather_facts` is run as part of an activation, `gather_facts` is ignored, because activations do not include an inventory.(AAP-47846) + +* Fixed an issue where decision environment (DE) images that use a SHA digest in the URI would fail to pull.(AAP-47725) + +* Fixed an issue introduced in #1296 where the advisory lock was held by the proxy that schedules the job for `rq` and `dispatcherd`, rather than by the actual import/sync task.(AAP-47554) + +* Fixed an issue where there were no validations for the `URL`, `branch/tag/commit`, and `refspec` fields when creating or updating a project.(AAP-47227) + +* Fixed an issue on k8s-based deployments where activations would hang while being deleted or disabled.(AAP-46559) + +* Fixed an issue where the activation could get stuck in the *disabling* or *deleting* state under {OCPShort}.(AAP-45298) + + +== Receptor + +=== Bug Fixes + +* Fixed an issue where jobs were in a failed status with message *Receptor detail: Finished*. EOF is now handled correctly when the pod is ready.(AAP-46484) + + +== RPM-based {PlatformNameShort} + +=== Bug Fixes + +* Fixed an issue where `redis-platform` would not restart on restore.(AAP-47689) + +* Fixed an issue where old service nodes were not removed from {Gateway} when the installer ran with a new host or new host names.(AAP-47651) + +* Fixed an issue where restore was failing when a non-default port was used for the {PlatformNameShort} managed database.(AAP-47639) + +* Fixed an issue where some pages did not render properly when a non-default `umask` was used.(AAP-47377) + +* Fixed an issue where the {EDAName} script was not starting `nginx` on restart.(AAP-46511) + +* Fixed an issue where the credentials associated with decision environments would not be updated with the site information defined in the source inventory during restore.(AAP-46271) + +* Fixed an issue where the receptor certificate tasks would require switching to a receptor user.(AAP-46189) + +* Fixed an issue where the firewall was not opening event stream ports.(AAP-45684) diff --git a/downstream/titles/release-notes/async/aap-25-3-28-oct.adoc b/downstream/titles/release-notes/async/aap-25-3-28-oct.adoc new file mode 100644 index 0000000000..cd44cd1b47 --- /dev/null +++ b/downstream/titles/release-notes/async/aap-25-3-28-oct.adoc @@ -0,0 +1,78 @@ +[[aap-25-3-28-oct]] + += {PlatformNameShort} patch release October 28, 2024 + +The following enhancements and bug fixes have been implemented in this release of {PlatformNameShort}. + +== Enhancements + +=== {PlatformNameShort} + +* With this update, upgrades from {PlatformNameShort} 2.4 to 2.5 are supported for RPM and Operator-based deployments. For more information on how to upgrade, see link:{URLUpgrade}[{TitleUpgrade}]. (ANSTRAT-809) +** Upgrades from 2.4 Containerized {PlatformNameShort} Tech Preview to 2.5 Containerized {PlatformNameShort} are unsupported. +** Upgrades for {EDAName} are unsupported from {PlatformNameShort} 2.4 to {PlatformNameShort} 2.5. + +=== {OperatorPlatformNameShort} + +* An informative redirect page is now shown when you go to the {HubName} URL root. (AAP-30915) + +=== Container-based {PlatformNameShort} + +* The TLS Certificate Authority private key can now use a passphrase. (AAP-33594) + +* {HubNameStart} is populated with container images (decision and execution environments) and Ansible collections.
(AAP-33759) + +* The {ControllerName}, {EDAName}, and {HubName} legacy UIs now display a redirect page to the Platform UI rather than a blank page. (AAP-33794) + +=== RPM-based {PlatformNameShort} + +* Added platform Redis to RPM-based {PlatformNameShort}. This allows a 6 node cluster for a Redis high availability (HA) deployment. Removed the variable `aap_caching_mtls` and replaced it with `redis_disable_tls` and `redis_disable_mtls` which are boolean flags that disable Redis server TLS and Redis client certificate authentication. (AAP-33773) + +* An informative redirect page is now shown when going to {ControllerName}, {EDAName}, or {HubName} URL. (AAP-33827) + +== Bug fixes + +=== {PlatformNameShort} + +* Removed the *Legacy external password* option from the *Authentication Type* list. (AAP-31506) + +* {Galaxy}'s `sessionauth` class is now always the first in the list of authentication classes so that the platform UI can successfully authenticate. (AAP-32146) + +* link:https://access.redhat.com/security/cve/CVE-2024-10033[CVE-2024-10033] - `automation-gateway`: Fixed a Cross-site Scripting (XSS) vulnerability on the `automation-gateway` component that allowed a malicious user to perform actions that impact users. + +* link:https://access.redhat.com/security/cve/CVE-2024-22189[CVE-2024-22189] - `receptor`: Resolved an issue in `quic-go` that would allow an attacker to trigger a denial of service by sending a large number of `NEW_CONNECTION_ID` frames that retire old connection IDs. + +=== {ControllerNameStart} + +* link:https://access.redhat.com/security/cve/CVE-2024-41989[CVE-2024-41989] - `automation-controller`: Before this update, in Django, if `floatformat` received a string representation of a number in scientific notation with a large exponent, it could lead to significant memory consumption. With this update, decimals with more than 200 digits are now returned as is. + +* link:https://access.redhat.com/security/cve/CVE-2024-45230[CVE-2024-45230] - `automation-controller`: Resolved an issue in Python's Django `urlize()` and `urlizetrunc()` functions where excessive input with a specific sequence of characters would lead to denial of service. + +=== {HubNameStart} + +* Refactored the `dynaconf` hooks to preserve the necessary authentication classes for {PlatformNameShort} {PlatformVers} deployments. (AAP-31680) + +* During role migrations, model permissions are now re-added to roles to preserve ownership. (AAP-31417) + +=== {OperatorPlatformNameShort} + +* The port is now correctly set when configuring the {Gateway} cache `redis_host` setting when using an external Redis cache. (AAP-33279) + +* Added checksums to the {HubName} deployments so that pods are cycled to pick up changes to the PostgreSQL configuration and galaxy server settings Kubernetes secrets. (AAP-33518) + +=== Container-based {PlatformNameShort} + +* Fixed the uninstall playbook execution when the environment was already uninstalled. 
(AAP-32981) + + +// Commenting this out for now as the advisories are not yet published to the Errata tab on the downloads page: https://access.redhat.com/downloads/content/480/ver=2.5/rhel---9/2.5/x86_64/product-errata + +// == Advisories +// The following errata advisories are included in this release: + +// * link:https://access.redhat.com/errata/[] + +// * link:https://access.redhat.com/errata/[] + +// * link:https://access.redhat.com/errata/[] + +// * link:https://access.redhat.com/errata/[] diff --git a/downstream/titles/release-notes/async/aap-25-4-18-nov.adoc b/downstream/titles/release-notes/async/aap-25-4-18-nov.adoc new file mode 100644 index 0000000000..cb06c66de4 --- /dev/null +++ b/downstream/titles/release-notes/async/aap-25-4-18-nov.adoc @@ -0,0 +1,72 @@ +[[aap-25-4-18-nov]] + += {PlatformNameShort} patch release November 18, 2024 + +The following enhancements and bug fixes have been implemented in this release of {PlatformNameShort}. + +== Enhancements + +* With this release, a redirect page has been implemented that is displayed when you navigate to the root `/` for each component's stand-alone URL. The API endpoint remains functional. This affects {EDAName}, {ControllerName}, {OperatorPlatformNameShort}, and {OCPShort}. + + +== Bug fixes + +=== General + +With this update, the following CVEs have been addressed: + +* link:https://access.redhat.com/security/cve/cve-2024-9902[CVE-2024-9902] ansible-core: Ansible-core user may read/write unauthorized content. + +* link:https://access.redhat.com/security/cve/cve-2024-8775[CVE-2024-8775] ansible-core: Exposure of sensitive information in Ansible vault files due to improper logging. + + +=== {PlatformNameShort} + +* Fixed an issue where the user was unable to filter hosts on inventory groups, which returned a *Failed to load* options error in the {PlatformNameShort} UI.(AAP-34752) + +=== Execution Environment + +* Updated *pywinrm* to 0.4.3 in *ee-minimal* and *ee-supported* container images to fix Python 3.11 compatibility.(AAP-34077) + +=== {OperatorPlatformNameShort} + +* Fixed a syntax error when `bundle_cacert_secret` was defined due to incorrect indentation.(AAP-35358) + +* Fixed an issue where the default operator catalog for {PlatformNameShort} aligned to cluster-scoped versus namespace-scoped.(AAP-35313) + +* Added the ability to set tolerations and `node_selector` for the Redis *statefulset* and the gateway deployment.(AAP-33192) + +* Ensured the platform URL status is set when *Ingress* is used to resolve an issue with {Azure} on Cloud managed deployments. This is due to the {PlatformNameShort} operator failing to finish because it is looking for {OCPShort} routes that are not available on Azure Kubernetes Service.(AAP-34036) + +* Fixed an issue where the {PlatformNameShort} Operator description did not render code blocks correctly.(AAP-34589) + +* It is necessary to specify the `CONTROLLER_SSO_URL` and `AUTOMATION_HUB_SSO_URL` settings in Gateway to fix the OIDC auth redirect flow.(AAP-34080) + +* It is necessary to set the `SERVICE_BACKED_SSO_AUTH_CODE_REDIRECT_URL` setting to fix the OIDC auth redirect flow.(AAP-34079) + +=== Container-based {PlatformNameShort} + +* Fixed an issue where, when the port value was not defined in the `gateway_main_url` variable, the containerized installer failed with an incorrect {ExecEnvShort} image reference error.(AAP-34716) + +* Fixed an issue where the containerized installer used the port number when specifying the `image_url` for a decision environment.
+
+=== RPM-based {PlatformNameShort}
+
+* Fixed an issue where the *gpg* agent socket was not set up properly when multiple hub nodes were configured, which resulted in no *gpg* socket file being created in `/var/run/pulp`. (AAP-34067)
+
+=== {ToolsName}
+
+* Fixed an issue where data files were missing from the molecule RPM package. (AAP-35758)
+
+// Commenting this out for now as the advisories are not yet published to the Errata tab on the downloads page: https://access.redhat.com/downloads/content/480/ver=2.5/rhel---9/2.5/x86_64/product-errata
+
+// == Advisories
+// The following errata advisories are included in this release:
+
+// * link:https://access.redhat.com/errata/[]
+
+// * link:https://access.redhat.com/errata/[]
+
+// * link:https://access.redhat.com/errata/[]
+
+// * link:https://access.redhat.com/errata/[]
diff --git a/downstream/titles/release-notes/async/aap-25-5-3-dec.adoc b/downstream/titles/release-notes/async/aap-25-5-3-dec.adoc
new file mode 100644
index 0000000000..83220d2352
--- /dev/null
+++ b/downstream/titles/release-notes/async/aap-25-5-3-dec.adoc
@@ -0,0 +1,116 @@
+[[aap-25-5-3-dec]]
+
+= {PlatformNameShort} patch release December 3, 2024
+
+The following enhancements and bug fixes have been implemented in this release of {PlatformNameShort}.
+
+== Enhancements
+
+=== {PlatformNameShort}
+
+* {LightspeedShortName} has been updated to 2.5.241127. (AAP-35307)
+
+* The `redhat.insights` Ansible collection has been updated to 1.3.0. (AAP-35161)
+
+* The `ansible.eda` collection has been updated to 2.2.0 in {ExecEnvShort} and decision environment images. (AAP-3398)
+
+=== {OperatorPlatformNameShort}
+
+* With this update, you can set the PostgreSQL SSL/TLS mode to `verify-full` or `verify-ca` with the proper `sslrootcert` configuration in the {HubName} Operator (see the sketch after the General CVE list below). (AAP-35368)
+
+=== Container-based {PlatformNameShort}
+
+* With this update, the `ID` and `Image` fields from a container image are used instead of `Digest` and `ImageDigest` to trigger a container update. (AAP-36575)
+
+* With this update, you can now update the registry URL value in {EDAName} credentials. (AAP-35085)
+
+* With this update, the `kernel.keys.maxkeys` and `kernel.keys.maxbytes` settings are increased on systems with large memory configurations. (AAP-34019)
+
+* Added `ansible_connection=local` to the `inventory-growth` file and clarified its usage. (AAP-34016)
+
+=== Documentation updates
+
+* With this update, the Container growth topology and Container enterprise topology have been updated to include s390x (IBM Z) architecture test support. (AAP-35969)
+
+=== RPM-based {PlatformNameShort}
+
+* With this update, you can now update the registry URL value in {EDAName} credentials. (AAP-35162)
+
+== Bug fixes
+
+=== General
+
+With this update, the following CVEs have been addressed:
+
+* link:https://access.redhat.com/security/cve/CVE-2024-52304[CVE-2024-52304] `automation-controller`: `aiohttp` was vulnerable to request smuggling due to incorrect parsing of chunk extensions.
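+
+The {OperatorPlatformNameShort} enhancement above (AAP-35368) allows stricter PostgreSQL TLS verification for the {HubName} Operator. The following sketch shows an external database secret with `sslmode` set accordingly; only the `verify-ca` and `verify-full` values come from the note, while the key names and values follow common operator conventions and should be treated as assumptions:
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Secret
+metadata:
+  name: hub-external-postgres
+stringData:
+  host: db.example.org
+  port: "5432"
+  database: pulp
+  username: pulp
+  password: example-password
+  sslmode: verify-full  # or verify-ca; both require the CA certificate (sslrootcert) to be configured
+  type: unmanaged
+----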
+
+=== {OperatorPlatformNameShort}
+
+* With this update, missing {OperatorPlatformNameShort} custom resource definitions (CRDs) are added to the `aap-must-gather` container image. (AAP-35226)
+
+* Disabled {Gateway} authentication in the proxy configuration to prevent HTTP 502 errors when the control plane is down. (AAP-36527)
+
+* The Red Hat favicon is now correctly displayed on {ControllerName} and {EDAName} API tabs. (AAP-30810)
+
+* With this update, the {ControllerName} admin password is now reused during upgrade from {PlatformNameShort} 2.4 to 2.5. (AAP-35159)
+
+* Fixed an undefined variable (`_controller_enabled`) when reconciling an `AnsibleAutomationPlatformRestore`. Fixed a {HubName} Operator `pg_restore` error on restores caused by a wrong database secret being set. (AAP-35815)
+
+=== {ControllerNameStart}
+
+* Updated the minor version of uWSGI to improve its log messages. (AAP-33169)
+
+* Fixed job schedules running at the wrong time when the `rrule` interval was set to `HOURLY` or `MINUTELY`. (AAP-36572)
+
+* Fixed an issue where sensitive data was displayed in the job output. (AAP-35584)
+
+* Fixed an issue where unrelated jobs could be marked as a dependency of other jobs. (AAP-35309)
+
+* Included pod anti-affinity configuration on the default container group pod specification to optimally spread the workload. (AAP-35055)
+
+=== Container-based {PlatformNameShort}
+
+* With this update, you cannot change the `postgresql_admin_username` value when using a managed database node. (AAP-36577)
+
+* Added update support for the PCP monitoring role.
+
+* Disabled {Gateway} authentication in the proxy configuration to prevent HTTP 502 errors when the control plane is down.
+
+* With this update, you can use dedicated nodes for the Redis group.
+
+* Fixed an issue where disabling TLS on {Gateway} would cause installation to fail.
+
+* Fixed an issue where disabling TLS on {Gateway} proxy would cause installation to fail.
+
+* Fixed an issue where {Gateway} uninstall would leave container systemd unit files on disk.
+
+* Fixed an issue where the {HubName} container signing service creation failed when `hub_collection_signing=false` but `hub_container_signing=true`.
+
+* Fixed an issue with the `HOME` environment variable for receptor containers, which would cause a "Permission denied" error on the containerized execution node.
+
+* Fixed an issue where the GPG agent socket was not set up properly when many hub nodes were configured, which resulted in no GPG socket file being created in `/var/tmp/pulp`.
+
+* With this update, you can now change the {Gateway} port value after the initial deployment.
+
+=== Receptor
+
+* Fixed an issue that caused a Receptor runtime panic error.
+
+=== RPM-based {PlatformNameShort}
+
+* Fixed an issue where the `metrics-utility` command failed to run after updating {ControllerName}.
+
+* Fixed the owner and group permissions on the `/etc/tower/uwsgi.ini` file.
+
+* Fixed an issue where not having `eda_node_type` defined in the inventory file would result in a backup failure.
+
+* Fixed an issue where not having `routable_hostname` defined in the inventory file would result in a restore failure.
+
+* With this update, the `inventory-growth` file is now included in the RPM installer.
+
+* Fixed an issue where the dispatcher service went into `FATAL` status and failed to process new jobs after a database outage of a few minutes.
+
+* Disabled {Gateway} authentication in the proxy configuration to allow access to the UI when the control plane is down.
+
+* With this update, the Receptor data directory can now be configured by using the `receptor_datadir` variable, as shown in the sketch below.
+
diff --git a/downstream/titles/release-notes/async/async-updates.adoc b/downstream/titles/release-notes/async/async-updates.adoc
new file mode 100644
index 0000000000..bd98ccc2d5
--- /dev/null
+++ b/downstream/titles/release-notes/async/async-updates.adoc
@@ -0,0 +1,18 @@
+
+= Patch releases
+
+Security, bug fix, and enhancement updates for {PlatformNameShort} {PlatformVers} are released as asynchronous errata. All {PlatformNameShort} errata are available on the link:{PlatformDownloadUrl}[Download {PlatformName}] page.
+
+As a Red{nbsp}Hat Customer Portal user, you can enable errata notifications in the account settings for Red{nbsp}Hat Subscription Management (RHSM). When errata notifications are enabled, you receive notifications through email whenever new errata relevant to your registered systems are released.
+
+[NOTE]
+====
+Red{nbsp}Hat Customer Portal user accounts must have systems registered and consuming {PlatformNameShort} entitlements for {PlatformNameShort} errata notification emails to be generated.
+====
+
+The patch releases section of the release notes will be updated over time to provide notes on enhancements and bug fixes for patch releases of {PlatformNameShort} {PlatformVers}.
+
+[role="_additional-resources"]
+.Additional resources
+* For more information about asynchronous errata support in {PlatformNameShort}, see link:https://access.redhat.com/support/policy/updates/ansible-automation-platform[{PlatformName} Life Cycle].
+* For information about Common Vulnerabilities and Exposures (CVEs), see link:https://www.redhat.com/en/topics/security/what-is-cve[What is a CVE?] and link:https://access.redhat.com/security/security-updates/cve[Red Hat CVE Database].
diff --git a/downstream/titles/release-notes/topics/installer-version-table.adoc b/downstream/titles/release-notes/async/installer-version-table.adoc similarity index 59% rename from downstream/titles/release-notes/topics/installer-version-table.adoc rename to downstream/titles/release-notes/async/installer-version-table.adoc index bcd4040b3b..188a0f38ca 100644 --- a/downstream/titles/release-notes/topics/installer-version-table.adoc +++ b/downstream/titles/release-notes/async/installer-version-table.adoc @@ -6,12 +6,12 @@ |=== | Installation bundle | Component versions -| xref:installer-24-7[2.4-7] + -June 12, 2024 | -* `ansible-automation-platform-setup` 2.4-7 -* `ansible-core` 2.15.11 -* {ControllerNameStart} 4.5.7 -* {HubNameStart} 4.9.2 -* {EDAName} 1.0.7 +| Advisory link + +Month Date, 2024 | +* `ansible-automation-platform-setup` +* `ansible-core` +* {ControllerNameStart} +* {HubNameStart} +* {EDAName} -|=== \ No newline at end of file +|=== diff --git a/downstream/titles/release-notes/async/rpm-version-table.adoc b/downstream/titles/release-notes/async/rpm-version-table.adoc new file mode 100644 index 0000000000..8702f0840c --- /dev/null +++ b/downstream/titles/release-notes/async/rpm-version-table.adoc @@ -0,0 +1,18 @@ +// This table contains the component/package versions per RPM release + +.Component versions per errata advisory +//cols="a,a" formats the columns as AsciiDoc allowing for AsciiDoc syntax +[cols="2a,3a", options="header"] +|=== +| Errata advisory | Component versions + +| Advisory link + +Month Date, 2024 | +* `ansible-automation-platform-installer` +* `ansible-automation-platform-setup` +* `ansible-core` +* {ControllerNameStart} +* {HubNameStart} +* {EDAName} + +|=== diff --git a/downstream/titles/release-notes/docinfo.xml b/downstream/titles/release-notes/docinfo.xml index 26df600387..097f5a79e4 100644 --- a/downstream/titles/release-notes/docinfo.xml +++ b/downstream/titles/release-notes/docinfo.xml @@ -1,9 +1,10 @@ -Red Hat Ansible Automation Platform release notes +Release notes Red Hat Ansible Automation Platform 2.5 -New features, enhancements, and bug fix information +New features, enhancements, and bug fix information + - The release notes for Red Hat Ansible Automation Platform summarize all new features and enhancements, notable technical changes, major corrections from the previous version, and any known bugs upon general availability. + This guide provides a summary of new features, enhancements, and bug fix information for Red Hat Ansible Automation Platform. Red Hat Customer Content Services diff --git a/downstream/titles/release-notes/master.adoc b/downstream/titles/release-notes/master.adoc index deb50aa0bf..579b9b8054 100644 --- a/downstream/titles/release-notes/master.adoc +++ b/downstream/titles/release-notes/master.adoc @@ -1,69 +1,76 @@ // Templates for release notes are contained in the ..downstream/snippets folder. // For each release, make a copy of assembly-rn-template.adoc, rename and save as instructed in the template and add an include statement to this file. -include::attributes/attributes.adoc[] - -= Red Hat Ansible Automation Platform release notes - - -include::{Boilerplate}[] - -include::topics/platform-intro.adoc[leveloffset=+1] +//If there are any technology previews, add the file. 
+
+// Asynchronous release notes - commented out until 2.5 has asynchronous release note updates
+
+// include::async/async-updates.adoc[leveloffset=+1]
-include::topics/aap-24.adoc[leveloffset=+1]
-include::topics/controller-440.adoc[leveloffset=+1]
-
-include::topics/eda-24.adoc[leveloffset=+1]
-
-include::topics/hub-464.adoc[leveloffset=+1]
-
-include::topics/operator-240.adoc[leveloffset=+1]
-
-include::topics/docs-24.adoc[leveloffset=+1]
-
-// == Asynchronous updates
-include::topics/async-updates.adoc[leveloffset=+1]
+:experimental:
-=== RPM releases
-
-include::topics/rpm-version-table.adoc[leveloffset=+3]
-
-include::topics/rpm-24-7.adoc[leveloffset=+3]
-
-include::topics/rpm-24-6.adoc[leveloffset=+3]
-
-include::topics/rpm-24-5.adoc[leveloffset=+3]
-
-include::topics/rpm-24-4.adoc[leveloffset=+3]
-
-include::topics/rpm-24-3.adoc[leveloffset=+3]
-
-include::topics/rpm-24-2.adoc[leveloffset=+3]
-
-=== Installer releases
+include::attributes/attributes.adoc[]
-include::topics/installer-version-table.adoc[leveloffset=+3]
+= Release notes
-include::topics/installer-24-7.adoc[leveloffset=+3]
-include::topics/installer-24-62.adoc[leveloffset=+3]
+include::{Boilerplate}[]
-include::topics/installer-24-61.adoc[leveloffset=+3]
+include::topics/platform-intro.adoc[leveloffset=+1]
-include::topics/installer-24-6.adoc[leveloffset=+3]
+include::topics/aap-25.adoc[leveloffset=+1]
-include::topics/installer-24-24.adoc[leveloffset=+3]
+include::topics/tech-preview.adoc[leveloffset=+1]
-include::topics/installer-24-23.adoc[leveloffset=+3]
+include::topics/aap-25-deprecated-features.adoc[leveloffset=+1]
-include::topics/installer-24-22.adoc[leveloffset=+3]
+include::topics/aap-25-removed-features.adoc[leveloffset=+1]
-include::topics/installer-24-21.adoc[leveloffset=+3]
+include::topics/aap-25-changed-features.adoc[leveloffset=+1]
-include::topics/installer-24-14.adoc[leveloffset=+3]
+include::topics/aap-25-known-issues.adoc[leveloffset=+1]
-include::topics/installer-24-13.adoc[leveloffset=+3]
+include::topics/aap-25-fixed-issues.adoc[leveloffset=+1]
-include::topics/installer-24-12.adoc[leveloffset=+3]
+include::topics/docs-25.adoc[leveloffset=+1]
-include::topics/installer-24-11.adoc[leveloffset=+3]
+
+// == Asynchronous updates
+include::async/async-updates.adoc[leveloffset=+1]
+// Async release 2.5-07-02-2025
+include::async/aap-25-20250702.adoc[leveloffset=+2]
+// Async release 2.5-06-11-2025
+include::async/aap-25-20250611.adoc[leveloffset=+2]
+// Async release 2.5-06-09-2025
+include::async/aap-25-20250609.adoc[leveloffset=+2]
+// Async release 2.5-05-28-2025
+include::async/aap-25-20250528.adoc[leveloffset=+2]
+// Async release 2.5-05-07-2025
+include::async/aap-25-20250507.adoc[leveloffset=+2]
+// Async release 2.5-04-09-2025
+include::async/aap-25-20250409.adoc[leveloffset=+2]
+// Async release 2.5-03-26-2025
+include::async/aap-25-20250326.adoc[leveloffset=+2]
+// Async release 2.5-03-12-2025
+include::async/aap-25-20250312.adoc[leveloffset=+2]
+// Async release 2.5-03-05-2025
+include::async/aap-25-20250305.adoc[leveloffset=+2]
+// Async release 2.5-02-25-2025
+include::async/aap-25-20250225.adoc[leveloffset=+2]
+// Async release 2.5-02-13-2025
+include::async/aap-25-20250213.adoc[leveloffset=+2]
+// Async release 2.5-01-29-January
+include::async/aap-25-20250129.adoc[leveloffset=+2]
+// Async release 2.5-01-22-January
+include::async/aap-25-20250122.adoc[leveloffset=+2]
+// Async release 2.5-01-15-January
+include::async/aap-25-20250115.adoc[leveloffset=+2]
+// Async release 2.5-12-18-December
+include::async/aap-25-12-18-dec.adoc[leveloffset=+2] +// Async release 2.5-5 3rd Dec +include::async/aap-25-5-3-dec.adoc[leveloffset=+2] +// Async release 2.5-4 18th Nov (was released early) +include::async/aap-25-4-18-nov.adoc[leveloffset=+2] +// Async release 2.5-3 (AKA event 2) 28th Oct +include::async/aap-25-3-28-oct.adoc[leveloffset=+2] +// Async release 2.5-2 14th Oct +include::async/aap-25-2-14-oct.adoc[leveloffset=+2] +//Async release 2.5-1 7th Oct +include::async/aap-25-1-7-oct.adoc[leveloffset=+2] diff --git a/downstream/titles/release-notes/topics/aap-24.adoc b/downstream/titles/release-notes/topics/aap-24.adoc deleted file mode 100644 index c18585a9bb..0000000000 --- a/downstream/titles/release-notes/topics/aap-24.adoc +++ /dev/null @@ -1,118 +0,0 @@ -// For each release of AAP, make a copy of this file and rename it to aap-rn-xx.adoc where xx is the release number; for example, 24 for the 2.4 release. -// Save the renamed copy of this file to the release-notes/topics directory topic files for the release notes reside. -//Only include release note types that have updates for a given release. For example, if there are no Technology previews for the release, remove that section from this file. - - -= Overview of the {PlatformNameShort} 2.4 release - -== New features and enhancements - -{PlatformNameShort} 2.4 includes the following enhancements: - -* Previously, the {ExecEnvShort} container images were based on RHEL 8 only. With {PlatformNameShort} 2.4 onwards, the {ExecEnvShort} container images are now also available on RHEL 9. -The {ExecEnvShort} includes the following container images: -** ansible-python-base -** ansible-python-toolkit -** ansible-builder -** ee-minimal -** ee-supported - -* The ansible-builder project recently released {Builder} version 3, a much-improved and simplified approach to creating execution environments. -You can use the following configuration YAML keys with {Builder} version 3: -** additional_build_files -** additional_build_steps -** build_arg_defaults -** dependencies -** images -** options -** version - -* {PlatformNameShort} 2.4 and later versions can now run on ARM platforms, including both the control plane and the execution environments. - -* Added an option to configure the SSO logout URL for {HubName} if you need to change it from the default value. - -* Updated the ansible-lint RPM package to version 6.14.3. - -* Updated Django for potential denial-of-service vulnerability in file uploads (link:https://access.redhat.com/security/cve/CVE-2023-24580[CVE-2023-24580]). - -* Updated sqlparse for ReDOS vulnerability (link:https://access.redhat.com/security/cve/CVE-2023-30608[CVE-2023-30608]). - -* Updated Django for potential denial-of-service in Accept-Language headers (link:https://access.redhat.com/security/cve/CVE-2023-23969[CVE-2023-23969]). - -* {PlatformNameShort} 2.4 adds the ability to install {ControllerName}, {HubName}, and {EDAName} on IBM Power (ppc64le), IBM Z (s390x), and IBM® LinuxONE (s390x) architectures. - -.Additional resources - -* For more information about using {Builder} version 3, see link:https://ansible.readthedocs.io/projects/builder/en/stable/[{Builder} Documentation] and link:https://docs.ansible.com/automation-controller/latest/html/userguide/ee_reference.html[Execution Environment Setup Reference]. 
- -== Technology Preview - -include::../snippets/technology-preview.adoc[] - -The following are Technology Preview features: - -* Starting with {PlatformNameShort} 2.4, the Platform Resource Operator can be used to create the following resources in {ControllerName} by applying YAML to your OpenShift cluster: -** Inventories -** Projects -** Instance Groups -** Credentials -** Schedules -** Workflow Job Templates -** Launch Workflows - -You can now configure the Controller Access Token for each resource with the `connection_secret` parameter, rather than the `tower_auth_secret` parameter. This change is compatible with earlier versions, but the `tower_auth_secret` parameter is now deprecated and will be removed in a future release. - -[role="_additional-resources"] -.Additional resources - -* For the most recent list of Technology Preview features, see link:https://access.redhat.com/articles/ansible-automation-platform-preview-features[Ansible Automation Platform - Preview Features]. - -* For information about execution node enhancements on OpenShift deployments, see link:https://docs.ansible.com/automation-controller/latest/html/administration/instances.html[Managing Capacity With Instances]. - -== Deprecated and removed features - -include::../snippets/deprecated-features.adoc[] - -The following functionality was deprecated and removed in {PlatformNameShort} 2.4: - -* On-premise component {CatalogName} is now removed from {PlatformNameShort} 2.4 onwards. - -* With the {PlatformNameShort} 2.4 release, the {ExecEnvShort} container image for Ansible 2.9 (*ee-29-rhel-8*) is no longer loaded into the {ControllerName} configuration by default. - -* Although you can still synchronize content, the use of synclists is deprecated and will be removed in a later release. Instead, {PrivateHubName} administrators can upload manually-created requirements files from the `rh-certified` remote. - -* You can now configure the Controller Access Token for each resource with the `connection_secret` parameter, rather than the `tower_auth_secret` parameter. This change is compatible with earlier versions, but the `tower_auth_secret` parameter is now deprecated and will be removed in a future release. - -* Smart inventories have been deprecated in favor of constructed inventories and will be removed in a future release. - -== Bug fixes - -{PlatformNameShort} 2.4 includes the following bug fixes: - -* Updated the installation program to ensure that collection auto signing cannot be enabled without enabling the collection signing service. - -* Fixed an issue with restoring backups when the installed {ControllerName} version is different from the backup version. - -* Fixed an issue with not adding user defined galaxy-importer settings to `galaxy-importer.cfg` file. - -* Added missing `X-Forwarded-For` header information to nginx logs. - -* Removed unnecessary receptor peer name validation when IP address is used as the name. - -* Updated the `outdated base_packages.txt` file that is included in the bundle installer. - -* Fixed an issue where upgrading the {PlatformNameShort} did not update the nginx package by default. - -* Fixed an issue where an *awx* user was created without creating an *awx* group on execution nodes. - -* Fixed the assignment of package version variable to work with flat file inventories. - -* Added a FQDN check for the {HubName} hostname required to run the Skopeo commands. 
-
-* Fixed the front end URL for Red Hat Single Sign On (SSO) so it is now properly configured after you specify the `sso_redirect_host` variable.
-
-* Fixed the variable precedence for all component `nginx_tls_files_remote` variables.
-
-* Fixed the *setup.sh* script to escalate privileges if necessary for installing {PlatformNameShort}.
-
-* Fixed an issue when restoring a backup to an {HubName} with a different hostname.
diff --git a/downstream/titles/release-notes/topics/aap-25-2-patch-release-7-oct-2024.adoc b/downstream/titles/release-notes/topics/aap-25-2-patch-release-7-oct-2024.adoc
new file mode 100644
index 0000000000..b64294abc0
--- /dev/null
+++ b/downstream/titles/release-notes/topics/aap-25-2-patch-release-7-oct-2024.adoc
@@ -0,0 +1,59 @@
+//This is the working version of the patch release notes document.
+
+[[aap-25-2-patch-release-7-oct-2024]]
+
+
+= {PlatformName} 2.5-2 - October 7, 2024
+
+This release includes enhancements and fixes that have been implemented in {PlatformName} 2.5-2.
+
+== Enhancements
+
+* {EDAName} workers and scheduler now add timeout and retry resilience when communicating with a Redis cluster. (AAP-32139)
+* Removed the *MTLS* credential type that was incorrectly added. (AAP-31848)
+
+== Fixed issues
+
+=== {PlatformNameShort}
+
+* Fixed a conditional that was skipping necessary tasks in the restore role, which caused restores to not finish reconciling. (AAP-30437)
+
+* Systemd services in the containerized installer now have their restart policy set to *always* by default. (AAP-31824)
+
+* *FLUSHDB* is now modified to account for shared usage of a Redis database. It respects access limitations by removing only those keys that the client has permission to access. (AAP-32138)
+
+* Added a fix to ensure default *extra_vars* values are rendered in the *Prompt on launch* wizard. (AAP-30585)
+
+* Filtered out the unused *ANSIBLE_BASE_* settings from the environment variables in job execution. (AAP-32208)
+
+
+=== {EDAName}
+
+* Configured the setting *EVENT_STREAM_MTLS_BASE_URL* to the correct default to ensure MTLS is disallowed in the RPM installer. (AAP-32027)
+
+* Configured the setting *EVENT_STREAM_MTLS_BASE_URL* to the correct default to ensure MTLS is disallowed in the containerized installer. (AAP-31851)
+
+* Fixed a bug where the Event-Driven Ansible workers and scheduler were unable to reconnect to the Redis cluster if a primary Redis node entered a *failed* state and a new primary node was promoted. See the KCS article link:https://access.redhat.com/articles/7088545[Redis failover causes {EDAName} activation failures], which includes the steps that were necessary before this bug was fixed. (AAP-30722)
+
+== Advisories
+This section lists the errata advisories contained in this release.
+
+.Errata advisories
+//cols="a,a" formats the columns as AsciiDoc allowing for AsciiDoc syntax
+[cols="2a,3a", options="header"]
+|===
+| Patch release version | Errata advisory
+
+| {PlatformNameShort} 2.5-2 - October 7, 2024
+
+|
+
+link:https://access.redhat.com/errata/RHBA-2024:7756[RHBA-2024:7756]
+
+link:https://access.redhat.com/errata/RHBA-2024:7760[RHBA-2024:7760]
+
+link:https://access.redhat.com/errata/RHBA-2024:7766[RHBA-2024:7766]
+
+link:https://access.redhat.com/errata/RHBA-2024:7810[RHBA-2024:7810]
+
+|===
diff --git a/downstream/titles/release-notes/topics/aap-25-changed-features.adoc b/downstream/titles/release-notes/topics/aap-25-changed-features.adoc
new file mode 100644
index 0000000000..c2a9cb8619
--- /dev/null
+++ b/downstream/titles/release-notes/topics/aap-25-changed-features.adoc
@@ -0,0 +1,33 @@
+[[aap-2.5-changed-features]]
+= Changed features
+
+Changed features are not deprecated and will continue to be supported until further notice.
+
+The following table provides information about features that are changed in {PlatformNameShort} 2.5:
+
+[cols="20%,80%"]
+|===
+| Component | Feature
+
+|{HubNameStart}
+|Error responses previously returned with status code 403 are now returned with status code 401. Any API clients that rely on receiving status code 403 rather than 401 must update their logic. Standard UI usage works as expected.
+
+|{EDAName}
+|The `/extra_vars` endpoint is now a property within `/activations`.
+
+|{EDAName}
+|The endpoint `/credentials` was replaced with `/eda-credentials`. This is part of an expanded credentials capability for {EDAName}. For more information, see the chapter link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/event-driven_ansible_controller_user_guide/eda-credentials[Setting up credentials for {EDAcontroller}] in the _{EDAcontroller} user guide_.
+
+|{EDAName}
+|{EDAName} can no longer add, edit, or delete the {Gateway}-managed resources. Creating, editing, or deleting organizations, teams, or users is available through {Gateway} endpoints only. The {Gateway} endpoints also enable you to edit organization or team memberships and configure external authentication.
+
+|API
+|Auditing of users has now changed. Users are now audited through the platform API, not through the controller API. This change applies to {PlatformNameShort} in both cloud service and on-premise deployments.
+
+|{ControllerNameStart}, +
+{HubName}, +
+{Gateway}, and +
+{EDAName}
+|The sources of truth for user permission audits have changed. When an IdP (SSO) is used, the IdP should be the source of truth for user permission audits. When the {PlatformNameShort} {Gateway} is used without SSO, the {Gateway} should be the source of truth for user permissions, not the app-specific UIs or APIs.
+
+|===
\ No newline at end of file
diff --git a/downstream/titles/release-notes/topics/aap-25-deprecated-features.adoc b/downstream/titles/release-notes/topics/aap-25-deprecated-features.adoc
new file mode 100644
index 0000000000..01bc6f0d82
--- /dev/null
+++ b/downstream/titles/release-notes/topics/aap-25-deprecated-features.adoc
@@ -0,0 +1,140 @@
+[[aap-2.5-deprecated-features]]
+= Deprecated features
+
+include::../snippets/deprecated-features.adoc[]
+
+The following table provides information about features that were deprecated in {PlatformNameShort} 2.5:
+
+[cols="20%,80%"]
+|===
+| Component | Feature
+
+|{ControllerNameStart}, +
+{HubName}, and +
+{EDAcontroller}
+|Tokens for the {ControllerName} and the {HubName} are deprecated. If you want to generate tokens, use the {Gateway} to create them.
+
+The {Gateway} is the service that handles authentication and authorization for the {PlatformNameShort}. It provides a single entry into the {PlatformNameShort} and serves the platform user interface, so you can authenticate and access all of the {PlatformNameShort} services from a single location.
+
+|{ControllerNameStart} and +
+{HubName}
+|All non-local authentications into the {ControllerName} and {HubName} are deprecated. Use the {Gateway} to configure external authentications, such as SAML, LDAP, and RADIUS.
+
+|Ansible-core
+|The `INI` configuration option *COLLECTIONS_PATHS* is deprecated. Use the singular form *COLLECTIONS_PATH* instead (see the sketch after this table).
+
+|Ansible-core
+|The environment variable *ANSIBLE_COLLECTIONS_PATHS* is deprecated. Use the singular form *ANSIBLE_COLLECTIONS_PATH* instead.
+
+|Ansible-core
+|Old-style Ansible vars plug-ins that use the entry points `get_host_vars` or `get_group_vars` were deprecated in ansible-core 2.16 and will be removed in ansible-core 2.18. Update the Ansible plug-in to inherit from *BaseVarsPlugin* and define a `get_vars` method as the entry point.
+
+|Ansible-core
+|The *STRING_CONVERSION_ACTION* configuration option is deprecated because it is no longer used in the ansible-core code base.
+
+|Ansible-core
+|The *smart* option for setting a connection plug-in is being removed because its main purpose, choosing between the SSH and Paramiko protocols, is now irrelevant. Select an explicit connection plug-in instead.
+
+|Ansible-core
+|The undocumented `vaultid` parameter in the `vault` and `unvault` filters is deprecated and will be removed in ansible-core version 2.20. Use `vault_id` instead.
+
+|Ansible-core
+|The string parameter `keepcache` in the `yum_repository` module is deprecated.
+
+|Ansible-core
+|The `required` parameter in the API `ansible.module_utils.common.process.get_bin_path` is deprecated.
+
+|Ansible-core
+|`module_utils` - Importing the following convenience helpers from `ansible.module_utils.basic` has been deprecated:
+
+`get_exception`, `literal_eval`, `_literal_eval`, `datetime`, `signal`, `types`, `chain`, `repeat`, `PY2`, `PY3`, `b`, `binary_type`, `integer_types`, `iteritems`, `string_types`, `test_type`, `map`, and `shlex_quote`.
+
+Import the helpers from the source definition.
+
+|Ansible-core
+|`ansible-doc` - Role `entrypoint` attributes are deprecated and eventually will no longer be shown in `ansible-doc` from ansible-core.
+
+|{ExecEnvNameStartSing}
+|Execution environment-29 will be deprecated in the next major release after {PlatformNameShort} 2.5.
+
+|Installer
+|The Ansible team is exploring ways to improve the installation of the {PlatformNameShort} on {RHEL}, which may include changes to how components are deployed using RPM directly on the host OS. RPMs will be replaced by packages deployed into containers that are run via Podman; this is similar to how automation currently executes on Podman in containers (execution environments) on the host OS. Changes will be communicated through release notes, but removal will occur in major release versions of the {PlatformNameShort}.
+
+|Automation mesh
+|The Work Python option has been deprecated and will be removed from automation mesh in a future release.
+
+|===
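+
+For the *COLLECTIONS_PATHS* deprecation in the table above, the following is a minimal `ansible.cfg` sketch of the supported singular spelling; the paths shown are illustrative:
+
+[source,ini]
+----
+[defaults]
+# Deprecated plural spelling: collections_paths = ...
+collections_path = ./collections:~/.ansible/collections
+# Equivalently, use the singular ANSIBLE_COLLECTIONS_PATH environment
+# variable instead of the deprecated ANSIBLE_COLLECTIONS_PATHS.
+----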
+
+== Deprecated API endpoints
+
+The following API endpoints will be removed in a future release, either because their functionality is being removed or because it is superseded by other capabilities. For example, with the platform moving to a centralized authentication system in the {Gateway}, the existing authorization APIs in the {ControllerName} and {HubName} are being deprecated for future releases, because all authentication operations should occur in the {Gateway}.
+
+[cols="20%,40%,40%"]
+|===
+| Component | Endpoint | Capability
+
+|{ControllerNameStart}
+|`*/api/o*`
+|Token authentication is moving to the {Gateway}.
+
+|{HubNameStart}
+|`*/api/login/keycloak*`
+|Moving to the {Gateway}.
+
+|{HubNameStart}
+|`*/api/v3/auth/token*`
+|Token authentication used for pulling collections will migrate to the {Gateway} tokens.
+
+|{ControllerNameStart}
+|`*/api/v2/organizations*`
+|Moving to the {Gateway}.
+
+|{ControllerNameStart}
+|`*/api/v2/teams*`
+|Moving to the {Gateway}.
+
+|{ControllerNameStart}
+|`*/api/v2/users*`
+|Moving to the {Gateway}.
+
+|{ControllerNameStart}
+|`*/api/v2/roles*`
+|Controller-specific role definitions are moving to `*/api/controller/v2/role_definitions*`.
+
+|{ControllerNameStart}
a|
+The following roles lists:
+
+* `*/api/v2/teams/{id}/roles/*`
+* `*/api/v2/users/{id}/roles/*`
+|Controller-specific resource permissions are moving to `*/api/controller/v2/role_user_assignments*` and `*/api/controller/v2/role_team_assignments*`.
+
+|{ControllerNameStart}
a|
+The following object roles lists:
+
+* `*/api/v2/credentials/{id}/object_roles/*`
+* `*/api/v2/instance_groups/{id}/object_roles/*`
+* `*/api/v2/inventories/{id}/object_roles/*`
+* `*/api/v2/job_templates/{id}/object_roles/*`
+* `*/api/v2/organizations/{id}/object_roles/*`
+* `*/api/v2/projects/{id}/object_roles/*`
+* `*/api/v2/teams/{id}/object_roles/*`
+* `*/api/v2/workflow_job_templates/{id}/object_roles/*`
+|Controller-specific resource permissions are moving to `*/api/controller/v2/role_user_assignments*` and `*/api/controller/v2/role_team_assignments*`.
+
+|{ControllerNameStart}
a|
+The following resource access lists:
+
+* `*/api/v2/credentials/{id}/access_list/*`
+* `*/api/v2/instance_groups/{id}/access_list/*`
+* `*/api/v2/inventories/{id}/access_list/*`
+* `*/api/v2/job_templates/{id}/access_list/*`
+* `*/api/v2/organizations/{id}/access_list/*`
+* `*/api/v2/projects/{id}/access_list/*`
+* `*/api/v2/teams/{id}/access_list/*`
+* `*/api/v2/users/{id}/access_list/*`
+* `*/api/v2/workflow_job_templates/{id}/access_list/*`
+|No replacements yet.
+
+|===
\ No newline at end of file
diff --git a/downstream/titles/release-notes/topics/aap-25-fixed-issues.adoc b/downstream/titles/release-notes/topics/aap-25-fixed-issues.adoc
new file mode 100644
index 0000000000..0026a9a487
--- /dev/null
+++ b/downstream/titles/release-notes/topics/aap-25-fixed-issues.adoc
@@ -0,0 +1,90 @@
+[[aap-2.5-fixed-issues]]
+= Fixed issues
+
+This section provides information about fixed issues in {PlatformNameShort} 2.5.
+
+== {PlatformNameShort}
+
+* The installer now ensures the `semanage` command is available when SELinux is enabled. (AAP-24396)
+
+* The installer can now update certificates without attempting to start the nginx service for previously installed environments. (AAP-19948)
+
+* {EDAName} installation now fails when the pre-existing {ControllerName} is older than version 4.4.0. (AAP-18572)
+
+* {EDAName} can now successfully install on its own with a controller URL when the controller is not in the inventory. (AAP-16483)
+
+* Postgres tasks that create users in FIPS environments now use *scram-sha-256*. (AAP-16456)
+
+* The installer now successfully generates a new `SECRET_KEY` for controller. (AAP-15513)
+
+* All staged backup and restore files and directories are now cleaned up before running a backup or restore, and the files are marked for deletion after a backup or restore. (AAP-14986)
+
+* Postgres certificates are now temporarily copied when checking the Postgres version for SSL mode verify-full. (AAP-14732)
+
+* The setup script now warns if the provided log path does not have write permissions, and fails if the default path does not have write permissions. (AAP-14135)
+
+* The linger configuration is now correctly set by the root user for the {EDAName} user. (AAP-13744)
+
+* Subject alternative names for component hosts are now only checked for signing certificates when HTTPS is enabled. (AAP-7737)
+
+* The UI for creating and editing an organization now validates the *Max hosts* value. This value must be an integer between 0 and 214748364. (AAP-23270)
+
+* Installations that do not include the {ControllerName} but have an external database no longer install an unused internal Postgres server. (AAP-29798)
+
+* Added default port values for all `pg_port` variables in the installer. (AAP-18484)
+
+* *XDG_RUNTIME_DIR* is now defined when applying {EDAName} linger settings for Podman. (AAP-18341)
+
+* Fixed an issue where the restore process failed to stop *pulpcore-worker* services on RHEL 9. (AAP-12829)
+
+* Fixed Postgres *sslmode* for verify-full, which affected external Postgres and Postgres signed for 127.0.0.1 for internally managed Postgres. (AAP-7107)
+
+* Fixed support for {HubName} content signing. (AAP-9739)
+
+* Fixed conditional code statements to align with changes from ansible-core issue #82295. (AAP-19053)
+
+* Resolved an issue where providing the database installation with a custom port broke the installation of Postgres. (AAP-30636)
+
+== {HubNameStart}
+
+* {HubNameStart} now uses system crypto-policies in nginx. (AAP-17775)
+
+== {EDAName}
+
+* Fixed a bug where the Swagger API docs URL returned a 404 error when the URL had a trailing slash. (AAP-27417)
+
+* Fixed a bug where logs contained stack trace errors inappropriately. (AAP-23605)
+
+* Fixed a bug where the API returned error 500 instead of error 400 when a foreign key ID did not exist. (AAP-23105)
+
+* Fixed a bug where the Git hash of a project could be empty. (AAP-21641)
+
+* Fixed a bug where an activation could fail at the start time due to authentication errors with Podman. (AAP-21067)
+
+* Fixed a bug where a project could not get imported if it contained a malformed rulebook. (AAP-20868)
+
+* Added *EDA_CSRF_TRUSTED_ORIGINS*, which can be set by user input or defined based on the allowed hostnames provided or determined by the installer as a default. (AAP-19319)
+
+* Redirected all {EDAName} traffic to `/eda/` following UI changes that require the redirect. (AAP-18989)
+
+* Fixed the target database for {EDAName} restore from backup. (AAP-17918)
+
+* Fixed the {ControllerName} URL check when installing {EDAName} without a controller. (AAP-17249)
+
+* Fixed a bug where the membership operator failed in a condition applied to a previously saved event. (AAP-16663)
+
+* Fixed the {EDAName} nginx configuration for a custom HTTPS port. (AAP-16000)
+
+* All {EDAName} services, instead of the target service only, are now enabled after installation is completed, and they always start after the setup is complete. (AAP-15889)
+
+== {OperatorPlatformNameShort}
+
+* Fixed Django REST Framework (DRF) browsable views. (AAP-25508)
+
+== {AAPRHDH}
+
+The following updates were introduced in {AAPRHDH} 1.2:
+
+* Improvements in error handling and logging for the collection and playbook project scaffolder.
+* Updates to the `backstage-rhaap-backend` plugin for compatibility with {RHDHShort} 1.4.
+
diff --git a/downstream/titles/release-notes/topics/aap-25-known-issues.adoc b/downstream/titles/release-notes/topics/aap-25-known-issues.adoc
new file mode 100644
index 0000000000..aaf1a9e509
--- /dev/null
+++ b/downstream/titles/release-notes/topics/aap-25-known-issues.adoc
@@ -0,0 +1,27 @@
+[[aap-2.5-known-issues]]
+= Known issues
+
+This section provides information about known issues in {PlatformNameShort} 2.5.
+
+== {PlatformNameShort}
+
+* Added the `podman_containers_conf_logs_max_size` variable for *containers.conf* to control the maximum log size for Podman installations. The default value is 10 MiB. (AAP-12295)
+
+* Setting the `pg_host=` value without any other context results in an empty HOST section of the *settings.py* file in {ControllerName}. As a workaround, delete the `pg_host=` value or set it to `pg_host=''`. (AAP-31915)
+
+* Using *Prompt on launch* for variables in job templates, workflow job templates, workflow visualizer nodes, and schedules does not show the default variables when launching the job, or when configuring the workflows and schedules. (AAP-30585)
+
+* The unused *ANSIBLE_BASE_* settings are included as environment variables in the job execution. These variables suffixed with *SECRET* are no longer used in {PlatformNameShort} and can be ignored until they are removed in a future patch. (AAP-32208)
+
+== {EDAName}
+
+* mTLS event stream creation should be disallowed on all installation methods by default. It is currently disallowed on {OCPShort} installations, but not in containerized or RPM installations. (AAP-31337)
+
+* If a primary Redis node enters a `failed` state and a new primary node is promoted, {EDAName} workers and scheduler are unable to reconnect to the cluster. This causes activations to fail until the containers or pods are recycled. (AAP-30722)
+
+For more information, see the KCS article link:https://access.redhat.com/articles/7088545[Redis failover causes {EDAName} activation failures].
+
+== {AAPRHDH}
+
+* The Python VS Code extension v2024.14.1 does not work in OpenShift Dev Spaces version 1.9.3, preventing the Ansible VS Code extension from loading. As a workaround, downgrade the Python VS Code extension to version 2024.12.3.
+
+* The Ansible Content Creator *Get Started* page links do not work in OpenShift Dev Spaces version 1.9.3. As a workaround, use the link:https://code.visualstudio.com/docs/getstarted/userinterface[Ansible VS Code Command Palette] to access the features.
diff --git a/downstream/titles/release-notes/topics/aap-25-removed-features.adoc b/downstream/titles/release-notes/topics/aap-25-removed-features.adoc
new file mode 100644
index 0000000000..101e336f2d
--- /dev/null
+++ b/downstream/titles/release-notes/topics/aap-25-removed-features.adoc
@@ -0,0 +1,69 @@
+[[aap-2.5-removed-features]]
+= Removed features
+
+Removed features are those that were deprecated in earlier releases. They are now removed from {PlatformNameShort} and will no longer be supported.
+
+The following table provides information about features that are removed in {PlatformNameShort} 2.5:
+
+[cols="20%,80%"]
+|===
+| Component | Feature
+
+|{ControllerNameStart}
+|Proxy support for the {ControllerName} has been removed. Load balancers must now point to the {Gateway} instead of the controller.
+
+|ansible-lint
+|Support for the old Ansible `include` tasks syntax is removed in version 2.16 and replaced by `include_tasks` or `import_tasks`. Update content to use the currently supported Ansible syntax, such as link:https://docs.ansible.com/ansible/latest/collections/ansible/builtin/include_tasks_module.html[include_tasks] or link:https://docs.ansible.com/ansible/latest/collections/ansible/builtin/import_tasks_module.html#ansible-collections-ansible-builtin-import-tasks-module[import_tasks].
+
+|{EDAcontroller}
+|Tokens for the {EDAcontroller} are deprecated. Their configuration has been removed from rulebook activations, and they have been replaced with the {PlatformNameShort} credential type.
+
+|Ansible-core
+|Support for Windows Server versions 2012 and 2012 R2 is removed, as Microsoft's support for these versions ended on 10 October 2023. These versions of Windows Server are not tested in the {PlatformNameShort} 2.5 release. Red Hat does not guarantee that these features will continue to work as expected in this and future releases.
+
+|Ansible-core
+|In an Action plugin with an *ActionBase* class, the deprecated `_remote_checksum` method is now removed. Use `_execute_remote_stat` instead.
+
+|Ansible-core
+|The deprecated *FileLock* class is now removed. Add your own implementation or rely on third-party support.
+
+|Ansible-core
+|Python 3.9 is now removed as a supported version of the {ControllerName}. Use Python 3.10 or later.
+
+|Ansible-core
+|The `include` module that was deprecated in ansible-core 2.12 is now removed. Use `include_tasks` or `import_tasks` instead.
+
+|Ansible-core
+|`Templar` - The deprecated `shared_loader_obj` parameter of `__init__` is now removed.
+
+|Ansible-core
+|`fetch_url` - Removed the automatic disabling of `decompress` when gzip is not available.
+
+|Ansible-core
+|`inventory_cache` - Removed the deprecated `default.fact_caching_prefix` ini configuration option. Use `defaults.fact_caching_prefix` instead.
+
+|Ansible-core
+|`module_utils/basic.py` - Removed Python 3.5 as a supported remote version. Python version 2.7 or Python version 3.6 or later is now required.
+
+Removed Python versions 2.7 and 3.6 as supported remote versions. Use Python 3.7 or later for target execution.
+
+*NOTE:* This applies to Ansible version 2.17 only.
+
+With the removal of Python 2 support, the `yum` module and `yum` action plug-in are removed and redirected to `dnf`.
+
+|Ansible-core
+|`stat` - Removed the unused `get_md5` parameter.
+
+|Ansible-core
+|Removed the deprecated `JINJA2_NATIVE_WARNING` environment variable.
+
+|Ansible-core
+|Removed the deprecated `scp_if_ssh` option from the ssh connection plugin.
+
+|Ansible-core
+|Removed the deprecated `crypt` support from `ansible.utils.encrypt`.
+
+|Execution environment
+|The `python` link is no longer available in the ubi9-based execution environments; only `python3` is. Replace scripts that use `python` or `/bin/python` with `python3` or `/bin/python3`.
+ +|=== \ No newline at end of file diff --git a/downstream/titles/release-notes/topics/aap-25.adoc b/downstream/titles/release-notes/topics/aap-25.adoc new file mode 100644 index 0000000000..f19e1812d4 --- /dev/null +++ b/downstream/titles/release-notes/topics/aap-25.adoc @@ -0,0 +1,187 @@ +// For each release of AAP, make a copy of this file and rename it to aap-rn-xx.adoc where xx is the release number; for example, 24 for the 2.4 release. +// Save the renamed copy of this file to the release-notes/topics directory topic files for the release notes reside. +//Only include release note types that have updates for a given release. For example, if there are no Technology previews for the release, remove that section from this file. + +[id="new-features"] += New features and enhancements + +== Installation changes +Starting with {PlatformNameShort} 2.5, three different on-premise deployment models are fully tested. In addition to the existing RPM-based installer and operator, support for the containerized installer is being added. + +As the platform moves toward a container-first model, the RPM-based installer will be removed in a future release, and a deprecation warning is being issued with the release of {PlatformNameShort} 2.5. While the RPM installer will still be supported for {PlatformNameShort} 2.5 until it is removed, the investment will focus on the {ContainerBase} for RHEL deployments and the {OperatorBase} for OpenShift deployments. Upgrades from 2.4 containerized {PlatformNameShort} Technology Preview to 2.5 containerized {PlatformNameShort} are unsupported. + +== Deployment topologies +Red Hat tests {PlatformNameShort} 2.5 with a defined set of topologies to give you opinionated deployment options. Deploy all components of {PlatformNameShort} so that all features and capabilities are available for use without the need to take further action. + +It is possible to install {PlatformNameShort} on different infrastructure topologies and with different environment configurations. Red Hat does not fully test topologies outside of published reference architectures. Red Hat recommends using a tested topology for all new deployments and provides commercially reasonable support for deployments that meet minimum requirements. + +At the time of the {PlatformNameShort} 2.5 GA release, a limited set of topologies are fully tested. Red Hat will regularly add new topologies to iteratively expand the scope of fully tested deployment options. As new topologies roll out, we will include them in the release notes. + +The following table shows the tested topologies for {PlatformNameShort} 2.5: + +[%autowidth] +|=== +| Mode | Infrastructure | Description | Tested topologies + +|RPM | Virtual Machines/Bare Metal | The RPM installer deploys the {PlatformNameShort} on {RHEL} using RPMs to install the platform on host machines. Customers manage the product and infrastructure lifecycle. +a| +* RPM {GrowthTopology} +* RPM {EnterpriseTopology} + +|Containers | Virtual Machines/Bare Metal | The containerized installer deploys the {PlatformNameShort} on {RHEL} by using Podman that runs the platform in containers on host machines. Customers manage the product and infrastructure lifecycle. +a| +* Container {GrowthTopology} +* Container {EnterpriseTopology} + +|Operator | Red Hat OpenShift | The operator uses Red Hat OpenShift operators to deploy the {PlatformNameShort} within Red Hat OpenShift. Customers manage the product and infrastructure lifecycle. 
+a|
+* Operator {GrowthTopology}
+* Operator {EnterpriseTopology}
+|===
+
+For more information, see {LinkTopologies}.
+
+== Unified UI
+In versions before 2.5, the {PlatformNameShort} was split into three primary services: {ControllerName}, {HubName}, and {EDAcontroller}. Each service included standalone user interfaces, separate deployment configurations, and separate authentication schemas.
+
+In {PlatformNameShort} 2.5, the {Gateway} is provided as a service that handles authentication and authorization for the {PlatformNameShort}. With the {Gateway}, all services that make up the {PlatformNameShort} are consolidated into a single unified UI. The unified UI provides a single entry into the {PlatformNameShort} and serves the platform user interface to authenticate and access all of the {PlatformNameShort} services from a single location.
+
+=== Terminology changes
+
+The unified UI highlights the functional benefits provided by each underlying service. New UI terminology aligns to earlier names as follows:
+
+* *Automation execution* provides functionality from the *{ControllerName}* service
+* *Automation decisions* provides functionality from the *{EDAName}* service
+* *Automation content* provides functionality from the *{HubName}* service
+
+== {EDAName} functionality (Automation decisions)
+With {PlatformNameShort} 2.5, {EDAName} functionality has been enhanced with the following features:
+
+* Enterprise single sign-on and role-based access control are available through a new {PlatformNameShort} UI, which enables a single point of authentication and access to all functional components as follows:
+** Automation Execution ({ControllerName})
+** Automation Decisions ({EDAName})
+** Automation Content ({HubName})
+** Automation Analytics
+** Access Management
+** {LightspeedShortName}
+
+* Simplified event routing capabilities introduce event streams. Event streams are an easy way to connect your sources to your rulebooks, letting you create a single endpoint to receive alerts from an event source and then use the events in multiple rulebooks (see the rulebook sketch after this list). This simplifies rulebook activation setup, reduces maintenance demands, and helps lower risk by eliminating the need for additional ports to be open to external traffic.
+
+* {EDAName} in {PlatformNameShort} 2.5 now supports horizontal scalability and enables high-availability deployments of the {EDAController}. You can install multiple {EDAName} nodes to create highly available deployments.
+
+* Migration to the new platform-wide {PlatformName} credential type replaces the legacy controller token for enabling rulebook activations to call jobs in the {ControllerName}.
+
+* {EDAName} now has the ability to manage credentials that can be added to rulebook activations. These credentials can be used in rulebooks to authenticate to event sources. In addition, you can now attach vault credentials to rulebook activations so that you can use vaulted variables in rulebooks. Encrypted credentials and vaulted variables enable enterprises to secure the use of {EDAName} within their environment.
+
+* New modules are added to the *ansible.eda* collection to enable users to automate the configuration of the {EDAcontroller} using Ansible playbooks.
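+
+As referenced in the event streams item above, the following is a minimal rulebook sketch; the source, condition, and job template names are illustrative assumptions. In {PlatformNameShort} 2.5, an event stream configured in the platform UI can take the place of a matching source such as this one at activation time:
+
+[source,yaml]
+----
+# Rulebook sketch - names and payload fields are assumptions, not product defaults.
+- name: Act on monitoring alerts
+  hosts: all
+  sources:
+    - ansible.eda.webhook:  # an event stream can stand in for this source
+        host: 0.0.0.0
+        port: 5000
+  rules:
+    - name: Restart the failed service
+      condition: event.payload.status == "down"
+      action:
+        run_job_template:
+          name: Restart service
+          organization: Default
+----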
+
+[id="eda-2.5-with-automation-controller-2.4"]
+== {EDAName} 2.5 with {ControllerName} 2.4
+You can use a newly installed version of {EDAName} from {PlatformNameShort} 2.5 with some existing versions of the {ControllerName}. A hybrid configuration is supported with the following versions:
+
+* 2.4 {PlatformNameShort} version of {ControllerName} (4.4 or 4.5)
+* 2.5 {PlatformNameShort} version of {EDAName} (1.1)
+
+You can only use new installations of {EDAName} in this configuration. RPM-based hybrid deployments are fully supported by Red Hat. For details on setting up this configuration, see the chapter *Installing {EDAController} 1.1 and configuring {ControllerName} 4.4 or 4.5* in the link:{BaseURL}/red_hat_ansible_automation_platform/2.4/html/using_event-driven_ansible_2.5_with_ansible_automation_platform_2.4[Using Event-Driven Ansible 2.5 with Ansible Automation Platform 2.4] guide.
+
+A hybrid configuration means you can install a new {EDAName} service and configure rulebook activations to execute job templates on a 2.4 version of the {ControllerName}.
+
+== {LightspeedShortName} on-premise deployment
+{LightspeedFullName} is a generative AI service that helps automation teams create, adopt, and maintain Ansible content more efficiently; it is now available as an on-premise deployment on {PlatformNameShort} 2.5.
+
+The on-premise deployment provides {PlatformNameShort} customers with more control over their data and supports compliance with enterprise security policies. For example, organizations in sensitive industries with data privacy or air-gapped requirements can use on-premise deployments of both {LightspeedShortName} and {ibmwatsonxcodeassistant} for {LightspeedShortName} on Cloud Pak for Data. {LightspeedShortName} on-premise deployments are supported on {PlatformNameShort} 2.5. For more information, see the chapter link:https://docs.redhat.com/en/documentation/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant/2.x_latest/html-single/red_hat_ansible_lightspeed_with_ibm_watsonx_code_assistant_user_guide/index#configuring-lightspeed-onpremise_set-up-lightspeed[Setting up {LightspeedShortName} on-premise deployment] in the _{LightspeedFullName} User Guide_.
+
+== {AAPRHDH}
+The {AAPRHDH} deliver an Ansible-first {RHDH} user experience that simplifies creating Ansible content, such as playbooks and collections, for Ansible users of all skill levels. The Ansible plug-ins provide curated content and features to accelerate Ansible learner onboarding and streamline Ansible use case adoption across your organization.
+
+The Ansible plug-ins provide the following capabilities:
+
+* A customized home page and navigation tailored to Ansible users
+* Curated Ansible learning paths to help users new to Ansible
+* Software templates for creating Ansible playbooks and collection projects that follow best practices
+* Links to supported development environments and tools with opinionated configurations
+
+For more information, see _link:{LinkPluginRHDHInstall}_.
+//For more information, see the link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/installing_ansible_plug-ins_for_red_hat_developer_hub/rhdh-intro_aap-plugin-rhdh-installing#rhdh-about-plugins_rhdh-intro[{AAPRHDH}] documentation.
+
+
+== {ToolsName}
+{ToolsName} is a suite of tools provided with {PlatformNameShort} to help automation creators create, test, and deploy playbook projects, execution environments, and collections on Linux, MacOS, and Windows platforms. Consolidating core Ansible tools into a single package simplifies tool management and promotes recommended practices in the automation content creation experience.
+
+{ToolsName} are distributed in an RPM package for RHEL systems, and in a supported container distribution that can be used on Linux, MacOS, and Windows.
+
+{ToolsName} comprise the following tools:
+
+* ansible-builder
+* ansible-core
+* ansible-lint
+* ansible-navigator
+* ansible-sign
+* Molecule
+* ansible-creator
+* ansible-dev-environment
+* pytest-ansible
+* tox-ansible
+
+For more information, see link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/developing_ansible_automation_content/index[Developing Ansible automation content].
+
+== {SaaSonAWS}
+
+{SaaSonAWS} is a deployment of the {PlatformNameShort} control plane purchased through AWS Marketplace. Red{nbsp}Hat manages the service so that customer teams can focus on automation.
+
+For more information, see link:{BaseURL}/ansible_on_clouds/2.x/html-single/red_hat_ansible_automation_platform_service_on_aws/index[{SaaSonAWS}].
+
+== Enhancements
+
+* Added the ability to provide a `mounts.conf` file, or to copy one from a local or remote source, when installing Podman. (AAP-16214)
+
+* Updated the inventory file to include the SSL key and certificate parameters for provided SSL web certificates. (AAP-13728)
+
+* Added an {PlatformNameShort} operator-version label on Kubernetes resources created by the operator. (AAP-31058)
+
+* Added installation variables to support PostgreSQL certificate authentication for user-provided databases. (AAP-1095)
+
+* Updated NGINX to version 1.22. (AAP-15128)
+
+* Added a new configuration endpoint for the REST API. (AAP-13639)
+
+* Allowed adjustment of *RuntimeDirectorySize* for Podman environments at the time of installation. (AAP-11597)
+
+* Added support for the *SAFE_PLUGINS_FOR_PORT_FORWARD* setting for *eda-server* to the installation program. (AAP-21503)
+
+* Aligned inventory content to tested topologies and added comments for easier access to groups and variables when custom configurations are required. (AAP-30242)
+
+* The variable *`automationedacontroller_allowed_hostnames`* is no longer needed and is no longer supported for {EDAName} installations. (AAP-24421)
+
+* The *eda-server* now opens the ports for a rulebook with a source plugin that requires inbound connections only if that plugin is allowed in the settings. (AAP-17416)
+
+* The {EDAName} settings are now moved to a dedicated YAML file. (AAP-13276)
+
+* Starting with {PlatformNameShort} 2.5, customers using the controller collection (`ansible.controller`) have the platform collection (`ansible.platform`) as a single point of entry, and must use the platform collection to seed organizations, users, and teams. (AAP-31517)
+
+* Users are opted in for {Analytics} by default when activating {ControllerName} on first-time login. (ANSTRAT-875)
+
+////
+THE FOLLOWING IS THE SNIPPET FOR TECH. PREVIEW. ADD THIS SNIPPET IF THERE ARE ANY TECH. PREVIEW FEATURES FOR THE RELEASE. AAP 2.5 HAD NO TECH. PREVIEW FEATURES.
+== Technology Preview
+
+include::../snippets/technology-preview.adoc[]
+
+The following are Technology Preview features:
+
+* Starting with {PlatformNameShort} 2.4, the Platform Resource Operator can be used to create the following resources in {ControllerName} by applying YAML to your OpenShift cluster:
+** Inventories
+** Projects
+** Instance Groups
+** Credentials
+** Schedules
+** Workflow Job Templates
+** Launch Workflows
+
+You can now configure the Controller Access Token for each resource with the `connection_secret` parameter, rather than the `tower_auth_secret` parameter.
This change is compatible with earlier versions, but the `tower_auth_secret` parameter is now deprecated and will be removed in a future release. + +[role="_additional-resources"] +.Additional resources + +* For the most recent list of Technology Preview features, see link:https://access.redhat.com/articles/ansible-automation-platform-preview-features[Ansible Automation Platform - Preview Features]. +//// diff --git a/downstream/titles/release-notes/topics/async-updates.adoc b/downstream/titles/release-notes/topics/async-updates.adoc deleted file mode 100644 index 85b6a76cb0..0000000000 --- a/downstream/titles/release-notes/topics/async-updates.adoc +++ /dev/null @@ -1,18 +0,0 @@ - -= Asynchronous updates - -Security, bug fix, and enhancement updates for {PlatformNameShort} {PlatformVers} are released as asynchronous erratas. All {PlatformNameShort} erratas are available on the link:{PlatformDownloadUrl}[Download {PlatformName}] page in the Customer Portal. - -As a Red Hat Customer Portal user, you can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, you receive notifications through email whenever new erratas relevant to your registered systems are released. - -[NOTE] -==== -Red Hat Customer Portal user accounts must have systems registered and consuming {PlatformNameShort} entitlements for {PlatformNameShort} errata notification emails to generate. -==== - -The Asynchronous updates section of the release notes will be updated over time to give notes on enhancements and bug fixes for asynchronous errata releases of {PlatformNameShort} 2.4. - -[role="_additional-resources"] -.Additional resources -* For more information about asynchronous errata support in {PlatformNameShort}, see link:https://access.redhat.com/support/policy/updates/ansible-automation-platform[{PlatformName} Life Cycle]. -* For information about Common Vulnerabilities and Exposures (CVEs), see link:https://www.redhat.com/en/topics/security/what-is-cve[What is a CVE?] and link:https://access.redhat.com/security/security-updates/cve[Red Hat CVE Database]. diff --git a/downstream/titles/release-notes/topics/controller-440.adoc b/downstream/titles/release-notes/topics/controller-440.adoc deleted file mode 100644 index 655657574d..0000000000 --- a/downstream/titles/release-notes/topics/controller-440.adoc +++ /dev/null @@ -1,8 +0,0 @@ -// This is the release notes for Automation Controller 4.4, the version number is removed from the topic title as part of the release notes restructuring efforts. - -[[controller-440-intro]] -= {ControllerNameStart} - -{ControllerNameStart} helps teams manage complex multitiered deployments by adding control, knowledge, and delegation to Ansible-powered environments. - -See link:https://docs.ansible.com/automation-controller/latest/html/release-notes/relnotes.html#release-notes-for-4-x[Automation Controller Release Notes for 4.x] for a full list of new features and enhancements. diff --git a/downstream/titles/release-notes/topics/docs-24.adoc b/downstream/titles/release-notes/topics/docs-24.adoc deleted file mode 100644 index eda991403a..0000000000 --- a/downstream/titles/release-notes/topics/docs-24.adoc +++ /dev/null @@ -1,36 +0,0 @@ -// This is the release notes for AAP 2.4 documentation, the version number is removed from the topic title as part of the release notes restructuring efforts. 
- -[[docs-2.4-intro]] -= {PlatformNameShort} documentation - -{PlatformName} 2.4 documentation includes significant feature updates as well as documentation enhancements and offers an improved user experience. - -.New features and enhancements - -* With the removal of the on-premise component {CatalogName} from {PlatformNameShort} 2.4 onwards, all {CatalogName} documentation is removed from the {PlatformNameShort} 2.4 documentation. - -* The following documents are created to help you install and use {EDAName}, the newest capability of {PlatformNameShort}: - -** link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/getting_started_with_event-driven_ansible_guide/index[Getting Started with Event-Driven Ansible] - -** link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/event-driven_ansible_controller_user_guide/index[Event Driven Ansible User Guide] - -In addition, sections of the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_planning_guide/index[Ansible Automation Platform Planning Guide] -and the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/index[Ansible Automation Platform Installation Guide] are updated to include instructions for planning and installing {EDAName}. - -* The {HubName} documentation has had significant reorganization to combine the content spread across 9 separate documents into the following documents: - -_Getting started with automation hub_:: -Use this guide to perform the initial steps required to use Red Hat {HubName} as the default source for Ansible collections content. - -_Managing content in automation hub_:: -Use this guide to understand how to create and manage collections, content and repositories in {HubName}. - -_Red Hat Ansible Automation Platform Installation Guide_:: -Use this guide to learn how to install {PlatformNameShort} based on supported installation scenarios. - -* The _Managing Red Hat Certified and Ansible Galaxy collections in automation hub guide_ has been moved to the link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/managing_content_in_automation_hub/index#managing-cert-valid-content[_Red Hat Certified, validated, and Ansible Galaxy content in automation hub_] topic in the _Managing content in automation hub_ guide. - -* The {PlatformNameShort} 2.4 Release Notes are restructured to improve the experience for our customers and the Ansible Community. Users can now view the latest updates based on the {PlatformNameShort} versions, instead of their release timeline. - -* The topic link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/managing_content_in_automation_hub/index#repo-management[Repository management with automation hub] is created to help you create and manage custom repositories in {HubName}. This topic is found in the _Managing content in automation hub_ guide. diff --git a/downstream/titles/release-notes/topics/docs-25.adoc b/downstream/titles/release-notes/topics/docs-25.adoc new file mode 100644 index 0000000000..b4e99c628a --- /dev/null +++ b/downstream/titles/release-notes/topics/docs-25.adoc @@ -0,0 +1,121 @@ +// This is the release notes for AAP 2.5 documentation, the version number is removed from the topic title as part of the release notes restructuring efforts. 
+ +[[docs-2.5-intro]] += {PlatformNameShort} documentation + +{PlatformName} 2.5 documentation includes significant feature updates and documentation enhancements, and offers an improved user experience. + +The following are documentation enhancements in {PlatformNameShort} 2.5: + +* The former _Setting up an {ControllerName} token_ chapter has been deprecated and replaced with the link:https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html-single/using_automation_decisions/index#eda-set-up-rhaap-credential-type[Setting up a Red Hat Ansible Automation Platform credential] topic. Because the {EDAcontroller} is now integrated with centralized authentication and the Platform UI, this method simplifies the authentication process required for rulebook activations. + +* Documentation changes for 2.5 reflect terminology and product changes. Additionally, we have consolidated content into fewer documents. ++ +The following table summarizes title changes for the 2.5 release. ++ +// Per call with Lynne Maynard on Mon. 23 Sept., the ask is to hold off on adding hyperlinks to the individual doc guides for 30 Sept. release as there have been many updates in the guide names and we don't want broken links issues. This is to be reconsidered in the next update, ie, update 1. Therefore, I have used "title attributes" and not "link attributes" for the guides. +[cols="2,2"] +|=== +| Version 2.4 document title | Version 2.5 document title + +|{PlatformName} release notes +|Release notes + +|NA +|New: {TitleAnalytics} + +|{PlatformName} planning guide +|{TitlePlanningGuide} + +|Containerized {PlatformNameShort} installation guide (Technology Preview release) +|{TitleContainerizedInstall} (First Generally Available release) + +|Deploying the {PlatformNameShort} operator on {OCPShort} +|{TitleOperatorInstallation} + +a| +* Getting started with {ControllerName} +* Getting started with {HubName} +* Getting started with {EDAName} +|New: {TitleGettingStarted} + +|Installing and configuring central authentication for the {PlatformNameShort} +|{TitleCentralAuth} + +|Getting started with Ansible playbooks +|Getting started with Ansible playbooks + +|{PlatformNameShort} operations guide +|{TitleAAPOperationsGuide} + +|{PlatformNameShort} automation mesh for {OperatorBase} +|{TitleOperatorMesh} + +|{PlatformNameShort} automation mesh for {VMBase} +|{TitleAutomationMesh} + +|Performance considerations for {OperatorBase} +|{TitleOCPPerformanceGuide} + +|{PlatformNameShort} operator backup and recovery guide +|{TitleOperatorBackup} + +|Troubleshooting {PlatformNameShort} +|{TitleTroubleshootingAAP} + +|{PlatformNameShort} hardening guide +|Not available for 2.5 release; to be published at a later date + +|{ControllerName} user guide +|{ControllerUG} + +|{ControllerName} administration guide +|{ControllerAG} + +|{ControllerName} API overview +|{TitleControllerAPIOverview} + +|{ControllerName} API reference +|Automation execution API reference + +|{ControllerName} CLI reference +|Automation execution CLI reference + +|{EDAName} user guide +|{TitleEDAUserGuide} + +|Managing content in {HubName} +| +- Managing automation content + +- Automation content API reference + +|Ansible security automation guide +|Ansible security automation guide + +a| +* Using the automation calculator + +* Viewing reports about your Ansible automation environment + +* Evaluating your automation controller job runs using the job explorer + +* Planning your automation jobs using the
automation savings planner +|{TitleAnalytics} + +|{PlatformNameShort} creator guide +|{TitleDevelopAutomationContent} + +|Automation content navigator creator guide +|{TitleNavigatorGuide} + +|Creating and consuming execution environments +|{TitleBuilder} + +|Installing {AAPRHDH} +|{TitlePluginRHDHInstall} + +|Using {AAPRHDH} +|{TitlePluginRHDHUsing} + +|=== diff --git a/downstream/titles/release-notes/topics/eda-24.adoc b/downstream/titles/release-notes/topics/eda-24.adoc deleted file mode 100644 index ad5bb44673..0000000000 --- a/downstream/titles/release-notes/topics/eda-24.adoc +++ /dev/null @@ -1,58 +0,0 @@ -// This is the release notes for Event-Driven Ansible 1.0 for AAP 2.4 release, the version number is removed from the topic title as part of the release notes restructuring efforts. - -[[eda-24-intro]] -= {EDAName} - -{EDAName} is a new way to enhance and expand automation by improving IT speed and agility while enabling consistency and resilience. {EDAName} is designed for simplicity and flexibility. - -.Known issues - -* Both contributor and editor roles cannot set the AWX token. Only users with administrator roles can set the AWX token. - -* Activation-job pods do not have request limits. - -* The onboarding wizard does not request a controller token creation. - -* Users cannot filter through a list of tokens under the *Controller Token* tab. - -* Only the users with administrator rights can set or change their passwords. - -* If there is a failure, an activation with restart policy set to `Always` is unable to restart the failed activation. - -* Disabling and enabling an activation causes the restart count to increase by one count. This behavior results in an incorrect `restart` count. - -* You must run Podman pods with memory limits. - -* Users can add multiple tokens even when only the first AWX token is used. - -* A race condition occurs when creating and rapidly deleting an activation causes errors. - -* When users filter any list, only the items that are on the list get filtered. - -* When ongoing activations start multiple jobs, a few jobs are not recorded in the audit logs. - -* When a job template fails, a few key attributes are missing in the event payload. - -* Restart policy in a Kubernetes deployment does not restart successful activations that are marked as failed. - -* An incorrect status is reported for activations that are disabled or enabled. - -* If the `run_job_template` action fails, the rule is not counted as executed. - -* RHEL 9.2 activations cannot connect to the host. - -* Restarting the {EDAName} server can cause activation states to become stale. - -* Bulk deletion of rulebook activation lists is not consistent, and the deletion can be either successful or unsuccessful. - -* When users access the detail screen of a rule audit, the related rulebook activation link is broken. - -* Long running activations with loads of events can cause an out of disk space issue. Resolved in xref:rpm-24-6[installer release 2.4-6]. - -* Certain characters, such as hyphen (-), forward slash (/), and period (.), are not supported in the event keys. Resolved in xref:rpm-24-3[installer release 2.4-3]. - -* When there are more activations than available workers, disabling the activations incorrectly shows them in running state. Resolved in xref:rpm-24-3[installer release 2.4-3]. - -* {EDAName} activation pods are running out of memory on RHEL 9. Resolved in xref:rpm-24-3[installer release 2.4-3]. 
- -* When all workers are busy with activation processes, other asynchronous tasks are not executed, such as importing projects. Resolved in xref:rpm-24-3[installer release 2.4-3]. \ No newline at end of file diff --git a/downstream/titles/release-notes/topics/hub-464.adoc b/downstream/titles/release-notes/topics/hub-464.adoc deleted file mode 100644 index 8c1faab655..0000000000 --- a/downstream/titles/release-notes/topics/hub-464.adoc +++ /dev/null @@ -1,22 +0,0 @@ -// This is the release notes for Automation Hub 4.6.4, the version number is removed from the topic title as part of the release notes restructuring efforts. - -[[hub-464-intro]] -= {HubNameStart} - -{HubNameStart} enables you to discover and use new certified automation content, such as Ansible Collections, from Red Hat Ansible and Certified Partners. - -.New features and enhancements - -* This release of {HubName} provides repository management functionality. With repository management, you can create, edit, delete, and move content between repositories. - -.Bug fixes - -* Fixed an issue in the collection keyword search which was returning an incorrect number of results. - -* Added the ability to set *OPT_REFERRALS* option for LDAP, so that users can now successfully log in to the {HubName} by using their LDAP credentials. - -* Fixed an error on the UI when *redhat.openshift* collection's core dependency was throwing a `404 Not Found` error. - -* Fixed an error such that the deprecated execution environments are now skipped while syncing with `registry.redhat.io`. - - diff --git a/downstream/titles/release-notes/topics/operator-240.adoc b/downstream/titles/release-notes/topics/operator-240.adoc deleted file mode 100644 index 5bc0da4611..0000000000 --- a/downstream/titles/release-notes/topics/operator-240.adoc +++ /dev/null @@ -1,17 +0,0 @@ -// This is the release notes for Automation Platform Operator 2.4, the version number is removed from the topic title as part of the release notes restructuring efforts. - -[[operator-240-intro]] -= Automation Platform Operator - -{OperatorPlatform} provides cloud-native, push-button deployment of new {PlatformNameShort} instances in your OpenShift environment. - -.Bug fixes - -* Enabled configuration of resource requirements for {ControllerName} `init` containers. - -* Added *securityContext* for Event-Driven Ansible Operator deployments to be Pod Security Admission compliant. - -* Resolved error `Controller: Error 413 Entity too large` when doing bulk updates. - -* Ansible token is now obfuscated in YAML job details. - diff --git a/downstream/titles/release-notes/topics/platform-intro.adoc b/downstream/titles/release-notes/topics/platform-intro.adoc index a2baaf62cc..43fc5efb6f 100644 --- a/downstream/titles/release-notes/topics/platform-intro.adoc +++ b/downstream/titles/release-notes/topics/platform-intro.adoc @@ -1,22 +1,23 @@ [[platform-introduction]] = Overview of {PlatformName} -{PlatformName} simplifies the development and operation of automation workloads for managing enterprise application infrastructure lifecycles. -{PlatformNameShort} works across multiple IT domains including operations, networking, security, and development, as well as across diverse hybrid environments. -Simple to adopt, use, and understand, {PlatformNameShort} provides the tools needed to rapidly implement enterprise-wide automation, no matter where you are in your automation journey. 
+{PlatformName} simplifies the development and operation of automation workloads for managing enterprise application infrastructure lifecycles. {PlatformNameShort} works across multiple IT domains, including operations, networking, security, and development, as well as across diverse hybrid environments. Simple to adopt, use, and understand, {PlatformNameShort} provides the tools needed to rapidly implement enterprise-wide automation, no matter where you are in your automation journey. [[whats-included]] -== What is included in {PlatformNameShort} +== What is included in the {PlatformNameShort} -[cols="a,a,a,a,a"] +[%header%autowidth] |=== -| {PlatformNameShort} | {ControllerNameStart} | {HubNameStart} | {EDAcontroller} | {InsightsShort} +| {PlatformNameShort} | {ControllerNameStart} | {HubNameStart} | {EDAcontroller} | {InsightsShort} | {GatewayStart} + +(Unified UI) -|2.4 | 4.4| -* 4.7 +|2.5 | 4.6.0 +a| +* 4.10.0 * hosted service| -1.0 +1.1.0 | hosted service +| 1.1 |=== @@ -24,13 +25,3 @@ Simple to adopt, use, and understand, {PlatformNameShort} provides the tools nee Red Hat provides different levels of maintenance for each {PlatformNameShort} release. For more information, see link:https://access.redhat.com/support/policy/updates/ansible-automation-platform[{PlatformName} Life Cycle]. -== Upgrading {PlatformNameShort} - -When upgrading, do not use `yum update`. Use the installation program instead. The installation program performs all of the necessary actions required to upgrade to the latest versions of {PlatformNameShort}, including {ControllerName} and {PrivateHubName}. - -.Additional resources -* For information about the components included in {PlatformNameShort}, see the table in xref:whats-included[What is included in {PlatformNameShort}]. - -* For more information about upgrading {PlatformNameShort}, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_upgrade_and_migration_guide/index[{PlatformName} upgrade and migration guide]. - -* For procedures related to using the {PlatformNameShort} installer, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html/red_hat_ansible_automation_platform_installation_guide/index[{PlatformNameShort} installation guide].
diff --git a/downstream/titles/release-notes/topics/rpm-version-table.adoc b/downstream/titles/release-notes/topics/rpm-version-table.adoc deleted file mode 100644 index d07ee09e7d..0000000000 --- a/downstream/titles/release-notes/topics/rpm-version-table.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// This table contains the component/package versions per each errata advisory - -.Component versions per errata advisory -//cols="a,a" formats the columns as AsciiDoc allowing for AsciiDoc syntax -[cols="2a,3a", options="header"] -|=== -| Errata advisory | Component versions - -| xref:rpm-24-7[RHSA-2024:3781] + -June 10, 2024 | -* `ansible-automation-platform-installer` 2.4-7 -* `ansible-automation-platform-setup` 2.4-7 -* `ansible-core` 2.15.11 -* {ControllerNameStart} 4.5.7 -* {HubNameStart} 4.9.2 -* {EDAName} 1.0.7 - -|=== \ No newline at end of file diff --git a/downstream/titles/release-notes/topics/snippets b/downstream/titles/release-notes/topics/snippets new file mode 120000 index 0000000000..2490678d80 --- /dev/null +++ b/downstream/titles/release-notes/topics/snippets @@ -0,0 +1 @@ +../../../snippets \ No newline at end of file diff --git a/downstream/titles/release-notes/topics/tech-preview.adoc b/downstream/titles/release-notes/topics/tech-preview.adoc new file mode 100644 index 0000000000..b9a235336d --- /dev/null +++ b/downstream/titles/release-notes/topics/tech-preview.adoc @@ -0,0 +1,62 @@ +[[tech-preview]] += Technology preview + + +== Technology Preview + +include::../snippets/technology-preview.adoc[] + +// The following are Technology Preview features: + +// * Starting with {PlatformNameShort} 2.4, the Platform Resource Operator can be used to create the following resources in {ControllerName} by applying YAML to your OpenShift cluster: +// ** Inventories +// ** Projects +// ** Instance Groups +// ** Credentials +// ** Schedules +// ** Workflow Job Templates +// ** Launch Workflows + +// You can now configure the Controller Access Token for each resource with the `connection_secret` parameter, rather than the `tower_auth_secret` parameter. This change is compatible with earlier versions, but the `tower_auth_secret` parameter is now deprecated and will be removed in a future release. + + +=== Availability of {AAPchatbot} + +The {AAPchatbot} is now available on {PlatformNameShort} 2.5 on {OCP} as a Technology Preview release. It is an intuitive chat interface embedded within {PlatformNameShort} that uses generative artificial intelligence (AI) to answer questions about {PlatformNameShort}. + +The {AAPchatbot} interacts with users through natural language prompts in English and uses large language models (LLMs) to generate quick, accurate, and personalized responses. These responses help {PlatformNameShort} users work more efficiently, improving productivity and the overall quality of their work. + +To access and use the {AAPchatbot}, you need the following: + +* Installation of {PlatformNameShort} 2.5 on {OCP}. + +* Deployment of an LLM served by Red Hat AI platforms. + +For more information, see link:{BaseURL}/red_hat_ansible_automation_platform/{PlatformVers}/html-single/installing_on_openshift_container_platform/index#deploying-chatbot-operator[Deploying the {AAPchatbot} on {OCPShort}] in the _{TitleOperatorInstallation}_ guide. + +=== {SelfService} + +{SelfService} is released as a Technology Preview, with limited support offered in accordance with Red Hat’s support guidelines.
+ +{SelfServiceShortStart} aims to provide a self-service experience, making automation simpler and more accessible to users of any skill level and role. It also offers accelerated deployment of common automation use cases. + +You can download the self-service Technology Preview from the link:https://access.redhat.com/downloads/content/480/ver=2.4/rhel---9/2.4/x86_64/product-software[{PlatformNameShort}] download page on the Red Hat Customer Portal. + +For more information, see _link:{LinkSelfServiceInstall}_. + +[IMPORTANT] +==== +{SelfServiceShortStart} is a Technology Preview feature only. +==== + +[role="_additional-resources"] +.Additional resources + +* For the most recent list of Technology Preview features, see link:https://access.redhat.com/articles/ansible-automation-platform-preview-features[{PlatformNameShort} - Preview Features]. + +* For information about execution node enhancements on OpenShift deployments, see link:https://docs.ansible.com/automation-controller/latest/html/administration/instances.html[Managing Capacity With Instances]. + + +=== Managing AI infrastructure using {PlatformName} + +A practical approach to managing and automating AI infrastructure with {PlatformName} is now available. For more information, see the link:https://access.redhat.com/articles/7117333[AI + Ansible Solution Guides]. \ No newline at end of file diff --git a/downstream/titles/security-guide/docinfo.xml b/downstream/titles/security-guide/docinfo.xml index 9f1ab257c2..fb847664c8 100644 --- a/downstream/titles/security-guide/docinfo.xml +++ b/downstream/titles/security-guide/docinfo.xml @@ -1,4 +1,4 @@ -Red Hat Ansible security automation guide +Implementing security automation Red Hat Ansible Automation Platform 2.5 Identify and manage security events using Ansible diff --git a/downstream/titles/security-guide/master.adoc b/downstream/titles/security-guide/master.adoc index 9187edaa44..2c0bf13757 100644 --- a/downstream/titles/security-guide/master.adoc +++ b/downstream/titles/security-guide/master.adoc @@ -7,7 +7,7 @@ include::attributes/attributes.adoc[] // Book Title -= Red Hat Ansible security automation guide += Implementing security automation include::{Boilerplate}[] diff --git a/downstream/titles/self-service-install/aap-common b/downstream/titles/self-service-install/aap-common new file mode 120000 index 0000000000..472eeb4dac --- /dev/null +++ b/downstream/titles/self-service-install/aap-common @@ -0,0 +1 @@ +../../aap-common \ No newline at end of file diff --git a/downstream/titles/self-service-install/attributes b/downstream/titles/self-service-install/attributes new file mode 120000 index 0000000000..a5caaa73a5 --- /dev/null +++ b/downstream/titles/self-service-install/attributes @@ -0,0 +1 @@ +../../attributes \ No newline at end of file diff --git a/downstream/titles/self-service-install/devtools b/downstream/titles/self-service-install/devtools new file mode 120000 index 0000000000..dc79f7e1fa --- /dev/null +++ b/downstream/titles/self-service-install/devtools @@ -0,0 +1 @@ +../../assemblies/devtools \ No newline at end of file diff --git a/downstream/titles/self-service-install/docinfo.xml b/downstream/titles/self-service-install/docinfo.xml new file mode 100644 index 0000000000..6eae8aa3ed --- /dev/null +++ b/downstream/titles/self-service-install/docinfo.xml @@ -0,0 +1,11 @@ +Installing Ansible Automation Platform self-service technology preview +Red Hat Ansible Automation Platform +2.5 +Install and configure Ansible Automation Platform self-service technology
preview + + This guide describes how to install and configure Ansible Automation Platform self-service technology preview so that users can run automation. + + + Red Hat Customer Content Services + + diff --git a/downstream/titles/self-service-install/images b/downstream/titles/self-service-install/images new file mode 120000 index 0000000000..5fa6987088 --- /dev/null +++ b/downstream/titles/self-service-install/images @@ -0,0 +1 @@ +../../images \ No newline at end of file diff --git a/downstream/titles/self-service-install/master.adoc b/downstream/titles/self-service-install/master.adoc new file mode 100644 index 0000000000..4e0de6ce55 --- /dev/null +++ b/downstream/titles/self-service-install/master.adoc @@ -0,0 +1,31 @@ +:imagesdir: images +:numbered: +:toclevels: 4 +:experimental: +:context: aap-self-service-install + +include::attributes/attributes.adoc[] + +// Book Title += Installing Ansible Automation Platform self-service technology preview + +Thank you for your interest in {PlatformName}. {PlatformNameShort} is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. + +This guide describes how to install {SelfService} and connect it with an instance of {PlatformNameShort}. + +include::{Boilerplate}[] + +[IMPORTANT] +==== +{SelfService} is a Technology Preview feature only. +include::snippets/technology-preview.adoc[] +==== + +include::devtools/assembly-self-service-about.adoc[leveloffset=+1] +include::devtools/assembly-self-service-installation-overview.adoc[leveloffset=+1] +include::devtools/assembly-self-service-preinstall-config.adoc[leveloffset=+1] +include::devtools/assembly-self-service-helm-install.adoc[leveloffset=+1] +include::devtools/assembly-self-service-view-deployment.adoc[leveloffset=+1] +include::devtools/assembly-self-service-accessing-deployment.adoc[leveloffset=+1] +include::devtools/assembly-self-service-telemetry-capture.adoc[leveloffset=+1] + diff --git a/downstream/titles/self-service-install/snippets b/downstream/titles/self-service-install/snippets new file mode 120000 index 0000000000..7bf6da9a51 --- /dev/null +++ b/downstream/titles/self-service-install/snippets @@ -0,0 +1 @@ +../../snippets \ No newline at end of file diff --git a/downstream/titles/self-service-using/aap-common b/downstream/titles/self-service-using/aap-common new file mode 120000 index 0000000000..472eeb4dac --- /dev/null +++ b/downstream/titles/self-service-using/aap-common @@ -0,0 +1 @@ +../../aap-common \ No newline at end of file diff --git a/downstream/titles/self-service-using/attributes b/downstream/titles/self-service-using/attributes new file mode 120000 index 0000000000..a5caaa73a5 --- /dev/null +++ b/downstream/titles/self-service-using/attributes @@ -0,0 +1 @@ +../../attributes \ No newline at end of file diff --git a/downstream/titles/self-service-using/devtools b/downstream/titles/self-service-using/devtools new file mode 120000 index 0000000000..dc79f7e1fa --- /dev/null +++ b/downstream/titles/self-service-using/devtools @@ -0,0 +1 @@ +../../assemblies/devtools \ No newline at end of file diff --git a/downstream/titles/self-service-using/docinfo.xml b/downstream/titles/self-service-using/docinfo.xml new file mode 100644 index 0000000000..985a3dcc23 --- /dev/null +++ b/downstream/titles/self-service-using/docinfo.xml @@ -0,0 +1,11 @@ +Using Ansible Automation Platform self-service technology preview +Red Hat Ansible Automation Platform +2.5 +Use Ansible Automation Platform 
self-service technology preview + + This guide describes how to use Ansible Automation Platform self-service technology preview to implement role-based access control and run automation. + + + Red Hat Customer Content Services + + diff --git a/downstream/titles/self-service-using/images b/downstream/titles/self-service-using/images new file mode 120000 index 0000000000..5fa6987088 --- /dev/null +++ b/downstream/titles/self-service-using/images @@ -0,0 +1 @@ +../../images \ No newline at end of file diff --git a/downstream/titles/self-service-using/master.adoc b/downstream/titles/self-service-using/master.adoc new file mode 100644 index 0000000000..a68de005d1 --- /dev/null +++ b/downstream/titles/self-service-using/master.adoc @@ -0,0 +1,45 @@ +:imagesdir: images +:numbered: +:toclevels: 4 +:experimental: +:context: aap-self-service-using + +include::attributes/attributes.adoc[] + +// Book Title += Using Ansible Automation Platform self-service technology preview + +Thank you for your interest in {PlatformName}. {PlatformNameShort} is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. + +This guide describes how to add templates to your {SelfService} instance, +and how to launch them to run automation jobs. +It also describes how to configure role-based access control (RBAC) so that you can restrict the teams and users who can run the jobs. + +This document has been updated to include information for the latest release of {PlatformNameShort}. + +include::{Boilerplate}[] + +[IMPORTANT] +==== +{SelfService} is a Technology Preview feature only. + +include::snippets/technology-preview.adoc[] + +==== + +include::devtools/assembly-self-service-using-overview.adoc[leveloffset=+1] + +include::devtools/assembly-self-service-using-prereqs.adoc[leveloffset=+1] + +include::devtools/assembly-self-service-using-repo-setup.adoc[leveloffset=+1] + +include::devtools/assembly-self-service-scm-credentials-private-repos.adoc[leveloffset=+1] + +include::devtools/assembly-self-service-working-templates.adoc[leveloffset=+1] + +include::devtools/assembly-self-service-rbac.adoc[leveloffset=+1] + +include::devtools/assembly-self-service-deregister-templates.adoc[leveloffset=+1] + +include::devtools/assembly-self-service-feedback.adoc[leveloffset=+1] + diff --git a/downstream/titles/self-service-using/snippets b/downstream/titles/self-service-using/snippets new file mode 120000 index 0000000000..7bf6da9a51 --- /dev/null +++ b/downstream/titles/self-service-using/snippets @@ -0,0 +1 @@ +../../snippets \ No newline at end of file diff --git a/downstream/titles/hub/getting-started/aap-common b/downstream/titles/terraform-aap/terraform-aap-getting-started/aap-common similarity index 100% rename from downstream/titles/hub/getting-started/aap-common rename to downstream/titles/terraform-aap/terraform-aap-getting-started/aap-common diff --git a/downstream/titles/analytics/reports/attributes b/downstream/titles/terraform-aap/terraform-aap-getting-started/attributes similarity index 100% rename from downstream/titles/analytics/reports/attributes rename to downstream/titles/terraform-aap/terraform-aap-getting-started/attributes diff --git a/downstream/titles/terraform-aap/terraform-aap-getting-started/docinfo.xml b/downstream/titles/terraform-aap/terraform-aap-getting-started/docinfo.xml new file mode 100644 index 0000000000..4c420f3afc --- /dev/null +++ 
b/downstream/titles/terraform-aap/terraform-aap-getting-started/docinfo.xml @@ -0,0 +1,11 @@ +Getting started with Terraform and Ansible Automation Platform +Red Hat Ansible Automation Platform +2.5 +Integrate Terraform with Ansible Automation Platform + + Learn how to configure Ansible Automation Platform with Terraform Enterprise or HCP Terraform, and migrate from Terraform Community. + + + Red Hat Customer Content Services + + \ No newline at end of file diff --git a/downstream/titles/controller/controller-getting-started/images b/downstream/titles/terraform-aap/terraform-aap-getting-started/images similarity index 100% rename from downstream/titles/controller/controller-getting-started/images rename to downstream/titles/terraform-aap/terraform-aap-getting-started/images diff --git a/downstream/titles/terraform-aap/terraform-aap-getting-started/master.adoc b/downstream/titles/terraform-aap/terraform-aap-getting-started/master.adoc new file mode 100644 index 0000000000..5dbfe94f8f --- /dev/null +++ b/downstream/titles/terraform-aap/terraform-aap-getting-started/master.adoc @@ -0,0 +1,15 @@ +:imagesdir: images +:numbered: +:toclevels: 1 +:experimental: + +include::attributes/attributes.adoc[] + +// Book Title += Getting started with Terraform and Ansible Automation Platform + +include::{Boilerplate}[] +include::terraform-aap/assembly-terraform-introduction.adoc[leveloffset=+1] +include::terraform-aap/assembly-terraform-integrating-from-aap.adoc[leveloffset=+1] +include::terraform-aap/assembly-terraform-migrating-from-community.adoc[leveloffset=+1] +include::terraform-aap/assembly-terraform-integrating-from-terraform.adoc[leveloffset=+1] \ No newline at end of file diff --git a/downstream/titles/terraform-aap/terraform-aap-getting-started/terraform-aap b/downstream/titles/terraform-aap/terraform-aap-getting-started/terraform-aap new file mode 120000 index 0000000000..d20247e33d --- /dev/null +++ b/downstream/titles/terraform-aap/terraform-aap-getting-started/terraform-aap @@ -0,0 +1 @@ +../../../assemblies/terraform-aap \ No newline at end of file diff --git a/downstream/titles/topologies/aap-common b/downstream/titles/topologies/aap-common new file mode 120000 index 0000000000..472eeb4dac --- /dev/null +++ b/downstream/titles/topologies/aap-common @@ -0,0 +1 @@ +../../aap-common \ No newline at end of file diff --git a/downstream/titles/topologies/attributes b/downstream/titles/topologies/attributes new file mode 120000 index 0000000000..a5caaa73a5 --- /dev/null +++ b/downstream/titles/topologies/attributes @@ -0,0 +1 @@ +../../attributes \ No newline at end of file diff --git a/downstream/titles/topologies/docinfo.xml b/downstream/titles/topologies/docinfo.xml new file mode 100644 index 0000000000..e29d8807e5 --- /dev/null +++ b/downstream/titles/topologies/docinfo.xml @@ -0,0 +1,13 @@ +Tested deployment models +Red Hat Ansible Automation Platform +2.5 +Plan your deployment of Ansible Automation Platform + + +This guide provides the Red Hat tested and supported topologies for Red Hat Ansible Automation Platform. 
+ + + + Red Hat Customer Content Services + + diff --git a/downstream/titles/topologies/images b/downstream/titles/topologies/images new file mode 120000 index 0000000000..5fa6987088 --- /dev/null +++ b/downstream/titles/topologies/images @@ -0,0 +1 @@ +../../images \ No newline at end of file diff --git a/downstream/titles/topologies/master.adoc b/downstream/titles/topologies/master.adoc new file mode 100644 index 0000000000..9b3ab3721c --- /dev/null +++ b/downstream/titles/topologies/master.adoc @@ -0,0 +1,28 @@ +:imagesdir: images +:toclevels: 4 +:context: topologies +include::attributes/attributes.adoc[] + +// Book Title + += Tested deployment models + +include::{Boilerplate}[] + +include::topologies/assembly-overview-tested-deployment-models.adoc[leveloffset=+1] + +//RPM topologies +include::topologies/assembly-rpm-topologies.adoc[leveloffset=+1] + +//Container topologies +include::topologies/assembly-container-topologies.adoc[leveloffset=+1] + +//Operator topologies +include::topologies/assembly-ocp-topologies.adoc[leveloffset=+1] + +//Automation mesh nodes +include::topologies/topologies/ref-mesh-nodes.adoc[leveloffset=+1] + +//Additional resources appendix +[appendix] +include::topologies/assembly-appendix-topology-resources.adoc[leveloffset=+1] diff --git a/downstream/titles/topologies/topologies b/downstream/titles/topologies/topologies new file mode 120000 index 0000000000..760101fd3c --- /dev/null +++ b/downstream/titles/topologies/topologies @@ -0,0 +1 @@ +../../assemblies/topologies \ No newline at end of file diff --git a/downstream/titles/troubleshooting-aap/master.adoc b/downstream/titles/troubleshooting-aap/master.adoc index 8f717f7564..3d793053df 100644 --- a/downstream/titles/troubleshooting-aap/master.adoc +++ b/downstream/titles/troubleshooting-aap/master.adoc @@ -13,12 +13,25 @@ Use the Troubleshooting {PlatformNameShort} guide to troubleshoot your {Platform include::{Boilerplate}[] include::troubleshooting-aap/assembly-diagnosing-the-problem.adoc[leveloffset=+1] + include::troubleshooting-aap/assembly-troubleshoot-controller.adoc[leveloffset=+1] + include::troubleshooting-aap/assembly-troubleshoot-backup-recovery.adoc[leveloffset=+1] + include::troubleshooting-aap/assembly-troubleshoot-execution-environments.adoc[leveloffset=+1] + include::troubleshooting-aap/assembly-troubleshoot-installation.adoc[leveloffset=+1] + include::troubleshooting-aap/assembly-troubleshoot-jobs.adoc[leveloffset=+1] -include::troubleshooting-aap/assembly-troubleshoot-login.adoc[leveloffset=+1] + +// Michelle - commenting out as it refers to controller UI which should no longer be accessed +//include::troubleshooting-aap/assembly-troubleshoot-login.adoc[leveloffset=+1] + include::troubleshooting-aap/assembly-troubleshoot-networking.adoc[leveloffset=+1] + include::troubleshooting-aap/assembly-troubleshoot-playbooks.adoc[leveloffset=+1] -include::troubleshooting-aap/assembly-troubleshoot-subscriptions.adoc[leveloffset=+1] \ No newline at end of file + +include::troubleshooting-aap/assembly-troubleshoot-upgrade.adoc[leveloffset=+1] + +// Michelle - commenting out for now as this content doesn't appear to exist anymore in a published doc +//include::troubleshooting-aap/assembly-troubleshoot-subscriptions.adoc[leveloffset=+1] diff --git a/downstream/titles/updating-aap/aap-common b/downstream/titles/updating-aap/aap-common new file mode 120000 index 0000000000..472eeb4dac --- /dev/null +++ b/downstream/titles/updating-aap/aap-common @@ -0,0 +1 @@ +../../aap-common \ No newline at end of file diff 
--git a/downstream/titles/updating-aap/attributes b/downstream/titles/updating-aap/attributes new file mode 120000 index 0000000000..a5caaa73a5 --- /dev/null +++ b/downstream/titles/updating-aap/attributes @@ -0,0 +1 @@ +../../attributes \ No newline at end of file diff --git a/downstream/titles/updating-aap/docinfo.xml b/downstream/titles/updating-aap/docinfo.xml new file mode 100644 index 0000000000..4b4908c021 --- /dev/null +++ b/downstream/titles/updating-aap/docinfo.xml @@ -0,0 +1,13 @@ +Updating from Ansible Automation Platform 2.5 to 2.5.x +Red Hat Ansible Automation Platform +2.5 +Perform a patch update from Ansible Automation Platform 2.5 to 2.5.x + + +This guide shows how to perform a patch update from Ansible Automation Platform 2.5 to 2.5.x for each installation type. + + + + Red Hat Customer Content Services + + diff --git a/downstream/titles/updating-aap/images b/downstream/titles/updating-aap/images new file mode 120000 index 0000000000..5fa6987088 --- /dev/null +++ b/downstream/titles/updating-aap/images @@ -0,0 +1 @@ +../../images \ No newline at end of file diff --git a/downstream/titles/updating-aap/master.adoc b/downstream/titles/updating-aap/master.adoc new file mode 100644 index 0000000000..27bb46df48 --- /dev/null +++ b/downstream/titles/updating-aap/master.adoc @@ -0,0 +1,24 @@ +:imagesdir: images +:toclevels: 4 +:experimental: + +:context: updating-aap + +include::attributes/attributes.adoc[] + +// Book Title + += Updating from Ansible Automation Platform 2.5 to 2.5.x + +include::{Boilerplate}[] + +You can perform patch updates to your {PlatformNameShort} installation as updates are released. This only applies to updates from 2.5 to 2.5.x. + +[NOTE] +==== +Upgrades from 2.4 to 2.5 are unsupported at this time. For more information, see this link:https://access.redhat.com/solutions/7089196[Knowledgebase article]. 
+==== + +include::platform/assembly-update-rpm.adoc[leveloffset=+1] +include::platform/assembly-update-container.adoc[leveloffset=+1] +// [hherbly]: moved to Installing on OCP guide per AAP-34122 include::platform/assembly-update-ocp.adoc[leveloffset=+1] \ No newline at end of file diff --git a/downstream/titles/updating-aap/platform b/downstream/titles/updating-aap/platform new file mode 120000 index 0000000000..06b49528ee --- /dev/null +++ b/downstream/titles/updating-aap/platform @@ -0,0 +1 @@ +../../assemblies/platform \ No newline at end of file diff --git a/downstream/titles/upgrade/docinfo.xml b/downstream/titles/upgrade/docinfo.xml index 01d64bfa69..0def8006be 100644 --- a/downstream/titles/upgrade/docinfo.xml +++ b/downstream/titles/upgrade/docinfo.xml @@ -1,4 +1,4 @@ -Red Hat Ansible Automation Platform upgrade and migration guide +RPM upgrade and migration Red Hat Ansible Automation Platform 2.5 Upgrade and migrate legacy deployments of Ansible Automation Platform diff --git a/downstream/titles/upgrade/master.adoc b/downstream/titles/upgrade/master.adoc index 99be853bab..debb660b71 100644 --- a/downstream/titles/upgrade/master.adoc +++ b/downstream/titles/upgrade/master.adoc @@ -6,13 +6,14 @@ include::attributes/attributes.adoc[] // Book Title -= Red Hat Ansible Automation Platform upgrade and migration guide += RPM upgrade and migration include::{Boilerplate}[] include::platform/assembly-aap-upgrades.adoc[leveloffset=+1] include::platform/assembly-aap-upgrading-platform.adoc[leveloffset=+1] -include::platform/assembly-migrate-legacy-venv-to-ee.adoc[leveloffset=+1] -include::platform/assembly-migrate-isolated-execution-nodes.adoc[leveloffset=+1] -include::platform/assembly-content-migration.adoc[leveloffset=+1] -include::platform/assembly-converting-playbooks-for-aap2.adoc[leveloffset=+1] +include::platform/assembly-aap-post-upgrade.adoc[leveloffset=+1] +// include::platform/assembly-migrate-legacy-venv-to-ee.adoc[leveloffset=+1] +// include::platform/assembly-migrate-isolated-execution-nodes.adoc[leveloffset=+1] +// include::platform/assembly-content-migration.adoc[leveloffset=+1] +// include::platform/assembly-converting-playbooks-for-aap2.adoc[leveloffset=+1]