Commit bb21c8e: Merge pull request #122 from matzew/add_integration_docs_for_source

Adding Integration Source and Sink preview doc

2 parents: f6624b3 + 91a35d5

11 files changed: +468 lines, -0 lines
modules/ROOT/nav.adoc

Lines changed: 12 additions & 0 deletions

*** xref:serverless:eventing/kafka-scaling-setup.adoc[Setup Autoscaling For Eventing Kafka Components]
*** xref:serverless:eventing/backstage-setup.adoc[Setup Backstage for Eventing]
*** xref:serverless:eventing/backstage-usage.adoc[Knative Event Mesh Backstage Plugin]
*** IntegrationSource
**** xref:serverless:eventing/integrationsource/overview.adoc[Overview]
**** xref:serverless:eventing/integrationsource/aws_ddbstreams.adoc[AWS DynamoDB Streams]
**** xref:serverless:eventing/integrationsource/aws_s3.adoc[AWS S3 Source]
**** xref:serverless:eventing/integrationsource/aws_sqs.adoc[AWS SQS Source]
**** xref:serverless:eventing/integrationsource/timer.adoc[Timer Source]
*** IntegrationSink
**** xref:serverless:eventing/integrationsink/overview.adoc[Overview]
**** xref:serverless:eventing/integrationsink/aws_s3.adoc[AWS S3 Sink]
**** xref:serverless:eventing/integrationsink/aws_sns.adoc[AWS SNS Sink]
**** xref:serverless:eventing/integrationsink/aws_sqs.adoc[AWS SQS Sink]
**** xref:serverless:eventing/integrationsink/logger.adoc[Logger Sink]
** Serving
*** xref:serverless:serving/serving-with-ingress-sharding.adoc[Use Serving with OpenShift ingress sharding]
*** xref:serverless:serving/scaleability-and-performance-of-serving.adoc[Scalability and performance of {serverlessproductname} Serving]
Lines changed: 52 additions & 0 deletions
= AWS S3 Sink
:compat-mode!:
// Metadata:
:description: AWS S3 Sink in {serverlessproductname}

This page describes how to use the AWS S3 service with the `IntegrationSink` API for Eventing in {serverlessproductname}.

[IMPORTANT]
====
{serverlessproductname} `IntegrationSink` is a Developer Preview feature only.

Developer Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Developer Preview features, see https://access.redhat.com/support/offerings/devpreview/.
====

== Amazon credentials

To connect to AWS, the `IntegrationSink` uses a Kubernetes `Secret` present in the namespace of the resource. The `Secret` can be created as follows:

[source,terminal]
----
$ oc -n <namespace> create secret generic my-secret --from-literal=aws.accessKey=<accessKey> --from-literal=aws.secretKey=<secretKey>
----
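Before referencing the `Secret` from a sink, you can confirm that it exists. This is a minimal check; the exact output columns may vary with your cluster version:

[source,terminal]
----
$ oc -n <namespace> get secret my-secret
----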

== AWS S3 Sink Example

The following `IntegrationSink` sends data to an Amazon S3 bucket:

[source,yaml]
----
apiVersion: sinks.knative.dev/v1alpha1
kind: IntegrationSink
metadata:
  name: integration-sink-aws-s3
  namespace: knative-samples
spec:
  aws:
    s3:
      arn: "arn:aws:s3:::my-bucket"
      region: "eu-north-1"
    auth:
      secret:
        ref:
          name: "my-secret"
----

Inside the `aws.s3` object, we define the name of the bucket (or its _arn_) and its region. The credentials for the AWS service are referenced from the `my-secret` Kubernetes `Secret`.
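Because the `IntegrationSink` is addressable, you can send it a test CloudEvent directly over HTTP. The following is a sketch, assuming the sink above has become ready and publishes its URL in `status.address.url`, as is the convention for addressable Knative resources:

[source,terminal]
----
$ SINK_URL=$(oc -n knative-samples get integrationsink integration-sink-aws-s3 -o jsonpath='{.status.address.url}')
$ curl -v "$SINK_URL" \
    -X POST \
    -H "Ce-Id: test-1" \
    -H "Ce-Specversion: 1.0" \
    -H "Ce-Type: dev.example.test" \
    -H "Ce-Source: manual-test" \
    -H "Content-Type: application/json" \
    -d '{"message": "Hello S3"}'
----

If the event is accepted, an object carrying the event payload should appear in the `my-bucket` bucket.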
For more details, see the Apache Camel Kamelet https://camel.apache.org/camel-kamelets/latest/aws-s3-sink.html[aws-s3-sink].
Lines changed: 52 additions & 0 deletions
= AWS Simple Notification Service Sink
:compat-mode!:
// Metadata:
:description: AWS Simple Notification Service Sink in {serverlessproductname}

This page describes how to use the Amazon Web Services (AWS) Simple Notification Service (SNS) with the `IntegrationSink` API for Eventing in {serverlessproductname}.

[IMPORTANT]
====
{serverlessproductname} `IntegrationSink` is a Developer Preview feature only.

Developer Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Developer Preview features, see https://access.redhat.com/support/offerings/devpreview/.
====

== Amazon credentials

To connect to AWS, the `IntegrationSink` uses a Kubernetes `Secret` present in the namespace of the resource. The `Secret` can be created as follows:

[source,terminal]
----
$ oc -n <namespace> create secret generic my-secret --from-literal=aws.accessKey=<accessKey> --from-literal=aws.secretKey=<secretKey>
----

== AWS SNS Sink Example

The following `IntegrationSink` sends data to AWS SNS:

[source,yaml]
----
apiVersion: sinks.knative.dev/v1alpha1
kind: IntegrationSink
metadata:
  name: integration-sink-aws-sns
  namespace: knative-samples
spec:
  aws:
    sns:
      arn: "my-topic"
      region: "eu-north-1"
    auth:
      secret:
        ref:
          name: "my-secret"
----

Inside the `aws.sns` object, we define the name of the topic (or its _arn_) and its region. The credentials for the AWS service are referenced from the `my-secret` Kubernetes `Secret`.
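To feed the sink from an event mesh, you can subscribe it to a Broker with a `Trigger`. This is a sketch assuming a Broker named `default` exists in the `knative-samples` namespace; any addressable resource, including an `IntegrationSink`, can act as the subscriber:

[source,yaml]
----
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: sns-sink-trigger
  namespace: knative-samples
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: sinks.knative.dev/v1alpha1
      kind: IntegrationSink
      name: integration-sink-aws-sns
----

Because no filter is set, every event the `default` Broker receives is forwarded to the SNS topic.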
For more details, see the Apache Camel Kamelet https://camel.apache.org/camel-kamelets/latest/aws-sns-sink.html[aws-sns-sink].
Lines changed: 52 additions & 0 deletions
= AWS Simple Queue Service Sink
:compat-mode!:
// Metadata:
:description: AWS Simple Queue Service Sink in {serverlessproductname}

This page describes how to use the Amazon Web Services (AWS) Simple Queue Service (SQS) with the `IntegrationSink` API for Eventing in {serverlessproductname}.

[IMPORTANT]
====
{serverlessproductname} `IntegrationSink` is a Developer Preview feature only.

Developer Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Developer Preview features, see https://access.redhat.com/support/offerings/devpreview/.
====

== Amazon credentials

To connect to AWS, the `IntegrationSink` uses a Kubernetes `Secret` present in the namespace of the resource. The `Secret` can be created as follows:

[source,terminal]
----
$ oc -n <namespace> create secret generic my-secret --from-literal=aws.accessKey=<accessKey> --from-literal=aws.secretKey=<secretKey>
----

== AWS SQS Sink Example

The following `IntegrationSink` sends data to AWS SQS:

[source,yaml]
----
apiVersion: sinks.knative.dev/v1alpha1
kind: IntegrationSink
metadata:
  name: integration-sink-aws-sqs
  namespace: knative-samples
spec:
  aws:
    sqs:
      arn: "arn:aws:sqs:::my-queue"
      region: "eu-north-1"
    auth:
      secret:
        ref:
          name: "my-secret"
----

Inside the `aws.sqs` object, we define the name of the queue (or its _arn_) and its region. The credentials for the AWS service are referenced from the `my-secret` Kubernetes `Secret`.
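Besides direct delivery, the sink can also terminate a Channel-based flow. The following sketch assumes an `InMemoryChannel` named `default` exists in the namespace; the `Subscription` routes each message on the channel to the SQS queue:

[source,yaml]
----
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: sqs-sink-subscription
  namespace: knative-samples
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: default
  subscriber:
    ref:
      apiVersion: sinks.knative.dev/v1alpha1
      kind: IntegrationSink
      name: integration-sink-aws-sqs
----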
For more details, see the Apache Camel Kamelet https://camel.apache.org/camel-kamelets/latest/aws-sqs-sink.html[aws-sqs-sink].
Lines changed: 38 additions & 0 deletions
= Log Sink
:compat-mode!:
// Metadata:
:description: Log Sink in {serverlessproductname}

This page describes how to use the _Log Sink Kamelet_ with the `IntegrationSink` API for Eventing in {serverlessproductname}. This sink is useful for debugging purposes.

[IMPORTANT]
====
{serverlessproductname} `IntegrationSink` is a Developer Preview feature only.

Developer Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Developer Preview features, see https://access.redhat.com/support/offerings/devpreview/.
====

== Log Sink Example

The following `IntegrationSink` logs all data that it receives:

[source,yaml]
----
apiVersion: sinks.knative.dev/v1alpha1
kind: IntegrationSink
metadata:
  name: integration-log-sink
  namespace: knative-samples
spec:
  log:
    showHeaders: true
    level: INFO
----

Inside the `log` object, we define the logging `level` and configure the sink to also show the (HTTP) headers of the events it receives.
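To inspect the logged events, find the pod that backs the sink and tail its logs. The exact pod name is generated by the controller, so this sketch first lists the pods in the namespace and uses a placeholder for the pod name:

[source,terminal]
----
$ oc -n knative-samples get pods
$ oc -n knative-samples logs -f <log-sink-pod-name>
----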
For more details, see the Apache Camel Kamelet https://camel.apache.org/camel-kamelets/latest/log-sink.html[log-sink].
Lines changed: 24 additions & 0 deletions
= Knative Integration Sink
:compat-mode!:
// Metadata:
:description: Knative Integration Sink in {serverlessproductname}

This page describes how to use the new `IntegrationSink` API for Eventing in {serverlessproductname}. The `IntegrationSink` is a Knative Eventing custom resource that supports selected https://camel.apache.org/camel-k/latest/kamelets/kamelets.html[Kamelets] from the https://camel.apache.org/[Apache Camel] project. Kamelets allow users to connect to third-party systems for improved connectivity; they can act as "sources" or as "sinks". The `IntegrationSink` therefore allows sending data, in the form of CloudEvents, out of Knative Eventing to external systems.

[IMPORTANT]
====
{serverlessproductname} `IntegrationSink` is a Developer Preview feature only.

Developer Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Developer Preview features, see https://access.redhat.com/support/offerings/devpreview/.
====

== Supported Kamelet sinks

* xref:./aws_s3.adoc[AWS S3]
* xref:./aws_sns.adoc[AWS SNS]
* xref:./aws_sqs.adoc[AWS SQS]
* xref:./logger.adoc[Generic logger]
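All of these sinks are addressable: once a sink becomes ready, its delivery URL is published on the resource status, following the usual Knative convention. As a minimal sketch, assuming an `IntegrationSink` named `integration-log-sink` in the `knative-samples` namespace and that the `integrationsink` resource name is registered by the CRD, you can read the URL and use it as an event target:

[source,terminal]
----
$ oc -n knative-samples get integrationsink integration-log-sink -o jsonpath='{.status.address.url}'
----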
Lines changed: 57 additions & 0 deletions
= AWS DynamoDB Streams
:compat-mode!:
// Metadata:
:description: AWS DynamoDB Streams in {serverlessproductname}

This page describes how to use AWS DynamoDB Streams with the `IntegrationSource` API for Eventing in {serverlessproductname}.

[IMPORTANT]
====
{serverlessproductname} `IntegrationSource` is a Developer Preview feature only.

Developer Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Developer Preview features, see https://access.redhat.com/support/offerings/devpreview/.
====

== Amazon credentials

To connect to AWS, the `IntegrationSource` uses a Kubernetes `Secret` present in the namespace of the resource. The `Secret` can be created as follows:

[source,terminal]
----
$ oc -n <namespace> create secret generic my-secret --from-literal=aws.accessKey=<accessKey> --from-literal=aws.secretKey=<secretKey>
----

== AWS DynamoDB Streams Example

The following `IntegrationSource` receives events from Amazon DynamoDB Streams:

[source,yaml]
----
apiVersion: sources.knative.dev/v1alpha1
kind: IntegrationSource
metadata:
  name: integration-source-aws-ddb
  namespace: knative-samples
spec:
  aws:
    ddbStreams:
      table: "my-table"
      region: "eu-north-1"
    auth:
      secret:
        ref:
          name: "my-secret"
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
----

Inside the `aws.ddbStreams` object, we define the name of the table and its region. The credentials for the AWS service are referenced from the `my-secret` Kubernetes `Secret`, and the received events are delivered to the `default` Broker referenced in the `sink` object.
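To observe the events once they arrive at the Broker, a common pattern is to subscribe a small display service with a `Trigger`. This sketch assumes an `event-display` Knative Service (such as one built from the upstream `event_display` sample image) is deployed in the same namespace:

[source,yaml]
----
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: ddb-events-trigger
  namespace: knative-samples
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
----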
For more details, see the Apache Camel Kamelet https://camel.apache.org/camel-kamelets/latest/aws-ddb-streams-source.html[aws-ddb-streams-source].
Lines changed: 57 additions & 0 deletions
= AWS S3 Source
:compat-mode!:
// Metadata:
:description: AWS S3 Source in {serverlessproductname}

This page describes how to use the AWS S3 service with the `IntegrationSource` API for Eventing in {serverlessproductname}.

[IMPORTANT]
====
{serverlessproductname} `IntegrationSource` is a Developer Preview feature only.

Developer Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete.
Red Hat does not recommend using them in production.
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Developer Preview features, see https://access.redhat.com/support/offerings/devpreview/.
====

== Amazon credentials

To connect to AWS, the `IntegrationSource` uses a Kubernetes `Secret` present in the namespace of the resource. The `Secret` can be created as follows:

[source,terminal]
----
$ oc -n <namespace> create secret generic my-secret --from-literal=aws.accessKey=<accessKey> --from-literal=aws.secretKey=<secretKey>
----

== AWS S3 Source Example

The following `IntegrationSource` receives data from an Amazon S3 bucket:

[source,yaml]
----
apiVersion: sources.knative.dev/v1alpha1
kind: IntegrationSource
metadata:
  name: integration-source-aws-s3
  namespace: knative-samples
spec:
  aws:
    s3:
      arn: "arn:aws:s3:::my-bucket"
      region: "eu-north-1"
    auth:
      secret:
        ref:
          name: "my-secret"
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
----

Inside the `aws.s3` object, we define the name of the bucket (or its _arn_) and its region. The credentials for the AWS service are referenced from the `my-secret` Kubernetes `Secret`, and the received events are delivered to the `default` Broker referenced in the `sink` object.
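Like other Knative sources, the `IntegrationSource` reports a `Ready` condition on its status once it is able to deliver events. As a quick check, assuming the resource from the example above and that the `integrationsource` resource name is registered by the CRD:

[source,terminal]
----
$ oc -n knative-samples get integrationsource integration-source-aws-s3
----

The `READY` column should report `True` once the source is connected to the bucket and the Broker.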
For more details, see the Apache Camel Kamelet https://camel.apache.org/camel-kamelets/latest/aws-s3-source.html[aws-s3-source].
