
Commit af4a347

feat(filters): Authorizer
Signed-off-by: Keith Wall <[email protected]>
1 parent 57b308e commit af4a347


proposals/010-authorizer.md

Lines changed: 212 additions & 0 deletions
@@ -0,0 +1,212 @@
<!-- This template is provided as an example with sections you may wish to comment on with respect to your proposal. Add or remove sections as required to best articulate the proposal. -->

# Authorizer Filter

The Authorizer filter adds authorization checks to a Kafka system, with enforcement performed by the proxy.

## Current situation
It is possible for a filter to implement its own business rules, enforcing authorization in some custom manner. However,
that approach does not have good separation of concerns. Authorization checks are an orthogonal concern, and security
best practice is to separate their enforcement from business logic.

## Motivation

We are identifying use cases where making authorization decisions at the proxy is desirable. Examples include restricting a virtual cluster to a subset of the resources (say, topics) of the cluster.
## Proposal

The Authorizer filter layers authorization checks into a Kafka system, with those checks enforced by the filter. These checks are in addition to any that may be imposed by the Kafka cluster itself. This means that for an action to be allowed, both the proxy’s authorizer and the Kafka broker’s authorizer will need to reach an ALLOW decision.

The Authorizer filter allows for authorization checks to be made in the following form:

`Principal P is [Allowed/Denied] Operation O On Resource R`.

where:

* Principal is the authenticated user.
* Operation is an action such as, but not limited to, Read, Write, Create, Delete.
* Resource identifies one or more resources, such as, but not limited to, Topic, Group, Cluster, TransactionalId.

Unlike the Apache Kafka authorizer system, the `from host` predicate is omitted. This is done to adhere to the modern security principle that there are no privileged network locations.
### Request authorization

The Authorizer filter will intercept all request messages that perform an action on a resource, and all response messages that list resources.

On receipt of a request message from the downstream, the filter will make an asynchronous call to the authorizer for the resource(s) involved in the request. If the authorization result for all resources is `ALLOWED`, the filter will forward the request to the broker.
If the authorization result is `DENIED` for any resource in the request, the filter will produce a short-circuit error response denying the request, using the appropriate authorization-failed error code. The Authorizer filter must not forward requests that fail authorization.
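
To make the request-side flow concrete, a minimal sketch is shown below. It assumes the `Authorizer`, `Action` and `AuthorizationResult` types proposed in the APIs section; the `actionsFor`, `forward` and `shortCircuitAuthorizationFailure` helpers and the `requestContext` field are hypothetical stand-ins for the surrounding filter plumbing, which this proposal does not define.

```java
// Sketch only: authorize every resource touched by a request before forwarding it upstream.
// Only the Authorizer API proposed below is part of this proposal; the other names are illustrative.
CompletionStage<Void> onRequest(Object request) {
    List<Action> actions = actionsFor(request); // one Action per resource the request acts on
    return authorizer.authorize(requestContext, actions)
            .thenAccept(results -> {
                if (results.stream().allMatch(r -> r == AuthorizationResult.ALLOWED)) {
                    forward(request); // every resource allowed: pass the request on to the broker
                } else {
                    // at least one resource denied: answer with an authorization-failed error
                    // response and do not forward the request
                    shortCircuitAuthorizationFailure(request);
                }
            });
}
```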
### Response resource filtering

On receipt of a response message from the upstream, the Authorizer filter will filter the resources so that the downstream receives only the resources that it is authorized to `DESCRIBE`.
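
As an illustration, a sketch of that filtering step for a response that lists topics by name follows. It reuses the proposed `Authorizer` types; the `DESCRIBE` constant is assumed to be among the elided `CoreAclOperation` values, `topicLiteral` is a hypothetical helper that builds a literal Topic resource pattern, and the surrounding plumbing is not defined by this proposal.

```java
// Sketch only: reduce a list of topic names to those the principal may DESCRIBE.
CompletionStage<List<String>> describableTopics(List<String> topicNames) {
    List<Action> actions = topicNames.stream()
            .map(name -> new Action(CoreAclOperation.DESCRIBE, topicLiteral(name))) // hypothetical helper
            .toList();
    return authorizer.authorize(requestContext, actions)
            .thenApply(results -> {
                // results are assumed to be positional: results.get(i) answers actions.get(i)
                List<String> allowed = new ArrayList<>();
                for (int i = 0; i < topicNames.size(); i++) {
                    if (results.get(i) == AuthorizationResult.ALLOWED) {
                        allowed.add(topicNames.get(i));
                    }
                }
                return allowed;
            });
}
```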
The Authorizer filter will have a pluggable API that allows different Authorizer implementations to be plugged in. This proposal will deliver a simple implementation of the API that allows authorization rules to be expressed in a separate file. Future work may
deliver alternative implementations that, say, delegate authorization decisions to external systems (such as OPA), or implement other
authorization schemes (such as RBAC).

### Operation/Resource Matrix

For the initial version, the system will be capable of making authorization decisions for topic operations and cluster connections only.
Future versions may support authorization decisions for other Kafka resource types (e.g. consumer group and transactional id).
The Authorizer will be designed to be open for extension so that it may be used to make authorization decisions about other entities (beyond those defined by Apache Kafka).

The table below sets out the authorization checks the filter will implement.

| Operation | Resource Type | Kafka Message |
|-----------|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| READ      | Topic         | Fetch, ShareFetch, ShareGroupFetch, ShareAcknowledge, AlterShareGroupOffsets, DeleteShareGroupOffsets, OffsetCommit, TxnOffsetCommit, OffsetDelete                                   |
| WRITE     | Topic         | Produce, InitProducerId, AddPartitionsToTxn                                                                                                                                          |
| CREATE    | Topic         | CreateTopics                                                                                                                                                                         |
| DELETE    | Topic         | DeleteTopics                                                                                                                                                                         |
| ALTER     | Topic         | AlterConfigs, IncrementalAlterConfigs, CreatePartitions                                                                                                                              |
| DESCRIBE  | Topic         | ListOffsets, OffsetFetch, OffsetForLeaderEpoch, DescribeProducers, ConsumerGroupHeartbeat, ConsumerGroupDescribe, ShareGroupHeartbeat, ShareGroupDescribe, Metadata, DescribeTopicPartitions |
| CONNECT   | Cluster       | SaslAuthenticate                                                                                                                                                                     |

In general, the filter will make access decisions in the same manner as Kafka itself. This means it will apply the same authorization checks that Kafka enforces itself and generate error responses in the same way.
From the client's perspective, it will be impossible to distinguish between the proxy and the Kafka cluster itself.
It will also use the same implied operation semantics as implemented by Kafka itself, such as where `ALTER` implies `DESCRIBE`, as described by
`org.apache.kafka.common.acl.AclOperation`.
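
A sketch of how that implied-operation rule could be applied follows; the `impliedBy` helper is illustrative, and operation constants beyond those listed in the APIs section are assumed to exist among the elided `CoreAclOperation` values.

```java
// Sketch only: a DESCRIBE check is satisfied by any operation that implies DESCRIBE,
// mirroring the semantics documented on org.apache.kafka.common.acl.AclOperation.
static Set<CoreAclOperation> impliedBy(CoreAclOperation requested) {
    if (requested == CoreAclOperation.DESCRIBE) {
        // READ, WRITE, DELETE and ALTER each imply DESCRIBE
        return EnumSet.of(CoreAclOperation.DESCRIBE, CoreAclOperation.READ,
                CoreAclOperation.WRITE, CoreAclOperation.DELETE, CoreAclOperation.ALTER);
    }
    return EnumSet.of(requested);
}
```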
There is one deviation. The filter will implement a `CONNECT` authorization check on the `CLUSTER` early, as the connection is made, once the principal is known. This allows the Authorizer filter to be used to gate access to virtual clusters.
* In the case of SASL, this will be performed on receipt of the SaslAuthenticate response. If the authorization check fails, the authentication will fail with an authorization failure and the connection will be closed.
* In the case of TLS client-auth, this will be performed on receipt of the first request message. If the authorization check fails, a short-circuit response will be sent containing an authorization failure and the connection will be closed. This feature won’t be part of the initial scope.

The filter will support messages that express topic identity using topic ids (i.e. those building on [KIP-516](https://cwiki.apache.org/confluence/display/KAFKA/KIP-516%3A+Topic+Identifiers)). It will resolve the topic id into a topic name before making the authorization check.
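
A minimal sketch of that resolution step follows, assuming a hypothetical name cache that the filter maintains from the Metadata responses it observes; none of these names are part of the proposal.

```java
// Sketch only: map KIP-516 topic ids (org.apache.kafka.common.Uuid) back to topic names
// before building Actions. The cache is an assumption about the filter's internal plumbing.
private final Map<Uuid, String> topicNamesById = new ConcurrentHashMap<>();

Optional<String> topicNameFor(Uuid topicId) {
    // populated as Metadata/DescribeTopicPartitions responses flow through the filter
    return Optional.ofNullable(topicNamesById.get(topicId));
}
```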
### File-based Authorizer implementation

The initial scope will include an Authorizer implementation that is backed by authorization rules expressed in a separate file. This file will associate principals with ACL rules capable of expressing an allow-list of resources.
The initial version will be restricted to expressing allow-lists of topics, but future versions will extend this to allow rules to be expressed about other resource types.
### APIs

#### Authorizer Filter

Filter Configuration:
```yaml
type: AuthorizerFilter
config:
  authorizer: FileBasedAllowListAuthorizer
  authorizerConfig:
    rulesFile: /path/to/allow-list.yaml
```
Java APIs:
```java
// Inspired by org.apache.kafka.server.authorizer.Authorizer, except that it is asynchronous in nature.
interface Authorizer {
    CompletionStage<List<AuthorizationResult>> authorize(AuthorizableRequestContext context, List<Action> actions);
}

// Result of an authorization check (inspired by org.apache.kafka.server.authorizer.AuthorizationResult).
enum AuthorizationResult { ALLOWED, DENIED }
```
```java
// Inspired by org.apache.kafka.server.authorizer.AuthorizableRequestContext
interface AuthorizableRequestContext {
    String principal();
    // scope for methods such as requestType(), requestVersion() etc. to be added in future.
}
```
```java
// Inspired by org.apache.kafka.server.authorizer.Action
record Action(
        AclOperation aclOperation,
        ResourcePattern resourcePattern) {
}
```
```java
// The following types are inspired by the Kafka classes of the same name and have the same role. However, interfaces are used
// rather than enums to allow for extensibility (using the pattern suggested by https://www.baeldung.com/java-extending-enums)
interface AclOperation {
    String operationName();
}

enum CoreAclOperation implements AclOperation {
    CREATE("Create"),
    DELETE("Delete"),
    READ("Read")/* ,... */;

    // constructor and operationName() implementation elided
}

interface ResourceType {
    String resourceType();
}

enum CoreResourceType implements ResourceType {
    TOPIC("Topic"),
    CLUSTER("Cluster");

    // constructor and resourceType() implementation elided
}

interface PatternType {
    boolean matches(String pattern, String resourceName);
}

enum CorePatternType implements PatternType {
    LITERAL() {
        @Override
        public boolean matches(String pattern, String resourceName) {
            return pattern.equals(resourceName);
        }
    },
    MATCH() { /* ... */ },
    PREFIXED { /* ... */ }
}

record ResourceNamePattern(PatternType patternType, String pattern) {
    boolean matches(String resourceName) {
        return patternType.matches(pattern, resourceName);
    }
}
```
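
To show how these pieces fit together, here is a hedged usage sketch. The shape of `ResourcePattern` is not defined above, so its construction below is an assumption (a resource type paired with a name pattern); everything else uses the types proposed in this section.

```java
// Sketch only: ask an Authorizer whether User:bob may READ the topic named "foo".
AuthorizableRequestContext ctx = () -> "User:bob"; // principal() is the only method defined so far
Action readFoo = new Action(
        CoreAclOperation.READ,
        new ResourcePattern(CoreResourceType.TOPIC,
                new ResourceNamePattern(CorePatternType.LITERAL, "foo"))); // ResourcePattern shape assumed
authorizer.authorize(ctx, List.of(readFoo))
        .thenAccept(results -> {
            // one result per Action, assumed to be returned in the same order as the actions list
            boolean allowed = results.get(0) == AuthorizationResult.ALLOWED;
            // forward the request or short-circuit based on the outcome
        });
```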
#### Rules File

The rules file expresses a mapping between principals (user type only with exact match) and an allow-list of resources.
For the initial scope, only resource rules of type TOPIC are supported. In order to allow for future extension, the user’s configuration must set the version property to 1. This will allow future versions of the filter to introduce support for other resource types without changing the meaning of existing configurations.

The `CLUSTER` `CONNECT` authorization check will be implemented implicitly: the check will return `ALLOW` if there is at least one resource rule for the principal. If there are no resource rules for the principal, the authorizer will return `DENY`.
```yaml
version: 1 # Mandatory; must be 1. Version 1 is defined as supporting resourceType TOPIC only.
definitions:
  - principals: [User:bob, User:grace] # Only User: prefixed principals will be supported.
    resourceRules:
      - resourceType: TOPIC # Only the TOPIC resourceType is permitted
        operations: [READ]
        patternType: LITERAL
        resourceName: foo
      - resourceType: TOPIC
        operations: [ALL]
        patternType: PREFIXED
        resourceName: bar
      - resourceType: TOPIC
        operations: [ALL]
        patternType: MATCH
        resourceName: baz*
```
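
A minimal sketch of how the file-based implementation could evaluate these rules, including the implicit `CLUSTER` `CONNECT` check described above, follows. The `ResourceRule` type and its `covers` method are hypothetical stand-ins for the parsed rules file; only the `Authorizer` API itself is part of the proposal, and the `CONNECT` constant is assumed to be among the elided `CoreAclOperation` values.

```java
// Sketch only: an allow-list authorizer backed by the parsed rules file.
class FileBasedAllowListAuthorizer implements Authorizer {

    // principal (e.g. "User:bob") -> the resource rules declared for it in the rules file (assumed structure)
    private final Map<String, List<ResourceRule>> rulesByPrincipal;

    FileBasedAllowListAuthorizer(Map<String, List<ResourceRule>> rulesByPrincipal) {
        this.rulesByPrincipal = rulesByPrincipal;
    }

    @Override
    public CompletionStage<List<AuthorizationResult>> authorize(AuthorizableRequestContext context, List<Action> actions) {
        List<ResourceRule> rules = rulesByPrincipal.getOrDefault(context.principal(), List.of());
        List<AuthorizationResult> results = actions.stream()
                .map(action -> allows(rules, action) ? AuthorizationResult.ALLOWED : AuthorizationResult.DENIED)
                .toList();
        return CompletableFuture.completedFuture(results);
    }

    private boolean allows(List<ResourceRule> rules, Action action) {
        if (action.aclOperation() == CoreAclOperation.CONNECT) {
            // implicit CLUSTER CONNECT check: allow if the principal has at least one resource rule
            return !rules.isEmpty();
        }
        // otherwise at least one rule must cover the operation and match the resource name
        return rules.stream().anyMatch(rule -> rule.covers(action));
    }
}
```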
## Affected/not affected projects

The kroxylicious repo.
## Compatibility

No issues - this proposal introduces a new filter.
## Rejected alternatives

### Reuse of the Kafka ACL interfaces/enumerations

The Kafka Client library includes interfaces and enumerations such as `org.apache.kafka.server.authorizer.Action`
and `org.apache.kafka.common.acl.AclOperation`. It would be technically possible to base the Authorizer's interfaces
on these types. This would have the advantage that it would help ensure that the ACL model of the proxy followed
that of Kafka, but it would also have meant restricting the Authorizer to making access control decisions for the same
entities as Kafka does. We want to leave open the possibility to make access decisions about other resource types, beyond
those considered by Kafka today (such as record-level ACLs).
