Upgrade to AWS Java SDK v2 #6165
Conversation
Signed-off-by: jorgee <[email protected]>
✅ Deploy Preview for nextflow-docs-staging ready!
It is ready for review.
Looks awesome. Made a few minor comments
plugins/nf-amazon/src/main/nextflow/cloud/aws/AwsClientFactory.groovy
```java
CompletedMultipartUpload completedUpload = CompletedMultipartUpload.builder()
.parts(completedParts)
.build();

CompleteMultipartUploadRequest completeRequest = CompleteMultipartUploadRequest.builder()
.bucket(targetBucketName)
.key(targetObjectKey)
.uploadId(uploadId)
.multipartUpload(completedUpload)
.build();
```
Some indentation would make this more readable.
```java
try {
    downloadFile.completionFuture().get();
} catch (InterruptedException e) {
    log.debug("S3 download file: s3://{}/{} cancelled", source.getBucket(), source.getKey());
```
```diff
-log.debug("S3 download file: s3://{}/{} cancelled", source.getBucket(), source.getKey());
+log.debug("S3 download file: s3://{}/{} interrupted", source.getBucket(), source.getKey());
```
```java
log.debug("S3 download file: s3://{}/{} exception thrown", source.getBucket(), source.getKey());
throw new IOException(e.getCause());
```
```diff
-log.debug("S3 download file: s3://{}/{} exception thrown", source.getBucket(), source.getKey());
-throw new IOException(e.getCause());
+String msg = String.format("Exception thrown while downloading S3 object s3://%s/%s", source.getBucket(), source.getKey());
+throw new IOException(msg, e);
```

Note that `String.format` uses `%s` placeholders, not the `{}` placeholders of the SLF4J logger.
Logging the error and re-throwing will result in it being logged twice, which can be confusing. Alternatively it could be done with just `throw e.getCause()`.
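A minimal, self-contained sketch of the wrap-instead-of-log-and-rethrow pattern discussed here (class, method, bucket, and key names are illustrative, not from the PR):

```java
import java.io.IOException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class WrapCause {

    // Wrap the failure cause in an IOException carrying context, instead of
    // logging and re-throwing (which would log the same error twice upstream).
    static void await(CompletableFuture<?> future, String bucket, String key) throws IOException {
        try {
            future.get();
        }
        catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IOException("S3 download s3://" + bucket + "/" + key + " interrupted", e);
        }
        catch (ExecutionException e) {
            String msg = String.format("Exception thrown while downloading S3 object s3://%s/%s", bucket, key);
            // attach the underlying cause so the stack trace is preserved
            throw new IOException(msg, e.getCause());
        }
    }

    public static void main(String[] args) {
        CompletableFuture<Void> failed = new CompletableFuture<>();
        failed.completeExceptionally(new RuntimeException("connection reset"));
        try {
            await(failed, "my-bucket", "data/file.txt");
        }
        catch (IOException e) {
            System.out.println(e.getMessage());
            System.out.println(e.getCause().getMessage());
        }
    }
}
```

The caller sees a single exception with both the context message and the original cause, so nothing needs to be logged at the throw site.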
```java
        Thread.currentThread().interrupt();
    }
} catch (ExecutionException e) {
    log.debug("S3 download directory: s3://{}/{} exception thrown", source.getBucket(), source.getKey());
```
as above
```java
    Thread.currentThread().interrupt();
} catch (ExecutionException e) {
    log.debug("S3 upload file: s3://{}/{} exception thrown", target.getBucket(), target.getKey());
    throw new IOException(e.getCause());
```
as above
```java
    return getObjectMetadata(bucketName, key).getSSEAwsKmsKeyId();
} catch (ExecutionException e) {
    log.debug("S3 upload directory: s3://{}/{} exception thrown", target.getBucket(), target.getKey());
    throw new IOException(e.getCause());
```
as above
I have found an issue with multipart uploads when uploading large files. I'll move this to draft until I fix it.
Signed-off-by: Ben Sherman <[email protected]>
One of the changes I made to support the Signer override and UserAgent is breaking the multipart upload. I changed how the transfer manager's S3AsyncClient is created, and multipart uploads of large files are now having issues: too many parts are created, which can exhaust the heap memory or cause a timeout when acquiring connections. This does not happen with the default S3CrtAsyncClient, but that client does not allow defining the UserAgent and Signer as a client override. I am still looking for a way to define them, but I wonder how relevant these options are. Is there a possibility to deprecate them?
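For reference, a sketch of the two client constructions being compared; the region and part size are illustrative, and the point is that the CRT builder exposes no `overrideConfiguration()` while the Java-based builder does:

```java
import software.amazon.awssdk.core.client.config.SdkAdvancedClientOption;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.multipart.MultipartConfiguration;

// Default CRT-based async client: multipart sizing is handled internally,
// but there is no overrideConfiguration() to set a UserAgent or Signer
S3AsyncClient crtClient = S3AsyncClient.crtBuilder()
        .region(Region.EU_WEST_1)                  // illustrative region
        .minimumPartSizeInBytes(8L * 1024 * 1024)
        .build();

// Java-based async client: accepts client overrides, but multipart behaviour
// must be bounded explicitly or large files can generate too many parts
S3AsyncClient javaClient = S3AsyncClient.builder()
        .region(Region.EU_WEST_1)
        .multipartEnabled(true)
        .multipartConfiguration(MultipartConfiguration.builder()
                .minimumPartSizeInBytes(8L * 1024 * 1024)
                .build())
        .overrideConfiguration(o -> o.putAdvancedOption(
                SdkAdvancedClientOption.USER_AGENT_PREFIX, "nextflow"))
        .build();
```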
This PR contains the changes to port the Amazon plugin to AWS SDK version 2. Find below the most relevant changes:

- `S3Client.withForceGlobalBucketAccessEnabled(flag)` does not exist in v2. It now sets the flags `S3Client.Builder.crossRegionAccessEnabled(flag)` and `S3Configuration.multiRegionAccessEnabled(flag)`.
- `AmazonS3Client.getS3AccountOwner()` is not available in SDK v2. It provided an ID used for checking file access. In v2, the only way to retrieve the same ID is from a bucket owned by the user: we list the buckets and read the owner field from the `GetBucketAclResponse`. If the ID cannot be retrieved because the user does not own any bucket, we fall back as follows: for READ access, it tries to retrieve the head of the object, which fails if there is no read access; for WRITE access, a warning is printed. This is the same approach AWS NIO uses to check file access.
- The `setEndpoint` and `setRegion` methods in the S3Client wrapper are removed, as they are not available in the v2 clients. They were only used in tests.
- `CannedAccessControlList` is split into two classes, one for objects and another for buckets. In most of the code it has been substituted by `ObjectCannedACL`.
- `ContentType` and `ContentLength` are part of the request instead of the `ObjectMetadata`, and they can be obtained by invoking the `S3Client.headObject` method in SDK v2.
- `S3ClientConfiguration` doesn't exist in SDK v2. Two new classes have been created to emulate the same behaviour; they convert the properties to the SDK v2 sync and async client configurations.
- The `SsoCredentialsProviderV1` class is not needed anymore, as SDK v2 already manages the SSO credentials. The custom provider chain created in `S3FileSystemProvider.getCredetialsProvider0` to include the `SsoCredentialsProviderV1` has been replaced by the `DefaultCredentialsProvider` in v2. Credentials and config are automatically merged by SDK v2, so there is no option for `NXF_DISABLE_AWS_CONFIG_MERGE`.
- In v2, clients and requests are immutable and must be generated with a builder class. Some helper methods have been modified to pass builder classes instead of requests, such as `makeJobDefRequest`, `configJobRefRequest`, `addVolumeMountsToContainer`, etc.
- S3 Parallel Download was deprecated and `S3CopyStream` was not used; they have been removed.
- In v1, uploading a directory was performed by walking through the directory files and uploading them one by one. In v2, it has been substituted by the `uploadDirectory` method in the SDK.
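The directory upload described in the last point maps onto the SDK v2 `S3TransferManager` API. A minimal sketch, with placeholder bucket name and local path:

```java
import java.nio.file.Paths;

import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.transfer.s3.S3TransferManager;
import software.amazon.awssdk.transfer.s3.model.CompletedDirectoryUpload;
import software.amazon.awssdk.transfer.s3.model.UploadDirectoryRequest;

// Build a transfer manager on top of an async client
S3TransferManager tm = S3TransferManager.builder()
        .s3Client(S3AsyncClient.crtBuilder().build())
        .build();

// Upload the whole directory in one call; the SDK walks the tree itself
CompletedDirectoryUpload done = tm.uploadDirectory(UploadDirectoryRequest.builder()
                .source(Paths.get("/tmp/workdir"))  // placeholder local directory
                .bucket("my-bucket")                // placeholder bucket
                .build())
        .completionFuture()
        .join();

// Individual file failures are reported instead of aborting the walk
done.failedTransfers().forEach(f -> System.err.println(f.exception()));
```

Unlike the v1 walk-and-upload loop, per-file failures surface in `failedTransfers()` rather than interrupting the remaining uploads.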