S3 Connector
This connector facilitates communication with Amazon Simple Storage Service (S3) object storage, which is used for storing large objects such as documents and binary files.
Settings
Name - the unique identifier. It must consist of letters (A-Z or a-z), digits (0-9), underscores (_), or dollar signs ($); however, it cannot start with a digit or a dollar sign ($).
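The naming rule above can be sketched as a small validation helper. The class and method names below are illustrative, not part of the connector API:

```java
// Sketch of the naming rule: letters, digits, underscores, or dollar signs,
// not starting with a digit or a dollar sign. Illustrative helper only.
public class NameCheck {
    static boolean isValidConnectorName(String name) {
        if (name == null || name.isEmpty()) return false;
        char first = name.charAt(0);
        if (Character.isDigit(first) || first == '$') return false;
        for (char c : name.toCharArray()) {
            boolean ok = (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z')
                    || Character.isDigit(c) || c == '_' || c == '$';
            if (!ok) return false;
        }
        return true;
    }
}
```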
All settings should be defined as global variables. This enables the redefinition of values for new deployments.
Global variables allow the definition of different values for development, test, pre-production, and production environments.
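Conceptually, global variables act as a per-environment lookup. The sketch below illustrates the idea; the environment names, variable names, and values are examples, not actual connector configuration:

```java
import java.util.Map;

// Illustrative sketch of per-environment global variables; the keys and
// values here are examples only.
public class GlobalVars {
    static final Map<String, Map<String, String>> ENVIRONMENTS = Map.of(
            "development", Map.of("Region", "eu-west-1", "Bucket", "app-dev"),
            "production",  Map.of("Region", "eu-central-1", "Bucket", "app-prod"));

    // Resolves a variable for the given deployment environment.
    static String lookup(String environment, String variable) {
        return ENVIRONMENTS.getOrDefault(environment, Map.of()).get(variable);
    }
}
```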
AWS Access Key ID
Is a unique identifier used to authenticate and authorize requests made to Amazon Web Services (AWS) APIs.
Go to the Amazon Web Services console and click the name of your account (located in the top right corner of the console). Then, in the expanded drop-down list, select Security Credentials
Click the Continue to Security Credentials button
Expand the Access Keys (Access Key ID and Secret Access Key) option. You will see the list of your active and deleted access keys
To generate new access keys, click the Create New Access Key button
Click Show Access Key to have it displayed on the screen. Note that you can download it to your machine as a file and open it whenever needed; to do so, click the Download Key File button
AWS Secret Access Key
Is a sensitive piece of information used alongside an AWS Access Key ID to authenticate and authorize requests made to Amazon Web Services (AWS) APIs. Together, these two components are used for programmatic access to AWS services and resources.
Region
Is a geographical area where Amazon Web Services (AWS) hosts its cloud infrastructure, including data centers and availability zones. Each AWS region is designed to be isolated from other regions, providing redundancy, fault tolerance, and compliance with data sovereignty requirements.
Functions
copyObject
Creates a copy of an object that is already stored in Amazon S3. You can store individual objects of up to 5 TB in Amazon S3, and you can copy an object of up to 5 GB in size in a single atomic action using this API. To copy an object greater than 5 GB, you must use the multipart upload Upload Part - Copy (UploadPartCopy) API.
All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account.
A copy request might return an error when Amazon S3 receives the copy request or while Amazon S3 is copying the files. If the error occurs before the copy action starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the 200 OK response. This means that a 200 OK response can contain either a success or an error. If you call the S3 API directly, make sure to design your application to parse the contents of the response and handle it appropriately. If you use the Amazon Web Services SDKs, they detect the embedded error and apply error handling per your configuration settings (including automatically retrying the request as appropriate). If the condition persists, the SDKs throw an exception (or, for SDKs that don't use exceptions, they return the error).
If the copy is successful, you receive a response with information about the copied object. The copy request charge is based on the storage class and Region that you specify for the destination object. The request can also result in a data retrieval charge for the source if the source storage class bills for data retrieval.
Amazon S3 Transfer Acceleration does not support cross-Region copies. If you request a cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad Request error. For more information, see Transfer Acceleration.
Arguments:
sourceBucketName :: pdk.core.String - The name of the source bucket
sourceKey :: pdk.core.String - The key of the source object
destinationBucketName :: pdk.core.String - The name of the destination bucket
destinationKey :: pdk.core.String - The key of the destination object
Result:
output :: pdk.s3.CopyObjectResult - Result of the CopyObject operation returned by the service
Possible exceptions
NullPointerException - thrown if the sourceBucketName, sourceKey, destinationBucketName, or destinationKey argument is NULL
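The 5 GB threshold described above determines which copy API applies. The sketch below encodes that decision rule; the class and method names are illustrative, not part of the connector:

```java
// Sketch of the size rule for copies: objects up to 5 GB can be copied
// in one atomic CopyObject call; larger objects require the multipart
// UploadPartCopy API. Names and constants are illustrative.
public class CopyStrategy {
    static final long SINGLE_COPY_LIMIT_BYTES = 5L * 1024 * 1024 * 1024; // 5 GB

    static String chooseCopyApi(long objectSizeBytes) {
        return objectSizeBytes <= SINGLE_COPY_LIMIT_BYTES
                ? "CopyObject"
                : "UploadPartCopy";
    }
}
```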
createBucket
Creates a new S3 bucket. To create a bucket, you must register with Amazon S3 and have a valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner. Not every string is an acceptable bucket name. For information about bucket naming restrictions, see Bucket naming rules
Arguments:
bucketName :: pdk.core.String - The name of the bucket to create
Result:
output :: pdk.s3.Bucket - Result of the CreateBucket operation returned by the service.
Possible exceptions
NullPointerException - thrown if the bucketName argument is NULL
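As noted above, not every string is an acceptable bucket name. The sketch below checks a core subset of the S3 bucket naming rules (3-63 characters; lowercase letters, digits, dots, and hyphens; starting and ending with a letter or digit). It is a partial, illustrative check, not the full rule set (for example, it does not reject names formatted as IP addresses):

```java
// Partial sketch of the S3 bucket naming rules; see "Bucket naming rules"
// in the AWS documentation for the complete list.
public class BucketNameCheck {
    static boolean looksValid(String name) {
        if (name == null || name.length() < 3 || name.length() > 63) return false;
        // Lowercase letters, digits, dots, hyphens; alphanumeric at both ends.
        return name.matches("[a-z0-9][a-z0-9.-]*[a-z0-9]");
    }
}
```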
deleteBucket
Deletes the S3 bucket. All objects (including all object versions and delete markers) in the bucket must be deleted before the bucket itself can be deleted.
Arguments:
bucketName :: pdk.core.String - The name of the bucket to delete
Result:
output :: pdk.core.Boolean - Result of the DeleteBucket operation returned by the service.
Possible exceptions
NullPointerException - thrown if the bucketName argument is NULL
deleteObject
Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects but will still respond that the command was successful.
Arguments:
bucketName :: pdk.core.String - The name of the bucket
key :: pdk.core.String - the object key
Result:
output :: pdk.core.Boolean - Result of the DeleteObject operation returned by the service.
Possible exceptions
NullPointerException - thrown if the bucketName argument is NULL
deleteObjects
This action enables you to delete multiple objects from a bucket using a single HTTP request. If you know the object keys that you want to delete, then this action provides a suitable alternative to sending individual delete requests, reducing per-request overhead.
The request contains a list of up to 1,000 keys that you want to delete. In the XML, you provide the object key names and, optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete action and returns the result of that delete, success or failure, in the response. Note that if the object specified in the request is not found, Amazon S3 returns the result as deleted.
The action supports two modes for the response: verbose and quiet. By default, the action uses verbose mode, in which the response includes the result of deletion of each key in your request. In quiet mode, the response includes only keys where the delete action encountered an error; for a successful deletion, the action does not return any information about the delete in the response body.
Arguments:
bucketName :: pdk.core.String - The name of the bucket
keys :: pdk.core.Array<pdk.core.String> - the object keys
Result:
output :: pdk.core.Boolean - Result of the operation.
Possible exceptions
NullPointerException - thrown if the bucketName or keys argument is NULL
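Because each request is limited to 1,000 keys, deleting a larger set means splitting it into batches and issuing one deleteObjects call per batch. The sketch below shows that batching step; the helper name is illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of splitting a large key list into batches of at most `batchSize`
// keys (1,000 for deleteObjects), one batch per delete request.
public class DeleteBatcher {
    static List<List<String>> batches(List<String> keys, int batchSize) {
        List<List<String>> result = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += batchSize) {
            result.add(keys.subList(i, Math.min(i + batchSize, keys.size())));
        }
        return result;
    }
}
```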
doesBucketExist
This action is useful for determining whether a bucket exists and whether you have permission to access it.
To use this operation, you must have permissions to perform the s3:ListBucket action.
Arguments:
bucketName :: pdk.core.String - The name of the bucket
Result:
output :: pdk.core.Boolean - Result of the operation.
Possible exceptions
NullPointerException - thrown if the bucketName argument is NULL
getObjectAsByteArray
Retrieves objects from Amazon S3. To use GET, you must have READ access to the object.
An Amazon S3 bucket has no directory hierarchy such as you would find in a typical computer file system. You can, however, create a logical hierarchy by using object key names that imply a folder structure. For example, instead of naming an object sample.jpg, you can name it photos/2006/February/sample.jpg.
To get an object from such a logical hierarchy, specify the full key name for the object in the GET operation. For a virtual hosted-style request example, if you have the object photos/2006/February/sample.jpg, specify the resource as /photos/2006/February/sample.jpg. For a path-style request example, if you have the object photos/2006/February/sample.jpg in the bucket named examplebucket, specify the resource as /examplebucket/photos/2006/February/sample.jpg. For more information about request types, see HTTP Host Header Bucket Specification.
Arguments:
bucketName :: pdk.core.String - The name of the bucket
key :: pdk.core.String - the object key
Result:
output :: pdk.core.Array<pdk.core.Byte> - An array of bytes containing the object data streamed from the service.
Possible exceptions
NullPointerException - thrown if the bucketName or key argument is NULL
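The logical hierarchy described above is just a naming convention: a key such as photos/2006/February/sample.jpg implies folders that do not actually exist. The sketch below builds such a key from illustrative "folder" segments; the helper is not part of the connector:

```java
// Sketch of the logical key hierarchy: S3 has no real folders, but
// slash-separated key names imply one. Illustrative helper only.
public class KeyHierarchy {
    static String keyFor(String fileName, String... folders) {
        return String.join("/", folders) + "/" + fileName;
    }
}
```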
getObjectAsFile
Retrieves objects from Amazon S3 and writes them to a new local file. To use GET, you must have READ access to the object.
To get an object from a logical hierarchy, specify its full key name, as described for getObjectAsByteArray above.
Arguments:
bucketName :: pdk.core.String - The name of the bucket
key :: pdk.core.String - the object key
fileName :: pdk.core.String - The name of the file to create
Result:
output :: pdk.io.File - new file with defined name
Possible exceptions
NullPointerException - thrown if the bucketName, key, or fileName argument is NULL
listBuckets
Returns a list of all buckets owned by the authenticated sender of the request. To use this operation, you must have the s3:ListAllMyBuckets permission.
Arguments:
none
Result:
output :: pdk.core.Array<pdk.s3.Bucket> - result of the ListBuckets operation returned by the service
listObjectsV2
Returns some or all (up to 1,000) of the objects in a bucket with each request. You can use the request parameters as selection criteria to return a subset of the objects in a bucket. A 200 OK response can contain valid or invalid XML; make sure to design your application to parse the contents of the response and handle it appropriately. Objects are returned sorted in ascending order of their key names. For more information about listing objects, see Listing object keys programmatically in the Amazon S3 User Guide. To use this operation, you must have READ access to the bucket.
Arguments:
bucketName :: pdk.core.String - The name of the bucket
prefix :: pdk.core.String - limits the response to keys that begin with the specified prefix.
continuationToken :: pdk.core.String - indicates to Amazon S3 that the list is being continued on this bucket with a token.
Result:
output :: pdk.s3.ListObjectsV2Response - a custom iterable that can be used to iterate through all the response pages
Possible exceptions
NullPointerException - thrown if the bucketName, prefix, or continuationToken argument is NULL
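Since each listObjectsV2 call returns at most one page of keys, listing a whole bucket means looping until no continuation token is returned. The sketch below simulates that loop with an in-memory "service" standing in for Amazon S3; names, the tiny page size, and the token format are all illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of continuation-token paging: request a page, keep its keys,
// and repeat with the returned token until it is null.
public class Paginator {
    static final int PAGE_SIZE = 3; // S3's real page limit is 1,000

    // Returns [pageOfKeys, nextToken]; nextToken is null on the last page.
    static Object[] listPage(List<String> allKeys, String token) {
        int start = token == null ? 0 : Integer.parseInt(token);
        int end = Math.min(start + PAGE_SIZE, allKeys.size());
        String next = end < allKeys.size() ? String.valueOf(end) : null;
        return new Object[]{allKeys.subList(start, end), next};
    }

    @SuppressWarnings("unchecked")
    static List<String> listAll(List<String> allKeys) {
        List<String> out = new ArrayList<>();
        String token = null;
        do {
            Object[] page = listPage(allKeys, token);
            out.addAll((List<String>) page[0]);
            token = (String) page[1];
        } while (token != null);
        return out;
    }
}
```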
putObject
Adds an object to a bucket. You must have WRITE permissions on a bucket to add an object to it.
Amazon S3 never adds partial objects; if you receive a success response, Amazon S3 added the entire object to the bucket. You cannot use PutObject to only update a single piece of metadata for an existing object. You must put the entire object with updated metadata if you want to update some values.
Amazon S3 is a distributed system. If it receives multiple write requests for the same object simultaneously, it overwrites all but the last object written. To prevent objects from being deleted or overwritten, you can use Amazon S3 Object Lock.
Arguments:
bucketName :: pdk.core.String - The name of the bucket
key :: pdk.core.String - the object key
file :: pdk.io.File - source file
Result:
output :: pdk.s3.PutObjectResponse - result of the PutObject operation returned by the service
Possible exceptions
NullPointerException - thrown if the bucketName, key, or file argument is NULL