
Export files to Amazon S3
🤖/s3/store exports encoding results to Amazon S3.
If you are new to Amazon S3, see our tutorial on using your own S3 bucket.
The URL to the result file in your S3 bucket will be returned in the Assembly Status JSON. If your S3 bucket has versioning enabled, the version ID of the file will be returned within meta.version_id.
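For illustration, a result entry produced by this Robot might look roughly like the following sketch. All values are placeholders and the exact set of fields depends on your Assembly:
{
  "url": "http://my-bucket.s3.amazonaws.com/my_target_folder/abc123/example.jpg",
  "ssl_url": "https://my-bucket.s3.amazonaws.com/my_target_folder/abc123/example.jpg",
  "meta": {
    "version_id": "3HL4kqtJlcpXroDTDmJ"
  }
}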
Warning
Avoid permission errors. By default, acl is set to "public-read". AWS S3 has a bucket setting called "Block new public ACLs and uploading public objects". Set this to False in your bucket if you intend to leave acl at its public default. Otherwise, you'll receive permission errors in your Assemblies even though your S3 credentials are configured correctly.
Warning
Use DNS-compliant bucket names. Your bucket name must be DNS-compliant and must not contain uppercase letters. Any non-alphanumeric characters in the file names will be replaced with an underscore, and spaces will be replaced with dashes. If your existing S3 bucket contains uppercase letters or is otherwise not DNS-compliant, rewrite the result URLs using the Robot’s url_prefix parameter.
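For example, a Step that rewrites result URLs to point at a CDN or custom domain could look like the following sketch. The credentials name and CDN host are placeholders:
{
  "steps": {
    "exported": {
      "robot": "/s3/store",
      "use": ":original",
      "credentials": "YOUR_AWS_CREDENTIALS",
      "url_prefix": "https://cdn.example.com/"
    }
  }
}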
Limit access
You will also need to add permissions to your bucket so that Transloadit can access it properly. Here is an example IAM policy that you can use. Following the principle of least privilege, it contains the minimum required permissions to export a file to your S3 bucket using Transloadit. You may require more permissions (especially viewing permissions) depending on your application.
Please change {BUCKET_NAME} in the values for Sid and Resource accordingly. Note that as written, this policy grants the minimum required permissions to all your users. We advise you to create a separate Amazon IAM user and use its User ARN (shown in the "Summary" tab of that user in the AWS IAM console) for the Principal value. More information is available in the AWS IAM documentation.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTransloaditToStoreFilesIn{BUCKET_NAME}Bucket",
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:ListBucket", "s3:PutObject", "s3:PutObjectAcl"],
      "Resource": ["arn:aws:s3:::{BUCKET_NAME}", "arn:aws:s3:::{BUCKET_NAME}/*"]
    }
  ]
}
The Sid value is just an identifier for you to recognize the rule later. You can name it anything you like.
The Resource list needs two entries because the ListBucket action requires permissions on the bucket itself, while the other actions require permissions on the objects in the bucket. When targeting the objects, the Resource entry has a trailing slash and asterisk, whereas when targeting the bucket, the slash and the asterisk are omitted.
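As a sketch, the same statement scoped to a dedicated IAM user via a Principal element could look like this. The account ID and user name are placeholders:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTransloaditToStoreFilesIn{BUCKET_NAME}Bucket",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/transloadit-export" },
      "Action": ["s3:GetBucketLocation", "s3:ListBucket", "s3:PutObject", "s3:PutObjectAcl"],
      "Resource": ["arn:aws:s3:::{BUCKET_NAME}", "arn:aws:s3:::{BUCKET_NAME}/*"]
    }
  ]
}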
Please note that if you give the Robot's acl parameter a value of "bucket-default", then you do not need the "s3:PutObjectAcl" permission in your bucket policy.
To build proper result URLs, we need to know the region in which your S3 bucket resides, which is why we require the GetBucketLocation permission. Determining your bucket's region this way also slows down your Assemblies. To avoid both the slowdown and the GetBucketLocation permission, we have added the bucket_region parameter to the /s3/store and /s3/import Robots. We recommend setting it at all times.
Please keep in mind that if you use bucket encryption, you may also need to add "sts:*" and "kms:*" to the bucket policy. Consult the AWS documentation if you run into trouble with our example bucket policy.
Keep your credentials safe
Usage example
Export uploaded files to my_target_folder in an S3 bucket:
{
  "steps": {
    "exported": {
      "robot": "/s3/store",
      "use": ":original",
      "credentials": "YOUR_AWS_CREDENTIALS",
      "path": "my_target_folder/${unique_prefix}/${file.url_name}"
    }
  }
}
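The Robot can also store the results of other Steps. The following sketch resizes images with 🤖/image/resize before exporting them; the resizing Step and its parameters are illustrative only:
{
  "steps": {
    "resized": {
      "robot": "/image/resize",
      "use": ":original",
      "width": 800,
      "height": 600
    },
    "exported": {
      "robot": "/s3/store",
      "use": "resized",
      "credentials": "YOUR_AWS_CREDENTIALS",
      "path": "resized/${unique_prefix}/${file.url_name}"
    }
  }
}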
Parameters
output_meta (Record<string, boolean> | boolean | Array<string>)
Allows you to specify a set of metadata that is more expensive on CPU power to calculate, and thus is disabled by default to keep your Assemblies processing fast.
For images, you can add "has_transparency": true in this object to extract whether the image contains transparent parts, and "dominant_colors": true to extract an array of hexadecimal color codes from the image. For videos, you can add "colorspace": true to extract the colorspace of the output video. For audio, you can add "mean_volume": true to get a single value representing the mean average volume of the audio file. You can also set this parameter to false to skip metadata extraction and speed up transcoding.
result (boolean, default: false)
Whether the results of this Step should be present in the Assembly Status JSON.
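As an illustration, a Step combining the image-related output_meta options described above might look like the following sketch:
{
  "exported": {
    "robot": "/s3/store",
    "use": ":original",
    "credentials": "YOUR_AWS_CREDENTIALS",
    "output_meta": {
      "has_transparency": true,
      "dominant_colors": true
    }
  }
}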
queue ("batch")
Setting the queue to "batch" manually downgrades the priority of jobs for this Step, to avoid consuming Priority job slots for jobs that do not need zero queue waiting times.
force_accept (boolean, default: false)
Force a Robot to accept a file type it would have ignored.
By default, Robots ignore files they are not familiar with. 🤖/video/encode, for example, will happily ignore input images.
With the force_accept parameter set to true, you can force Robots to accept all files thrown at them. This will typically lead to errors and should only be used for debugging or combating edge cases.
use (string | Array<string> | Array<object> | object)
Specifies which Step(s) to use as input.
- You can pick any names for Steps except ":original" (reserved for user uploads handled by Transloadit).
- You can provide several Steps as input with arrays:
  { "use": [ ":original", "encoded", "resized" ] }
Tip
That's likely all you need to know about use, but you can view Advanced use cases.
credentials (string)
Please create your associated Template Credentials in your Transloadit account and use the name of your Template Credentials as this parameter's value. They will contain the values for your S3 Bucket, Key, Secret and Bucket region.
While we recommend using Template Credentials at all times, some use cases demand dynamic credentials for which Template Credentials are too unwieldy because of their static nature. If you have this requirement, feel free to use the following parameters instead: "bucket", "bucket_region" (for example: "us-east-1" or "eu-west-2"), "key", "secret".
path (string, default: "${unique_prefix}/${file.url_name}")
The path at which the file is to be stored. This may include any available Assembly variables. The path must not be a directory.
url_prefix (string, default: "http://{bucket}.s3.amazonaws.com/")
The URL prefix used for the returned URL, such as "http://my.cdn.com/some/path/".
acl (bucket-default | private | public | public-read, default: "public-read")
The permissions used for this file.
Please keep in mind that the default value "public-read" can lead to permission errors due to the "Block all public access" checkbox that is checked by default when creating a new Amazon S3 Bucket in the AWS console.
check_integrity (boolean, default: false)
Calculate and submit the file's checksum in order for S3 to verify its integrity after uploading, which can help with occasional file corruption issues.
Enabling this option adds to the overall execution time, as integrity checking can be CPU intensive, especially for larger files.
headers (Record<string, string>, default: {"Content-Type": "${file.mime}"})
An object containing a list of headers to be set for this file on S3, such as { FileURL: "${file.url_name}" }. This can also include any available Assembly Variables. You can find a list of available headers in the AWS S3 documentation.
Object Metadata can be specified using x-amz-meta-* headers. Note that these headers do not support non-ASCII metadata values.
tags (Record<string, string>, default: {})
Object tagging allows you to categorize storage. You can associate up to 10 tags with an object. Tags that are associated with an object must have unique tag keys.
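For example, a Step that sets a custom header, an object metadata entry, and a tag might look like the following sketch; the metadata header and tag names are illustrative:
{
  "exported": {
    "robot": "/s3/store",
    "use": ":original",
    "credentials": "YOUR_AWS_CREDENTIALS",
    "headers": {
      "Content-Type": "${file.mime}",
      "x-amz-meta-original-name": "${file.url_name}"
    },
    "tags": {
      "project": "marketing-site"
    }
  }
}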
host (string, default: "s3.amazonaws.com")
The host of the storage service used. This only needs to be set when the storage service used is not Amazon S3, but has a compatible API (such as hosteurope.de). The default protocol used is HTTP; for anything else the protocol needs to be explicitly specified. For example, prefix the host with https:// or s3:// to use either respective protocol.
no_vhost (boolean, default: false)
Set to true if you use a custom host and run into access denied errors.
sign_urls_for (string | number)
This parameter provides signed URLs in the result JSON (in the signed_url and signed_ssl_url properties). The number that you set this parameter to is the URL expiry time in seconds. If this parameter is not used, no URL signing is done.
session_token (string)
The session token to use for the S3 store. This is only used if the credentials are from an IAM user with the sts:AssumeRole permission.
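As a final sketch, a Step that keeps exported files private and requests URLs signed for one hour could look like this:
{
  "exported": {
    "robot": "/s3/store",
    "use": ":original",
    "credentials": "YOUR_AWS_CREDENTIALS",
    "acl": "private",
    "sign_urls_for": 3600
  }
}
The signed_url and signed_ssl_url properties in the result JSON then contain URLs that expire after the given number of seconds.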
Demos
Related blog posts
- API update: renaming Robots for better clarity
- Rapid growth and new features
- Addressing S3 put request inconsistencies at Transloadit
- Introducing /s3/store Robot's 'url_prefix' parameter
- Launching SFTP Robot & unveiling new homepage
- All Robots now support expanded Assembly Variables
- Switching to official S3 CLI for enhanced file exporting
- Addressing the S3 incident with fixes and discounts
- New pricing model for future Transloadit customers
- No-code real-time video uploading with Bubble & Transloadit
- Export files to DigitalOcean Spaces with ease
- Creating audio waveform videos with FFmpeg & Node.js
- New feature: auto-transcribe videos with subtitles
- Celebrating transloadit’s 2021 milestones and progress
- Expanding our API for better Terraform provisioning
- What is content localization?
- How to set up an S3 bucket to use with Transloadit
- Automatically correct page orientation in documents
- Automatic background removal from images
- Generate stunning images from text using AI