
Export files to Amazon S3
🤖/s3/store exports encoding results to Amazon S3.
If you are new to Amazon S3, see our tutorial on using your own S3 bucket.
The URL to the result file in your S3 bucket will be returned in the Assembly Status JSON.
Avoid permission errors. By default, acl is set to "public-read". AWS S3 has a bucket setting called "Block new public ACLs and uploading public objects". Set this to False in your bucket if you intend to leave acl as "public-read". Otherwise, you'll receive permission errors in your Assemblies despite your S3 credentials being configured correctly.
Use DNS-compliant bucket names. Your bucket name must be DNS-compliant and must not contain uppercase letters. Any non-alphanumeric characters in the file names will be replaced with an underscore, and spaces will be replaced with dashes. If your existing S3 bucket contains uppercase letters or is otherwise not DNS-compliant, rewrite the result URLs using the Robot's url_prefix parameter.
Limit access
You will also need to add permissions to your bucket so that Transloadit can access it properly. Here is an example IAM policy that you can use. Following the principle of least privilege, it contains the minimum required permissions to export a file to your S3 bucket using Transloadit. You may require more permissions (especially viewing permissions) depending on your application.
Please change {BUCKET_NAME} in the values for Sid and Resource accordingly. Also note that this policy grants the minimum required permissions to all your users. We advise you to create a separate Amazon IAM user, and to use its User ARN (found in the "Summary" tab of a user in the IAM console) for the Principal value. More information about this can be found in the AWS IAM documentation.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTransloaditToStoreFilesIn{BUCKET_NAME}Bucket",
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:ListBucket", "s3:PutObject", "s3:PutObjectAcl"],
      "Resource": ["arn:aws:s3:::{BUCKET_NAME}", "arn:aws:s3:::{BUCKET_NAME}/*"]
    }
  ]
}
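If you instead attach this as a bucket policy (rather than an IAM user policy), here is a minimal sketch of the same statement with a Principal added, as advised above. The {ACCOUNT_ID} and {USER_NAME} placeholders are ours and stand in for your own IAM user's ARN:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTransloaditToStoreFilesIn{BUCKET_NAME}Bucket",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::{ACCOUNT_ID}:user/{USER_NAME}" },
      "Action": ["s3:GetBucketLocation", "s3:ListBucket", "s3:PutObject", "s3:PutObjectAcl"],
      "Resource": ["arn:aws:s3:::{BUCKET_NAME}", "arn:aws:s3:::{BUCKET_NAME}/*"]
    }
  ]
}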
The Sid value is just an identifier for you to recognize the rule later. You can name it anything you like.
The Resource value needs to list two ARNs, because the ListBucket action requires permissions on the bucket itself, while the other actions require permissions on the objects in the bucket. When targeting the objects, there is a trailing slash and an asterisk in the Resource ARN, whereas when targeting the bucket, the slash and the asterisk are omitted.
Please note that if you give the Robot's acl parameter a value of "bucket-default", then you do not need the "s3:PutObjectAcl" permission in your bucket policy.
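For example, with acl set to "bucket-default", the Action array of the example policy above can be reduced to:

  "Action": ["s3:GetBucketLocation", "s3:ListBucket", "s3:PutObject"]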
In order to build proper result URLs, we need to know the region in which your S3 bucket resides. For this we require the GetBucketLocation permission, and figuring out your bucket's region this way slows down your Assemblies. To make this much faster and to avoid requiring the GetBucketLocation permission, we have added the bucket_region parameter to the /s3/store and /s3/import Robots. We recommend using it at all times.
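For illustration, here is a minimal sketch of a Step that supplies bucket_region alongside dynamic credentials (see the credentials parameter below; all values are placeholders of our own):

{
  "steps": {
    "exported": {
      "robot": "/s3/store",
      "use": ":original",
      "bucket": "my-bucket",
      "bucket_region": "us-east-1",
      "key": "YOUR_AWS_KEY",
      "secret": "YOUR_AWS_SECRET"
    }
  }
}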
Please keep in mind that if you use bucket encryption, you may also need to add "sts:*" and "kms:*" to the bucket policy. Please consult the AWS documentation on S3 bucket encryption and on AWS KMS in case you run into trouble with our example bucket policy.
Usage example
Export uploaded files to my_target_folder in an S3 bucket:
{
  "steps": {
    "exported": {
      "robot": "/s3/store",
      "use": ":original",
      "credentials": "YOUR_AWS_CREDENTIALS",
      "path": "my_target_folder/${unique_prefix}/${file.url_name}"
    }
  }
}
Parameters
- use
  String / Array of Strings / Object ⋅ required
  Specifies which Step(s) to use as input.
  - You can pick any names for Steps except ":original" (reserved for user uploads handled by Transloadit).
  - You can provide several Steps as input with arrays:
    "use": [ ":original", "encoded", "resized" ]
  💡 That's likely all you need to know about use, but you can view the advanced use cases below:
  - Step bundling. Some Robots can gather several Step results for a single invocation. For example, 🤖/file/compress would normally create one archive for each file passed to it. If you set bundle_steps to true, however, it will create one archive containing all the result files from all Steps you give it. To enable bundling, provide an object like the one below to the use parameter:
    "use": { "steps": [ ":original", "encoded", "resized" ], "bundle_steps": true }
    This is also a crucial parameter for 🤖/video/adaptive; otherwise you will generate one playlist for each viewing quality.
    Keep in mind that all input Steps must be present in your Template. If one of them is missing (for instance, it is rejected by a filter), no result is generated, because the Robot waits indefinitely for all input Steps to be finished. Here's a demo that showcases Step bundling.
  - Group by original. Sticking with the 🤖/file/compress example, you can set group_by_original to true in order to create a separate archive for each of your uploaded or imported files, instead of creating one archive containing all originals (or one per resulting file). This is important for 🤖/media/playlist, where you'd typically set:
    "use": { "steps": [ "segmented" ], "bundle_steps": true, "group_by_original": true }
  - Fields. You can be more selective by only using files that match a field name, by setting the fields property. When this array is specified, the corresponding Step will only be executed for files submitted through one of the given field names, which correspond to the strings in the name attribute of the HTML file input tag, for instance. When using a back-end SDK, it corresponds to myFieldName1 in e.g. $transloadit->addFile('myFieldName1', './chameleon.jpg').
    This parameter is set to true by default, meaning all fields are accepted.
    Example:
    "use": { "steps": [ ":original" ], "fields": [ "myFieldName1" ] }
  - Use as. Sometimes Robots take several inputs. For instance, 🤖/video/merge can create a slideshow from audio and images. You can map different Steps to the appropriate inputs.
    Example:
    "use": { "steps": [ { "name": "audio_encoded", "as": "audio" }, { "name": "images_resized", "as": "image" } ] }
    Sometimes the ordering is important, for instance, with our concat Robots. In these cases, you can add an index that starts at 1. You can also optionally filter by the multipart field name, as in this example, where all files come from the same source (end-user uploads) but with different <input> names:
    Example:
    "use": { "steps": [ { "name": ":original", "fields": "myFirstVideo", "as": "video_1" }, { "name": ":original", "fields": "mySecondVideo", "as": "video_2" }, { "name": ":original", "fields": "myThirdVideo", "as": "video_3" } ] }
  - For times when it is not apparent where we should put the file, you can use Assembly Variables to be specific. For instance, you may want to pass a text file to 🤖/image/resize to burn the text into an image, but you are burning multiple texts, so where do we put the text file? We specify it via ${use.text_1}, to indicate the first text file that was passed.
    Example:
    "watermarked": {
      "robot": "/image/resize",
      "use": {
        "steps": [
          { "name": "resized", "as": "base" },
          { "name": "transcribed", "as": "text" }
        ]
      },
      "text": [
        { "text": "Hi there", "valign": "top", "align": "left" },
        {
          "text": "From the 'transcribed' Step: ${use.text_1}",
          "valign": "bottom",
          "align": "right",
          "x_offset": 16,
          "y_offset": -10
        }
      ]
    }
- credentials
  String ⋅ required
  Please create your associated Template Credentials in your Transloadit account and use the name of your Template Credentials as this parameter's value. They will contain the values for your S3 bucket, Key, Secret and Bucket region.
  While we recommend using Template Credentials at all times, some use cases demand dynamic credentials, for which Template Credentials are too unwieldy because of their static nature. If you have this requirement, feel free to use the following parameters instead: "bucket", "bucket_region" (for example: "us-east-1" or "eu-west-2"), "key", "secret".
- path
  String ⋅ default: "${unique_prefix}/${file.url_name}"
  The path at which the file is to be stored. This may include any available Assembly variables. The path must not be a directory.
- url_prefix
  String ⋅ default: "http://{bucket}.s3.amazonaws.com/"
  The URL prefix used for the returned URL, such as "http://my.cdn.com/some/path/".
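  For instance, to have result URLs point at a CDN in front of your bucket, a Step could look like the following sketch (the CDN domain is a placeholder of our own). This is also the way to rewrite result URLs for buckets that are not DNS-compliant, as noted above:
  "exported": {
    "robot": "/s3/store",
    "use": ":original",
    "credentials": "YOUR_AWS_CREDENTIALS",
    "url_prefix": "https://my.cdn.com/"
  }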
- acl
  String ⋅ default: "public-read"
  The permissions used for this file. This can be "public-read", "public", "private" or "bucket-default".
  Please keep in mind that the default value "public-read" can lead to permission errors due to the "Block all public access" checkbox that is checked by default when creating a new Amazon S3 Bucket in the AWS console.
- check_integrity
  Boolean ⋅ default: false
  Calculate and submit the file's checksum in order for S3 to verify its integrity after uploading, which can help with occasional file corruption issues.
  Enabling this option adds to the overall execution time, as integrity checking can be CPU-intensive, especially for larger files.
- headers
  Object ⋅ default: { "Content-Type": "${file.mime}" }
  An object containing a list of headers to be set for this file on S3, such as { FileURL: "${file.url_name}" }. This can also include any available Assembly Variables. You can find a list of available headers in the AWS documentation.
  Object Metadata can be specified using x-amz-meta-* headers. Note that these headers do not support non-ASCII metadata values.
- host
  String ⋅ default: "s3.amazonaws.com"
  The host of the storage service used. This only needs to be set when the storage service used is not Amazon S3, but has a compatible API (such as hosteurope.de). The default protocol used is HTTP; for anything else, the protocol needs to be explicitly specified. For example, prefix the host with https:// or s3:// to use the respective protocol.
- no_vhost
  Boolean ⋅ default: false
  Set to true if you use a custom host and run into access denied errors.
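  For example, a Step targeting an S3-compatible service such as MinIO might look like the following sketch (the host and the credentials name are placeholders of our own):
  "exported": {
    "robot": "/s3/store",
    "use": ":original",
    "credentials": "YOUR_MINIO_CREDENTIALS",
    "host": "https://minio.example.com",
    "no_vhost": true
  }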
- sign_urls_for
  Integer
  This parameter provides signed URLs in the result JSON (in the signed_url and signed_ssl_url properties). The number that you set this parameter to is the URL expiry time in seconds. If this parameter is not used, no URL signing is done.
  Note: The URLs in the result JSON already point to the file on your target storage platform, so you can just save that URL in your database.
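  As a sketch of what to expect (values abbreviated, and the exact format may differ), a result entry in the Assembly Status JSON would then include both the regular and the signed URL properties:
  {
    "url": "http://my-bucket.s3.amazonaws.com/my_target_folder/abc123/file.jpg",
    "signed_url": "http://my-bucket.s3.amazonaws.com/my_target_folder/abc123/file.jpg?X-Amz-Expires=3600&X-Amz-Signature=...",
    "signed_ssl_url": "https://my-bucket.s3.amazonaws.com/my_target_folder/abc123/file.jpg?X-Amz-Expires=3600&X-Amz-Signature=..."
  }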
Demos
- Copy files from FTP servers to Amazon S3
- Store uploaded files in an Amazon S3 bucket
- Copy files from Azure to Amazon S3
- Copy files from Backblaze to Amazon S3
- Copy files from DigitalOcean Spaces to Amazon S3
- Copy files from Dropbox to Amazon S3
- Copy files from Google Storage to Amazon S3
- Copy files from MinIO to Amazon S3
- Copy files from Openstack/Swift to Amazon S3
- Copy files from Rackspace Cloud Files to Amazon S3
- Copy files from SFTP servers to Amazon S3
- Copy files from Wasabi to Amazon S3
- Copy files from Webservers to Amazon S3
Related blog posts
- Renaming Some Robots April 7, 2010
- Rapid Growth and New Features November 12, 2010
- Fixing Amazon S3 Bugs May 16, 2011
- New /s3/store Parameter May 26, 2011
- Releasing Our SFTP Robot and a New Homepage August 21, 2011
- Assembly Variables Now Available Everywhere April 27, 2012
- S3 Changes February 5, 2015
- Post Mortem: S3 Saving Incident February 17, 2015
- Raising prices (for new customers) February 7, 2018
- Add real-time video uploading to a site without writing code, with Bubble.is and Transloadit August 2, 2019
- The /digitalocean/store Robot December 9, 2019
- Let's Build: Audio Waveform Video Generator January 4, 2021
- Automatically transcribe video files March 8, 2021
- Transloadit Milestones of 2021 January 31, 2022
- Expanding our API for better Terraform provisioning December 6, 2022
- What is content localization? February 24, 2023
- How to set up an S3 bucket to use with Transloadit March 25, 2023