
Export files to Amazon S3

🤖/s3/store exports encoding results to Amazon S3.

If you are new to Amazon S3, see our tutorial on using your own S3 bucket.

The URL to the result file in your S3 bucket will be returned in the Assembly Status JSON. If your S3 bucket has versioning enabled, the version ID of the file will be returned within meta.version_id.

Warning

Avoid permission errors. By default, acl is set to "public-read". AWS S3 has a bucket setting called "Block new public ACLs and uploading public objects". Disable this setting in your bucket if you intend to leave acl at its public default. Otherwise, you'll receive permission errors in your Assemblies even though your S3 credentials are configured correctly.

Warning

Use DNS-compliant bucket names. Your bucket name must be DNS-compliant and must not contain uppercase letters. Any non-alphanumeric characters in the file names will be replaced with an underscore, and spaces will be replaced with dashes. If your existing S3 bucket contains uppercase letters or is otherwise not DNS-compliant, rewrite the result URLs using the Robot’s url_prefix parameter.
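As a sketch of that workaround, the Step below rewrites returned URLs via url_prefix; the CDN hostname and credentials name are hypothetical placeholders, not values from this documentation:

```json
{
  "steps": {
    "exported": {
      "robot": "/s3/store",
      "use": ":original",
      "credentials": "YOUR_AWS_CREDENTIALS",
      "url_prefix": "https://cdn.example.com/"
    }
  }
}
```

With this in place, result URLs start with https://cdn.example.com/ instead of the bucket's default S3 URL.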

Limit access

You will also need to add permissions to your bucket so that Transloadit can access it properly. Here is an example IAM policy that you can use. Following the principle of least privilege, it contains the minimum required permissions to export a file to your S3 bucket using Transloadit. You may require more permissions (especially viewing permissions) depending on your application.

Please change {BUCKET_NAME} in the values for Sid and Resource accordingly. Also, this policy will grant the minimum required permissions to all your users. We advise you to create a separate Amazon IAM user and use its User ARN (shown in the user's "Summary" tab in the IAM console) as the Principal value. See the AWS IAM documentation for more information.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTransloaditToStoreFilesIn{BUCKET_NAME}Bucket",
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:ListBucket", "s3:PutObject", "s3:PutObjectAcl"],
      "Resource": ["arn:aws:s3:::{BUCKET_NAME}", "arn:aws:s3:::{BUCKET_NAME}/*"]
    }
  ]
}

The Sid value is just an identifier for you to recognize the rule later. You can name it anything you like.

The Resource array needs two entries, because the ListBucket action requires permissions on the bucket itself, while the other actions require permissions on the objects in the bucket. The object entry carries a trailing slash and an asterisk ("arn:aws:s3:::{BUCKET_NAME}/*"), whereas the bucket entry omits them.

Please note that if you give the Robot's acl parameter a value of "bucket-default", then you do not need the "s3:PutObjectAcl" permission in your bucket policy.
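In that case, the example policy above can be trimmed accordingly. A sketch, with {BUCKET_NAME} remaining a placeholder as before:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowTransloaditToStoreFilesIn{BUCKET_NAME}Bucket",
      "Effect": "Allow",
      "Action": ["s3:GetBucketLocation", "s3:ListBucket", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::{BUCKET_NAME}", "arn:aws:s3:::{BUCKET_NAME}/*"]
    }
  ]
}
```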

In order to build proper result URLs, we need to know the region in which your S3 bucket resides, which is why we require the GetBucketLocation permission. Looking up your bucket's region this way also slows down your Assemblies. To avoid both the lookup and the GetBucketLocation permission, we have added the bucket_region parameter to the /s3/store and /s3/import Robots. We recommend setting it at all times.
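As a sketch, when passing credentials dynamically rather than via Template Credentials, bucket_region can be supplied directly alongside them; all values below are hypothetical placeholders:

```json
{
  "steps": {
    "exported": {
      "robot": "/s3/store",
      "use": ":original",
      "bucket": "my-bucket",
      "bucket_region": "us-east-1",
      "key": "YOUR_AWS_KEY",
      "secret": "YOUR_AWS_SECRET"
    }
  }
}
```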

Please keep in mind that if you use bucket encryption, you may also need to add "sts:*" and "kms:*" to the bucket policy. Consult the AWS documentation on bucket encryption and AWS KMS if you run into trouble with our example bucket policy.

Keep your credentials safe
Since you need to provide credentials to this Robot, you should always use it together with Templates and/or Template Credentials, so that you never leak any secrets while transmitting your Assembly Instructions.

Usage example

Export uploaded files to my_target_folder in an S3 bucket:

{
  "steps": {
    "exported": {
      "robot": "/s3/store",
      "use": ":original",
      "credentials": "YOUR_AWS_CREDENTIALS",
      "path": "my_target_folder/${unique_prefix}/${file.url_name}"
    }
  }
}

Parameters

  • output_meta

    Record<string, boolean> | boolean | Array<string>

    Allows you to specify a set of metadata that is more expensive on CPU power to calculate, and thus is disabled by default to keep your Assemblies processing fast.

    For images, you can add "has_transparency": true in this object to extract if the image contains transparent parts and "dominant_colors": true to extract an array of hexadecimal color codes from the image.

    For images, you can also add "blurhash": true to extract a BlurHash string — a compact representation of a placeholder for the image, useful for showing a blurred preview while the full image loads.

    For videos, you can add "colorspace": true to extract the colorspace of the output video.

    For audio, you can add "mean_volume": true to get a single value representing the mean average volume of the audio file.

    You can also set this to false to skip metadata extraction and speed up transcoding.
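    For instance, to request the extra image metadata described above on a Step, the parameter would look like this fragment:

    ```json
    {
      "output_meta": {
        "has_transparency": true,
        "dominant_colors": true,
        "blurhash": true
      }
    }
    ```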

  • result

    boolean (default: false)

    Whether the results of this Step should be present in the Assembly Status JSON.

  • queue

    batch

    Setting the queue to 'batch' manually downgrades the priority of jobs for this Step, to avoid consuming Priority job slots for jobs that do not need zero queue waiting times.

  • force_accept

    boolean (default: false)

    Force a Robot to accept a file type it would have ignored.

    By default, Robots ignore files they are not familiar with. 🤖/video/encode, for example, will happily ignore input images.

    With the force_accept parameter set to true, you can force Robots to accept all files thrown at them. This will typically lead to errors and should only be used for debugging or combating edge cases.

  • ignore_errors

    boolean | Array<meta | execute> (default: [])

    Ignore errors during specific phases of processing.

    Setting this to ["meta"] will cause the Robot to ignore errors during metadata extraction.

    Setting this to ["execute"] will cause the Robot to ignore errors during the main execution phase.

    Setting this to true is equivalent to ["meta", "execute"] and will ignore errors in both phases.

  • use

    string | Array<string> | Array<object> | object

    Specifies which Step(s) to use as input.

    • You can pick any names for Steps except ":original" (reserved for user uploads handled by Transloadit)
    • You can provide several Steps as input with arrays:
      {
        "use": [
          ":original",
          "encoded",
          "resized"
        ]
      }
      
    Tip

    That's likely all you need to know about use, but you can view Advanced use cases.

  • credentials

    string

    Please create your associated Template Credentials in your Transloadit account and use the name of your Template Credentials as this parameter's value. They will contain the values for your S3 bucket, Key, Secret and Bucket region.

    While we recommend using Template Credentials at all times, some use cases demand dynamic credentials, for which Template Credentials are too unwieldy because of their static nature. If you have this requirement, feel free to use the following parameters instead: "bucket", "bucket_region" (for example: "us-east-1" or "eu-west-2"), "key", "secret".

  • path

    string (default: "${unique_prefix}/${file.url_name}")

    The path at which the file is to be stored. This may include any available Assembly variables. The path must not be a directory.

  • url_prefix

    string (default: "http://{bucket}.s3.amazonaws.com/")

    The URL prefix used for the returned URL, such as "http://my.cdn.com/some/path/".

  • acl

    bucket-default | private | public | public-read (default: "public-read")

    The permissions used for this file.

    Please keep in mind that the default value "public-read" can lead to permission errors due to the "Block all public access" checkbox that is checked by default when creating a new Amazon S3 Bucket in the AWS console.

  • check_integrity

    boolean (default: false)

    Calculate and submit the file's checksum in order for S3 to verify its integrity after uploading, which can help with occasional file corruption issues.

    Enabling this option adds to the overall execution time, as integrity checking can be CPU intensive, especially for larger files.

  • headers

    Record<string, string> (default: {"Content-Type":"${file.mime}"})

    An object containing a list of headers to be set for this file on S3, such as { FileURL: "${file.url_name}" }. This can also include any available Assembly Variables. See the AWS documentation on S3 request headers for the full list.

    Object Metadata can be specified using x-amz-meta-* headers. Note that these headers do not support non-ASCII metadata values.
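    For example, a caching header plus a custom metadata entry could be set like this fragment; the header values and metadata key are illustrative, not defaults:

    ```json
    {
      "headers": {
        "Content-Type": "${file.mime}",
        "Cache-Control": "public, max-age=31536000",
        "x-amz-meta-original-name": "${file.url_name}"
      }
    }
    ```

    Note that since x-amz-meta-* values must be ASCII, a variable such as ${file.url_name} is a safer choice here than the raw original file name.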

  • tags

    Record<string, string> (default: {})

    Object tagging allows you to categorize storage. You can associate up to 10 tags with an object. Tags that are associated with an object must have unique tag keys.

  • host

    string (default: "s3.amazonaws.com")

    The host of the storage service used. This only needs to be set when the storage service used is not Amazon S3 but has a compatible API (such as hosteurope.de). The default protocol is HTTP; any other protocol needs to be specified explicitly. For example, prefix the host with https:// or s3:// to use the respective protocol.

  • no_vhost

    boolean (default: false)

    Set to true if you use a custom host and run into access denied errors.
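    When targeting an S3-compatible service rather than Amazon S3 itself, you would typically combine host (with an explicit protocol) and, if needed, no_vhost. A sketch; the endpoint and credentials name below are hypothetical:

    ```json
    {
      "steps": {
        "exported": {
          "robot": "/s3/store",
          "use": ":original",
          "credentials": "YOUR_S3_COMPATIBLE_CREDENTIALS",
          "host": "https://s3.example-provider.com",
          "no_vhost": true
        }
      }
    }
    ```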

  • sign_urls_for

    string | number

    This parameter provides signed URLs in the result JSON (in the signed_url and signed_ssl_url properties). The number that you set this parameter to is the URL expiry time in seconds. If this parameter is not used, no URL signing is done.
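    For example, to receive signed URLs that remain valid for one hour, you would set:

    ```json
    {
      "sign_urls_for": 3600
    }
    ```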

  • session_token

    string

    The session token to use for the S3 store. This is only used if the credentials are from an IAM user with the sts:AssumeRole permission.

Demos

  • Encode video, extract thumbnails and store on S3
  • Amazon S3 storage service
