FAQ

Here are our answers to frequently asked questions. Didn't find the answer you were looking for? Ask a human.

  • An end-user reported a CONNECTION_ERROR, what's that about?

    The most common cause for a CONNECTION_ERROR is network trouble, such as an unreliable WiFi connection.

    Network connections can be fragile. Sending large files leaves a user exposed longer to these connections, which increases the likelihood of an interruption.

    The only sure-fire way to combat this is resumable file uploads. We are making good progress with our tus.io initiative to support resumable uploads in Transloadit, but we still have some work to do. If your app is particularly susceptible to poor connections (e.g. it targets clubbing crowds, serves rescue workers in desolate areas, or only works with humongous video files), a workaround is possible until we offer tus.

    One workaround is implementing Fine Uploader, which uploads directly to S3 in a resumable way. Transloadit can then import from there. Another recommendation is to enable wait: false if you use the jQuery SDK. That way, as soon as the upload is done, the user is on their way. The window in which flaky connections can impact Assemblies with a CONNECTION_ERROR is smaller than when the user also needs to wait on the encoding time.

  • Are the UUIDs that Transloadit uses for files and Assembly IDs secure?

    Transloadit uses UUIDv4 to generate these IDs randomly. Guessing, or generating, a UUID that matches one of ours is as probable as generating a collision. This is so improbable that it is not considered a viable attack vector.

    Since we keep around five million Assemblies in active storage at any given time, the chance of a collision is admittedly five million times higher. That said, since we rate-limit to 250 operations per minute, it would still take machines longer than mankind has existed on earth to generate enough UUIDs to have a 50% probability that one of them matches a UUID that Transloadit has once generated. We deem this far from a viable attack vector.

    For files, the window gets even smaller, as we remove them after 24 hours. A few reasons why we choose to do this are outlined here.

    Beyond guessing file or Assembly URLs, it is of course a concern that these addresses could leak somehow. We consider an Assembly ID and file URL private. They are a secret shared between Transloadit, our customer, and, depending on the integration, the specific end-user for whom the customer is supplying the files and running the Assembly.

    Communication between these parties happens over HTTPS, for which we have an A+ grade on SSL Labs across the board. If HTTPS is used for every request involved in the integration between Transloadit and the end-user, the URLs to Assemblies and files cannot leak beyond these trusted parties with any likelihood that would make this a viable attack vector.

    Then there is Transloadit itself to look at as a trusted party. Our policy is that only trusted core team members have access to these files, for debugging purposes. We receive millions of files every day and they are just UUIDs to us until a customer asks us to take a closer look.

    We run all our processes as non-privileged users, injecting secrets, so if an attacker possessed these secrets, that means they somehow gained root access to our machines. In that case, encryption of the file buckets would not suffice: with the acquired credentials giving full access to the bucket, the attacker almost certainly also has access to our decryption keys, just as would be the case if Amazon itself were compromised. Luckily, both Amazon and Transloadit have a very high focus on keeping our systems secure. But it is true that anyone who gives you a 100% security guarantee here does, in fact, not quite understand security, and you would be wise to steer clear.

  • Are there any discounts?
    • Uploading and importing of files is totally free.
    • Exporting conversion results (to S3, SFTP, etc.) is free.
    • Audio encoding is discounted by 75%.
    • Extracting video thumbnails is discounted by 90%.
    • Exporting uploaded files is discounted by 90%.
    • Gain a discount of up to 40% for inviting your friends.

    Here are some calculation examples for your convenience.

  • Can I create an upload form that allows users to upload files of any type, and then process them based on their filetype?

    Yes, this is possible because all Transloadit robots ignore any files they cannot handle. Therefore, you can set up two Steps with different robots that have use: ":original" specified, and the files will be automatically routed appropriately based on filetype. For example, if the uploaded file is an image, it can be resized and stored. If it is a video, it can be video encoded and have thumbnails extracted and stored. If it is a Word document or spreadsheet it can just be stored, etc.
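    A minimal Template sketch of this routing (the Step names, dimensions, preset, and export credentials are illustrative placeholders):

```json
{
  "steps": {
    "resized_image": {
      "robot": "/image/resize",
      "use": ":original",
      "width": 800,
      "height": 600
    },
    "encoded_video": {
      "robot": "/video/encode",
      "use": ":original",
      "preset": "ipad-high"
    },
    "stored": {
      "robot": "/s3/store",
      "use": [":original", "resized_image", "encoded_video"],
      "key": "YOUR_AWS_KEY",
      "secret": "YOUR_AWS_SECRET",
      "bucket": "YOUR_BUCKET"
    }
  }
}
```

    Because each robot ignores files it cannot handle, an uploaded image only produces a "resized_image" result and a video only produces an "encoded_video" result, while anything else (documents, spreadsheets) is passed through to storage via ":original".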

  • Can I have my own upload progress bar?

    Yes, of course! You can use our jQuery SDK, which fires events for various stages of an upload and reports the bytes received and expected. This allows you to build very smooth progress bars and even display the upload speed. You can change the CSS and behavior of the progress bar as you see fit.

    Also, check out our community projects. There you can find the Transloadit Bootstrap plugin, which uses Twitter Bootstrap progress bars and a more customized layout.

  • Can I whitelist Transloadit's IPs in my firewall?

    Our platform is highly volatile in the sense that we'll have 10 servers online today that will be gone tomorrow. Trying to keep your firewalls up to date with this pace is asking for dropped connections.

    We don't funnel outgoing connections (e.g. /sftp/store, Notifications, or /http/import) through one point, for performance reasons and to avoid a single point of failure. The trade-off is that our outgoing IPs change rapidly.

    The 'best' we can do is give you Amazon us-east ranges, but obviously you will be whitelisting a lot more than you bargained for. On the other hand, you'll still rule out 99% of the internet, so for some less critical use cases it could be viable. We'll list them just in case (updated: 2016-07-22 22:36:08):

    • 46.51.128.0/18
    • 46.51.192.0/20
    • 46.137.0.0/17
    • 46.137.128.0/18
    • 52.16.0.0/15
    • 52.18.0.0/15
    • 52.30.0.0/15
    • 52.48.0.0/14
    • 52.95.244.0/24
    • 52.95.255.64/28
    • 52.208.0.0/13
    • 54.72.0.0/15
    • 54.74.0.0/15
    • 54.76.0.0/15
    • 54.78.0.0/16
    • 54.154.0.0/16
    • 54.155.0.0/16
    • 54.170.0.0/15
    • 54.194.0.0/15
    • 54.216.0.0/15
    • 54.220.0.0/16
    • 54.228.0.0/16
    • 54.229.0.0/16
    • 54.246.0.0/16
    • 54.247.0.0/16
    • 79.125.0.0/17
    • 176.34.64.0/18
    • 176.34.128.0/17
    • 185.48.120.0/22

    An up-to-date Amazon IP list is also available; that page also lists a JSON variant for automation.

    A better solution, if security is paramount, is to set up a server outside of your trusted zone that we'll push updates to. Machines inside your trusted zone could then pull updates from this machine using rsync, a database, or ZeroMQ (collect Notifications and have your trusted zone eat through this queue). For a near-real-time approach, a program like HAProxy (directly forwarding traffic into your trusted zone), direct routing, or SSH tunnels could work. This way you only have to deal with a limited set of known IPs. Additionally, this mitigates the risk that, should our machines ever get compromised, there would be a whitelisted connection straight into your trusted zone.

    If this seems like too much hassle, we recommend creating an S3 bucket and giving us append-only access. You can then safely pull the resulting files from inside your DMZ.

  • Can you delete temporary files sooner?

    We'd rather not, for three reasons:

    1. Some of our customers use the temporary files to import them again in a new Assembly to do more encodings. Others use them to show to a moderation team in their app, and only once the team approves them do they persist the files in their own S3 buckets. Even though that goes against our recommendation, changing this could potentially break existing apps.

    2. If an Assembly crashes and it is auto-replayed, it imports the uploaded files from our temporary result bucket. If they get deleted after every Assembly or Assembly Step automatically, replays will not work.

    3. If people submit support tickets to have us look into an encoding issue, we need a way to retrieve the uploaded / encoded files for debugging. 24 hours is already a short time frame for this.

    In any case, the temporary filenames are hashes, so if you don't expose the locations publicly, it will take much longer to correctly guess a filename than the file will exist.

  • Can you help me understand the pricing?

    It would be very simple if we could charge per minute and leave it at that. However, Transloadit can handle any kind of file, not just those that have a duration.

    A fixed $ per GB could also be nice but that would leave some bots overpriced in our opinion. For that reason, we offer discounts that vary per robot.

    The discounts work by ignoring a percentage of your usage.

    Suppose you had a 1 GB video file and wanted to extract 10 thumbnails, and each thumbnail would be a 1MB JPEG. What would the usage be in this case?

    It's 1 GB + (10 * 1MB) = 1034MB. Our thumbnail bot has a 90% discount, so we only charge 10%. This means you will be billed for 103.4MB. Or: 0.1 GB * the price associated with your plan.
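    The arithmetic above can be sketched in a few lines of JavaScript (the function name is ours; only the 90% thumbnail discount figure comes from this FAQ):

```javascript
// Billed usage = (input size + total output size), minus the robot's discount.
// Sizes are in MB; 1 GB = 1024 MB, matching the example above.
function billedUsageMb(inputMb, outputMb, discountPercent) {
  const totalMb = inputMb + outputMb;
  return (totalMb * (100 - discountPercent)) / 100;
}

// 1 GB video in, 10 thumbnails of 1 MB each out, at the 90% thumbnail discount:
console.log(billedUsageMb(1024, 10 * 1, 90)); // → 103.4
```

    Multiply the result by your plan's per-GB price to get the actual charge.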

    By using plans, you can commit to a monthly usage, for which we can offer a lower GB pricing. You will still be able to process files beyond your plan limit, but we can't offer the same discount for that. We will mail you when you are close to your plan's limit, so you can always find the most economical spot for your usage.

    There are more calculation examples and a recommendation tool on the pricing page.

  • Do Templates have mandatory parameters?

    Every Step must have a robot parameter that defines the robot's name and a use parameter that defines which Steps are used as input. The exceptions to this are Steps that use one of our import robots; for them, the use parameter is not mandatory.

    We also support a result parameter that controls if the file results of the particular Step should be returned in the Assembly result JSON. If set to true, this Step will occur in the Assembly's result JSON with a temporary URL referencing the result file(s). If you set this to false, this Step will not occur in the Assembly's result JSON, which can be useful if you want to keep the JSON small.

  • Do you offer an audio or video player?

    We don't offer any Transloadit-built players, but can wholeheartedly recommend using an open source player such as MediaElement.js for that. There are plenty of open source / free to use alternatives that have matured in ways beyond what a single company (like us) could produce.

  • Do you offer custom plans?

    Yes!

    If you would like a high-volume plan, a custom contract or if you have a special performance requirement, then we can definitely work with you to figure out a solution that will suit your needs.

    In any case, please get in touch!

  • How are my Amazon S3 credentials protected?

    If you want us to store files in your S3 bucket, it is recommended to save the credentials in a Template in your account. We keep this Template encrypted in our database.

    The keys needed to decrypt these are injected into process memory by another system user. This means that if someone was able to exploit the user under which we run our API or website processes, they could still not access the keys. If they tried to change our code to display or send the credentials, they would have to restart the service (not permitted under that user) and access the key files (also not permitted).

    If our servers are rooted, it is a different story. This is why we use firewalls, use protected SSH keys, and limit our sudo, but as any expert will tell you, 100% security is a myth and it is better to prepare for the worst.

    This is why we recommend creating IAM policies that only have Put and List permissions on your buckets (for an up-to-date and precise list of the required permissions, please check the S3 Store documentation), and letting Transloadit use those for writing only. So if your credentials were ever stolen, and the criminals managed to decrypt them as well, they would still only be able to add more files to this particular bucket until we notice and intervene.
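    A sketch of such a write-only IAM policy (the action names here are illustrative; take the authoritative list from the S3 Store documentation):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::YOUR_BUCKET/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::YOUR_BUCKET"
    }
  ]
}
```

    Note the absence of object Get and Delete permissions: stolen credentials could add files to the bucket, but not read or destroy what is already there.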

    While, as said, 100% security is a myth, our security philosophy is to make it as hard as possible for anyone to gain access to your credentials and the keys necessary to decrypt them, and, if they do manage to acquire them, to make them as useless as possible.

  • How can I filter which files the user is able to select for the upload?

    Our jQuery SDK does not feature file selection methods at the moment. However, you can use the browser's built-in features to limit selectable files on the client-side. See this StackOverflow discussion to learn how that is done.

    Even if you have already limited files on the client-side, you should also limit them on the back-end. For this, you can utilize Transloadit's file filtering capabilities.
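    A sketch of such a back-end filter Step, assuming the /file/filter robot's documented accepts and error_on_decline parameters (the Step name and regex are illustrative):

```json
{
  "steps": {
    "only_images": {
      "robot": "/file/filter",
      "use": ":original",
      "accepts": [["${file.mime}", "regex", "image"]],
      "error_on_decline": true
    }
  }
}
```

    With error_on_decline set to false instead, non-matching files would be silently dropped rather than failing the Assembly.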

  • How can I limit the output duration of my videos?

    You can provide your own FFmpeg parameters to /video/encode steps. For example, to limit videos to four minutes, use FFmpeg's -t parameter, like in this example:

    ffmpeg: {
      t: 240
    }
    
  • How can I limit the size of files uploaded by my users?

    Use the max_size parameter. For details, see the authentication page. The file size is checked as soon as the upload is started and if it exceeds the maximum size, the entire upload process is canceled – even if it contains files that do not exceed the max_size limitation.

    If you want to just ignore the files that exceed a certain size, but process all others, then please use our file filtering capabilities.

  • How can I test notifications in a dev environment?

    Transloadit will contact the specified notify_url when an Assembly is finished. If your development environment is behind a firewall, you will need to use a dynamic DNS service and forward ports to your router so that we can reach your computer.

    If your workspace has multiple developers, each one needs to supply a different notify_url. To overwrite the notify_url specified in your Template, use a hidden input field named params in your form to pass a new value.

    For more details, see the bottom of Passing variables into a Template.
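    For example, such a hidden params field could look like this (the URL is a placeholder for each developer's own machine):

```html
<input
  type="hidden"
  name="params"
  value='{"notify_url": "http://my-dev-box.example.com:8080/notify"}'
/>
```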

    Another solution is using the community developed node-transloadit-development_notify_url_proxy. This runs on your local machine and polls a (publicly available) Assembly Status every second. When the Assembly completes, it sends a Notification to a configured notifyUrl, which could for instance be http://192.168.1.33:8080.

  • How can I track uploads for a specific user?

    Transloadit does not offer an explicit means to keep track of which of your users submitted a specific file. However, we offer something much more flexible than that.

    We allow you to add custom data to your Assemblies in the form of fields. For example, you can add a hidden field user_id to your form, populate its value attribute accordingly, and then set the jQuery SDK's fields parameter to true. This will send all your form fields to Transloadit, including the hidden user_id field.

    This field is now available in your Assembly as ${fields.user_id} and will also be appended to a fields array in the JSON response for you to save in your database.

    You can read more about custom fields in Assemblies here.
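    A minimal form sketch for this (the id and value are illustrative; remember to also set fields: true in the jQuery SDK options):

```html
<form id="upload-form" action="/uploads" enctype="multipart/form-data" method="POST">
  <input type="hidden" name="user_id" value="42" />
  <input type="file" name="my_file" multiple="multiple" />
</form>
```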

  • How do I enable multi-file upload?

    Transloadit supports several ways for users to upload multiple files at once.

    HTML5

    The easiest way to allow users to select multiple files is to use an input field like this:

    <input type="file" name="my_file" multiple="multiple" />
    

    This will allow users with an HTML5-capable browser to select multiple files in the file selection dialog box.

    Supported Browsers:

    • Firefox >= 3.6
    • Google Chrome >= 2
    • Safari >= 4

    For Opera, use their Web Forms 2.0 support, which has been available since 2006:

    <input type="file" min="1" max="9999" />
    

    JavaScript

    For non-HTML5 browsers, you can add a new file input field whenever the user has selected a file. You can follow any of the available tutorials, as there are no Transloadit-specific steps involved. Note that these solutions will work, but will not allow users to select multiple files in a single file upload dialog.

    Flash

    You can use any existing Flash uploader with Transloadit.

  • How long do you keep temporary files around?

    You may have noticed that Transloadit also uses s3.amazonaws.com/tmp.transloadit.com locations for some files. This is to pass around files between uploading and encoding machines, and for debugging.

    We delete these files automatically after 24 hours, so please do not rely on them. Instead, add an export robot to your Templates so that files can be stored indefinitely in a place of your choosing (S3, SFTP, etc.) that you own.

  • How would I validate that an uploaded image is actually from the correct user?

    In short: use Signature Authentication.

    If you run a website and control its server-side code, you can safely store your Transloadit secret there. Your visitors are logged in with your server and will want to upload something. Your server, knowing the upload is for a particular logged-in user, can tag the upload and generate a signature of all these parameters, using the secret that only it and Transloadit know.

    Now, when the files arrive at us, we also create a signature of the parameters using the same secret. If the two don't match, and you have set the option in your account that signatures are required, we reject the upload.

    This way you can be sure that:

    1. Uploads only work for logged in users
    2. Uploads are tagged with user information, and this cannot be forged by users, as they don't have the secret needed to forge the correct signature for those parameters.
  • My Assembly Step is not producing any result

    If one of your Steps does not produce a result, please double check the following:

    • Is your Step's robot able to deal with the file you are providing it with? Video-related robots, for example, will ignore all files except videos.
    • Does the Step referenced in your Step's use parameter produce any result? Are you sure that your Step actually has some input?
    • If the result of your Step is passed into a subsequent Step, it will not produce a result by default. You can add result: true to your Step to force the creation of a result in this case.

  • What about hosting my files?

    We currently do not offer hosting for your files. Accordingly, we will not charge you for bandwidth or storage for your files.

    If you would like to store your files for longer than 24 hours, we advise you to use a service such as Amazon S3 or Rackspace Cloud Files, which will then send you a separate invoice every month. We do offer ways to automatically export your files to these services or to your own FTP servers.

  • What do I get for 1 GB?

    All of our conversion features track the size of the input file and the size of the output file and count them towards your usage.

    Say you convert a 0.8 MB image and the resulting image is 0.2 MB - that is 1 MB together. When you repeat this 1024 times, you will have used 1 GB.
    If you also converted a 0.8 MB video into a 0.2 MB one and did this 1024 times, that would be another GB.

    The Startup Plan, for example, includes 7 GB for $19. That means you could do over 7,000 such image and video conversions per month for just $19.

    If you need to do more conversions in a certain month, then you do not need to upgrade your plan right away. Instead, you will just pay a small premium in that particular month.

  • What do you mean by 'reserved machines'?

    First off, we will handle any traffic that you send us. However, if you send us gigantic amounts simultaneously and expect everything to be processed at the same time as well, then you will need to reserve some capacity. You can do this by having more reserved machines.

    The more machines you have reserved, the more concurrent tasks we will be able to run for you before queueing the next ones, regardless of what other customers are throwing at us. Queued items might still be processed swiftly because of our large base capacity. However, if your use-case demands high traffic with a real-time feel to it, we recommend that you do not leave things to chance and reserve more capacity.

  • What formats and codecs does Transloadit support?

    We support hundreds of audio / video / image formats. For details, see our supported formats and codecs page.

  • What happens if my plan limit is reached?

    If you did not explicitly set a billing limit in your account, you will just be charged overage. That means you will pay a slightly higher dollar per gigabyte price than your plan includes. You will also receive warnings by email, which will allow you to upgrade to more cost effective plans when that is possible.

  • What if my users upload video files even though I only use image resizing?

    Our robots are smart enough to determine whether they can handle incoming files, and will ignore any files they cannot. So if a user uploads videos when only images are expected, the /image/resize robot will not process them.

  • What is a Template?

    A Template contains the file processing instructions in JSON. It will be safely encrypted after saving. Templates are the recommended way to integrate with Transloadit.

    Please keep in mind that we store files only for 24 hours after they were created. To persist your files, please use one of our file export robots.

  • Which plan do you recommend?

    That depends on the amount of data you expect to use. In some cases, it will be more cost-effective to go with a lighter plan and pay for the additional gigabytes, while in other cases, choosing a more robust plan could be more advantageous to you. You can monitor your monthly data usage in your account and switch to a different plan on the last day of each month. Changes made to plans are effective immediately.

  • Why and how do you charge by GB?

    What happens with volume calculation is that we add up each robot's input and output. Depending on the number of Steps you have and how you chain them together (using an optimized Step vs. :original as input for all other Steps can greatly reduce costs), you could be charged for much more volume than just the input size.

    We found this to be the only sensible way to support any kind of file and all possible combinations and workflows, while at the same time not running out of business. That said, we do provide discounts (some bots are either free, or only every tenth byte is counted) in cases where this pricing model would otherwise become too expensive.

  • Will there be any surprises after I sign up?

    No!

    • Change or cancel your plan at any time.
    • Changes made to your plan are effective immediately.
    • There are no hidden costs. What you see is what you get.
    • You can also configure a monthly spending limit.
    • You are billed monthly and there is proration for the first month.

  • Will there ever be an encoding queue?

    In 2013, it happened three times that we had a few minutes of queue time for our live queue. We were receiving big volumes and it took a few minutes for new machines to come online and help drain the queue more quickly.

    We have already:

    • separated batch/import jobs from live/upload jobs
    • increased our base capacity to 240 cores
    • parallelized upscaling
    • implemented analysis of uploads before they enter the encoding queue, so we can launch additional machines ahead of time
    • brought down launch times to five minutes
    • raised our instance ceiling at Amazon, so we can have 1500 heavy machines online (raised over the years, as our requirements grew, from the default of 20).

    We are planning to do even more to give a real-time feeling to our live queue. The truth, however, is that it is still a queue and we cannot 100% guarantee that a few minutes wait time won't ever happen. We strive for this, but having a base capacity big enough to handle any spike instantly would not be economical. For us, or anyone else.

    We realize this could mean a bad user experience if a visitor of yours is just trying to upload e.g. an avatar. This is why we recommend setting wait: false in your jQuery integration (if that is what you use, of course). wait: false makes sure media processing is handled asynchronously, so two minutes of queue time won't block the user experience. As soon as the files are uploaded to us, the user is on their way. We notify you when the files are ready, and you notify your user.
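    A minimal integration sketch, assuming the jQuery SDK's documented wait and onSuccess options (the form id is hypothetical):

```javascript
$('#upload-form').transloadit({
  wait: false, // return control to the user as soon as the upload is done
  onSuccess: function (assembly) {
    // Encoding continues in the background; keep the Assembly ID so your
    // back-end can match the later Notification to this user's upload.
    console.log('Assembly ID:', assembly.assembly_id);
  },
});
```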

    Of course, we are not sure if this workflow is feasible for your use-case. At any rate, you can be assured that live queues are very rare, and we are always working on making them even rarer.

    One thing we're doing to prevent live queues, is keeping dozens of machines on stand-by. This provides reserved capacity for all of our customers, allowing them to burst with many simultaneous live jobs, before being trickled down into the batch queue. It's possible to purchase more of this reserved capacity through higher plans, allowing you to have more real-time jobs.

    More on this topic in reserved capacity.