Import files from Rackspace Cloud Files
🤖/cloudfiles/import imports whole directories of files from your Rackspace Cloud Files container.
Keep your credentials safe: since you need to provide credentials to this Robot, always use it together with Templates and/or Template Credentials, so that you never leak any secrets while transmitting your Assembly Instructions.
Note: Transloadit supports file sizes up to 200 GB. If you require a higher limit for your application, please get in touch.
Usage example
Import files from the `path/to/files` directory and its subdirectories:
```json
{
  "steps": {
    "imported": {
      "robot": "/cloudfiles/import",
      "credentials": "YOUR_CLOUDFILES_CREDENTIALS",
      "path": "path/to/files/",
      "recursive": true
    }
  }
}
```
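To import a single file rather than a whole directory, point `path` at that file (a minimal sketch; `images/avatar.jpg` is a placeholder path):

```json
{
  "steps": {
    "imported": {
      "robot": "/cloudfiles/import",
      "credentials": "YOUR_CLOUDFILES_CREDENTIALS",
      "path": "images/avatar.jpg"
    }
  }
}
```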
Parameters
- `ignore_errors` ⋅ Array of Strings / Boolean ⋅ default: `[]`
  Possible array members are `"meta"` and `"import"`. You might see an error when trying to extract metadata from your imported files; this happens, for example, for files with a size of zero bytes. Including `"meta"` in the array will cause the Robot to not stop the import (and the entire Assembly) when that happens. Including `"import"` in the array will ensure the Robot does not cease to function on any import errors either. To keep backwards compatibility, setting this parameter to `true` will set it to `["meta", "import"]` internally.
- `credentials` ⋅ String ⋅ required
  Please create your associated Template Credentials in your Transloadit account and use the name of your Template Credentials as this parameter's value. They will contain the values for your Cloud Files Container, User, Key, Account type and Data center.
  While we recommend using Template Credentials at all times, some use cases demand dynamic credentials for which Template Credentials are too unwieldy because of their static nature. If you have this requirement, feel free to use the following parameters instead: `"account_type"` ("us" or "uk"), `"data_center"` ("dfw" for Dallas or "ord" for Chicago, for example), `"user"`, `"key"`, `"container"`. A sketch using these dynamic credential parameters follows this list.
- `path` ⋅ String / Array of Strings ⋅ required
  The path in your container to the specific file or directory. If the path points to a file, only this file will be imported. For example: `images/avatar.jpg`. If it points to a directory, indicated by a trailing slash (`/`), then all files that are direct descendants of this directory will be imported. For example: `images/`. Directories are not imported recursively. If you want to import files from subdirectories and sub-subdirectories, enable the `recursive` parameter. You can also use an array of path strings here to import multiple paths in the same Robot's Step.
- `recursive` ⋅ Boolean ⋅ default: `false`
  Setting this to `true` will enable importing files from subdirectories and sub-subdirectories (etc.) of the given path. Please use the pagination parameters `page_number` and `files_per_page` wisely here; a paginated sketch follows this list.
- `page_number` ⋅ Integer ⋅ default: `1`
  The pagination page number. For now, this only works when `recursive` is set to `true`, in order to not break backwards compatibility in non-recursive imports. When doing big imports, make sure no files are added or removed by other scripts within your path, otherwise you might get unexpected results with the pagination.
- `files_per_page` ⋅ Integer ⋅ default: `1000`
  The pagination page size. For now, this only works when `recursive` is set to `true`, in order to not break backwards compatibility in non-recursive imports.
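As referenced above, here is a minimal sketch of a Step that passes dynamic credentials inline instead of naming Template Credentials. All values are placeholders for your own Cloud Files account details:

```json
{
  "steps": {
    "imported": {
      "robot": "/cloudfiles/import",
      "account_type": "us",
      "data_center": "dfw",
      "user": "YOUR_CLOUDFILES_USER",
      "key": "YOUR_CLOUDFILES_KEY",
      "container": "YOUR_CONTAINER",
      "path": "path/to/files/"
    }
  }
}
```

And a sketch of a recursive, paginated import that also tolerates metadata and import errors. The path and page values are placeholders; to fetch the next batch of files, run another Assembly with `page_number` incremented:

```json
{
  "steps": {
    "imported": {
      "robot": "/cloudfiles/import",
      "credentials": "YOUR_CLOUDFILES_CREDENTIALS",
      "path": "path/to/files/",
      "recursive": true,
      "page_number": 1,
      "files_per_page": 1000,
      "ignore_errors": ["meta", "import"]
    }
  }
}
```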