
Import files from Google Storage
🤖/google/import imports whole directories of files from Google Storage.
Note: Keep your credentials safe.
Usage example
Import files from the path/to/files directory and its subdirectories:
{
  "steps": {
    "imported": {
      "robot": "/google/import",
      "credentials": "YOUR_GOOGLE_CREDENTIALS",
      "path": "path/to/files/",
      "recursive": true
    }
  }
}

Parameters
output_meta (Record<string, boolean> | boolean | Array<string>)
Allows you to specify a set of metadata that is more expensive on CPU power to calculate, and thus is disabled by default to keep your Assemblies processing fast.
For images, you can add "has_transparency": true in this object to extract whether the image contains transparent parts, and "dominant_colors": true to extract an array of hexadecimal color codes from the image.
For videos, you can add the "colorspace": true parameter to extract the colorspace of the output video.
For audio, you can add "mean_volume": true to get a single value representing the mean average volume of the audio file.
You can also set this parameter to false to skip metadata extraction and speed up transcoding.
result (boolean, default: false)
Whether the results of this Step should be present in the Assembly Status JSON.
queue (batch)
Setting the queue to "batch" manually downgrades the priority of jobs for this Step, to avoid consuming Priority job slots for jobs that don't need zero queue waiting times.
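A non-time-critical import could be sent to the batch queue like this (a sketch; the Step name and placeholders are illustrative):

{
  "steps": {
    "imported": {
      "robot": "/google/import",
      "credentials": "YOUR_GOOGLE_CREDENTIALS",
      "path": "path/to/files/",
      "queue": "batch"
    }
  }
}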
force_accept (boolean, default: false)
Force a Robot to accept a file type it would have ignored. By default, Robots ignore files they are not familiar with. 🤖/video/encode, for example, will happily ignore input images.
With the force_accept parameter set to true, you can force Robots to accept all files thrown at them. This will typically lead to errors and should only be used for debugging or combatting edge cases.

force_name (string | Array<string> | null, default: null)
Custom name for the imported file(s). By default file names are derived from the source.
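As a sketch, force_name could give a single imported file a fixed name (the path and name shown are illustrative):

{
  "steps": {
    "imported": {
      "robot": "/google/import",
      "credentials": "YOUR_GOOGLE_CREDENTIALS",
      "path": "images/avatar.jpg",
      "force_name": "profile-picture.jpg"
    }
  }
}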
credentials (string)
Create a new Google service account. Set its role to "Storage Object Creator". Choose "JSON" for the key file format and download it to your computer. You will need to upload this file when creating your Template Credentials.
Go back to your Google credentials project and enable the "Google Cloud Storage JSON API" for it. Wait around ten minutes for the action to propagate through the Google network. Grab the project ID from the dropdown menu in the header bar on the Google site. You will also need it later on.
Now you can set up the storage.objects.create and storage.objects.delete permissions. The latter is optional and only required if you intend to overwrite existing paths.
To do this from the Google Cloud console, navigate to "IAM & Admin" and select "Roles". From here, click "Create Role", enter a name, set the role launch stage to General availability, and set the permissions stated above.
Next, go to Storage browser and select the ellipsis on your bucket to edit bucket permissions. From here, select "Add Member", enter your service account as a new member, and select your newly created role.
Then, create your associated Template Credentials in your Transloadit account and use the name of your Template Credentials as this parameter's value.
path (string | Array<string>, required)
The path in your bucket to the specific file or directory. If the path points to a file, only this file will be imported. For example: images/avatar.jpg.
If it points to a directory, indicated by a trailing slash (/), then all files that are direct descendants of this directory will be imported. For example: images/. Directories are not imported recursively. If you want to import files from subdirectories and sub-subdirectories, enable the recursive parameter.
If you want to import all files from the root directory, please use / as the value here. In this case, make sure all your objects belong to a path. If you have objects in the root of your bucket that aren't prefixed with /, you'll receive a 404 GOOGLE_IMPORT_NOT_FOUND error.
You can also use an array of path strings here to import multiple paths in the same Robot's Step.
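Importing two directories in one Step might look like this (a sketch; the paths shown are illustrative):

{
  "steps": {
    "imported": {
      "robot": "/google/import",
      "credentials": "YOUR_GOOGLE_CREDENTIALS",
      "path": ["images/", "videos/"]
    }
  }
}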
recursive (boolean, default: false)
Setting this to true will enable importing files from subdirectories and sub-subdirectories (etc.) of the given path. Please use the pagination parameters start_file_name and files_per_page wisely here.

next_page_token (string, default: "")
A string token used for pagination. The returned files of one paginated call have the next page token inside of their meta data, which needs to be used for the subsequent paging call.
files_per_page (string | number, default: 1000)
The pagination page size. This only works when recursive is true for now, in order to not break backwards compatibility in non-recursive imports.
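A paginated recursive import might look like the following sketch, where the next_page_token value would come from the meta data returned by the previous call (all values shown are illustrative):

{
  "steps": {
    "imported": {
      "robot": "/google/import",
      "credentials": "YOUR_GOOGLE_CREDENTIALS",
      "path": "path/to/files/",
      "recursive": true,
      "files_per_page": 500,
      "next_page_token": "TOKEN_FROM_PREVIOUS_PAGE"
    }
  }
}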