Batch export files to Amazon S3 with cURL and AWS CLI

In many development workflows, exporting files directly to cloud storage like Amazon S3 is a common requirement. Automating this file export process reduces errors and accelerates your workflow. In this guide, we explain how to batch export files to Amazon S3 by generating pre-signed URLs with the AWS CLI and uploading via cURL in a Bash script, providing an open source approach.
Prerequisites
Before you begin, you need:
- AWS CLI version 2 installed and configured with the appropriate credentials.
- cURL installed on your system.
- A Bash shell environment.
- An existing S3 bucket with the necessary permissions.
- An IAM user or role that can perform S3 actions.
Install AWS CLI v2 (Linux):
# For x86_64 systems
curl -fsSL --retry 3 "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
# For arm64 systems
curl -fsSL --retry 3 "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o awscliv2.zip
unzip awscliv2.zip
sudo ./aws/install
rm -rf awscliv2.zip aws/ # Clean up
aws --version # Should output aws-cli/2.x.x ...
Configure AWS CLI:
aws configure
# Enter:
# - AWS Access Key ID [None]: your_access_key
# - AWS Secret Access Key [None]: your_secret_key
# - Default region name [None]: your-region (e.g., us-east-1)
# - Default output format [None]: json (or leave blank)
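To confirm the CLI can actually reach AWS with the configured credentials, you can run a quick identity check before going further:
aws sts get-caller-identity # Prints the UserId, Account, and Arn for the active credentials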
Understand the components
Amazon S3 is a scalable object store. With pre-signed URLs you can grant time-limited permission to upload (PUT) or download (GET) an object without exposing your AWS credentials directly in the transfer command. Pairing the AWS CLI for URL generation with cURL for the actual file transfer yields a lightweight, open source workflow suitable for scripting.
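The whole pattern fits in two commands. Here is a condensed sketch, assuming a bucket named your-bucket-name and a local file report.pdf (both placeholders); the following sections break each step down:
# 1. Ask AWS for a time-limited upload URL (credentials stay on your machine)
url=$(aws s3 presign s3://your-bucket-name/report.pdf --expires-in 900)
# 2. Hand the file to cURL, which PUTs it straight to S3 using that URL
curl -fsSL -T report.pdf "$url"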
Provide the required IAM permissions
The AWS identity (IAM user or role) used by the AWS CLI needs the minimal permissions required to generate pre-signed URLs and upload objects to the target bucket. Attach a policy like this to the relevant IAM identity:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}
Note: the s3:GetObject permission is only needed if you intend to generate pre-signed URLs for downloading; s3:PutObject alone is required for the upload scenario described here. Replace your-bucket-name with your actual bucket name.
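If you manage permissions from the command line, you could attach the policy above as an inline policy. This sketch assumes you saved it as s3-upload-policy.json and that your IAM user is called batch-exporter (both placeholders; adjust to your setup):
aws iam put-user-policy \
  --user-name batch-exporter \
  --policy-name AllowS3BatchUpload \
  --policy-document file://s3-upload-policy.json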
Generate pre-signed URLs with AWS CLI
Use the aws s3 presign command to create a URL valid for a specific duration (in seconds). This example creates a URL valid for 1 hour (3600 seconds):
aws s3 presign s3://your-bucket-name/object-key \
  --region your-region \
  --expires-in 3600
Replace your-bucket-name, object-key (the desired name of the file in S3), and your-region with your specific values. The command returns a unique URL with temporary authentication information embedded in its query string.
Note: The maximum expiration time for pre-signed URLs using AWS Signature Version 4 (the default) is 7 days (604,800 seconds).
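In a script, you will usually capture the URL in a shell variable rather than copy it by hand, for example:
url=$(aws s3 presign "s3://your-bucket-name/object-key" --region your-region --expires-in 3600)
echo "$url" # The signed URL, valid for the next hour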
Upload files with cURL
Use the cURL -T flag to specify the local file to upload. Adding --retry makes the upload more resilient to transient network issues. It's also crucial to set the correct Content-Type header so S3 and browsers handle the file correctly later.
# Determine the MIME type dynamically (works on Linux/macOS)
CONTENT_TYPE=$(file -b --mime-type localfile.txt)
curl -fsSL --retry 3 --retry-delay 2 \
  -T localfile.txt \
  -H "Content-Type: $CONTENT_TYPE" \
  "YOUR_PRESIGNED_URL_HERE"
Replace localfile.txt with your file path and YOUR_PRESIGNED_URL_HERE with the URL generated in the previous step.
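To confirm the object actually landed in the bucket, you can list it afterwards. Note that this check uses your normal AWS credentials and needs list access on the bucket, which is not part of the minimal upload-only policy shown earlier:
aws s3 ls s3://your-bucket-name/object-key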
Automate batch file exports
Uploading dozens of files manually is tedious. The script below loops through a specified directory, generates a pre-signed URL for each file, validates the file size (direct PUT uploads via pre-signed URLs max out at 5 GiB), and uploads it with cURL. It also includes checks to ensure the AWS CLI is available and credentials are working before starting the loop.
#!/usr/bin/env bash
set -euo pipefail # Exit on error, undefined variable, or pipe failure

BUCKET="your-bucket-name"
EXPIRE=3600                     # URL validity in seconds (1 hour)
FILES_DIR="/path/to/your/files" # Directory containing files to upload
REGION="your-region"            # Your S3 bucket region

# --- pre-flight checks ---

# Ensure the source directory exists
if [[ ! -d "$FILES_DIR" ]]; then
  echo "Error: Directory '$FILES_DIR' does not exist." >&2
  exit 1
fi

# Check if AWS CLI is installed
if ! command -v aws &>/dev/null; then
  echo "Error: AWS CLI command not found. Please install AWS CLI v2." >&2
  echo "See: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" >&2
  exit 1
fi

# Verify AWS credentials are configured and valid
if ! aws sts get-caller-identity --query Arn --output text &>/dev/null; then
  echo "Error: AWS credentials are not configured properly or are invalid." >&2
  echo "Please run 'aws configure' or check your environment variables/IAM role." >&2
  exit 1
else
  echo "AWS credentials verified for: $(aws sts get-caller-identity --query Arn --output text)"
fi

echo "Starting batch export from '$FILES_DIR' to bucket '$BUCKET' in region '$REGION'..."

# --- processing loop ---
for file in "$FILES_DIR"/*; do
  # Skip directories or other non-file types
  [[ -f "$file" ]] || continue

  filename=$(basename "$file")
  echo "Processing '$filename'..."

  # Check file size: S3 PUT limit via pre-signed URL is 5 GiB
  # stat -f%z works on macOS/BSD, stat -c%s works on Linux
  file_size=$(stat -f%z "$file" 2>/dev/null || stat -c%s "$file")
  if (( file_size > 5368709120 )); then
    echo "  Skipping '$filename': file size ($((file_size / 1024 / 1024)) MiB) exceeds the 5 GiB limit for a single PUT upload." >&2
    echo "  Consider using 'aws s3 cp' for automatic multipart upload." >&2
    continue
  fi

  # Generate pre-signed URL for the PUT request
  echo "  Generating pre-signed URL..."
  if ! url=$(aws s3 presign "s3://$BUCKET/$filename" --region "$REGION" --expires-in "$EXPIRE" 2>/dev/null); then
    echo "  Error: Failed to generate pre-signed URL for '$filename'. Check permissions and bucket/region settings." >&2
    continue # Skip to the next file
  fi

  # Determine Content-Type
  content_type=$(file -b --mime-type "$file")

  # Upload the file using cURL
  echo "  Uploading '$filename' (Content-Type: $content_type)..."
  if curl -fsSL --retry 3 --retry-delay 2 \
    -T "$file" \
    -H "Content-Type: $content_type" \
    "$url"; then
    echo "  Successfully uploaded '$filename'."
  else
    # cURL returns a non-zero exit code on failure when using -f
    echo "  Error: Failed to upload '$filename' using cURL." >&2
    # Consider adding more specific error handling or logging here
  fi
done

echo "Batch export completed."
Remember to replace your-bucket-name, /path/to/your/files, and your-region in the script. Make the script executable (chmod +x script_name.sh) before running it.
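For example, assuming you saved the script as batch_export.sh, you could run it while keeping a log of its output:
chmod +x batch_export.sh
./batch_export.sh 2>&1 | tee export.log # Show progress and keep a copy in export.log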
Apply security best practices
When working with cloud resources, security is paramount:
- Prefer IAM Roles: When running scripts on EC2 instances or other AWS services, use IAM roles for temporary credentials instead of long-lived access keys.
- Least Privilege: Grant only the s3:PutObject permission needed for this task, scoped to the specific bucket.
- Short URL Lifetimes: Keep the --expires-in value for pre-signed URLs as short as practically possible for the upload duration.
- Encryption: Enable server-side encryption (SSE-S3, SSE-KMS, or SSE-C) on your S3 bucket to protect data at rest; see the example after this list.
- Monitoring: Use AWS CloudTrail to log API calls (like presign) and S3 access logs to monitor bucket activity.
- VPC Endpoints: If your script runs within a VPC, use S3 VPC endpoints to keep traffic within the AWS network, avoiding the public internet.
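As a concrete example of the encryption point above, the following aws s3api call enables default SSE-S3 encryption on a bucket (replace your-bucket-name with your own):
aws s3api put-bucket-encryption \
  --bucket your-bucket-name \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'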
Troubleshoot common issues
- Authentication Failures (403 Forbidden from aws s3 presign): Confirm credentials are correct and configured using aws sts get-caller-identity. Check IAM user/role permissions.
- Access Denied (403 Forbidden during cURL upload): Verify the IAM policy allows s3:PutObject on arn:aws:s3:::your-bucket-name/*. Ensure the pre-signed URL hasn't expired. Check bucket policies or ACLs that might deny access.
- Network Timeouts (cURL errors): Increase --retry counts or add --connect-timeout/--max-time options to cURL if dealing with slow networks. Check network connectivity and firewalls.
- File Too Large (EntityTooLarge error or script skip): For files over 5 GiB, the single PUT operation used by this cURL method won't work. Use the AWS CLI's aws s3 cp command, which handles multipart uploads automatically for large files.
- Region Mismatch (AuthorizationHeaderMalformed or BadRequest): Ensure the --region specified in the aws s3 presign command matches the actual region of your S3 bucket; see the check after this list.
- Expired URL (AccessDenied or Request has expired): The pre-signed URL is only valid for the duration specified by --expires-in. Regenerate the URL if the upload takes longer than expected or is attempted after expiration.
- Wrong Content Type: If files don't behave as expected after download, double-check the Content-Type header set during the cURL upload. Ensure file --mime-type is giving the correct type.
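For the region mismatch case, one quick check is to ask S3 where the bucket actually lives (buckets in us-east-1 report a null LocationConstraint):
aws s3api get-bucket-location --bucket your-bucket-name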
Need to upload something bigger than 5 GiB? Let the AWS CLI handle the complexity of multipart chunking for you:
# AWS CLI handles multipart uploads automatically for large files
aws s3 cp /path/to/your/large_file.zip s3://your-bucket-name/
Conclusion
By combining the AWS CLI for pre-signed URL generation with cURL for uploads, you get a flexible, open source pipeline for batch-exporting files to Amazon S3 without embedding credentials directly in your transfer commands. This method is great for automation scripts and environments where installing larger SDKs isn't ideal.
For more complex workflows involving file processing before or after the file export to S3, consider a managed service. Our 🤖 /s3/store Robot, for example, wraps a similar pattern, allowing you to export results from other processing Steps directly to S3 within a single Assembly.
Happy coding!