Browser uploads to Cloudflare R2 with AWS SDK

Cloudflare R2 offers developers a cost-effective and performant object storage solution, compatible with the S3 API but without egress fees. A key capability for web applications is handling direct browser-based uploads, which can reduce server load, latency, and costs by avoiding the need to proxy files through your backend.
In this DevTip, we'll explore how to leverage the AWS SDK for JavaScript (v3) and Cloudflare Workers to enable secure file uploads directly from a user's browser to your Cloudflare R2 bucket.
Introduction to Cloudflare R2
Cloudflare R2 provides S3-compatible object storage integrated with Cloudflare's global network. Its compatibility with the AWS S3 API means you can use existing AWS SDKs and tools while benefiting from R2's pricing model, particularly the zero egress fees.
Using the AWS SDK for browser uploads
To interact with Cloudflare R2 from the browser securely, we'll use a backend (like a Node.js server or a Cloudflare Worker) to generate temporary, secure upload links called presigned URLs. The browser then uses these URLs to upload files directly to R2.
First, install the necessary AWS SDK v3 packages in your backend project:
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
Setting up presigned URLs for secure browser uploads
Presigned URLs grant temporary permission to perform a specific S3 action (like PutObject) on a specific object key, without exposing your secret R2 credentials to the browser.
Here's an example of a backend endpoint (e.g., using Node.js/Express or a Cloudflare Worker) that generates a presigned URL for uploading:
// backend/presigned-url-generator.js
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'
// Ensure environment variables are set:
// CLOUDFLARE_ACCOUNT_ID, R2_ACCESS_KEY_ID, R2_SECRET_ACCESS_KEY, R2_BUCKET_NAME
const R2 = new S3Client({
region: 'auto',
endpoint: `https://${process.env.CLOUDFLARE_ACCOUNT_ID}.r2.cloudflarestorage.com`,
credentials: {
accessKeyId: process.env.R2_ACCESS_KEY_ID, // Use R2_ACCESS_KEY_ID
secretAccessKey: process.env.R2_SECRET_ACCESS_KEY, // Use R2_SECRET_ACCESS_KEY
},
})
const BUCKET_NAME = process.env.R2_BUCKET_NAME
// Example function (adapt for your framework, e.g., Express route handler)
async function generateUploadUrl(req, res) {
// It's crucial to sanitize and validate filenames from user input
const unsafeFilename = req.query.filename
const contentType = req.query.contentType || 'application/octet-stream' // Default or get from query
if (!unsafeFilename) {
return res.status(400).json({ error: 'Filename query parameter is required' })
}
// Basic sanitization: replace potentially problematic characters
const safeFilename = unsafeFilename.replace(/[^a-zA-Z0-9._/-]/g, '_')
// Consider adding a unique prefix (e.g., user ID, timestamp) to prevent collisions
const key = `uploads/${Date.now()}-${safeFilename}`
try {
const command = new PutObjectCommand({
Bucket: BUCKET_NAME,
Key: key, // Use the sanitized and potentially prefixed key
ContentType: contentType, // Set ContentType for correct handling
// Note: R2 does not support per-object S3 ACLs; make objects public via a public bucket or a custom domain instead
})
// Generate the presigned URL, valid for 1 hour (3600 seconds)
const signedUrl = await getSignedUrl(R2, command, { expiresIn: 3600 })
res.json({ url: signedUrl, key: key }) // Return the URL and the final key
} catch (error) {
console.error('Error generating signed URL:', error)
res.status(500).json({ error: 'Failed to generate upload URL' })
}
}
// Example usage in an Express app:
// app.get('/api/generate-upload-url', generateUploadUrl);
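For reference, wiring this handler into a minimal Express app could look like the sketch below. It assumes you export generateUploadUrl from the module above and that the environment variables are provided by your host or a tool like dotenv:
// backend/server.js - minimal wiring sketch (assumes generateUploadUrl is exported from the module above)
import express from 'express'
import { generateUploadUrl } from './presigned-url-generator.js'
const app = express()
// Expose the presigned URL generator to the frontend
app.get('/api/generate-upload-url', generateUploadUrl)
app.listen(3000, () => {
  console.log('Presign server listening on port 3000')
})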
On the frontend, you fetch this presigned URL and then use it to upload the selected file directly to R2:
// frontend/uploader.js
async function uploadFileToR2(file) {
try {
// 1. Request a presigned URL from your backend
const response = await fetch(
// Pass filename and content type to backend
`/api/generate-upload-url?filename=${encodeURIComponent(file.name)}&contentType=${encodeURIComponent(file.type)}`,
)
if (!response.ok) {
const errorData = await response.json()
throw new Error(`Failed to get upload URL: ${errorData.error || response.statusText}`)
}
const { url, key } = await response.json() // Get URL and the final object key
console.log(`Received presigned URL for key: ${key}`)
// 2. Upload the file directly to R2 using the presigned URL
const uploadResponse = await fetch(url, {
method: 'PUT',
body: file,
headers: {
// Content-Type must match what was used to generate the presigned URL if specified
'Content-Type': file.type,
// You might not need 'Content-Length' as fetch often handles it,
// but some S3-compatible services might require it.
// 'Content-Length': file.size.toString(),
},
})
if (!uploadResponse.ok) {
// Attempt to get error details from R2 response
const errorText = await uploadResponse.text()
console.error('R2 Upload Error Response:', errorText)
throw new Error(`Upload failed: ${uploadResponse.statusText}`)
}
console.log(`File uploaded successfully! Object key: ${key}`)
return { success: true, key: key }
} catch (error) {
console.error('Error uploading file:', error)
return { success: false, error: error.message }
}
}
// Example usage with a file input element
document.getElementById('fileInput').addEventListener('change', async (event) => {
const file = event.target.files[0]
if (file) {
const uploadProgress = document.getElementById('uploadProgress')
uploadProgress.textContent = 'Uploading...'
const result = await uploadFileToR2(file)
if (result.success) {
uploadProgress.textContent = `Upload complete! Key: ${result.key}`
// Optionally display the file URL if the bucket is public or served via Worker
// e.g., `https://your-public-bucket-domain/${result.key}`
// or `https://your-worker-domain/${result.key}`
} else {
uploadProgress.textContent = `Upload failed: ${result.error}`
}
}
})
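Note that fetch does not expose upload progress events. If you want to show a progress bar, one option is to PUT to the same presigned URL with XMLHttpRequest instead. A minimal sketch:
// frontend/uploader-progress.js - illustrative sketch: PUT to a presigned URL with progress reporting
function putWithProgress(presignedUrl, file, onProgress) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest()
    xhr.open('PUT', presignedUrl)
    // Content-Type must match what the presigned URL was generated with
    xhr.setRequestHeader('Content-Type', file.type)
    xhr.upload.onprogress = (event) => {
      if (event.lengthComputable && onProgress) {
        onProgress(Math.round((event.loaded / event.total) * 100))
      }
    }
    xhr.onload = () => {
      if (xhr.status >= 200 && xhr.status < 300) resolve()
      else reject(new Error(`Upload failed with status ${xhr.status}`))
    }
    xhr.onerror = () => reject(new Error('Network error during upload'))
    xhr.send(file)
  })
}
// Example: putWithProgress(url, file, (pct) => { uploadProgress.textContent = `Uploading ${pct}%` })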
Handling large file uploads with multipart upload
For files larger than a few megabytes, using S3 multipart uploads is recommended. This breaks the file into smaller chunks, allowing for parallel uploads, retries of failed parts, and pausing/resuming uploads.
Implementing multipart uploads directly from the browser is more complex, often involving:
- Backend endpoint to initiate the multipart upload (CreateMultipartUploadCommand) and return an UploadId.
- Backend endpoint(s) to generate presigned URLs for each part (UploadPartCommand).
- Frontend logic to slice the file, request presigned URLs for parts, upload parts, and track progress (see the sketch after the backend code below).
- Backend endpoint to finalize the upload (CompleteMultipartUploadCommand) once all parts are uploaded.
Here's the backend logic using AWS SDK v3:
// backend/multipart-handler.js
import {
S3Client,
CreateMultipartUploadCommand,
UploadPartCommand,
CompleteMultipartUploadCommand,
AbortMultipartUploadCommand, // Important for cleanup
} from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'
// Assume R2 S3Client is configured as shown previously
// const R2 = new S3Client({...});
// const BUCKET_NAME = process.env.R2_BUCKET_NAME;
async function initiateMultipartUpload(key, contentType) {
const command = new CreateMultipartUploadCommand({
Bucket: BUCKET_NAME,
Key: key,
ContentType: contentType,
})
const response = await R2.send(command)
return response.UploadId // Return the UploadId needed for subsequent steps
}
async function getMultipartPresignedUrl(key, uploadId, partNumber) {
const command = new UploadPartCommand({
Bucket: BUCKET_NAME,
Key: key,
UploadId: uploadId,
PartNumber: partNumber,
})
// Generate presigned URL for uploading a specific part
const signedUrl = await getSignedUrl(R2, command, { expiresIn: 3600 }) // 1 hour expiry
return signedUrl
}
async function completeMultipartUpload(key, uploadId, parts) {
// 'parts' should be an array of { ETag: string, PartNumber: number }
// The ETag is returned by R2 in the header of a successful part upload
const command = new CompleteMultipartUploadCommand({
Bucket: BUCKET_NAME,
Key: key,
UploadId: uploadId,
MultipartUpload: {
Parts: parts.sort((a, b) => a.PartNumber - b.PartNumber), // Ensure parts are sorted
},
})
return await R2.send(command)
}
async function abortMultipartUpload(key, uploadId) {
const command = new AbortMultipartUploadCommand({
Bucket: BUCKET_NAME,
Key: key,
UploadId: uploadId,
})
return await R2.send(command)
}
// You would need API endpoints calling these functions
// e.g., POST /api/uploads/initiate, GET /api/uploads/:uploadId/part/:partNumber, POST /api/uploads/:uploadId/complete
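To sketch the frontend side, here's a simplified flow that assumes your backend exposes the example endpoints named above; the endpoint shapes, payloads, and the 10 MiB part size are illustrative, and a production version should also call the abort endpoint on failure and retry failed parts:
// frontend/multipart-uploader.js - simplified sketch; retries and abort handling omitted for brevity
const PART_SIZE = 10 * 1024 * 1024 // 10 MiB parts (S3-compatible APIs require at least 5 MiB per part, except the last)
async function uploadFileMultipart(file) {
  // 1. Initiate the multipart upload and receive the UploadId and object key
  const initRes = await fetch('/api/uploads/initiate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ filename: file.name, contentType: file.type }),
  })
  const { uploadId, key } = await initRes.json()
  // 2. Slice the file, request a presigned URL per part, upload it, and remember the ETag
  const parts = []
  const partCount = Math.ceil(file.size / PART_SIZE)
  for (let partNumber = 1; partNumber <= partCount; partNumber++) {
    const chunk = file.slice((partNumber - 1) * PART_SIZE, partNumber * PART_SIZE)
    const urlRes = await fetch(
      `/api/uploads/${encodeURIComponent(uploadId)}/part/${partNumber}?key=${encodeURIComponent(key)}`,
    )
    const { url } = await urlRes.json()
    const partRes = await fetch(url, { method: 'PUT', body: chunk })
    if (!partRes.ok) throw new Error(`Part ${partNumber} failed: ${partRes.statusText}`)
    parts.push({ ETag: partRes.headers.get('ETag'), PartNumber: partNumber })
  }
  // 3. Ask the backend to complete the upload with the collected parts
  await fetch(`/api/uploads/${encodeURIComponent(uploadId)}/complete`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ key, parts }),
  })
  return key
}
Note that reading the ETag response header in the browser requires your bucket's CORS configuration to expose it (Access-Control-Expose-Headers: ETag); without it, the complete step cannot be assembled client-side.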
Libraries like Uppy can simplify implementing multipart uploads on the frontend by handling the file chunking, part signing requests, and upload management.
Managing R2 buckets with the Wrangler CLI
Cloudflare provides the wrangler CLI tool for managing Cloudflare resources, including R2 buckets.
Install it globally:
npm install -g wrangler
Log in to your Cloudflare account:
wrangler login
Now you can manage your R2 buckets:
# Create a new bucket (bucket names must be unique within your account)
wrangler r2 bucket create your-unique-bucket-name
# List all buckets associated with your account
wrangler r2 bucket list
# Upload a file from your local machine
wrangler r2 object put your-unique-bucket-name/path/to/object.txt --file ./local-file.txt --content-type "text/plain"
# Download an object
wrangler r2 object get your-unique-bucket-name/path/to/object.txt --file ./downloaded-file.txt
# Delete an object
wrangler r2 object delete your-unique-bucket-name/path/to/object.txt
# List objects in a bucket
wrangler r2 object list your-unique-bucket-name
Advanced configurations with Cloudflare Workers
Cloudflare Workers allow you to run JavaScript code at the edge, enabling custom logic for accessing your R2 buckets, such as authentication, routing, or serving private content.
Here's a basic Worker example that serves files from an R2 bucket and allows authorized uploads:
// worker/src/index.js
export default {
async fetch(request, env, ctx) {
const url = new URL(request.url)
// Remove leading slash from pathname to get the object key
const key = url.pathname.slice(1)
// Ensure the R2 bucket binding 'MY_BUCKET' is configured in wrangler.toml
if (!env.MY_BUCKET) {
return new Response('R2 bucket binding not configured', { status: 500 })
}
switch (request.method) {
case 'PUT':
case 'POST': // Handle POST as PUT for simplicity here
// --- Authentication/Authorization Check ---
// Implement robust auth check here (e.g., check JWT, API key, session)
if (!isAuthenticated(request)) {
return new Response('Unauthorized', { status: 401 })
}
// --- End Auth Check ---
// Stream the request body directly to R2
try {
const object = await env.MY_BUCKET.put(key, request.body, {
httpMetadata: request.headers, // Pass client headers (like Content-Type) to R2
})
// Return a success response, potentially with the object details
return new Response(null, {
status: 200,
headers: { ETag: object.httpEtag },
})
} catch (e) {
return new Response(`Error uploading to R2: ${e.message}`, { status: 500 })
}
case 'GET':
// Retrieve the object from R2
const object = await env.MY_BUCKET.get(key)
if (object === null) {
return new Response('Object Not Found', { status: 404 })
}
// Set necessary response headers from the object's metadata
const headers = new Headers()
object.writeHttpMetadata(headers) // Copies Content-Type, etc.
headers.set('etag', object.httpEtag) // Set ETag for caching
// Add Cache-Control headers if desired
// headers.set('Cache-Control', 'public, max-age=3600');
// Stream the object body back to the client
return new Response(object.body, {
headers,
})
case 'DELETE':
// --- Authentication/Authorization Check ---
if (!isAuthenticated(request)) {
return new Response('Unauthorized', { status: 401 })
}
// --- End Auth Check ---
try {
await env.MY_BUCKET.delete(key)
return new Response(null, { status: 204 }) // No Content
} catch (e) {
return new Response(`Error deleting from R2: ${e.message}`, { status: 500 })
}
default:
return new Response('Method Not Allowed', { status: 405 })
}
},
}
// Placeholder authentication function - replace with your actual logic
function isAuthenticated(request) {
// Example: Check for a specific Authorization header
const authHeader = request.headers.get('Authorization')
// WARNING: This is a trivial example, use proper token validation in production
return authHeader === 'Bearer YOUR_SECRET_TOKEN'
}
To deploy this Worker, configure your wrangler.toml to bind your R2 bucket:
# wrangler.toml
name = "r2-file-server-worker"
main = "src/index.js" # Path to your worker script
compatibility_date = "2023-10-30" # Use a recent compatibility date
# Bind the R2 bucket named 'your-unique-bucket-name' to the variable 'my_bucket' in the worker
[[r2_buckets]]
binding = "MY_BUCKET" # Variable name available in the Worker (env.MY_BUCKET)
bucket_name = "your-unique-bucket-name"
# preview_bucket_name = "your-preview-bucket-name" # Optional: used by wrangler dev
Then deploy using:
wrangler deploy
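Once deployed, you can smoke-test the Worker with curl. The URL below assumes the default workers.dev subdomain and the placeholder bearer token from the example above:
# Upload a file through the Worker (replace the host and token with your own)
curl -X PUT "https://r2-file-server-worker.YOUR_SUBDOMAIN.workers.dev/test.txt" \
  -H "Authorization: Bearer YOUR_SECRET_TOKEN" \
  -H "Content-Type: text/plain" \
  --data-binary @./local-file.txt
# Fetch it back
curl "https://r2-file-server-worker.YOUR_SUBDOMAIN.workers.dev/test.txt"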
Setting up custom domains
You can serve your R2 content through a custom domain (e.g., files.yourdomain.com) instead of the default Cloudflare Worker or R2 public URLs.
- Ensure your domain (yourdomain.com) is managed by Cloudflare.
- Deploy a Cloudflare Worker (like the example above) that serves content from your R2 bucket.
- In the Cloudflare dashboard, navigate to your Worker and add a "Custom Domain", or add a route under "Workers & Pages" -> your domain -> "Workers Routes", pointing a specific path (e.g., files.yourdomain.com/*) to your deployed Worker service.
This setup allows you to control access, add caching headers, and potentially rewrite URLs, all served under your branded domain.
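If you prefer keeping this in code rather than the dashboard, Wrangler can also attach a custom domain from wrangler.toml; a sketch, assuming files.yourdomain.com belongs to a zone you manage in Cloudflare:
# wrangler.toml (excerpt)
# Attach the Worker to a custom domain managed in your Cloudflare account
routes = [
  { pattern = "files.yourdomain.com", custom_domain = true }
]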
Error handling and retries
Network issues can interrupt uploads. Implement retry logic, especially for larger files or multipart uploads. Exponential backoff is a common strategy.
// frontend/uploader.js - (Simplified retry logic for PUT example)
async function uploadFileWithRetry(file, maxRetries = 3) {
let attempt = 0
while (attempt <= maxRetries) {
console.log(`Upload attempt ${attempt + 1} of ${maxRetries + 1}...`)
const result = await uploadFileToR2(file) // Use the function defined earlier
if (result.success) {
return result // Success!
}
console.error(`Attempt ${attempt + 1} failed: ${result.error}`)
attempt++
if (attempt <= maxRetries) {
// Exponential backoff: 1s, 2s, 4s...
const delay = Math.pow(2, attempt - 1) * 1000
console.log(`Retrying in ${delay / 1000} seconds...`)
await new Promise((resolve) => setTimeout(resolve, delay))
}
}
console.error(`Upload failed after ${maxRetries + 1} attempts.`)
return { success: false, error: `Upload failed after ${maxRetries + 1} attempts` }
}
// Modify the event listener to use the retry function:
document.getElementById('fileInput').addEventListener('change', async (event) => {
const file = event.target.files[0]
if (file) {
const uploadProgress = document.getElementById('uploadProgress')
uploadProgress.textContent = 'Uploading...'
// Use the retry wrapper
const result = await uploadFileWithRetry(file)
// ... (update UI based on final result) ...
if (result.success) {
uploadProgress.textContent = `Upload complete! Key: ${result.key}`
} else {
uploadProgress.textContent = `Upload failed: ${result.error}`
}
}
})
Practical use cases
Direct browser uploads to Cloudflare R2 are beneficial for:
- User profile pictures and avatars.
- Image galleries and media sharing platforms.
- Document submission forms.
- User-generated content sites.
- Static asset hosting where users upload content directly.
Benefits include:
- Reduced Server Load: Your backend only generates presigned URLs, not proxying large files.
- Lower Latency: Users upload directly to Cloudflare's edge, closer to them.
- Cost Savings: Avoids egress fees from R2 and reduces your server bandwidth costs.
- Simplified Architecture: Fewer moving parts compared to proxying uploads.
Conclusion
Using the AWS SDK v3 with presigned URLs provides a secure and efficient method for enabling direct browser uploads to Cloudflare R2. Combined with Cloudflare Workers for advanced control and the Wrangler CLI for management, you can build robust, scalable file handling into your web applications while leveraging R2's cost-effective storage.
For more complex scenarios involving file processing after upload, consider a service like Transloadit. Transloadit integrates smoothly with Cloudflare R2 via our 🤖 /cloudflare/store Robot, allowing you to trigger encoding, resizing, watermarking, and more as files land in your R2 bucket. Our Uppy SDK also offers a robust frontend file uploading experience, including support for multipart uploads to various destinations.