Efficiently import files from Supabase in Python

Supabase offers a robust storage solution that stores objects in buckets and keeps their metadata in PostgreSQL, making it ideal for managing large files, assets, and user uploads. In this DevTip, we demonstrate how to import files from Supabase using Python, with practical examples and best practices for handling both single files and directories.
Introduction to Supabase storage
Supabase Storage provides a simple yet powerful way to store and serve large files. By organizing files into buckets—much like AWS S3—it becomes easy to manage application assets, user uploads, and other file-based content. This approach ensures efficient storage and quick retrieval when needed.
Overview of the Python ecosystem for file imports
Python boasts a rich ecosystem for file handling and API interactions. The official open-source Python SDK for Supabase (currently v2.13.0 as of February 2025) simplifies the process of connecting to your Supabase projects and managing storage buckets. This guide leverages that SDK to streamline file imports into your Python applications.
Step-by-step guide for setting up Supabase and Python integration
1. Install required Python libraries
Before starting, ensure you are running Python 3.8 or higher. Install the official Supabase Python client using pip:
pip install supabase
This command installs the latest stable release of the SDK (v2.13.0 at the time of writing), which keeps you compatible with current Supabase APIs and gives you access to the most recent features.
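If you want your environment to match this guide exactly, you can pin the version when installing and then verify what was installed (the package exposes a __version__ attribute):
python -m pip install supabase==2.13.0
python -c "import supabase; print(supabase.__version__)"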
2. Configure access credentials and permissions in Supabase
Access your Supabase project settings to locate your API URL and API key under the "API" section. For secure storage operations, configure appropriate bucket policies. For example, create a private bucket with specific MIME type restrictions and file size limits:
supabase.storage.create_bucket(
    'my-bucket',
    options={
        "public": False,
        "allowed_mime_types": ["image/png", "image/jpeg"],
        "file_size_limit": 1024000,  # Limit file size to ~1 MB
    },
)
Setting these options helps enforce server-side file type validation and prevents unauthorized access.
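If the bucket already exists, recent SDK versions also expose get_bucket and update_bucket, so you can review or tighten a policy without recreating the bucket. A minimal sketch, assuming those methods are available in your SDK version:
# Inspect the current configuration of an existing bucket
print(supabase.storage.get_bucket('my-bucket'))

# Apply stricter settings to an existing bucket without recreating it
supabase.storage.update_bucket(
    'my-bucket',
    options={
        "public": False,
        "allowed_mime_types": ["image/png", "image/jpeg"],
        "file_size_limit": 1024000,
    },
)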
3. Establish a connection to your Supabase bucket
Initialize a Supabase client in your Python application using your project's URL and API key. This connection is necessary before performing any storage operations:
import os

from supabase import create_client, Client
# StorageException is raised by the bundled storage3 client on storage errors
from storage3.utils import StorageException
url: str = "https://your-project.supabase.co"
key: str = "your-api-key"
supabase: Client = create_client(url, key)
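Before moving on, it can help to confirm that the client can actually reach your project's storage API. A quick sanity check, assuming the key you supplied is allowed to list buckets (for example, a service-role key):
# Sanity check: list the buckets visible to this client
buckets = supabase.storage.list_buckets()
print(f"Connected; found {len(buckets)} bucket(s)")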
How to import files from Supabase using Python
Importing a single file
The following function demonstrates how to download a single file from a specified bucket. It retrieves the file from Supabase and writes it to a local destination:
def download_file(bucket_name: str, file_path: str, destination: str) -> None:
    try:
        # Download the file from the specified bucket and path
        response = supabase.storage.from_(bucket_name).download(file_path)
        # Ensure the destination directory exists before writing
        os.makedirs(os.path.dirname(destination) or '.', exist_ok=True)
        # download() returns the file contents as bytes
        with open(destination, 'wb') as f:
            f.write(response)
        print(f"File downloaded successfully to {destination}")
    except StorageException as e:
        print(f"Storage error: {e}")
    except Exception as e:
        print(f"Error downloading file: {str(e)}")

# Example usage
download_file('my-bucket', 'folder/image.jpg', 'local/image.jpg')
This synchronous approach is effective for most use cases; however, if you require progress updates, you might explore asynchronous methods or third-party libraries to monitor file transfer progress.
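One hedged option for private buckets is to stream the file yourself over a signed URL. The sketch below uses create_signed_url from the storage client, then streams the response with httpx (which the Supabase SDK already depends on). The helper name download_with_progress and the 60-second expiry are arbitrary choices, and because the key of the returned signed URL has varied between SDK versions ("signedURL" vs "signedUrl"), the code checks both spellings:
import httpx

def download_with_progress(bucket_name: str, file_path: str, destination: str) -> None:
    # Ask Supabase for a short-lived signed URL (valid for 60 seconds here)
    signed = supabase.storage.from_(bucket_name).create_signed_url(file_path, 60)
    url = signed.get("signedURL") or signed.get("signedUrl")
    with httpx.stream("GET", url) as response:
        response.raise_for_status()
        total = int(response.headers.get("Content-Length", 0))
        done = 0
        with open(destination, "wb") as f:
            for chunk in response.iter_bytes():
                f.write(chunk)
                done += len(chunk)
                if total:
                    print(f"\rDownloaded {done} of {total} bytes", end="")
    print(f"\nSaved {file_path} to {destination}")
The same signed-URL approach also works well when you want to hand a download off to another tool that expects a plain HTTP URL.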
Importing multiple files from a directory
If you need to import an entire directory of files, use the following function. It lists the files directly under a specified prefix in the bucket (the list call is not recursive) and downloads each one, preserving the prefix in the local directory structure:
def import_directory(bucket_name: str, prefix: str = "") -> None:
    try:
        # List the objects directly under the given prefix (not recursive)
        files = supabase.storage.from_(bucket_name).list(path=prefix)
        for file in files:
            # Folder placeholders are returned without an id; skip them
            if file.get('id') is None:
                continue
            # list() returns names relative to the prefix, so rebuild the full path
            file_path = f"{prefix.rstrip('/')}/{file['name']}" if prefix else file['name']
            local_path = os.path.join('downloads', file_path)
            # Create the local directory structure if it does not exist
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            # Download the file
            download_file(bucket_name, file_path, local_path)
    except StorageException as e:
        print(f"Storage error: {e}")
    except Exception as e:
        print(f"Error importing directory: {str(e)}")

# Example usage
import_directory('my-bucket', 'images/')
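For larger buckets, downloading files one at a time can be slow. Because each download is an independent HTTP request, a simple way to speed up bulk imports is to run several downloads concurrently with a thread pool. This is a sketch built on the download_file helper above; the function name and the worker count of 4 are arbitrary starting points, not Supabase recommendations:
from concurrent.futures import ThreadPoolExecutor

def import_directory_parallel(bucket_name: str, prefix: str = "", workers: int = 4) -> None:
    files = supabase.storage.from_(bucket_name).list(path=prefix)
    # Keep only real objects; folder placeholders have no id
    names = [f['name'] for f in files if f.get('id') is not None]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for name in names:
            file_path = f"{prefix.rstrip('/')}/{name}" if prefix else name
            local_path = os.path.join('downloads', file_path)
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            # Each download runs in its own worker thread
            pool.submit(download_file, bucket_name, file_path, local_path)
The worker threads only wrap independent HTTP requests, so there is no shared state to protect, but consider lowering the worker count if you run into the rate limits described in the next section.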
Storage limitations and considerations
- File size limits: Files can be as large as 50GB on Pro plans. For files approaching this size, consider exploring resumable uploads.
- Storage quotas: These vary by plan and usage, so monitor your account limits accordingly.
- Rate limits: The default is 250 operations per minute, so plan your file interactions to avoid throttling.
Authentication and security
Before performing storage operations, always authenticate your users. For example, sign in with a user's email and password to ensure that subsequent operations are performed within a secure context:
# Authenticate the user prior to executing storage operations
user = supabase.auth.sign_in_with_password({
    "email": "user@example.com",
    "password": "password",
})
# Proceed with storage operations using the authenticated context
response = supabase.storage.from_("private-bucket").download("file.txt")
Remember to avoid hard-coding sensitive credentials; instead, use secure environment variables or configuration files.
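For example, a minimal sketch that reads the project URL and key from environment variables (the names SUPABASE_URL and SUPABASE_KEY are just a common convention, not required by the SDK):
import os
from supabase import create_client, Client

# Read credentials from the environment instead of hard-coding them
url = os.environ["SUPABASE_URL"]
key = os.environ["SUPABASE_KEY"]
supabase: Client = create_client(url, key)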
Best practices for importing files from Supabase
- Use proper authentication: Ensure that the user is authenticated before initiating storage operations.
- Handle exceptions specifically: Leverage StorageException to catch and manage file storage errors precisely.
- Configure bucket policies: Set strict access controls and allowable MIME types to safeguard your files.
- Validate file types and sizes: Rely on bucket configuration to enforce server-side restrictions.
- Secure your credentials: Store API keys and secrets in environment variables rather than embedding them in code.
Conclusion
Importing files from Supabase in Python is straightforward with the proper setup and security measures. The official Supabase Python SDK offers a robust interface for handling both single file downloads and bulk imports while maintaining secure access protocols.
For advanced file processing or further automation, consider exploring Transloadit's Python SDK for powerful file handling solutions.