Building Cloudflare R2 Pre-signed URL Uploads with Hono: A Complete Implementation Guide

The following write-up is a comprehensive walkthrough of implementing secure file uploads using Cloudflare R2 (Cloudflare's object storage, comparable to AWS S3) for storage and Hono for the backend HTTP API, including common pitfalls and best practices.
Introduction
Building file upload functionality in modern web applications can be tricky, especially when you want to maintain security, performance, and scalability. After extensive research and implementation, I’ll present a robust solution using Cloudflare R2 for storage and Hono for the API layer, with pre-signed URLs enabling direct frontend-to-storage uploads.
This article chronicles a complete implementation journey, including the challenges I faced, the mistakes I made, and the solutions I discovered. Whether you’re building a photo gallery, document management system, or any application requiring file uploads, this guide will help you avoid common pitfalls and implement a production-ready solution.
Why This Combination Works
Cloudflare R2: The Storage Solution
- S3-compatible API: Familiar interface with excellent support
- Global edge network: Fast uploads from anywhere in the world
- Cost-effective: No egress fees, pay only for storage
- Integrated with Workers: Seamless deployment and management
Hono: The API Framework
- Lightweight and fast: Small footprint, built for edge computing
- TypeScript-first: Excellent type safety and developer experience
- Cloudflare Workers native: Optimized for the Workers runtime
- Middleware ecosystem: Rich set of plugins and utilities
Pre-signed URLs: The Security Model
- Direct uploads: Files go straight to R2, bypassing the Hono server to cut load and latency
- Time-limited access: URLs expire automatically
- Cryptographic security: Signed requests prevent tampering
- Fine-grained control: Specify exact upload conditions
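The time-limited aspect is carried in the URL itself: X-Amz-Date records when the URL was signed and X-Amz-Expires its lifetime in seconds. As a minimal sketch (the helper name and example URL are mine), here is how a client could recover the exact expiry moment:

```typescript
// Recover the expiry time of a SigV4 presigned URL from its query parameters.
// X-Amz-Date uses the compact ISO form YYYYMMDDTHHMMSSZ.
function presignedUrlExpiry(presignedUrl: string): Date {
  const params = new URL(presignedUrl).searchParams
  const signedAt = params.get('X-Amz-Date')    // e.g. "20250101T120000Z"
  const lifetime = params.get('X-Amz-Expires') // seconds the URL stays valid
  if (!signedAt || !lifetime) throw new Error('not a presigned URL')
  // Expand the compact form into standard ISO-8601 so Date can parse it
  const iso = signedAt.replace(
    /^(\d{4})(\d{2})(\d{2})T(\d{2})(\d{2})(\d{2})Z$/,
    '$1-$2-$3T$4:$5:$6Z',
  )
  return new Date(new Date(iso).getTime() + Number(lifetime) * 1000)
}
```

For a URL signed at 2025-01-01T12:00:00Z with X-Amz-Expires=3600, the result is 13:00:00Z the same day.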
The Journey: What I Learned
The implementation journey wasn’t without its challenges. Here’s what I discovered along the way:
The DOMParser Dilemma
My first attempt used the AWS SDK v3 (@aws-sdk/client-s3, which Cloudflare lists among its supported libraries), and it seemed like the obvious choice for S3-compatible operations. However, I quickly hit a wall:
Uncaught ReferenceError: DOMParser is not defined
The AWS SDK v3 relies on browser APIs like DOMParser that aren't available in Cloudflare Workers. It was a reminder that the Workers runtime is its own environment and that not every Node.js package works at the edge.
The aws4fetch Solution
After researching Cloudflare's documentation, I discovered aws4fetch, another recommended option: a lightweight, Workers-compatible library designed specifically for AWS signature generation. This was the breakthrough, though it came with its own API quirks.
Configuration Complexity
Setting up R2 correctly required understanding several moving parts:
- Bucket creation and CORS configuration (some gotchas here)
- API token generation and management
- Environment variable handling for different environments
- TypeScript type generation
Common Pitfalls and What NOT to Do
❌ Don’t Use AWS SDK v3 in Workers
// DON'T DO THIS
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
// This will fail with DOMParser errors in Workers
❌ Don’t Put Secrets in wrangler.jsonc
// DON'T DO THIS - secrets will be committed to git
{
  "vars": {
    "R2_ACCESS_KEY_ID": "your-secret-key"
  }
}
❌ Don’t Skip CORS Configuration
Without proper CORS setup, browsers will block direct uploads to R2, even with valid pre-signed URLs.
❌ CORS Setup Expects a Specific Format
If you're pushing CORS rules via the CLI, the JSON must match Cloudflare's HTTP API body format, which is not the plain JSON representation of CORS rules shown in the Cloudflare UI.
❌ Don’t Forget File Validation
Always validate file types, sizes, and user authentication before generating pre-signed URLs.
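As a minimal sketch of the kind of guard that belongs in front of the signing step (the helper name is mine; the 10MB ceiling matches the limit used later in this guide):

```typescript
// Reject bad upload requests before any signing work happens.
// Returns an error message, or null when the request looks acceptable.
function validateUploadRequest(
  filename: unknown,
  contentType: unknown,
  fileSize: unknown,
  maxBytes: number = 10 * 1024 * 1024, // 10MB
): string | null {
  if (typeof filename !== 'string' || filename.length === 0) {
    return 'filename is required'
  }
  if (typeof contentType !== 'string' || !contentType.startsWith('image/')) {
    return 'only image uploads are allowed'
  }
  if (typeof fileSize !== 'number' || fileSize <= 0) {
    return 'fileSize must be a positive number'
  }
  if (fileSize > maxBytes) {
    return 'file exceeds the size limit'
  }
  return null
}
```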
❌ Don’t Use Hardcoded URLs
R2 URLs should be constructed dynamically using environment variables.
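For instance, a small helper (hypothetical; the host format is R2's standard account endpoint) keeps the account ID and bucket name out of the code:

```typescript
// Build an R2 object URL from configuration instead of hardcoding it,
// so staging and production can target different accounts and buckets
function r2ObjectUrl(accountId: string, bucket: string, key: string): string {
  return `https://${accountId}.r2.cloudflarestorage.com/${bucket}/${encodeURIComponent(key)}`
}
```

Called with values read from environment bindings, e.g. r2ObjectUrl(env.CLOUDFLARE_ACCOUNT_ID, 'gallery', filename).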
Prerequisites
Before diving into implementation, ensure you have:
- Cloudflare Account: With R2 enabled (you need to opt in manually from the Cloudflare dashboard)
- Node.js 20+: For local development
- Wrangler CLI: npm install -g wrangler
- Basic Hono Knowledge: Understanding of routes and middleware
- TypeScript Experience: For type safety and better development
Step-by-Step Implementation
Step 1: Project Setup
Start with a fresh Hono project or add to an existing one:
# Initialize project
npm init -y
npm install hono aws4fetch better-auth drizzle-orm
npm install -D wrangler @types/node typescript
Step 2: Configure Wrangler
Create or update wrangler.jsonc:
{
  "$schema": "node_modules/wrangler/config-schema.json",
  "name": "hono-r2-uploads",
  "main": "src/index.ts",
  "compatibility_date": "2025-10-13",
  "compatibility_flags": ["nodejs_compat"],
  // I needed the 0.0.0.0 IP rule below because I develop
  // and test from within a dev container
  "dev": {
    "ip": "0.0.0.0"
  },
  // Define the bucket name and how it is bound in the Worker code
  "r2_buckets": [
    {
      "binding": "GALLERY",
      "bucket_name": "gallery"
    }
  ]
}
Step 3: Create R2 Bucket
# Create the bucket
npx wrangler r2 bucket create gallery
# Configure CORS (create cors.json)
cat > cors.json << EOF
{
  "rules": [
    {
      "allowed": {
        "methods": ["PUT", "GET", "POST", "DELETE"],
        "origins": ["*"],
        "headers": [
          "content-type",
          "x-amz-meta-original-filename",
          "x-amz-meta-uploaded-at",
          "x-amz-meta-uploaded-by"
        ]
      },
      "exposeHeaders": ["ETag"],
      "maxAgeSeconds": 3000
    }
  ]
}
EOF
❌ Pitfall alert: The CORS configuration must match Cloudflare's expected format, and you can't just use wildcards for headers; otherwise the browser's PUT will fail with a 403 Forbidden and a CORS error.
Another small note: remember to actually replace the origins value with your frontend domain in production; * is fine for development.
Apply CORS policy
This is how you set the CORS policy using the Wrangler CLI:
npx wrangler r2 bucket cors set gallery --file=cors.json
Step 4: Generate TypeScript Types
Next, run this to create worker-configuration.d.ts with proper type definitions:
npx wrangler types --env-interface Env
Step 5: Environment Configuration
Create .dev.vars for local development:
# .dev.vars (never commit this file)
BETTER_AUTH_URL=https://your-app.com
BETTER_AUTH_SECRET=your-secret-key-here
DATABASE_URL=your-database-connection-string
CLOUDFLARE_ACCOUNT_ID=your-account-id
R2_ACCESS_KEY_ID=your-r2-access-key-id
R2_SECRET_ACCESS_KEY=your-r2-secret-access-key
Step 6: Implement the Upload Handler
Create src/routes/uploads.ts:
import { Hono } from 'hono'
import { AwsClient } from 'aws4fetch'
import { randomUUID } from 'node:crypto'
import { auth } from '../lib/better-auth'

const uploads = new Hono<{ Bindings: Env }>()

// Helper function to validate image MIME types
function isValidImageType(mimeType: string): boolean {
  return mimeType.startsWith('image/')
}

// Helper function to get a file extension from a MIME type
function getExtensionFromMimeType(mimeType: string): string {
  const mimeToExt: Record<string, string> = {
    'image/jpeg': '.jpg',
    'image/jpg': '.jpg',
    'image/png': '.png',
    'image/gif': '.gif',
    'image/webp': '.webp',
    'image/svg+xml': '.svg',
    'image/bmp': '.bmp',
    'image/tiff': '.tiff',
  }
  return mimeToExt[mimeType] || '.jpg'
}

uploads.post('/pre-signed-url', async (c) => {
  try {
    // Validate authentication
    const session = await auth(c.env).api.getSession({
      headers: c.req.raw.headers,
    })
    if (!session) {
      return c.json({ error: 'Authentication required' }, 401)
    }

    // Parse the request body
    const body = await c.req.json()
    const { filename, contentType, fileSize } = body

    // Validate required fields
    if (!filename || !contentType || !fileSize) {
      return c.json({
        error: 'Missing required fields: filename, contentType, fileSize',
      }, 400)
    }

    // Validate file size (10MB limit)
    const maxSize = 10 * 1024 * 1024 // 10MB in bytes
    if (fileSize > maxSize) {
      return c.json({ error: 'File size exceeds 10MB limit' }, 400)
    }

    // Validate content type (must be an image)
    if (!isValidImageType(contentType)) {
      return c.json({ error: 'Only image files are allowed' }, 400)
    }

    // Generate a unique filename using a UUID
    const fileExtension = getExtensionFromMimeType(contentType)
    const uniqueFilename = `${randomUUID()}${fileExtension}`

    // Create an AWS client for R2 using aws4fetch (Workers-compatible)
    const aws = new AwsClient({
      accessKeyId: c.env.R2_ACCESS_KEY_ID,
      secretAccessKey: c.env.R2_SECRET_ACCESS_KEY,
      service: 's3',
      region: 'auto',
    })

    // Generate the timestamp once so the signed header and the
    // response metadata agree
    const uploadedAt = new Date().toISOString()

    // Construct the R2 object URL (account endpoint + bucket + key)
    const bucketUrl = new URL(
      `https://${c.env.CLOUDFLARE_ACCOUNT_ID}.r2.cloudflarestorage.com/gallery/${uniqueFilename}`
    )
    // Limit the URL's lifetime; without this query parameter the
    // expiresIn value returned below would not actually be enforced
    bucketUrl.searchParams.set('X-Amz-Expires', '3600')

    // Create a signed request for the PUT operation with query-string signing.
    // Every header signed here must be sent verbatim by the uploading client,
    // so keep the list minimal to avoid signature mismatches
    const signedRequest = await aws.sign(bucketUrl.toString(), {
      method: 'PUT',
      headers: {
        'Content-Type': contentType,
        'x-amz-meta-original-filename': filename,
        'x-amz-meta-uploaded-by': session.user.id,
        'x-amz-meta-uploaded-at': uploadedAt,
      },
      aws: {
        signQuery: true,
      },
    })
    const finalPresignedUrl = signedRequest.url

    // Return the pre-signed URL and metadata
    return c.json({
      presignedUrl: finalPresignedUrl,
      filename: uniqueFilename,
      originalFilename: filename,
      contentType,
      fileSize,
      expiresIn: 3600,
      uploadedBy: session.user.id,
      uploadedAt: uploadedAt,
    })
  } catch (error) {
    console.error('Error generating pre-signed URL:', error)
    return c.json({ error: 'Internal server error' }, 500)
  }
})

export default uploads
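As an aside, the GALLERY binding declared in wrangler.jsonc isn't used by the presigning route, but it is the natural way to serve uploads back through the Worker. The sketch below is my own addition, not part of the original flow; R2BucketLike is a minimal structural stand-in for the real R2Bucket binding type (which the generated worker-configuration.d.ts provides):

```typescript
// Minimal structural stand-ins for the Workers R2 binding types
// (assumption: only the subset used below)
interface R2ObjectLike {
  body: ReadableStream | string | null
  httpEtag?: string
}
interface R2BucketLike {
  get(key: string): Promise<R2ObjectLike | null>
}

// Serve an uploaded object back through the Worker via the bucket binding,
// instead of exposing the bucket publicly
async function serveObject(bucket: R2BucketLike, key: string): Promise<Response> {
  const object = await bucket.get(key)
  if (!object) {
    return new Response('Not found', { status: 404 })
  }
  return new Response(object.body, {
    headers: { etag: object.httpEtag ?? '' },
  })
}
```

In a route this might look like uploads.get('/:key', (c) => serveObject(c.env.GALLERY, c.req.param('key'))).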
Step 7: Integrate Routes
Update src/index.ts:
import { Hono } from 'hono'
import { cors } from 'hono/cors'
import { auth } from './lib/better-auth'
import uploads from './routes/uploads'

const app = new Hono<{ Bindings: Env }>()

// CORS middleware. Note: browsers reject Access-Control-Allow-Origin: '*'
// on credentialed requests, so echo the caller's origin here and lock it
// down to your frontend domain in production
app.use('*', cors({
  origin: (origin) => origin,
  credentials: true,
  allowMethods: ['GET', 'POST', 'PUT', 'DELETE', 'OPTIONS'],
  allowHeaders: ['Content-Type', 'Authorization'],
}))

// Mount the Better Auth handler
app.on(['GET', 'POST'], '/api/auth/*', (c) => {
  return auth(c.env).handler(c.req.raw)
})

// Mount the uploads routes
app.route('/api/uploads', uploads)

app.get('/', (c) => {
  return c.text('Hello Hono!')
})

export default app
Testing and Validation
Local Development
# Start development server
npm run dev
# Test the endpoint
curl -X POST http://localhost:8787/api/uploads/pre-signed-url \
-H "Content-Type: application/json" \
-d '{"filename":"test.jpg","contentType":"image/jpeg","fileSize":1024}'
Authentication Testing
The endpoint requires authentication. You’ll need to:
- Set up Better Auth with user registration/login
- Include authentication cookies in requests
- Test with authenticated sessions
Pre-signed URL Validation
Generated URLs should:
- Contain AWS signature parameters (X-Amz-Algorithm, X-Amz-Credential, etc.)
- Include an expiration timestamp (X-Amz-Expires)
- Point to the correct R2 bucket URL
- Include the metadata headers
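Those spot checks are easy to automate with a tiny helper (a sketch; the keys are the standard SigV4 query parameters):

```typescript
// Verify that a URL carries every query parameter a SigV4-presigned
// request is expected to have
function hasPresignedParams(url: string): boolean {
  const params = new URL(url).searchParams
  return [
    'X-Amz-Algorithm',
    'X-Amz-Credential',
    'X-Amz-Date',
    'X-Amz-Expires',
    'X-Amz-SignedHeaders',
    'X-Amz-Signature',
  ].every((key) => params.has(key))
}
```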
Production Deployment
Step 1: Create R2 API Token
- Go to Cloudflare Dashboard → R2 Object Storage → Manage R2 API tokens
- Click “Create API token” → “Custom token”
- Set permissions: Object Read & Write for the gallery bucket
- Copy the Access Key ID and Secret Access Key
Step 2: Set Production Secrets
# Set secrets for production
npx wrangler secret put BETTER_AUTH_URL
npx wrangler secret put BETTER_AUTH_SECRET
npx wrangler secret put DATABASE_URL
npx wrangler secret put CLOUDFLARE_ACCOUNT_ID
npx wrangler secret put R2_ACCESS_KEY_ID
npx wrangler secret put R2_SECRET_ACCESS_KEY
Step 3: Deploy
npm run deploy
Step 4: Update CORS for Production
Basically, the update-cors npm script writes a cors.json file and pushes it to R2:
# Restrict CORS to your domain
npm run update-cors "https://yourdomain.com" "https://www.yourdomain.com"
Frontend Integration
A Vue.js Example for Pre-signed URL Uploads
<template>
  <div>
    <input
      type="file"
      accept="image/*"
      @change="handleFileUpload"
      :disabled="uploading"
    />
    <div v-if="uploading">Uploading...</div>
    <div v-if="uploadedFiles.length">
      <h3>Uploaded Files:</h3>
      <div v-for="file in uploadedFiles" :key="file.filename">
        <p>Original: {{ file.originalName }}</p>
        <p>Stored as: {{ file.filename }}</p>
      </div>
    </div>
  </div>
</template>
<script setup>
import { ref } from 'vue'

const uploading = ref(false)
const uploadedFiles = ref([])

const uploadFile = async (file) => {
  const response = await fetch('/api/uploads/pre-signed-url', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include',
    body: JSON.stringify({
      filename: file.name,
      contentType: file.type,
      fileSize: file.size,
    }),
  })
  const { presignedUrl, filename, originalFilename, uploadedBy, uploadedAt } = await response.json()

  // The PUT must repeat every header that was signed into the URL,
  // or R2 rejects the upload with a signature mismatch
  await fetch(presignedUrl, {
    method: 'PUT',
    headers: {
      'Content-Type': file.type,
      'x-amz-meta-original-filename': originalFilename,
      'x-amz-meta-uploaded-by': uploadedBy,
      'x-amz-meta-uploaded-at': uploadedAt,
    },
    body: file,
  })
  return filename
}

const handleFileUpload = async (event) => {
  const files = Array.from(event.target.files)
  for (const file of files) {
    if (!file.type.startsWith('image/')) {
      alert('Only image files are allowed')
      continue
    }
    if (file.size > 10 * 1024 * 1024) {
      alert('File size must be less than 10MB')
      continue
    }
    uploading.value = true
    try {
      const filename = await uploadFile(file)
      uploadedFiles.value.push({
        filename,
        originalName: file.name,
      })
    } catch (error) {
      console.error('Upload failed:', error)
    } finally {
      uploading.value = false
    }
  }
}
</script>
Conclusion
Relying on pre-signed URLs for direct uploads to Cloudflare R2 adds some complexity, but the benefits in scalability, security, and performance are well worth it.
Hopefully, the journey documented above, with its code snippets and pitfalls to avoid, will help you implement a similar solution in your own projects without hitting the same hurdles and open issues.