Architecture · SaaS

Building a SaaS Platform: Complete Tech Stack Guide

Lessons from architecting and building DemandAI.Pro, a production AI SaaS platform that reduces demand letter creation from 8+ hours to 5 minutes. Learn the complete tech stack, architecture decisions, and best practices for building scalable, secure, and profitable SaaS applications.

Ross Day
December 20, 2024
18 min read

The Challenge

Building a SaaS platform from scratch is one of the most complex software engineering challenges. You need to balance rapid development with production-grade quality, manage costs while scaling, and deliver enterprise security on a startup budget.

This guide shares the complete architecture and tech stack decisions from building DemandAI.Pro, a legal AI platform currently in beta. I'll cover what worked, what didn't, and the exact tools and patterns that enabled a solo developer to ship a production-ready SaaS platform.

Project Overview: DemandAI.Pro

The Problem

Personal injury attorneys spend 8-12 hours manually drafting demand letters by:

  • Reviewing hundreds of pages of medical records
  • Calculating medical damages and treatment timelines
  • Crafting persuasive legal narratives
  • Formatting documents to client standards

The Solution

AI-powered platform that reduces this process to 5 minutes by:

  • Automatically extracting medical data from PDFs and images
  • Using GPT-4 and Claude AI to analyze injuries and treatments
  • Generating professional demand letters with legal citations
  • Providing real-time cost tracking and usage analytics

Current Status

  • Stage: Beta Testing
  • Target: 100 Beta Users
  • Development Time: 6 Months
  • Team Size: Solo Developer

Complete Tech Stack

Frontend Architecture

Next.js 14 (App Router)

Why: Server Components, built-in API routes, automatic code splitting, and excellent TypeScript support.

Key Features Used:

  • Server Components for data fetching and security
  • Edge Runtime for API routes (lower latency)
  • Streaming for AI-generated content
  • Middleware for authentication protection
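The middleware bullet above is the part that guards authenticated routes. Here is a minimal sketch of just the route-matching decision, framework-free so it can run anywhere; the names (PROTECTED_PREFIXES, decideRoute) are illustrative, not from the DemandAI.Pro codebase. In a real middleware.ts you would read the session cookie and return a NextResponse.redirect based on this decision.

```typescript
// Route-guard decision logic of the kind Next.js middleware wraps.
// All names here are hypothetical placeholders.

const PROTECTED_PREFIXES = ["/dashboard", "/letters", "/settings"];

type RouteDecision =
  | { action: "allow" }
  | { action: "redirect"; to: string };

function decideRoute(pathname: string, hasSession: boolean): RouteDecision {
  const isProtected = PROTECTED_PREFIXES.some(
    (p) => pathname === p || pathname.startsWith(p + "/")
  );
  if (isProtected && !hasSession) {
    // Send unauthenticated users to login, preserving the return path.
    return {
      action: "redirect",
      to: `/login?next=${encodeURIComponent(pathname)}`,
    };
  }
  return { action: "allow" };
}
```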

React 18

Why: Concurrent features, Suspense boundaries, and hooks for state management.

Patterns Used:

  • useState/useEffect for client-side interactivity
  • useContext for global state (user, theme)
  • Custom hooks for reusable logic (useAuth, useToast)
  • Suspense for loading states

TypeScript 5

Why: Type safety prevents production bugs, better developer experience, and easier refactoring.

Configuration:

  • Strict mode enabled
  • Path aliases (@/components, @/lib)
  • Shared types between frontend and backend
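The path-alias setup above boils down to a small tsconfig.json fragment. This is a sketch assuming the aliased folders sit at the project root; adjust the paths to your layout.

```json
{
  "compilerOptions": {
    "strict": true,
    "baseUrl": ".",
    "paths": {
      "@/components/*": ["components/*"],
      "@/lib/*": ["lib/*"]
    }
  }
}
```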

Tailwind CSS

Why: Rapid UI development, consistent design system, small bundle size with purging.

Setup:

  • Custom color palette for brand consistency
  • Reusable component classes
  • Dark mode optimized
  • JIT compiler for faster builds

Backend & Database

Supabase (PostgreSQL)

Why: Built-in auth, realtime subscriptions, Row Level Security, and generous free tier.

Features Used:

  • PostgreSQL database with full ACID compliance
  • Row Level Security (RLS) for multi-tenant data isolation
  • OAuth providers (Google, Microsoft)
  • Storage for PDF uploads and generated documents
  • Edge Functions for serverless compute
  • Realtime subscriptions for live usage tracking

Database Schema Design

Key Tables:

  • users - User profiles and subscription status
  • demand_letters - Generated documents with versioning
  • medical_records - Uploaded PDFs with OCR data
  • usage_logs - Token usage and cost tracking
  • subscriptions - Stripe integration data

RLS Policies:

  • Users can only access their own data
  • Admins have read-only access for support
  • Automated soft-deletes for HIPAA compliance

AI & External Services

OpenAI GPT-4

Use Cases:

  • Primary model for demand letter generation
  • Medical record summarization
  • Legal language refinement
  • Streaming responses for real-time generation

Cost Optimization:

  • Token counting before API calls
  • Response caching for similar requests
  • Fallback to GPT-3.5 for non-critical tasks
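The "token counting before API calls" item can be sketched as a pre-flight check. A production version would use a real tokenizer such as tiktoken; this rough chars-per-token heuristic (and all names in it) is a hypothetical stand-in that only guards against obviously oversized prompts.

```typescript
// Rough pre-flight token estimate: ~4 characters per token for English text.
// Heuristic only; use a real tokenizer for billing-accurate counts.

const CHARS_PER_TOKEN = 4;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

function fitsContextWindow(
  prompt: string,
  maxTokens: number,
  reserveForCompletion = 1024 // leave room for the model's answer
): boolean {
  return estimateTokens(prompt) + reserveForCompletion <= maxTokens;
}
```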

Anthropic Claude AI

Use Cases:

  • Fallback when GPT-4 rate limits hit
  • Long-context medical record analysis (100k+ tokens)
  • More nuanced legal reasoning for complex cases

Stripe Payments

Implementation:

  • Subscription management (Pro, Enterprise tiers)
  • Usage-based billing for API calls
  • Webhooks for subscription events
  • Customer portal for self-service management

PDF Processing

Tools:

  • pdf-parse for text extraction
  • Tesseract.js for OCR on scanned documents
  • react-pdf for client-side preview
  • jsPDF for document generation

Infrastructure & Deployment

Vercel

Why: Zero-config deployments, edge network, automatic HTTPS, preview environments.

Configuration:

  • Edge Functions for API routes
  • Environment variables for secrets
  • Automatic preview deployments on PRs
  • Analytics and Web Vitals monitoring

Security & Compliance

HIPAA Considerations:

  • End-to-end encryption for medical records
  • Audit logging for all data access
  • Automatic data retention policies
  • Business Associate Agreements with vendors

Monitoring & Analytics

Tools:

  • Vercel Analytics for performance metrics
  • Sentry for error tracking (planned)
  • Custom usage dashboard in Supabase
  • Stripe analytics for revenue metrics

Key Architecture Decisions

Server Components by Default

All pages are Server Components unless interactivity is needed. This reduces JavaScript bundle size, improves initial load times, and keeps sensitive logic (API keys, database queries) on the server.

Result: 60% reduction in client-side JavaScript, faster Time to Interactive.

Edge Runtime for AI APIs

Running API routes on Vercel's Edge Network reduces latency for AI calls from ~800ms to ~200ms. Critical for streaming responses that feel instant.

Implementation: Add export const runtime = 'edge' to each API route file.

Multi-Model AI Strategy

Instead of relying on a single AI provider, I implemented automatic failover between GPT-4 and Claude. If one hits rate limits or has an outage, requests automatically route to the backup model.

Benefit: 99.9% uptime for AI features, no user-facing errors from provider issues.
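The failover described above can be sketched as a small wrapper. The callable signatures here are illustrative; in practice they would wrap the OpenAI and Anthropic SDK calls, and a production version would only fall back on retryable errors (HTTP 429/5xx) and log the failover for monitoring.

```typescript
// Provider failover: try the primary model, fall back to the secondary
// when it throws. Signatures are hypothetical placeholders for SDK calls.

type Generate = (prompt: string) => Promise<string>;

async function generateWithFailover(
  prompt: string,
  primary: Generate,
  fallback: Generate
): Promise<string> {
  try {
    return await primary(prompt);
  } catch {
    // A real implementation would inspect the error before falling back.
    return fallback(prompt);
  }
}
```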

Row Level Security for Multi-Tenancy

Instead of building authorization logic in application code, I use PostgreSQL Row Level Security policies. Every query automatically filters data based on the authenticated user.

Security Win: It becomes impossible to accidentally expose another user's data, because the database itself enforces the isolation.
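A minimal illustration of such a policy in Postgres, using Supabase's auth.uid() helper. The table and column names (demand_letters, user_id) are assumed for the example, not taken from the actual schema.

```sql
-- Illustrative owner-only RLS policy; names are assumptions.
alter table demand_letters enable row level security;

create policy "owner_only" on demand_letters
  for all
  using (auth.uid() = user_id)
  with check (auth.uid() = user_id);
```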

What I'd Change: Add Redis Caching

Currently, every AI request hits the OpenAI/Anthropic APIs. For similar prompts (common medical conditions, standard legal language), I should cache responses in Redis.

Potential Savings: ~30% reduction in AI API costs, faster response times for cached content.
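As a sketch of the planned cache, here is an in-memory stand-in keyed by a hash of (model, prompt) with a TTL. Swapping the Map for Redis GET/SET with an EX expiry would be the production version; all names are illustrative.

```typescript
import { createHash } from "node:crypto";

// In-memory stand-in for a Redis response cache (hypothetical sketch).
const TTL_MS = 60 * 60 * 1000; // cache entries expire after 1 hour

type Entry = { value: string; expiresAt: number };
const cache = new Map<string, Entry>();

function cacheKey(model: string, prompt: string): string {
  return createHash("sha256").update(`${model}\n${prompt}`).digest("hex");
}

async function cachedGenerate(
  model: string,
  prompt: string,
  generate: (prompt: string) => Promise<string>
): Promise<string> {
  const key = cacheKey(model, prompt);
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit
  const value = await generate(prompt); // cache miss: call the AI API
  cache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}
```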

What I'd Change: Background Job Queue

Large PDF processing (100+ pages) can exceed Vercel's serverless function timeout (10 seconds by default on the Hobby plan). This work should move to background jobs with progress tracking.

Solution: Implement BullMQ with Redis or use Supabase Edge Functions with longer timeouts.

Monthly Cost Breakdown

One of the biggest advantages of this stack is the low operational cost during early stages. Here's the actual monthly spend:

  • Vercel Pro (hosting, edge functions, analytics): $20/mo
  • Supabase Pro (database, auth, storage; 8GB database, 100GB egress): $25/mo
  • OpenAI API (GPT-4 usage, ~50k tokens/day during beta): ~$150/mo
  • Anthropic API (Claude fallback and long-context tasks): ~$50/mo
  • Domain & SSL (custom domain, auto-renewed SSL cert): $15/mo

Total Monthly Cost for the beta testing phase (100 users): ~$260/mo

Scaling Note: AI costs are variable and will increase with usage. Once revenue starts, plan to optimize with caching, prompt engineering, and tiered pricing that covers AI costs per user.

Lessons Learned

1. Start with Managed Services

Using Supabase instead of managing my own PostgreSQL instance saved weeks of DevOps work. Auth, storage, and realtime features that would take months to build are included out-of-the-box.

2. TypeScript is Non-Negotiable

Type safety caught dozens of bugs before they hit production. Refactoring database schema? TypeScript errors show exactly what code needs updating. Worth the initial learning curve.

3. Monitor AI Costs From Day One

I built a usage dashboard on day 1 that tracks every AI API call, token count, and cost per user. This prevented bill shock and informed pricing strategy. Track prompt_tokens + completion_tokens for every request.
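The tracking rule above (record prompt_tokens plus completion_tokens per request, priced separately) can be sketched as a small cost calculator. The per-1K-token rates below are illustrative placeholders, not current provider pricing; substitute your provider's actual rates.

```typescript
// Per-request cost tracking sketch. Rates are assumed placeholders.

type Usage = { promptTokens: number; completionTokens: number };

const PRICE_PER_1K = {
  prompt: 0.03,     // assumed input rate, USD per 1K tokens
  completion: 0.06, // assumed output rate, USD per 1K tokens
};

function requestCostUsd(u: Usage): number {
  return (
    (u.promptTokens / 1000) * PRICE_PER_1K.prompt +
    (u.completionTokens / 1000) * PRICE_PER_1K.completion
  );
}

function totalCostUsd(log: Usage[]): number {
  return log.reduce((sum, u) => sum + requestCostUsd(u), 0);
}
```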

4. Edge Functions Are Game-Changing

Moving API routes to Vercel Edge reduced AI response latency by 70%. For streaming responses, this is the difference between feeling instant and feeling slow.

5. Multi-Model Strategy is Essential

OpenAI has had multiple outages during beta testing. Having Claude as a backup meant zero user-facing downtime. Plus, Claude's 100k context window is better for long medical records.

6. Row Level Security > Application Logic

Implementing authorization at the database level (RLS policies) means you can never accidentally expose data, even if you write buggy queries. This is critical for HIPAA compliance.

Planned Improvements

Redis Caching Layer

Cache common AI responses to reduce costs and improve speed. Target 30% cost reduction.

Background Job Queue

BullMQ for long-running PDF processing with progress tracking and retry logic.

Advanced Analytics

Mixpanel or Amplitude for user behavior tracking and conversion optimization.

Error Monitoring

Sentry integration for real-time error tracking and performance monitoring.

E2E Testing

Playwright tests for critical user flows (signup, document generation, payment).

CDN for Assets

Move static assets (fonts, images) to Cloudflare R2 for global distribution.

Conclusion

Building a production SaaS platform as a solo developer is achievable with the right stack. By leveraging managed services (Supabase, Vercel), powerful frameworks (Next.js 14), and modern AI APIs (OpenAI, Anthropic), you can build enterprise-grade applications without a large team.

The total monthly cost of ~$260 is sustainable for a bootstrapped startup. As user count grows, costs scale roughly linearly, but so does revenue; the key is implementing usage-based pricing that covers AI costs.

If you're building a SaaS platform, I highly recommend this stack. The developer experience is excellent, deployment is painless, and you can focus on building features instead of managing infrastructure.

Need Help Building Your SaaS Platform?

I offer consulting services for SaaS architecture, AI integration, and full-stack development. Let's discuss your project.