Build a Zero-Cost Portfolio Website With AI


Here's the honest version of how this portfolio site exists.
I'm a Technical Lead. I write about distributed systems, AWS, Rust, and production fires. I wanted a personal site with a technical blog, not another WordPress dashboard, Contentful subscription, Sanity studio, or SaaS tool sending upgrade emails.
Along the way I leaned on Gemini, ChatGPT, and Claude to brainstorm architecture and wording; ChatGPT generated the hero and in-post images. The implementation is still mine — the AIs are sparring partners, not authors.
So I built my own stack. No database. No CMS. No vendor lock-in. Posts live in a private GitHub repo as .mdx files. The site fetches them on-demand, caches aggressively, and revalidates the second I push new content.
Total hosting cost: $0/month (Vercel Hobby + Cloudinary free tier).
This article is the blueprint. If you're a junior developer, follow it step by step — you'll have a production-grade portfolio and blog running by the end. If you're senior, you'll probably steal the AGENTS.md pattern.
A Next.js 16 site on Vercel fetches MDX posts from a private GitHub repo via the GitHub API, caches them with on-demand ISR, serves images from Cloudinary, and sits behind Cloudflare for DNS and edge caching.
Here's what that looks like:
```
[You write a post]
        ↓
GitHub push (private content repo)
        ↓
GitHub Action fires webhook
        ↓
POST /api/revalidate (Next.js)
        ↓
Cache busted → fresh fetch on next request
        ↓
Reader sees fresh content in < 1 second
```

No deploy pipeline triggered. No rebuild. Just cache invalidation.
| Tool | Why |
|---|---|
| Next.js 16 | ISR + Server Components + MDX rendering in one framework |
| GitHub as CMS | Free, version-controlled, diff-able, private, already where my code lives |
| Cloudinary | Free tier covers image hosting + automatic WebP/AVIF conversion |
| Vercel | Zero-config Next.js deploys, hobby tier is genuinely free |
| Cloudflare | Free DNS, DDoS protection, and a global CDN I didn't have to configure |
| Gemini + ChatGPT + Claude | Brainstorming structure, tradeoffs, and copy — pick the model that fits the thread |
| ChatGPT (images) | Hero art and supporting graphics — iterate in the image workflow, then host on Cloudinary |
The alternative was Ghost, Hashnode, or Medium. The problem: I don't control the data, I don't control the design, and I'm building on someone else's platform. One pricing change and I'm migrating.
This is the most important architectural decision.
Repo 1: The Website — public GitHub repo, contains the Next.js app, zero content.
Repo 2: The Content — private GitHub repo, contains only .mdx files in a posts/ folder.
Why separate? The website code can stay public (it's part of the portfolio) while drafts and posts stay private, and pushing content never triggers a rebuild, only a cache revalidation.
```
content-repo/
  posts/
    my-first-post.mdx
    aws-cost-disaster.mdx
    rust-error-handling.mdx
```

Each `.mdx` file has frontmatter at the top:
```mdx
---
title: "How We Cut AWS Costs 38% After a $53K Breach"
date: "2024-11-15"
excerpt: "A breach, a $53K bill, and the architecture rebuild that followed."
tags: ["aws", "finops", "security"]
image: "https://res.cloudinary.com/your-cloud/image/upload/v1/posts/aws-breach.jpg"
---

Your post content starts here...
```

The `image` field is optional. If absent, the website falls back to a branded gradient placeholder.
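That frontmatter maps onto the `PostFrontMatter` and `PostMeta` types the parser uses later. A minimal sketch of `types/blog.ts`, with field names assumed from the frontmatter above:

```typescript
// types/blog.ts — a sketch; adjust fields to match your own frontmatter.
export interface PostFrontMatter {
  title: string;
  date: string; // ISO date string, e.g. "2024-11-15"
  excerpt: string;
  tags: string[];
  image?: string; // optional: the site falls back to a gradient placeholder
}

export interface PostMeta {
  slug: string;
  frontMatter: PostFrontMatter;
  readingTime: number; // whole minutes
  content: string; // raw MDX body
}
```

Keeping these in one file means the GitHub client, the parser, and the page components all agree on the shape of a post.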
Next, create a GitHub personal access token so the site can read the private content repo:

- Token name: `blog-content-reader`
- Scope: `repo` (read access to private repos)

Save it as `GITHUB_TOKEN` in your environment variables. Never commit it to any repo.
```bash
npx create-next-app@latest my-blog --typescript --tailwind --app
cd my-blog
```

Install the content libraries:
```bash
npm install next-mdx-remote gray-matter reading-time date-fns \
  remark-gfm rehype-slug rehype-autolink-headings rehype-pretty-code shiki
```

The project structure:

```
app/
  (site)/                 ← route group (shared Navbar + Footer)
    page.tsx              → /          (post listing)
    [slug]/page.tsx       → /:slug     (individual post)
    tag/[tag]/page.tsx    → /tag/:tag
    about/page.tsx        → /about
  api/
    revalidate/route.ts   ← webhook endpoint
    rss.xml/route.ts      ← RSS feed
  sitemap.ts
  robots.ts
  globals.css             ← ALL design tokens here
  layout.tsx              ← fonts + HTML shell
components/
  ui/                     ← Button, Badge, etc.
  blog/                   ← PostCard, PostHeader, MDXContent
  layout/                 ← Navbar, Footer
lib/
  github.ts               ← GitHub API client
  mdx.ts                  ← frontmatter parser
  cache.ts                ← caching wrappers
types/
  blog.ts                 ← TypeScript interfaces
```

**The GitHub client (`lib/github.ts`).** This is the heart of the content pipeline:
```typescript
const GITHUB_API = "https://api.github.com";
const CONTENT_REPO = process.env.CONTENT_REPO!;
const BRANCH = process.env.CONTENT_BRANCH ?? "main";
const TOKEN = process.env.GITHUB_TOKEN!;

const headers = {
  Authorization: `Bearer ${TOKEN}`,
  Accept: "application/vnd.github.v3+json",
};

export async function fetchAllPostSlugs(): Promise<string[]> {
  const res = await fetch(
    `${GITHUB_API}/repos/${CONTENT_REPO}/contents/posts?ref=${BRANCH}`,
    { headers, next: { tags: ["posts"] } }
  );
  if (!res.ok) throw new Error(`GitHub API responded ${res.status}`);
  const files: { name: string }[] = await res.json();
  return files
    .filter((f) => f.name.endsWith(".mdx"))
    .map((f) => f.name.replace(/\.mdx$/, ""));
}
```

The `next: { tags: [...] }` option on fetch calls is what makes on-demand revalidation work. Tag `"posts"` busts all posts. Tag `"post-{slug}"` busts just one.
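The cache layer below also imports a `fetchPostContent(slug)`, which isn't shown above. Here's a sketch along the same lines, with the base64 decode pulled into a pure helper (constants are inlined so the sketch is self-contained; in `lib/github.ts` you'd reuse the module-level ones):

```typescript
// Decode the base64-encoded `content` field the GitHub contents API returns.
export function decodeGitHubContent(base64: string): string {
  return Buffer.from(base64, "base64").toString("utf-8");
}

// Fetch one post's raw MDX, tagged so revalidating "post-{slug}" busts it.
export async function fetchPostContent(slug: string): Promise<string> {
  const GITHUB_API = "https://api.github.com";
  const CONTENT_REPO = process.env.CONTENT_REPO!;
  const BRANCH = process.env.CONTENT_BRANCH ?? "main";
  const headers = {
    Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
    Accept: "application/vnd.github.v3+json",
  };

  const res = await fetch(
    `${GITHUB_API}/repos/${CONTENT_REPO}/contents/posts/${slug}.mdx?ref=${BRANCH}`,
    // `next` is Next.js-specific; the cast keeps plain TypeScript happy.
    { headers, next: { tags: ["posts", `post-${slug}`] } } as RequestInit
  );
  if (!res.ok) throw new Error(`GitHub API ${res.status} for post "${slug}"`);
  const data: { content: string } = await res.json();
  return decodeGitHubContent(data.content);
}
```

The double tag means a push to one post can bust either just that post or the whole listing, depending on which tag you revalidate.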
**The frontmatter parser (`lib/mdx.ts`):**

```typescript
import matter from "gray-matter";
import readingTime from "reading-time";
import type { PostFrontMatter, PostMeta } from "@/types/blog";

export function parsePost(slug: string, raw: string): PostMeta {
  const { data, content } = matter(raw);
  const rt = readingTime(content);
  return {
    slug,
    frontMatter: data as PostFrontMatter,
    readingTime: Math.ceil(rt.minutes),
    content,
  };
}
```

**The caching layer (`lib/cache.ts`).** Next.js 16 uses the `'use cache'` directive, not `unstable_cache` — this is the most common mistake I see:
"use cache";
import { cacheTag } from "next/cache";
import { fetchAllPostSlugs, fetchPostContent } from "./github";
import { parsePost } from "./mdx";
export async function getAllPostMetas() {
cacheTag("posts");
const slugs = await fetchAllPostSlugs();
const posts = await Promise.all(
slugs.map(async (slug) => {
const raw = await fetchPostContent(slug);
return parsePost(slug, raw);
})
);
return
You also need this in `next.config.ts`:

```typescript
const nextConfig = {
  cacheComponents: true, // required for the 'use cache' directive
  images: {
    remotePatterns: [
      {
        protocol: "https",
        hostname: "res.cloudinary.com",
        pathname: "/your-cloud-name/**",
      },
    ],
  },
};

export default nextConfig;
```

**The revalidation webhook (`app/api/revalidate/route.ts`):**

```typescript
import { revalidateTag } from "next/cache";
import { NextRequest, NextResponse } from "next/server";

export async function POST(req: NextRequest) {
  const secret = req.nextUrl.searchParams.get("secret");
  if (secret !== process.env.REVALIDATION_SECRET) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  const body = await req.json().catch(() => ({}));

  if (body.slug) {
    revalidateTag(`post-${body.slug}`); // bust one post
  }
  revalidateTag("posts"); // bust the listing

  return NextResponse.json({ revalidated: true, now: Date.now() });
}
```
Cloudinary's free tier gives you 25 GB storage and 25 GB monthly bandwidth. For a personal blog, that's effectively unlimited.
Your image URL will look like:
```
https://res.cloudinary.com/YOUR_CLOUD_NAME/image/upload/v1234567890/posts/my-image.jpg
```

Use that URL as the `image` field in your post frontmatter. Cloudinary automatically converts to WebP/AVIF based on the browser, resizes on demand, and serves from a global CDN.
Add your cloud name to next.config.ts under images.remotePatterns (shown in the config above). Without this, next/image will refuse to optimise images from that domain — you'll get a runtime error.
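`next/image` handles most optimisation, but you can also ask Cloudinary for it directly: its delivery URLs accept `f_auto,q_auto` transformation flags right after the `/upload/` segment. A small optional helper (`withAutoFormat` is my name for it, not part of the stack above):

```typescript
// Inject Cloudinary's auto-format/auto-quality flags into a delivery URL.
// ".../image/upload/v1/posts/a.jpg" becomes
// ".../image/upload/f_auto,q_auto/v1/posts/a.jpg".
// Non-Cloudinary URLs pass through unchanged.
export function withAutoFormat(url: string): string {
  return url.replace("/image/upload/", "/image/upload/f_auto,q_auto/");
}
```

Run frontmatter image URLs through this before handing them to `next/image` and Cloudinary picks the best format and quality per request.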
Every visual decision lives in app/globals.css as CSS custom properties. Nothing is hard-coded in components.
```css
:root {
  /* Surfaces */
  --bg-base: #fafafa;
  --bg-subtle: #f3f4f6;
  --bg-muted: #e5e7eb;

  /* Brand */
  --brand: #2563eb;
  --brand-hover: #1d4ed8;

  /* Text */
  --text-primary: #111827;
  --text-secondary: #374151;
  --text-muted: #6b7280;

  /* Borders */
  --border-default: #e5e7eb;
  --border-strong: #d1d5db;

  /* Code */
  --code-bg: #0f172a; /* dark background for code blocks */
  --code-inline-bg: #f1f5f9;
}
```
The graph-paper grid is pure CSS — no JavaScript, no canvas, no library. One background-image with two gradients.
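For reference, the classic two-gradient graph-paper pattern looks like this (the class name, cell size, and line colour here are my choices, not the site's exact values):

```css
/* Graph-paper background: two 1px line gradients repeated on a grid. */
.graph-paper {
  background-image:
    linear-gradient(to right, var(--border-default) 1px, transparent 1px),
    linear-gradient(to bottom, var(--border-default) 1px, transparent 1px);
  background-size: 24px 24px; /* grid cell size; tune to taste */
}
```

Because the lines come from the same `--border-default` token as everything else, the grid restyles itself if you ever change the palette.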
Fonts in app/layout.tsx:
```typescript
import { Inter, Playfair_Display, JetBrains_Mono } from "next/font/google";

const inter = Inter({ subsets: ["latin"], variable: "--font-sans" });
const playfair = Playfair_Display({ subsets: ["latin"], variable: "--font-display" });
const jetbrains = JetBrains_Mono({ subsets: ["latin"], variable: "--font-mono" });
```

Rule: `--font-display` (Playfair Display) on all headings. `--font-sans` (Inter) on body and UI. `--font-mono` (JetBrains Mono) on code.
Vercel auto-detects Next.js. Zero config needed.
In Vercel → Project Settings → Environment Variables, add:
```
GITHUB_TOKEN         = ghp_xxxxxxxxxxxx
CONTENT_REPO         = yourusername/blog-content
CONTENT_BRANCH       = main
REVALIDATION_SECRET  = any-random-string-you-generate
NEXT_PUBLIC_BASE_URL = https://yourdomain.com
```

Click Deploy. That's it.
Every push to your website repo triggers a new Vercel build. Pushes to your content repo only trigger cache revalidation — no rebuild, no wait.
In your content repo, create .github/workflows/revalidate.yml:
```yaml
name: Revalidate Blog Cache

on:
  push:
    branches: [main]
    paths:
      - "posts/**"

jobs:
  revalidate:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger revalidation
        run: |
          curl -X POST \
            "${{ secrets.BLOG_URL }}/api/revalidate?secret=${{ secrets.REVALIDATION_SECRET }}" \
            -H "Content-Type: application/json" \
            -d '{}'
```

In your content repo's GitHub Secrets, add:

- `BLOG_URL` → `https://yourdomain.com`
- `REVALIDATION_SECRET` → the same secret you set in Vercel

Now every time you push a new post, GitHub tells Vercel to bust the cache automatically. Fresh content appears in seconds, no deploy needed.
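The workflow posts an empty body, which busts every post via the `posts` tag. If you want per-post precision, the Action could pass the changed file's slug instead; deriving it from the path is a one-liner (a hypothetical refinement, not required for the setup above):

```typescript
// "posts/aws-cost-disaster.mdx" -> "aws-cost-disaster"; non-post paths -> null.
export function slugFromPath(path: string): string | null {
  const match = /^posts\/(.+)\.mdx$/.exec(path);
  return match ? match[1] : null;
}
```

The Action would run this over the push event's changed files and send `{"slug": "..."}` to the webhook, which already handles a `slug` field.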
In Cloudflare, point the domain at Vercel with a CNAME record:

| Type | Name | Target |
|---|---|---|
| CNAME | `@` (or `www`) | `cname.vercel-dns.com` |

Vercel provisions an SSL certificate automatically via Let's Encrypt. Cloudflare handles DDoS protection and global edge caching at no cost.
I used three AI tools, and each played a different role.
Claude Code (terminal-based): Primary builder. It lived in the project directory the entire time. It read my files, understood the full codebase context, and wrote production-ready code — not snippets, full implementations. When something broke, I showed it the error and it fixed the root cause, not the symptom.
ChatGPT: Good for design brainstorming. I described the aesthetic I wanted and it generated CSS ideas I then refined.
Gemini: Used for longer-context tasks — when I needed to paste an entire file and ask architectural questions about it.
But here's what actually made all three work effectively: the AGENTS.md file.
Every AI coding assistant starts fresh each session. It doesn't remember your design decisions, your naming conventions, or the gotcha you hit last week with the cache directive.
The AGENTS.md file fixes this. It's a markdown file you place at the root of your project that tells AI assistants exactly how your project works — before they write a single line of code. Claude Code, Cursor, and most MCP-aware tools read this file automatically.
Here's a template you can adapt:
```markdown
# Project: [Your Blog Name]

## What this is
Personal technical blog. [Your name] writes posts; readers consume them.
No CMS, no database.

## Stack
- Next.js 16 (has breaking changes — read the docs before touching routing or caching)
- Tailwind v4
- next-mdx-remote for MDX rendering
- GitHub API for content fetching

## Design System
All tokens live in `app/globals.css`. Never hard-code hex in components.

Semantic tokens:
- Surfaces: `--bg-base` · `--bg-subtle` · `--bg-muted`
- Brand: `--brand` · `--brand-hover`
- Text: `--text-primary` · `--text-secondary` · `--text-muted`

Fonts:
- `--font-display` → headings only
- `--font-sans` → body and UI
- `--font-mono` → code
```
Why this works: the AI reads the rules before it writes code. No more generated components with hard-coded colors. No more deprecated API usage. No more broken caching from patterns that belong to older Next.js versions.
Three sections are all you need in an AGENTS.md: what the project is, the stack (with its version gotchas), and the design system rules.
I'm including these because every tutorial leaves them out and you'll hit all of them.
1. await params in dynamic routes. Next.js 16 changed params to be a Promise. const { slug } = params without awaiting gives you a runtime error that looks like a type error.
```typescript
// Wrong:
export default function Page({ params }: { params: { slug: string } }) {
  const { slug } = params; // runtime error
}

// Right:
export default async function Page({ params }: { params: Promise<{ slug: string }> }) {
  const { slug } = await params;
}
```

2. `cacheComponents: true` conflicts with route segment config. If you have `export const dynamic = 'force-dynamic'` or `export const revalidate = 60` in any page, it will clash with `cacheComponents: true`. Remove them. Let `'use cache'` handle caching instead.
3. generateStaticParams must return at least one result. If your content repo is empty and generateStaticParams returns [], Next.js crashes at build time. Fix: remove generateStaticParams entirely if content might be empty. Pages cache on first request via 'use cache' anyway.
4. GitHub API returns base64-encoded content. Don't try to use data.content directly.
```typescript
// Wrong:
return data.content;

// Right:
return Buffer.from(data.content, "base64").toString("utf-8");
```

5. Every Cloudinary domain must be in `remotePatterns`. `next/image` refuses to process images from unlisted hostnames. Register every CDN you use in `next.config.ts`.
For local development, the full set of environment variables in `.env.local`:

```bash
# GitHub API access (never commit this)
GITHUB_TOKEN=ghp_your_personal_access_token

# Your private content repo (format: username/repo-name)
CONTENT_REPO=yourusername/your-private-content-repo
CONTENT_BRANCH=main

# Cache revalidation (generate with: openssl rand -hex 32)
REVALIDATION_SECRET=your-random-secret-here

# Your live domain
NEXT_PUBLIC_BASE_URL=https://yourdomain.com
```

Generate the revalidation secret with:

```bash
openssl rand -hex 32
```

| Service | Tier | Monthly Cost |
|---|---|---|
| Vercel | Hobby | $0 |
| Cloudinary | Free | $0 |
| Cloudflare | Free | $0 |
| GitHub | Free (private repos included) | $0 |
| Domain | — | ~$1/month ($12/year) |
Total: $1/month. Everything else is free, forever, until you hit serious traffic — at which point you've probably already monetised the site.
Once the base is running, natural extensions in order of value:
- `sitemap.ts` auto-generates from post slugs and improves Google indexing
- `/tag/[tag]` filters posts by frontmatter tag, already in the folder structure
- Client-side filtering with `useState`
- `@vercel/og` generates per-post social cards from frontmatter

This blog runs on a pattern, not a platform. The content pipeline is roughly 80 lines of TypeScript. The caching is three function calls. The design system is one CSS file.
The complexity lives where it should: in the architecture decisions you make once and document in AGENTS.md, so every AI assistant you work with respects them without being told twice.
That file is the real deliverable from this project. Copy the pattern. Adapt it to your stack. Give your AI a memory.
Built with Next.js 16, Tailwind v4, and three AI assistants that all had strong opinions about the correct way to handle MDX. Two of them were right.