Every CMS tutorial starts with a database. Create a posts table, build an admin panel, add authentication. Before you've written a single word of content, you've built a login form.
I went a different direction. This site runs on markdown files, a PHP router, and nothing else. No database for content. No admin panel. No login form to secure. Here's how it works and why I think more small sites should consider this approach.
The architecture
The entire system has three components:
- A router (index.php) that maps URLs to pages
- A parser that reads .md files and extracts front matter + content
- Templates (PHP includes) that wrap everything in HTML
Content lives in a directory structure:
content/
  posts/
    my-first-post.md
    another-post.md
  journal/
    001-the-beginning.md
Each file starts with YAML-style front matter:
---
title: My Post Title
slug: my-post-title
date: 2026-02-19
description: A short summary for meta tags and listings.
---
The actual content goes here in markdown.
The front matter parser
This is the core of the system — about 20 lines of PHP:
function parse_content_file(string $filepath): ?array {
    $raw = file_get_contents($filepath);
    $parts = preg_split('/^---\s*$/m', $raw, 3);
    if (count($parts) < 3) return null;

    $meta = [];
    foreach (explode("\n", trim($parts[1])) as $line) {
        $pos = strpos($line, ':');
        if ($pos !== false) {
            $key = trim(substr($line, 0, $pos));
            $value = trim(substr($line, $pos + 1));
            $meta[$key] = $value;
        }
    }

    $meta['body'] = trim($parts[2]);
    $meta['html'] = markdown_to_html($meta['body']);
    return $meta;
}
Split on --- lines. The section between the two delimiters is front matter (key-value pairs, split on the first colon); everything after the second --- is the content body. Run the body through a markdown converter. Done.
One gotcha: each front matter line is split on the first colon only, so values containing colons — a title with a subtitle, say — work fine. But don't quote your values. The parser doesn't strip quotes, so title: "My Post" gives you "My Post", quotes included.
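If you do want quoted values to come out clean, a small helper can strip one layer of matching quotes before the value is stored. This is a sketch, not part of the site's actual parser:

```php
// Hypothetical helper, not in the original parser: remove one layer
// of matching single or double quotes from a front matter value.
function strip_quotes(string $value): string {
    if (strlen($value) >= 2
        && ($value[0] === '"' || $value[0] === "'")
        && $value[strlen($value) - 1] === $value[0]) {
        return substr($value, 1, -1);
    }
    return $value;
}
```

Calling it on the trimmed value inside the front matter loop would make both quoted and unquoted styles behave the same way.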
The router
URL routing is a switch statement. Not a framework. Not a regex engine. A switch:
switch ($path) {
    case '/':
        require 'pages/home.php';
        break;
    case '/blog':
        require 'pages/blog.php';
        break;
    // ...dynamic routes for individual posts
}
For dynamic routes (individual blog posts), a regex matches the slug from the URL and loads the corresponding markdown file:
if (preg_match('#^/blog/([a-zA-Z0-9_-]+)$#', $path, $m)) {
    $post = get_content('posts', $m[1]);
    if ($post) {
        require 'pages/single-post.php';
    }
}
The slug is sanitized to alphanumeric characters, hyphens, and underscores. No path traversal, no directory escapes, no funny business.
Listing pages
Getting all posts sorted by date is a glob and a usort:
function get_content_list(string $type): array {
    $items = [];
    foreach (glob("content/{$type}/*.md") as $file) {
        $item = parse_content_file($file);
        if ($item) $items[] = $item;
    }
    usort($items, fn($a, $b) =>
        strcmp($b['date'] ?? '', $a['date'] ?? '')
    );
    return $items;
}
No pagination yet (I don't have enough content to need it). When I do, it's just array_slice. No query builder, no OFFSET, no cursor tokens.
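When pagination does become necessary, it really is just a slice of the sorted list. A minimal sketch (the function name and signature are my own, not the site's code):

```php
// Hypothetical pagination helper: page numbers start at 1,
// and out-of-range pages return an empty array.
function paginate(array $items, int $page, int $per_page = 10): array {
    return array_slice($items, ($page - 1) * $per_page, $per_page);
}
```

Feed it the output of get_content_list() and a page number parsed from the URL, and the listing template doesn't change at all.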
Why not a database?
For a content site with fewer than a few hundred pages, flat files have real advantages:
- No migration headaches. Adding a new field to your content is just adding a line to the front matter. No ALTER TABLE, no migration scripts, no version tracking.
- Content is version-controllable. Every post is a file. You can diff them, revert them, grep them.
- No connection overhead. Reading a file is faster than connecting to a database for small datasets.
- Deployments are trivial. Copy files to server. Done. No database dumps, no import scripts.
- You can edit content with any text editor. No admin panel needed.
The tradeoff: you lose queries. You can't easily do "show me all posts tagged with X" without reading every file. For a site with tens of posts, that's a non-issue — glob and array_filter are fast enough. For thousands of posts, you'd want a database or at minimum a cached index.
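For a concrete sense of what "glob and array_filter" looks like, here is a sketch of tag filtering over an already-parsed list. It assumes a hypothetical comma-separated tags field in the front matter, which the site doesn't have yet:

```php
// Sketch: filter parsed posts by tag. Assumes a hypothetical
// front matter field like "tags: php, cms" on each item.
function filter_by_tag(array $items, string $tag): array {
    return array_values(array_filter($items, function ($item) use ($tag) {
        $tags = array_map('trim', explode(',', $item['tags'] ?? ''));
        return in_array($tag, $tags, true);
    }));
}
```

With tens of posts this runs in microseconds; the O(n) scan only starts to hurt once n is in the thousands.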
What I'd add next
If I were building this for someone else:
- A cached index. Parse all files once, write the metadata to a JSON file, rebuild it when files change. Avoids re-parsing everything on every listing page.
- Tag support. Add a tags field to front matter, parse it as a comma-separated list, build tag listing pages.
- Draft support. A status: draft field that the listing function filters out.
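Of the three, draft support is the smallest: one filter between get_content_list() and the template. A sketch, assuming a hypothetical status field that defaults to published when absent:

```php
// Sketch of draft filtering: anything explicitly marked
// "status: draft" is hidden; posts with no status field
// are treated as published.
function published_only(array $items): array {
    return array_values(array_filter(
        $items,
        fn($item) => ($item['status'] ?? '') !== 'draft'
    ));
}
```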
But I haven't added any of these yet. They'd be premature. The site works, loads fast, and adding content is as simple as creating a new .md file.
The point
You don't need WordPress. You don't need a static site generator with 400 npm dependencies. If your site is mostly text — and most sites are — a few hundred lines of PHP and a directory of markdown files will take you further than you'd expect.
The whole system powering this site is under 300 lines. I understand every one of them. That matters more than any feature list.