WordPress Development with DDEV and SSH-Sync

TL;DR: My local WordPress environment is disposable infrastructure. Git only tracks my custom code, WP core is installed from scratch during ddev start, and the database is pulled from the live site on demand. Syncing local dev with production happens via SSH, without a plugin.

Local scripts orchestrate the process (pull from live, push to live, backup and restore). A single .env file is the source of truth for credentials, paths, and custom code mappings.

Introduction

For me, keeping a local WordPress environment in sync with production has always been a mess of manual steps. Code lives in Git, but the database is a separate fight – mirroring production settings by hand or pulling in a migration plugin like WP Migrate Pro. None of it feels coherent. I keep fighting the drift between local and live, and eventually I end up making changes directly on the live site because it's more convenient.

My fix: I make my local env the authoritative source, and the live site becomes a “read-only” version of what I prepare locally. I treat WordPress itself as disposable and build the sync workflow around the only things that actually matter: the code I write and the content I create.

This post walks through the architecture of my DDEV-based WordPress setup with SSH sync scripts: how the project is structured, why custom code lives outside WordPress, how the sync scripts work, and where the approach has sharp edges. Everything here runs on a real site – my personal blog – and the scripts have survived months of actual use.

WordPress Is Disposable

Insight: Only three things matter in this repo: the code I write, the content I create, and the credentials to connect them.

In every WordPress project I’d worked on, the entire installation lived in git. I think that’s the wrong abstraction, because it conflates infrastructure with application code. WordPress core, third-party plugins, community themes – none of that is “my code.” It’s runtime dependencies. Committing it means my git history is 90% other people’s releases.

So I gitignore the entire web/ docroot. WordPress gets auto-installed on the first ddev start via a post-start hook. Content syncs from production on demand. Git only tracks the code I wrote.

DDEV’s type: wordpress handles the tedious parts. It creates and manages wp-config.php automatically – database credentials, debug settings, all of it. No manual config creation, no dynamic URL hacks.

The auto-install hook in .ddev/config.yaml:

hooks:
  post-start:
    - exec: "[ ! -f web/wp-load.php ] && .ddev/commands/install"

First ddev start after cloning? WordPress installs automatically. Every subsequent start? The hook sees wp-load.php and skips silently.

The install script is five WP-CLI commands:

wp core download
wp core install --url="$DDEV_PRIMARY_URL" \
  --title="Dev Site" \
  --admin_user=admin \
  ...
wp option update permalink_structure '/%postname%/'
wp rewrite flush --hard
wp plugin delete akismet hello

Download core, install, set permalinks, flush rewrites, remove the default plugins nobody wants. A wp-cli.yml at the project root with path: web ensures WP-CLI always resolves to the right docroot.
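The wp-cli.yml itself is minimal – just the docroot mapping described above:

```yaml
# wp-cli.yml (project root)
path: web
```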

Custom Code Lives Outside WordPress

Gotcha: Without upload_dirs entries for volume-mounted paths, Mutagen and Docker volumes compete over the same files. No warning – things break silently.

When my custom code lived inside wp-content/, it kept getting tangled with files synced from production. A pull from live could overwrite local edits. A push to live needed careful exclusion logic scattered across scripts. I got burned by both.

The fix: move custom code outside WordPress. child-theme/ and custom-code/ live at the project root. Docker volume mounts them into wp-content/ at runtime:

# docker-compose.wp-content.yaml
services:
  web:
    volumes:
      - ../child-theme:/var/www/html/web/wp-content/themes/obsidian-minimal:ro
      - ../custom-code:/var/www/html/web/wp-content/plugins/site-customizations:ro

The :ro flag is intentional. Read-only mounts prevent WordPress from modifying custom code through the admin UI. No accidental theme editor changes, no plugin auto-updates overwriting my files.

Local folder names don’t need to match their WordPress paths. The .env file handles the mapping:

CUSTOM_THEMES=child-theme:obsidian-minimal
CUSTOM_PLUGINS=custom-code:site-customizations

Every sync script parses these mappings to build rsync --exclude flags. I add new custom code in one place. All scripts respect it.
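The parsing itself is a few lines of bash. This is a sketch, not the exact code from my scripts – the helper name build_excludes is hypothetical, but the mapping format ("local-folder:wp-name", comma-separated) is the one shown above:

```shell
#!/usr/bin/env bash
# Sketch: turn a CUSTOM_* mapping into rsync --exclude flags.
# The build_excludes name is hypothetical.
build_excludes() {
  local mappings="$1" flags="" pair
  local -a pairs
  IFS=',' read -ra pairs <<< "$mappings"
  for pair in "${pairs[@]}"; do
    # rsync must skip the wp-name half so pulls never clobber custom code
    flags="$flags --exclude=${pair#*:}/"
  done
  echo "${flags# }"
}

# e.g. build_excludes "child-theme:obsidian-minimal"
#      → --exclude=obsidian-minimal/
```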

One thing that cost me an afternoon: volume-mounted paths need to be listed in upload_dirs in config.yaml. This tells DDEV’s Mutagen file sync to ignore those paths. Without it, Mutagen and the Docker volumes compete over the same files. No error message, no warning – things stop working.
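The entries below are a sketch of what that looks like, assuming DDEV resolves upload_dirs relative to the docroot; the theme and plugin names match the volume mounts above:

```yaml
# .ddev/config.yaml - keep Mutagen away from the volume-mounted paths
upload_dirs:
  - wp-content/themes/obsidian-minimal
  - wp-content/plugins/site-customizations
```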

The Pull Workflow

I start every development session the same way: pull the current state from live. The goal is a local environment that mirrors production exactly – minus my custom code, which is already in the repo.

import-from-live.sh handles this in four steps.

First, a safety check. The script requires a clean git working directory. Uncommitted changes? It refuses to run. If the sync breaks something, git checkout reverts the damage.
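The check is short. This is a sketch with a hypothetical helper name; git status --porcelain prints one line per change (including untracked files), so empty output means the working directory is clean:

```shell
#!/usr/bin/env bash
# Sketch of the safety check: refuse to sync over uncommitted work.
# The require_clean_git name is hypothetical.
require_clean_git() {
  if [ -n "$(git status --porcelain)" ]; then
    echo "Uncommitted changes - commit or stash first." >&2
    return 1
  fi
}
```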

Then the database. I stream it from production into the local environment:

ssh $LIVE_SSH \
  "cd $LIVE_PATH && wp db export - | gzip" \
  | gunzip \
  | ddev import-db

No temp files. The database exports on the server, compresses over the wire, decompresses locally, and imports into DDEV’s MariaDB – one pipeline.

After that, three separate rsync calls pull uploads, plugins, and themes. Each uses --delete to remove local files that no longer exist on production. Custom code folders get excluded via the .env mappings.

Finally, a search-replace swaps the live domain for the local one:

ddev wp search-replace \
  "https://$LIVE_DOMAIN" "https://$LOCAL_DOMAIN" \
  --skip-columns=guid

--skip-columns=guid preserves feed identifiers, so RSS readers don't see every post as new. In theory the subsequent push would revert the change anyway, but skipping the column keeps the GUIDs stable.

The Push Workflow

This is where the local-first philosophy pays off. I prepare content, install updates, configure settings, and test everything – all locally. Nothing is partially visible on the live site. When I’m happy with what I see, I push local to live in one go.

push-to-live.sh is destructive by design. It overwrites the live database, uploads, plugins, and themes with whatever’s local. The script opens with a warning banner and requires typing “yes” to proceed. I once considered that overkill. It’s saved me twice.
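The gate can be sketched like this – the wording is illustrative and the confirm_push name is hypothetical; only a literal "yes" lets the script continue:

```shell
#!/usr/bin/env bash
# Sketch of the confirmation gate at the top of push-to-live.sh.
confirm_push() {
  echo "WARNING: this overwrites the LIVE database, uploads, plugins, and themes."
  printf 'Type "yes" to continue: '
  read -r answer
  [ "$answer" = "yes" ]
}

# confirm_push || exit 1   # first thing the push script runs
```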

The database push reverses the import pipeline:

ddev wp db export - \
  | gzip \
  | ssh $LIVE_SSH "cd $LIVE_PATH && gunzip | wp db import -"

After the database lands, a search-replace swaps the local domain back to the live one. Then rsync pushes uploads, plugins, and themes. Custom code pushes separately – from project root folders to their wp-content/ targets, using the same .env mappings.

Gotcha: Rsync doesn’t preserve shared hosting permissions. After a push, your uploads return 403 instead of displaying.

One thing I tripped over: shared hosting file permissions. Rsync preserves source permissions from my Mac, but shared hosts need explicit 755 for directories and 644 for files. Without fixing them, uploaded images return 403. The push script runs chmod after every sync:

ssh $LIVE_SSH "find $LIVE_PATH -type d -exec chmod 755 {} +"
ssh $LIVE_SSH "find $LIVE_PATH -type f -exec chmod 644 {} +"

Backup and Restore

Insight: Disaster recovery is local-first too. Restore a backup to local, verify it works, then push to live.

Pull and push are development workflows – they assume I’m actively working on the site. Backups are different. They’re insurance. They run independently, on a schedule, without me at the keyboard.

backup-from-live.sh downloads the current live state into a timestamped directory:

.backups/
  2024-11-15_14-30-00/
    db.sql.gz
    plugins/
    themes/
    uploads/

It also overwrites snapshot/ – a git-trackable copy of the current live state. I commit the snapshot after each backup, creating a restore point in git history. The .backups/ directory is gitignored and auto-cleaned to the last 10 entries.
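The auto-cleanup works because the timestamped names (2024-11-15_14-30-00) sort chronologically. A sketch, with hypothetical helper and argument names:

```shell
#!/usr/bin/env bash
# Sketch: prune .backups/ to the newest N entries.
# prune_backups and its arguments are hypothetical names.
prune_backups() {
  local dir="${1:-.backups}" keep="${2:-10}" n=0 entry
  # ls -1r sorts names in reverse, so the newest timestamp comes first
  for entry in $(ls -1r "$dir"); do
    n=$((n + 1))
    [ "$n" -le "$keep" ] && continue
    rm -rf "${dir:?}/${entry}"
  done
}
```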

I run this as a local cron job to create weekly backups. No server-side cron or hosting panel dependencies.
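The crontab entry can look like this – the schedule, project path, and log location are illustrative:

```shell
# crontab -e on the Mac - backup every Monday at 03:00
0 3 * * 1 cd "$HOME/projects/blog" && ./backup-from-live.sh >> .backups/cron.log 2>&1
```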

Recovery follows the same local-first pattern. restore-to-local.sh loads a backup into the local environment: import the database, run search-replace, restore files. I verify the site works before touching production. Never push a backup directly to live without checking it first.

Git as the Undo Button

Insight: Every sync operation is destructive. Git is the undo button.

Pull overwrites local. Push overwrites live. Restore overwrites local. One wrong command wipes out hours of work.

The clean-working-directory check before pull and restore is my first line of defense. Uncommitted changes? The script stops. Sync goes wrong? git checkout reverts the damage.

The snapshot/ directory adds another layer. Committing it after each backup creates restore points in git history. I can see what the live site looked like at any point and restore to that state.

The .gitignore strategy reinforces the separation:

.env        # Credentials - never commit!
web/        # WordPress core - disposable
.backups/   # Timestamped backups - keep the repo clean

The repo tracks three categories: my custom code, the project configuration (scripts, DDEV config, .env.sample), and the latest snapshot. Everything else is either disposable or too sensitive to commit.

.env as Single Source of Truth

All sync scripts need the same information: where the live server is, what the local docroot is, and which folders are custom code. Scattering that across multiple files is asking for configuration drift. And credentials don't belong in the git repo, even though the scripts themselves must be versioned. So I define everything once in .env:

LIVE_SSH=user@server.com
LIVE_PATH=/www/htdocs/user/webroot
LIVE_DOMAIN=example.com

LOCAL_PATH=web
LOCAL_DOMAIN=example.ddev.site

CUSTOM_THEMES=child-theme:obsidian-minimal
CUSTOM_PLUGINS=custom-code:site-customizations

A .env.sample with placeholder values is committed to the repo. Setup is three steps: cp .env.sample .env, fill in credentials, ddev start. The post-start hook handles the rest.

The custom code mappings (CUSTOM_THEMES, CUSTOM_PLUGINS) serve double duty. They configure the Docker volume mounts and generate rsync exclusion patterns. One definition, two consumers, zero drift.

Change a server address or add a new custom plugin in one place. All four scripts – import, push, backup, restore – pick it up automatically.
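Each script can load the file at the top. A sketch – the load_env helper is a hypothetical name; set -a exports everything the file defines, so child processes (ssh, rsync, ddev) see the variables too:

```shell
#!/usr/bin/env bash
# Sketch: load .env at the top of every sync script.
load_env() {
  local env_file="${1:-.env}"
  if [ ! -f "$env_file" ]; then
    echo "Missing $env_file - copy .env.sample and fill it in." >&2
    return 1
  fi
  set -a
  . "$env_file"
  set +a
}
```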

Conclusion

Once I separated custom code from WordPress, the entire workflow – sync, backup, recovery – collapsed into shell scripts built on SSH, rsync, and WP-CLI. No migration plugins, no paid sync tools, no hosting panel dependencies. That separation is what made everything else simple.

The starting point: gitignore web/, move custom code to the project root, write the .env mapping. The sync scripts are straightforward once that boundary is clean. The .env.sample, import-from-live.sh, and push-to-live.sh are available as a gist if you want a concrete starting point.

This setup assumes SSH access to production, a single-site WordPress install, and one developer owning the site. Multi-developer teams need conflict resolution. Composer-managed WordPress changes the “disposable core” model. Hosts without SSH need a different transport entirely. For the solo-developer, single-site case, this is the simplest architecture I’ve found that holds together.