OG Image in GitHub Actions: Generate and Upload Social Previews in CI

How to generate og:image files automatically in your GitHub Actions CI pipeline using Puppeteer, Playwright, or Satori — and upload them to S3, GCS, or Cloudflare R2.

Why generate OG images in CI?

Generating OG images at build time in your CI pipeline offers several advantages over on-demand generation at request time: no cold starts, no per-request compute costs, images are cached on a CDN from the first hit, and you get reliable previews even if your server is under load. The tradeoff is that you need to regenerate images when content changes — but for blogs, docs, and marketing sites, this is exactly what a build step does.

Approach 1: Puppeteer screenshot in GitHub Actions

The most flexible approach is to spin up headless Chromium via Puppeteer, load a local HTML template with your post data injected, and take a screenshot. Puppeteer downloads its own Chromium during `npm ci`, so on a GitHub Actions runner you typically only need to install a few missing system libraries.

# .github/workflows/generate-og-images.yml
name: Generate OG Images

on:
  push:
    branches: [main]
    paths:
      - 'content/posts/**'

jobs:
  og-images:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - run: npm ci

      - name: Install Chromium dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y libgbm-dev

      - name: Generate OG images
        run: node scripts/generate-og-images.js

      - name: Upload to S3
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          aws s3 sync ./public/og s3://your-bucket/og \
            --content-type image/png \
            --cache-control "public, max-age=31536000, immutable"

The generation script:

// scripts/generate-og-images.js
const puppeteer = require('puppeteer');
const fs = require('fs');
const path = require('path');

const posts = JSON.parse(fs.readFileSync('./content/posts-manifest.json', 'utf-8'));

(async () => {
  const browser = await puppeteer.launch({
    args: ['--no-sandbox', '--disable-setuid-sandbox'],
  });
  const page = await browser.newPage();
  await page.setViewport({ width: 1200, height: 630 });

  fs.mkdirSync('./public/og', { recursive: true });

  for (const post of posts) {
    const html = buildOgHtml(post.title, post.category);
    await page.setContent(html, { waitUntil: 'networkidle0' });
    await page.screenshot({
      path: `./public/og/${post.slug}.png`,
      type: 'png',
    });
    console.log(`✓ ${post.slug}`);
  }

  await browser.close();
})();

function buildOgHtml(title, category) {
  return `<!DOCTYPE html>
<html>
<head>
  <style>
    body { margin: 0; width: 1200px; height: 630px; display: flex; flex-direction: column;
      justify-content: flex-end; padding: 64px; background: #0a0a0a;
      font-family: system-ui, sans-serif; box-sizing: border-box; }
    .cat { color: #a855f7; font-size: 20px; margin-bottom: 16px; }
    h1 { color: #fff; font-size: 52px; margin: 0; line-height: 1.2; }
  </style>
</head>
<body>
  <p class="cat">${category}</p>
  <h1>${title}</h1>
</body>
</html>`;
}
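Note that the template interpolates `title` and `category` straight into HTML, so a title containing `<`, `&`, or quotes would break the markup. A small escaping helper (a sketch, not part of the script above) can wrap each interpolation:

```javascript
// Escape the five HTML-special characters so arbitrary post titles can be
// interpolated into the template without breaking the markup.
function escapeHtml(str) {
  return String(str)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Usage inside buildOgHtml:
//   <h1>${escapeHtml(title)}</h1>
```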

Approach 2: Playwright screenshot

Playwright is similar to Puppeteer but supports multiple browser engines (Chromium, Firefox, WebKit). For CI, `npx playwright install chromium --with-deps` installs the browser and all of its system dependencies in one step, which makes setup on GitHub Actions runners simpler.

# In your workflow:
- name: Install Playwright Chromium
  run: npx playwright install chromium --with-deps

- name: Generate OG images
  run: node scripts/generate-og-playwright.js
The script mirrors the Puppeteer version:

// scripts/generate-og-playwright.js
const { chromium } = require('playwright');
const fs = require('fs');

const posts = JSON.parse(fs.readFileSync('./content/posts-manifest.json', 'utf-8'));

(async () => {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.setViewportSize({ width: 1200, height: 630 });
  fs.mkdirSync('./public/og', { recursive: true });

  for (const post of posts) {
    await page.setContent(buildHtml(post.title, post.date));
    await page.screenshot({ path: `./public/og/${post.slug}.png` });
  }

  await browser.close();
})();

// Same idea as buildOgHtml in Approach 1, with the category swapped for a date
function buildHtml(title, date) {
  return `<!DOCTYPE html>
<html>
<body style="margin:0;width:1200px;height:630px;display:flex;align-items:flex-end;
  padding:64px;background:#0a0a0a;box-sizing:border-box;font-family:system-ui,sans-serif">
  <div>
    <p style="color:#a855f7;font-size:20px;margin:0 0 16px">${date}</p>
    <h1 style="color:#fff;font-size:52px;margin:0;line-height:1.2">${title}</h1>
  </div>
</body>
</html>`;
}

Approach 3: Satori (no browser, edge-compatible)

For a lighter CI approach with no browser dependencies, use Satori + Sharp. This is faster, uses less memory, and works in restricted environments.

// scripts/generate-og-satori.mjs
import satori from 'satori';
import sharp from 'sharp';
import { readFileSync, writeFileSync, mkdirSync } from 'fs';

const font = readFileSync('./fonts/Inter-Bold.ttf');
const posts = JSON.parse(readFileSync('./content/posts-manifest.json', 'utf-8'));

mkdirSync('./public/og', { recursive: true });

for (const post of posts) {
  const svg = await satori(
    {
      type: 'div',
      props: {
        style: {
          width: '100%', height: '100%', display: 'flex', flexDirection: 'column',
          justifyContent: 'flex-end', background: '#0a0a0a', padding: '64px',
        },
        children: {
          type: 'h1',
          props: { style: { color: '#fff', fontSize: 52, margin: 0 }, children: post.title },
        },
      },
    },
    { width: 1200, height: 630, fonts: [{ name: 'Inter', data: font, weight: 700 }] }
  );
  const png = await sharp(Buffer.from(svg)).png().toBuffer();
  writeFileSync(`./public/og/${post.slug}.png`, png);
  console.log(`✓ ${post.slug}`);
}
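All three scripts build file paths from `post.slug` directly. If the manifest is ever generated from untrusted input, a slug like `../evil` could write outside `./public/og`. A defensive sanitizer (a sketch; `safeSlug` is a hypothetical helper, not part of the scripts above) keeps paths confined:

```javascript
// Normalize an arbitrary slug into a safe filename component: lowercase,
// collapse runs of non-alphanumerics into a single hyphen, and trim
// leading/trailing hyphens so path fragments like "../" cannot survive.
function safeSlug(slug) {
  return String(slug)
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '');
}

// Usage:
//   await page.screenshot({ path: `./public/og/${safeSlug(post.slug)}.png` });
```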

Upload to Cloudflare R2 or S3

# Upload to Cloudflare R2 (uses S3-compatible API)
- name: Upload OG images to R2
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
  run: |
    aws s3 sync ./public/og s3://your-r2-bucket/og \
      --endpoint-url https://${{ secrets.R2_ACCOUNT_ID }}.r2.cloudflarestorage.com \
      --content-type image/png \
      --cache-control "public, max-age=31536000, immutable"

# Then reference as:
# og:image = https://pub-XXXX.r2.dev/og/my-post.png
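Whatever bucket you upload to, the site build needs to emit the matching absolute URL into each page's og:image tag. A minimal sketch, assuming a hypothetical `OG_BASE` that points at whatever public domain fronts the bucket (an R2 public domain, CloudFront, etc.):

```javascript
// Public base URL for the uploaded images (hypothetical example domain;
// substitute your R2 public domain or CDN distribution).
const OG_BASE = 'https://cdn.example.com/og';

// Build the absolute og:image URL for a post slug.
function ogImageUrl(slug) {
  return `${OG_BASE}/${encodeURIComponent(slug)}.png`;
}
```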

Only regenerate changed posts

For large sites with hundreds of posts, avoid regenerating every image on every push. Use git diff to identify changed content files and rebuild only those images. Note that `git diff HEAD~1` needs history: set `fetch-depth: 2` on `actions/checkout`, since the default shallow clone contains only one commit.

- name: Get changed posts
  id: changed
  run: |
    CHANGED=$(git diff --name-only HEAD~1 HEAD -- 'content/posts/**/*.md' | tr '\n' ',')
    echo "files=$CHANGED" >> $GITHUB_OUTPUT

- name: Generate OG images for changed posts
  run: node scripts/generate-og-images.js --only "${{ steps.changed.outputs.files }}"

Test your OG tags for free

Paste any URL into OGFixer to see exactly how your link previews look on Twitter, LinkedIn, Discord, and Slack.

Related Guides