how I made the music page

I’ve been wanting to write a blog post for a long time, but I could never find a topic. I eventually settled on how I created my favorite albums page, because it’s something I’m very confident in and proud of.

The page itself had a clear purpose: a nice list of my favorite music albums. Although I know there are sites like Rate Your Music, Last.fm, or Topsters for this, none of them offered a solution that worked for my needs. I just wanted something simple yet beautiful, and easy to manage in terms of both appearance and content. I knew the best solution was a static site generator, and that decision is what started this website, which eventually grew far beyond my original vision.

The simplest solution, from the pre-this-site era, was to process the album cover images manually with image manipulation software like ImageMagick and then list them on a page.

magick input.ext -dither FloydSteinberg -scale 290x290 -monochrome output.png

After processing the images with that command, listing the albums shouldn’t be too difficult, right…?

![cover of the first album](albumURL)
![cover of the second album](albumURL)
![cover of the third album](albumURL)

- <cite>[name of the first album](albumURL)</cite> by [name of the first artist of first album](artistURL)
- <cite>[name of the second album](albumURL)</cite> by [name of the first artist of second album](artistURL) · [name of the second artist of second album](artistURL)
- <cite>[name of the third album](albumURL)</cite> by [name of the first artist of third album](artistURL)

This isn’t the hardest thing in the world to do, but with hundreds of albums to list it quickly becomes time-consuming. Even if you skip editing markdown by hand and instead loop through data from a JSON file, things get out of control on the JSON side, where all the album information has to be maintained manually.
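For instance, a hand-maintained data file (hypothetical, not something I actually used) ends up repeating every title, artist, and URL by hand:

[
  {
    "title": "name of the first album",
    "url": "albumURL",
    "cover": "output.png",
    "artists": [{ "name": "name of the first artist", "url": "artistURL" }]
  }
]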

this is where CritiqueBrainz comes in #

CritiqueBrainz can be considered an open-source alternative to Rate Your Music. It gets its data from MusicBrainz and has a very useful API that (spoiler) will solve all my problems.

To integrate the CritiqueBrainz API into my site, I used the JavaScript data files feature that Eleventy offers for data management and the eleventy-fetch plugin.
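The rough wiring looks like this: a single data file exports the album list, and every template can then read it as music. The file name and module format below are my assumptions about the shape, not a verbatim copy of my script:

// _data/music.js: Eleventy exposes whatever this file exports to templates as `music`
import Fetch from "@11ty/eleventy-fetch"; // used by the fetch helpers further down

export default async function () {
  // getMusicData() (shown piece by piece below) pulls ratings, metadata, and covers,
  // then returns { albums: [...] } for the templates to loop over
  return getMusicData();
}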

The first thing I had to do was automatically pull the 5-star reviews I gave on CritiqueBrainz. To do this, I simply make a request to the CritiqueBrainz API with the user_id and limit query parameters. The limit is set to 50, since that is the maximum the API returns per request: https://critiquebrainz.org/ws/1/review?user_id=4d5dbf68-7a90-4166-b15a-16e92f549758&limit=50

That request returns:

{
  "count": 134,
  "limit": 50,
  "offset": 0,
  "reviews": [
    {
      "entity_id": "d6c4be50-923a-4d14-8fe7-31f665630d6b",
      "entity_type": "release_group",
      "rating": 5
    }
  ]
}

Since I can only get a maximum of 50 entries per request, I had to create a loop script with an offset.

async function getFavoriteAlbumIds() {
  // ... setup code ...
  let offset = 0;
  let allReviews = [];

  // Loop until no more reviews are returned
  while (true) {
    const url = `https://critiquebrainz.org/ws/1/review?user_id=${CRITIQUEBRAINZ_ID}&limit=${LIMIT}&offset=${offset}`;
    try {
      const batch = await Fetch(url, {
        duration: "0s", // always fetch fresh data for reviews
        type: "json",
        // ... headers ...
      });

      if (!batch.reviews?.length) break; // stop if no reviews left

      allReviews.push(...batch.reviews);
      offset += LIMIT; // move the cursor forward

      // don't harass the API
      await sleep(1000);
    } catch (e) {
      console.error(`[music.js] Failed to fetch: ${e.message}`);
      break;
    }
  }

  // Filter for only 5-star albums
  const favReviews = allReviews.filter(
    (r) => r.rating === 5 && r.entity_type === "release_group",
  );
  return new Set(favReviews.map((r) => r.entity_id));
}

The getFavoriteAlbumIds function fetches more than 50 albums by sending multiple requests at one-second intervals, and then filters the results to keep only the release_groups I rated 5 stars.
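The sleep helper lives in the elided setup code; one possible definition is just a promise-wrapped setTimeout:

// minimal sleep helper used between requests (one way to define it)
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));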

Eleventy Fetch plays a massive role here. You might have noticed the duration: "0s" option in the previous script. This controls the cache duration. For the reviews list, I explicitly tell Eleventy not to cache the request because I want my latest ratings to appear immediately every time I build the site. By managing these durations smartly (and caching the static album details for longer periods), I ensure that I don’t bombard the MusicBrainz API on every build.

getting album data from MusicBrainz #

Now that I have the IDs of the albums I love, I need to find out what they actually are. I need the title, the artist, and the release year.

async function fetchAlbumData(rgid) {
  const url = `https://musicbrainz.org/ws/2/release-group/${rgid}?inc=releases+artists&fmt=json`;
  try {
    await Fetch(url, {
      duration: "30d",
      type: "json",
      directory: DATA_DIR,
      filenameFormat: () => rgid,
      fetchOptions: {
        headers: {
          "User-Agent": USER_AGENT,
          Accept: "application/json",
        },
      },
    });
  } catch (e) {
    console.error(
      `[music.js] Failed to fetch album data for ${rgid}: ${e.message}`,
    );
  }
}

The fetchAlbumData function takes the Release Group ID (rgid) we found earlier and hits the MusicBrainz API.

Notice the duration: "30d" here? This is the caching strategy I mentioned earlier. Unlike my review ratings, which might change if I decide I hate an album tomorrow, the fact that album X was released in year Y is never going to change. By caching this for 30 days, I make sure I’m not harassing the API for data I already have.
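Abridged, the release-group lookup returns roughly this shape (trimmed down to the fields the template will use later; the real response contains much more):

{
  "id": "d6c4be50-923a-4d14-8fe7-31f665630d6b",
  "title": "name of the album",
  "first-release-date": "YYYY-MM-DD",
  "artist-credit": [
    { "name": "name of the first artist", "artist": { "id": "…", "name": "name of the first artist" } }
  ],
  "releases": []
}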

grabbing the cover art #

a wall of text is boring to look at; we need images…

async function fetchAlbumCover(rgid) {
  const url = `https://coverartarchive.org/release-group/${rgid}/front-500`;
  try {
    await Fetch(url, {
      duration: "30d",
      type: "buffer",
      directory: COVER_DIR,
      filenameFormat: () => rgid,
      fetchOptions: { headers: { "User-Agent": USER_AGENT } },
    });
  } catch (e) {
    console.error(
      `[music.js] Failed to fetch album cover for ${rgid}: ${e.message}`,
    );
  }
}

The fetchAlbumCover function works almost exactly like the metadata fetcher, but it hits the Cover Art Archive. Instead of requesting JSON, I’m grabbing the buffer (the raw image data) for the 500px version of the cover. Again, I cache this for 30 days because downloading hundreds of images every time I build the site would be painfully slow.

the aesthetic part: image processing #

The hardest part of this whole project wasn’t fetching the data; it was processing the images. I didn’t want to just slap high-res JPEGs on the page. I wanted a specific look: a 1-bit dithered aesthetic that feels a bit retro.

In the early versions of this script, I relied on ImageMagick. It’s the industry standard for a reason; it has a built-in flag for dithering that looks great. To make it work in my Node script, I had to use child_process.spawn to actually run the terminal command from within JavaScript.

It looked something like this:

import { spawn } from "node:child_process";

function convertWithMagick(inputPath, outputPath) {
  return new Promise((resolve, reject) => {
    const proc = spawn("magick", [
      inputPath,
      "-dither",
      "FloydSteinberg",
      "-scale",
      "290x290",
      "-monochrome",
      outputPath,
    ]);
    // reject if the binary can't be spawned at all (e.g. ImageMagick isn't installed)
    proc.on("error", reject);
    proc.on("exit", (code) => {
      if (code === 0) resolve();
      else reject(new Error(`magick exited with code ${code}`));
    });
  });
}

This approach worked, but it felt “heavy.” I was spawning a new system process for every single album cover. Plus, it meant that anyone who wanted to build my site (including my CI/CD pipeline) needed the heavy ImageMagick binary installed, which made the site impossible to build on Netlify. I wanted this project to be “pure” Node.js: just run yarn and it should work out of the box.

So, I switched to sharp, which is significantly faster and runs natively in Node. But there was a catch: sharp doesn’t have a built-in “Floyd-Steinberg monochrome” filter. It can handle palettes, but it doesn’t give that specific 1-bit error-diffusion look I wanted.
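The closest built-in option is a plain threshold, which snaps every pixel to black or white with no error diffusion, so smooth gradients turn into solid blobs instead of a dithered texture. A rough sketch for comparison, not code from my site:

import sharp from "sharp";

// plain 1-bit conversion: hard threshold, no dithering
async function thresholdOnly(inputPath, outputPath) {
  await sharp(inputPath)
    .resize(290, 290, { fit: "cover" })
    .greyscale()
    .threshold(128)
    .png()
    .toFile(outputPath);
}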

So, I had to get my hands dirty and steal the pixel math from the internet. I searched for:

floyd-steinberg dithering in javascript

in my search engine and adapted what came up into the function below…

async function ditherWithSharp(inputPath, outputPath) {
  // 1. Get raw pixel data
  const { data, info } = await sharp(inputPath)
    .resize(290, 290, { fit: "cover" })
    .greyscale()
    .raw()
    .toBuffer({ resolveWithObject: true });

  const width = info.width;
  const height = info.height;
  const pixels = new Uint8Array(data);

  // 2. manual Floyd-Steinberg dithering
  // clamp while spreading the error so the Uint8Array doesn't wrap around 0/255
  const spread = (i, amount) => {
    pixels[i] = Math.max(0, Math.min(255, pixels[i] + amount));
  };

  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const idx = y * width + x;
      const oldPixel = pixels[idx];

      // threshold: Is it closer to black (0) or white (255)?
      const newPixel = oldPixel < 128 ? 0 : 255;
      pixels[idx] = newPixel;

      // calculate the "error" (how much we missed the original color by)
      const error = oldPixel - newPixel;

      // 3. Distribute that error to neighboring pixels
      if (x + 1 < width) spread(idx + 1, (error * 7) / 16);
      if (y + 1 < height) {
        if (x > 0) spread(idx + width - 1, (error * 3) / 16);
        spread(idx + width, (error * 5) / 16);
        if (x + 1 < width) spread(idx + width + 1, (error * 1) / 16);
      }
    }
  }

  // 4. Save the manipulated buffer back to PNG
  await sharp(Buffer.from(pixels), {
    raw: { width, height, channels: 1 },
  })
    .png()
    .toFile(outputPath);
}

This code manually iterates over every single pixel in the image buffer. It decides whether each pixel should be black or white, calculates the difference (the error) between what the pixel was and what it became, and pushes that error onto the neighboring pixels. For example, a mid-grey pixel of 100 snaps to black (0), leaving an error of 100; 7/16 of that is added to the pixel on the right, 3/16 to the bottom-left, 5/16 straight below, and 1/16 to the bottom-right.

The result? I removed the external dependency entirely. Now, the site builds faster, the image processing is self-contained within the script, and I have total control over the output.
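Roughly, each cover flows through the pipeline like this; the output path and the fileExists helper are illustrative, pieced together from the template paths and snippets elsewhere in this post:

// sketch: turn one downloaded cover into its dithered version, skipping work already done
async function processCover(rgid) {
  const coverPath = `${COVER_DIR}/${rgid}`; // raw image saved by fetchAlbumCover
  const monoPath = `assets/images/covers/${rgid}.png`; // what the template points at
  if (!(await fileExists(monoPath))) {
    await ditherWithSharp(coverPath, monoPath);
  }
}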

tying it all together #

Finally.

async function getMusicData() {
  const favAlbumIds = await getFavoriteAlbumIds();

  for (const rgid of favAlbumIds) {
    // check if we have the files, if not, fetch them
    if (!(await fileExists(jsonPath))) {
      await fetchAlbumData(rgid);
      await sleep(1000);
    }
    // ... fetch covers ...
    // ... process images ...
  }

  // read all the JSON files back into an array
  let albums = [];
  for (const file of await fs.readdir(DATA_DIR)) {
    // ... parse JSON ...
    albums.push(content);
  }

  // sort by release date (newest to oldest)
  albums.sort((a, b) => {
    // ... date sorting logic ...
  });

  return { albums };
}

The getMusicData function is the main entry point. It gets the IDs, checks if files exist (so we don’t re-download or re-process images we already have), and coordinates the dithering. Finally, it reads all that cached data into a nice array and sorts the albums by their release date.
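The elided sort can be as simple as comparing the ISO date strings in reverse; my actual version may guard against missing dates a bit differently:

// newest first: ISO dates (YYYY-MM-DD) compare correctly as plain strings
albums.sort((a, b) =>
  (b["first-release-date"] || "").localeCompare(a["first-release-date"] || ""),
);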

the result #

It is actually available on the music page!

All of this backend work results in a simple albums array that I can loop through in my template.

<div class="album">
  {% for album in music.albums | shuffle %}
    {% set artistNames = "" %}
    {% for ac in album["artist-credit"] %}
      {% if not loop.first %}{% set artistNames = artistNames + " · " %}{% endif %}
      {% set artistNames = artistNames + ac.name %}
    {% endfor %}

    <a
      href="[https://listenbrainz.org/album/](https://listenbrainz.org/album/){{ album.id }}"
      class="album__item"
      title="{{ album.title }} by {{ artistNames }}"
    >
      <div class="album__cover">
        <img
          class="album__cover__color lazy-hover"
          data-src="/assets/images/covers/{{ album.id }}_color.png"
          alt=""
          width="290"
          height="290"
        />
        <img
          class="album__cover__mono"
          src="/assets/images/covers/{{ album.id }}.png"
          alt="{{ album.title }} by {{ artistNames }}"
          loading="lazy"
          width="290"
          height="290"
        />
      </div>

      <div class="album__meta">
        <span class="album__title">{{ album.title }}</span>
        <span class="album__artist">{{ artistNames }}</span>
        <span class="album__year" style="opacity: 0.5">
          ({{ album["first-release-date"] | dateFromISO | readableDate("yyyy") }})
        </span>
      </div>
    </a>
  {% endfor %}
</div>

By using a little CSS to toggle visibility between the .album__cover__mono and .album__cover__color images, I get a static site that feels dynamic, looks unique, and, best of all, updates automatically whenever I rate a new album on CritiqueBrainz.