{"version":"https://jsonfeed.org/version/1.1","title":"topic: code | ege.celikci.me","home_page_url":"https://ege.celikci.me/","feed_url":"https://ege.celikci.me/tags/code.json","description":"all entries tagged with code","language":"en","items":[{"id":"https://ege.celikci.me/blog/music-page/","url":"https://ege.celikci.me/blog/music-page/","title":"how did I make the music page","content_html":"I’ve been wanting to write a blog post for a long time, but I couldn’t find a topic to write about. I decided to write about how I created my [favorite albums page](/music) because it’s something I’m very confident in and proud of.\n\nThis page, which had been on my mind for a while, had a clear purpose: to make a cool list of my favorite music albums. Although I know there are sites like [Rate Your Music](https://rateyourmusic.com/), [Last.fm](https://www.last.fm/), or [Topsters](https://topsters.org/) for this, none of them offered a solution that worked for my needs. I just wanted something simple yet beautiful, and easy to manage in both appearance and content. I knew the best solution was to create a website with a static site generator. 
This led me to start this website, which eventually grew far beyond my original vision.\n\nThe simplest solution I used (in the pre-this-site era) was to process the album cover images manually using image manipulation software like [ImageMagick](https://imagemagick.org/), and then list them on a page.\n\n```fish\nmagick input.ext -dither FloydSteinberg -scale 290x290 -monochrome output.png\n```\n\nAfter processing the images with that command, listing the albums shouldn’t be too difficult, right…?\n\n```markdown\n![cover of the first album](albumURL)\n![cover of the second album](albumURL)\n![cover of the third album](albumURL)\n\n- <cite>[name of the first album](albumURL)</cite> by [name of the first artist of first album](artistURL)\n- <cite>[name of the second album](albumURL)</cite> by [name of the first artist of second album](artistURL) · [name of the second artist of second album](artistURL)\n- <cite>[name of the third album](albumURL)</cite> by [name of the first artist of third album](artistURL)\n```\n\nWhile this isn’t the hardest thing in the world to do, if you have hundreds of albums to list, it quickly becomes a time-consuming process. Even if you don’t manually edit markdown and instead loop through data from a JSON file, the JSON file where you store the album information quickly gets out of control.\n\n## this is where CritiqueBrainz comes in\n\nThere is a site that can be considered an open-source alternative to Rate Your Music. 
It gets its data from [MusicBrainz](https://musicbrainz.org) and has a very useful [API](https://critiquebrainz.readthedocs.io/api.html) that—spoiler—will solve all my problems.\n\nTo integrate the [CritiqueBrainz](https://critiquebrainz.org) API into my site, I used the [JavaScript data files](https://www.11ty.dev/docs/data-js/) feature that Eleventy offers for [data management](https://www.11ty.dev/docs/data/) and the [eleventy-fetch plugin](https://www.11ty.dev/docs/plugins/fetch/).\n\nThe first thing I had to do was automatically pull the 5-star reviews I gave on CritiqueBrainz. To do this, I simply make a request to the CritiqueBrainz API with `user_id` and `limit` queries. The `limit` is set to `50` since that is the maximum we can get from the API: `https://critiquebrainz.org/ws/1/review?user_id=4d5dbf68-7a90-4166-b15a-16e92f549758&limit=50`\n\nThis returns:\n\n```json\n{\n  \"count\": 134,\n  \"limit\": 50,\n  \"offset\": 0,\n  \"reviews\": [\n    {\n      \"entity_id\": \"d6c4be50-923a-4d14-8fe7-31f665630d6b\",\n      \"entity_type\": \"release_group\",\n      \"rating\": 5\n    }\n  ]\n}\n```\n\nSince I can only get a maximum of 50 entries per request, I had to create a loop script with an offset.\n\n```javascript\nasync function getFavoriteAlbumIds() {\n  // ... setup code ...\n  let offset = 0;\n  let allReviews = [];\n\n  // Loop until no more reviews are returned\n  while (true) {\n    const url =\n      `https://critiquebrainz.org/ws/1/review?user_id=${CRITIQUEBRAINZ_ID}&limit=${LIMIT}&offset=${offset}`;\n    try {\n      const batch = await Fetch(url, {\n        duration: \"0s\", // always fetch fresh data for reviews\n        type: \"json\",\n        // ... 
headers ...\n      },);\n\n      if (!batch.reviews?.length) break; // stop if no reviews left\n\n      allReviews.push(...batch.reviews,);\n      offset += LIMIT; // move the cursor forward\n\n      // don't harass the API\n      await sleep(1000,);\n    } catch (e) {\n      console.error(`[music.js] Failed to fetch: ${e.message}`,);\n      break;\n    }\n  }\n\n  // Filter for only 5-star albums\n  const favReviews = allReviews.filter(\n    (r,) => r.rating === 5 && r.entity_type === \"release_group\",\n  );\n  return new Set(favReviews.map((r,) => r.entity_id),);\n}\n```\n\nWhile our `getFavoriteAlbumIds` function allows us to fetch more than 50 albums by sending multiple requests at 1-second intervals, it also filters the results to keep only the `release_group` entries to which I gave a `rating` of 5.\n\nEleventy Fetch plays a massive role here. You might have noticed the `duration: \"0s\"` option in the previous script. This controls the [cache duration](https://www.11ty.dev/docs/plugins/fetch/#change-the-cache-duration). For the reviews list, I explicitly tell Eleventy **not** to cache the request because I want my latest ratings to appear immediately every time I build the site. By managing these durations smartly (and caching the static album details for longer periods), I ensure that I don’t bombard the [MusicBrainz API](https://musicbrainz.org/doc/MusicBrainz_API) on every build.\n\n## getting album data from MusicBrainz\n\nNow that I have the IDs of the albums I love, I need to find out what they actually _are_. 
I need the title, the artist, and the release year.\n\n```javascript\nasync function fetchAlbumData(rgid,) {\n  const url =\n    `https://musicbrainz.org/ws/2/release-group/${rgid}?inc=releases+artists&fmt=json`;\n  try {\n    await Fetch(url, {\n      duration: \"30d\",\n      type: \"json\",\n      directory: DATA_DIR,\n      filenameFormat: () => rgid,\n      fetchOptions: {\n        headers: {\n          \"User-Agent\": USER_AGENT,\n          Accept: \"application/json\",\n        },\n      },\n    },);\n  } catch (e) {\n    console.error(\n      `[music.js] Failed to fetch album data for ${rgid}: ${e.message}`,\n    );\n  }\n}\n```\n\nThe `fetchAlbumData` function takes the Release Group ID (`rgid`) we found earlier and hits the MusicBrainz API. It doesn’t need to return anything: eleventy-fetch writes the response to disk in `DATA_DIR` (that’s what the `directory` and `filenameFormat` options are for), and the main function later reads those cached files back.\n\nNotice the `duration: \"30d\"` here? This is the caching strategy I mentioned earlier. Unlike my review rating, which might change if I decide I hate an album tomorrow, the fact that X album was released in Y year is never going to change. By caching this for 30 days, I make sure I’m not harassing the API for data I already have.\n\n## grabbing the cover art\n\na wall of text is boring to look at; we need images…\n\n```javascript\nasync function fetchAlbumCover(rgid,) {\n  const url = `https://coverartarchive.org/release-group/${rgid}/front-500`;\n  try {\n    await Fetch(url, {\n      duration: \"30d\",\n      type: \"buffer\",\n      directory: COVER_DIR,\n      filenameFormat: () => rgid,\n      fetchOptions: { headers: { \"User-Agent\": USER_AGENT, }, },\n    },);\n  } catch (e) {\n    console.error(\n      `[music.js] Failed to fetch album cover for ${rgid}: ${e.message}`,\n    );\n  }\n}\n```\n\nThe `fetchAlbumCover` function works almost exactly like the metadata fetcher, but it hits the [Cover Art Archive](https://coverartarchive.org/). Instead of requesting JSON, I’m grabbing the `buffer` (the raw image data) for the 500px version of the cover. 
Again, I cache this for 30 days because downloading hundreds of images every time I build the site would be painfully slow.\n\n## the ａｅｓｔｈｅｔｉｃ part, image processing\n\nThe hardest part of this whole project wasn’t fetching the data; it was processing the images. I didn’t want to just slap high-res JPEGs on the page. I wanted a specific look: a 1-bit dithered aesthetic that feels a bit retro.\n\nIn the early versions of this script, I relied on **ImageMagick**. It’s the industry standard for a reason; it has a built-in flag for dithering that looks great. To make it work in my Node script, I had to use `child_process.spawn` to actually run the terminal command from within JavaScript.\n\nIt looked something like this:\n\n```javascript\nimport { spawn, } from \"node:child_process\";\n\nfunction convertWithMagick(inputPath, outputPath,) {\n  return new Promise((resolve, reject,) => {\n    const proc = spawn(\"magick\", [\n      inputPath,\n      \"-dither\",\n      \"FloydSteinberg\",\n      \"-scale\",\n      \"290x290\",\n      \"-monochrome\",\n      outputPath,\n    ],);\n    proc.on(\"exit\", (code,) => {\n      if (code === 0) resolve();\n      else reject(new Error(`magick exited with code ${code}`,),);\n    },);\n  },);\n}\n```\n\nThis approach worked, but it felt “heavy.” I was spawning a new system process for every single album cover. Plus, it meant that anyone who wanted to build my site (including my CI/CD pipeline) needed to install the ImageMagick binary, which made the site impossible to build on [Netlify](https://www.netlify.com/). I wanted this project to be “pure” Node.js—just `yarn` and it should work out-of-the-box.\n\nSo, I switched to **[sharp](https://sharp.pixelplumbing.com/)**, which is significantly faster and runs natively in Node. But there was a catch: sharp doesn’t have a built-in “Floyd-Steinberg monochrome” filter. 
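\n\nFor contrast, here’s what a dither-free 1-bit conversion does (a toy illustration of the idea, not sharp’s API): each pixel gets rounded on its own, no error is carried over to its neighbors, and smooth gradients collapse into hard bands.\n\n```javascript\n// toy example: plain thresholding, i.e. 1-bit conversion without\n// error diffusion\nfunction plainThreshold(pixels) {\n  return pixels.map((p) => (p < 128 ? 0 : 255));\n}\n\n// a smooth gradient turns into a single hard edge\nconsole.log(plainThreshold([0, 64, 96, 120, 136, 160, 192, 255]));\n// → [0, 0, 0, 0, 255, 255, 255, 255]\n```\n\nFloyd-Steinberg instead pushes each pixel’s rounding error onto its not-yet-visited neighbors, which is what produces the grainy in-between texture.\n\n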
It can handle palettes, but it doesn’t give that specific 1-bit error-diffusion look I wanted.\n\nSo, I had to get my hands dirty and steal the pixel math from the internet. I typed:\n\n> floyd-steinberg dithering in javascript\n\ninto my search engine and adapted what came up…\n\n```javascript\nasync function ditherWithSharp(inputPath, outputPath,) {\n  const { data, info, } = await sharp(inputPath,)\n    .resize(290, 290, { fit: \"cover\", },)\n    .greyscale()\n    .raw()\n    .toBuffer({ resolveWithObject: true, },);\n\n  const width = info.width;\n  const height = info.height;\n  // use a float working buffer: the diffused error can push values\n  // outside the 0..255 range, which would wrap around in a Uint8Array\n  const inputPixels = new Float32Array(data,);\n\n  // Create output buffer for RGBA (4 channels)\n  const outputPixels = new Uint8Array(width * height * 4,);\n\n  for (let y = 0; y < height; y++) {\n    for (let x = 0; x < width; x++) {\n      const idx = y * width + x;\n      const oldPixel = inputPixels[idx];\n      const newPixel = oldPixel < 128 ? 0 : 255;\n\n      const error = oldPixel - newPixel;\n      inputPixels[idx] = newPixel;\n\n      // Distribute error\n      if (x + 1 < width) inputPixels[idx + 1] += (error * 7) / 16;\n      if (y + 1 < height) {\n        if (x > 0) inputPixels[idx + width - 1] += (error * 3) / 16;\n        inputPixels[idx + width] += (error * 5) / 16;\n        if (x + 1 < width) inputPixels[idx + width + 1] += (error * 1) / 16;\n      }\n\n      // Map to RGBA: Black pixel = Opaque Black; White pixel = Transparent\n      const outIdx = idx * 4;\n      if (newPixel === 0) {\n        outputPixels[outIdx] = 0; // R\n        outputPixels[outIdx + 1] = 0; // G\n        outputPixels[outIdx + 2] = 0; // B\n        outputPixels[outIdx + 3] = 255; // Alpha\n      } else {\n        outputPixels[outIdx] = 0;\n        outputPixels[outIdx + 1] = 0;\n        outputPixels[outIdx + 2] = 0;\n        outputPixels[outIdx + 3] = 0; // Transparent\n      }\n    }\n  }\n\n  await sharp(Buffer.from(outputPixels,), {\n    raw: {\n      width: width,\n      height: 
height,\n      channels: 4,\n    },\n  },)\n    .png({\n      palette: true,\n      colors: 2,\n      effort: 10,\n    },)\n    .toFile(outputPath,);\n}\n```\n\nThis code manually iterates over every single pixel in the image buffer. It determines if a pixel should be black or white, calculates the difference (the error) between what the pixel _should_ be and what it _is_, and pushes that error onto the neighboring pixels.\n\nThe result? I removed the external dependency entirely. Now, the site builds faster, the image processing is self-contained within the script, and I have total control over the output.\n\n### tying it all together\n\nFinally.\n\n```javascript\nasync function getMusicData() {\n  const favAlbumIds = await getFavoriteAlbumIds();\n\n  for (const rgid of favAlbumIds) {\n    // check if we have the files; if not, fetch them\n    if (!(await fileExists(jsonPath,))) {\n      await fetchAlbumData(rgid,);\n      await sleep(1000,);\n    }\n    // ... fetch covers ...\n    // ... process images ...\n  }\n\n  // read all the JSON files back into an array\n  let albums = [];\n  for (const file of await fs.readdir(DATA_DIR,)) {\n    // ... parse JSON ...\n    albums.push(content,);\n  }\n\n  // sort by release date (newest to oldest)\n  albums.sort((a, b,) => {\n    // ... date sorting logic ...\n  },);\n\n  return { albums, };\n}\n```\n\nThe `getMusicData` function is the main entry point. It gets the IDs, checks if files exist (so we don’t re-download or re-process images we already have), and coordinates the dithering. 
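\n\nA few small helpers are elided in these snippets (`fileExists`, `sleep`, and the date comparator). Purely as a sketch of what they might look like, with `byReleaseDate` as a hypothetical name and `first-release-date` coming straight from the MusicBrainz data:\n\n```javascript\nimport fs from \"node:fs/promises\";\n\n// true if the path is accessible, false otherwise\nasync function fileExists(path) {\n  try {\n    await fs.access(path);\n    return true;\n  } catch {\n    return false;\n  }\n}\n\n// promise-based delay, used to space out the API requests\nfunction sleep(ms) {\n  return new Promise((resolve) => setTimeout(resolve, ms));\n}\n\n// newest first; ISO-ish dates (\"1994\" or \"1994-03-08\") compare fine\n// as strings, and albums without a date sink to the bottom\nfunction byReleaseDate(a, b) {\n  const da = a[\"first-release-date\"] ?? \"\";\n  const db = b[\"first-release-date\"] ?? \"\";\n  return da < db ? 1 : da > db ? -1 : 0;\n}\n```\n\nIn the real script, `jsonPath` would be derived from `DATA_DIR` and the `rgid`-based `filenameFormat` that eleventy-fetch uses to name its cached files; I’m leaving that detail out here.\n\n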
Finally, it reads all that cached data into a nice array and sorts the albums by their release date.\n\n## the result\n\nIt is actually available on the [music](/music) page!\n\nAll of this backend work results in a simple `albums` array that I can loop through in my template.\n\n```jinja\n<div class=\"album\">\n  {% for album in music.albums | shuffle %}\n    {% set artistNames = \"\" %}\n    {% for ac in album[\"artist-credit\"] %}\n      {% if not loop.first %}{% set artistNames = artistNames + \" · \" %}{% endif %}\n      {% set artistNames = artistNames + ac.name %}\n    {% endfor %}\n\n    <a\n      href=\"https://listenbrainz.org/album/{{ album.id }}\"\n      class=\"album__item\"\n      title=\"{{ album.title }} by {{ artistNames }}\"\n    >\n      <div class=\"album__cover\">\n        <img\n          class=\"album__cover__color\"\n          data-src=\"/assets/images/covers/colored/{{ album.id }}.png\"\n          alt=\"\"\n          width=\"290\"\n          height=\"290\"\n        />\n        <img\n          class=\"album__cover__mono\"\n          src=\"/assets/images/covers/monochrome/{{ album.id }}.png\"\n          alt=\"{{ album.title }} by {{ artistNames }}\"\n          loading=\"lazy\"\n          width=\"290\"\n          height=\"290\"\n        />\n      </div>\n\n      <div class=\"album__meta\">\n        <span class=\"album__title\">{{ album.title }}</span>\n        <span class=\"album__artist\">{{ artistNames }}</span>\n        <span class=\"album__year\" style=\"opacity: 0.5\">\n          ({{ album[\"first-release-date\"] | dateFromISO | readableDate(\"yyyy\") }})\n        </span>\n      </div>\n    </a>\n  {% endfor %}\n</div>\n```\n\nBy using a little CSS to toggle visibility between the `.album__cover__mono` and `.album__cover__color` images, I get a static site that feels dynamic, looks unique, and, best of all, updates automatically whenever I rate a new album on 
CritiqueBrainz.\n","date_published":"2025-11-25T00:00:00.000Z"}]}