Artificial Side Hugs

This site—yes, the very blog that you’re wasting your time reading—was upgraded to the latest available version of Astro (v6.1.4) in record time—about two or three minutes flat. And I didn’t really have to put any brain power into the thing at all. Which is mildly concerning, but quite frankly I wanted to do other things today, and yet somehow I remain at the whims of my ADHD. For all their ethical considerations, generative models do seem to be increasingly helpful companions… when used properly. And so, rather than purely shouting at the clouds, I supposed I might as well write up a bit of a technical post for anyone else who wishes to invent problems for themselves. Because surely we’d all be bored to tears if things were easy.

Laziness, codified.

Yeah so I upgraded the blog framework. Yay. It took very little time. Also yay. And I perhaps ran a mere handful of lines. My pet potato would think this beneath them in skill. Naturally, to update I gave it the ol’ bun x @astrojs/upgrade, which yielded a warning:

 astro   Integration upgrade in progress.

      ●  @astrojs/rss will be updated from v4.0.15 to v4.0.18
      ●  @astrojs/sitemap will be updated from v3.7.0 to v3.7.2
      ▲  astro will be updated  from v5.17.2 to v6.1.4
      ▲  @astrojs/mdx will be updated  from v4.3.13 to v5.0.3
      ▲  @astrojs/react will be updated  from v4.4.2 to v5.0.3
      ▲  @astrojs/vercel will be updated  from v9.0.4 to v10.0.4

  wait   Some packages have breaking changes. Continue?

Which naturally, I ignored:

         Yes

 check   Be sure to follow the CHANGELOGs.
         astro Upgrade to Astro v6
         @astrojs/mdx CHANGELOG
         @astrojs/react CHANGELOG
         @astrojs/vercel CHANGELOG

 ██████  Installing dependencies with bun...

╭─────╮  Houston:
│ ◠ ◡ ◠  Have fun building!
╰─────╯

And ignored further…

$ bun dev
$ bunx --bun astro dev
[LegacyContentConfigError] Found legacy content config file in "src/content/config.ts". Please move this file to "src/content.config.ts" and ensure each collection has a loader defined.
  Hint:
    See https://docs.astro.build/en/guides/upgrade-to/v6/#removed-legacy-content-collections for more information on updating collections.
  Error reference:
    https://docs.astro.build/en/reference/errors/legacy-content-config-error/
  Stack trace:
    at getContentPaths (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/content/utils.js:525:19)
    at createSettings (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/core/config/settings.js:146:20)
    at async dev (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/core/dev/dev.js:38:25)
    at async runCommand (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/cli/index.js:197:28)
    at processTicksAndRejections (native:7:39)
error: script "dev" exited with code 1
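For the record, the fix that error is asking for looks roughly like this: a sketch of the new src/content.config.ts using Astro’s glob loader, where the blog collection name, paths, and schema are placeholders for whatever your old src/content/config.ts declared.

```typescript
// src/content.config.ts (moved up from src/content/config.ts)
import { defineCollection, z } from "astro:content";
import { glob } from "astro/loaders";

// "blog" and its base path are placeholders; keep whatever your old
// config declared, but every collection now needs an explicit loader.
const blog = defineCollection({
  loader: glob({ pattern: "**/*.{md,mdx}", base: "./src/content/blog" }),
  schema: z.object({
    title: z.string(),
    pubDate: z.coerce.date(),
  }),
});

export const collections = { blog };
```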

Excellent. Soooo I opened up Emacs and had Gemini go to work.

> I've upgraded this project from astro v5.17.2 to v6.1.4. Migrate it
  properly, including any breaking changes. Then, give me a list of
  everything you've done.

And immediately ran bun dev for good measure…

[LegacyContentConfigError] Found legacy content config file in "src/content/config.ts". Please move this file to "src/content.config.ts" and ensure each collection has a loader defined.
  Hint:
    See https://docs.astro.build/en/guides/upgrade-to/v6/#removed-legacy-content-collections for more information on updating collections.
  Error reference:
    https://docs.astro.build/en/reference/errors/legacy-content-config-error/
  Stack trace:
    at getContentPaths (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/content/utils.js:525:19)
    at createSettings (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/core/config/settings.js:146:20)
    at async dev (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/core/dev/dev.js:38:25)
    at async runCommand (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/cli/index.js:197:28)
    at processTicksAndRejections (native:7:39)
error: script "dev" exited with code 1

Okay. So that’s cool. Copypasta that into *scratch*, gptel-add it, and kick off a new prompt.

> When running `bun dev` I get the error that I've pasted in the
  scratch buffer. Finish your migration and ensure that I don't
  receive errors similar to what I've relayed in the future.

Neat. All good?

Error: Unable to render ViewTransitions because it is undefined!
Did you forget to import the component or is it possible there is a typo?
    at renderFrameworkComponent (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/runtime/server/render/component.js:73:5)
    at renderFrameworkComponent (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/runtime/server/render/component.js:71:9)
    at renderComponent (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/runtime/server/render/component.js:382:34)
    at <anonymous> (/home/user/projects/me/lordinateur-xyz-site/src/layouts/Base.astro:1:1)
    at init (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/runtime/server/render/astro/instance.js:37:28)
    at render (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/runtime/server/render/astro/instance.js:49:3)
    at render (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/runtime/server/render/astro/render-template.js:38:9)
    at toPromise (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/runtime/server/render/astro/render.js:239:22)
    at renderToAsyncIterable (/home/user/projects/me/lordinateur-xyz-site/node_modules/astro/dist/runtime/server/render/astro/render.js:224:33)
    at processTicksAndRejections (native:7:39)

No. Okay. Third time’s the charm. Maybe. Paste over the *scratch* buffer, one more go.

> I have a new error, which I've pasted into the scratch buffer.
  Finish migrating, and ensure errors like that no longer happen.

And that did it. It really took the joy out of everything. But the thing ran, and frankly I didn’t have to do shit. And now I’m going to nudge you in the direction of this toolbox so you too can sap the excitement out of your day-to-day devwork.
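(For the curious, the actual fix behind that last error: Astro renamed the ViewTransitions component to ClientRouter back in v5, and the old name has since been dropped, hence “undefined.” In a layout like Base.astro the change is roughly this:)

```astro
---
// Before: import { ViewTransitions } from "astro:transitions";
// After: same component, new name.
import { ClientRouter } from "astro:transitions";
---
<head>
  <!-- was <ViewTransitions /> -->
  <ClientRouter />
</head>
```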

So what are we doing today?

So glad you asked. We’re going to jack an LLM into our Emacs environment, where surely it shall Do No Evil.

I will confess, I’ve been working a lot recently on my Arch WSL instance, so I’ve not actually tested this out on FreeBSD. Which I shall! But later. Apologies for the transgression.

Setting up the hellhole

So we need to grab a few things for our Emacs install. I’m using Gemini mostly, but also have been dicking around with llama.cpp which absolutely DOES work on FreeBSD, I’ve found, but you may run into some GPU issues. (Turns out you can also use it to schedule LLM jobs on an HPC — Slurm makes it fairly easy, and maybe I’ll do a writeup about that later.)

But yeah, you’ll probably want to get an API key for your favorite cloud LLM if you’re going in that direction. So here are some blocks that you’ll want to add to ~/.emacs to get this thing going. (And if you don’t know Lisp… well, you probably ought to learn it.)

So in this block below, you’re configuring an install of gptel, which is an LLM client. It’s basically what lets you talk to your LLM of choice, and what spits shit out into your buffer. Note here that after :config I’ve got two variables set, gemini-backend and matrix-llama-backend. You can configure those however you want (within reason; read the gptel docs) to add your different LLM providers.

(use-package gptel
  :ensure t
  :config
  ;; Cloud backend: pulls the Gemini API key from auth-source (~/.authinfo).
  (defvar gemini-backend
    (gptel-make-gemini "Gemini"
      :key (lambda ()
             (require 'auth-source)
             (let ((info (car (auth-source-search :host "gemini" :user "apikey"))))
               (if info
                   (let ((secret (plist-get info :secret)))
                     ;; auth-source may hand back the secret wrapped in a closure.
                     (if (functionp secret) (funcall secret) secret))
                 (error "Gemini API key not found in auth-source"))))
      :stream t))
  ;; Local backend: llama.cpp speaking the OpenAI-compatible API.
  (defvar matrix-llama-backend
    (gptel-make-openai "llama-cpp"
      :host "1.2.3.4:11434"
      :stream t
      :protocol "http"
      :models '("qwen-coder")))
  ;; Default to the local model.
  (setq-default gptel-backend matrix-llama-backend)
  (setq-default gptel-model "qwen-coder"))

One thing you ought to keep in mind is that it’s not a super great practice to shove your secrets right into your .emacs config. Honestly, it’s not great to put it where I did either; there are better ways, like an encrypted ~/.authinfo.gpg (but I’m lazy). In the meantime, go create a new file (if it doesn’t exist) called ~/.authinfo and slap this line in it:

machine gemini login apikey password <GEMINI_KEY>

And replace <GEMINI_KEY> with, well, your Gemini API key. And for more info on that sort of thing, you can take a look at the docs.

If you don’t want to use MCP servers, you’re pretty much done here. You’d just be manually managing your context—the shit that you’re sending to the LLM. Keep in mind that LLMs basically don’t have state, or at least not the kind that you or I think of. As people, you and I are stateful. Consider this conversation:

Alice: Hi Bob, how are you today?
Bob:   Oh I'm fine, I had a fun time stealing waffles today.
Alice: What the fuck Bob why are you always stealing waffles.
Bob:   Oh they're from Waffle House.
Alice: That doesn't answer my question Bob.
Bob:   They know what they did.

Okay. Alice and Bob do not do this:

Alice: Hi Bob, how are you today?
Bob:   (Alice said: Hi Bob, how are you today?)
       Oh I'm fine, I had a fun time stealing waffles today.
Alice: (Alice said: Hi Bob, how are you today?)
       (Bob said: Oh I'm fine, I had a fun time stealing waffles today.)
       What the fuck Bob why are you always stealing waffles.
Bob:   (Alice said: Hi Bob, how are you today?)
       (Bob said: Oh I'm fine, I had a fun time stealing waffles today.)
       (Alice said: What the fuck Bob why are you always stealing waffles.)
       Oh they're from Waffle House.
Alice: (Alice said: Hi Bob, how are you doing today?)
       (Bob said: Oh I'm fine, I had a fun time stealing waffles today.)
       (Alice said: What the fuck Bob why are you always stealing waffles.)
       (Bob said: Oh they're from Waffle House.)
       That doesn't answer my question Bob.
Bob:   (Alice said: Hi Bob, how are you doing today?)
       (Bob said: Oh I'm fine, I had a fun time stealing waffles today.)
       (Alice said: What the fuck Bob why are you always stealing waffles.)
       (Bob said: Oh they're from Waffle House.)
       (Alice said: That doesn't answer my question Bob.)
       They know what they did.

That’s a lot of stuff going back and forth. Alice and Bob are way more likely to have a conversation like this:

Alice: Hi Bob, how are you today?
Bob:   (Alice said: Hi Bob, how are you today?)
       Oh I'm fine, I had a fun time stealing waffles today.
Alice: (Bob said: Oh I'm fine, I had a fun time stealing waffles today.)
       What the fuck Bob why are you always stealing waffles.
Bob:   (Alice said: What the fuck Bob why are you always stealing waffles.)
       Oh they're from Waffle House.
Alice: (Bob said: Oh they're from Waffle House.)
       That doesn't answer my question Bob.
Bob:   (Alice said: That doesn't answer my question Bob.)
       They know what they did.

Yet LLMs do the longer (dumber) operation. So if you’re chucking your entire project at an LLM, you’re re-sending it every single time you make a request. That burns tokens, and it fills up the context window (essentially, the amount of text an LLM can hold at once before it starts providing nonsensical responses). And if you’re adding and removing files from the context manually, you’ll have to do it every time you have a different question. That’s a pain in the ass. Enter Model Context Protocol servers, aka MCP servers. These can be used to let the LLM drive a little on your machine and ask for the things it needs, which can be a bit better at managing that context window.
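To make that concrete, here’s a toy sketch of what “stateless” means in practice, with a made-up tokensSent helper standing in for a real API call: every request replays the whole history, so the payload grows every turn even when each new question is tiny.

```typescript
// Toy model of a stateless chat API: each request replays the full history.
type Msg = { role: "user" | "assistant"; content: string };

// Hypothetical stand-in for an API call; it just measures the payload
// with a crude word count instead of real tokens.
function tokensSent(history: Msg[]): number {
  return history.reduce((n, m) => n + m.content.split(/\s+/).length, 0);
}

const turns: [string, string][] = [
  ["Hi Bob, how are you today?", "Oh I'm fine, I had a fun time stealing waffles today."],
  ["Why are you always stealing waffles?", "Oh they're from Waffle House."],
  ["That doesn't answer my question.", "They know what they did."],
];

const history: Msg[] = [];
const perTurn: number[] = [];
for (const [question, answer] of turns) {
  history.push({ role: "user", content: question });
  perTurn.push(tokensSent(history)); // the ENTIRE pile goes over the wire
  history.push({ role: "assistant", content: answer });
}
console.log(perTurn); // grows each turn, even though every question is short
```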

So if you want that shit, you’re going to need to set a few things up. First, you’ll actually want to go install some MCP servers. I’ve installed a few from @modelcontextprotocol — they’re not all super maintained currently, so I’d go explore. (If you’re using Emacs and you’re not up for poking around, what the fuck are you doing with your life?) In general, I’ve just installed those MCP packages globally and gone on back to Emacs.

After you’ve done that, here’s some more stuff to put into ~/.emacs:


(use-package mcp
  :vc (:url "https://github.com/lizqwerscott/mcp.el"))

(use-package gptel-mcp
  :vc (:url "https://github.com/calebpower/gptel-mcp.el"
       :branch "main")
  :after gptel
  :bind (:map gptel-mode-map
              ("C-c m" . gptel-mcp-dispatch)))

(defun my/gptel-mcp-projectile-sync ()
  "Set the MCP server directory to the root of the current Projectile project."
  (interactive)
  (when (projectile-project-p)
    (let ((proj-root (projectile-project-root)))
      (setq mcp-hub-servers
            `(
              ("ProjectFiles" . (:command "npx"
                                 :args ("-y" "@modelcontextprotocol/server-filesystem"
                                        ,(expand-file-name proj-root))))

              ("EverythingTools" . (:command "npx"
                                    :args ("-y" "@modelcontextprotocol/server-everything")))

              ("Browser" . (:command "npx"
                                     :args ("-y" "@modelcontextprotocol/server-puppeteer")))
              ))

      (message "MCP context set to Projectile root: %s" proj-root))))

(add-hook 'projectile-after-switch-project-hook #'my/gptel-mcp-projectile-sync)

So you have two packages to add, a function, and a hook. The first package (mcp) is the thing that interfaces with the MCP servers. Then you have gptel-mcp, which (as you might guess) bridges mcp and gptel. Lastly, you have my/gptel-mcp-projectile-sync, a nifty little trigger that points your MCP servers (particularly the one I’ve labeled “ProjectFiles”, which is responsible for letting the LLM peruse and/or modify your local filesystem) at a scoped folder so that they don’t go on a rampage through your whole system.

This does mean that you’ll want projectile if you want to change what mcp sees on the fly when switching projects. So somewhere before all of that shenaniganery, make sure you have something like this in ~/.emacs:

(use-package projectile
  :ensure t
  :init
  (projectile-mode +1)
  :bind-keymap
  ("C-c p" . projectile-command-map))

Anywho, that add-hook piece just ties your custom function into projectile.

Using your new hellhole

Okay so how do you use it? Well there are probably better ways to do it, but for now what I’ve got working is this:

  1. Open emacs (if you get this far, you’re further along than like 90% of folks out there, and you’re way behind anyone with a life).
  2. If projectile doesn’t already know about your project folder, do M-x projectile-add-known-project and navigate to it. If you open a file in a git repo, it should pick up the root of your project properly.
  3. Switch to your project - C-c p p.
  4. Configure gptel — I generally select a small model to test things. So that’s something like M-x gptel-menu RET -m. Then C-g to yeet the menu when you’re done.
  5. Then do M-x gptel and create a buffer — I tend to call it *llama* or whatever, but it’s up to you. It’s your thing, man.
  6. Then, in your gptel buffer (e.g. *llama*), do C-c m s to start the servers and, importantly, WAIT for like 10 seconds for the thing to register. (If you accidentally open your mail with C-x m don’t feel too bad, I keep doing it and it keeps pissing me off—just kill it with C-x k RET.)
  7. Then, in your gptel buffer (e.g. *llama*), do C-c m a to activate your tools.

After all that, you’re ready to do your thing. Give it a prompt. Do C-c RET to send off the prompt. Yayyyyyy.

Change out your hellhole

So you’ve finally finished working on that stupid ass bug in one project and are ready to move on to the next one. Here’s the weird part (maybe someone cooler and with more time can go fix this or whatever): the best way I’ve found to handle it is to restart the MCP servers after switching projects. So:

  1. C-c p p and select a project (RET).
  2. C-x b to switch over to your gptel buffer (e.g. *llama*).
  3. C-c m d to deactivate your toolchain.
  4. C-c m r to restart the servers (WAIT for 10 seconds or so! It’ll tell you when it’s done).
  5. C-c m a to reactivate your toolchain.

And you’re off to the races.

What do I do with my hellhole now?

Idk man, we’re not that close. Live life. Dream dreams. Be Tony Stark.

No but seriously. Tony Stark doesn’t have Jarvis do all his shit for him. He designs his shit. And then, importantly, he tests it. He documents it (well, voice records or whatever). And when the tools are failing him, he does it himself. He built this in a cave!

So don’t be the asshole that just generates AI slop. Use it to enhance, not to replace. And good luck.

lordinateur.xyz