LM Studio Experiments

I have been experimenting with LM Studio over the last few days. It’s a tool that allows me to run large language models locally on my computer, making it ideal for when I’m travelling or spending time somewhere with a slow or unreliable internet connection. It’s cross-platform (Windows, Apple Silicon Macs and Linux), and has so far worked on every machine I’ve tried it on, although not always with the default settings.

There are a number of different runtimes and language models to choose from, with the smallest ones working on a laptop with 8GB of RAM, and the largest ones requiring a very powerful computer to get the best out of them. I’ve tried them on my work and home computers to do the sort of thing I use Copilot and ChatGPT for, and the experience is comparable, especially when using some of the larger models.

Ideally the software wants a decent graphics card and lots of RAM. The latter I can provide, but on my desktop computer it initially failed to work because it was trying to use my old and slow graphics card, which is nowhere near good enough for this kind of tool. In this scenario it was just a case of telling it to use a different runtime (CPU llama.cpp), and then everything started working as expected.

It’s also worth mentioning that the Linux version ships as an AppImage, which means it needs FUSE v2 to run. Newer distros will need to install libfuse2t64 to get it running in the first place, but after that everything should just work.
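On Ubuntu 24.04 and its relatives that’s a one-line install (older releases ship libfuse2 instead, and the AppImage filename below is just illustrative):

sudo apt install libfuse2t64
chmod +x LM-Studio-*.AppImage   # the AppImage needs to be executable before first run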

In a world where AI is the future, but privacy is important as well, I think tools like this are definitely worth spending some time with.

Songs of a Lost World

Yesterday was the release day for the new album by The Cure. I used to try and pick up their albums on release day when I was younger, but I don’t think I have managed to do so since Wish, a very long time ago.

My first thought is that this record is really good. I keep trying to compare it to other Cure albums, and the best that I can do is that it’s a bit like the best parts of Disintegration and Bloodflowers. There isn’t a bad song on it, and I think it’s something I’ll come back to for many years.

If you like The Cure you will love this. And if you like music in general it’s definitely worth a listen.

In the evening there was the album launch concert, which took place at The Troxy, but was also broadcast live on YouTube, which meant I could watch it without all the logistical challenges that a concert in London presents. It started with a performance of the whole new album, which is what I expected, but was then followed by a 2+ hour set of old favourites and a few things I wasn’t expecting (such as a whole segment dedicated to songs from Seventeen Seconds). It reminded me how important this band were to me when I was growing up, and how much time I have spent listening to their music. I think this lineup is probably the best ever, although it’s not too different to the one that was on stage in 1989 when I last saw them play live.

Everyone looks older, but they still sound great, and these songs definitely stand the test of time. I think the new ones will as well, and am hopeful that the rumours of one more album before they retire are true.

New Music

It’s been another good week for new music. I try and listen to a few new things every Friday whilst working, and this is the selection I chose yesterday:

I’m already familiar with all three artists, but it’s always good to hear new music from old favourites.

Updated Ubuntu installation script

I’ve made some major changes to my Ubuntu installation script over the last few weeks. Mostly because I finally replaced the hard drive in my laptop and needed to do a clean installation, but also because I wanted to try and get DisplayLink drivers installed by default, which has historically been a pain.

For the latter I’m now using a lot of someone else’s work, and a little bit of my own:

install_displaylink() {
    # Build and install the DisplayLink driver using AdnanHodzic's script
    git clone https://github.com/AdnanHodzic/displaylink-debian.git
    cd displaylink-debian || return
    sudo ./displaylink-debian.sh
    # The Xorg config lives under /etc, so it needs root to write,
    # and the directory may not exist yet on a clean install
    sudo mkdir -p /etc/X11/xorg.conf.d
    sudo wget -q https://raw.githubusercontent.com/teknostatik/debian/master/20-displaylink.conf -O /etc/X11/xorg.conf.d/20-displaylink.conf
    cd ..
}

One day there will be a more elegant solution, but for now this will do.

Since writing the original draft of this post I decided that having these drivers on all my computers really isn’t required, and that I would generally just plug in an HDMI cable and use the dock for the standard I/O. But the function is still there in the script as an optional setting, in case anyone else finds it useful. I’ve also added a few more optional settings, for QMK installation and for enabling the firewall by default. I tested all this on a clean install of both 24.04 and 24.10 and it all seems to work fine.
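For reference, here is roughly what those two new options boil down to; this is a minimal sketch assuming standard tooling, not an excerpt from the script itself:

# QMK CLI installation (pipx avoids the externally-managed-environment
# error that plain pip hits on newer Ubuntu releases)
sudo apt install -y pipx
pipx install qmk

# Enable the firewall with ufw's defaults (deny incoming, allow outgoing)
sudo ufw enable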

Thoughts about living off the grid

I’ve been researching off-grid lifestyles for a while. Not because it’s something that I think would fit the way my family need to live, but because there are definitely things we can learn from people who have chosen that kind of life, and who are making it work.

As a technologist the idea of not having reliable electricity scares me. But then we live for 9 weeks a year in a caravan, and can already power phones and a mobile internet connection for several hours with power banks. Investing in another power bank that could recharge laptops would probably give us everything else we needed in an emergency, and adding some decent solar panels would make us mostly electrically self-sufficient as far as powering devices go.

We also rely on gas a lot in the caravan, but most of the vans on this site use bottled gas anyway, so this would be straightforward to set up if required; the van is already built for it, and it would just be a case of switching a few connections around. That would give us heat, hot water, and the ability to cook on a hob or in an oven.

Water is a trickier problem, but it’s generally the easiest grid to get connected to. A lot of places people live have water when they don’t have electricity, and running water can also be put to work generating electricity.

We are not ready to take these steps yet, but there are things we have learned and changed already as a result of this initial research:

  • Not replacing our unreliable dishwasher and broken tumble drier has not really impacted our lives too much at all. Washing dishes just gets built into my evening chores, and clothes can be dried either on radiators or using the natural heat of the sun. We have a launderette lined up for emergencies, but it is rarely used.
  • When we’re in the caravan, a small portable washing machine will take care of most of our clothes washing needs.
  • We don’t need anywhere near as many clothes as we thought we did, and we definitely have enough of them to last a good while.
  • It’s possible to work off a laptop indefinitely, and there are now good enough portable monitors with a very low power draw to give the desired multi-monitor setup for working.
  • A combination of mobile internet and tethering to phones gives us three layers of internet contingency, at an additional cost of £20 a month plus £10 per trip if concurrent streaming or meetings are likely to happen regularly. We use Smarty for our main connection, and Giffgaff for the backup, but that’s purely based on what works well in this physical location, and mileage may vary elsewhere.

There will definitely be more experiments over the next year or so, and we do need to try and be more mindful about how we use electricity and gas throughout the year, not just when we are travelling.

What is a team?

A couple of months ago I gave a series of talks on creating team charters. As part of my preparation for this piece of work I asked myself a number of questions about what a team actually was, and what sorts of questions teams should be asking themselves regularly. Some of this content was used in the talks, but I’ve tried to make this article more about teams than team charters, as I’ve written and talked about these a lot elsewhere.

What is a team?

A team is a group of people who collaborate to deliver value. Value is anything that a customer would pay for.

A team doesn’t have to all report to the same person, or do the same kind of work; they just need to be able to combine their skills and expertise in a way that delivers something that someone else finds valuable.

Why does the team exist?

Each team should understand why it exists, and be able to articulate that in a short phrase. This should be a mission statement (what you do now), but may also incorporate elements of a vision statement (what you hope to achieve).

You should also understand who your stakeholders are: both the people you deliver work to, and the people outside the team who you depend on for things you need. Ideally a team should be able to control everything it requires to deliver value, but this is unlikely to be the case in most organisations; you are likely to have stakeholders who provide data centres, networks, offices or financial processes. Think about everything you need to do your work, work out where it comes from, and how to get enough of it.

What skills does a team need?

That depends very much on what the team are responsible for delivering, but the skills can be split into three main categories:

  • Subject matter expertise
  • Organisational skills
  • Leadership and management skills

These skills should be present across the team, but it’s fine for one person to have aspects of all three. What you don’t want is people who don’t have any of these skills, as they will find it harder to contribute meaningfully to the team.

How many people should be in the team?

That very much depends on what skills the team needs, and how those skills are distributed between individuals.

I like teams of 4-6 people, because they are small enough to make collective decisions quickly, but large enough to realistically contain a good mix of skills and experience. I may also add 1-2 interns, graduate trainees or apprentices, especially if the work being delivered would benefit from the unique experience people at the start of their career will bring. It’s also a good way to ensure that people starting out are being given a positive experience about what it’s like to be in a team.

How do we empower a team to be self-organising?

A self-organising team understands what it is there to do, and receives work from customers rather than managers. It prioritises work based on what value it adds to one or more customers, but also improves how the work is done so that valuable work is delivered more often. It is free to make decisions on how the work is done, providing any institution-wide restrictions are factored in. The larger the institution, the more likely a team will be restricted in some of the tools it uses, or in where and when it works, but it is still worth thinking about these things and contributing expertise towards redefining the institutional standard, rather than just using something different and then struggling to collaborate with other teams.

Working in this way can require a bit of unlearning, especially for people who hold leadership or management roles. An understanding of the key principles of Agile can help with this, as very often ceremonies associated with Agile (especially Scrum) are used to plan and deliver work in a self-organising team.

How does leadership and management work in a self-organising team?

In a traditional team you would expect the team manager to provide all (or most) of the leadership, management, and organisational skills, but also a fair amount of subject matter expertise. In an agile team we still need all of those skills, but they are more likely to be split more evenly between different team members.

How this might work:

  • One person who has a primary responsibility for ensuring that the work the team does continues to provide value to customers.
  • One person who has a primary responsibility for providing an environment for the team to do their best work.
  • Several people with subject matter expertise, who also have enough leadership, management and organisational skills to ensure that they can deliver within their subject area without hitting blockers on a daily basis.

Ensuring the skills are more equally distributed helps the team when one or more people are on holiday or otherwise unavailable to work. It means that a team is better positioned to deal with things that might slow down delivery, and that the team is not reliant on one person for many different things.

What meetings should the team attend?

There are a number of internal meetings that we find useful:

  • Daily stand up, to ensure that we have a collective understanding of what we are doing each day, and that we can identify anything where we might need help
  • Sprint planning, so that we understand what is being delivered in the next 2 weeks, and who is doing what
  • Retrospectives, to enable us to scrutinise how we work, and suggest improvements that can be fed into the next cycle

We still conduct 121 meetings, but these are less like status update meetings, and more about personal and professional development. Meeting every day to talk about our work doesn’t make 121 meetings less important, it just means we can focus them more effectively.

What about all those other meetings we attend?

Review all meetings and regular time commitments to ensure they add value to customers or directly contribute to improving how the work is done:

  • Why do you attend the meeting?
  • What contributions do you generally make?
  • What outputs do you get from the meeting, and are they valuable?

These questions should help identify meetings that could be dropped in order to free up more time for collaboration and individual contributions.

It is expected that people with leadership and management responsibilities will attend more meetings than individual contributors, but in general there will be fewer meetings than in a traditional hierarchical team. Managers should be protecting their teams from meetings where possible, but this does not mean that people should stop collaborating with other teams; just that they do it in a way that involves only the people who are directly contributing to the collaboration.

Setting up the pipeline

I’ve been meaning to try and set up a pipeline that achieves the following:

  • I write all my blog posts in Markdown and store them in a GitHub repository
  • GitHub Actions creates a new page on my staging blog (GitHub Pages)
  • This is then pushed to WordPress, where a new post is created
  • The WordPress post is then promoted on Mastodon

I think I’m mostly there, but am still debugging. I’ve had two mostly working solutions, but neither meets all the requirements yet.
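For anyone curious, the last two steps don’t need anything exotic. Here’s a minimal sketch using the WordPress REST API and the Mastodon statuses API; all the names, paths and URLs are hypothetical, and it assumes a WordPress application password, a Mastodon access token and jq:

#!/usr/bin/env bash
# Sketch only: assumes WP_USER/WP_APP_PASSWORD and MASTODON_TOKEN are set.

TITLE="My new post"
BODY="$(cat post.html)"   # hypothetical output from the staging build

# Create the WordPress post and capture its public URL from the JSON response
POST_URL=$(curl -s -u "$WP_USER:$WP_APP_PASSWORD" \
  -X POST "https://example.com/wp-json/wp/v2/posts" \
  --data-urlencode "title=$TITLE" \
  --data-urlencode "content=$BODY" \
  --data-urlencode "status=publish" \
  | jq -r '.link')

# Promote the new post on Mastodon
curl -s -H "Authorization: Bearer $MASTODON_TOKEN" \
  -X POST "https://mastodon.example/api/v1/statuses" \
  --data-urlencode "status=New blog post: $TITLE $POST_URL"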

How I Buy Music in 2024

Earlier this year I ran out of space to store records. I could ship a load of old ones to our storage unit, but I really do like having my music around me, so that didn’t feel like a good idea. I was also noticing that the already high price of records was creeping up, from £20 to well over £25 in most cases. So apart from a few new releases from Wrong Speed Records, I’ve not bought vinyl at all this year, and have instead reverted to either digital music or CDs.

Both of those formats provided challenges for me though. Digital music is great, but it means I have to think about backups, and having multiple copies in case of disk failure. I largely solved that by saving all new music to Dropbox, and then making it available on all my computers that way. I also back up to a large hard drive in my main computer that contains everything I have ever downloaded since 2021. The challenge with CDs was that the 20+ year old machine I used to play them on died over a year ago, so I was just ripping them to MP3 and playing them that way.
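The backup itself is nothing fancy; something along these lines, with the paths entirely hypothetical:

# Mirror the Dropbox music folder to the internal backup drive
# (no --delete, since the drive keeps everything ever downloaded)
rsync -av ~/Dropbox/Music/ /mnt/backup/music/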

That wasn’t too much of a problem to start off with, as two of my main sources of music were the albums I get as part of my subscription to The Quietus magazine, and a Bandcamp subscription to the music of Constellation Records. In both cases new music arrives every few weeks, and I download it and play it on my computer. My computer is plugged into an amp, which is connected to the speakers from my old CD player, so the sound is pretty decent.

In the end I did buy a CD player, and another cheap amp to connect it to some other vintage speakers I have. So now I have my record player and CD player routing to one sound system, and my home and work computers to the other. That way I can play any music I own in my home office without too much effort with cables and switches. It also means I spend a lot less money on music because CDs are under half the price of records, and digital music is cheaper still.

After doing this for a few months, I can’t say I notice a huge difference in sound quality between anything I listen to, and I’ve enjoyed the subscription music a lot, whilst also spending far too much time on eBay tracking down things I want to own on CD that other people are getting rid of. That’s where the real cost saving happens, and I’m operating at an average of about £3 per CD right now.

We are also setting up a media swap scheme at work, so I can take in any CDs I don’t want anymore, and hopefully pick up a few things from other people.

So that’s how I’m consuming music in 2024. I still have Spotify for when I’m walking the dog or on my work computer, but my default is still very much an ownership model, with a strong preference to buy directly from the artist or the record company. That way money goes to the people who make the music, and should hopefully support them sufficiently to make more.

Corne Build Guide

The Corne is a 42-key split mechanical keyboard.

The Corne I started with is the RTG model, ordered from Mechboards. I have also now built a few from scratch, using the cheapest kit Mechboards sell at the time. These are what I use now, anywhere that I use a computer for any length of time.

Bill of materials

  • Corne light kit from Mechboards
  • Two controllers and sockets
  • 42 switches (Choc v1 or v2, or MX)
  • 40 1U keycaps and 2 1.5U keycaps to match switches chosen
  • Cable to match controllers chosen (almost certainly a USB-C cable)
  • A case, if you don’t want to use the one included in the kit. Currently I’m using this case, which makes it look a lot better, as well as being significantly more stable on my desk. I’ve also now screwed a metal plate onto the bottom to give even more stability.

I’ve also built a few of these using PCBs I’ve ordered in bulk. This allows the use of third party cases without generating waste, but does require buying at least some TRRS jacks (PJ-320A), and a cable to connect them.

These are some good build guides that I have either used myself or found afterwards.

I don’t plan on trying to compete with these or duplicate them too much.

All the example code in this guide assumes that the Vial firmware is being used, which requires a working installation of the vial-qmk GitHub repository. It also assumes you’re using some kind of RP2040 controller.
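If you need to set that up first, here’s a minimal sketch (assuming the standard vial-kb repository and the usual QMK submodule step):

git clone https://github.com/vial-kb/vial-qmk.git
cd vial-qmk
make git-submodule                # pull in the submodules the build needs
qmk config user.qmk_home=$(pwd)   # point the qmk CLI at this checkout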

Flashing Controllers

Flash each controller to test that it works, and to define which controller belongs to which side of the keyboard.

Firstly test that your firmware compiles:

qmk compile -kb crkbd/rev1 -km vial

Then flash each controller in turn:

qmk flash -c -kb crkbd/rev1 -km vial -e CONVERT_TO=promicro_rp2040 --bootloader uf2-split-left
qmk flash -c -kb crkbd/rev1 -km vial -e CONVERT_TO=promicro_rp2040 --bootloader uf2-split-right

Once you have done this, label the controllers so you are absolutely sure which is which.

Then build the keyboard, using whatever set of instructions you have chosen. Some third party cases will have their own instructions which should definitely be read in full before soldering anything.

Finally, use Vial to configure your keymap.

A few further hints and tips

  • Once you have soldered your controllers and TRRS jacks, plug the keyboard into a computer, and short every single socket to test that the right key codes are sent. I use QMK Configurator for this.
  • Depending on the height of your sockets, you may have to choose between having hot-swappable controllers and using an OLED screen. I’ve not yet managed to get both working on the same keyboard without the OLED cover pointing up at a weird angle. This may bother you less than it bothers me though.
  • If you’re using Vial and you want RGB, OLEDs, and loads of QMK features then you will likely find that your firmware is too large for a pro-micro controller. RP2040s will work fine though, and if you do need to squeeze a build onto a pro-micro there are a few rules.mk options that claw back space (sketched below).
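A minimal sketch of those space-saving settings, added to the keymap’s rules.mk (these are standard QMK options, not taken from my own config):

LTO_ENABLE = yes        # link-time optimisation, usually saves a few KB
CONSOLE_ENABLE = no     # drop the debug console
COMMAND_ENABLE = no     # drop the command feature
MOUSEKEY_ENABLE = no    # drop mouse keys if you don't use them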