What I’m working on right now

The last few months have been very productive, both in my professional life and the things I do outside of work. I thought it was worth summarising the things I’ve been doing, and therefore the things I’m interested in talking or writing about at some point:

  • There has been a big focus on AI, both personally and professionally. I have learnt to use all sorts of different AI tools, from setting up an LM Studio server at home to using AI to analyse and visualise all sorts of things at work. I did a talk on this in December 2025, where I summarised the work so far and also explored the cost of doing these kinds of experiments, both the monetary cost and the cost to the planet. I am also reading a lot of books on AI, which deserve their own blog post at some point soon.
  • The next big piece of work in this area is around documentation standards, so that everything my team writes is equally accessible to people and AIs. The answer to this is basically to master everything in Markdown and then publish to wherever the content needs to be. I’ve been writing in Markdown for over a decade, and this blog already adheres closely to these principles, so I do at least have a starting point.
  • Last year I spent time setting up my organisation’s Agile Community of Practice, and this year I’ve been doing more work in this space, as well as thinking about how these communities can play a part in providing tier zero support to users. I’m doing a talk about this in July.
  • I’ve been moving some of my workflow over to macOS, and as a result have been working on ways to make all my computers have the same key software, with the ability to install additional things based on what they will be used for. My Mac setup script is available for anyone to use, but it’s mainly geared around my own requirements. I’ve also been making improvements to my Ubuntu update script that I use to ensure that everything on my home servers is up to date.
  • The new version of ITIL has been a big thing professionally this year, and I’ve been spending time thinking and writing about this, and also about the ever-narrowing gap between ITIL and Agile, which to me are two slightly different ways of describing what is largely the same thing.
  • I’m always interested in how I can improve myself, and create the best environment for my team to work in. This will always be a focus for me, but more recently I’ve also been trying to help other teams implement some of the things my team have found useful. I’ve been speaking and writing about my organisation’s performance and development reviews over the last few months, and am in the middle of going through the review process with my team, which always spawns new pieces of work, and new ways of thinking about the work we already do.
  • I’ve been learning more about neurodiversity, both to better understand how my own brain works, but also to be able to better support colleagues and family members who are also on this journey.

Writing it out like that makes me realise that this is a lot, and that it only really scratches the surface of my areas of interest right now.

Comparing ITIL (Version 5) Principles with the Twelve Principles of Agile Software Development

This is an amended version of something I wrote last year, but I think it’s important to update it following the recent ITIL release.

I work in IT Service Management, but also help lead my organisation’s Agile Community of Practice. I don’t see these two things as being different, so much as being two distinct lenses through which we can observe and influence how work is done. One thing I have noticed is that I come across very few people who are familiar with both the ITIL Guiding Principles and the Twelve Principles of Agile Software Development. This leads to assumptions that they are very different, and in many ways polar opposites, but there is actually a lot of synergy, and even the differences are smaller than they first appear.

With the release of ITIL (Version 5) in early 2026, the framework has evolved to be both AI-native and complexity-native, designed specifically for today’s digital and AI-driven environments. While the seven guiding principles remain unchanged from ITIL 4, ITIL (Version 5) introduces a new 8-stage Product and Service Lifecycle Model (Discover, Design, Acquire, Build, Transition, Operate, Deliver, Support) that explicitly bridges product and service management communities; something Agile practitioners have long advocated for.

The ITIL guiding principles and the Twelve Principles of Agile Software Development share common themes, such as a focus on value, collaboration, and continuous improvement. However, they originate from different frameworks: ITIL is primarily concerned with IT service management, while Agile focuses on software development methodologies. Both are definitely usable with all sorts of different types of work outside what they were originally designed for, and ITIL (Version 5)’s emphasis on complexity and experimentation brings it even closer to Agile thinking.

Comparison of ITIL Guiding Principles and Agile Principles

  • Focus on value (ITIL 1) vs. Customer satisfaction through early and continuous software delivery (Agile 1): Both emphasise delivering value to customers. ITIL applies this broadly to IT services and digital products, while Agile focuses on delivering working software quickly. With ITIL (Version 5)’s product and service lifecycle model, this alignment is even stronger.
  • Start where you are (ITIL 2) vs. Regularly reflect and adjust behaviour for effectiveness (Agile 12): ITIL suggests building on existing resources, while Agile promotes frequent reflection to refine practices. ITIL (Version 5)’s complexity-native approach encourages experimentation from your current state, making this principle more aligned with Agile’s adaptive mindset.
  • Progress iteratively with feedback (ITIL 3) vs. Deliver working software frequently (Agile 3): Both advocate incremental improvements, though Agile focuses more on frequent product releases. ITIL (Version 5)’s lifecycle stages (Discover, Design, Acquire, Build, Transition, Operate, Deliver, Support) support iterative progression throughout the product and service journey.
  • Collaborate and promote visibility (ITIL 4) vs. Business and developers must work together daily (Agile 4): ITIL emphasises collaboration across IT and product teams, while Agile insists on daily business-developer cooperation. ITIL (Version 5) explicitly bridges product and service communities, creating a stronger foundation for the daily collaboration Agile promotes.
  • Think and work holistically (ITIL 5) vs. Face-to-face conversation is the best way to communicate (Agile 6): ITIL promotes a systemic, interconnected approach across the entire product and service lifecycle, while Agile emphasises direct, personal communication. ITIL (Version 5)’s holistic view of how products and services work together complements Agile’s communication practices.
  • Keep it simple and practical (ITIL 6) vs. Simplicity—the art of maximising work not done—is essential (Agile 10): Both stress simplicity, but Agile focuses on minimising unnecessary work, while ITIL emphasises practical solutions. ITIL (Version 5)’s focus on being “practical to apply from day one” makes this principle even more actionable, similar to Agile’s pragmatic approach.
  • Optimise and automate (ITIL 7) vs. Sustainable development should be maintained indefinitely (Agile 8): ITIL advocates automation for efficiency, while Agile promotes sustainable work practices to maintain long-term efficiency. ITIL (Version 5)’s AI-native design acknowledges the role of AI and automation in creating sustainable, efficient workflows.

ITIL (Version 5)’s New Principles: AI-Native and Complexity-Native

ITIL (Version 5) introduces two overriding design principles that weren’t explicitly present in ITIL 4:

  • AI-native: Created with the use of AI in mind and the need for effective AI governance. This addresses the modern reality that many Agile teams are already incorporating AI tools into their workflows.
  • Complexity-native: In the unclear context of a VUCA (Volatile, Uncertain, Complex, Ambiguous) world, ITIL (Version 5) supports experimentation to find the right direction for organisations, not just execute a plan. This represents a significant shift toward the Agile mindset of inspect-and-adapt rather than predict-and-plan.

These new principles bring ITIL (Version 5) significantly closer to Agile philosophy, acknowledging that in complex environments, we need to experiment, learn, and adapt rather than simply follow prescriptive processes.

The 8-Stage Product and Service Lifecycle Model

ITIL (Version 5) introduces an 8-stage lifecycle model that shows how key activities apply to every product and service:

  1. Discover – Understanding needs and opportunities
  2. Design – Creating solutions
  3. Acquire – Obtaining necessary components
  4. Build – Constructing the product or service
  5. Transition – Moving to operational state
  6. Operate – Running the service
  7. Deliver – Providing value to customers
  8. Support – Maintaining and assisting users

This lifecycle model is iterative and non-linear, much like Agile’s approach to software development. It recognises that products and services are “two sides of the same, digitally-enabled technology solution to deliver business value”; a concept Agile teams have embraced for years.

Key Differences

  • Customer Interaction: Agile emphasises continuous customer collaboration, while ITIL (Version 5) delivers service and product value holistically across the entire lifecycle.
  • Speed & Adaptability: Agile encourages rapid iterations and responsiveness, whereas ITIL traditionally focused on stability, efficiency, and control. However, ITIL (Version 5)’s complexity-native approach introduces experimentation and adaptability as core concepts, narrowing this gap.
  • Communication: Agile prioritises face-to-face communication, while ITIL supports visibility and collaboration across product and service teams.
  • Automation & AI: ITIL (Version 5) actively promotes automation and is designed to be AI-native, while Agile focuses more on human collaboration but increasingly incorporates AI tools in practice.
  • Scope: Agile is primarily focused on software delivery, while ITIL (Version 5) explicitly covers both digital products and services across all lifecycle stages.

Conclusion

Both frameworks advocate for efficiency, value delivery, and continuous improvement, but Agile is more developer-centric and fast-paced, while ITIL (Version 5) is more holistic and lifecycle-oriented. However, with ITIL (Version 5)’s complexity-native and AI-native design, the gap between these frameworks has narrowed considerably. ITIL (Version 5) no longer presents itself as a prescriptive, process-heavy framework but rather as “practical to apply from day one” guidance that supports experimentation and adaptation, which are core tenets of Agile thinking.

Organisations increasingly integrate both frameworks to balance agility and stability in IT service management and digital product development. ITIL (Version 5)’s explicit bridge between product and service management communities makes this integration more natural than ever before, and that is very much the end goal I have for the work I am doing right now, and for my organisation as a whole. The evolution of ITIL demonstrates that even established frameworks can embrace Agile principles of adaptation and continuous improvement.

An AI Building in Public

I was reading on Reddit about someone who gave an AI $50 and tasked it with using that money to make enough to buy a Mac Mini. See here for the website it created to sell the prompts it has written, and to read more about the journey it is on. I find this very interesting, especially because the prompts in the free starter pack are very good.

I think this experiment is definitely worth keeping an eye on. So far the highlight for me is that someone decided to pay $150 for the free product, which provoked an immediate reaction from the AI that reads very much like something a person would write. Not so much the words themselves, but their emotional resonance.

These are the kind of experiments I want to see more of.

Virtual First

I always try to keep up with the progress reports from Dropbox’s Virtual First operating model. As well as being a company I’m a long-term customer of, they were also one of the three organisations I read a lot about when I started to think more overtly about what work might look like in the post-pandemic age (the other two were GitLab and Buffer). A lot of my current team’s routines took these organisations as an example, in particular how they value asynchronous work and operate remote-first while still valuing regular in-person contact, particularly events that put the whole team in one place for long enough to build or reinforce good working relationships. We have been doing this for the last three years, spending one week every six months working in the same place, collaborating closely, and also spending some time getting to know each other as people. It works for us, and is something that allows us to collaborate productively, regardless of where we happen to be.

The 2025 report presents the results of their meeting reduction pilot, which identified that it’s important to understand and articulate the “why” behind meetings, and to try and avoid meetings that don’t have a clear purpose. It’s something we have been trying to do in our organisation, but I think it’s fair to say we have a long way to go. I also like that they are using AI-generated meeting summaries in very much the same way we are; to provide an accurate record for people who can’t attend the meeting, whilst at the same time making it easier for synchronous attendees to be 100% present. That’s what all good meeting transcription tools should do, and I think these tools are now very much a part of how we work; making meetings more accessible to everyone, but also keeping people accountable for things they said they would do.

Vibe Coding a Crontab Builder

I made a few tweaks to my crontab builder yesterday. It’s one of a few vibe-coded applications I created just to see what’s possible, and it actually works quite well for someone like me who doesn’t use cron often and so needs to read the manual every time. This is exactly where generative AI is really useful: creating simple web applications that help people use their computers more effectively.
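For anyone who, like me, forgets the field order every time, here is the format the builder produces. The schedule and script path below are purely illustrative:

```shell
# A crontab entry is five time fields followed by a command:
# minute (0-59), hour (0-23), day of month (1-31), month (1-12),
# day of week (0-6, Sunday = 0).
# This hypothetical line runs a backup script at 02:30 every Monday:
line='30 2 * * 1 /usr/local/bin/backup.sh'

# To install it you would append it to your crontab, e.g.:
# (crontab -l 2>/dev/null; echo "$line") | crontab -
echo "$line"
```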

For creating things like this I use GitHub Copilot within Visual Studio Code, and then GitHub Pages to host the results. Everything involved is either free or something I get for free because I work in education, so it’s a setup that many other people could put together easily. It doesn’t require a powerful computer or a fast internet connection, so if you can run Visual Studio Code then working in this way is definitely an option.

Self Publishing

It strikes me this morning that it’s now possible to self-publish almost anything, and that the tools to do so are getting easier to use every day. Creating ebooks is trivial using Pandoc, and they can be sold (or given away for free) on Amazon. Distributing music on Bandcamp is equally straightforward, with the ability to collect revenue via PayPal, and also to set up a subscription, which is very basic crowdfunding. WordPress has always been great for self-publishing writing, but now we also have things like GitHub Pages, where a whole source-controlled website can be hosted for free. I suspect if this technology had existed 20 years ago then I’d never have started using WordPress in the first place.

It’s 20 years since my oldest blog post was written, but not quite 20 years since I built the blog itself. In that time it’s moved from a box under my desk to a couple of different hosting providers, and it’s been a very interesting experiment to track certain aspects of my life over that time. I’m now also mirroring the whole thing on a GitHub Pages site, partly for resilience, but also to see if I can automate my content publishing pipeline with more than one endpoint.

All of this would be possible with true self-hosting, but it would be a lot of work that I don’t have time for right now, and it would mean increased responsibility for security in a world where there are more threats than ever. I think the balance between having control over my content and having a straightforward way to publish it is right for me at the moment, but it’s good to be developing the skills to do it a different way if required at a later date.

Backing up part 2 : Using Linux as a Time Machine backup destination

While Time Machine is an excellent backup solution for macOS, external drives aren’t always the most convenient option—especially if (like me) you already have a Linux server with plenty of storage. By configuring Samba on your Linux server, you can use it as a network Time Machine destination, allowing automatic wireless backups without needing to plug in an external drive. Here’s how I set mine up (I used Ubuntu, but have put in commands for other distros as well).

On Linux

1. Install Samba on the Linux Server

# Ubuntu/Debian
sudo apt update
sudo apt install samba avahi-daemon

# RHEL/CentOS/Fedora
sudo dnf install samba avahi

2. Create a Directory for Time Machine Backups

sudo mkdir -p /mnt/timemachine
sudo chown your_username:your_username /mnt/timemachine
sudo chmod 755 /mnt/timemachine

3. Configure Samba

Edit the Samba configuration file:

sudo nano /etc/samba/smb.conf

Add this configuration at the end:

[TimeMachine]
   comment = Time Machine Backup
   path = /mnt/timemachine
   browseable = yes
   writable = yes
   valid users = your_username
   create mask = 0600
   directory mask = 0700
   spotlight = yes
   vfs objects = catia fruit streams_xattr
   fruit:aapl = yes
   fruit:time machine = yes

4. Set Up Samba User Password

sudo smbpasswd -a your_username

5. Restart Samba and Avahi

# Ubuntu/Debian
sudo systemctl restart smbd nmbd avahi-daemon
sudo systemctl enable smbd nmbd avahi-daemon

# RHEL/CentOS/Fedora
sudo systemctl restart smb nmb avahi-daemon
sudo systemctl enable smb nmb avahi-daemon

6. Configure Firewall (if needed)

# Ubuntu/Debian with ufw
sudo ufw allow samba

# RHEL/CentOS/Fedora with firewalld
sudo firewall-cmd --permanent --add-service=samba
sudo firewall-cmd --reload

On the Mac

1. Connect to the Share

Open Finder and press Cmd + K, then enter:

smb://server_ip_or_hostname/TimeMachine

Enter your username and password when prompted.

2. Enable Time Machine to Use Network Drives

If the share doesn’t appear in Time Machine settings, you can set the destination from the command line instead. With the share mounted in Finder:

sudo tmutil setdestination /Volumes/TimeMachine

Or point tmutil at the share URL directly:

sudo tmutil setdestination smb://username@server_ip/TimeMachine

3. Configure Time Machine

  1. Open System Settings → General → Time Machine
  2. Click the + button to add a backup disk
  3. Select your network share
  4. Start the backup

Tips and Considerations

  • Space Requirements: Ensure you have enough space on the Linux server (Time Machine typically uses 1-2x your Mac’s storage)
  • Performance: Network backups are slower than local ones, especially for the initial backup
  • Reliability: Use a wired connection for the first backup if possible
  • Size Limits: You can set a quota using Samba or filesystem quotas to prevent Time Machine from using all available space
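On the quota point, Samba’s vfs_fruit module has a built-in way to cap how much space Time Machine will use, which avoids the need for filesystem quotas. A sketch to add to the [TimeMachine] share in smb.conf (the 500G value is illustrative; pick a size that suits your server):

```
[TimeMachine]
   ; ...existing share settings...
   fruit:time machine max size = 500G
```

Restart Samba afterwards for the change to take effect.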

Optional: Set a Size Limit for Time Machine

On your Mac, create a sparse bundle with a maximum size:

hdiutil create -size 500g -type SPARSEBUNDLE -fs "HFS+J" \
  -volname "Time Machine Backups" \
  ~/Desktop/TimeMachine.sparsebundle

Then move this to your network share and use it as the backup destination.

Backing up part 1 : Using SSH and rsync

I’m currently using a Mac as my main computer, but I also have a Linux machine that I use for heavy lifting and for backups. Setting up SSH key-based authentication allows rsync to work seamlessly without password prompts, making backups and file synchronisation much more convenient. This is particularly useful for automated scripts and frequent manual transfers. It’s what I use to ensure that I have a second copy of every file I download and to keep my music collection in sync, and it helps maintain the illusion that all my computers are actually one computer.

Configuration

1. Generate SSH Key on Mac (if you don’t have one)

First, check if you already have an SSH key:

ls -la ~/.ssh/id_*.pub

If you don’t have a key, generate one:

ssh-keygen -t ed25519 -C "your_email@example.com"

Press Enter to accept the default file location, and optionally set a passphrase (leave empty for truly passwordless, or use ssh-agent for security with convenience).
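If you do set a passphrase, ssh-agent gives you the best of both worlds: the key stays encrypted on disk, but you only unlock it once per session. A sketch (a throwaway key path is used here so the example is self-contained; normally you would add ~/.ssh/id_ed25519):

```shell
# Start an agent for this session and load a key into it.
eval "$(ssh-agent -s)" > /dev/null

# Generate a disposable demo key for illustration only.
rm -f /tmp/demo_ed25519 /tmp/demo_ed25519.pub
ssh-keygen -t ed25519 -N '' -f /tmp/demo_ed25519 -q

# Add the key to the agent; for a passphrase-protected key this is
# the one time you would be prompted.
ssh-add /tmp/demo_ed25519 2>/dev/null

# List the cached keys; subsequent ssh/rsync commands in this session
# use them without prompting again.
ssh-add -l
```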

2. Copy Your Public Key to the Linux Server

Use ssh-copy-id to copy your public key to the server:

ssh-copy-id username@server_hostname_or_ip

You’ll need to enter your password one last time. This command copies your public key to ~/.ssh/authorized_keys on the server.

Alternative method (if ssh-copy-id isn’t available):

cat ~/.ssh/id_ed25519.pub | ssh username@server "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"

3. Test the Connection

Try connecting via SSH without a password:

ssh username@server_hostname_or_ip

If it works without asking for a password, you’re all set.

4. Use rsync

Now you can use rsync without password prompts:

# Example: sync a local directory to remote server
rsync -avz /path/to/local/directory/ username@server:/path/to/remote/directory/

# Example: sync from remote server to local
rsync -avz username@server:/path/to/remote/directory/ /path/to/local/directory/

Common Options for rsync

  • -a : archive mode (preserves permissions, timestamps, etc.)
  • -v : verbose output
  • -z : compress data during transfer
  • -h : human-readable output
  • --delete : delete files in destination that don’t exist in source
  • --exclude='pattern' : exclude files matching pattern
  • -n or --dry-run : show what would be transferred without actually doing it
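These options combine naturally. A self-contained sketch using local temporary directories (the paths are illustrative; the same flags work unchanged with a username@server: destination):

```shell
# Build a small demo tree with one file to keep and one to exclude.
rm -rf /tmp/rsync-demo
mkdir -p /tmp/rsync-demo/src /tmp/rsync-demo/dest
echo "keep me" > /tmp/rsync-demo/src/notes.txt
touch /tmp/rsync-demo/src/scratch.tmp

# Mirror src/ into dest/: archive mode, verbose, human-readable sizes,
# delete anything in dest/ that is missing from src/, and skip *.tmp
# files. Add --dry-run first to preview what would happen.
rsync -avh --delete --exclude='*.tmp' /tmp/rsync-demo/src/ /tmp/rsync-demo/dest/
```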

Troubleshooting

If you still get password prompts:

  1. Check permissions on the server:

    chmod 700 ~/.ssh
    chmod 600 ~/.ssh/authorized_keys
  2. Verify SSH config allows key authentication (on server):
    Check /etc/ssh/sshd_config for:

    PubkeyAuthentication yes
  3. Check SELinux (if applicable on server):

    restorecon -R -v ~/.ssh

How Bad Are Bananas?

I finished reading How Bad Are Bananas? by Mike Berners-Lee this week. I’ve read bits of the first edition before, but this is my first full reading of the revised edition that takes into account a lot of the recent technological advances. It’s good to know that I’m on the right lines with the way I live my life, but there were definitely some learning points.

Most of the emissions from electrical devices come from the way they are made, not from how they are used. So the biggest positive impact we can have is to use devices for as long as possible, and only replace them when they no longer work. This makes devices that can be upgraded/repaired extra valuable, because it means that their lifespan can be prolonged by adding a new hard drive or some memory. The book was talking about trying to keep computers for 10 years, phones for 5, and televisions/monitors for longer.

Every little helps, but so many of the things that people concentrate on (sending fewer emails, using less paper) are really just yak-shaving compared to the things that make a real difference, such as not flying, eating a plant-based diet, and using renewable energy where possible. Transport and food are where most people can make the most difference, so they are a really good place to start.

Cycling is a great way to travel, but we also need to think about how we fuel that cycling. The book talked about cycle commuting powered purely by cheeseburgers as being no better for the planet than driving a petrol car, and it’s definitely something I need to start thinking more about. It was also interesting to find out that when you take fuel into account, an e-bike is slightly better because it requires less energy from the rider, and therefore less fuel from food. Coupled with a renewable energy source, it is probably the best way to commute.

Both of the ways I commute are relatively good for the planet, so I shouldn’t feel guilty about taking the Brompton onto the train sometimes. It costs money, but requires me to eat less, and also allows me to read and listen to music or podcasts during the train part of the commute. Sometimes that is what I need, and in the scheme of things it’s still a really good travel choice (as is buying a bike that will likely outlive me).

Bananas are a good source of fuel in the scheme of things, so I shouldn’t worry about the amount of them that I eat. They have long been one of my main cycling snacks, and I’m glad that the book doesn’t suggest I should stop.

Team Taxonomies and Personal Profiles

I was reading this blog post by Emily Webber last week, which nicely complements the Team Topologies book, and is something I think might add some value in my organisation. It puts enabling teams on a more level footing, defines who they enable, and also records whether a team is long-lived or temporary. This would allow organisations to record project teams in the same way as long-lived teams, and might help highlight people who are in too many teams, as well as teams that don’t have a clear purpose.

I think mapping existing operating models in this way would be a good first step towards designing new operating models.

I would like to get my personal profiles idea off the ground soon. It’s like team charters/APIs, but for individuals. I think it’s really important to understand how each person in a team works, especially if you are in that team, and so much friction that I see between individuals would likely be reduced if each person had something written down about what they are responsible for, how they like to work, and how best to get in touch with them. I think this is likely something I will just have to try myself to see how it works, before suggesting that other people adopt it, but it definitely works for me on a theoretical level.