<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>andreasglashauser.com</title>
  <subtitle>Blog posts from andreasglashauser.com</subtitle>
  <link href="https://andreasglashauser.com/atom.xml" rel="self" />
  <link href="https://andreasglashauser.com/" />
  <id>https://andreasglashauser.com/</id>
  <updated>2026-04-19T00:00:00Z</updated>
  <author><name>Andreas Glashauser</name></author>
  <entry>
    <title>Nick Bostrom’s Superintelligence</title>
    <link href="https://andreasglashauser.com/blog/nick-bostrom-superintelligence.html" />
    <id>https://andreasglashauser.com/blog/nick-bostrom-superintelligence.html</id>
    <updated>2026-04-19T00:00:00Z</updated>
    <summary>I first read Nick Bostrom’s Superintelligence a few years ago, before the current AI boom took off. Recently I revisited it and was struck by how thought-provoking it still is. If you are interested in the philosophical implications of AI, this book is essential reading. It...</summary>
  </entry>
  <entry>
    <title>Thoughts on .onion sites</title>
    <link href="https://andreasglashauser.com/blog/thoughts-on-onion-sites.html" />
    <id>https://andreasglashauser.com/blog/thoughts-on-onion-sites.html</id>
    <updated>2026-04-12T00:00:00Z</updated>
    <summary>I really like the idea behind .onion sites. Onion addresses are self-authenticating, which means the address is tied directly to the cryptographic key used by the service. That is a stronger form of authentication than relying on a certificate authority to vouch for such a...</summary>
  </entry>
  <entry>
    <title>Thoughts on AI-assisted coding</title>
    <link href="https://andreasglashauser.com/blog/thoughts-on-ai-assisted-coding.html" />
    <id>https://andreasglashauser.com/blog/thoughts-on-ai-assisted-coding.html</id>
    <updated>2026-04-05T00:00:00Z</updated>
    <summary>I first experienced AI-assisted development via GitHub Copilot during its beta in 2021. It was the first AI programming assistant at the time, and it only offered tab-completion, which I believe was based on OpenAI Codex.</summary>
  </entry>
  <entry>
    <title>Exploring MirageOS</title>
    <link href="https://andreasglashauser.com/blog/exploring-mirageos.html" />
    <id>https://andreasglashauser.com/blog/exploring-mirageos.html</id>
    <updated>2026-03-27T00:00:00Z</updated>
    <summary>I first discovered MirageOS and unikernels through mirage-fw, a firewall for Qubes OS. I was amazed by how fast it boots and how few resources it requires: just 1 vCPU and 32 MiB of memory, with no disk usage at all.</summary>
  </entry>
  <entry>
    <title>Thoughts on LiteLLM incident</title>
    <link href="https://andreasglashauser.com/blog/thoughts-on-litellm-compromise.html" />
    <id>https://andreasglashauser.com/blog/thoughts-on-litellm-compromise.html</id>
    <updated>2026-03-27T00:00:00Z</updated>
    <summary>Supply chain attacks through compromised packages aren’t new. If you follow any tech news, you will regularly read about hijacked npm packages, usually the result of a maintainer account getting pwned or API keys being stolen.</summary>
  </entry>
  <entry>
    <title>How to uncensor LLMs?</title>
    <link href="https://andreasglashauser.com/blog/how-to-uncensor-llms.html" />
    <id>https://andreasglashauser.com/blog/how-to-uncensor-llms.html</id>
    <updated>2026-03-23T00:00:00Z</updated>
    <summary>LLMs contain safety guardrails for good reasons: prompt injection, IP violations, abuse and harmful outputs are risks which have to be mitigated. These guardrails lead to situations where the model won’t answer your question, resulting in responses like “I’m sorry, but I...</summary>
  </entry>
</feed>
