<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Numa on Nubis Morocco</title>
    <link>https://nubis.ma/tags/numa/</link>
    <description>Recent content in Numa on Nubis Morocco</description>
    <generator>Hugo -- 0.148.1</generator>
    <language>en</language>
    <lastBuildDate>Sat, 18 Apr 2026 20:45:40 +0200</lastBuildDate>
    <atom:link href="https://nubis.ma/tags/numa/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>A Practical Guide to NUMA Affinity in Kubernetes</title>
      <link>https://nubis.ma/blog/a_practical_guide_to_numa_affinity_in_kubernetes/</link>
      <pubDate>Sun, 22 Feb 2026 10:00:00 +0000</pubDate>
      <guid>https://nubis.ma/blog/a_practical_guide_to_numa_affinity_in_kubernetes/</guid>
      <description>&lt;p&gt;NUMA effects are one of those problems that don’t show up in dashboards, but will happily show up in your p99 latency and in “why is this box slower than the identical box next to it?”&lt;/p&gt;
&lt;p&gt;Kubernetes can help, but only if you enable the right node-level managers (the kubelet’s CPU Manager and Topology Manager) and then verify the result from inside the workload.&lt;/p&gt;
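&lt;p&gt;As a quick sketch of that verification step (assuming a Linux container with &lt;code&gt;/proc&lt;/code&gt; and &lt;code&gt;/sys&lt;/code&gt; available; the exact values depend on your node and pod spec):&lt;/p&gt;

```shell
# CPUs this process may run on; with the CPU Manager's static policy and a
# Guaranteed pod, this should be a small, stable list of exclusive cores.
grep Cpus_allowed_list /proc/self/status

# NUMA nodes visible on the node; each allowed CPU should map onto one of these.
ls -d /sys/devices/system/node/node* 2>/dev/null

# Memory nodes this process may allocate from; ideally it matches the CPU node.
grep Mems_allowed_list /proc/self/status
```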
&lt;h3 id=&#34;the-problem--the-cross-numa-tax&#34;&gt;The Problem — The “Cross-NUMA” tax&lt;/h3&gt;
&lt;p&gt;On multi-socket or multi-NUMA machines, not all CPU cores are equally “close” to all memory and PCIe devices. If a workload ends up with CPUs on one NUMA node and memory (or NIC / GPU) on another, you can pay a real latency / throughput penalty.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
