<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0">
    <channel>
      <title>ML Wiki</title>
      <link>https://saikatkumardey.com/ml-wiki</link>
      <description>Last 10 notes on ML Wiki</description>
      <generator>Quartz -- quartz.jzhao.xyz</generator>
      <item>
    <title>log</title>
    <link>https://saikatkumardey.com/ml-wiki/log</link>
    <guid>https://saikatkumardey.com/ml-wiki/log</guid>
    <description><![CDATA[ Ingest Log: entries are appended chronologically as sources are ingested. ]]></description>
    <pubDate>Sat, 11 Apr 2026 02:15:51 GMT</pubDate>
  </item><item>
    <title>Classification Token (CLS Token)</title>
    <link>https://saikatkumardey.com/ml-wiki/concepts/classification-token</link>
    <guid>https://saikatkumardey.com/ml-wiki/concepts/classification-token</guid>
    <description><![CDATA[ What It Is: A special learnable vector prepended to the input sequence in transformer models — used as a dedicated slot to accumulate a global representation of the entire sequence. ]]></description>
    <pubDate>Sat, 11 Apr 2026 00:00:00 GMT</pubDate>
  </item><item>
    <title>Inductive Bias</title>
    <link>https://saikatkumardey.com/ml-wiki/concepts/inductive-bias</link>
    <guid>https://saikatkumardey.com/ml-wiki/concepts/inductive-bias</guid>
    <description><![CDATA[ What It Is: Inductive bias is the set of assumptions baked into a model’s architecture that constrain what functions it can represent — independently of the training data. ]]></description>
    <pubDate>Sat, 11 Apr 2026 00:00:00 GMT</pubDate>
  </item><item>
    <title>Patch Embeddings</title>
    <link>https://saikatkumardey.com/ml-wiki/concepts/patch-embeddings</link>
    <guid>https://saikatkumardey.com/ml-wiki/concepts/patch-embeddings</guid>
    <description><![CDATA[ What It Is: Patch embeddings convert a 2D image into a sequence of fixed-size vector tokens that a Transformer can process exactly like word tokens. ]]></description>
    <pubDate>Sat, 11 Apr 2026 00:00:00 GMT</pubDate>
  </item><item>
    <title>Transfer Learning</title>
    <link>https://saikatkumardey.com/ml-wiki/concepts/transfer-learning</link>
    <guid>https://saikatkumardey.com/ml-wiki/concepts/transfer-learning</guid>
    <description><![CDATA[ What It Is: Transfer learning is the practice of pre-training a model on a large general dataset, then adapting it to a smaller, task-specific one — reusing learned representations instead of training from scratch. ]]></description>
    <pubDate>Sat, 11 Apr 2026 00:00:00 GMT</pubDate>
  </item><item>
    <title>ML Wiki</title>
    <link>https://saikatkumardey.com/ml-wiki/</link>
    <guid>https://saikatkumardey.com/ml-wiki/</guid>
    <description><![CDATA[ Notes on frontier ML. Papers, concepts, entities — structured for fast lookup and cross-referencing. ]]></description>
    <pubDate>Sat, 11 Apr 2026 00:00:00 GMT</pubDate>
  </item><item>
    <title>An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale</title>
    <link>https://saikatkumardey.com/ml-wiki/sources/an-image-is-worth-16x16-words</link>
    <guid>https://saikatkumardey.com/ml-wiki/sources/an-image-is-worth-16x16-words</guid>
    <description><![CDATA[ Summary: ViT (Vision Transformer) demonstrates that a pure transformer architecture applied directly to sequences of image patches can match or exceed state-of-the-art CNNs on image classification — when pre-trained at sufficient scale. ]]></description>
    <pubDate>Sat, 11 Apr 2026 00:00:00 GMT</pubDate>
  </item><item>
    <title>Learning Paths</title>
    <link>https://saikatkumardey.com/ml-wiki/learning-paths/</link>
    <guid>https://saikatkumardey.com/ml-wiki/learning-paths/</guid>
    <description><![CDATA[ Curated reading sequences for building deep understanding of ML systems. ]]></description>
    <pubDate>Fri, 10 Apr 2026 15:00:20 GMT</pubDate>
  </item><item>
    <title>Learning Paths</title>
    <link>https://saikatkumardey.com/ml-wiki/learning-paths</link>
    <guid>https://saikatkumardey.com/ml-wiki/learning-paths</guid>
    <description><![CDATA[ These paths are sequenced — each step builds on the last. ]]></description>
    <pubDate>Fri, 10 Apr 2026 08:00:00 GMT</pubDate>
  </item><item>
    <title>Making LLMs Fast — The Inference Efficiency Stack</title>
    <link>https://saikatkumardey.com/ml-wiki/learning-paths/efficient-inference</link>
    <guid>https://saikatkumardey.com/ml-wiki/learning-paths/efficient-inference</guid>
    <description><![CDATA[ Inference efficiency is a stack of mostly independent optimizations, each addressing a different bottleneck. ]]></description>
    <pubDate>Fri, 10 Apr 2026 08:00:00 GMT</pubDate>
  </item>
    </channel>
  </rss>