The Critical Section

Recent posts:

  • 11 Nov 2025 IS-LM-UIP: Dynamics and Intuition

    This is yet another post on the IS-LM model in macroeconomics, with the twist that this time it's an open economy and I've actually coded it up into a nice dashboard. Needless to say, even though the IS-LM model has abysmal predictive power, it captures essential relationships in macroeconomic theory and is a useful educational tool for understanding the different market processes. Through a graphical approach, we'll summarize famous economic scenarios and concepts, and we'll explore the role that expectations play in the dynamics.


  • 15 Oct 2025 LLM Latent Space Hacking

    From a previous post we've seen that LLMs, in their current form, cannot learn to reason in an end-to-end manner. This is because end-to-end learning requires a single computation graph in which gradients flow through the generated tokens, and since LLMs sample discrete tokens of variable length, differentiation is impossible. Supervised finetuning is end-to-end, but it only tries to reproduce the reasoning patterns in the expert traces instead of letting the model discover them on its own. This post explores a related question: can we find the tokens that best condition a given response? We can interpret the response as the correct final answer and the tokens we seek as the best reasoning trace that produces it.


  • 20 Sep 2025 A World of Auctions

    A core element of free markets is the auction - a procedure for the optimal allocation of scarce resources. Auctions have multiple social benefits: they help deliver an item to the person who values it most; through bidding prices, they reveal underlying demand; and they generate revenue for the sellers. They also play a big part in our world today - radio spectrum, electricity markets, carbon and pollution permits, treasury bonds, fishing quotas and natural resources, online advertising, airport slots and transport rights, art and collectibles - all rely on auctions.


  • 10 Sep 2025 Generative Models: Flow Matching

    Flow matching grew out of two earlier ideas. Normalizing flows showed how to gradually transform simple noise into complex data, while optimal transport studied how to move one distribution into another along efficient paths. Flow matching takes inspiration from both: it learns smooth velocity fields that carry noise toward data without the heavy math of exact Jacobians or transport costs. This simplicity and flexibility helped it quickly gain attention as an alternative to diffusion models. Let's see what it's all about.


  • 30 Aug 2025 Can LLMs Learn to Reason End-to-End?

    I was recently thinking about reasoning models and why RL methods like GRPO have become so prominent in that context. Previously, people were finetuning LLMs on step-by-step reasoning traces, yet this can hardly be called "learning to reason", as it's rather about learning to reproduce the reasoning patterns of some other expert. With RL for LLMs the paradigm has shifted and now the model can, in principle, discover the right reasoning patterns for the task. Yet, there are still lots of nuances. Here we consider a simple question: can LLMs learn the best reasoning patterns in an end-to-end manner? By understanding this we'll see precisely how different model designs and training paradigms fit together, like pieces of a puzzle.


[Figure: Expander graph]
Figure 1: An expander - a most curious sparse graph with strong connectivity properties. Every subset containing fewer than half of the vertices has a proportionally large edge boundary.


A personal blog for artificial intelligence and similar topics.