
Ruby Multithreading & Concurrency

14 deep-dive topics with real-world analogies, visualizations, and all the syntax you need.

🚦 Concurrency vs Parallelism (Core Concept)
🍳 Chef analogy: One chef who chops onions, stirs soup, then checks the oven — switching tasks quickly — is concurrent. Two chefs each doing their own dish at the same time is parallel.

Concurrency is about dealing with many things at once (structure). Parallelism is about doing many things at once (execution). You can have concurrency without parallelism.

Visual: Concurrency (1 CPU core, 2 threads)

Time ──────────────────────────────────────▢
Thread A: ████░░░░████░░░░████░░░░
Thread B: ░░░░████░░░░████░░░░████
          ^switch ^switch ^switch
          (CPU switches between them rapidly)

Visual: Parallelism (2 CPU cores, 2 threads)

Time ──────────────────────▢
Core 1 — Thread A: ████████████
Core 2 — Thread B: ████████████
                   (truly simultaneous!)

Ruby Code: Concurrent threads (I/O bound)

# Two threads fetching data "at the same time"
# They don't run in parallel (GIL), but while
# thread A waits for I/O, thread B can run.

require 'net/http'

t1 = Thread.new { Net::HTTP.get(URI("https://api.example.com/users")) }
t2 = Thread.new { Net::HTTP.get(URI("https://api.example.com/orders")) }

result1 = t1.value  # blocks until t1 finishes, returns result
result2 = t2.value

puts "Both fetched!"

Key Rule

  • I/O-bound work (HTTP calls, DB queries, file reads) → threads help, even with GIL
  • CPU-bound work (image processing, encryption) → threads DON'T help in CRuby; use processes or Ractor
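For CPU-bound work on CRuby, forked processes sidestep the GIL entirely. A minimal sketch (POSIX-only; `fork` is unavailable on Windows):

```ruby
# Each forked child is a full process with its own interpreter and GIL,
# so the two loops below can run on separate CPU cores truly in parallel.
pids = 2.times.map do
  fork do
    5_000_000.times { }  # CPU-bound work, no I/O
  end
end

pids.each { |pid| Process.wait(pid) }  # reap both children
puts "both child processes finished"
```

Processes pay for this isolation with higher memory use and no shared state; results must come back via pipes, files, or a queue like Redis.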
🧵 Thread Basics — Lifecycle & API (Threading)
🏭 Factory workers analogy: Imagine a factory with a manager (main thread) and multiple workers (child threads). The manager starts each worker, can check if they're alive, wait for them to finish, or fire them.

Thread Lifecycle States

  Thread.new
      │
      ▼
  ┌─────────┐   scheduled    ┌─────────┐
  │  sleep  │◄──────────────►│  run    │
  │(waiting)│                │(active) │
  └─────────┘                └────┬────┘
                                  │ finish / kill
                             ┌────▼────┐
                             │  dead   │
                             └─────────┘

  status: "sleep" | "run" | "aborting" | false (dead) | nil (not started)
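The states above can be observed directly from another thread:

```ruby
t = Thread.new { sleep }  # sleeps indefinitely until woken
sleep 0.1                 # give the thread time to start sleeping

puts t.status   # => "sleep" (waiting)
puts t.alive?   # => true

t.wakeup        # wake it up; its sleep returns and the block finishes
t.join
puts t.status   # => false (finished normally)
```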

Creating & Running Threads

# Thread.new β€” create + start immediately
t = Thread.new do
  puts "Worker started (Thread ID: #{Thread.current.object_id})"
  sleep 1
  puts "Worker done"
end

puts "Main thread keeps running..."
t.join   # Wait for t to finish before continuing
puts "All done!"

# Thread.new with arguments
t = Thread.new("Alice", 3) do |name, count|
  count.times { puts "Hello from #{name}" }
end
t.join

Key Thread Methods

Method          | What it does                  | Example
----------------|-------------------------------|------------------------------
Thread.new { }  | Create & start a thread       | t = Thread.new { work }
t.join          | Wait for thread to finish     | t.join
t.join(timeout) | Wait max N seconds            | t.join(5)
t.value         | Wait + return last expression | result = t.value
t.status        | "run", "sleep", false, nil    | t.status  # => "sleep"
t.alive?        | Is the thread still running?  | t.alive?  # => true
t.kill          | Terminate thread immediately  | t.kill
t.raise(e)      | Raise exception in thread     | t.raise RuntimeError
t.wakeup        | Wake a sleeping thread        | t.wakeup
Thread.current  | The currently running thread  | Thread.current.object_id
Thread.main     | The main thread               | Thread.main == Thread.current
Thread.list     | All living threads            | Thread.list.count
Thread.pass     | Hint scheduler to switch      | Thread.pass

Collecting Results from Multiple Threads

# Map pattern β€” spawn N threads, collect results
urls = ["https://a.com", "https://b.com", "https://c.com"]

threads = urls.map do |url|
  Thread.new { fetch(url) }  # returns thread
end

results = threads.map(&:value)  # wait for all, collect return values
puts results.inspect

Thread Exceptions

# By default, an exception inside a thread does NOT reach the main thread.
# (Since Ruby 2.5 it is at least reported to stderr via report_on_exception.)
t = Thread.new { raise "Something broke!" }
t.join  # Re-raises the exception here — you MUST join (or call value) to catch it

# OR set globally
Thread.abort_on_exception = true  # crash main thread if any thread crashes

# OR per thread
t = Thread.new { raise "oops" }
t.abort_on_exception = true
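Because the exception re-raises at the join/value call site, that is where you rescue it:

```ruby
t = Thread.new do
  Thread.current.report_on_exception = false  # silence the default stderr report
  raise ArgumentError, "bad input"
end

begin
  t.join   # the thread's exception re-raises here
rescue ArgumentError => e
  puts "Caught from thread: #{e.message}"
end
```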
🔒 GIL — Global Interpreter Lock (Threading)
🚻 Single bathroom key analogy: An office has 10 employees (threads) but only ONE bathroom key (GIL). To use the bathroom (run Ruby code), you must grab the key first. Only one person can have the key — and thus use the bathroom — at any moment.

CRuby (MRI) uses a Global Interpreter Lock: a mutex that ensures only one thread executes Ruby bytecode at a time. This makes CRuby simpler and thread-safe internally, but limits CPU parallelism.

Visual: GIL in Action

CPU Core 1    CPU Core 2
     │              │
 Thread A      Thread B
     │              │
  [grab GIL]        │    ← Thread B must wait
  runs Ruby         │
  releases GIL      │
     │         [grab GIL]
     │          runs Ruby
     │          releases GIL
     ▼              ▼
  (threads take TURNS — never truly parallel in CRuby)

The I/O Exception: Why Threads Still Help

When a thread does I/O (reading a file, waiting for an HTTP response, querying a database), it releases the GIL while waiting. This means other threads can run Ruby during that wait.

Thread A:  [GIL]──run──[release GIL: waiting for DB]──────────[GIL]──run──▢
Thread B:            [GIL]──run──[release GIL: HTTP wait]────[GIL]──run──▢
                      ↑ Thread B grabs GIL while A waits for DB!

# I/O bound: threads help! (GIL released during sleep/IO)
require 'benchmark'

def io_work = sleep(0.5)  # simulates DB query or HTTP call

# Sequential: 2 seconds total
Benchmark.measure { 4.times { io_work } }  # ~2.0s

# Concurrent with threads: ~0.5 seconds total
Benchmark.measure {
  threads = 4.times.map { Thread.new { io_work } }
  threads.each(&:join)
}  # ~0.5s — huge speedup!

# CPU bound: threads DON'T help (GIL is held)
def cpu_work = 10_000_000.times { }  # pure Ruby computation
# Sequential and threaded take the same time
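The CPU-bound claim is easy to verify with Benchmark.realtime (timings are indicative and vary by machine):

```ruby
require 'benchmark'

def cpu_work = 5_000_000.times { }  # pure Ruby computation

sequential = Benchmark.realtime { 4.times { cpu_work } }
threaded   = Benchmark.realtime do
  4.times.map { Thread.new { cpu_work } }.each(&:join)
end

# On CRuby both numbers come out roughly equal: the GIL serializes the
# Ruby bytecode, so the four threads just take turns on one core.
puts format("sequential: %.2fs  threaded: %.2fs", sequential, threaded)
```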

Alternatives for True Parallelism

Option              | GIL?             | Best for
--------------------|------------------|----------------------------
CRuby threads       | Yes (GIL)        | I/O-bound work
Processes (fork)    | Each has own GIL | CPU-bound, isolation needed
Ractor (Ruby 3+)    | Per-Ractor GIL   | CPU-bound within Ruby
JRuby / TruffleRuby | No GIL           | True thread parallelism
⚠️ Race Conditions (Thread Safety)
🍪 Last cookie analogy: Two kids check the cookie jar — both see 1 cookie — both decide to take it — both reach in. Now what? Chaos! That's a race condition: two threads both reading and writing shared data without coordination.

A race condition happens when the outcome of code depends on the unpredictable order in which threads execute. It's one of the hardest bugs to reproduce because it's timing-dependent.

Classic Example: Broken Counter

counter = 0
threads = 1000.times.map do
  Thread.new { counter += 1 }
end
threads.each(&:join)
puts counter  # Expected: 1000. Actual: often 950-999!

Why Does This Happen? (Read-Modify-Write)

The line counter += 1 is NOT atomic — it's three separate steps:

counter += 1  is actually:
  Step 1: READ  counter  (gets 5)
  Step 2: ADD   5 + 1    (gets 6)
  Step 3: WRITE counter = 6

Thread A            Thread B
READ  counter → 5
                    READ  counter → 5  ← B reads BEFORE A writes!
ADD  5 + 1 = 6
WRITE counter = 6
                    ADD  5 + 1 = 6
                    WRITE counter = 6  ← Both wrote 6 instead of 7!

Check-Then-Act Bug

# Dangerous: check and act are not atomic
def withdraw(account, amount)
  if account.balance >= amount    # Thread A checks: balance = 100, amount = 80 ✓
    # --- Thread B runs here: also checks 100 >= 80 ✓, withdraws 80 ---
    account.balance -= amount     # Thread A now withdraws 80: balance = -60 !!!
  end
end

Signs of Race Conditions

  • Bug only happens under high load or in production
  • Bug disappears when you add logging (Heisenbug)
  • Results are "almost right" — off by a few, not completely wrong
  • Tests pass individually but fail when run in parallel

Solution: Use Mutex (next topic) to make read-modify-write operations atomic.

🔑 Mutex — Mutual Exclusion (Synchronization)
🎤 Talking stick analogy: In a meeting, only the person holding the talking stick can speak. Everyone else waits. When they're done, they pass the stick. A Mutex is the talking stick for your code — only one thread can be "inside" at a time.

A Mutex (Mutual Exclusion lock) is a flag that only one thread can hold at a time. While a thread holds the mutex, all other threads trying to acquire it will wait until it's released.

Visual: Mutex Protecting a Counter

Thread A: ──[ wants lock ]──[  HOLDS LOCK  ]──[ releases ]──────────────▢
Thread B: ──────────────────[ waiting... ]───[ HOLDS LOCK ]──[ releases ]▢
                              ↑ blocked until A releases

Basic Mutex Usage

mutex = Mutex.new
counter = 0

threads = 1000.times.map do
  Thread.new do
    mutex.synchronize do   # Only one thread runs this block at a time
      counter += 1         # Now safe!
    end
  end
end

threads.each(&:join)
puts counter  # => Always exactly 1000

Manual lock / unlock (avoid this pattern)

mutex = Mutex.new

mutex.lock    # Grab the lock (blocks if already held)
begin
  # critical section
  counter += 1
ensure
  mutex.unlock  # ALWAYS release, even if exception!
end

# synchronize { } is better — it handles the ensure automatically

try_lock — Non-blocking Attempt

mutex = Mutex.new

if mutex.try_lock
  begin
    do_work  # got the lock, do work
  ensure
    mutex.unlock
  end
else
  puts "Lock was busy, skipping..."  # didn't block
end

Real-World: Thread-Safe Bank Account

class BankAccount
  def initialize(balance)
    @balance = balance
    @mutex = Mutex.new
  end

  def deposit(amount)
    @mutex.synchronize { @balance += amount }
  end

  def withdraw(amount)
    @mutex.synchronize do
      raise "Insufficient funds" if @balance < amount
      @balance -= amount
    end
  end

  def balance
    @mutex.synchronize { @balance }  # Even reads can need protection
  end
end

account = BankAccount.new(1000)
threads = 50.times.map { Thread.new { account.deposit(10) } }
threads.each(&:join)
puts account.balance  # => Always 1500

Rules for Mutex

  • Never call mutex.lock twice from the same thread — it deadlocks (use Monitor instead)
  • Keep critical sections as short as possible — holding a mutex blocks all other threads
  • Always use synchronize { } over manual lock/unlock for safety
🔄 Monitor — Re-entrant Locking (Synchronization)
🏠 Your own front door analogy: A Mutex is like a door that locks behind you — even you can't re-enter while you're inside! A Monitor is smarter: you already have the key to your own house, so you can re-enter as many times as you want. But strangers still can't get in.

A Monitor is a re-entrant mutual exclusion mechanism. Unlike Mutex, the same thread can acquire the monitor's lock multiple times without deadlocking itself. It also provides condition variables for thread coordination.

Mutex deadlock vs Monitor safety

require 'monitor'

# DANGER with Mutex — re-entrancy causes deadlock:
mutex = Mutex.new
mutex.synchronize do
  mutex.synchronize { }  # => DEADLOCK! (locks itself)
end

# SAFE with Monitor — re-entrant:
monitor = Monitor.new
monitor.synchronize do
  monitor.synchronize { puts "No deadlock!" }  # fine!
end

MonitorMixin β€” Add to Any Class

require 'monitor'

class SafeCache
  include MonitorMixin  # adds synchronize, new_cond, etc.

  def initialize
    super  # must call super to init MonitorMixin
    @data = {}
  end

  def fetch(key)
    synchronize { @data[key] }
  end

  def store(key, value)
    synchronize { @data[key] = value }
  end

  # A method that calls other synchronized methods safely
  def fetch_or_store(key, default)
    synchronize do
      fetch(key) || store(key, default)  # nested synchronize calls — fine, the lock is re-entrant
    end
  end
end

Condition Variables — Coordinating Threads

A condition variable lets a thread wait until a specific condition is true, then another thread can signal it to wake up. This is the foundation of the producer-consumer pattern.

require 'monitor'

monitor = Monitor.new
cond    = monitor.new_cond  # condition variable tied to this monitor
queue   = []

# Consumer thread — waits until there's data
consumer = Thread.new do
  monitor.synchronize do
    cond.wait_while { queue.empty? }  # releases lock + sleeps until signaled
    puts "Got: #{queue.shift}"
  end
end

sleep 0.5  # let consumer start waiting

# Producer thread — adds data then signals
producer = Thread.new do
  monitor.synchronize do
    queue << "hello!"
    cond.signal  # wake up the waiting consumer
  end
end

[consumer, producer].each(&:join)

When to Use Monitor over Mutex

                          | Mutex                        | Monitor
--------------------------|------------------------------|--------------------
Re-entrant (same thread)? | No (deadlock!)               | Yes
Condition variables?      | ConditionVariable (separate) | Built-in new_cond
Use as mixin?             | No                           | Yes (MonitorMixin)
Overhead                  | Lower                        | Slightly higher
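For comparison, the same producer/consumer handshake with a plain Mutex uses the stdlib's separate ConditionVariable class:

```ruby
mutex = Mutex.new
cond  = ConditionVariable.new
queue = []

consumer = Thread.new do
  mutex.synchronize do
    cond.wait(mutex) while queue.empty?  # releases the mutex while sleeping
    puts "Got: #{queue.shift}"
  end
end

producer = Thread.new do
  mutex.synchronize do
    queue << "hello!"
    cond.signal  # wake one waiting thread
  end
end

[consumer, producer].each(&:join)
```

Note the `while queue.empty?` loop: condition variables can wake spuriously, so the condition is always re-checked after waking.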
📌 Thread-Local Variables (Threading)
🎒 Personal locker analogy: In a school, each student has their own locker. Student A can't open Student B's locker. Thread-local variables are like personal lockers — each thread has its own private copy of a variable that no other thread can see or modify.

Thread-local variables store data that is private to a specific thread. Every thread has its own independent value for the same key. Changes in one thread never affect another thread's value.

Syntax: Reading and Writing

# Set a thread-local variable
Thread.current[:user_id] = 42

# Read it back
puts Thread.current[:user_id]  # => 42

# In a different thread, it's nil (independent!)
t = Thread.new do
  puts Thread.current[:user_id]  # => nil (different thread!)
  Thread.current[:user_id] = 99
  puts Thread.current[:user_id]  # => 99
end
t.join

puts Thread.current[:user_id]  # => still 42 (main thread unaffected)

# Check which keys exist for current thread
Thread.current.keys  # => [:user_id]

# Delete a key
Thread.current[:user_id] = nil

Real-World: Current User in Web Request

Rails and Rack middleware use thread-local storage to track the current request's user without passing it everywhere as an argument:

# Middleware sets the current user at the start of each request
class CurrentUserMiddleware
  def call(env)
    user = authenticate(env)
    Thread.current[:current_user] = user    # store for this request's thread
    @app.call(env)
  ensure
    Thread.current[:current_user] = nil     # clean up after request!
  end
end

# Anywhere in the app during that request:
module Current
  def self.user
    Thread.current[:current_user]
  end
end

# Usage:
Current.user  # => #<User ...> (whatever the middleware stored)

Fiber-Local vs Thread-Local

# Thread#[] is actually FIBER-local (and has been since Ruby 1.9)!
# Within the SAME thread but different fibers, values differ.

# For truly thread-local (shared across fibers in same thread):
Thread.current.thread_variable_set(:session_id, "abc")
Thread.current.thread_variable_get(:session_id)  # => "abc"

# Fiber-local (default Thread[:key] behavior):
Thread.current[:request_id] = "req-1"  # isolated per fiber

        | Thread[:key]                  | thread_variable_get/set
--------|-------------------------------|------------------------------------
Scope   | Fiber-local (within a thread) | Thread-local (shared across fibers)
Use for | Per-request context (Rack)    | Per-thread state across fibers
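The difference is easy to demonstrate with two fibers inside a single thread:

```ruby
t = Thread.new do
  Thread.current[:req] = "fiber-local value"
  Thread.current.thread_variable_set(:sess, "thread-wide value")

  Fiber.new do
    # A new fiber on the SAME thread:
    p Thread.current[:req]                       # => nil (fiber-local, not visible here)
    p Thread.current.thread_variable_get(:sess)  # => "thread-wide value"
  end.resume
end
t.join
```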
🌿 Fibers — Cooperative Concurrency (Concurrency)
🏃 Relay race analogy: In a relay race, runner A runs until they decide to pass the baton to runner B. Runner B runs until they decide to pass it back. They take turns, and each runner controls exactly when the handoff happens. That's a Fiber — cooperative, not forced.

Fibers are lightweight coroutines that let you pause and resume execution manually. Unlike threads (preemptive), fibers yield control explicitly with Fiber.yield. They run on a single OS thread — there's no parallelism, just controlled interleaving.

Visual: Thread vs Fiber Switching

Threads (OS decides when to switch):
  Thread A: ▓▓▓▓░░░░▓▓▓░░░░▓▓▓▓▓░  ← OS interrupts at any point
  Thread B: ░░░░▓▓▓▓░░░▓▓▓▓░░░░░▓

Fibers (YOU decide when to switch):
  Fiber A:  ▓▓▓▓.yield.░░░░.resume.▓▓▓▓  ← switches only at yield/resume
  Fiber B:  ░░░░.resume.▓▓▓.yield.░░░░░

Basic Fiber API

fiber = Fiber.new do
  puts "Step 1"
  Fiber.yield         # Pause here, return control to caller
  puts "Step 2"
  Fiber.yield "hello" # Pause, send "hello" back to caller
  puts "Step 3"
  "done"              # Final return value
end

fiber.resume          # => prints "Step 1", returns nil
fiber.resume          # => prints "Step 2", returns "hello"
result = fiber.resume # => prints "Step 3", returns "done"

fiber.resume          # => FiberError: dead fiber called

Passing Values In and Out

fiber = Fiber.new do |first_value|
  puts "Got: #{first_value}"          # "Got: hello"
  second = Fiber.yield "first yield"  # sends "first yield" to caller, gets next resume arg
  puts "Got: #{second}"               # "Got: world"
  "all done"
end

r1 = fiber.resume("hello")   # => "first yield"
r2 = fiber.resume("world")   # => "all done"

Fibers as Generators (Infinite Sequences)

# Generate Fibonacci numbers lazily
fibonacci = Fiber.new do
  a, b = 0, 1
  loop do
    Fiber.yield a
    a, b = b, a + b
  end
end

10.times { print "#{fibonacci.resume} " }
# => 0 1 1 2 3 5 8 13 21 34

Fibers Power Enumerator

# Ruby's Enumerator uses Fibers internally
enum = Enumerator.new do |yielder|
  yielder << "first"
  yielder << "second"
  yielder << "third"
end

enum.next  # => "first"
enum.next  # => "second"
enum.next  # => "third"

Thread vs Fiber Comparison

            | Thread                           | Fiber
------------|----------------------------------|-----------------------------------
Scheduling  | Preemptive (OS)                  | Cooperative (you)
Parallelism | Possible (with GIL limits)       | No — single thread
Stack size  | ~1–8 MB                          | ~4 KB (very lightweight)
Use case    | I/O concurrency, background work | Generators, state machines, async
Switching   | Implicit (anytime)               | Explicit (yield/resume)
📬 Queue & SizedQueue — Thread-Safe Channels (Thread Safety)
📦 Conveyor belt analogy: Imagine a factory where workers (producers) place boxes on a conveyor belt, and other workers (consumers) pick them off the other end. The belt handles all the coordination — producers don't need to hand boxes directly to consumers. Queue is that belt.

Queue is Ruby's built-in, thread-safe FIFO (first-in, first-out) data structure. It's the safest way to pass data between threads without needing a Mutex yourself.

Queue API

q = Queue.new

# Add items (non-blocking)
q.push("item1")       # alias: q << "item1", q.enq("item1")

# Remove items — BLOCKS if empty (waits for producer to add something)
item = q.pop           # alias: q.deq, q.shift

# Non-blocking pop (raises ThreadError if empty)
item = q.pop(true)     # pass true for non-blocking

# Inspection
q.size     # => number of items
q.empty?   # => true/false
q.length   # alias for size
q.clear    # remove all items

# Signal "no more work" by closing
q.close
q.closed?  # => true
# After closing, pop returns nil once queue drains

Producer-Consumer Pattern

queue = Queue.new

# Producer thread: generates work items
producer = Thread.new do
  10.times do |i|
    queue.push("job_#{i}")
    puts "Produced job_#{i}"
    sleep 0.1
  end
  queue.close  # signal: no more jobs coming
end

# Consumer threads: process work items
consumers = 3.times.map do |id|
  Thread.new do
    while (job = queue.pop)  # returns nil when queue closed + empty
      puts "Worker #{id} processing #{job}"
      sleep 0.2  # simulate work
    end
    puts "Worker #{id} done"
  end
end

producer.join
consumers.each(&:join)
puts "All done!"

SizedQueue β€” Bounded Buffer

A SizedQueue has a maximum capacity. Producers block when the queue is full, preventing unbounded memory growth.

sized_q = SizedQueue.new(5)  # max 5 items

# Same API as Queue, but push blocks when full:
producer = Thread.new do
  20.times do |i|
    sized_q.push(i)           # blocks here if queue has 5 items
    puts "Pushed #{i}"
  end
  sized_q.close
end

consumer = Thread.new do
  while (item = sized_q.pop)
    puts "Processing #{item}"
    sleep 0.1
  end
end

[producer, consumer].each(&:join)

Queue vs SizedQueue vs Array (don't use Array!)

                | Queue                | SizedQueue             | Array
----------------|----------------------|------------------------|-------------------
Thread-safe?    | Yes                  | Yes                    | No!
Blocks on pop?  | Yes (waits for item) | Yes (waits for item)   | N/A
Blocks on push? | Never (unbounded)    | Yes (when full)        | N/A
Use for         | Work queues          | Rate-limited pipelines | Single-thread only
🏊 Thread Pools (Patterns)
🚕 Taxi pool analogy: Instead of summoning a new taxi (creating a new thread) for every customer, a taxi depot keeps a pool of 10 taxis (threads) ready. When a customer arrives, they grab an available taxi. If all 10 are busy, the customer waits. This prevents the overhead of creating/destroying thousands of threads.

A thread pool pre-creates N threads and reuses them. Creating threads is expensive (time + memory). A pool caps resource usage and handles bursts gracefully.

Simple Thread Pool from Scratch

class ThreadPool
  def initialize(size)
    @size = size
    @queue = Queue.new
    @workers = size.times.map { spawn_worker }
  end

  def submit(&task)
    @queue.push(task)
  end

  def shutdown
    @size.times { @queue.push(nil) }  # nil = signal to stop
    @workers.each(&:join)
  end

  private

  def spawn_worker
    Thread.new do
      loop do
        task = @queue.pop
        break if task.nil?    # nil is shutdown signal
        task.call
      end
    end
  end
end

pool = ThreadPool.new(4)

20.times do |i|
  pool.submit { puts "Job #{i} on #{Thread.current.object_id}" }
end

pool.shutdown
puts "All jobs complete"

concurrent-ruby Gem (Production-Ready)

# gem 'concurrent-ruby'
require 'concurrent'

# Fixed thread pool (exactly N threads always running)
pool = Concurrent::FixedThreadPool.new(10)

50.times do |i|
  pool.post { puts "Task #{i} on thread #{Thread.current.object_id}" }
end

pool.shutdown
pool.wait_for_termination

# Cached pool (grows as needed, shrinks when idle)
pool = Concurrent::CachedThreadPool.new

# Future β€” async result with promise-like API
future = Concurrent::Future.execute { expensive_computation }
future.value   # blocks until result is ready
future.value!  # blocks and re-raises exception if one occurred

# Async β€” fire and forget
Concurrent::Future.execute { send_email }  # runs in background

When to Use a Thread Pool

  • Processing many short I/O tasks (API calls, DB queries)
  • Background job workers (like Sidekiq's internal model)
  • Parallelizing batch operations (import files, send emails)
  • Any place you'd otherwise call Thread.new in a loop

Pool type            | Threads       | Best for
---------------------|---------------|---------------------------
FixedThreadPool      | Always N      | Consistent workloads
CachedThreadPool     | Grows/shrinks | Bursty workloads
SingleThreadExecutor | Always 1      | Serialized background work
💀 Deadlock — Detection & Prevention (Thread Safety)
🚗 Traffic standoff analogy: Thread A has Lock 1 and wants Lock 2. Thread B has Lock 2 and wants Lock 1. Neither can proceed. Both wait forever. This is a deadlock — a frozen system where everyone is waiting for everyone else.

Visual: Classic Two-Lock Deadlock

Thread A:  [has Lock1]──[waiting for Lock2]──FOREVER
Thread B:  [has Lock2]──[waiting for Lock1]──FOREVER
                ↑ Circular wait = DEADLOCK

Deadlock in Code

lock_a = Mutex.new
lock_b = Mutex.new

thread_a = Thread.new do
  lock_a.synchronize do       # Grabs Lock A
    sleep 0.01                # Let Thread B grab Lock B
    lock_b.synchronize do     # Waits for Lock B... forever
      puts "Thread A done"
    end
  end
end

thread_b = Thread.new do
  lock_b.synchronize do       # Grabs Lock B
    sleep 0.01
    lock_a.synchronize do     # Waits for Lock A... forever
      puts "Thread B done"
    end
  end
end

thread_a.join
thread_b.join
# => fatal: No live threads left. Deadlock? (RuntimeError)

Ruby detects deadlocks automatically and raises a fatal error: "No live threads left. Deadlock?"

Prevention Strategy 1: Consistent Lock Ordering

# ALWAYS acquire locks in the same order everywhere
# If every thread grabs A then B (never B then A), deadlock is impossible.

def transfer(from_account, to_account, amount)
  # Sort by object_id to ensure consistent order
  first, second = [from_account, to_account].sort_by(&:object_id)

  first.mutex.synchronize do
    second.mutex.synchronize do
      from_account.withdraw(amount)
      to_account.deposit(amount)
    end
  end
end

Prevention Strategy 2: Timeout with try_lock

def try_transfer(lock_a, lock_b)
  # Try to acquire both locks with a timeout
  deadline = Time.now + 1.0  # 1 second max

  loop do
    if lock_a.try_lock
      if lock_b.try_lock
        begin
          yield  # do work
          return true
        ensure
          lock_b.unlock
          lock_a.unlock
        end
      else
        lock_a.unlock
      end
    end

    if Time.now > deadline
      raise "Could not acquire locks — possible deadlock"
    end
    Thread.pass  # let other threads run
  end
end

Deadlock vs Livelock vs Starvation

Problem    | What happens                                             | Fix
-----------|----------------------------------------------------------|----------------------------
Deadlock   | All threads frozen waiting for each other                | Lock ordering, timeouts
Livelock   | Threads keep reacting to each other but make no progress | Random backoff, arbitration
Starvation | One thread never gets the lock (others always win)       | Fair queuing, priority
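The standard livelock fix, random backoff, can be sketched with try_lock: when a thread fails to get its second lock, it releases the first and sleeps a random interval, breaking the lock-step retry cycle.

```ruby
lock_a = Mutex.new
lock_b = Mutex.new

# Two threads grab the locks in OPPOSITE orders (normally deadlock-prone),
# but back off randomly instead of waiting, so both eventually finish.
workers = [[lock_a, lock_b], [lock_b, lock_a]].map do |first, second|
  Thread.new do
    loop do
      first.lock
      if second.try_lock
        # critical section would go here
        second.unlock
        first.unlock
        break
      end
      first.unlock
      sleep(rand * 0.01)  # random backoff desynchronizes the retries
    end
  end
end

workers.each(&:join)
puts "both threads finished without deadlock or livelock"
```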
⚡ Ractor — True Parallelism in Ruby 3
🏝️ Separate islands analogy: Each Ractor is like a separate island. Islands can't share resources directly — they can only send messages (bottles with notes) across the water. Because they can't touch each other's stuff, there are no race conditions. And since they're truly independent, they CAN run on separate CPU cores in parallel.

Ractors (Ruby 3.0+) are Ruby's answer to true parallelism. Each Ractor has its own GIL, meaning Ractors can run genuinely parallel on multiple CPU cores. The catch: they communicate only via message passing, not shared memory.

Visual: Ractors vs Threads

Threads (shared memory, GIL):
  Thread A ──┐
  Thread B ──┼── Shared heap (GIL: one at a time)
  Thread C ──┘

Ractors (isolated, no GIL between them):
  Ractor A [its own memory] ──message──▢ Ractor B [its own memory]
  Ractor C [its own memory] ══ runs on Core 2 simultaneously! ═══▢

Basic Ractor API

# Create a Ractor and send it work
r = Ractor.new do
  number = Ractor.receive   # Wait for a message
  number * number            # Return value
end

r.send(7)          # Send a message to the ractor
result = r.take    # Wait and receive the result
puts result        # => 49

Parallel CPU Work (beats threads!)

# CPU-bound task in parallel using Ractors
def is_prime?(n)
  return false if n < 2
  (2..Math.sqrt(n)).none? { |i| n % i == 0 }
end

numbers = [999_999_937, 1_000_000_007, 999_999_893, 1_000_000_033]

ractors = numbers.map do |n|
  Ractor.new(n) { |num| [num, is_prime?(num)] }
end

results = ractors.map(&:take)
results.each { |n, prime| puts "#{n}: #{prime ? 'prime' : 'not prime'}" }
# These run on multiple CPU cores simultaneously!

Shareable Objects (What You Can Share)

# Ractors can ONLY share "shareable" objects:
# - Frozen (immutable) objects
# - Integers, Symbols, true/false/nil (always shareable)
# - Ractor.make_shareable(obj)

CONST = "shared string".freeze  # frozen strings are shareable

r = Ractor.new do
  Ractor.receive  # receive unshareable objects via message (they get MOVED)
end

# move: true — transfers ownership (original can no longer use it)
data = { key: "value" }
r.send(data, move: true)
# data is now unusable in the main Ractor

Ractor Pipeline

# Multi-stage pipeline: generator β†’ processor β†’ printer
generator = Ractor.new do
  5.times { |i| Ractor.yield i }
end

processor = Ractor.new(generator) do |gen|
  loop { Ractor.yield gen.take * 10 }
end

5.times { puts processor.take }  # => 0, 10, 20, 30, 40

Ractor Limitations (Experimental in Ruby 3.x)

  • Most stdlib classes are NOT Ractor-safe yet (ActiveRecord, etc.)
  • Accessing global variables raises Ractor::IsolationError
  • Complex objects must be frozen or moved, not shared
  • Still experimental — API may change
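The isolation rules are enforced at runtime. For example, touching a global variable from a non-main Ractor raises, and the error surfaces at take wrapped in a RemoteError (Ruby 3.0+; expect an "experimental" warning on stderr):

```ruby
$config = { retries: 3 }

r = Ractor.new do
  $config  # globals belong to the main Ractor only
end

begin
  r.take   # the Ractor's exception surfaces here, wrapped
rescue Ractor::RemoteError => e
  puts e.cause.class  # => Ractor::IsolationError
end
```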
🌐 Async Ruby & Non-blocking I/O (Patterns)
🍜 Restaurant ordering analogy: Instead of ordering at Restaurant A, waiting at the table, then going to Restaurant B, you give both restaurants your order and sit at a park. When food is ready, you're notified. You never waited — you were always "free" while food was being prepared. That's async I/O.

Ruby 3.0 introduced the Fiber Scheduler interface — a hook that lets gems replace Ruby's blocking I/O with non-blocking I/O, all while keeping synchronous-looking code. The async gem implements this scheduler.

Traditional Threads vs Async Fibers

1000 concurrent HTTP requests:

Threads: 1000 threads × ~1MB stack = ~1GB memory!
         Slow to create, expensive context switching

Async Fibers: 1000 fibers × ~4KB stack = ~4MB memory!
              No OS context switch — pure Ruby scheduling

Fiber Scheduler (Ruby 3.0+)

# Ruby 3.0+ built-in: Fiber scheduler interface
# Set a scheduler to make I/O non-blocking for fibers
# The 'async' gem provides a production scheduler

require 'async'

Async do
  # Each 'Async { }' block is a Fiber managed by the scheduler
  task1 = Async { sleep 1; "task1 done" }  # non-blocking sleep!
  task2 = Async { sleep 1; "task2 done" }  # runs concurrently

  puts task1.wait  # => "task1 done" (after ~1s, not 2s!)
  puts task2.wait  # => "task2 done" (both ran at the same time!)
end

Async HTTP Requests

# gem 'async-http'
require 'async'
require 'async/http/internet'

Async do
  internet = Async::HTTP::Internet.new

  # Fire all requests concurrently — none blocks the others
  tasks = ["https://api.a.com", "https://api.b.com", "https://api.c.com"].map do |url|
    Async { internet.get(url) }
  end

  responses = tasks.map(&:wait)
  puts "Got #{responses.size} responses"

  internet.close
end

How the Fiber Scheduler Works

Event Loop (single thread):

Fiber A: START──[io: wait for HTTP]──────────────────[resume: got data]──END
Fiber B:        [io: wait for DB]──────────[resume: got rows]──END
Fiber C:               [io: file read]──[resume: file ready]──END
         ↑ Event loop detects I/O readiness and resumes the right fiber

Built-in Non-blocking Support (Ruby 3.x)

# With a scheduler set, these become non-blocking automatically:
# - IO#read, IO#write
# - Socket operations
# - Process.wait
# - Kernel#sleep
# - Mutex#lock (when waiting)

# Ruby 3 exposes the interface:
class MyScheduler
  def io_wait(io, events, timeout)
    # Your event-loop logic here (usually handled by 'async' gem)
  end
  # + kernel_sleep, block, unblock, process_wait, close, etc.
end

Fiber.set_scheduler(MyScheduler.new)

When to Use Async

Scenario                   | Best choice
---------------------------|---------------------------
1000 concurrent HTTP calls | Async fibers (low memory)
5 background jobs          | Threads (simple)
CPU-heavy parallel work    | Ractor or processes
High-throughput web server | Puma (threads + processes)
πŸ—οΈ Real-World Patterns β€” Puma, Sidekiq, Rails Production β–Ύ
🏒 Office building analogy: Puma is a building with multiple floors (workers/processes), each floor has multiple desks (threads). Sidekiq is a separate building entirely dedicated to background work β€” lots of desks, always busy processing jobs from a queue.

Puma: The Multi-Process × Multi-Thread Server

Puma Architecture:

  ┌─────────────────────────────────────────────┐
  │  Master Process                             │
  │  ┌───────────┐  ┌───────────┐  ┌─────────┐ │
  │  │ Worker 1  │  │ Worker 2  │  │Worker 3 │ │
  │  │ (process) │  │ (process) │  │(process)│ │
  │  │ T1 T2 T3  │  │ T1 T2 T3  │  │T1 T2 T3 │ │
  │  └───────────┘  └───────────┘  └─────────┘ │
  └─────────────────────────────────────────────┘
  workers × threads = max concurrent requests
  e.g. 3 workers × 5 threads = 15 concurrent requests

# config/puma.rb
workers ENV.fetch("WEB_CONCURRENCY") { 2 }   # processes (forked)
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count          # min, max threads per worker

preload_app!  # load app before forking (copy-on-write friendly)

on_worker_boot do
  ActiveRecord::Base.establish_connection  # each worker needs its own DB pool
end

Sidekiq: Thread Pool of Workers

# Sidekiq runs a configurable thread pool (default: 10 threads)
# Each thread picks a job from Redis and processes it

# config/sidekiq.yml
# concurrency: 10   ← 10 threads, 10 jobs at once

class OrderMailerJob
  include Sidekiq::Job

  def perform(order_id)
    order = Order.find(order_id)  # Each thread uses its own DB connection
    OrderMailer.shipped(order).deliver_now
  end
end

# Enqueue from anywhere:
OrderMailerJob.perform_async(order.id)

Thread Safety in Rails

# Rails itself is thread-safe since Rails 4.
# But YOUR code must be too. Common mistakes:

# ❌ UNSAFE: class-level mutable state
class ReportGenerator
  @@current_user = nil  # shared across all threads!
  def generate(user)
    @@current_user = user  # Thread A sets this...
    build_report           # Thread B changes it before we use it!
  end
end

# βœ… SAFE: use thread-local or pass as argument
class ReportGenerator
  def generate(user)
    Thread.current[:report_user] = user  # thread-local
    build_report
  ensure
    Thread.current[:report_user] = nil
  end
end

# βœ… SAFE: Rails' built-in CurrentAttributes (thread-local under the hood)
class Current < ActiveSupport::CurrentAttributes
  attribute :user  # automatically thread-local and reset per request
end

Current.user = User.find(session[:user_id])
Current.user  # accessible anywhere in this request

ActiveRecord Connection Pool

# Each thread needs its own DB connection
# ActiveRecord manages a connection pool automatically

# config/database.yml
# pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
# pool size should match your thread count!

# Rails checks out connections per-thread automatically
# If pool is exhausted, threads wait (or raise ActiveRecord::ConnectionTimeoutError)

# Manual checkout (rare):
ActiveRecord::Base.connection_pool.with_connection do |conn|
  conn.execute("SELECT 1")
end

Web Server Comparison

Server  | Model                      | Concurrency       | Best for
Puma    | Multi-process + threads    | workers × threads | Standard Rails, Heroku
Unicorn | Multi-process, no threads  | N workers         | Thread-unsafe apps, simple setups
Falcon  | Async fibers + processes   | Thousands         | High I/O concurrency
Sidekiq | Thread pool (Redis-backed) | N threads         | Background jobs

What is Ruby Multithreading?

Ruby multithreading is the practice of running multiple threads of execution within a single Ruby program. A thread is a lightweight sequence of instructions that can run "concurrently" alongside other threads β€” like several workers in the same building, sharing the same resources. This guide covers every layer of Ruby's concurrency model: from the foundational concepts of concurrency vs parallelism, through threads, synchronization primitives, and fibers, all the way to Ruby 3's Ractor for true CPU parallelism.

Ruby's threading story is more nuanced than most languages. CRuby (MRI) includes a Global Interpreter Lock that limits threads to taking turns executing Ruby code β€” but releases the lock during I/O, making threads highly valuable for I/O-bound work like web requests, database queries, and file operations. Understanding where the GIL helps and where it hurts is a key differentiator for Ruby engineers at every level.
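You can observe the GIL releasing during I/O directly. This sketch uses Kernel#sleep as a stand-in for a network call (an assumption for the demo): run four waits serially, then in four threads, and compare wall-clock time.

```ruby
require "benchmark"

# sleep stands in for an I/O wait (e.g. an HTTP call). CRuby releases
# the GIL while a thread sleeps or waits on I/O, so the waits overlap.
def fake_io = sleep(0.2)

serial = Benchmark.realtime { 4.times { fake_io } }

threaded = Benchmark.realtime do
  4.times.map { Thread.new { fake_io } }.each(&:join)
end

puts format("serial: %.2fs, threaded: %.2fs", serial, threaded)
# serial is ~0.8s; threaded is ~0.2s because the four waits overlap
```

Swap `fake_io` for a CPU-bound loop and the two timings converge on CRuby: that is the GIL's real scope.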

How Does This Guide Work?

Each of the 14 topics is displayed in an expandable card, all open by default. Use the search bar to jump directly to any topic β€” search for keywords like "mutex", "fiber", "deadlock", "ractor", or "sidekiq". Click any card header to collapse it when you've mastered that topic. The progress bar tracks your review session visually. Every code example has a copy button so you can paste directly into a Ruby REPL like irb or pry and experiment immediately.

Topics Covered in This Guide

  • Core Concepts: Concurrency vs Parallelism, GIL (Global Interpreter Lock)
  • Threading: Thread Basics (lifecycle, API), Thread-Local Variables
  • Thread Safety: Race Conditions, Deadlock Detection & Prevention, Queue & SizedQueue
  • Synchronization: Mutex (Mutual Exclusion), Monitor (Re-entrant Locking)
  • Concurrency Patterns: Fibers (Cooperative Concurrency), Thread Pools
  • Ruby 3: Ractor (True CPU Parallelism)
  • Modern Patterns: Async Ruby & Non-blocking I/O, Real-World Patterns (Puma, Sidekiq, Rails)

Who Should Use This Guide?

This guide is for Ruby and Rails engineers who want to deeply understand concurrency β€” not just know the API, but understand why each primitive exists and when to reach for it:

  • Mid-level developers who want to go beyond Thread.new and understand when threads cause subtle bugs
  • Senior engineers who need to tune Puma workers and threads, debug race conditions in production, or choose between Sidekiq and Ractor for parallel processing
  • Interview candidates facing questions about thread safety, GIL behavior, deadlock avoidance, and Ruby 3's concurrency improvements
  • Architects designing high-throughput Rails applications who need to reason about connection pools, async I/O, and process-vs-thread tradeoffs

Benefits of Using This Guide

  • Visual-first: Every abstract concept is illustrated with ASCII diagrams and plain-English analogies before any code is shown
  • Complete syntax coverage: Every Thread, Mutex, Monitor, Fiber, Queue, and Ractor method is shown in context β€” no guessing the API
  • Real-world grounding: Examples connect theory to production tools you already use (Puma, Sidekiq, Rails ActiveRecord pool)
  • Searchable: Find any topic, method, or pattern instantly without scrolling
  • Free & offline-friendly: No login, no tracking, works in any browser

How to Study Ruby Concurrency

The hardest part about concurrency is that bugs are timing-dependent β€” they're easy to introduce and hard to reproduce. The best way to internalize this material is hands-on: spin up a Ruby file, create 100 threads incrementing a shared counter, and watch it go wrong. Then add a Mutex and watch it go right. Run the deadlock example and observe Ruby's fatal error message. Create a Fiber generator and step through it.
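As a starting point, here is the counter exercise sketched out: 100 threads each incrementing a shared counter 1,000 times. Without a lock, `counter += 1` is a read-modify-write and increments can be lost (most reliably on JRuby/TruffleRuby, where no GIL masks the race); with a Mutex, every increment is atomic and the total is exact.

```ruby
mutex = Mutex.new
counter = 0

threads = 100.times.map do
  Thread.new do
    1_000.times { mutex.synchronize { counter += 1 } }
  end
end
threads.each(&:join)

puts counter  # 100_000 — guaranteed only because of the Mutex
```

Delete the `mutex.synchronize` wrapper and rerun it a few times to see the race for yourself.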

Pay special attention to the GIL topic β€” many developers think "threads can't help in Ruby," but that misunderstands the GIL's scope. Threads are highly effective for I/O-bound work (which is most of what web applications do). CPU-bound parallelism requires Ractors or processes. Knowing this distinction is exactly what senior-level interviewers test for.
