Think of Dispatchers as “Where to Work”
In Kotlin coroutines, a dispatcher decides which thread(s) your code runs on:
- Main: UI thread (draw things on screen).
- Default: CPU work (crunch numbers, parse JSON).
- IO: Blocking input/output (database, file, network).
👉 Dispatchers.IO is a shared, bounded pool built for waiting on I/O. It’s fast when used right—and painful when abused.
Why Choosing Dispatchers.IO Wisely Matters
- It’s not infinite. IO has a cap (64 threads by default). If you flood it with long tasks, other I/O work waits in line.
- Wrong work = slow app. CPU-heavy code on IO hogs threads that should be free to perform actual I/O.
- Queuing hurts UX. DB reads, file access, or network calls can stall, making screens load slowly or time out.
Kid-simple rule:
IO = waiting work. Default = thinking work. Main = showing work.
Use IO For (and Only For) Blocking I/O
- ✅ Room/SQLite queries
- ✅ Reading/writing files or ContentResolver
- ✅ Blocking network clients/sockets
Example (good):
val user = withContext(Dispatchers.IO) { userDao.getById(id) }
withContext(Dispatchers.Main) { render(user) }
Don’t Do This on IO
- ❌ CPU tasks (image processing, crypto, big JSON parsing)
- ❌ Infinite or long-lived loops that hog a thread
- ❌ Unbounded fan-out (launching thousands of I/O jobs at once)
Anti-pattern:
// CPU work on IO — starves I/O
withContext(Dispatchers.IO) {
val bitmap = expensiveDecode(bytes) // move to Default
}
Fix:
val bytes = withContext(Dispatchers.IO) { file.readBytes() } // I/O
val bitmap = withContext(Dispatchers.Default) { decode(bytes) } // CPU
Control the Crowd: Limit Concurrency
Match your app’s real bottlenecks (DB pool, API rate limits).
Semaphore (easy back-pressure):
val gate = Semaphore(8) // allow 8 I/O tasks at a time
val results = ids.map { id ->
async(Dispatchers.IO) {
gate.withPermit { api.fetch(id) }
}
}.awaitAll()
Cap a pipeline with limitedParallelism:
val dbIO = Dispatchers.IO.limitedParallelism(4) // align with DB connections
withContext(dbIO) { userDao.upsert(user) }
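A quick way to convince yourself the cap holds is to count how many coroutines are inside their blocking work at once. This is a self-contained sketch; peakConcurrency is a helper invented for the demo, not a library API.

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.atomic.AtomicInteger

// With limitedParallelism(limit), no more than `limit` coroutines run
// their blocking work at the same time, however many we launch.
@OptIn(ExperimentalCoroutinesApi::class)
suspend fun peakConcurrency(limit: Int, jobs: Int): Int = coroutineScope {
    val dispatcher = Dispatchers.IO.limitedParallelism(limit)
    val active = AtomicInteger(0)   // coroutines currently in the blocking section
    val peak = AtomicInteger(0)     // highest value `active` ever reached
    (1..jobs).map {
        launch(dispatcher) {
            val now = active.incrementAndGet()
            peak.updateAndGet { p -> maxOf(p, now) }
            Thread.sleep(20)        // stand-in for a blocking DB write
            active.decrementAndGet()
        }
    }.joinAll()
    peak.get()
}

fun main() = runBlocking {
    println("peak = ${peakConcurrency(limit = 4, jobs = 32)}") // never exceeds 4
}
```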
Flows: Move Only the Blocking Part
fun lines(): Flow<String> =
readFileAsFlow() // hypothetical blocking source
.flowOn(Dispatchers.IO) // upstream only
.map { parse(it) } // stays on collector context
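A self-contained version of the same idea (linesOf and the temp file are invented for this sketch): the blocking read happens inside the flow builder, flowOn moves only that upstream work to IO, and map/collect stay on the collector’s context.

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking
import java.io.File

// Blocking read lives in the builder; flowOn applies upstream only.
fun linesOf(file: File): Flow<String> = flow {
    for (line in file.readLines()) emit(line)   // blocking I/O, runs on IO
}.flowOn(Dispatchers.IO)

fun main() = runBlocking {
    val tmp = File.createTempFile("demo", ".txt").apply {
        writeText("alpha\nbeta\n")
        deleteOnExit()
    }
    // map runs on the collector's context, not on IO.
    val upper = linesOf(tmp).map { it.uppercase() }.toList()
    println(upper) // [ALPHA, BETA]
}
```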
Mini Checklist (Stick on Your Monitor)
- Is this blocking I/O? Use IO.
- Is this CPU work? Use Default.
- Keep IO blocks short; hop back ASAP.
- Limit concurrency (Semaphore / limitedParallelism).
- Don’t park infinite loops on IO.
- Prefer withContext (needs a result) over launch (fire-and-forget).
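The last rule in a nutshell, as a sketch (loadUserName is a hypothetical repository call): withContext suspends until the value is ready and returns it, so the caller gets the result, and any exception, right at the call site; launch just fires off work with no return value.

```kotlin
import kotlinx.coroutines.*

// Hypothetical repository call: result-producing work belongs in withContext.
suspend fun loadUserName(id: Int): String =
    withContext(Dispatchers.IO) {
        "user-$id"                 // pretend blocking DB lookup
    }

fun main() = runBlocking {
    val name = loadUserName(42)    // needs a result: withContext
    println(name)                  // user-42

    launch(Dispatchers.IO) {       // fire-and-forget side effect: launch
        // e.g. write an analytics event; nobody awaits it
    }
}
```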
Quick Interview Flashcards: Why Choosing Dispatchers.IO Wisely Matters
Q1. Why does Dispatchers.IO exist if we already have Default?
A. Default is sized to CPU cores for compute; IO grows (within bounds) to handle blocking I/O without choking compute tasks. Using IO isolates waiting work from CPU work.
Q2. What happens if you do CPU work on IO?
A. You occupy threads meant for blocking I/O, causing DB/file/network calls to queue and slow down. Use Default for CPU tasks.
Q3. How do you prevent flooding IO?
A. Use back-pressure—Semaphore, CoroutineDispatcher.limitedParallelism, request batching, or paging.
Q4. When do you not need IO for network?
A. If the client is non-blocking and manages threads internally, extra withContext(IO) is often unnecessary.
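To make Q4 concrete, here is a sketch of a non-blocking client in the style of Retrofit’s suspend functions (ApiService and FakeApi are invented for this example): the client suspends on its own machinery instead of parking a thread, so wrapping the call in withContext(Dispatchers.IO) only adds a pointless thread hop.

```kotlin
import kotlinx.coroutines.*

// Hypothetical main-safe client: suspends, never blocks a thread.
interface ApiService {
    suspend fun fetchUser(id: Int): String
}

class FakeApi : ApiService {
    override suspend fun fetchUser(id: Int): String {
        delay(10)                        // async wait, no thread parked
        return "user-$id"
    }
}

fun main() = runBlocking {
    val user = FakeApi().fetchUser(1)    // no withContext(IO) needed
    println(user)                        // user-1
}
```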
Copy-Paste Patterns
I/O → CPU → UI
val bytes = withContext(Dispatchers.IO) { file.readBytes() }
val model = withContext(Dispatchers.Default) { parse(bytes) }
withContext(Dispatchers.Main) { show(model) }
Bounded DB writes
val dbIO = Dispatchers.IO.limitedParallelism(4)
items.forEach { item ->
withContext(dbIO) { dao.insert(item) }
}
Safe fan-out
val gate = Semaphore(6)
val data = urls.map { url ->
async(Dispatchers.IO) { gate.withPermit { http.get(url) } }
}.awaitAll()
Final Take
Use Dispatchers.IO like a scalpel, not a bucket. Keep I/O snappy, keep CPU on Default, and cap concurrency. That’s the whole game.

