<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[The Code & The Compass]]></title><description><![CDATA[The Code & The Compass]]></description><link>https://blog.builtbypranav.com</link><generator>RSS for Node</generator><lastBuildDate>Wed, 15 Apr 2026 20:58:44 GMT</lastBuildDate><atom:link href="https://blog.builtbypranav.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[The Bottleneck Java Couldn't Solve]]></title><description><![CDATA[For years, Java has been the undisputed king of enterprise software. It's robust, mature, and powers some of the largest applications in the world. But in the modern, cloud-native era of massive concurrency, I ran headfirst into a bottleneck that Jav...]]></description><link>https://blog.builtbypranav.com/the-bottleneck-java-couldnt-solve</link><guid isPermaLink="true">https://blog.builtbypranav.com/the-bottleneck-java-couldnt-solve</guid><category><![CDATA[Java]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[concurrency]]></category><category><![CDATA[performance]]></category><dc:creator><![CDATA[Pranav Srivathsa]]></dc:creator><pubDate>Wed, 18 Jun 2025 15:59:05 GMT</pubDate><content:encoded><![CDATA[<p>For years, Java has been the undisputed king of enterprise software. It's robust, mature, and powers some of the largest applications in the world. But in the modern, cloud-native era of massive concurrency, I ran headfirst into a bottleneck that Java's traditional model struggles to solve.</p>
<p>It’s the "one request, one thread" problem.</p>
<p>While building a high-performance web crawler designed to make thousands of simultaneous network requests, I discovered that even with a powerful language like Java, the underlying architecture has a fundamental limit. Let me show you what it is, and how a different approach solved it completely.</p>
<h3 id="heading-the-wall-why-traditional-threading-hits-a-limit">The Wall: Why Traditional Threading Hits a Limit</h3>
<p>In a traditional Java application, every concurrent task runs on a dedicated Operating System (OS) thread. Think of an OS thread as a powerful but heavy worker: it carries a large memory footprint (typically around 1 MB of stack by default) and is expensive for the OS to create, schedule, and manage.</p>
<p>Now, consider my web crawler. Its job is almost entirely I/O-bound—it makes an HTTP request and then <em>waits</em> for a server across the internet to respond.</p>
<p>Here's the bottleneck: while a Java thread is waiting for that network response, it is <strong>blocked</strong>. It sits idle, consuming a full megabyte of memory, but doing zero productive work.</p>
<p>If I want to make 5,000 concurrent requests, I would theoretically need 5,000 threads. In reality, this would crash the system. The sheer memory consumption (at roughly 1 MB per thread, that's about 5 GB of stack space alone) and the cost of the OS constantly switching between thousands of threads (context switching) create a performance wall. This was the bottleneck I couldn't efficiently engineer my way around in Java for this specific use case.</p>
<blockquote>
<h3 id="heading-the-solution-what-if-we-dont-wait">The Solution: What If We Don't Wait?</h3>
<p>The creators of Go looked at this exact problem and re-imagined the solution. Instead of giving every task a heavy OS thread, they created goroutines.</p>
<p>A goroutine is an extremely lightweight, managed "green thread" that runs on top of a small pool of actual OS threads. The magic is in the <strong>Go runtime scheduler</strong>.</p>
<p>When a goroutine makes a blocking network call, the Go scheduler does something brilliant:</p>
<ol>
<li><p>It doesn't let the OS thread get blocked.</p>
</li>
<li><p>It instantly <strong>swaps out</strong> the waiting goroutine.</p>
</li>
<li><p>It puts another, ready-to-work goroutine onto that <em>same OS thread</em>.</p>
</li>
<li><p>When the network call for the original goroutine finally returns, the scheduler seamlessly swaps it back in to finish its work.</p>
</li>
</ol>
<p>This means a handful of OS threads can efficiently manage tens of thousands of concurrent tasks. The "worker" is never idle.</p>
</blockquote>
<h3 id="heading-the-result-from-a-performance-wall-to-a-superhighway">The Result: From a Performance Wall to a Superhighway</h3>
<p>Applying this to my web crawler was a game-changer.</p>
<ul>
<li><p><strong>The Java Approach:</strong> I was limited by a thread pool. To handle 200 concurrent requests, I needed 200 expensive threads.</p>
</li>
<li><p><strong>The Go Approach:</strong> I can launch 10,000 goroutines with <code>go crawl(url)</code>. The memory footprint is tiny, and the Go scheduler ensures that the CPU is always doing useful work, not waiting on the network. The bottleneck disappeared.</p>
</li>
</ul>
<p>This isn't to say Java is a bad language. Project Loom has tackled this very issue by bringing lightweight virtual threads to the JVM (finalized in Java 21), which is incredibly exciting.</p>
<p>But it highlights a crucial lesson in system design: the architecture you choose must fit the problem you're solving. For massively concurrent, network-bound applications, Go's built-in, lightweight concurrency model provides a solution that feels like it was designed for the modern internet. It solved a bottleneck that, for my use case, the traditional Java model simply couldn't.</p>
<hr />
<p><em>Thanks for reading! I'd love to hear your thoughts on concurrency and system design. You can find me on</em> <a target="_blank" href="http://linkedin.com/in/pranav-srivathsa"><strong><em>LinkedIn</em></strong></a><em>.</em></p>
]]></content:encoded></item></channel></rss>