
Become a sponsor to Peter Mbanugo

@pmbanugo

Peter Mbanugo

pmbanugo
https://pmbanugo.me

Fund high-throughput, zero-allocation systems research

I am the creator of Tina, a thread-per-core, shared-nothing concurrency framework written in Odin that may spread to other systems languages (e.g. Zig).

Modern software development has fallen into a trilemma: suffer through callback/async hell, fight mutexes and cache-line contention to share mutable state, or accept the performance ceiling and unpredictable pauses of a garbage collector.

Tina is an architectural argument that you do not have to choose. By enforcing strict constraints — no dynamic memory allocation after boot, no async/await, and lock-free messaging — Tina provides C-level performance, Erlang-level fault tolerance, and 100% deterministic simulation testing.

Why Sponsor?

Tina is an independent project. It is not VC-funded, which is a deliberate choice. VC-funded infrastructure inevitably suffers from feature bloat to capture market share. Tina's value comes from what it refuses to do.

Your sponsorship buys me the deep, uninterrupted focus required to:

  1. Advance the Core: Maintain and optimize the io_uring, kqueue, and IOCP reactors.
  2. Build the Ecosystem: Write high-performance, allocation-free protocol parsers (e.g. HTTP, WebSocket) that compose perfectly with Tina's typed arenas.
  3. Produce Deep Documentation: Continue writing rigorous architectural guides and research notes that teach systems engineering, not just framework usage.

If you believe in mechanical sympathy, strict bounds, and systems software that actually sheds load instead of crashing, your support makes this research possible.

Together we can build software systems that you will LOVE!


Staying Informed
I write extensively about the engineering decisions, mechanical sympathy, and architectural tradeoffs behind Project Tina.


Tina’s core (scheduler, memory sub-system, messaging, I/O reactor) is functionally stable. Reaching this goal lets me dedicate the deep, sustained focus needed to build the surrounding ecosystem: a zero-allocation HTTP/1.1+ router and a lock-free WebSocket parser that natively compose with Tina's typed arenas and deterministic simulation testing. This is the bridge from systems framework to production backend. If I exceed that goal, I'll also be able to buy more tools to benchmark, test, and explore advanced ideas that are currently out of scope (e.g. high-throughput disk I/O for efficient file-system access).

Featured work

  1. pmbanugo/tina

    A shared-nothing, thread-per-core concurrency framework. Designed for massive concurrency because There Is No Alternative (TINA)

    Odin 60
  2. pmbanugo/demitter

    Distributed Node.js event emitter (pub/sub)

    TypeScript 18

0% towards $2,500 per month goal

Be the first to sponsor this goal!

Select a tier

Custom amount a month

You'll receive any rewards listed in the $20 monthly tier. Additionally, a Public Sponsor achievement will be added to your profile.

$5 a month


For individual engineers who respect the architecture. You get my deep gratitude, a Sponsor badge on your profile, and the knowledge that you are funding safe, high-throughput systems work.

$20 a month


The Grand Arena

For senior practitioners and early adopters actively exploring or building with Tina. Sponsors at this level get a shout-out on Twitter, plus priority responses on GitHub Discussions.

$100 a month


The Supervision Tree

For small startups or teams evaluating Tina for production. At this tier, I will prioritize your GitHub Issues and bug reports. Your company logo (small) will be placed in the SPONSORS.md file.

$1,000 a month


Thread-Per-Core Partner

For companies deploying Tina in mission-critical environments. You receive large logo placement on the README.md, highest-priority routing for bug fixes, and a 30-minute architecture advisory call per month to discuss Shard topologies and memory budgets for your specific workload.