· MMV Firm · Brief · April 2026 ·

AI infrastructure that compresses what models learn from.

By Sean Jones, Chief Technology Officer


MMV makes AI cheaper to train and easier to update. We do this with two technologies. The first compresses the knowledge that AI models learn from, so models can be trained on a fraction of the original data. The second lets a deployed AI keep learning new things without forgetting the old ones. It can also forget specific things on demand, the way a privacy regulation might require.

I. The problem

Two coupled problems in AI infrastructure.

AI cannot keep scaling at its current pace without burning through impossible levels of resources. The bottleneck has become physical infrastructure, not software capability. At the same time, production AI systems face a binary choice: retrain a model from scratch every time the world changes, or run a stale model. There is no efficient middle path, because fine-tuning a deployed model on new data degrades its prior capabilities. The field calls that degradation catastrophic forgetting, and it is one of the major unsolved problems in deployed AI.

Both problems compound the same way: as models get bigger and the world keeps changing, the cost of keeping models current grows faster than the value the models produce. We address the underlying inefficiency in both.


II. The solution

Two technologies, separately useful, designed to compose.

Knowledge Compression.
A geometric compression layer for the knowledge inside data. Rather than compressing raw bytes, we compress the learned representations that AI models actually use: the embeddings, the conditioning information, the substrate of model behavior.
Patent pending. IAMAI001 covers the compressed knowledge artifact and its encoding pipeline.
Continuous Learning.
A continual-learning architecture that lets a deployed model absorb new domains without losing prior capability, and selectively forget specific tasks on demand without retraining the rest. This addresses both the forgetting problem and the GDPR / EU AI Act requirement that AI systems support data deletion.
Patent pending — IAMAI002 covers the continual-learning architecture, including selective task unlearning in O(k) time with provable non-interference.
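The patented architecture itself is not disclosed in this brief. As a generic illustration of why selective unlearning can be fast, consider an adapter-per-task scheme: each task's knowledge lives in a separate low-rank delta on frozen base weights, so forgetting a task is a constant-time adapter removal rather than a retrain. This is a hedged sketch of that general idea, not MMV's method; all names and shapes are hypothetical.

```python
import numpy as np

# Generic adapter-per-task continual learning, where "unlearning" task t
# is removal of its adapter. Base weights and other tasks are untouched.

class AdapterModel:
    def __init__(self, d):
        self.rng = np.random.default_rng(0)
        self.base = self.rng.normal(size=(d, d)) / np.sqrt(d)  # frozen weights
        self.adapters = {}                                     # task -> low-rank delta

    def learn_task(self, task, rank=2):
        # Stand-in for training: a small low-rank update per task.
        d = self.base.shape[0]
        a = self.rng.normal(size=(d, rank)) * 0.01
        b = self.rng.normal(size=(rank, d))
        self.adapters[task] = (a, b)

    def forget_task(self, task):
        # Selective unlearning: drop the adapter; nothing else changes.
        self.adapters.pop(task, None)

    def weights(self):
        w = self.base.copy()
        for a, b in self.adapters.values():
            w += a @ b
        return w

m = AdapterModel(d=8)
for t in ["banking", "medical", "legal"]:
    m.learn_task(t)
w_before = m.weights()
m.forget_task("medical")
w_after = m.weights()
print(sorted(m.adapters))   # ['banking', 'legal']
```

In this toy scheme the cost of unlearning scales with the number of adapters touched, which is the shape of the O(k) claim above; the non-interference guarantee is what the patent covers and is not reproduced here.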
Fig. 1 · Where the two technologies fit
Source data (embeddings + data) → MMV Knowledge Compression (Patent IAMAI001) → Trained model (smaller, cheaper) → MMV Continuous Learning (Patent IAMAI002) → Updated model (in production)
Knowledge Compression operates on the training side, shrinking what models learn from. Continuous Learning operates on the deployment side, keeping models current without retraining from scratch. Together they cover both ends of the AI model lifecycle.

III. What we have proven

Measured results across three operating points.

1. Compressed classification artifacts at industry-relevant ratios.

Across four standard text-classification benchmarks (AG News, DBpedia-14, Emotion, ToxicConversations), our compressed artifact preserves 88–99% of full-data classifier accuracy at compression ratios from 1,000× to 2,900×. On Banking77 — a 77-class fine-grained intent-classification task — our artifact is both more accurate (+4.5 points over the FAISS PQ M=2 industry baseline) and 3.3× smaller: a clean win on both axes simultaneously. This is the cleanest single capability we own. Shipped, repeatable, patent-pending.
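The FAISS PQ baseline referenced above is product quantization: each embedding is split into sub-vectors, and each sub-vector is replaced by the index of its nearest codebook centroid. A minimal sketch of that baseline technique on synthetic data, for orientation only; this is the industry comparison point, not MMV's patented artifact.

```python
import numpy as np

# Product-quantization sketch: quantize each sub-vector to a small codebook.
def pq_compress(embeddings, n_subvectors=4, n_centroids=16, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    n, d = embeddings.shape
    assert d % n_subvectors == 0
    sub_d = d // n_subvectors
    codebooks, codes = [], []
    for s in range(n_subvectors):
        block = embeddings[:, s * sub_d:(s + 1) * sub_d]
        # Tiny Lloyd's k-means, enough for the sketch.
        centroids = block[rng.choice(n, n_centroids, replace=False)]
        for _ in range(iters):
            dist = ((block[:, None, :] - centroids[None]) ** 2).sum(-1)
            assign = dist.argmin(1)
            for c in range(n_centroids):
                pts = block[assign == c]
                if len(pts):
                    centroids[c] = pts.mean(0)
        codebooks.append(centroids)
        codes.append(assign.astype(np.uint8))
    return codebooks, np.stack(codes, axis=1)

def pq_decompress(codebooks, codes):
    return np.concatenate([cb[codes[:, s]] for s, cb in enumerate(codebooks)], axis=1)

emb = np.random.default_rng(1).normal(size=(1000, 64)).astype(np.float32)
books, codes = pq_compress(emb)
recon = pq_decompress(books, codes)
# Stored bytes: one code byte per sub-vector plus the codebooks,
# versus the full float32 table.
orig_bytes = emb.nbytes
comp_bytes = codes.nbytes + sum(cb.nbytes for cb in books)
print(f"compression ratio ~ {orig_bytes / comp_bytes:.0f}x")
```

The brief's claim is that its artifact beats this family of methods on Banking77 on both accuracy and size; the sketch only shows what the baseline is doing.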

2. Geometric compression at the language-model layer.

We applied the same geometric thesis to the input embedding layer of a 360M-parameter language model. The resulting student model retains 77% of the unmodified teacher's accuracy at the compressed operating point, with deployment storage of the embedding layer reduced 14×. A control student trained without compression matched the teacher within statistical noise — confirming the gap is the cost of compression specifically, not a training-budget artifact.
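The brief does not state the mechanism behind the 14× figure. As one generic illustration of how an embedding-layer storage reduction of that magnitude can arise, consider low-rank factorization of the embedding table; every number below is a hypothetical placeholder, not MMV's configuration.

```python
# Hypothetical arithmetic: a dense V x d embedding table replaced by a
# V x r code matrix and an r x d projection. Illustrative shapes only.

vocab, dim = 49_152, 960          # placeholder 360M-class LM embedding shape
full = vocab * dim                # parameters in the dense embedding table
rank = 66                         # chosen so (vocab + dim) * rank is ~ full / 14
factored = (vocab + dim) * rank
print(f"reduction ~ {full / factored:.1f}x")   # ~ 14.3x
```

The control-student result quoted above is the important methodological point: it isolates the 23-point accuracy cost as the price of compression itself rather than of the training budget.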

3. Continual learning with no measurable forgetting.

Five sequential tasks on Permuted MNIST with no measurable degradation on prior tasks. Selective task unlearning demonstrated at small scale with sub-millisecond unlearning time. Multi-domain LoRA composition validated on Qwen 0.5B across five sequential domains. A multi-domain run on a 7B-parameter model is in flight.
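Permuted MNIST is a standard public benchmark, so its construction can be stated precisely: each task is the same digit-classification problem under a fixed, task-specific pixel permutation, which forces the model to learn genuinely distinct input mappings in sequence. A minimal sketch of the task stream (data loading omitted):

```python
import numpy as np

# Permuted MNIST task stream: task 1 is the identity permutation,
# each later task applies its own fixed shuffle of the 784 pixels.
def make_permutation_tasks(n_tasks, n_pixels=784, seed=0):
    rng = np.random.default_rng(seed)
    perms = [np.arange(n_pixels)]                        # task 1: identity
    perms += [rng.permutation(n_pixels) for _ in range(n_tasks - 1)]
    return perms

def apply_task(images, perm):
    # images: (batch, 784) flattened digits; permute columns per task.
    return images[:, perm]

perms = make_permutation_tasks(n_tasks=5)
batch = np.random.default_rng(1).random((32, 784))
task3 = apply_task(batch, perms[2])
print(task3.shape)   # (32, 784)
```

"No measurable forgetting" then means accuracy on task 1's permutation holds after sequentially training through tasks 2 to 5.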

All of the above was accomplished on under $250K in seed funding, with a five-person team, on commodity hardware. We chose to spend that capital on validating the technical core rather than scaling prematurely.

Fig. 2 · Three measured operating points
  • 1,000–2,900× compression · 88–99% accuracy retained across four text-classification benchmarks.
  • 14× LM embedding layer · deployment storage reduced 14× on a 360M-parameter LM at 77% teacher accuracy.
  • 5 sequential tasks · Permuted MNIST, no measurable forgetting; sub-millisecond selective unlearning.
The three capabilities we have measured. Each is repeatable, patent-pending, and reproducible on commodity hardware.
Fig. 3 · No measurable forgetting across five sequential tasks
Chart: accuracy on Task 1 (%), measured after sequential training on Tasks 1–5. Series: MMV Continuous Learning (no measurable degradation) vs. conventional fine-tuning, illustrative (the catastrophic-forgetting failure mode).
Permuted MNIST, accuracy on Task 1 measured after each subsequent task is learned. MMV's system holds; the illustrative baseline curve traces the catastrophic-forgetting pattern that any conventional fine-tuning sequence exhibits without our continuous-learning architecture.

IV. Banking77, in detail

The cleanest Pareto-dominant point in our test set.

Banking77 is a 77-class intent-classification task, a hard, fine-grained benchmark. Our compressed artifact beats the industry-standard FAISS PQ M=2 baseline on both axes simultaneously: more accurate by 4.5 percentage points, and 3.3× smaller. Pareto dominance at this level of granularity is rare.
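Pareto dominance has a precise meaning worth pinning down: one operating point is at least as good on every axis and strictly better on at least one. A small check using the deltas reported above; the absolute baseline accuracy and artifact size are hypothetical anchors, since the brief reports only the deltas.

```python
# Pareto-dominance check over (accuracy, size): higher accuracy is better,
# smaller size is better.
def pareto_dominates(a, b):
    acc_a, size_a = a
    acc_b, size_b = b
    no_worse = acc_a >= acc_b and size_a <= size_b
    strictly_better = acc_a > acc_b or size_a < size_b
    return no_worse and strictly_better

baseline = (80.0, 3.3)                         # hypothetical FAISS PQ M=2 anchor
mmv = (baseline[0] + 4.5, baseline[1] / 3.3)   # +4.5 points, 3.3x smaller
print(pareto_dominates(mmv, baseline))         # True
```

A point that wins on one axis by losing on the other would fail this check; winning both simultaneously is what makes the Banking77 result a clean claim rather than a trade-off.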

Fig. 4 · Banking77 — accuracy and size, vs FAISS PQ M=2
Chart: artifact size (smaller is better) vs. accuracy (higher is better). Points: FAISS PQ M=2 (industry baseline) and MMV (+4.5 accuracy points, 3.3× smaller).
Pareto-dominant: better on both accuracy and size at the same time. A clean win, not a trade-off.

V. What we have not yet shown

The Series A milestone.

Honest summary: we have proof points, not a finished product. The Series A funds the path from validated proof of concept to externally validated commercial proof. Specifically, the four things we have not yet shown:

End-to-end LLM training at commercial scale.
We have validated the components. We have not yet trained a complete model end-to-end on our infrastructure and benchmarked it head-to-head against a conventionally-trained equivalent.
Side-by-side training-time speed and cost.
Our compression ratios imply meaningful cost reductions at scale. We have not yet measured those reductions on a production-grade end-to-end run.
Multi-domain continual learning at 7B scale.
Validated at 0.5B. The 7B run is in flight; full demonstration is part of the Series A scope.
Independent third-party validation.
Peer-reviewed publication and / or external lab reproduction is included in the milestone deliverables.

VI. Scope

What we are. What we are not.

We sell efficiency and capability that hyperscalers, enterprises, and device manufacturers license to make their own models cheaper to train and easier to update. The structural comparable is ARM — a $132B foundational-efficiency layer paid by everyone who builds on top.

We are
  • An AI infrastructure company in Clearwater, Florida.
  • Two patents pending: IAMAI001 and IAMAI002.
  • A five-person team, under $250K self-funded to date.
  • A licensing business with the same economics as ARM, Qualcomm, Dolby.
We are not
  • Not a hyperscaler. We don't build frontier models.
  • Not a foundation-model company. The proof model is a validation artifact.
  • Not a consumer-applications company. We don't build chatbots or agents.
  • Not a custom-silicon or hardware play. Algorithms, not chips.
  • Not a research lab without commercial intent.
Fig. 5 · Three buyer layers · Structural comparable: ARM
  • Tier 1 · Largest contracts · AI labs & hyperscalers: OpenAI · Anthropic · Google · Meta · Microsoft · AWS.
  • Tier 2 · Compliance-driven · Enterprises deploying AI at scale: financial · healthcare · defense · regulated industries.
  • Tier 3 · Broadest reach · Device manufacturers putting AI on-device: phones · laptops · vehicles · embedded systems.
Structural comparable: ARM, a $132B foundational-efficiency layer.
Three buyer layers, each with distinct contract size and pricing logic. Same target business model as ARM, Qualcomm, Dolby: paid by everyone who ships on top of the layer.

VII. Capability domains

Where we stand, where the gap is.

The clearest answer to can MMV deliver what investors are looking for? is to put the questions every serious AI investor asks next to what we have actually measured, and to be specific about the gap.

Fig. 6 · Key domains and capability gaps

Model type
  Capability: full LLM, side-by-side training run.
  Proven today: 360M-parameter language model with our compression in the input embedding layer.
  Gap (Series A): larger model sizes (7B+), pending compute.

Compression
  Capability: reduce the data the model trains from.
  Proven today: 14× deployment storage of the input embedding layer; 1,000–2,900× compression of classification training artifacts at 88–99% retention; Banking77 beats the industry baseline on both accuracy and size.
  Gap (Series A): token-level training-data compression for LLM pretraining (research roadmap).

Quality retention
  Capability: comparable accuracy to the uncompressed baseline.
  Proven today: 77% of teacher accuracy at the compression point, isolated cleanly from training cost via a no-compression control.
  Gap (Series A): closing the 23-point gap to the teacher requires longer training runs at scale.

Speed / cost
  Capability: faster, cheaper LLM training.
  Proven today: not yet demonstrated end-to-end; cost wins are projected from compression ratios, not measured.
  Gap (Series A): end-to-end speed/cost benchmark on a production-scale workload.

Continual learning
  Capability: update without retraining from scratch.
  Proven today: 5 sequential tasks with no measurable forgetting; selective task unlearning at small scale; multi-domain LoRA composition validated on Qwen 0.5B.
  Gap (Series A): multi-domain validation at 7B parameter scale (in flight).

The gap entries are the Series A: capital, talent, time, and a defined milestone with falsifiable deliverables. The "proven today" entries are what we have now, repeatable, on the record, and covered by pending patents.
— End —