Rumblings — Roadmap 2026

Feature Matrix to 3-Tier Commercial Product · March – December 2026
Created: 2026-02-28 · Revised: 2026-04-14 · Author: Aria (strategic planning) · Constraint: Tom @ 2 days/week (656h) · Scope: Demo-ready May, free pilots June, paid Q3/Q4 · Status: MECE refactor complete
656h Total Budget · 4 Phases · 17 Features · 6 Milestones

Executive Summary

The honest assessment: 2026 is a validation year, not a revenue year. The business is funded. The goal is to prove the product works, build compelling case studies, run free pilots, and convert to paid by Q3/Q4. Revenue is an outcome of getting the product right, not the primary target.

The critical insight: The intelligence layer IS the product. Trend detection is table stakes — every trend getting a ‘So What’ (sector context), a lite ‘Now What’ (basic vertical-level activation suggestions), and generated narrative is what makes Rumblings worth paying for. If the LLM outputs are generic or wrong, nothing else matters.

The revised timeline: No pilots until June (free). No paid revenue until Q3/Q4. March was case studies + intelligence layer + data quality (all complete). April is V6 SOP quality + Tim onboarding. Multi-tenancy pulled forward to W16 via Tim Goerner (Augmentra, 2 days/week). This is a deliberate sequencing change — build the product that impresses, THEN worry about client infrastructure.

Hour Budget at a Glance

| Phase | Period | Hours | Focus |
|---|---|---|---|
| Phase 1: Build | March – May | 208h | Intelligence layer (V6 SOPs), /report skill, multi-tenancy (Tim), social research |
| Phase 2: Pilot | June – July | 128h | Free pilots, client matching, activation features |
| Phase 3: Tier 2 | August – September | 128h | Content briefs, creator matching, start charging |
| Phase 4: Tier 3 Stretch | October – December | 192h | API, attribution, trajectory modelling |
| Total | March – December | 656h | |
Note on March: Tom is pushing to 80h (20h/week) because March is critical. All other months are 64h (16h/week, 2 days).

What “Done” Looks Like

T1 Done

V6 reports generating quality intelligence + live demo + per-client homework process. That’s the definition of ready for pilots.

Pilots Done

2–3 clients receiving weekly intelligence reports (manual email delivery initially), providing feedback.

Year Done

Paid clients, validated ICP, clear Tier 2 upsell path, 2027 plan.


Milestones

| ID | Milestone | Target | Definition of Done |
|---|---|---|---|
| M1 | Pipeline + Scoring Complete | Done Feb 28 | Pipeline running, H/W/D scoring, dashboard operational |
| M2 | Demo Ready | May 31 | 3 case studies (retro + live), live demo, dashboard polished, intelligence layer producing quality output |
| M3 | First Pilots | Jun 30 | 2–3 free pilots launched, receiving weekly intelligence reports (manual email initially) |
| M4 | Validation Checkpoint | Jul 31 | Validation evidence from pilots, ICP clarity, go/no-go on scaling |
| M5 | First Revenue | Sep 30 | Convert free pilots to paid, scale decision made |
| M6 | Tier 2 Complete | Nov 30 | Client matching + activation + content briefs live for paying clients |

Feature Priority Order

This is the authoritative build sequence. Every feature depends on the ones above it.

| Priority | Feature | Target Period | Hours Est |
|---|---|---|---|
| 1 | Intelligence layer (So What + Now What lite + narratives) | March–April | ~55h |
| 2 | Data quality (Google Trends #1913, fuzzy dedup #1602, entity resolution #1706) | March | ~20h |
| 3 | Product polish (client-ready dashboard) | March–April | ~20h |
| 4 | Demo environment + pilot prep homework | March–April | ~10h |
| 5 | Pipeline observability (lightweight hooks) | April | ~4h |
| 6 | Legal/contracts (ToS, data agreements, privacy policy) | April | ~9h |
| 6.5 | Social Signal Validation Research | April–May (evenings) | ~15h |
| 7 | Multi-tenancy foundation | April–May (Tim W16+) | ~22h (Tim) |
| 8 | Minimal auth (API key + URL) — Tim W18 | May | ~4h (Tim) |
| 9 | Client onboarding (needs founder workshop for docs) | May–June | ~15h |
| 10 | Client matching (F-05) | June–July | ~40h |
| 11 | ‘Now What’ activation (F-06) | July | ~20h |
| 12 | Content briefs (F-07) | July–August | ~35h |
| 13 | Creator matching (F-08) | August | ~20h |
| 14 | Saturation alerts (F-09) | August | ~15h |
| 15 | API access (F-12) | September–October | ~45h |
| 16 | Trend attribution (F-11) | September–October | ~45h |
| 17 | Trajectory modelling (F-10) | October–November | ~45h |


Phase 1: Build (March – May) — 208h

March — “Case Studies + Intelligence Layer” (80h)

March is the make-or-break month. Tom is pushing to 20h/week because this work directly determines whether the June pilot demo lands. Four parallel streams, all critical.

Sprint M3-1: Intelligence Layer Foundation + Data Quality (Weeks 1–2, ~40h)

Intelligence Layer — Core Architecture (~20h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Design intelligence layer architecture | 4 | None | LLM pipeline: trend data in, So What + Now What lite + narrative out |
| Build ‘So What’ generation (sector context per trend) | 8 | Architecture | Every trend gets sector-specific context explaining WHY it matters |
| Build lite ‘Now What’ suggestions (basic vertical-level) | 5 | So What working | Generic activation suggestions by vertical, NOT client-specific |
| Narrative generation (trend story for reports) | 3 | So What + Now What | Combining signals into readable trend narrative |
Important: Lite ‘Now What’ is part of the Tier 1 product — basic vertical-level suggestions that ship with every trend. This is NOT the full client-specific ‘Now What’ activation (F-06), which is Tier 2. Don’t over-build here.
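The table above describes a three-stage LLM pipeline. A minimal runnable sketch of that flow, with the model call stubbed out (prompts, function names, and the `Trend` fields are illustrative assumptions, not the shipped design):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trend:
    name: str
    vertical: str       # e.g. "fashion", "tech"
    signals: list[str]  # evidence lines from the scoring pipeline

def build_intelligence(trend: Trend, llm: Callable[[str], str]) -> dict:
    """So What -> lite Now What -> narrative, each stage grounded in the last."""
    so_what = llm(
        f"Trend: {trend.name}. Vertical: {trend.vertical}. "
        f"Signals: {'; '.join(trend.signals)}. "
        "Explain in 2-3 sentences WHY this matters for the sector."
    )
    now_what_lite = llm(
        f"Context: {so_what} "
        f"Suggest 2-3 generic activation ideas for {trend.vertical} brands. "
        "Do NOT tailor to any specific client."
    )
    narrative = llm(f"Combine into a short readable trend story: {so_what} {now_what_lite}")
    return {"so_what": so_what, "now_what_lite": now_what_lite, "narrative": narrative}

# Stand-in "LLM" for local testing; production would call a real model.
def fake_llm(prompt: str) -> str:
    return f"[generated from: {prompt[:40]}...]"

out = build_intelligence(Trend("quiet luxury", "fashion", ["Substack velocity up"]), fake_llm)
```

Chaining the stages keeps the narrative consistent with the generated context rather than re-deriving it from raw signals.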

Data Quality Fixes (~20h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Google Trends enrichment fix (#1913) | 8 | None | Broken since Feb 11. 6,493 pending. Blocking Height V2 GA dual scoring. |
| Fuzzy dedup (#1602) | 6 | None | Duplicate terms polluting signal quality |
| Entity resolution (#1706) | 6 | None | Same entity appearing as different terms |
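
One way to picture the fuzzy-dedup task (#1602). The matching method and threshold here are assumptions for illustration, not the shipped implementation:

```python
from difflib import SequenceMatcher

def dedup_terms(terms: list[str], threshold: float = 0.85) -> list[str]:
    """Collapse near-duplicate trend terms (case/spacing/spelling variants).

    Keeps the first occurrence; later terms whose similarity to a kept
    term meets `threshold` are treated as duplicates and dropped.
    """
    kept: list[str] = []
    for term in terms:
        norm = term.lower().strip()
        if any(SequenceMatcher(None, norm, k.lower().strip()).ratio() >= threshold
               for k in kept):
            continue
        kept.append(term)
    return kept

print(dedup_terms(["Quiet Luxury", "quiet luxury ", "mob wife aesthetic"]))
# -> ['Quiet Luxury', 'mob wife aesthetic']
```

Pairwise comparison is O(n²); fine for thousands of terms, and a blocking key (e.g. first token) could cut it down if volume grows.
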
Sprint M3-2: Case Studies + Polish (Weeks 3–4, ~40h)

Case Studies (~20h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Select 3–5 retrospective trend candidates | 2 | None | Cherry-pick from 151 archaeology trends — strongest lead-time examples |
| Build retrospective case studies (3 minimum) | 10 | Candidate selection | Show: Rumblings detected X, Y weeks before mainstream. Timeline, signals, outcome. |
| Set up live tracking case studies | 5 | Intelligence layer | Pick 2–3 current emerging trends, start tracking with full intelligence layer |
| Write case study presentation materials | 3 | Retro cases done | Slides/docs for AJ and Jen to use in kickoffs |
T1 “Done” = 3 case studies + live demo. If we nail the retrospective cases and the live tracking shows signal, we’re ready.

Product Polish (~10h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Dashboard client-readiness audit | 3 | None | What does a non-technical person see? Remove internal jargon, fix UX issues |
| Dashboard visual polish | 4 | Audit complete | Charts, layout, loading states, error messages |
| Demo flow preparation | 3 | Polish done | Guided demo path through dashboard showing intelligence layer value |

March Deliverables

  • Intelligence layer producing So What + lite Now What + narrative for every scored trend — V5 shipped, V6 in progress
  • Google Trends enrichment working again (completed 2026-02-25)
  • Fuzzy dedup and entity resolution deployed — deployed W11
  • 3 retrospective case studies written — V5 reports shipped W13; standalone case studies reframed as per-client homework (MECE refactor Apr 14)
  • 2–3 live tracking case studies initiated
  • Dashboard visually polished for external eyes

April — “Intelligence Deep + Demo Ready” (64h)

April is about quality. The intelligence layer exists from March — now it needs to be GOOD. Jen and AJ start reviewing outputs and providing feedback. Pipeline observability gets lightweight hooks. Tim Goerner (Augmentra) starts W16 on multi-tenancy. Email delivery is manual for pilots.

Sprint M4-1: Intelligence Refinement + Pipeline Quality (Weeks 1–2, ~32h)

Intelligence Layer Quality (~15h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Jen/AJ quality review loop — first round | 3 | March intelligence layer | They review 20–30 trend outputs, flag generic/wrong/weak ones |
| Prompt iteration based on feedback | 6 | Review feedback | The hard work — making LLM outputs specific, insightful, actionable |
| Sector-specific tuning (curated examples per vertical) | 4 | Prompt iteration | Fashion trends need different So What framing than tech trends |
| Second review round + refinement | 2 | Tuning done | Iterate until quality bar is met |
This is the highest-risk work in the entire plan. If So What/Now What outputs are generic (“This trend is growing and brands should pay attention”), we fail. The outputs need to be specific, insightful, and show domain expertise. Budget extra time here if needed — steal from polish, not from this.

Pipeline & Data Quality (~10h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| GT enrichment all sources | 2 | None | Extend Google Trends enrichment across all collector sources |
| Intelligence edge cases | 4 | Core intelligence working | Handle insufficient data, low confidence, novel patterns |
| Narrative quality validation | 2 | Prompt iteration | Readability, tone, length consistency checks |
| Pipeline observability hooks | 2 | Core pipeline stable | Lightweight monitoring hooks for pipeline health |
Email delivery is manual for pilots. Automated delivery deferred to Tim’s WS2 or Phase 2.

Tim Goerner / Augmentra (~32h from W16)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Local dev setup | 4 | None | DB snapshot, Plotly Dash env, no VPS access |
| Multi-tenancy DDL | 14 | Dev setup done | Client tables, schema design, migration scripts |
| Client-scoped queries | 8 | DDL complete | Every intelligence query becomes client-aware |
| Validation UI backend | 4 | Queries working | Backend support for validation views |
| React portal proposal | 2 | DDL complete | Proposal only — decision deferred to WS2 |
Sprint M4-2: Demo Materials + Legal (Weeks 3–4, ~32h)

Intelligence Layer Continued (~10h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Edge case handling (insufficient data, low confidence) | 4 | Core intelligence working | What does the system say when it doesn’t have enough signal? |
| Narrative quality polish | 3 | Prompt iteration | Readability, tone, length consistency |
| Performance optimization (batch processing for reports) | 3 | Core pipeline stable | Intelligence layer needs to process all scored trends efficiently |

Case Study Refinement + Demo Materials (~10h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Refine retrospective case studies with intelligence layer outputs | 4 | Intelligence layer quality pass | Now case studies include actual So What / Now What content |
| Live tracking case study update | 2 | Ongoing since March | How are the live-tracked trends performing? Update narrative. |
| Build demo script for AJ/Jen | 2 | Dashboard polished | Step-by-step guided demo they can run independently |
| Demo environment setup | 2 | Demo script | Clean demo dataset, stable environment, no surprises |

Legal / Contracts (~9h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Terms of Service draft | 3 | None | Lori leads, Tom reviews. Standard SaaS ToS. |
| Data agreements | 3 | None | What data do we collect, how do we use it, client data ownership |
| Privacy policy | 3 | None | Especially important given trend data sourcing |
Legal was missing from the original plan. Lori leads this work — Tom’s role is technical review (data handling, API terms) and sign-off. 9h is Tom’s time, not total effort.

April Deliverables

  • Intelligence layer outputs reviewed and approved by Jen/AJ (quality bar met)
  • Pipeline observability hooks deployed
  • Demo script written and tested
  • 3 case studies refined with intelligence layer content
  • Legal documents drafted (ToS, data agreements, privacy policy)
  • Tim WS1: Multi-tenancy tables + scoped queries

May — “Multi-Tenancy + Pilot Prep” (64h)

May is infrastructure month. Multi-tenancy gets built just-in-time for June pilots. Auth gets researched and minimally implemented. Onboarding flow gets created so AJ/Jen can run client kickoffs independently.

Sprint M5-1: Multi-Tenancy Foundation (Weeks 1–2, ~32h)

Multi-Tenancy (~30h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Client data model design (tables, relationships) | 6 | None | Minimum viable: clients, client_verticals, client_terms, client_preferences |
| Client tables + migrations | 8 | Data model designed | PostgreSQL schema, migration scripts |
| Query layer updates (filter by client context) | 10 | Tables created | Every intelligence query becomes client-aware |
| Dashboard multi-tenant views | 6 | Query layer done | Client-scoped dashboard views (client sees only their trends) |
Scope discipline is critical here. Minimum viable client model = 3–4 tables. No fancy role-based access control, no billing integration, no usage tracking. Just “who is this client and what verticals/terms do they care about?” Multi-tenancy scope creep is Risk #2.
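
A sketch of what the minimum-viable client model could look like. Column names are illustrative assumptions (the plan fixes only the four table names), and production targets PostgreSQL; SQLite is used here only to keep the example self-contained:

```python
import sqlite3

# Hypothetical DDL for the four-table minimum-viable client model.
DDL = """
CREATE TABLE clients (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    api_key TEXT UNIQUE
);
CREATE TABLE client_verticals (
    client_id INTEGER REFERENCES clients(id),
    vertical TEXT NOT NULL
);
CREATE TABLE client_terms (
    client_id INTEGER REFERENCES clients(id),
    term TEXT NOT NULL
);
CREATE TABLE client_preferences (
    client_id INTEGER REFERENCES clients(id),
    key TEXT NOT NULL,
    value TEXT
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)

# "Client-aware" then means every intelligence query joins through these
# tables, e.g. (illustrative):
#   SELECT t.* FROM trends t
#   JOIN client_verticals cv ON t.vertical = cv.vertical
#   WHERE cv.client_id = ?
```

Note there is no roles table, billing table, or usage log — matching the scope discipline above.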
Sprint M5-2: Auth + Pilot Prep (Weeks 3–4, ~32h)

Minimal Auth (API key + unique URL) — Tim W18 (~4h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| API key generation per client | 2 | Multi-tenancy done | Simple key-based auth, no login flow needed for pilots |
| Unique URL per client | 2 | API key working | Client accesses their scoped view via unique URL + key |
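The “API key + unique URL” scheme might look like this. The URL format, function name, and base URL are hypothetical; the plan specifies only the two primitives:

```python
import secrets

def provision_client_access(client_slug: str,
                            base_url: str = "https://example.invalid") -> dict:
    """Generate a per-client API key and an unguessable scoped URL."""
    return {
        # 32 random bytes, URL-safe encoded — the per-client API key.
        "api_key": secrets.token_urlsafe(32),
        # Random path component doubles as the client's unique URL.
        "url": f"{base_url}/c/{client_slug}-{secrets.token_hex(8)}",
    }

access = provision_client_access("acme")
```

`secrets` (not `random`) is the right source for anything auth-related; the key should be stored hashed server-side if this grows beyond pilots.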

Tim Goerner / Augmentra WS2 — Report Infrastructure (conditional, June–July)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Report infrastructure backend | 10 | WS1 complete | Serving generated reports to client-scoped views |
| Manual email delivery tooling | 4 | Reports accessible | Lightweight tooling to support manual email sends during pilots |
| React portal evaluation | 4 | WS1 learnings | Evaluate only — build decision based on pilot feedback |

Pilot Preparation (~15h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Client onboarding flow design | 4 | Multi-tenancy done | Discovery → seed terms → configuration → first delivery |
| Onboarding documentation for AJ/Jen | 4 | Flow designed | They run client kickoffs (discovery, seed term workshop). Tom does technical config only. |
| Technical onboarding automation | 5 | Documentation done | Script/tool to create client, set up terms, configure notifications |
| End-to-end pilot dry run | 2 | Everything above | Full pilot simulation: onboard fake client, deliver first intelligence report |
AJ/Jen run client kickoffs (discovery, seed term workshop). Tom does technical config only. This separation is critical for scaling — Tom can’t be in every client meeting.

May Deliverables

  • Multi-tenancy foundation deployed (client tables, scoped queries, scoped dashboard)
  • Minimal auth (API key + unique URL) implemented
  • Client onboarding flow documented and tested
  • AJ/Jen trained on kickoff process
  • End-to-end pilot dry run completed successfully
  • All M2 milestone criteria met (3 case studies + live demo + dashboard polished)


Phase 2: Pilot (June – July) — 128h

June — “First Pilots” (64h)

The moment of truth. Real clients (free) receiving real intelligence. Every assumption gets tested.

Sprint M6-1: Launch Pilots (Weeks 1–2, ~32h)

Multi-Tenancy Completion (~10h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Multi-tenancy refinements from dry run | 5 | May dry run feedback | Edge cases, performance issues, missing fields |
| Client data seeding for pilot clients | 5 | Clients identified | AJ/Jen have run kickoffs, Tom configures technical setup |

First Pilots (~15h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Pilot client 1: technical onboarding | 5 | Client kickoff done (AJ/Jen) | Configure terms, verticals, notification preferences |
| Pilot client 2: technical onboarding | 5 | Client kickoff done (AJ/Jen) | Same process, second client |
| Pilot client 3: technical onboarding (stretch) | 3 | If bandwidth allows | Third pilot if things go smoothly |
| First weekly intelligence delivery | 2 | Onboarding complete | Manual email delivery. Watch for issues. |

Bug Fixes + Iteration (~7h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Day-1 bug fixes | 4 | Pilots launched | Things will break. Budget for it. |
| Delivery quality review | 3 | First delivery sent | Review what clients actually received. Quality check. |
Sprint M6-2: Client Matching Starts (Weeks 3–4, ~32h)

Client Matching (F-05) (~20h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Client matching architecture | 4 | Multi-tenancy done | Match trends to specific client verticals, interests, brand positioning |
| Client-trend relevance scoring | 10 | Architecture designed | LLM-based: given client profile + trend, how relevant is this? |
| Client-matched report generation | 6 | Scoring working | Clients receive only trends relevant to THEM, ranked by relevance |
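The plan calls for LLM-based relevance scoring. As a runnable stand-in for testing the ranking plumbing without a model, a simple term-overlap heuristic (explicitly a placeholder, not the intended technique) could look like:

```python
def relevance_score(client_profile: dict, trend_terms: set[str]) -> float:
    """Placeholder for the LLM relevance scorer: Jaccard overlap between the
    client's seed terms and the trend's terms, in [0, 1]."""
    interests = {t.lower() for t in client_profile.get("terms", [])}
    trend = {t.lower() for t in trend_terms}
    if not interests or not trend:
        return 0.0
    return len(interests & trend) / len(interests | trend)

client = {"terms": ["streetwear", "sneakers"]}
# A sneaker-adjacent trend should outrank an unrelated one for this client.
assert relevance_score(client, {"sneakers", "resale"}) > relevance_score(client, {"fintech"})
```

Keeping the scorer behind a single function boundary means the heuristic can be swapped for the LLM call later without touching report generation.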

Pilot Iteration (~12h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Week 2 intelligence delivery | 2 | Ongoing | Deliver, review, iterate |
| Client feedback collection | 3 | Deliveries happening | What’s useful? What’s noise? What’s missing? |
| Feedback-driven improvements | 5 | Feedback received | Adjust intelligence layer, matching, delivery format |
| Week 3 intelligence delivery | 2 | Improvements made | Better delivery informed by feedback |

June Deliverables

  • 2–3 free pilots launched and receiving weekly intelligence
  • Intelligence delivered via manual email
  • Client matching (F-05) architecture built and partially deployed
  • First round of client feedback collected and acted on
  • M3 milestone met (2–3 free pilots receiving weekly intelligence)

July — “Prove Value + Validation Checkpoint” (64h)

July is about deepening value for pilot clients and gathering the evidence needed for the validation checkpoint. Client matching gets completed, ‘Now What’ activation starts, content briefs begin.

Sprint M7-1: Matching + Activation (Weeks 1–2, ~32h)

Client Matching Completion (~20h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Client matching refinement from pilot feedback | 8 | June feedback | Are we matching the right trends to the right clients? |
| Relevance scoring calibration | 6 | Feedback-driven | Tune the matching model based on what clients actually found useful |
| Matched delivery automation | 6 | Matching refined | Fully automated: pipeline scores trends → matches to clients → delivers |

‘Now What’ Activation (F-06) (~12h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| ‘Now What’ client-specific architecture | 3 | Client matching done | Given client profile + matched trend, generate specific activation suggestions |
| Client-specific activation generation | 6 | Architecture done | “Brand X should do Y because Z” — specific, actionable, client-aware |
| Integration with intelligence reports | 3 | Generation working | Now What appears in weekly intelligence alongside So What |
Tier 1 has lite ‘Now What’ (generic vertical-level). Tier 2 ‘Now What’ (F-06) is client-specific. The upgrade path should be obvious to clients: “See how we say ‘fashion brands should X’? Paid tier says ‘YOUR brand should X because of your positioning.’”
Sprint M7-2: Content Briefs Start + Checkpoint (Weeks 3–4, ~32h)

Content Brief Generation (F-07) Start (~15h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Content brief template design | 3 | None | 500-word structured brief: angle, audience, key messages, format, timing, brand safety |
| Brief generation pipeline | 8 | Template designed | LLM generates briefs from trend data + client profile + So What / Now What context |
| Brief quality review (Jen) | 4 | First briefs generated | Jen reviews output quality — this is content strategy, her domain |
Content briefs are medium depth — 500-word structured brief, NOT a full content strategy. Structured sections: angle, target audience, key messages, recommended format, timing window, brand safety considerations.

Validation Checkpoint Prep (~8h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Compile pilot results and metrics | 3 | July pilots running | What have we delivered? What feedback? Client engagement metrics? |
| Validation evidence document | 3 | Results compiled | Clear evidence of product-market fit (or gaps) |
| Checkpoint presentation | 2 | Evidence documented | M4 validation review: evidence, ICP clarity, go/no-go on scaling |

Ongoing Pilot Delivery (~9h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Weekly intelligence deliveries (4 weeks) | 4 | Ongoing | Automated but monitored |
| Client feedback iteration | 3 | Ongoing | Continuous improvement loop |
| Bug fixes + operational issues | 2 | Ongoing | Things break, clients ask questions |

July Deliverables

  • Client matching (F-05) fully deployed and calibrated
  • ‘Now What’ activation (F-06) client-specific generation working
  • Content briefs (F-07) pipeline started, first briefs reviewed by Jen
  • M4 validation checkpoint completed with evidence
  • Clear go/no-go decision on scaling


Phase 3: Tier 2 (August – September) — 128h

August — “Tier 2 Core” (64h)

Assuming M4 checkpoint is a go, August builds out the Tier 2 features that differentiate the paid product.

Sprint M8-1: Content Briefs + Creator Matching (Weeks 1–2, ~32h)

Content Brief Completion (~20h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Brief generation refinement from Jen feedback | 8 | July review | Quality tuning — briefs need to be genuinely useful, not generic |
| Multi-format brief support | 5 | Core briefs working | Social post angles vs. long-form vs. video concept briefs |
| Brief delivery integration | 4 | Briefs refined | Briefs included in weekly intelligence reports for Tier 2 clients |
| Automated brief quality scoring | 3 | Delivery working | Self-assessment: flag low-confidence briefs for human review |

Creator Matching (F-08) (~12h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Creator signal extraction from existing data | 5 | None | Mine Bluesky, Substack, HN data for creator profiles |
| Creator-trend matching | 5 | Signal extraction done | Which creators are already talking about which trends? |
| Creator recommendation in reports | 2 | Matching working | “These creators are already engaging with this trend” |
Creator matching builds from EXISTING signal data. We do NOT build a separate creator database. We already have Bluesky engagement data, Substack author data, HN poster data. Mine it. This keeps scope manageable and the data fresh.
Sprint M8-2: Saturation Alerts + Iteration (Weeks 3–4, ~32h)

Saturation Alerts (F-09) (~15h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Saturation signal detection | 6 | Trend scoring pipeline | When does a trend go from “emerging” to “saturated”? Velocity decay, mainstream media pickup. |
| Saturation alert generation | 5 | Detection working | “Trend X is approaching saturation — act now or deprioritize” |
| Alert delivery integration | 4 | Notification system | Saturation alerts via email, separate from weekly report |
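The velocity-decay side of saturation detection can be sketched very simply. The signal shape and threshold (consecutive weeks of non-positive growth) are illustrative assumptions; the real detector would also weigh mainstream media pickup:

```python
def is_saturating(weekly_mentions: list[float], decay_weeks: int = 3) -> bool:
    """Flag a trend whose signal velocity has been decaying.

    Heuristic: saturating if week-over-week growth has been non-positive
    for the last `decay_weeks` consecutive weeks.
    """
    if len(weekly_mentions) < decay_weeks + 1:
        return False  # not enough history to judge
    recent = weekly_mentions[-(decay_weeks + 1):]
    deltas = [b - a for a, b in zip(recent, recent[1:])]
    return all(d <= 0 for d in deltas)

assert is_saturating([10, 40, 90, 88, 80, 75]) is True   # peaked, now declining
assert is_saturating([10, 20, 45, 90]) is False          # still accelerating
```

A trailing-window rule like this is deliberately conservative: one noisy down-week does not trigger an alert.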

Ongoing Pilot Delivery + Iteration (~9h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Weekly deliveries (4 weeks) | 4 | Ongoing | Now includes matching, activation, briefs for Tier 2 pilots |
| Client feedback + iteration | 3 | Ongoing | Are Tier 2 features actually valued? |
| Bug fixes + operational | 2 | Ongoing | Steady-state operations |

August Deliverables

  • Content briefs (F-07) fully deployed (500-word structured briefs)
  • Creator matching (F-08) live from existing signal data
  • Saturation alerts (F-09) deployed
  • Tier 2 feature suite available for pilot upsell conversations

September — “Start Charging + Scale Decision” (64h)

September is the conversion month. Free pilots become paid clients (or they don’t, and that’s important information too). Technical work shifts to Tier 3 foundations.

Sprint M9-1: Conversion + API (Weeks 1–2, ~32h)

Pilot Conversion (Sales/Ops — Low Tom Hours) (~5h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Technical support for conversion conversations | 3 | AJ/Jen lead sales | Tom provides technical answers, pricing support, contract configuration |
| Billing/subscription setup | 2 | Conversion happening | Minimal billing integration (Stripe or manual invoicing for v1) |
Conversion is sales/ops work, not engineering. AJ runs relationships, Lori handles ops, Jen handles content quality positioning. Tom supports technically.

API Access (F-12) (~25h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| REST API design | 4 | Multi-tenancy stable | Authenticated API for agencies building on Rumblings data |
| API implementation (trend data + intelligence) | 12 | Design complete | Core endpoints: trends, intelligence, matching, briefs |
| Authentication + rate limiting | 5 | API implemented | API key auth, per-client rate limits |
| API documentation | 4 | API working | Developer docs for agency clients |
API access is for agencies building on top of Rumblings data. Authenticated REST API, not a public data dump. This opens the “platform” revenue stream alongside the “product” revenue stream.
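Per-client rate limiting could be as small as a token bucket keyed by API key. The plan specifies only “per-client rate limits”; the token-bucket choice and parameters here are assumptions:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows short bursts up to `burst`,
    then refills at `rate_per_sec` tokens per second."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_sec=1.0, burst=2)
results = [bucket.allow() for _ in range(3)]  # burst of 2 allowed, third denied
```

In the API, one bucket per API key (held in a dict or Redis) gives the per-client behavior the table describes.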
Sprint M9-2: Attribution + Scale Decision (Weeks 3–4, ~32h)

Trend Attribution (F-11) (~25h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Attribution model design | 5 | Historical data available | “Rumblings detected this trend X days before mainstream media” — provable lead time |
| Attribution data pipeline | 10 | Model designed | Compare Rumblings detection timestamps vs. mainstream media coverage timestamps |
| Attribution reports for clients | 6 | Pipeline working | “Here’s proof we gave you early signal” — retention and upsell tool |
| Attribution dashboard view | 4 | Reports working | Visual timeline: our signal vs. mainstream coverage |
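The core attribution computation is a timestamp comparison. A minimal sketch of the lead-time claim (function name is illustrative):

```python
from datetime import date

def lead_time_days(rumblings_detected: date, mainstream_first: date) -> int:
    """Days of early signal: positive means Rumblings detected the trend
    before first mainstream coverage — the provable lead-time claim."""
    return (mainstream_first - rumblings_detected).days

# Detected Jan 5, first mainstream coverage Feb 16: 42 days of lead time.
assert lead_time_days(date(2026, 1, 5), date(2026, 2, 16)) == 42
```

The hard part the pipeline budget covers is not this arithmetic but establishing a defensible “first mainstream coverage” timestamp per trend.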

M5 Scale Decision (~7h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Compile all pilot + conversion data | 3 | September data available | Revenue, retention, feedback, usage metrics |
| Scale decision document | 2 | Data compiled | Honest assessment: is this working? What needs to change? |
| 2027 preliminary planning | 2 | Decision made | Based on M5 outcome: scale up, pivot, or adjust |

September Deliverables

  • Free pilots converted to paid (or clear lessons on why not)
  • M5 Scale Decision checkpoint completed
  • API access (F-12) implemented and documented
  • Trend attribution (F-11) pipeline deployed
  • Clear evidence for or against scaling


Phase 4: Tier 3 Stretch (October – December) — 192h

October — “Tier 3 Foundations” (64h)

Sprint M10-1: API + Attribution Completion (Weeks 1–2, ~32h)

API Completion (~20h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| API refinement from early adopter feedback | 8 | Sep API launch | What endpoints are agencies actually using? What’s missing? |
| API v1.1 improvements | 8 | Feedback collected | Add missing endpoints, improve response formats |
| API monitoring + analytics | 4 | API stable | Usage tracking, error rates, latency monitoring |

Attribution Completion (~12h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Attribution refinement | 6 | Sep attribution launch | Improve accuracy, expand mainstream media comparison sources |
| Client-facing attribution reports | 6 | Refinement done | Automated monthly attribution summaries per client |
Sprint M10-2: Trajectory Modelling Starts (Weeks 3–4, ~32h)

Trajectory Modelling (F-10) — Basic Version (~20h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Pattern matching architecture | 5 | 151 archaeology trends available | Match current trends against historical trajectory patterns |
| Historical pattern library | 8 | Architecture designed | Classify 151 archaeology trends into trajectory archetypes (flash, slow burn, seasonal, etc.) |
| Basic trajectory prediction | 7 | Pattern library built | “This trend’s signal pattern looks like [archetype], which typically [outcome]” |
This is pattern matching, NOT full predictive modelling. We have 151 archived trends from trend archaeology. Match current trend signal patterns against historical ones. “This trend is following the same signal pattern as X, which peaked in Y weeks and Z’d.” Good enough beats perfect.
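A sketch of nearest-archetype matching on peak-normalized signal curves. The archetype shapes and the distance metric are illustrative assumptions; the real library would be derived from the 151 archived trends:

```python
def _normalize(series: list[float]) -> list[float]:
    """Scale a signal curve so its peak is 1.0 (shape, not magnitude)."""
    peak = max(series) or 1.0
    return [v / peak for v in series]

# Made-up archetype shapes for illustration (the plan names archetypes like
# "flash" and "slow burn" but does not define their curves).
ARCHETYPES = {
    "flash":     [0.1, 0.9, 1.0, 0.4, 0.1],
    "slow_burn": [0.1, 0.2, 0.4, 0.7, 1.0],
}

def match_archetype(signal: list[float]) -> str:
    """Nearest archetype by mean squared distance between normalized curves."""
    s = _normalize(signal)

    def dist(shape: list[float]) -> float:
        return sum((a - b) ** 2 for a, b in zip(s, shape)) / len(shape)

    return min(ARCHETYPES, key=lambda name: dist(ARCHETYPES[name]))

assert match_archetype([5, 45, 50, 20, 5]) == "flash"      # spike then collapse
assert match_archetype([2, 4, 9, 15, 21]) == "slow_burn"   # steady climb
```

Distance to the nearest archetype also gives a natural confidence signal: a large minimum distance means “novel pattern, low confidence”, which feeds directly into the November confidence-scoring task.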

November — “Trajectory + Polish” (64h)

Sprint M11-1: Trajectory Refinement (Weeks 1–2, ~32h)

Trajectory Modelling Refinement (~25h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Trajectory accuracy validation | 8 | Basic model working | Back-test: how well do pattern matches predict actual outcomes? |
| Confidence scoring | 5 | Validation done | “High confidence (similar to 12 historical trends)” vs. “Low confidence (novel pattern)” |
| Trajectory visualization | 6 | Scoring working | Dashboard view showing predicted trajectory with confidence bands |
| Integration with intelligence reports | 6 | Visualization done | Trajectory predictions included in weekly intelligence for Tier 3 clients |

Client-Driven Improvements (~7h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Client feature requests triage | 3 | Ongoing feedback | What are clients actually asking for? |
| High-priority client improvements | 4 | Triage done | Build what clients need most |
Sprint M11-2: Data Quality + Polish (Weeks 3–4, ~32h)

Data Quality Improvements (~10h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Collector health improvements | 4 | Ongoing monitoring | Fix any degraded collectors |
| Signal quality enhancements | 4 | Ongoing monitoring | Improve scoring accuracy based on months of real data |
| Pipeline performance optimization | 2 | Ongoing | Speed, reliability, cost optimization |

M6 Tier 2 Complete Milestone (~4h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Tier 2 feature audit | 2 | All Tier 2 features deployed | Are all Tier 2 features production-quality? |
| M6 milestone documentation | 2 | Audit done | Formal sign-off: matching + activation + briefs live for paying clients |

November Deliverables

  • Trajectory modelling (F-10) deployed with confidence scoring
  • M6 Tier 2 Complete milestone met
  • All data quality issues addressed
  • Client-driven improvements shipped

December — “Polish + Package” (64h)

Sprint M12-1: Final Polish (Weeks 1–2, ~32h)

Tier 1 Final Polish (~15h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Intelligence layer quality audit | 5 | Full year of data | Review output quality after months of iteration |
| Notification system reliability | 3 | Months of delivery | Fix any delivery reliability issues |
| Dashboard UX improvements | 4 | Client feedback | Polish based on 6 months of actual usage |
| Documentation update | 3 | All above | Ensure all documentation reflects current state |

Tier 2 Final Polish (~15h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| Client matching accuracy review | 4 | Months of matching data | How accurate is matching? What can improve? |
| Content brief quality review | 4 | Months of briefs | Brief quality over time — improving or degrading? |
| Creator matching refresh | 4 | Months of signal data | Update creator profiles with latest data |
| Feature integration audit | 3 | All Tier 2 features | Do all Tier 2 features work well together? |
Sprint M12-2: Package + Plan (Weeks 3–4, ~32h)

Tier 3 Preview Packaging (~15h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| API access packaging | 5 | API stable | Developer portal, onboarding flow, pricing |
| Attribution product packaging | 5 | Attribution stable | How do we sell/present attribution as a feature? |
| Trajectory modelling preview | 5 | Model stable | Preview packaging for Tier 3 upsell conversations |

Year-End Review + 2027 Planning (~19h)

| Task | Hours | Dependencies | Notes |
|---|---|---|---|
| 2026 year-in-review document | 5 | Full year data | What worked, what didn’t, key metrics, lessons |
| Client feedback synthesis | 4 | All client feedback | Aggregate themes across all pilot and paid clients |
| 2027 roadmap draft | 5 | Review + feedback | Where does the product go next? White-label (F-13)? Scale? |
| Technical debt assessment | 3 | Full codebase review | What needs refactoring before 2027 scale? |
| Founder alignment session prep | 2 | All above | Materials for end-of-year founder planning session |

December Deliverables

  • All three tiers polished and production-quality
  • Tier 3 packaged for 2027 sales conversations
  • 2026 review completed
  • 2027 roadmap drafted
  • Technical debt documented and prioritized

Critical Path

The critical path is the sequence of work where a delay in any item delays the entire plan. Everything else has slack.

Intelligence Layer (Mar)
     Intelligence Quality (Apr)
         Case Studies Refined (Apr)
             Demo Ready [M2] (May)
                 Multi-Tenancy (May)
                     Pilot Onboarding (May–Jun)
                         First Pilots [M3] (Jun)
                             Client Feedback (Jun–Jul)
                                 Validation Checkpoint [M4] (Jul)
                                     Conversion (Sep)
                                         First Revenue [M5] (Sep)

The critical dependency: Intelligence layer quality. If So What / Now What outputs aren’t good by end of April, case studies don’t impress, demo doesn’t land, pilots don’t convert. Everything flows from output quality.

Off the critical path (can slip without delaying milestones):


Dependencies Map

Internal Dependencies

| Feature | Depends On | Notes |
|---|---|---|
| Intelligence layer | Scored trend data (done) | H/W/D scoring pipeline is M1-complete |
| Case studies | Intelligence layer | Need So What / Now What in case study output |
| Notification system | None (standalone) | But needs intelligence layer content to be useful |
| Multi-tenancy | None (standalone) | But benefits from notification system being ready |
| Auth | Multi-tenancy | Needs client model to scope access |
| Client matching (F-05) | Multi-tenancy | Needs client profiles to match against |
| ‘Now What’ activation (F-06) | Client matching | Needs matched trends to generate client-specific actions |
| Content briefs (F-07) | Client matching | Needs client context for relevant briefs |
| Creator matching (F-08) | Signal data (existing) | Mines existing Bluesky/Substack/HN data |
| Saturation alerts (F-09) | Trend scoring pipeline | Detects velocity decay in scored trends |
| Trajectory modelling (F-10) | 151 archaeology trends | Pattern matching against historical data |
| Trend attribution (F-11) | Historical detection data | Compares our timestamps vs. mainstream |
| API access (F-12) | Multi-tenancy + auth | Needs client scoping and authentication |

External Dependencies

| Dependency | Owner | Risk | Notes |
|---|---|---|---|
| Jen/AJ quality review | Jen, AJ | Medium | Need their availability in April for intelligence layer review |
| AJ/Jen pilot kickoffs | AJ, Jen | Medium | They run discovery + seed term workshops. Tom can’t do this alone. |
| Lori legal/contracts | Lori | Low | Lori leads, Tom reviews. Low Tom-hours. |
| Client willingness (free pilots) | AJ, Jen (networks) | Low | Warm leads exist from founder networks |
| Client willingness (paid conversion) | AJ, Lori | Medium | Free-to-paid conversion is unproven |
| Founder validation checkpoint | All founders | Low | Need solid evidence for go/no-go. M4 must be convincing. |

Operational Roles

| Role | Who | Scope |
| --- | --- | --- |
| Technical development | Tom | All engineering, all the time |
| Client relationships | AJ | Kickoffs, ongoing relationship, conversion conversations |
| Content quality | Jen | Intelligence layer review, brief quality, editorial standards |
| Operations | Lori | Legal, billing, contracts, operational processes |
| Account management | Shared | Lori ops, AJ relationships, Jen content quality. No dedicated AM in 2026. |
| Client kickoffs | AJ + Jen | Discovery, seed term workshop. Tom does technical config ONLY. |

Risk Register

| ID | Risk | Prob | Impact | Mitigation | Owner |
| --- | --- | --- | --- | --- | --- |
| R1 | LLM output quality — intelligence layer produces generic/wrong So What and Now What content | High | High | Jen/AJ quality review loop, prompt iteration, curated vertical-specific examples, quality scoring | Tom + Jen |
| R2 | Multi-tenancy scope creep — client data model expands beyond minimum viable | Med | High | Strict scope: 3–4 tables, no fancy auth, no billing integration. Time-box to 30h. | Tom |
| R3 | No clients by July — can’t find willing pilot clients | Low | High | AJ/Jen activating their networks. Warm leads exist. Free removes price objection. | AJ + Jen |
| R4 | Single developer bottleneck — Tom illness/burnout stops all engineering progress | High | Med | AI-assisted development (1.5x multiplier), well-documented codebase, sustainable pace (except March push) | Tom |
| R5 | Case studies don’t impress — retrospective analysis doesn’t show clear lead time | Med | Med | Cherry-pick strongest examples from 151 archaeology trends. Run live tracking as backup evidence. | Tom |
| R6 | Intelligence layer takes too long — prompt iteration absorbs more hours than budgeted | Med | High | Steal hours from polish/buffer, not from data quality. Accept “good enough” over perfect for v1. | Tom |
| R7 | Auth complexity — chosen auth approach is more complex than expected | Low | Med | Research task with time-box. Worst case: basic JWT + API key for v1. | Tom |
| R8 | Notification delivery failures — email delivery is unreliable | Low | Med | Build abstraction layer so channels are swappable. Monitor delivery rates. | Tom |
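R8's mitigation calls for an abstraction layer so delivery channels are swappable. A minimal sketch of that shape, where the interface and class names are illustrative assumptions rather than the planned implementation:

```python
# Illustrative sketch of R8's mitigation: a channel abstraction so email
# can later be swapped for Slack, webhooks, etc. Names are assumptions.
from abc import ABC, abstractmethod


class NotificationChannel(ABC):
    @abstractmethod
    def send(self, recipient: str, subject: str, body: str) -> bool:
        """Return True on confirmed delivery, False otherwise."""


class EmailChannel(NotificationChannel):
    def send(self, recipient: str, subject: str, body: str) -> bool:
        # A real implementation would call an email provider here.
        print(f"email to {recipient}: {subject}")
        return True


class Notifier:
    """Tries channels in order; falls through when delivery fails."""

    def __init__(self, channels: list[NotificationChannel]):
        self.channels = channels

    def notify(self, recipient: str, subject: str, body: str) -> bool:
        for channel in self.channels:
            if channel.send(recipient, subject, body):
                return True
        return False  # all channels failed; log for delivery-rate monitoring
```

The design choice the sketch encodes: callers depend on `Notifier`, never on a concrete channel, so swapping providers or adding a fallback channel touches one list, not the call sites.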

Risk Response Thresholds


Tier Structure Recap

Tier 1: Watch

Free Pilots → Entry Paid

F-01: Trend alerts (weekly intelligence via manual email)
F-02: Client-scoped dashboard
F-03: Cross-platform validation (H/W/D scoring) DONE
F-04: ‘So What’ context + lite ‘Now What’

Tier 2: Act

Paid Upsell

F-05: Client matching
F-06: ‘Now What’ activation (client-specific)
F-07: Content briefs (500-word structured)
F-08: Creator matching
F-09: Saturation alerts

Tier 3: Lead

Stretch / 2027

F-10: Trajectory modelling
F-11: Trend attribution
F-12: API access
F-13: White-label (2027)
F-14: Dedicated AM (N/A)
Tier 1 includes basic activation: Every trend gets vertical-level generic Now What suggestions as part of the So What context. This is NOT the full client-specific Now What (F-06) — that’s Tier 2.
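The Tier 1 → Tier 2 split above turns on client matching (F-05): generic vertical-level suggestions need no client profile, while client-specific activation does. A minimal sketch of what matching could look like, assuming seed terms come from the kickoff workshop; the profile fields, Jaccard-style scoring, and threshold are all illustrative assumptions:

```python
# Illustrative sketch of F-05 client matching: score how relevant a
# detected trend is to a client's seed terms from the kickoff workshop.
# The set-overlap scoring and the 0.2 threshold are assumptions.

def match_score(trend_terms: set[str], client_seed_terms: set[str]) -> float:
    """Overlap between a trend's terms and a client's seed terms (0.0–1.0)."""
    if not trend_terms or not client_seed_terms:
        return 0.0
    union = trend_terms | client_seed_terms
    return len(trend_terms & client_seed_terms) / len(union)


def matched_trends(trends: dict[str, set[str]],
                   client_seed_terms: set[str],
                   threshold: float = 0.2) -> list[str]:
    """Trend names relevant enough to feed the client-specific 'Now What'."""
    scored = [(name, match_score(terms, client_seed_terms))
              for name, terms in trends.items()]
    return [name for name, score in sorted(scored, key=lambda s: -s[1])
            if score >= threshold]
```

Even this crude overlap makes the dependency chain concrete: F-06 and F-07 consume the matched list, which is why both sit behind F-05 in the dependencies map.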

Deferred to 2027

| Item | Reason |
| --- | --- |
| White-label (F-13) | Requires multi-tenant maturity, brand customization infrastructure. Not viable in 2026. |
| Dedicated account manager (F-14) | Shared across founders in 2026. Hire when client count justifies. |
| Full predictive trajectory modelling | Basic pattern matching is enough for 2026. Full ML modelling requires more data + dedicated data science time. |
| Advanced auth (SSO, SAML, RBAC) | Minimal auth in 2026. Enterprise auth features when enterprise clients arrive. |
| Billing automation | Manual invoicing or basic Stripe for 2026. Automate when client count justifies. |

Open Research Tasks

These are explicitly unresolved and need dedicated research sessions before implementation:

| Task | Target Session | Decision Needed |
| --- | --- | --- |
| Auth approach | April (before May implementation) | Clerk vs. Auth0 vs. Supabase Auth vs. custom JWT |
| Pricing model | August (before September conversion) | Per-seat vs. per-report vs. tiered flat-rate |
| API rate limits and pricing | September (with API build) | Free tier limits, paid tier pricing |

Month-by-Month Summary

| Month | Hours | Theme | Key Deliverable |
| --- | --- | --- | --- |
| March | 80h | Case Studies + Intelligence Layer | Intelligence layer producing quality output, 3 case studies, data quality fixed |
| April | 64h | Intelligence Deep + Demo Ready | Intelligence quality bar met (Jen/AJ approved), notifications working, legal drafted |
| May | 64h | Multi-Tenancy + Pilot Prep | Client data model, auth, onboarding flow, M2 Demo Ready milestone |
| June | 64h | First Pilots | 2–3 free pilots launched, client matching starts, M3 First Pilots milestone |
| July | 64h | Prove Value + Checkpoint | Matching complete, Now What activation, content briefs start, M4 Validation Checkpoint |
| August | 64h | Tier 2 Core | Content briefs, creator matching, saturation alerts |
| September | 64h | Start Charging + Scale | Convert to paid, API + attribution start, M5 First Revenue |
| October | 64h | Tier 3 Foundations | API + attribution complete, trajectory modelling starts |
| November | 64h | Trajectory + Polish | Trajectory modelling, M6 Tier 2 Complete, data quality |
| December | 64h | Polish + Package | All tiers polished, 2027 planning, year-end review |

Revision History

| Date | Change | Author |
| --- | --- | --- |
| 2026-02-28 | Original draft from feature matrix analysis | Aria |
| 2026-03-01 | Complete rewrite following founder interview. Major changes: pilots deferred to June (free), March refocused on case studies + intelligence layer, multi-tenancy deferred to May, intelligence layer elevated to core product priority, legal/contracts added, pricing/revenue deferred to Q3/Q4, auth marked as open research task. | Aria |