Executive Summary
The honest assessment: 2026 is a validation year, not a revenue year. The business is funded. The goal is to prove the product works, build compelling case studies, run free pilots, and convert to paid by Q3/Q4. Revenue is an outcome of getting the product right, not the primary target.
The critical insight: The intelligence layer IS the product. Trend detection is table stakes — every trend getting a ‘So What’ (sector context), a lite ‘Now What’ (basic vertical-level activation suggestions), and generated narrative is what makes Rumblings worth paying for. If the LLM outputs are generic or wrong, nothing else matters.
The revised timeline: No pilots until June (free). No paid revenue until Q3/Q4. March was case studies + intelligence layer + data quality (all complete). April is V6 SOP quality + Tim onboarding. Multi-tenancy pulled forward to W16 via Tim Goerner (Augmentra, 2 days/week). This is a deliberate sequencing change — build the product that impresses, THEN worry about client infrastructure.
Hour Budget at a Glance
| Phase | Period | Hours | Focus |
| Phase 1: Build | March – May | 208h | Intelligence layer (V6 SOPs), /report skill, multi-tenancy (Tim), social research |
| Phase 2: Pilot | June – July | 128h | Free pilots, client matching, activation features |
| Phase 3: Tier 2 | August – September | 128h | Content briefs, creator matching, start charging |
| Phase 4: Tier 3 Stretch | October – December | 192h | API, attribution, trajectory modelling |
| Total | March – December | 656h | |
Note on March: Tom is pushing to 80h (20h/week) because March is critical. All other months are 64h (16h/week, 2 days).
What “Done” Looks Like
T1 Done
V6 reports generating quality intelligence + live demo + per-client homework process. That’s the definition of ready for pilots.
Pilots Done
2–3 clients receiving weekly intelligence reports (manual email delivery initially), providing feedback.
Year Done
Paid clients, validated ICP, clear Tier 2 upsell path, 2027 plan.
Milestones
| ID | Milestone | Target | Definition of Done |
| M1 | Pipeline + Scoring Complete | Done Feb 28 | Pipeline running, H/W/D scoring, dashboard operational |
| M2 | Demo Ready | May 31 | 3 case studies (retro + live), live demo, dashboard polished, intelligence layer producing quality output |
| M3 | First Pilots | Jun 30 | 2–3 free pilots launched, receiving weekly intelligence reports (manual email initially) |
| M4 | Validation Checkpoint | Jul 31 | Validation evidence from pilots, ICP clarity, go/no-go on scaling |
| M5 | First Revenue | Sep 30 | Convert free pilots to paid, scale decision made |
| M6 | Tier 2 Complete | Nov 30 | Client matching + activation + content briefs live for paying clients |
Feature Priority Order
This is the authoritative build sequence. Every feature depends on the ones above it.
| Priority | Feature | Target Period | Hours Est |
| 1 | Intelligence layer (So What + Now What lite + narratives) | March–April | ~55h |
| 2 | Data quality (Google Trends #1913, fuzzy dedup #1602, entity resolution #1706) | March | ~20h |
| 3 | Product polish (client-ready dashboard) | March–April | ~20h |
| 4 | Demo environment + pilot prep homework | March–April | ~10h |
| 5 | Pipeline observability (lightweight hooks) | April | ~4h |
| 6 | Legal/contracts (ToS, data agreements, privacy policy) | April | ~9h |
| 6.5 | Social Signal Validation Research | April–May (evenings) | ~15h |
| 7 | Multi-tenancy foundation | April–May (Tim W16+) | ~22h (Tim) |
| 8 | Minimal auth (API key + URL) — Tim W18 | May | ~4h (Tim) |
| 9 | Client onboarding (needs founder workshop for docs) | May–June | ~15h |
| 10 | Client matching (F-05) | June–July | ~40h |
| 11 | ‘Now What’ activation (F-06) | July | ~20h |
| 12 | Content briefs (F-07) | July–August | ~35h |
| 13 | Creator matching (F-08) | August | ~20h |
| 14 | Saturation alerts (F-09) | August | ~15h |
| 15 | API access (F-12) | September–October | ~45h |
| 16 | Trend attribution (F-11) | September–October | ~45h |
| 17 | Trajectory modelling (F-10) | October–November | ~45h |
Phase 1: Build (March – May) — 208h
March is the make-or-break month. Tom is pushing to 20h/week because this work directly determines whether the June pilot demo lands. Four parallel streams, all critical.
Sprint M3-1: Intelligence Layer Foundation + Data Quality (Weeks 1–2, ~40h)
Intelligence Layer — Core Architecture (~20h)
| Task | Hours | Dependencies | Notes |
| Design intelligence layer architecture | 4 | None | LLM pipeline: trend data in, So What + Now What lite + narrative out |
| Build ‘So What’ generation (sector context per trend) | 8 | Architecture | Every trend gets sector-specific context explaining WHY it matters |
| Build lite ‘Now What’ suggestions (basic vertical-level) | 5 | So What working | Generic activation suggestions by vertical, NOT client-specific |
| Narrative generation (trend story for reports) | 3 | So What + Now What | Combining signals into readable trend narrative |
Important: Lite ‘Now What’ is part of the Tier 1 product — basic vertical-level suggestions that ship with every trend. This is NOT the full client-specific ‘Now What’ activation (F-06), which is Tier 2. Don’t over-build here.
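To make the pipeline shape concrete, here is a minimal sketch, assuming a generic `llm_complete()` client and illustrative prompt wording (the real prompts are exactly what the April quality loop iterates on, so treat every string below as a placeholder):

```python
# Minimal sketch of the Tier 1 intelligence pipeline: trend in,
# So What + lite Now What + narrative out. llm_complete() and all
# prompt wording are hypothetical placeholders, not the shipped design.
from dataclasses import dataclass

@dataclass
class TrendIntelligence:
    so_what: str        # sector context: WHY this trend matters
    now_what_lite: str  # generic vertical-level activation suggestions
    narrative: str      # readable trend story for the weekly report

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("wire up the real LLM client here")

def generate_intelligence(term: str, vertical: str, signals: dict) -> TrendIntelligence:
    so_what = llm_complete(
        f"Explain why the trend '{term}' matters for the {vertical} sector, "
        f"given these signals: {signals}. Be specific, not generic."
    )
    now_what_lite = llm_complete(
        f"Suggest 2-3 vertical-level (NOT client-specific) activations for "
        f"{vertical} brands responding to '{term}'. Context: {so_what}"
    )
    narrative = llm_complete(
        f"Combine into a short, readable trend story:\n{so_what}\n{now_what_lite}"
    )
    return TrendIntelligence(so_what, now_what_lite, narrative)
```

The three calls run in sequence because each output feeds the next; that ordering mirrors the dependency column in the table above.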
Data Quality Fixes (~20h)
| Task | Hours | Dependencies | Notes |
| Google Trends enrichment fix (#1913) | 8 | None | Broken since Feb 11. 6,493 pending. Blocking Height V2 GA dual scoring. |
| Fuzzy dedup (#1602) | 6 | None | Duplicate terms polluting signal quality |
| Entity resolution (#1706) | 6 | None | Same entity appearing as different terms |
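For illustration, a standard-library sketch of the fuzzy dedup approach (#1602). The 0.9 similarity threshold and the normalisation rules are assumptions to tune against real term data, not the shipped values:

```python
# Illustrative fuzzy dedup: keep the first spelling of each
# near-duplicate cluster. Threshold and normalisation are assumed.
from difflib import SequenceMatcher

def normalise(term: str) -> str:
    # lowercase and collapse whitespace before comparing
    return " ".join(term.lower().split())

def dedup_terms(terms: list[str], threshold: float = 0.9) -> list[str]:
    kept: list[str] = []
    for term in terms:
        t = normalise(term)
        if all(SequenceMatcher(None, t, normalise(k)).ratio() < threshold for k in kept):
            kept.append(term)
    return kept

# dedup_terms(["quiet luxury", "Quiet  Luxury", "quiet luxury trends"])
# collapses the first two into one term; the third survives as distinct.
```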
Sprint M3-2: Case Studies + Polish (Weeks 3–4, ~40h)
Case Studies (~20h)
| Task | Hours | Dependencies | Notes |
| Select 3–5 retrospective trend candidates | 2 | None | Cherry-pick from 151 archaeology trends — strongest lead-time examples |
| Build retrospective case studies (3 minimum) | 10 | Candidate selection | Show: Rumblings detected X, Y weeks before mainstream. Timeline, signals, outcome. |
| Set up live tracking case studies | 5 | Intelligence layer | Pick 2–3 current emerging trends, start tracking with full intelligence layer |
| Write case study presentation materials | 3 | Retro cases done | Slides/docs for AJ and Jen to use in kickoffs |
T1 “Done” = 3 case studies + live demo. If we nail the retrospective cases and the live tracking shows signal, we’re ready.
Product Polish (~10h)
| Task | Hours | Dependencies | Notes |
| Dashboard client-readiness audit | 3 | None | What does a non-technical person see? Remove internal jargon, fix UX issues |
| Dashboard visual polish | 4 | Audit complete | Charts, layout, loading states, error messages |
| Demo flow preparation | 3 | Polish done | Guided demo path through dashboard showing intelligence layer value |
March Deliverables
- Intelligence layer producing So What + lite Now What + narrative for every scored trend — V5 shipped, V6 in progress
- Google Trends enrichment working again (completed 2026-02-25)
- Fuzzy dedup and entity resolution deployed — deployed W11
- 3 retrospective case studies written — V5 reports shipped W13; standalone case studies reframed as per-client homework (MECE refactor Apr 14)
- 2–3 live tracking case studies initiated
- Dashboard visually polished for external eyes
April is about quality. The intelligence layer exists from March — now it needs to be GOOD. Jen and AJ start reviewing outputs and providing feedback. Pipeline observability gets lightweight hooks. Tim Goerner (Augmentra) starts W16 on multi-tenancy. Email delivery is manual for pilots.
Sprint M4-1: Intelligence Refinement + Pipeline Quality (Weeks 1–2, ~32h)
Intelligence Layer Quality (~15h)
| Task | Hours | Dependencies | Notes |
| Jen/AJ quality review loop — first round | 3 | March intelligence layer | They review 20–30 trend outputs, flag generic/wrong/weak ones |
| Prompt iteration based on feedback | 6 | Review feedback | The hard work — making LLM outputs specific, insightful, actionable |
| Sector-specific tuning (curated examples per vertical) | 4 | Prompt iteration | Fashion trends need different So What framing than tech trends |
| Second review round + refinement | 2 | Tuning done | Iterate until quality bar is met |
This is the highest-risk work in the entire plan. If So What/Now What outputs are generic (“This trend is growing and brands should pay attention”), we fail. The outputs need to be specific, insightful, and show domain expertise. Budget extra time here if needed — steal from polish, not from this.
Pipeline & Data Quality (~10h)
| Task | Hours | Dependencies | Notes |
| GT enrichment all sources | 2 | None | Extend Google Trends enrichment across all collector sources |
| Intelligence edge cases | 4 | Core intelligence working | Handle insufficient data, low confidence, novel patterns |
| Narrative quality validation | 2 | Prompt iteration | Readability, tone, length consistency checks |
| Pipeline observability hooks | 2 | Core pipeline stable | Lightweight monitoring hooks for pipeline health |
Email delivery is manual for pilots. Automated delivery deferred to Tim’s WS2 or Phase 2.
Tim Goerner / Augmentra (~32h from W16)
| Task | Hours | Dependencies | Notes |
| Local dev setup | 4 | None | DB snapshot, Plotly Dash env, no VPS access |
| Multi-tenancy DDL | 14 | Dev setup done | Client tables, schema design, migration scripts |
| Client-scoped queries | 8 | DDL complete | Every intelligence query becomes client-aware |
| Validation UI backend | 4 | Queries working | Backend support for validation views |
| React portal proposal | 2 | DDL complete | Proposal only — decision deferred to WS2 |
Sprint M4-2: Demo Materials + Legal (Weeks 3–4, ~32h)
Intelligence Layer Continued (~10h)
| Task | Hours | Dependencies | Notes |
| Edge case handling (insufficient data, low confidence) | 4 | Core intelligence working | What does the system say when it doesn’t have enough signal? |
| Narrative quality polish | 3 | Prompt iteration | Readability, tone, length consistency |
| Performance optimization (batch processing for reports) | 3 | Core pipeline stable | Intelligence layer needs to process all scored trends efficiently |
Case Study Refinement + Demo Materials (~10h)
| Task | Hours | Dependencies | Notes |
| Refine retrospective case studies with intelligence layer outputs | 4 | Intelligence layer quality pass | Now case studies include actual So What / Now What content |
| Live tracking case study update | 2 | Ongoing since March | How are the live-tracked trends performing? Update narrative. |
| Build demo script for AJ/Jen | 2 | Dashboard polished | Step-by-step guided demo they can run independently |
| Demo environment setup | 2 | Demo script | Clean demo dataset, stable environment, no surprises |
Legal / Contracts (~9h)
| Task | Hours | Dependencies | Notes |
| Terms of Service draft | 3 | None | Lori leads, Tom reviews. Standard SaaS ToS. |
| Data agreements | 3 | None | What data do we collect, how do we use it, client data ownership |
| Privacy policy | 3 | None | Especially important given trend data sourcing |
Legal was missing from the original plan. Lori leads this work — Tom’s role is technical review (data handling, API terms) and sign-off. 9h is Tom’s time, not total effort.
April Deliverables
- Intelligence layer outputs reviewed and approved by Jen/AJ (quality bar met)
- Pipeline observability hooks deployed
- Demo script written and tested
- 3 case studies refined with intelligence layer content
- Legal documents drafted (ToS, data agreements, privacy policy)
- Tim WS1: Multi-tenancy tables + scoped queries
May is infrastructure month. Multi-tenancy gets built just-in-time for June pilots. Auth gets researched and minimally implemented. Onboarding flow gets created so AJ/Jen can run client kickoffs independently.
Sprint M5-1: Multi-Tenancy Foundation (Weeks 1–2, ~32h)
Multi-Tenancy (~30h)
| Task | Hours | Dependencies | Notes |
| Client data model design (tables, relationships) | 6 | None | Minimum viable: clients, client_verticals, client_terms, client_preferences |
| Client tables + migrations | 8 | Data model designed | PostgreSQL schema, migration scripts |
| Query layer updates (filter by client context) | 10 | Tables created | Every intelligence query becomes client-aware |
| Dashboard multi-tenant views | 6 | Query layer done | Client-scoped dashboard views (client sees only their trends) |
Scope discipline is critical here. Minimum viable client model = 3–4 tables. No fancy role-based access control, no billing integration, no usage tracking. Just “who is this client and what verticals/terms do they care about?” Multi-tenancy scope creep is R2 in the risk register.
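A hedged sketch of what that minimum viable model could look like; column choices are illustrative assumptions, and the real migration is Tim's WS1 deliverable applied through the existing migration tooling, not this snippet:

```python
# Illustrative DDL for the minimum viable client model (assumed columns).
MINIMUM_VIABLE_CLIENT_MODEL = """
CREATE TABLE IF NOT EXISTS clients (
    id         SERIAL PRIMARY KEY,
    name       TEXT NOT NULL,
    api_key    TEXT UNIQUE,              -- consumed by the minimal auth in M5-2
    created_at TIMESTAMPTZ DEFAULT now()
);

CREATE TABLE IF NOT EXISTS client_verticals (
    client_id INT  REFERENCES clients(id),
    vertical  TEXT NOT NULL,
    PRIMARY KEY (client_id, vertical)
);

CREATE TABLE IF NOT EXISTS client_terms (
    client_id INT  REFERENCES clients(id),
    term      TEXT NOT NULL,             -- seed terms from the kickoff workshop
    PRIMARY KEY (client_id, term)
);

CREATE TABLE IF NOT EXISTS client_preferences (
    client_id      INT PRIMARY KEY REFERENCES clients(id),
    delivery_email TEXT,
    report_day     TEXT DEFAULT 'monday' -- weekly intelligence send day
);
"""
```

Four tables, no roles, no billing: anything beyond this is the scope creep the note above warns about.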
Sprint M5-2: Auth + Pilot Prep (Weeks 3–4, ~32h)
Minimal Auth (API key + unique URL) — Tim W18 (~4h)
| Task | Hours | Dependencies | Notes |
| API key generation per client | 2 | Multi-tenancy done | Simple key-based auth, no login flow needed for pilots |
| Unique URL per client | 2 | API key working | Client accesses their scoped view via unique URL + key |
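A minimal sketch of the pilot auth mechanism; the helper names and URL shape are hypothetical, but the point stands on its own: almost no machinery is needed for pilots.

```python
# Pilot auth sketch (assumed names): a random API key per client plus
# a client-scoped URL. No login flow; the key travels in the emailed URL.
import secrets

def issue_api_key() -> str:
    # 32 bytes of URL-safe randomness; store alongside the client row
    return secrets.token_urlsafe(32)

def client_url(base: str, client_slug: str, api_key: str) -> str:
    return f"{base}/c/{client_slug}?key={api_key}"

# e.g. client_url("https://dash.example.com", "acme", issue_api_key())
```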
Tim Goerner / Augmentra WS2 — Report Infrastructure (conditional, June–July)
| Task | Hours | Dependencies | Notes |
| Report infrastructure backend | 10 | WS1 complete | Serving generated reports to client-scoped views |
| Manual email delivery tooling | 4 | Reports accessible | Lightweight tooling to support manual email sends during pilots |
| React portal evaluation | 4 | WS1 learnings | Evaluate only — build decision based on pilot feedback |
Pilot Preparation (~15h)
| Task | Hours | Dependencies | Notes |
| Client onboarding flow design | 4 | Multi-tenancy done | Discovery → seed terms → configuration → first delivery |
| Onboarding documentation for AJ/Jen | 4 | Flow designed | Step-by-step docs for the kickoff process (discovery, seed term workshop) they run independently |
| Technical onboarding automation | 5 | Documentation done | Script/tool to create client, set up terms, configure notifications |
| End-to-end pilot dry run | 2 | Everything above | Full pilot simulation: onboard fake client, deliver first intelligence report |
AJ/Jen run client kickoffs (discovery, seed term workshop). Tom does technical config only. This separation is critical for scaling — Tom can’t be in every client meeting.
May Deliverables
- Multi-tenancy foundation deployed (client tables, scoped queries, scoped dashboard)
- Minimal auth (API key + unique URL) implemented
- Client onboarding flow documented and tested
- AJ/Jen trained on kickoff process
- End-to-end pilot dry run completed successfully
- All M2 milestone criteria met (3 case studies + live demo + dashboard polished)
Phase 2: Pilot (June – July) — 128h
The moment of truth. Real clients (free) receiving real intelligence. Every assumption gets tested.
Sprint M6-1: Launch Pilots (Weeks 1–2, ~32h)
Multi-Tenancy Completion (~10h)
| Task | Hours | Dependencies | Notes |
| Multi-tenancy refinements from dry run | 5 | May dry run feedback | Edge cases, performance issues, missing fields |
| Client data seeding for pilot clients | 5 | Clients identified | AJ/Jen have run kickoffs, Tom configures technical setup |
First Pilots (~15h)
| Task | Hours | Dependencies | Notes |
| Pilot client 1: technical onboarding | 5 | Client kickoff done (AJ/Jen) | Configure terms, verticals, notification preferences |
| Pilot client 2: technical onboarding | 5 | Client kickoff done (AJ/Jen) | Same process, second client |
| Pilot client 3: technical onboarding (stretch) | 3 | If bandwidth allows | Third pilot if things go smoothly |
| First weekly intelligence delivery | 2 | Onboarding complete | Manual email delivery. Watch for issues. |
Bug Fixes + Iteration (~7h)
| Task | Hours | Dependencies | Notes |
| Day-1 bug fixes | 4 | Pilots launched | Things will break. Budget for it. |
| Delivery quality review | 3 | First delivery sent | Review what clients actually received. Quality check. |
Sprint M6-2: Client Matching Starts (Weeks 3–4, ~32h)
Client Matching (F-05) (~20h)
| Task | Hours | Dependencies | Notes |
| Client matching architecture | 4 | Multi-tenancy done | Match trends to specific client verticals, interests, brand positioning |
| Client-trend relevance scoring | 10 | Architecture designed | LLM-based: given client profile + trend, how relevant is this? |
| Client-matched report generation | 6 | Scoring working | Clients receive only trends relevant to THEM, ranked by relevance |
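To illustrate the LLM-based relevance scoring row above, a hedged sketch assuming a generic `llm_complete()` client and an illustrative JSON contract; both the prompt and the 60-point floor are assumptions to calibrate against pilot feedback:

```python
# F-05 sketch: an LLM judges trend fit against a client profile and
# returns a 0-100 score. Prompt wording and JSON contract are assumed.
import json

def llm_complete(prompt: str) -> str:
    raise NotImplementedError("wire up the real LLM client here")

def relevance_score(client_profile: dict, trend: dict) -> int:
    raw = llm_complete(
        "Rate 0-100 how relevant this trend is to this client. "
        'Reply as JSON: {"score": <int>, "reason": "<one sentence>"}\n'
        f"Client: {json.dumps(client_profile)}\nTrend: {json.dumps(trend)}"
    )
    return int(json.loads(raw)["score"])

def matched_trends(client_profile: dict, trends: list[dict], floor: int = 60) -> list[dict]:
    # score only trends that already passed H/W/D scoring, then rank
    scored = [(relevance_score(client_profile, t), t) for t in trends]
    scored.sort(key=lambda st: st[0], reverse=True)
    return [t for s, t in scored if s >= floor]
```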
Pilot Iteration (~12h)
| Task | Hours | Dependencies | Notes |
| Week 2 intelligence delivery | 2 | Ongoing | Deliver, review, iterate |
| Client feedback collection | 3 | Deliveries happening | What’s useful? What’s noise? What’s missing? |
| Feedback-driven improvements | 5 | Feedback received | Adjust intelligence layer, matching, delivery format |
| Week 3 intelligence delivery | 2 | Improvements made | Better delivery informed by feedback |
June Deliverables
- 2–3 free pilots launched and receiving weekly intelligence
- Intelligence delivered via manual email
- Client matching (F-05) architecture built and partially deployed
- First round of client feedback collected and acted on
- M3 milestone met (2–3 free pilots receiving weekly intelligence)
July is about deepening value for pilot clients and gathering the evidence needed for the validation checkpoint. Client matching gets completed, ‘Now What’ activation starts, content briefs begin.
Sprint M7-1: Matching + Activation (Weeks 1–2, ~32h)
Client Matching Completion (~20h)
| Task | Hours | Dependencies | Notes |
| Client matching refinement from pilot feedback | 8 | June feedback | Are we matching the right trends to the right clients? |
| Relevance scoring calibration | 6 | Feedback-driven | Tune the matching model based on what clients actually found useful |
| Matched delivery automation | 6 | Matching refined | Fully automated: pipeline scores trends → matches to clients → delivers |
‘Now What’ Activation (F-06) (~12h)
| Task | Hours | Dependencies | Notes |
| ‘Now What’ client-specific architecture | 3 | Client matching done | Given client profile + matched trend, generate specific activation suggestions |
| Client-specific activation generation | 6 | Architecture done | “Brand X should do Y because Z” — specific, actionable, client-aware |
| Integration with intelligence reports | 3 | Generation working | Now What appears in weekly intelligence alongside So What |
Tier 1 has lite ‘Now What’ (generic vertical-level). Tier 2 ‘Now What’ (F-06) is client-specific. The upgrade path should be obvious to clients: “See how we say ‘fashion brands should X’? Paid tier says ‘YOUR brand should X because of your positioning.’”
Sprint M7-2: Content Briefs Start + Checkpoint (Weeks 3–4, ~32h)
Content Brief Generation (F-07) Start (~15h)
| Task | Hours | Dependencies | Notes |
| Content brief template design | 3 | None | 500-word structured brief: angle, audience, key messages, format, timing, brand safety |
| Brief generation pipeline | 8 | Template designed | LLM generates briefs from trend data + client profile + So What / Now What context |
| Brief quality review (Jen) | 4 | First briefs generated | Jen reviews output quality — this is content strategy, her domain |
Content briefs are medium depth — 500-word structured brief, NOT a full content strategy. Structured sections: angle, target audience, key messages, recommended format, timing window, brand safety considerations.
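For illustration, that structure as a schema; the field names are assumptions that mirror the sections listed above, with the ~500 words spread across sections:

```python
# Illustrative F-07 brief schema and a plain-text renderer.
from dataclasses import dataclass, asdict

@dataclass
class ContentBrief:
    trend: str
    angle: str                 # the specific take, not the generic trend
    target_audience: str
    key_messages: list[str]    # 3-5 bullets
    recommended_format: str    # e.g. short-form video, newsletter, carousel
    timing_window: str         # act-by window from the trend's trajectory
    brand_safety: str          # risks, sensitivities, what to avoid

def render_brief(brief: ContentBrief) -> str:
    lines = [f"## Brief: {brief.trend}"]
    for field, value in asdict(brief).items():
        if field == "trend":
            continue
        body = "\n".join(f"- {m}" for m in value) if isinstance(value, list) else value
        lines.append(f"**{field.replace('_', ' ').title()}**\n{body}")
    return "\n\n".join(lines)
```

Keeping the schema this rigid is deliberate: it stops brief generation drifting into the full content strategy the note above rules out.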
Validation Checkpoint Prep (~8h)
| Task | Hours | Dependencies | Notes |
| Compile pilot results and metrics | 3 | July pilots running | What have we delivered? What feedback? Client engagement metrics? |
| Validation evidence document | 3 | Results compiled | Clear evidence of product-market fit (or gaps) |
| Checkpoint presentation | 2 | Evidence documented | M4 validation review: evidence, ICP clarity, go/no-go on scaling |
Ongoing Pilot Delivery (~9h)
| Task | Hours | Dependencies | Notes |
| Weekly intelligence deliveries (4 weeks) | 4 | Ongoing | Automated but monitored |
| Client feedback iteration | 3 | Ongoing | Continuous improvement loop |
| Bug fixes + operational issues | 2 | Ongoing | Things break, clients ask questions |
July Deliverables
- Client matching (F-05) fully deployed and calibrated
- ‘Now What’ activation (F-06) client-specific generation working
- Content briefs (F-07) pipeline started, first briefs reviewed by Jen
- M4 validation checkpoint completed with evidence
- Clear go/no-go decision on scaling
Phase 3: Tier 2 (August – September) — 128h
Assuming M4 checkpoint is a go, August builds out the Tier 2 features that differentiate the paid product.
Sprint M8-1: Content Briefs + Creator Matching (Weeks 1–2, ~32h)
Content Brief Completion (~20h)
| Task | Hours | Dependencies | Notes |
| Brief generation refinement from Jen feedback | 8 | July review | Quality tuning — briefs need to be genuinely useful, not generic |
| Multi-format brief support | 5 | Core briefs working | Social post angles vs. long-form vs. video concept briefs |
| Brief delivery integration | 4 | Briefs refined | Briefs included in weekly intelligence reports for Tier 2 clients |
| Automated brief quality scoring | 3 | Delivery working | Self-assessment: flag low-confidence briefs for human review |
Creator Matching (F-08) (~12h)
| Task | Hours | Dependencies | Notes |
| Creator signal extraction from existing data | 5 | None | Mine Bluesky, Substack, HN data for creator profiles |
| Creator-trend matching | 5 | Signal extraction done | Which creators are already talking about which trends? |
| Creator recommendation in reports | 2 | Matching working | “These creators are already engaging with this trend” |
Creator matching builds from EXISTING signal data. We do NOT build a separate creator database. We already have Bluesky engagement data, Substack author data, HN poster data. Mine it. This keeps scope manageable and the data fresh.
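A sketch of the mining approach, assuming a hypothetical flattened signal-row shape standing in for the real schema:

```python
# F-08 sketch: rank creators already engaging with a trend, straight
# from existing signal rows. The row shape is an assumed stand-in.
from collections import Counter

def creators_for_trend(signal_rows: list[dict], term: str, top_n: int = 5) -> list[tuple[str, int]]:
    """signal_rows: dicts like {'author': ..., 'source': 'bluesky', 'term': ...}."""
    counts = Counter(
        (row["author"], row["source"])
        for row in signal_rows
        if row["term"] == term and row.get("author")
    )
    # creators ranked by how often they engaged with this trend
    return [(f"{author} ({source})", n) for (author, source), n in counts.most_common(top_n)]
```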
Sprint M8-2: Saturation Alerts + Iteration (Weeks 3–4, ~32h)
Saturation Alerts (F-09) (~15h)
| Task | Hours | Dependencies | Notes |
| Saturation signal detection | 6 | Trend scoring pipeline | When does a trend go from “emerging” to “saturated”? Velocity decay, mainstream media pickup. |
| Saturation alert generation | 5 | Detection working | “Trend X is approaching saturation — act now or deprioritize” |
| Alert delivery integration | 4 | Notification system | Saturation alerts via email, separate from weekly report |
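To make “velocity decay” concrete, a toy heuristic; the 0.5 decay ratio and 3-week window are assumptions to tune on real trend data, and mainstream-media pickup would be a second input alongside it:

```python
# Toy F-09 saturation heuristic: flag a trend whose weekly signal has
# fallen well below its peak and keeps declining. Parameters assumed.
def is_saturating(weekly_counts: list[int], decay_ratio: float = 0.5, window: int = 3) -> bool:
    if len(weekly_counts) < window + 1:
        return False
    peak = max(weekly_counts[:-window])
    recent = weekly_counts[-window:]
    below_peak = all(c <= peak * decay_ratio for c in recent)
    declining = all(recent[i] >= recent[i + 1] for i in range(len(recent) - 1))
    return below_peak and declining

# e.g. is_saturating([5, 20, 80, 120, 50, 40, 30]) -> True
```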
Ongoing Pilot Delivery + Iteration (~9h)
| Task | Hours | Dependencies | Notes |
| Weekly deliveries (4 weeks) | 4 | Ongoing | Now includes matching, activation, briefs for Tier 2 pilots |
| Client feedback + iteration | 3 | Ongoing | Are Tier 2 features actually valued? |
| Bug fixes + operational | 2 | Ongoing | Steady-state operations |
August Deliverables
- Content briefs (F-07) fully deployed (500-word structured briefs)
- Creator matching (F-08) live from existing signal data
- Saturation alerts (F-09) deployed
- Tier 2 feature suite available for pilot upsell conversations
September is the conversion month. Free pilots become paid clients (or they don’t, and that’s important information too). Technical work shifts to Tier 3 foundations.
Sprint M9-1: Conversion + API (Weeks 1–2, ~32h)
Pilot Conversion (Sales/Ops — Low Tom Hours) (~5h)
| Task | Hours | Dependencies | Notes |
| Technical support for conversion conversations | 3 | AJ/Jen lead sales | Tom provides technical answers, pricing support, contract configuration |
| Billing/subscription setup | 2 | Conversion happening | Minimal billing integration (Stripe or manual invoicing for v1) |
Conversion is sales/ops work, not engineering. AJ runs relationships, Lori handles ops, Jen handles content quality positioning. Tom supports technically.
API Access (F-12) (~25h)
| Task | Hours | Dependencies | Notes |
| REST API design | 4 | Multi-tenancy stable | Authenticated API for agencies building on Rumblings data |
| API implementation (trend data + intelligence) | 12 | Design complete | Core endpoints: trends, intelligence, matching, briefs |
| Authentication + rate limiting | 5 | API implemented | API key auth, per-client rate limits |
| API documentation | 4 | API working | Developer docs for agency clients |
API access is for agencies building on top of Rumblings data. Authenticated REST API, not a public data dump. This opens the “platform” revenue stream alongside the “product” revenue stream.
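A hedged sketch of that surface, using Flask purely for illustration; the route names, the key-lookup helper, and the in-memory rate limiter are all assumptions, not the shipped design:

```python
# F-12 sketch: key-authenticated, client-scoped, rate-limited reads.
import time
from flask import Flask, jsonify, request, abort

app = Flask(__name__)
RATE_LIMIT = 60                     # requests per minute per key (assumed)
_hits: dict[str, list[float]] = {}  # in-memory limiter; fine for a sketch only

def client_for_key(key: str):
    raise NotImplementedError("look up the client row by api_key")

def authed_client():
    key = request.headers.get("X-API-Key") or abort(401)
    now = time.time()
    recent = [t for t in _hits.get(key, []) if now - t < 60]
    if len(recent) >= RATE_LIMIT:
        abort(429)
    _hits[key] = recent + [now]
    return client_for_key(key) or abort(403)

@app.get("/v1/trends")
def trends():
    client = authed_client()
    # return only trends matched to this client's verticals/terms
    return jsonify({"client": client["name"], "trends": []})
```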
Sprint M9-2: Attribution + Scale Decision (Weeks 3–4, ~32h)
Trend Attribution (F-11) (~25h)
| Task | Hours | Dependencies | Notes |
| Attribution model design | 5 | Historical data available | “Rumblings detected this trend X days before mainstream media” — provable lead time |
| Attribution data pipeline | 10 | Model designed | Compare Rumblings detection timestamps vs. mainstream media coverage timestamps |
| Attribution reports for clients | 6 | Pipeline working | “Here’s proof we gave you early signal” — retention and upsell tool |
| Attribution dashboard view | 4 | Reports working | Visual timeline: our signal vs. mainstream coverage |
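The core attribution computation is small; a sketch, assuming detection and mainstream-coverage timestamps have already been joined by the pipeline above:

```python
# F-11 sketch: provable lead time in days between Rumblings' first
# detection and the first mainstream mention. Timestamp join assumed.
from datetime import datetime

def lead_time_days(detected_at: datetime, mainstream_at: datetime | None) -> int | None:
    if mainstream_at is None:
        return None  # still ahead of mainstream coverage
    return (mainstream_at - detected_at).days

# e.g. lead_time_days(datetime(2026, 3, 2), datetime(2026, 4, 13)) -> 42
```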
M5 Scale Decision (~7h)
| Task | Hours | Dependencies | Notes |
| Compile all pilot + conversion data | 3 | September data available | Revenue, retention, feedback, usage metrics |
| Scale decision document | 2 | Data compiled | Honest assessment: is this working? What needs to change? |
| 2027 preliminary planning | 2 | Decision made | Based on M5 outcome: scale up, pivot, or adjust |
September Deliverables
- Free pilots converted to paid (or clear lessons on why not)
- M5 Scale Decision checkpoint completed
- API access (F-12) implemented and documented
- Trend attribution (F-11) pipeline deployed
- Clear evidence for or against scaling
Phase 4: Tier 3 Stretch (October – December) — 192h
Sprint M10-1: API + Attribution Completion (Weeks 1–2, ~32h)
API Completion (~20h)
| Task | Hours | Dependencies | Notes |
| API refinement from early adopter feedback | 8 | Sep API launch | What endpoints are agencies actually using? What’s missing? |
| API v1.1 improvements | 8 | Feedback collected | Add missing endpoints, improve response formats |
| API monitoring + analytics | 4 | API stable | Usage tracking, error rates, latency monitoring |
Attribution Completion (~12h)
| Task | Hours | Dependencies | Notes |
| Attribution refinement | 6 | Sep attribution launch | Improve accuracy, expand mainstream media comparison sources |
| Client-facing attribution reports | 6 | Refinement done | Automated monthly attribution summaries per client |
Sprint M10-2: Trajectory Modelling Starts (Weeks 3–4, ~32h)
Trajectory Modelling (F-10) — Basic Version (~20h)
| Task | Hours | Dependencies | Notes |
| Pattern matching architecture | 5 | 151 archaeology trends available | Match current trends against historical trajectory patterns |
| Historical pattern library | 8 | Architecture designed | Classify 151 archaeology trends into trajectory archetypes (flash, slow burn, seasonal, etc.) |
| Basic trajectory prediction | 7 | Pattern library built | “This trend’s signal pattern looks like [archetype], which typically [outcome]” |
This is pattern matching, NOT full predictive modelling. We have 151 archived trends from trend archaeology. Match current trend signal patterns against historical ones: “This trend is following the same signal pattern as X, which peaked in Y weeks and ended with outcome Z.” Good enough beats perfect. The sketch below illustrates the matching approach.
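A minimal sketch under those constraints, assuming a fixed-length resampling scheme and a small archetype library; both are assumptions, and the real version classifies the 151 archaeology trends into that library first:

```python
# F-10 sketch: normalise a trend's weekly signal series and find the
# nearest historical archetype by Euclidean distance. Scheme assumed.
def normalise(series: list[float], points: int = 12) -> list[float]:
    # resample to a fixed length, then scale 0-1
    step = (len(series) - 1) / (points - 1)
    resampled = [series[round(i * step)] for i in range(points)]
    lo, hi = min(resampled), max(resampled)
    return [(v - lo) / (hi - lo or 1) for v in resampled]

def nearest_archetype(series: list[float], library: dict[str, list[float]]) -> tuple[str, float]:
    target = normalise(series)
    def dist(name: str) -> float:
        ref = normalise(library[name])
        return sum((a - b) ** 2 for a, b in zip(target, ref)) ** 0.5
    best = min(library, key=dist)
    return best, dist(best)

# e.g. library = {"flash": [...], "slow burn": [...], "seasonal": [...]}
# a small distance to "flash" reads as "looks like a flash trend"
```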
Sprint M11-1: Trajectory Refinement (Weeks 1–2, ~32h)
Trajectory Modelling Refinement (~25h)
| Task | Hours | Dependencies | Notes |
| Trajectory accuracy validation | 8 | Basic model working | Back-test: how well do pattern matches predict actual outcomes? |
| Confidence scoring | 5 | Validation done | “High confidence (similar to 12 historical trends)” vs. “Low confidence (novel pattern)” |
| Trajectory visualization | 6 | Scoring working | Dashboard view showing predicted trajectory with confidence bands |
| Integration with intelligence reports | 6 | Visualization done | Trajectory predictions included in weekly intelligence for Tier 3 clients |
Client-Driven Improvements (~7h)
| Task | Hours | Dependencies | Notes |
| Client feature requests triage | 3 | Ongoing feedback | What are clients actually asking for? |
| High-priority client improvements | 4 | Triage done | Build what clients need most |
Sprint M11-2: Data Quality + Polish (Weeks 3–4, ~32h)
Data Quality Improvements (~10h)
| Task | Hours | Dependencies | Notes |
| Collector health improvements | 4 | Ongoing monitoring | Fix any degraded collectors |
| Signal quality enhancements | 4 | Ongoing monitoring | Improve scoring accuracy based on months of real data |
| Pipeline performance optimization | 2 | Ongoing | Speed, reliability, cost optimization |
M6 Tier 2 Complete Milestone (~4h)
| Task | Hours | Dependencies | Notes |
| Tier 2 feature audit | 2 | All Tier 2 features deployed | Are all Tier 2 features production-quality? |
| M6 milestone documentation | 2 | Audit done | Formal sign-off: matching + activation + briefs live for paying clients |
November Deliverables
- Trajectory modelling (F-10) deployed with confidence scoring
- M6 Tier 2 Complete milestone met
- All data quality issues addressed
- Client-driven improvements shipped
Sprint M12-1: Final Polish (Weeks 1–2, ~32h)
Tier 1 Final Polish (~15h)
| Task | Hours | Dependencies | Notes |
| Intelligence layer quality audit | 5 | Full year of data | Review output quality after months of iteration |
| Notification system reliability | 3 | Months of delivery | Fix any delivery reliability issues |
| Dashboard UX improvements | 4 | Client feedback | Polish based on 6 months of actual usage |
| Documentation update | 3 | All above | Ensure all documentation reflects current state |
Tier 2 Final Polish (~15h)
| Task | Hours | Dependencies | Notes |
| Client matching accuracy review | 4 | Months of matching data | How accurate is matching? What can improve? |
| Content brief quality review | 4 | Months of briefs | Brief quality over time — improving or degrading? |
| Creator matching refresh | 4 | Months of signal data | Update creator profiles with latest data |
| Feature integration audit | 3 | All Tier 2 features | Do all Tier 2 features work well together? |
Sprint M12-2: Package + Plan (Weeks 3–4, ~32h)
Tier 3 Preview Packaging (~15h)
| Task | Hours | Dependencies | Notes |
| API access packaging | 5 | API stable | Developer portal, onboarding flow, pricing |
| Attribution product packaging | 5 | Attribution stable | How do we sell/present attribution as a feature? |
| Trajectory modelling preview | 5 | Model stable | Preview packaging for Tier 3 upsell conversations |
Year-End Review + 2027 Planning (~19h)
| Task | Hours | Dependencies | Notes |
| 2026 year-in-review document | 5 | Full year data | What worked, what didn’t, key metrics, lessons |
| Client feedback synthesis | 4 | All client feedback | Aggregate themes across all pilot and paid clients |
| 2027 roadmap draft | 5 | Review + feedback | Where does the product go next? White-label (F-13)? Scale? |
| Technical debt assessment | 3 | Full codebase review | What needs refactoring before 2027 scale? |
| Founder alignment session prep | 2 | All above | Materials for end-of-year founder planning session |
December Deliverables
- All three tiers polished and production-quality
- Tier 3 packaged for 2027 sales conversations
- 2026 review completed
- 2027 roadmap drafted
- Technical debt documented and prioritized
Critical Path
The critical path is the sequence of work where a delay in any item delays the entire plan. Everything else has slack.
Intelligence Layer (Mar)
→ Intelligence Quality (Apr)
→ Case Studies Refined (Apr)
→ Demo Ready [M2] (May)
→ Multi-Tenancy (May)
→ Pilot Onboarding (May–Jun)
→ First Pilots [M3] (Jun)
→ Client Feedback (Jun–Jul)
→ Validation Checkpoint [M4] (Jul)
→ Conversion (Sep)
→ First Revenue [M5] (Sep)
The critical dependency: Intelligence layer quality. If So What / Now What outputs aren’t good by end of April, case studies don’t impress, demo doesn’t land, pilots don’t convert. Everything flows from output quality.
Off the critical path (can slip without delaying milestones):
- Data quality fixes (important but independent)
- Legal/contracts (Lori-led, parallel track)
- Dashboard polish (nice-to-have, not blocking)
- Tier 3 features (stretch goals, independent timeline)
Dependencies Map
Internal Dependencies
| Feature | Depends On | Notes |
| Intelligence layer | Scored trend data (done) | H/W/D scoring pipeline is M1-complete |
| Case studies | Intelligence layer | Need So What / Now What in case study output |
| Notification system | None (standalone) | But needs intelligence layer content to be useful |
| Multi-tenancy | None (standalone) | But benefits from notification system being ready |
| Auth | Multi-tenancy | Needs client model to scope access |
| Client matching (F-05) | Multi-tenancy | Needs client profiles to match against |
| ‘Now What’ activation (F-06) | Client matching | Needs matched trends to generate client-specific actions |
| Content briefs (F-07) | Client matching | Needs client context for relevant briefs |
| Creator matching (F-08) | Signal data (existing) | Mines existing Bluesky/Substack/HN data |
| Saturation alerts (F-09) | Trend scoring pipeline | Detects velocity decay in scored trends |
| Trajectory modelling (F-10) | 151 archaeology trends | Pattern matching against historical data |
| Trend attribution (F-11) | Historical detection data | Compares our timestamps vs. mainstream |
| API access (F-12) | Multi-tenancy + auth | Needs client scoping and authentication |
External Dependencies
| Dependency | Owner | Risk | Notes |
| Jen/AJ quality review | Jen, AJ | Medium | Need their availability in April for intelligence layer review |
| AJ/Jen pilot kickoffs | AJ, Jen | Medium | They run discovery + seed term workshops. Tom can’t do this alone. |
| Lori legal/contracts | Lori | Low | Lori leads, Tom reviews. Low Tom-hours. |
| Client willingness (free pilots) | AJ, Jen (networks) | Low | Warm leads exist from founder networks |
| Client willingness (paid conversion) | AJ, Lori | Medium | Free-to-paid conversion is unproven |
| Founder validation checkpoint | All founders | Low | Need solid evidence for go/no-go. M4 must be convincing. |
Operational Roles
| Role | Who | Scope |
| Technical development | Tom | All engineering, all the time |
| Client relationships | AJ | Kickoffs, ongoing relationship, conversion conversations |
| Content quality | Jen | Intelligence layer review, brief quality, editorial standards |
| Operations | Lori | Legal, billing, contracts, operational processes |
| Account management | Shared | Lori ops, AJ relationships, Jen content quality. No dedicated AM in 2026. |
| Client kickoffs | AJ + Jen | Discovery, seed term workshop. Tom does technical config ONLY. |
Risk Register
| ID | Risk | Prob | Impact | Mitigation | Owner |
| R1 | LLM output quality — Intelligence layer produces generic/wrong So What and Now What content | HIGH | HIGH | Jen/AJ quality review loop, prompt iteration, curated vertical-specific examples, quality scoring | Tom + Jen |
| R2 | Multi-tenancy scope creep — Client data model expands beyond minimum viable | MED | HIGH | Strict scope: 3–4 tables, no fancy auth, no billing integration. Time-box to 30h. | Tom |
| R3 | No clients by July — Can’t find willing pilot clients | LOW | HIGH | AJ/Jen activating their networks. Warm leads exist. Free removes price objection. | AJ + Jen |
| R4 | Single developer bottleneck — Tom illness/burnout stops all engineering progress | HIGH | MED | AI-assisted development (1.5x multiplier), well-documented codebase, sustainable pace (except March push) | Tom |
| R5 | Case studies don’t impress — Retrospective analysis doesn’t show clear lead time | MED | MED | Cherry-pick strongest examples from 151 archaeology trends. Run live tracking as backup evidence. | Tom |
| R6 | Intelligence layer takes too long — Prompt iteration absorbs more hours than budgeted | MED | HIGH | Steal hours from polish/buffer, not from data quality. Accept “good enough” over perfect for v1. | Tom |
| R7 | Auth complexity — Chosen auth approach is more complex than expected | LOW | MED | Research task with time-box. Worst case: basic JWT + API key for v1. | Tom |
| R8 | Notification delivery failures — Email delivery is unreliable | LOW | MED | Build abstraction layer so channels are swappable. Monitor delivery rates. | Tom |
Risk Response Thresholds
- If R1 (LLM quality) materializes by end of April: Defer pilot timeline by 1 month. Quality cannot be compromised.
- If R2 (multi-tenancy creep) materializes: Cut scope to 2 tables (clients + client_terms). Add complexity later.
- If R4 (burnout) materializes: Pause Tier 3 entirely. Focus on maintaining Tier 1 + 2 delivery.
- If R6 (intelligence hours) materializes: Borrow from May buffer and dashboard polish. Intelligence layer quality is non-negotiable.
Tier Structure Recap
Tier 1: Watch
Free Pilots → Entry Paid
| F-01 | Trend alerts (weekly intelligence via manual email) |
| F-02 | Client-scoped dashboard |
| F-03 | Cross-platform validation (H/W/D scoring) DONE |
| F-04 | ‘So What’ context + lite ‘Now What’ |
Tier 2: Act
Paid Upsell
| F-05 | Client matching |
| F-06 | ‘Now What’ activation (client-specific) |
| F-07 | Content briefs (500-word structured) |
| F-08 | Creator matching |
| F-09 | Saturation alerts |
Tier 3: Lead
Stretch / 2027
| F-10 | Trajectory modelling |
| F-11 | Trend attribution |
| F-12 | API access |
| F-13 | White-label (2027) |
| F-14 | Dedicated AM (N/A) |
Tier 1 includes basic activation: Every trend gets vertical-level generic Now What suggestions as part of the So What context. This is NOT the full client-specific Now What (F-06) — that’s Tier 2.
Deferred to 2027
| Item | Reason |
| White-label (F-13) | Requires multi-tenant maturity, brand customization infrastructure. Not viable in 2026. |
| Dedicated account manager (F-14) | Shared across founders in 2026. Hire when client count justifies. |
| Full predictive trajectory modelling | Basic pattern matching is enough for 2026. Full ML modelling requires more data + dedicated data science time. |
| Advanced auth (SSO, SAML, RBAC) | Minimal auth in 2026. Enterprise auth features when enterprise clients arrive. |
| Billing automation | Manual invoicing or basic Stripe for 2026. Automate when client count justifies. |
Open Research Tasks
These are explicitly unresolved and need dedicated research sessions before implementation:
| Task | Target Session | Decision Needed |
| Auth approach | April (before May implementation) | Clerk vs. Auth0 vs. Supabase Auth vs. custom JWT |
| Pricing model | August (before September conversion) | Per-seat vs. per-report vs. tiered flat-rate |
| API rate limits and pricing | September (with API build) | Free tier limits, paid tier pricing |
Month-by-Month Summary
| Month | Hours | Theme | Key Deliverable |
| March | 80h | Case Studies + Intelligence Layer | Intelligence layer producing quality output, 3 case studies, data quality fixed |
| April | 64h | Intelligence Deep + Demo Ready | Intelligence quality bar met (Jen/AJ approved), observability hooks deployed, legal drafted |
| May | 64h | Multi-Tenancy + Pilot Prep | Client data model, auth, onboarding flow, M2 Demo Ready milestone |
| June | 64h | First Pilots | 2–3 free pilots launched, client matching starts, M3 First Pilots milestone |
| July | 64h | Prove Value + Checkpoint | Matching complete, Now What activation, content briefs start, M4 Validation Checkpoint |
| August | 64h | Tier 2 Core | Content briefs, creator matching, saturation alerts |
| September | 64h | Start Charging + Scale | Convert to paid, API + attribution start, M5 First Revenue |
| October | 64h | Tier 3 Foundations | API + attribution complete, trajectory modelling starts |
| November | 64h | Trajectory + Polish | Trajectory modelling, M6 Tier 2 Complete, data quality |
| December | 64h | Polish + Package | All tiers polished, 2027 planning, year-end review |
Revision History
| Date | Change | Author |
| 2026-02-28 | Original draft from feature matrix analysis | Aria |
| 2026-03-01 | Complete rewrite following founder interview. Major changes: pilots deferred to June (free), March refocused on case studies + intelligence layer, multi-tenancy deferred to May, intelligence layer elevated to core product priority, legal/contracts added, pricing/revenue deferred to Q3/Q4, auth marked as open research task. | Aria |