Book 2: Resource Dynamics

Finding and Acquiring Resources

Book 2, Chapter 2: Foraging Optimization

Opening: The Starling That Knew When to Leave

A European starling lands on a blackberry bush in late summer. Forty-seven berries hang from the canes - some plump and dark purple, easy targets, others small and half-hidden under leaves. The starling begins: peck, swallow, peck, swallow. Four seconds per berry. A mechanical rhythm.

After the twelfth berry - 48 seconds into foraging - the starling pauses. The next berry is smaller, tucked under a leaf, requiring a longer search. The bird tilts its head. Then launches into flight, wings cutting through air toward the next bush 200 feet away. Thirty-five berries remain, unpicked.

Why?

The answer is one of the most elegant mathematical principles in biology: the marginal value theorem. The starling doesn't leave because the bush is empty. It leaves because the next berry at the next bush will deliver more calories per second than staying at the current bush.

Here's the math:

Current bush: 35 berries remain, but picking rate is slowing (easiest berries already eaten). Next berry takes 6 seconds.

Next bush: 200 feet away (5 seconds flight + 35 seconds searching for a productive patch = 40 seconds total), but fresh berries picked at 3 seconds each.

Calculation: Stay here = 6 seconds for the next berry. Fly to next bush = 40 seconds travel + 3 seconds per berry = 7 seconds per berry, averaged over the first 10 berries at the new bush.

The starling should stay, right? 6 < 7.

Wrong.

The starling is optimizing over time, not per bush.

If it stays for all 35 remaining berries (at intervals that stretch from 6 toward 9 seconds as picking gets harder - roughly 7.2 seconds per berry on average):

  • Time: 252 seconds total
  • Berries: 35
  • Rate: 0.14 berries/second

If it leaves after 12 berries and flies to a fresh bush:

  • Time: 118 seconds (48 here + 40 travel + 30 at next bush picking 10 berries)
  • Berries: 22 total
  • Rate: 0.19 berries/second

The starling that leaves early collects more berries per unit time. Over a full day of foraging, that's the difference between survival and starvation.
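
If you want to check the arithmetic, here is a minimal sketch of the two rates, using only the illustrative numbers from the story above:

```python
# Stay: eat the remaining 35 berries as picking slows (~7.2 s per berry on average).
stay_berries = 35
stay_seconds = 252
stay_rate = stay_berries / stay_seconds          # ~0.14 berries/second

# Leave: 12 berries already eaten here, then 40 s of travel and search,
# then 10 fresh berries at 3 s each at the next bush.
leave_berries = 12 + 10
leave_seconds = 48 + 40 + 30
leave_rate = leave_berries / leave_seconds       # ~0.19 berries/second

print(f"stay:  {stay_rate:.2f} berries/s")
print(f"leave: {leave_rate:.2f} berries/s")
```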

The marginal value theorem: Leave a resource patch when the marginal return (next unit of value) falls below the average return available across the environment.

The starling doesn't know calculus. But evolution has wired its brain with the optimal algorithm: when picking slows down, leave - even if food remains.

Most companies fail this test. Blockbuster stayed at depleted DVD rental patches because "there's still revenue here" - $6B in 2004. By 2010, Netflix had flown to streaming, and Blockbuster was bankrupt. The berries were still there. The opportunity had moved.

Companies stay at depleted resource patches (dying markets, low-margin customers, legacy products) because they optimize per customer, not across their entire market. They don't know when to leave the bush.

This chapter is about foraging optimization - how organisms and companies decide where to search, when to stay, and when to leave. Because in biology and business, the most important decision isn't finding resources. It's knowing when to abandon them.


Part 1: The Biology of Foraging

Optimal Foraging Theory

Foraging is the search for food. Every organism forages: lions hunt, bees gather nectar, bacteria swim toward glucose.

Foraging costs energy (search, pursuit, handling) and delivers energy (calories consumed). The difference determines survival.

Optimal foraging theory (developed by ecologists in the 1960s-70s) predicts how organisms should forage to maximize energy intake per unit time. The theory makes specific predictions:

Prediction 1: Diet Selection

Organisms should include a food item in their diet if:

Energy gained from item / (Search time + Handling time) > Current average energy intake rate

Translation: Eat it if it's better than your current average, ignore it if it's worse.

Example: Shore crabs eating mussels

Shore crabs on beaches can choose three mussel sizes:

  • Large mussels: 20 calories, 60 seconds to crack open
  • Medium mussels: 12 calories, 30 seconds to crack
  • Small mussels: 5 calories, 20 seconds to crack

If large mussels are abundant (search time = 10 seconds):

  • Large: 20 cal / (10 search + 60 handling) = 0.29 cal/sec
  • Medium: 12 cal / (10 + 30) = 0.30 cal/sec
  • Small: 5 cal / (10 + 20) = 0.17 cal/sec

Optimal strategy: Eat large and medium, ignore small. Small mussels aren't worth the handling time.

But if large mussels are rare (search time for a large mussel rises to 120 seconds, while medium now take 40 seconds and small 20 seconds to find):

  • Large: 20 / (120 + 60) = 0.11 cal/sec
  • Medium: 12 / (40 + 30) = 0.17 cal/sec
  • Small: 5 / (20 + 20) = 0.13 cal/sec

Optimal strategy: Eat medium only, skip large (too much search time) and small (too much handling time).

The crab doesn't change preferences - it changes strategy based on abundance. When resources are scarce, be selective. When abundant, broaden the diet. The optimal strategy is context-dependent.
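
Here is a minimal sketch of that arithmetic in code, using the illustrative numbers above - one scenario with a shared 10-second search time, one with size-specific search times once large mussels become rare:

```python
def intake_rate(calories, search_s, handling_s):
    """Calories gained per second of search plus handling."""
    return calories / (search_s + handling_s)

mussels = {"large": (20, 60), "medium": (12, 30), "small": (5, 20)}  # (calories, handling s)

# Scenario 1: large mussels abundant -- every size takes ~10 s to find.
for name, (cal, handling) in mussels.items():
    print(f"abundant {name:6s} {intake_rate(cal, 10, handling):.2f} cal/s")   # 0.29, 0.30, 0.17

# Scenario 2: large mussels rare -- size-specific search times (120 / 40 / 20 s).
search = {"large": 120, "medium": 40, "small": 20}
for name, (cal, handling) in mussels.items():
    print(f"scarce   {name:6s} {intake_rate(cal, search[name], handling):.2f} cal/s")
    # 0.11, 0.17, 0.12 (the text rounds 0.125 up to 0.13) -- medium wins
```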

Prediction 2: Patch Residence Time (Marginal Value Theorem)

This is the starling's algorithm: when should you leave a resource patch?

A patch is a cluster of resources - the starling's berry bush, the crab's mussel bed, the bee's flower. In business, patches are customer segments, geographic markets, or product categories.

The theorem was formalized by Eric Charnov in 1976. The math is elegant:

Leave a patch when: Marginal rate of gain in the current patch = Average rate of gain across the environment (travel time included)

In practice:

  • Stay while returns are high (initial berries picked quickly)
  • Leave when returns drop below environmental average (picking slows down)
  • Travel to next patch (incur travel cost)
  • Repeat

The counter-intuitive insight: You should leave while food still remains. Staying until empty wastes time. The opportunity cost of staying (missing better patches) exceeds the value of remaining resources.

Example: Bumblebees foraging on flowers

  • First visit to flower: 10 nectar units, 5 seconds
  • Second visit to same flower: 4 nectar units, 5 seconds (nectar partially replenished)
  • Third visit: 1 nectar unit, 5 seconds
  • Travel time between flowers: 3 seconds

Thorough strategy (3 visits per flower):

  • Gain: 10 + 4 + 1 = 15 units per flower
  • Time: 5 + 5 + 5 + 3 (travel to next) = 18 seconds per flower
  • Rate: 15/18 = 0.83 units/sec

Skimming strategy (1 visit per flower):

  • Gain: 10 units per flower
  • Time: 5 + 3 (travel) = 8 seconds per flower
  • Rate: 10/8 = 1.25 units/sec

The skimming strategy is 50% more efficient. The bee should visit each flower once and move on - even though 5 nectar units remain per flower (the original 15 minus the 10 collected).

The principle: In rich environments, visit many patches briefly. In poor environments, extract patches thoroughly.
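
A minimal sketch of the same comparison, asking the marginal value theorem's question directly - how many visits per flower maximize the long-run rate (numbers from the bumblebee example above):

```python
gains = [10, 4, 1]       # nectar units from the 1st, 2nd, 3rd visit to the same flower
visit_time = 5           # seconds per visit
travel_time = 3          # seconds to fly to the next flower

best_visits, best_rate = None, 0.0
for visits in range(1, len(gains) + 1):
    nectar = sum(gains[:visits])
    seconds = visits * visit_time + travel_time
    rate = nectar / seconds
    print(f"{visits} visit(s): {nectar}/{seconds} = {rate:.2f} units/s")   # 1.25, 1.08, 0.83
    if rate > best_rate:
        best_visits, best_rate = visits, rate

print("optimal visits per flower:", best_visits)   # 1 -- skim and move on
```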

Prediction 3: Central Place Foraging

Some organisms (birds with nests, bees with hives, humans with homes) must return to a central location. This changes the math.

Example: Starlings feeding chicks

  • Nest location: Fixed
  • Foraging sites: Variable distances
  • Load capacity: Can carry ~5 caterpillars at a time

Trade-off:

  • Close sites (100m): Less travel time, but lower caterpillar density (already depleted from repeated visits)
  • Distant sites (500m): More travel time, but higher caterpillar density (less visited)

Optimal strategy:

  • Early in breeding season: Forage close (chicks small, need frequent feeding, minimize travel)
  • Late in breeding season: Forage far (chicks large, need more food per trip, worth the travel time)

The starling adjusts foraging distance based on chick needs - maximizing delivery rate, not foraging rate.

Business parallel: Companies with fixed distribution costs (warehouses, stores, physical infrastructure) face similar trade-offs - a constraint we'll explore in Part 2.

Risk-Sensitive Foraging

Standard foraging theory assumes organisms maximize average energy intake. But evolution doesn't reward averages - it rewards survival.

Example: Two foraging strategies in winter

  • Strategy A (safe): 90 calories per day, every day (guaranteed)
  • Strategy B (risky): 50% chance of 0 calories, 50% chance of 200 calories (100 average)

Survival requires 100 calories per day minimum.

Which should an animal choose?

If well-fed (energy reserves at 500 calories):

  • Choose Strategy A (safe). Gaining 90/day builds reserves slowly but steadily. Avoid risk of 0-calorie day.

If starving (energy reserves at 80 calories):

  • Choose Strategy B (risky). Strategy A gives 90 calories/day but need 100 - net deficit of 10/day = death in 8 days. Strategy B gives 50% chance of death today, 50% chance of survival + accumulation. 50% survival > guaranteed death.

Risk-sensitive foraging theory: Organisms switch between risk-averse (when reserves are adequate) and risk-seeking (when facing starvation) strategies. The threshold is survival need.

But this isn't recklessness - it's rational recalibration. When reserves are high, minimizing variance makes sense (don't risk what you have). When reserves are below survival threshold, maximizing variance becomes the conservative choice (guaranteed slow death vs. probabilistic survival).

Evidence: Juncos (small birds) in winter

  • When fed reliably: Forage in safe, low-quality patches (covered areas, low predation)
  • When food-deprived: Forage in risky, high-quality patches (exposed areas, high predation, more food)

The bird's personality doesn't change - its strategy does.

At 80 calories with 100 needed for survival, the safe patch (guaranteed 90 calories) delivers 0% survival over time. The risky patch (50% chance of 200 calories) delivers 50% survival. 50% > 0%. The math is brutal and clear.
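
A minimal sketch of that logic, simulating a 30-day stretch under the illustrative numbers above (80-calorie reserves, a 100-calorie daily need, a guaranteed 90-calorie safe patch versus a 0-or-200 risky patch):

```python
import random

def survives(strategy, reserves=80, need=100, days=30):
    """Return True if reserves never hit zero over the stretch."""
    for _ in range(days):
        gain = 90 if strategy == "safe" else random.choice([0, 200])
        reserves += gain - need
        if reserves <= 0:
            return False
    return True

random.seed(1)
trials = 10_000
for strategy in ("safe", "risky"):
    p = sum(survives(strategy) for _ in range(trials)) / trials
    print(f"{strategy:5s} survival over 30 days: {p:.2f}")
# safe  -> 0.00 (reserves run out on day 8, exactly as the text calculates)
# risky -> roughly 0.14 here; small, but strictly better than certain starvation
```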

Business parallel: Startups with 3 months runway make risky bets (pivot, launch unfinished products, aggressive customer acquisition). Startups with 24 months runway optimize safely (test, measure, incremental improvement). Same company, different reserve levels, different foraging strategies.

Search Patterns: Random Walks vs. Lévy Flights

How should you search for resources when you don't know where they are?

Two strategies:

1. Random Walk (Brownian Motion)

  • Move in random directions, short steps
  • Thoroughly search local area before moving far
  • Optimal when resources are clustered and predictable

Example: Ants searching near nest (food likely nearby)

2. Lévy Flight (Power Law Distribution)

  • Mostly short steps, occasional very long jumps
  • Cover more ground, find distant patches
  • Optimal when resources are sparse and unpredictable

Example: Albatrosses searching ocean (fish schools randomly distributed)

The difference: Picture the albatross's track viewed from above: short hop, short hop, short hop, GIANT LEAP (200km), short hop, short hop, GIANT LEAP. This isn't random wandering - it's optimized search.

For sparse, randomly distributed resources (e.g., prey in open ocean), Lévy flights are mathematically optimal.

Evidence: Researchers tracked the flight paths of wandering albatrosses with electronic tags. The patterns matched a Lévy distribution - mostly short flights (~1-5km), with rare long-distance journeys (100-200km) - exactly the pattern predicted to be optimal for finding sparse, randomly distributed prey at sea.

Business parallel: Customer acquisition strategies

  • Random walk: Local advertising, referrals, word-of-mouth (works when customers are clustered geographically or demographically)
  • Lévy flight: Viral marketing, occasional big PR bets, geographic expansion jumps (works when customer distribution is unpredictable)

Searching efficiently requires matching search pattern to resource distribution.
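
A minimal sketch of the contrast between the two search patterns, sampling step lengths from an exponential distribution (Brownian-style) and a heavy-tailed Pareto distribution (Lévy-style); the parameters are illustrative, not fitted to albatross data:

```python
import random

def brownian_step(mean_km=2.0):
    # Exponential step lengths: long jumps are vanishingly rare.
    return random.expovariate(1.0 / mean_km)

def levy_step(min_km=1.0, mu=2.0):
    # Power-law (Pareto) step lengths with exponent mu: mostly short, rare giant leaps.
    return min_km / random.random() ** (1.0 / (mu - 1.0))

random.seed(42)
for name, step in (("brownian", brownian_step), ("levy", levy_step)):
    steps = sorted(step() for _ in range(10_001))
    print(f"{name:9s} median {steps[5000]:7.1f} km   longest {steps[-1]:9.1f} km")
```

Both searchers take mostly short steps; only the Lévy searcher ever makes the ocean-crossing leap.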


Part 2: Foraging Optimization in Business

Companies forage for customers, markets, partnerships, and talent. The biology maps directly:

  • Search costs energy: Sales, marketing, business development
  • Handling costs energy: Onboarding, integration, customer support
  • Returns deliver value: Revenue, growth, strategic positioning

The question is the same as the starling's: Where do you search? When do you stay? When do you leave?

Case Study 1: Netflix (2007-2011) - Optimal Diet Selection

2007. Reed Hastings has $200 million for streaming content. Studios want $5 million per blockbuster. He does the math: 40 blockbusters = entire budget, angry subscribers demanding more variety. Hundreds of mid-tier titles = same budget, satisfied subscribers, competitive moat.

He chooses... medium mussels.

In 2007, Netflix had a choice: which content to license for streaming?

The options:

  • Blockbusters (new releases, popular films): Expensive licensing ($1-5M per title), high subscriber demand
  • Mid-tier content (catalog titles, TV shows): Moderate licensing ($100K-500K per title), moderate demand
  • Long-tail content (obscure films, documentaries): Cheap licensing ($10K-50K per title), niche demand

Netflix's DVD business had trained it to think like a foraging animal: maximize value per dollar spent.

The analysis (simplified for illustration):

Blockbusters:

  • Cost: $2M per title (average)
  • Views: 5M subscribers watch
  • Cost per view: $0.40
  • Retention impact: High (reduces churn)

Mid-tier:

  • Cost: $300K per title
  • Views: 1M subscribers watch
  • Cost per view: $0.30
  • Retention impact: Moderate

Long-tail:

  • Cost: $30K per title
  • Views: 50K subscribers watch
  • Cost per view: $0.60
  • Retention impact: Low individually, high in aggregate (catalog depth)

The mistake most companies make: License only blockbusters (highest retention impact per title).

Netflix's insight: Mid-tier content had the best efficiency ($0.30 per view), but blockbusters carried the most strategic weight (they prevented churn). Long-tail had poor individual metrics but created the perception of infinite choice.

The optimal diet:

  • License enough blockbusters to prevent churn (critical threshold ~100 titles/year)
  • Fill library with mid-tier content (efficient filler, broad appeal)
  • Add long-tail for depth (signals "we have everything," even if rarely watched)

Result:

  • 2011: Netflix had 12,000 streaming titles (vs. 100K DVD titles)
  • Subscriber growth: 20M (2010) → 24M (2011) despite pricing controversies
  • Content budget: $2B (2011) → $17B+ (2024)

Why it worked:

  • Diet selection: Netflix chose content based on cost-per-value, not absolute popularity
  • Threshold strategy: Licensed enough blockbusters to satisfy, not maximize
  • Aggregate value: Long-tail titles individually weak but collectively strong (depth perception)

Netflix didn't license "the best" content. It licensed the optimal mix given budget constraints - exactly like a shore crab eating medium mussels but skipping small and large.
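
A minimal sketch of the cost-per-view arithmetic above (the figures are the illustrative ones from this case, not Netflix's actual economics):

```python
tiers = {
    # tier: (licensing cost per title, subscribers who watch it)
    "blockbuster": (2_000_000, 5_000_000),
    "mid_tier":    (300_000,   1_000_000),
    "long_tail":   (30_000,    50_000),
}

for tier, (cost, views) in tiers.items():
    print(f"{tier:12s} ${cost / views:.2f} per view")   # $0.40, $0.30, $0.60

# The diet isn't "pick the cheapest per view": blockbusters earn their place by
# preventing churn, long-tail by creating catalog depth -- the mix is the strategy.
```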

Case Study 2: Bain & Company (1973-1990) - Marginal Value Theorem in Consulting

1973. Bill Bain sits in his BCG partner office - mahogany desk, Charles River view, partnership track secured. He's built the perfect foraging patch. High status, high fees, high prestige.

And he's about to leave 35 berries on the bush.

In 1973, Bill Bain left Boston Consulting Group to found Bain & Company. His innovation wasn't methodology - it was client selection based on marginal value.

BCG's model (industry standard):

  • Work with any client willing to pay
  • Deliver project, move to next project
  • Relationship ends when project ends
  • Revenue = number of projects × fees

Bain's model (foraging optimization):

  • Work with one client per industry (exclusivity)
  • Deliver first project, then second, then third (patch residence)
  • Leave client only when marginal value falls below next client's value
  • Revenue = client lifetime value

The marginal value calculation:

Client A (existing):

  • Project 1: $500K fee, 6 months, high impact (major restructuring)
  • Project 2: $400K fee, 6 months, moderate impact (follow-up optimization)
  • Project 3: $300K fee, 6 months, low impact (minor tweaks)

Client B (new):

  • Project 1: $600K fee, 6 months, high impact (new relationship, major problem)

Traditional logic: Switch to Client B after Project 1 (higher fee).

Bain's logic: Stay with Client A through Project 2, then switch.

Why?

Relationship capital changes the economics:

Client A (existing):

  • Fee: $400K
  • Sales cost: Low (trust established, no pitch needed)
  • Delivery risk: Low (understand their business)
  • Profit margin: 60%
  • Profit: $400K × 60% = $240K

Client B (new):

  • Fee: $600K
  • Sales cost: High (cold pitch, due diligence, proof of concept)
  • Delivery risk: High (learning their business from scratch)
  • Profit margin: 40%
  • Profit: $600K × 40% = $240K

Decision: $240K = $240K profit. The higher fee from Client B is fully offset by higher costs - and the existing relationship carries less delivery risk and compounds into future projects. Bain should stay with Client A for Project 2.

But by Project 3, if Client A's fees have declined or delivery costs increased, marginal profit falls below new client acquisition profit. That's when Bain leaves.
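
A minimal sketch of the stay-or-switch arithmetic above, where each margin is assumed to already net out sales cost and delivery risk; the Project 3 margin is a hypothetical figure added to show the crossover:

```python
def project_profit(fee, margin):
    # margin is assumed to already reflect sales cost and delivery risk
    return fee * margin

client_a_project2 = project_profit(400_000, 0.60)   # existing relationship
client_b_project1 = project_profit(600_000, 0.40)   # new relationship
client_a_project3 = project_profit(300_000, 0.45)   # hypothetical squeezed Project 3 margin

print(f"Client A, Project 2: ${client_a_project2:,.0f}")   # $240,000 -> tie: stay, compound trust
print(f"Client B, Project 1: ${client_b_project1:,.0f}")   # $240,000
print(f"Client A, Project 3: ${client_a_project3:,.0f}")   # $135,000 -> now switching wins
```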

The mechanism: This is the marginal value theorem applied to consulting. BCG left patches early (one project per client), flying constantly between bushes. Bain stayed at patches longer (multiple projects per client), leaving only when marginal returns fell below next-client value.

Result:

  • Bain & Company grew to $500M revenue by 1990 (17 years)
  • Client retention: 80%+ (vs. BCG 30-40%)
  • Premium pricing: 20% higher fees than competitors (exclusivity value)
  • Profit margins: 35-40% (vs. industry 25-30%)

Why it worked:

  • Patch residence: Stayed with clients longer than competitors (extracted more value per relationship)
  • Marginal value: Left clients before relationship became unprofitable (avoided low-margin tail projects)
  • Exclusivity: One client per industry prevented competition within patches (maximized patch value)

Bain didn't abandon clients prematurely (lost relationship capital) or stay too long (diminishing returns). It optimized patch residence time - exactly like the starling leaving the bush.

Case Study 3: Amazon (2000-2015) - Lévy Flights in Product Strategy

In 2000, Amazon was an online bookstore. By 2015, it had launched hundreds of products and services. Most failed. A few transformed the industry.

The pattern: Mostly small bets (product categories, features, tools), occasional giant leaps (cloud services, hardware devices, subscription programs).

This is Lévy flight search strategy.

Small steps (short flights):

  • Product categories (2000-2005): Music, electronics, toys - each adding ~$50-200M revenue
  • Features: 1-Click ordering, customer reviews, recommendations - incremental improvements
  • Tools: Seller tools, marketplace features - low cost experiments

Giant leaps (long flights):

  • AWS launch (2006): Cloud infrastructure, $billions invested, 10-year bet on enterprise market
  • Kindle (2007): $3B R&D, hardware platform, publishing disruption
  • Prime membership (2005): Free 2-day shipping, $billions in logistics, customer lock-in strategy

The strategy:

  • Launch 20 small products per year (low cost, rapid experiments)
  • Make 1-2 giant bets per year (high cost, long-term potential)
  • Kill failures quickly (60%+ of products shut down within 2 years)
  • Double down on successes (AWS, Prime, Kindle became core businesses)

The math (simplified):

Small bets:

  • Investment: 200 projects × $1M = $200M
  • Success rate: 10% (20 projects succeed)
  • Value created: 20 × $10M = $200M

Giant bets:

  • Investment: 10 projects × $100M = $1B
  • Success rate: 30% (3 projects succeed)
  • Value created: 3 × $10B = $30B

Total value created: $30.2B from $1.2B invested (25× return)

The giant bets created 150× more value than small bets, despite being only 5× the budget.

Traditional strategy (random walk):

  • Launch 50 medium bets at $5M each = $250M invested
  • 20% succeed = 10 successes at $50M value each = $500M total value

The Lévy flight strategy created 60× more value than random walk.
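
A minimal sketch of that expected-value arithmetic, using the illustrative portfolio figures above:

```python
def portfolio(n_projects, cost_each, success_rate, value_if_success):
    invested = n_projects * cost_each
    created = n_projects * success_rate * value_if_success
    return invested, created

small  = portfolio(200, 1e6,   0.10, 10e6)    # many cheap experiments
giant  = portfolio(10,  100e6, 0.30, 10e9)    # rare huge bets
medium = portfolio(50,  5e6,   0.20, 50e6)    # the "random walk" baseline

levy_invested = small[0] + giant[0]
levy_created  = small[1] + giant[1]
print(f"Levy mix:    ${levy_created / 1e9:.1f}B created on ${levy_invested / 1e9:.1f}B invested")
print(f"Medium bets: ${medium[1] / 1e9:.1f}B created on ${medium[0] / 1e9:.2f}B invested")
print(f"Advantage:   {levy_created / medium[1]:.0f}x")   # ~60x
```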

Why it worked:

  • Power law distribution: Retail and tech markets have power-law returns (winner-take-most). Lévy flights match this distribution.
  • Search pattern: Small steps found incremental wins (categories, features). Giant leaps found transformative wins (cloud, devices, subscriptions).
  • Fast failure: Failed experiments killed quickly, avoiding sunk cost trap

Successful companies don't optimize every product - they optimize their search pattern. Mostly short flights, occasional long jumps, rapid abandonment of failed patches.

Case Study 4: Airbnb (2008-2010) - Risk-Sensitive Foraging

December 2008. Brian Chesky calls Joe Gebbia.

"We have $5,000 left. I want to spend it all on a photographer."

Silence.

"That's our entire runway."

"I know. But five weeks of slow growth guarantees death. One photo bet gives us a shot."

Gebbia pauses. "...Do it."

In 2008, Airbnb had $5,000 in the bank and was burning $1,000/week. The founders (Brian Chesky, Joe Gebbia, Nathan Blecharczyk) faced a choice:

Strategy A (safe):

  • Incremental growth through word-of-mouth
  • Minimal spend on marketing/product
  • Extend runway to 5 months
  • Average outcome: Slow growth, likely failure

Strategy B (risky):

  • Spend $5,000 on professional photography for top 50 listings
  • Bet that better photos increase bookings
  • Runway reduced to 0 weeks
  • 50% chance of failure (immediate), 50% chance of success (booking surge)

Traditional logic: Choose Strategy A. Extend runway, optimize for survival.

Risk-sensitive foraging logic: Choose Strategy B.

Why?

Airbnb's reserves were below survival threshold. Average bookings (10 per week) weren't enough to reach profitability before running out of money. Strategy A guaranteed slow death. Strategy B gave 50% chance of success.

When facing starvation, risk-seeking is optimal.

They chose Strategy B.

Result:

  • Spent $5,000 hiring photographer (December 2008)
  • Photographed top 50 listings in NYC
  • Bookings doubled within 2 weeks (20 per week)
  • Revenue increased 2.5× within 1 month
  • Raised $600K seed round (March 2009) based on growth metrics

The insight: Professional photos weren't "nice to have" - they were the difference between survival and death. The risky bet (spend last $5K) delivered higher expected value than the safe bet (extend runway by 5 weeks).

Why it worked:

  • Reserves below threshold: Safe strategy guaranteed failure, risky strategy gave survival chance
  • Binary outcome: Either photos worked (bookings surge) or didn't (immediate death). No middle ground.
  • Fast feedback: Result known within weeks, not months. Could iterate if alive.

Airbnb foraged like a starving junco - took the risky patch because safe patches guaranteed death. Risk-sensitive foraging saved the company.

Case Study 5: Research In Motion (2007-2013) - Staying at Depleted Patches

In 2007, RIM (makers of BlackBerry) had 47 berries. The smartphone market was exploding, BlackBerry dominated enterprise email, and the company had $3B in annual revenue.

They ate 12 berries (grew to $20B revenue by 2011, 50% market share) and then faced the starling's question: when to leave?

The answer: 2008 - a year after the iPhone launched, once the shift to touchscreens was unmistakable.

The signs of depletion:

  • Consumer preference shifting from keyboards to touchscreens
  • App ecosystem becoming competitive advantage (Apple App Store launched 2008)
  • Enterprise security (BlackBerry's moat) being commoditized by iOS/Android

The marginal value calculation (simplified):

2008 decision:

  • Stay with keyboards (current patch): $10B invested in BlackBerry OS, manufacturing, carrier relationships. Marginal value: extend market leadership 2-3 years.
  • Fly to touchscreens (new patch): $10B invested in new OS, developer ecosystem, consumer marketing. Marginal value: compete in new smartphone paradigm.

Traditional logic: Stay with keyboards. We're the leader, our share is still climbing toward 50%, and revenue is on its way to $20B. Why abandon success?

Marginal value logic: Leave keyboards in 2008. Market has shifted. Next year's keyboard sales will be lower than this year's (declining patch). Touchscreen market is growing 50% annually (abundant patch). Fly to new bush.

RIM chose to stay.

Result:

  • 2008-2011: Ate remaining berries (keyboard market). Revenue peaked at $20B (2011).
  • 2011-2013: Berries gone (keyboard market collapsed). Revenue fell to $11B (2013).
  • 2013: Company renamed BlackBerry; by 2016 it had exited handset manufacturing, licensing the brand to hardware partners.
  • As of December 2025: BlackBerry no longer makes phones. Market cap $2.4B (was $80B in 2008).

What went wrong?

RIM optimized per patch (keyboard market share) not across environment (smartphone market evolution). They stayed at a depleted bush because "there's still revenue here" (there was - $20B!). They missed that marginal value was negative by 2008 - every year in keyboards meant one less year to establish touchscreen position.

The starling leaves with 35 berries remaining. RIM stayed until all 47 berries were gone. By then, Apple and Samsung had eaten all the berries at the next bush.

The lesson: Market leadership in a declining patch is worth less than small share in a growing patch. RIM was king of keyboards in 2011. They should have been building touchscreens in 2008. Optimal patch residence isn't about maximizing current patch - it's about leaving before marginal value goes negative.


The Foraging Paradox: When Everyone Optimizes

Optimal foraging theory assumes you're the only one optimizing. But what happens when every company in your industry reads this chapter?

Scenario 1: The Customer Death Spiral

  • Company A identifies Year 5 as customer inflection point, exits
  • Company B acquires customer, extracts Year 6-7 value, exits
  • Company C picks up Year 8-9 value
  • Customer X passed between firms like a depleting berry bush until death
  • Result: Industry-wide optimization damages long-term customer relationships

Scenario 2: The Segment Arms Race

  • Every company's analysis identifies Mid-market as optimal segment (highest LTV-to-CAC ratio)
  • All competitors flood mid-market simultaneously
  • Customer acquisition costs (CAC) rise 300%
  • LTV/CAC ratio collapses from 8:1 to 2:1
  • Result: The "optimal" patch becomes depleted through competition

The Evolutionary Insight: Frequency-dependent selection

In biology, a strategy's success depends on how common it is in the population. Rare strategies often outperform common ones because they exploit underused resources.

Business Translation:

  • Marginal value optimization works best when you're among the first to implement
  • Being contrarian - serving "suboptimal" customers well - can be optimal when everyone else abandons them
  • The best patch is sometimes the one everyone thinks is depleted

The Meta-Strategy: Optimize one level deeper than your competitors. If they optimize per-customer, you optimize across portfolio. If they optimize across portfolio, you optimize across time (building durable relationships in "inefficient" segments).


Part 3: The Framework - How to Forage Optimally

These four companies - Netflix, Bain, Amazon, Airbnb - applied foraging principles intuitively. But you don't need intuition. The biological strategies they discovered can be systematized into executable frameworks.

Here's how to implement optimal foraging in your organization.

Framework 1: The Shore Crab Decision (Diet Selection Matrix)

Question: Which customers/markets/products should you pursue?

Remember the shore crab choosing between large, medium, and small mussels? It calculated calories per second of effort and chose the optimal mix. Here's how to do the same with customer segments:

Algorithm:

  1. Calculate value per unit effort for each option:
    • Value = (Annual revenue × Lifetime years × Margin × Close rate)
    • Effort = (Search cost + Sales cost + Onboarding cost)
    • Ratio = Value / Effort
  2. Rank options by ratio (highest to lowest)
  3. Include in "diet" if:
    • Ratio > current portfolio average
    • Exclude if ratio < current portfolio average
  4. Adjust based on abundance:
    • If high-value targets are abundant: Be selective (only pursue top tier)
    • If high-value targets are scarce: Broaden diet (pursue mid-tier)

Example:

Note: LTV (Lifetime Value) = total profit a customer generates over their entire relationship. CAC (Customer Acquisition Cost) = total sales and marketing expense to acquire one customer.

  • Enterprise: LTV $1.25M, effort cost $200K, value/effort ratio 6.25, close rate 10%, expected value $125K - Large mussel, pursue selectively
  • Mid-market: LTV $75K, effort cost $35K, value/effort ratio 2.14, close rate 30%, expected value $22.5K - Medium mussel, prioritize
  • SMB: LTV $2.5K, effort cost $3.5K, value/effort ratio 0.71, close rate 60%, expected value $1.5K - Small mussel, deprioritize

Optimal diet:

  • Pursue: Enterprise and Mid-market (ratios of 6.25 and 2.14 - several dollars of lifetime value per dollar of effort)
  • Ignore: SMB (ratio 0.71 - below break-even; each dollar of effort buys back only 71 cents of lifetime value)

Adjust for scarcity:

  • If enterprise deals are abundant (10% close rate holds): Focus there
  • If enterprise deals dry up (close rate drops to 5%): Shift to mid-market

The Insight: You're not looking for the "best" customer. You're looking for the optimal mix given your constraints. The crab eats medium mussels even though large ones exist. You pursue mid-market even though enterprise deals are sexier. Optimal ≠ maximum.
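
A minimal sketch of the diet-selection matrix, using the figures from the example. The cutoff here is the simplest possible one - a ratio below 1.0 means a dollar of effort buys back less than a dollar of lifetime value; in practice you would compare each ratio against your current portfolio's blended average, as in step 3 of the algorithm:

```python
segments = {
    # segment: (LTV, effort cost, close rate)
    "enterprise": (1_250_000, 200_000, 0.10),
    "mid_market": (75_000,    35_000,  0.30),
    "smb":        (2_500,     3_500,   0.60),
}

# Rank by value per unit of effort, then flag anything below break-even.
for name, (ltv, effort, close) in sorted(
        segments.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
    ratio = ltv / effort                    # value per dollar of effort
    expected_value = ltv * close            # per-pursuit expected value
    verdict = "pursue" if ratio >= 1.0 else "deprioritize"
    print(f"{name:11s} ratio {ratio:5.2f}  EV ${expected_value:>9,.0f}  {verdict}")
```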

Framework 2: The Starling's Algorithm (Marginal Value Decision)

Question: When should you leave a customer/market/product?

The starling left the bush after 12 berries, not 47. Bain left clients after Project 2 or 3, not Project 5. Here's the algorithm for knowing when to leave:

Algorithm:

  1. Calculate marginal value of next interaction:
    • Marginal value = (Next deal revenue × Margin × Retention probability) - (Sales cost + Delivery cost)
  2. Calculate opportunity cost:
    • Opportunity cost = Expected profit from best alternative (new customer, different market, etc.)
  3. Leave when:
    • Marginal value < Opportunity cost

Example:

Current customer (Year 3 of relationship):

  • Next renewal: $300K revenue
  • Margin: 50%
  • Retention probability: 80%
  • Sales cost: $20K (relationship established)
  • Marginal value: ($300K × 0.5 × 0.8) - $20K = $100K

New customer (prospecting):

  • First deal: $500K revenue
  • Margin: 40%
  • Close probability: 30%
  • Sales cost: $100K (cold pitch, long cycle)
  • Expected value: ($500K × 0.4 × 0.3) - $100K = -$40K (net loss on first deal, but positive lifetime value)

Decision: Stay with current customer (marginal value $100K > new customer first deal -$40K).

But check the same calculation at Year 7:

Current customer (Year 7 of relationship):

  • Next renewal: $150K revenue (shrinking)
  • Margin: 30% (squeezed)
  • Retention probability: 60% (shaky)
  • Sales cost: $30K (high maintenance)
  • Marginal value: ($150K × 0.3 × 0.6) - $30K = -$3K

You're paying to keep this customer. The math is screaming: leave.

The key: Compare marginal return on current activity vs. expected return on next-best activity. Leave when marginal < average.

This is the starling's algorithm - leave the bush when the next berry is slower than flying to a new bush.

In Plain English: Stay at a customer/market/product as long as your profit from the next transaction exceeds the profit you'd get from moving to a new customer/market/product.
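
A minimal sketch of the stay-or-leave rule, using the Year 3 and Year 7 figures above. The rule sketched here: leave when the marginal value of the next transaction falls below zero or below the expected profit of the best alternative (whose loss-making first deal is a conservative stand-in for its positive lifetime value):

```python
def marginal_value(revenue, margin, retention_prob, cost_to_serve):
    return revenue * margin * retention_prob - cost_to_serve

# Best alternative: a new prospect's first deal (loss-making up front; its
# positive lifetime value is what ultimately matters and is assumed > 0 here).
alternative_first_deal = 500_000 * 0.40 * 0.30 - 100_000      # about -$40K

year3 = marginal_value(300_000, 0.50, 0.80, 20_000)           # about +$100K
year7 = marginal_value(150_000, 0.30, 0.60, 30_000)           # about -$3K

for label, mv in (("Year 3", year3), ("Year 7", year7)):
    decision = "stay" if mv > max(0.0, alternative_first_deal) else "leave"
    print(f"{label}: marginal value ${mv:,.0f} -> {decision}")   # Year 3 stay, Year 7 leave
```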

Framework 3: The Albatross Search (Search Pattern Selector)

Question: How should you search for new opportunities?

Albatrosses use Lévy flights - short hops interspersed with long journeys across the ocean. Your search pattern should match your market structure: random walks for clustered resources, Lévy flights for sparse opportunities.

Decision tree:

If resources are clustered and predictable:

  • Use random walk (local search, thorough coverage)
  • Examples: Geographic expansion (city by city), vertical expansion (industry by industry), referral networks

If resources are sparse and unpredictable:

  • Use Lévy flight (mostly small bets, occasional big jumps)
  • Examples: Product portfolio (many small launches, few big bets), market entry (test multiple geos, double down on winners), M&A strategy (many small acquisitions, rare large acquisitions)

How to tell:

  • Clustered resources: Customer acquisition cost (CAC) is similar across channels, referral rates >30%, geographic/demographic concentration
  • Sparse resources: CAC varies 10× across channels, low referral rates, random distribution

Example:

Clustered (Local restaurant):

  • Customers are geographically clustered (neighborhood)
  • Search strategy: Flyers, local ads, community events (random walk)

Sparse (B2B SaaS):

  • Customers are randomly distributed (any industry, any geography)
  • Search strategy: Content marketing (small bets), occasional conference sponsorships (big bets), viral loops (Lévy flight)

Match your search pattern to resource distribution.
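
A minimal sketch of that decision tree. The channel costs are made-up examples, and the thresholds (a 10× CAC spread, a 30% referral rate) are the rough signals named above, not calibrated cutoffs:

```python
def search_pattern(cac_by_channel, referral_rate):
    """Very rough heuristic: clustered demand -> random walk; sparse -> Levy flight."""
    spread = max(cac_by_channel.values()) / min(cac_by_channel.values())
    clustered = spread < 10 or referral_rate > 0.30
    return "random walk (local, thorough)" if clustered else "Levy flight (small bets + rare big jumps)"

restaurant = {"flyers": 8, "local_ads": 12, "community_events": 10}       # CAC in $ per channel
b2b_saas   = {"content": 200, "conferences": 2_500, "outbound": 900}

print(search_pattern(restaurant, referral_rate=0.45))   # random walk
print(search_pattern(b2b_saas, referral_rate=0.05))     # Levy flight
```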

Framework 4: The Junco's Risk Threshold (Risk-Sensitivity Threshold)

Question: When should you take risky bets vs. safe bets?

The junco forages safely when fed, riskily when starving. Here's how to apply the same logic:

Algorithm:

  1. Calculate reserves: Current cash / Monthly burn = Runway (months)
  2. Calculate survival threshold: Minimum runway to reach next milestone (funding, profitability, product launch)
  3. Compare:
    • If reserves > threshold + 6 months: Be risk-averse (optimize for average outcome)
    • If reserves < threshold + 3 months: Be risk-seeking (optimize for survival probability)
    • If reserves between threshold + 3-6 months: Mixed strategy (balance risk/reward)

Example:

Startup A:

  • Reserves: $2M
  • Burn: $200K/month
  • Runway: 10 months
  • Next milestone: Product launch in 6 months
  • Threshold: 6 months
  • Reserves sit between threshold + 3 and threshold + 6 months (9 < 10 < 12)
  • Strategy: Mixed, leaning risk-averse (incremental testing, safe customer acquisition, proven channels)

Startup B:

  • Reserves: $600K
  • Burn: $200K/month
  • Runway: 3 months
  • Next milestone: Seed raise requires traction
  • Threshold: 6 months (need traction in 3 months to raise in time)
  • Reserves < threshold (3 < 6)
  • Strategy: Risk-seeking (big PR bets, aggressive product launches, risky channel experiments)

You're Startup B. Your co-founder asks: "Should we spend $100K on this PR gambit?" You check the bank: $600K. Three months. The safe answer is "No - we need to conserve cash." The correct answer is "Yes - because conserving cash guarantees we die in 3 months. The PR gambit gives us a chance." When reserves fall below survival threshold, "risky" becomes "rational."

The key: When facing starvation, safe strategies guarantee death. Risky strategies give survival chance. Expected value of risky > expected value of safe when reserves are below threshold.

This is the junco's algorithm - forage safely when fed, forage riskily when starving.
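
A minimal sketch of the threshold logic, reproducing the two startups above; the bands follow step 3 of the algorithm:

```python
def risk_posture(cash, monthly_burn, months_to_milestone):
    runway = cash / monthly_burn
    if runway > months_to_milestone + 6:
        return runway, "risk-averse (optimize the average outcome)"
    if runway < months_to_milestone + 3:
        return runway, "risk-seeking (optimize survival probability)"
    return runway, "mixed (balance risk and reward)"

for name, cash, burn, milestone in (("Startup A", 2_000_000, 200_000, 6),
                                    ("Startup B", 600_000,   200_000, 6)):
    runway, posture = risk_posture(cash, burn, milestone)
    print(f"{name}: {runway:.0f} months runway -> {posture}")
    # Startup A -> mixed; Startup B -> risk-seeking
```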


Implementation: From Theory to Practice

The frameworks above are mathematically elegant. Implementation is messy. Here's how to navigate common barriers:

Barrier 1: Insufficient Data

Most companies don't have clean customer LTV, cohort profitability, or retention probability data.

Minimum Viable Data Approach:

  • If you have: Revenue per customer over time → Calculate inflection point as "when quarterly revenue growth rate turns negative"
  • If you have: Gross margin per customer → Proxy for profitability; leave when margin drops below company average
  • If you have: Support ticket volume → Spike in tickets often precedes churn; flag for exit consideration
  • If you have: Nothing quantitative → Score customers 1-5 on "strategic fit" and "growth trajectory" quarterly; exit bottom quartile

Start with directional data. Precision increases over time.

Barrier 2: Organizational Resistance

Sales compensation plans reward retention, not optimal quitting. Account managers resist "firing" customers. Leadership fears revenue decline.

Change Management Playbook:

  1. Reframe: "Graduating customers" not "firing customers" - they've outgrown us, we're not right for them
  2. Pilot: Test on 5-10 customers, measure results (freed capacity, improved service to remaining customers)
  3. Celebrate: Share success stories - "We exited Customer X, they're happier with Competitor Y, we reallocated time to Customer Z, who grew 40%"
  4. Comp Plans: Adjust sales incentives - reward profitable retention, not total retention

Barrier 3: Competitive Fear

"If we abandon customers, competitors will take them."

Response: Yes. That's the point. Let competitors waste resources on post-inflection customers while you focus on pre-inflection opportunities.

Counter-scenario: You keep unprofitable customers, competitors poach your profitable ones. You lose both ways.

Barrier 4: Master Integration

You have 4 frameworks. How do you use them together?

Quarterly Planning Integration:

  1. Q-4 weeks: Run Framework 1 (Diet Selection) on all customer segments
  2. Q-3 weeks: Run Framework 2 (Marginal Value) on bottom quartile segments - identify exit candidates
  3. Q-2 weeks: Run Framework 4 (Risk Sensitivity) to calibrate risk appetite for new segment exploration
  4. Q-1 week: Run Framework 3 (Search Pattern) to allocate sales/marketing resources
  5. Quarter start: Execute decisions
  6. Mid-quarter: Review leading indicators (CAC trends, early conversion rates)
  7. Quarter end: Measure results, update models

The goal: Make foraging optimization part of quarterly rhythm, not one-time analysis.


The Ethics of Optimal Quitting

The Concern: "You're teaching companies to extract maximum value from customers and then abandon them when they're no longer profitable. Isn't this just ruthless capitalism?"

The Counter-Argument:

1. Honest Pricing Creates Honest Relationships. If your cost-to-serve exceeds the value you deliver, the relationship is subsidized. You're either overcharging new customers to subsidize old ones, or burning investor capital. Neither is sustainable. Raising prices to match costs or exiting creates an honest exchange.

2. Better Matches Serve Customers Better. Customers past their inflection point often receive better service from specialists. A startup that built its brand serving SMBs may struggle to serve enterprise customers well - those customers are better served by enterprise-focused vendors with appropriate infrastructure and expertise.

3. Resource Reallocation Improves Overall Service. The time, capital, and attention spent on post-inflection customers could serve pre-inflection customers better. By optimizing portfolio composition, you deliver more value to customers you're best positioned to serve.

4. Transparency Prevents Harm. The damage comes from pretending. A customer paying for a service you're de-investing in gets poor value. Being explicit about fit - "We're not the right vendor for your scale" - allows the customer to find better alternatives.

The Principle: Optimal foraging is about honest fit. Stay while you're creating value, leave when you're not. The marginal value theorem doesn't justify exploitation - it identifies when a relationship has passed its value-creation peak.

The starling doesn't leave to hurt the berry bush. It leaves because it can deliver more value (to itself, and to the ecosystem whose seeds it disperses) by moving on to fresh bushes.


Closing: The Starling's Wisdom

The starling left the bush with 35 berries remaining. To human eyes, this looks wasteful. To quarterly-earnings-obsessed executives, leaving 35 berries looks insane. "We're still extracting value!" they shout to the board.

To evolution - which has run this experiment for 200 million years across trillions of organisms - it's the only strategy that survives.

The starling doesn't maximize berries per bush. It maximizes berries per lifetime. Staying at depleted patches wastes time. Moving to fresh patches - even with travel costs - delivers more calories over a foraging career.

Most companies fail this test:

  • Blockbuster (2004): $6B in DVD revenue, stayed in dying market, refused to fly to streaming. 2010: Bankrupt.
  • Kodak (2000): 80% market share in film, stayed in dying market, refused to fly to digital. 2012: Bankrupt.
  • Nokia (2007): 51% smartphone market share, stayed with feature phones, refused to fly to touchscreens. 2013: Mobile division sold for 4% of peak value.
  • BlackBerry (2011): "Physical keyboards are superior!" 2016: Exited phone manufacturing.

They all had berries remaining. They all optimized per patch, not across the environment. They all stayed too long.

The lesson from foraging biology:

  1. Diet selection: Pursue opportunities with value/effort above portfolio average. Ignore the rest.
  2. Marginal value: Leave customers/markets/products when marginal return falls below next-best alternative.
  3. Search patterns: Match your search strategy to resource distribution (random walk for clustered, Lévy flight for sparse).
  4. Risk sensitivity: When reserves are high, play safe. When reserves are low, take risks. Survival probability matters more than average outcome.

Netflix didn't license the "best" content - it licensed the optimal mix. Bain didn't serve all clients - it optimized patch residence time. Amazon didn't bet evenly - it used Lévy flights (small products, giant infrastructure bets). Airbnb didn't play safe - it bet everything because reserves were below survival threshold.

The starling has 47 berries available. It eats 12 and leaves.

Your company has customers who no longer generate positive marginal value. Markets where your share is declining and cost of service is rising. Products that require disproportionate support for shrinking revenue.

How many berries are ripening at the next bush while you stay at this one, because you don't know when to fly?

More importantly: How many berries are you staying for because you've forgotten how to leave?


Key Takeaways

  1. The starling's rule: Leave when marginal return < average return. Optimize across your environment, not within each patch.
  2. Diet selection: Pursue opportunities where value/effort > portfolio average. Shore crabs eat medium mussels and skip small and large - so should you.
  3. Risk sensitivity: When reserves > survival threshold, minimize variance. When reserves < survival threshold, maximize upside. Airbnb bet their last $5K because safe strategy guaranteed death.
  4. Lévy flights: Many small bets + occasional giant leaps creates outsized returns. Amazon's AWS, Kindle, and Prime transformed industries while hundreds of small bets failed quietly.
  5. Know when to quit: The most important decision isn't finding resources - it's knowing when to abandon them. RIM stayed at keyboards until the market collapsed. Apple flew to touchscreens while berries remained.

The ultimate lesson: Optimize across your environment, not within each patch. Leave the bush while berries remain. The starling knows this. Most companies don't.


References

Biological Foundations:

  • Charnov, E. L. (1976). "Optimal foraging, the marginal value theorem." Theoretical Population Biology, 9(2), 129-136.
  • MacArthur, R. H., & Pianka, E. R. (1966). "On optimal use of a patchy environment." American Naturalist, 100(916), 603-609.
  • Emlen, J. M. (1966). "The role of time and energy in food preference." The American Naturalist, 100(916), 611-617.
  • Elner, R. W., & Hughes, R. N. (1978). "Energy maximization in the diet of the shore crab, Carcinus maenas." Journal of Animal Ecology, 47(1), 103-116.
  • Caraco, T., Martindale, S., & Whittam, T. S. (1980). "An empirical demonstration of risk-sensitive foraging preferences." Animal Behaviour, 28(3), 820-830.
  • Stephens, D. W. (1981). "The logic of risk-sensitive foraging preferences." Animal Behaviour, 29(2), 628-629.
  • Orians, G. H., & Pearson, N. E. (1979). "On the theory of central place foraging." Analysis of Ecological Systems, 154-177.
  • Viswanathan, G. M., et al. (1999). "Optimizing the success of random searches." Nature, 401(6756), 911-914.
  • Sims, D. W., et al. (2008). "Scaling laws of marine predator search behaviour." Nature, 451(7182), 1098-1102.
  • Stephens, D. W., & Krebs, J. R. (1986). Foraging Theory. Princeton University Press.

Business Case Sources:

  • Netflix investor reports (2010-2011)
  • Gallagher, L. (2017). The Airbnb Story: How Three Ordinary Guys Disrupted an Industry, Made Billions... and Created Plenty of Controversy. Houghton Mifflin Harcourt.
  • Stone, B. (2013). The Everything Store: Jeff Bezos and the Age of Amazon. Little, Brown and Company.

Note: Business case financial figures are illustrative examples based on industry norms and public information, not companies' internal data.


The starling leaves the berry bush.

But where does it fly to? How does it decide which patches are worth exploring? And most critically - how much energy can it afford to spend searching before it starves?

That's Chapter 3: Energy Budgets and Metabolic Constraints - the physiology of resource allocation and the mathematics of survival.


End of Chapter 2

Get Notified When Available →