The over-engineered plant authority

Botanical facts, framed like a performance review

eFerns treats houseplants, lawn replacements, and shade beds with the extreme seriousness usually reserved for quarterly operations dashboards. Underneath the tone, the information architecture stays rigid, structured, and reusable.

Operational baseline

The site is optimized for evidence density, not ornamental lifestyle copy

These numbers are intentionally editorial rather than fictional SaaS telemetry. They encode constraints that shape the implementation and the content model.

Hero image policy: eager loading with explicit dimensions. Protects mobile LCP on detail pages.
Client runtime: zero by default. Local state only where it changes outcomes.
Card summary limit: 160 characters. Enforced in CloudCannon to prevent card fractures.
Controlled vocabularies: 12 taxonomies. Shared across profiles, guides, and experiments.
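To make the 160-character card summary limit concrete, here is a minimal build-time guard. The function name, frontmatter shape, and error format are illustrative assumptions; in the real build the constraint lives in CloudCannon's editing UI, and this sketch only shows the same rule expressed as a content-validation step.

```javascript
// Hypothetical build-time guard mirroring the editor-side summary limit.
// The `summary` frontmatter field and this validator are assumptions, not
// the site's actual CloudCannon configuration.
const CARD_SUMMARY_LIMIT = 160;

function validateSummary(frontmatter) {
  const summary = frontmatter.summary ?? "";
  if (summary.length > CARD_SUMMARY_LIMIT) {
    return {
      ok: false,
      error: `Summary is ${summary.length} chars; limit is ${CARD_SUMMARY_LIMIT}.`,
    };
  }
  return { ok: true };
}
```

A check like this could run over every content file during the build, so a summary that slips past the editor still fails loudly before deploy.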
Myth versus measured

Common gardening advice that fails a basic logging discipline

The tone is slightly absurd because much of the content market for this category is vague on purpose. The implementation is the opposite.

Myth: Mist every fern daily and humidity problems disappear.
Measured: Root-zone moisture plus ambient humidity explain more variance than leaf spray.
Takeaway: Replace ritual with the variables that actually changed the outcome.

Myth: Shade mixes are interchangeable as long as the bag says “woodland.”
Measured: Texture, drainage speed, and organic matter ratios change survival rates fast.
Takeaway: Soil should be documented like a spec, not a mood.

Myth: Turf replacement is mostly a design decision.
Measured: Hours of light, moisture stability, and maintenance tolerance eliminate bad candidates early.
Takeaway: A site audit saves more time than another inspiration board.

Seasonal checklist

Low-drama tasks that keep recommendations honest

This one tiny interactive surface earns its keep. Checklist state is scoped to the page slug and task id, written to localStorage, and restored on reload.
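The persistence scheme described above can be sketched in a few lines. The key format and function names here are assumptions, not the site's actual implementation; the `storage` parameter defaults to `localStorage` but is injectable so the logic can be exercised outside a browser.

```javascript
// Sketch of checklist persistence: state is keyed by page slug plus task id
// and written to localStorage, then read back on reload. Names and the
// key format are illustrative assumptions.
function checklistKey(slug, taskId) {
  return `checklist:${slug}:${taskId}`;
}

function saveTaskState(slug, taskId, done, storage = globalThis.localStorage) {
  storage.setItem(checklistKey(slug, taskId), done ? "1" : "0");
}

function restoreTaskState(slug, taskId, storage = globalThis.localStorage) {
  return storage.getItem(checklistKey(slug, taskId)) === "1";
}
```

On reload, each checkbox would call `restoreTaskState` with the current page slug before rendering its checked state, so the rest of the page needs no client runtime at all.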

Failure postmortem

Negative results stay visible because they are operationally useful

The most commercial version of this site would hide failed interventions. The better reference implementation leaves them in plain view.

Experiment: Daily misting failure. Duration: 6 weeks. Risk: high.

Observed failure

No durable improvement. Leaf surfaces looked better briefly after spraying, but edge crisping and decline pattern stayed functionally unchanged.

Intervention: Misted foliage once every morning for 6 weeks while keeping the same pot, room, and baseline watering schedule.

Read the full postmortem
Manual effort added: 42 extra sprays over 6 weeks.
Durable improvement detected: none beyond the first visual day.
Better corrective variable: stable humidity plus consistent root moisture.

Questions people ask once the joke wears off

Is this a real gardening site or a demo for Astro and CloudCannon?

Both. Publicly it is a rigorous editorial property. Underneath, it is also a reference implementation for structured, editor-safe Astro builds.

Why not use client-side filtering for plant comparisons?

The core navigation still works with no JavaScript. Structured data should improve authoring and reuse before it adds runtime complexity.
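One way to honor that stance is to resolve comparisons at build time instead of in the browser: group plants by a controlled-vocabulary field once, then render one static page per term. The field names and data shape below are assumptions for illustration, not the project's actual content model.

```javascript
// Illustrative build-time grouping: plants are bucketed by a taxonomy field
// so each comparison page can be statically rendered and works with
// JavaScript disabled. `plants` and the `light` field are assumed shapes.
function groupByTaxonomy(plants, field) {
  const groups = new Map();
  for (const plant of plants) {
    for (const term of plant[field] ?? []) {
      if (!groups.has(term)) groups.set(term, []);
      groups.get(term).push(plant);
    }
  }
  return groups;
}
```

In an Astro build, a helper like this could feed the static-path generation step, emitting one page per taxonomy term with no client-side filtering left to do.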

Why are failed experiments shown so prominently?

Failed interventions are often the clearest way to explain what variable actually mattered.

This is how a content-first Astro and CloudCannon build should feel when the information architecture is serious

Browse the collections, inspect the methodology, or open the component docs and treat the project like a starter for a real client build.