Drupal · 9 min · Series: Drupal AI

From Lovable to Drupal: how we made the DriesNote demo real

After the DriesNote, one of the most common questions we got was some version of this: "how did you actually do the Lovable-to-Drupal migration, and how could we do it ourselves?"

There is a very real scenario behind that question. Your CEO starts vibe-coding, comes back with a beautiful site in Lovable, and says, "Great, now make this real." That is both the opportunity and the hard part.

What we built was not a magic import button. It was a repeatable outside-in rebuild workflow: use Lovable to get to the visual answer quickly, then rebuild that answer in Drupal CMS so it becomes structured, governed, editable, reusable, and fit for a real team to run.

It is also important to say this clearly: Lovable deserves real credit in this story. It was excellent at helping us explore visual direction quickly. It gave us a premium-looking prototype fast, helped us iterate toward something bold and stage-readable, and did not force us into a closed end state. In the planning and demo materials, we explicitly called out that Lovable could do more than static pages, and that one of its strengths was openness: build quickly, export, and take it wherever you want next. That made it a very good front-end exploration tool for this workflow, not a strawman we were trying to beat up on.

The AX (Agent Experience) framing is the clearest way I know to explain why this worked. In plain English, we had to improve three things around the agent: its action surface, its context surface, and its governance surface.

  • The action surface is the boring, reliable way it can act: Drush, config import, scripts, APIs, provisioning.

  • The context surface is the project knowledge it can load: AGENTS.md, mapping docs, examples, skills, and runbooks.

  • The governance surface is what makes the work safe and reviewable: permissions, diffs, parity evidence, approvals, logs, and human signoff.

In practice, that meant better ways for the agent to act, better project context to load, and better rails for review. The breakthrough was not that we discovered a magic sentence to paste into a chatbot. The breakthrough was that we gave the model a much better environment to work in.

What we actually did

This did not start as one perfect event-site demo. The work began earlier with the FinDrop proof of concept. That first pass proved the core motion: a site created in Lovable could be exposed to an agent, mapped into Drupal CMS, and rebuilt with reasonably strong visual similarity.

It also exposed the failure modes immediately. Too much content stayed hardcoded, the content model was weaker than it needed to be, and "pretty close" parity was still too loose to count as done. That early work mattered because it gave us the mistakes that later got turned into process.

From there, we deliberately shifted to the brighter Vision 25 / Vision 2051 event-site concept people saw on stage. That was not just a cosmetic re-theme. It made the site easier to read on a giant projected screen, stronger as a keynote story, and much better at showing the editorial payoff of Drupal. The timeline edit landed because the site was no longer just a generated artifact. It had become a real CMS-backed system.

By the final Vision build, the implementation had become much more Drupal-shaped. Repeatable content lived in canonical bundles like page, session, lab, and milestone, with taxonomy for things like tracks and rooms. One-off routes stayed as canvas_page where that made sense. Listings like agenda, labs, timeline, and tracks were Views-backed. Seed and provision scripts rebuilt the structure. Parity, motion, accessibility, and editorial checks were captured as evidence instead of being left to taste. That is the difference between "a convincing clone" and "a system a team can actually operate."
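To make "Drupal-shaped" concrete, a bundle like session ultimately lives as exported configuration in the repo, not as clicks in the admin UI. A hypothetical export might look like this (illustrative only; the real build's config will differ):

```yaml
# node.type.session.yml (illustrative sketch, not the actual exported config)
langcode: en
status: true
dependencies: {}
name: Session
type: session
description: 'A talk in the event agenda, grouped by track and room via taxonomy.'
new_revision: true
preview_mode: 1
display_submitted: false
```

Because the bundle is config, the agent can recreate it with a config import, and a reviewer can see exactly what changed in a diff.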

One reason the keynote version looked so simple is that the prompt story inverted over time. On the Lovable side, the prompt was large and context-heavy. I have now published the full Lovable prompt. On the Drupal side, the prompt got shorter. That was intentional. More and more of the real judgment moved into the repo: AGENTS.md, mapping rules, parity docs, runbooks, and review checklists. That inversion is the point. The important part stopped living in the chat box and started living in the substrate.

Here is the actual Drupal-side prompt we used:

That prompt is strikingly short, and that is exactly why it worked. The repo was already carrying the rest of the instructions.

How to recreate the workflow

The simplest way to recreate this is to think in two phases: prototype fast and rebuild properly.

1. Start outside Drupal

Start in Lovable, Replit, Figma-to-code, or another prototype generator to get the hierarchy, page types, motion, tone, and visual direction right. In my case, that did not mean one perfect first try. I made multiple versions, rejected some for being too dark, kept iterating for stage readability, and only moved on once the experience looked and felt right. That is the right mental model: treat this stage more like working with a fast design agency than like writing production architecture.

(Pro tip: use a tool like ChatGPT or Gemini to help write the prompt you'll use in Lovable, e.g. "I want to build a cinematic, high-impact site...")

2. Give the rebuild side both the live site and the source

Once the prototype is good enough, give the Drupal-side agent both the live site and the source, if you have it. That matters more than people think. The live site helps with parity. The source helps the agent understand styles, assets, and structure. The route list and screenshots keep it from declaring victory when it is only “close enough.”

3. Start from a clean Drupal workspace and record the build path immediately

The durable version was reproduced through DDEV, Composer, Drush, config import, provisioning scripts, and Canvas entity APIs, not through a pile of undocumented admin clicks. That is an important nuance. Canvas CLI was a real part of the broader story, especially for code-component and remote Canvas workflows, but the keynote-grade local Drupal build was primarily a normal, reproducible Drupal build.
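As a sketch of what that reproducible path looks like (the exact commands, project template, and script names are assumptions, not the project's actual scripts; this requires DDEV and Composer locally):

```shell
# Illustrative rebuild path: everything the agent does should be replayable.
ddev config --project-type=drupal --docroot=web
ddev start
ddev composer create drupal/cms              # Drupal CMS starter
ddev drush site:install --yes
ddev drush config:import --yes               # config is the source of truth
ddev drush php:script scripts/provision.php  # hypothetical seed/provision script
```

The point is not these specific commands; it is that every step the agent takes can be rerun from scratch and reviewed.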

4. Write AGENTS.md before the build drifts

At minimum, encode:

  • stable interfaces only

  • config and code as the source of truth

  • a mapping-spec requirement

  • rules for when something should be an entity versus a canvas_page

  • a parity definition and editor-readiness checks

  • a clear definition of done

Without that, the agent will optimize for apparent progress instead of durable Drupal quality.
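A minimal illustrative skeleton for such a file might look like this (this is a sketch of the idea, not the published AGENTS.md):

```markdown
# AGENTS.md (illustrative skeleton)

## Source of truth
- Config and code in the repo win; never treat admin-UI state as canonical.

## Interfaces
- Use stable interfaces only: Drush, config import, provisioning scripts.

## Modeling rules
- Repeatable content -> entities and fields; one-off routes -> canvas_page.
- A mapping spec is required before any new bundle is created.

## Definition of done
- Parity evidence captured for the agreed routes and viewports.
- Editor-readiness checks pass; a human has reviewed the diff.
```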

This is where "the best prompt is the one you never have to retype" becomes practical instead of philosophical.

Note: The full final AGENTS.md file we landed on after many iterations is available here.

5. Decide the content model early

This is where Drupal starts paying for itself. If editors may someday list it, filter it, search it, relate it, or reuse it, it probably should not live as page-only component state. Use entities and fields for repeatable content. Use taxonomy for grouping and filtering. Use Views for content families. Use Canvas for composition and presentation, not as an excuse to skip information architecture.
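For example, grouping sessions by track should be provisioned programmatically rather than clicked together, so the structure survives a rebuild. A hypothetical seed step (vocabulary and term names are illustrative; assumes a DDEV-based site with Drush):

```shell
# Hypothetical provisioning step: seed the "tracks" vocabulary so
# listings and filters have real taxonomy behind them.
ddev drush php:eval '
  foreach (["Keynotes", "AI & Agents", "Community"] as $name) {
    \Drupal\taxonomy\Entity\Term::create([
      "vid" => "tracks",
      "name" => $name,
    ])->save();
  }
'
```

Once tracks are real terms, the agenda listing can be a View filtered by taxonomy instead of a hardcoded grid.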

6. Use Canvas reproducibly

For a local Drupal CMS build, that means config-managed content templates and programmatic page composition, with provisioning scripts creating or updating component trees. For remote code-component environments, Canvas CLI is useful for validate/build/upload flows. The key principle is simple: do not let "whatever somebody clicked together in preview" become your source of truth.

7. Treat parity like testing

Define exact routes, exact viewports, exact flows, accepted differences, and where evidence lives. Then automate the boring parts. Also separate the checks: static parity, motion parity, and editorial verification are not the same thing. The later repos got much better once those became separate gates instead of one fuzzy visual review.
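The simplest parity gate to automate is route coverage: every route the prototype serves must exist in the rebuild before anyone looks at pixels. A self-contained sketch (file names and routes are invented for the example; in the real workflow the lists would come from the prototype export and from Drush on the rebuilt site):

```shell
#!/usr/bin/env bash
# Hypothetical route-parity gate: fail if any prototype route is
# missing from the Drupal rebuild.
set -euo pipefail
workdir=$(mktemp -d)

# Sample inputs; real lists would be generated, not hand-written.
printf '/\n/agenda\n/labs\n/timeline\n' > "$workdir/routes-prototype.txt"
printf '/\n/agenda\n/labs\n/timeline\n/tracks\n' > "$workdir/routes-drupal.txt"

sort -u "$workdir/routes-prototype.txt" > "$workdir/proto.sorted"
sort -u "$workdir/routes-drupal.txt" > "$workdir/drupal.sorted"

# Routes present in the prototype but missing from the rebuild.
missing=$(comm -23 "$workdir/proto.sorted" "$workdir/drupal.sorted")

if [ -n "$missing" ]; then
  echo "FAIL: routes missing from rebuild:"
  echo "$missing"
else
  echo "PASS: every prototype route exists in the Drupal rebuild"
fi
```

Motion parity and editorial verification need their own gates; the value of separating them is that each failure points at a different kind of work.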

8. Plan the cleanup pass on purpose

The first working build is not the final architecture. The repeatable pattern was: get it working, tighten parity, remove prototype shortcuts, harden provisioning, move data ownership into Drupal, and only then clean the repo for handoff. That cleanup is not evidence the method failed. It is part of the method.

What will bite you

The import myth

If people think this was a one-click conversion, they will try to reproduce the wrong thing. The repo trail shows repeated rebuilds, parity loops, architecture corrections, and cleanup, not a magic importer.

The hardcoded content trap

A site can look surprisingly complete while key copy, cards, section text, and listing logic are still hardcoded in theme code or prototype-shaped data. That is why the later workflow got stricter about fields, Views, and Drupal as the source of truth.

Using Canvas as a crutch

Canvas is a strength here, but only when it sits on top of a real content model. If it becomes the reason you postpone modeling repeatable content, you are just building a more polished prototype.

Static parity is the easy part

Forms, navigation, responsive behavior, motion, and accessibility are where drift shows up. The later process improved because parity stopped meaning "the homepage looks close" and started meaning "the experience behaves correctly, and we can prove it."

Timing gets oversold fast

Public demos necessarily compress reality into a clean narrative, like 15 minutes in a prototype generator followed by a couple of hours of rebuilding. But if people expect a one-shot, linear process, they will be disappointed. In practice, we did not just fire off one magic prompt. We ran many iterative loops.

The honest timeline is this: generating the visual prototype was incredibly fast, and getting the first credible Drupal rebuild on screen was measured in hours. But moving from "visually close" to an architecturally durable, team-ready site took deliberate repetition.

That is exactly where human expertise became the critical ingredient. The AI provided the raw velocity, but an experienced Drupal developer provided the architectural governance. It took a human to review the outputs, steer the loops, and enforce the Drupal way. A human looks at a generated grid and says, "No, don’t hardcode this layout. That needs to be a View driven by a taxonomy," or, "This data needs to live in structured entities so editors can reuse it." The AI executes the build, but the human enforces the information architecture, accessibility, and editorial experience. Acknowledging that reality makes the method more believable, and much more valuable for real teams.

No human review means no trust

This is not a story about AI replacing engineering judgment. It is a story about AI accelerating both the prototype and the rebuild, while human review makes the output trustworthy.

One other detail worth keeping in the story: we later had the result audited by Niels Aers from DropSolid and Gábor Hojtsy from Acquia. The takeaway was not that the build was perfect. It was that it was already in range of what many teams would assemble on their own, even if some rationalization and cleanup was still warranted. In short, the AI produced a usable first pass, not a flawless architecture.

What this proved

The strongest lesson here is not "AI can clone websites fast." The stronger lesson is that different tools are now very good at different parts of the job.

Lovable is very good at helping you explore and converge on a visual answer quickly. Drupal is very good at turning that answer into a system a team can actually run: structured content, workflows, permissions, diffs, reusable components, reviewability, and durable editorial ownership. That is why these tools fit together so well. Or in the shortest version: AI helps you get to visual ambition quickly; Drupal helps a team own the result.

What comes next

The takeaway I want people to leave with is: prototype with whatever tool gets you to the right experience fastest. Then rebuild the result in Drupal so it becomes governed, reviewable, and durable. The tool on the left will keep changing. The value of the system on the right is much more stable.

Now, we need to see agencies and customers do this for real. Build with any agent, but let Drupal be the governor. Reach out if you want help; I'm happy to discuss.
