
How I Prepare Before Building Projects in Lovable or Vibe-Coding

  • Writer: Harshal
  • 3 hours ago
  • 6 min read

The 3 inputs I organize before I start prompting

People have asked how I build projects in Lovable and other vibe-coding tools, so I share my approach here.

Vibe-coding is the build stage in my workflow. More thinking before I open a coding agent helps me reach a minimum lovable product (MLP) faster.

I prepare the work in a structured way before I write the first build prompt. The next section explains the three inputs I use.

I mention Lovable because it is the most approachable option for non-technical users. I use the same approach with other AI coding agents, including Cursor and Claude Code. I shared more from my experiments with multiple AI agents here.

Passing context, schema, and visual examples to AI agents to start a project

The 3 Inputs I Prepare Before AI Coding Agents

I brainstorm before using Lovable or other AI coding agents.

I use ChatGPT and Perplexity as question-and-answer partners before I build. They help me clarify what I am asking for, identify which constraints matter, and surface vague details. Then I carry a structured version of that thinking into AI coding agents.

I organize the initial prompt into three distinct parts, not one giant message:

  1. Context

  2. Schema

  3. Visual examples

This preparation improves output reliability and reduces correction loops after the first draft.
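As a preview, here is a rough sketch of how those three parts might be laid out in the first message. The section labels are mine, not Lovable syntax:

```
CONTEXT
What I am building, who it is for, the main user flow,
the constraints that matter, and the quality bar.

SCHEMA
The entities the app tracks, the fields each one stores,
how they link, and what happens on update or delete.

VISUAL EXAMPLES
Attached screenshots or mockups, plus short notes on
layout, density, and the UI library to use.
```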

Why I brainstorm before I use AI coding agents

I brainstorm first to make sure I am solving the right problem. I check whether an app already meets my need, or at least 80 percent of it. If that app exists, I start from it. If it is open source, I can modify it. If not, I can still take inspiration from it or use it as is and skip building. Building is much easier now, but shipping a high-quality app still takes real time.

Lazar Jovanovic made the same point on Lenny's Podcast about professional vibe coding: strong context and structured prompts improve build quality from the first draft. Lovable's prompting docs make the same case.

1) Context

I prepare Context as my first input. Context is the basic project brief behind the build.

  • what I am building

  • who it is for

  • what the user is trying to do

  • what the main flow looks like

  • what constraints matter

  • what outcome or quality bar I care about

I refine this context with ChatGPT and Perplexity. I tell the AI to ask me questions and rewrite unclear parts, and I use its responses to find gaps in my thinking. Sometimes I start with a rough voice-dictated paragraph, then turn that paragraph into a usable brief through a few rounds of Q&A.

For example, my initial brief was:

build a dashboard for API usage.

That was too vague. After some Q&A, I refined it to:

build a dashboard for a solo operator who uses multiple AI providers; compare usage and cost by provider in one view; add trend charts with a selectable time range (daily and monthly rollups); make the main screen readable on mobile; support light and dark mode; include CSV export, sign-in so usage stays private, and a simple way to inspect outlier spikes.

That second version gives AI coding agents a concrete project brief. The brief defines the user, the job to be done, the main views, and the quality bar.

I do not need perfect context before I start. I need enough context so the first build solves the right problem.

2) Database Schema

Database schema is the second input. It gives the build structure before code generation starts.

I define what data the app stores and how that data connects across entities. That definition gives the AI coding agent a clear data model to work from instead of one it has to invent.

I usually write this in plain language first:

  • what things the app needs to track (for example: users, projects, tasks, events, usage)

  • what details each thing should store

  • how those things are linked

  • what cannot be duplicated

  • what should happen when something is changed or removed

When I skip schema planning, the first version often has a weak data model. The UI may look fine, but the entity relationships break under real usage, and late fixes cost more.

I do not build schema drafts from scratch every time. I brainstorm options with ChatGPT or Gemini, then review the final structure before I start the build. The goal is to reduce ambiguity so Lovable generates a cleaner data model and backend logic on the first pass.
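To make that concrete, here is a minimal sketch of the plain-language schema for the API usage dashboard example, written as TypeScript entity types. The names and fields are my illustrative assumptions, not a fixed schema:

```typescript
// Illustrative entity types for the API usage dashboard example.
// All names (User, Provider, UsageEvent) are assumptions for this sketch.

interface User {
  id: string;         // unique; one account per sign-in
  email: string;      // cannot be duplicated across users
}

interface Provider {
  id: string;
  userId: string;     // links a provider account to its owner
  name: string;       // e.g. "openai", "anthropic"
}

interface UsageEvent {
  id: string;
  providerId: string; // links usage back to a provider
  tokens: number;     // raw usage for the event
  costUsd: number;    // feeds the cost comparison view
  occurredAt: string; // ISO timestamp; drives daily and monthly rollups
}

// Rule worth stating explicitly in the prompt: deleting a Provider
// should also delete its UsageEvents, so orphaned rows never skew charts.
```

Even this rough level of structure answers the questions in the list above: what is tracked, what is stored, how things link, what cannot be duplicated, and what happens on delete.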

3) Visual Examples

Visual examples are the third input. They reduce UI and flow guesswork before generation starts.

This includes screenshots, references, before-and-after examples, rough mockups, and example apps. I collect these before I start when the UI or flow matters. I often use Mobbin to find UI patterns from real products for inspiration.

Perplexity also helps me find comparable products. I do that research to reduce build ambiguity.

I also decide the UI component library early. I usually use shadcn/ui because AI coding agents use it often and produce cleaner first drafts with it. If I want options, I ask ChatGPT for a short list of UI libraries. I review the list quickly, then state the selected library in my initial prompt.
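As a sketch of why this helps, here is the kind of first-draft component an agent tends to produce once shadcn/ui is named in the prompt. This assumes a Lovable-style React + TypeScript project with shadcn/ui installed; the component name and props are hypothetical:

```tsx
// Hypothetical stat card for the usage dashboard, built from
// shadcn/ui Card primitives (standard generated import path).
import { Card, CardContent, CardHeader, CardTitle } from "@/components/ui/card";

export function ProviderCostCard(props: { provider: string; costUsd: number }) {
  return (
    <Card>
      <CardHeader>
        <CardTitle className="text-sm font-medium">{props.provider}</CardTitle>
      </CardHeader>
      <CardContent>
        {/* Tailwind utility classes, the default styling in shadcn/ui projects */}
        <p className="text-2xl font-bold">${props.costUsd.toFixed(2)}</p>
      </CardContent>
    </Card>
  );
}
```

Because the agent composes from known primitives instead of inventing styles, first drafts come back more consistent.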

If I skip this at the start, I sometimes handle it in a later reskin pass.

The visual examples help answer practical questions such as:

  • what layout should this screen resemble

  • what level of density feels right

  • what interaction pattern is already familiar to users

  • what should change from an existing version

Sometimes the best visual input is a screenshot of another product. Sometimes it is a before-and-after example from my own project. Sometimes it is a rough sketch with boxes and labels. The point is to give the AI coding agent a clearer target.

Without visual references, the AI has to guess too much about layout and interaction style. The output may be acceptable. It is rarely optimal on the first try.

Clear visual input improves first-pass layout decisions and reduces revision loops.

How I Organize Prompts for Lovable

The earlier sections explained what I prepare. This section explains how I execute prompts in Lovable, step by step. At the same time, keep in mind that you don't need a rigid approach to vibe coding; it's just that converting a person's approach into a template makes it look rigid.

I run prompts in a fixed order:

  1. Base brief once at project start

  2. Initial generation prompt for the first app version

  3. One prompt per major feature, screen, or revision

I break the prompt system into a few layers:

  • one master brief for the project

  • one feature ask for the current change

  • persistent constraints only when they affect the current task

  • reusable templates for recurring work

Hierarchy matters more than prompt length. A long prompt is not automatically a good prompt. Group information so the tool can apply it cleanly.
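A sketch of how a single feature prompt might look under this layering, with illustrative content from the dashboard example (the labels are mine):

```
FEATURE ASK
Add a CSV export button to the usage table.
Export the rows currently shown, respecting the selected time range.

CONSTRAINTS THAT APPLY HERE
- keep the table readable on mobile
- reuse the existing shadcn/ui button styles
```

The master brief was set once at project start, so it is not repeated here.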

I avoid pasting full project history into every prompt. Each prompt carries only the context required for that step.

I attach visual references only when a task depends on layout, interaction, or style choices.

This prompt workflow keeps sessions clean and makes changes easier to track and debug.

Other files I maintain in each project

I covered the initial prompts I use to start projects. I also keep a small set of project files current so humans and agents can work with less ambiguity. I use AI to draft and maintain these files:

  • README.md - A human-readable project guide for the operator and for GitHub viewers. It explains what the app does, how to run it, how to test it, and how to deploy it. It also covers the project purpose and the user benefit, and sometimes includes a high-level architecture diagram for GitHub readers.

  • AGENTS.md - The working contract for coding agents. It captures project-specific constraints, the test policy, and the documentation rules agents must follow before making changes. A minimal skeleton appears after this list.

  • docs/backlog.md - The decision list for upcoming work. It keeps open tasks prioritized by phase, impact, and effort.

  • docs/diaries/ - A persistent memory for the agents. Each diary captures reasoning, debugging trail, decisions, and verification notes while work is in progress.

  • docs/CHANGELOG.md - The build log for completed work. It records shipped changes, implementation notes, and verification evidence.

  • public/user-changelog.md - The user-facing "what's new" log. It translates internal changes into plain language users can scan quickly.
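As noted above, here is a minimal AGENTS.md skeleton. The specific rules are illustrative, not a fixed contract:

```
# AGENTS.md

## Constraints
- Do not change the database schema without an explicit request.
- Build UI from shadcn/ui components only.

## Test policy
- Run the existing tests before and after every change.

## Documentation
- Append a diary entry to docs/diaries/ while working.
- Record shipped changes in docs/CHANGELOG.md when done.
```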

See examples on my GitHub.
