Describe an app in natural language and Micracode streams code into an in-browser workspace.
Iterate by chat or edit the code directly in a Monaco editor — everything runs on your laptop.
Star the repo to get GitHub release notifications as soon as they ship!
- Natural-Language Codegen — Describe an app in plain English; Micracode streams a working project into the workspace file by file.
- Iterative Chat — Refine your project through conversation. Ask for changes, fixes, or new features and watch them stream in.
- In-Browser Monaco Editor — Edit generated code directly in a full Monaco editor; changes persist to disk.
- Pluggable LLM Providers — Ships with Google Gemini by default; switch to OpenAI with one env var. Configurable model IDs.
- Local-First Storage — Projects live as plain folders on your filesystem. No database, no auth, no cloud service required.
- Streaming Backend — Server-sent events deliver generated code in real time using a typed stream-event contract shared between web and API.
- Snapshots & Prompt History — Every project keeps its prompt history and snapshots so you can review or roll back.
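The stream-event contract lives in `packages/shared`; its exact definitions are not reproduced here, but a minimal sketch of what such a typed contract can look like as a discriminated union follows (the event names and fields below are illustrative assumptions, not the actual shared types):

```typescript
// Illustrative sketch of a stream-event contract. NOT the actual
// packages/shared definitions; event names and fields are assumptions.
type StreamEvent =
  | { type: "file-start"; path: string }                    // a new file begins streaming
  | { type: "file-chunk"; path: string; content: string }   // a piece of file content
  | { type: "file-end"; path: string }                      // the file is complete
  | { type: "done" };                                       // generation finished

// Narrowing on the `type` tag lets web and API handle events exhaustively:
// the switch below compiles only if every variant is covered.
function describeEvent(event: StreamEvent): string {
  switch (event.type) {
    case "file-start":
      return `starting ${event.path}`;
    case "file-chunk":
      return `writing ${event.content.length} chars to ${event.path}`;
    case "file-end":
      return `finished ${event.path}`;
    case "done":
      return "generation complete";
  }
}
```

Sharing a union like this between frontend and backend is what lets the web app consume the SSE stream without guessing at payload shapes.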
- FastAPI — High-performance Python web framework
- LangChain + Google Gemini / OpenAI — Pluggable LLM orchestration (gemini-2.5-flash by default)
- SSE-Starlette — Server-sent events for streaming code generation
- uv — Modern Python package manager
- Pytest — Storage and HTTP test suite
- Next.js 15 — React framework with App Router
- React 19 — Latest React with concurrent features
- Tailwind CSS — Utility-first CSS framework
- Radix UI + shadcn/ui — Accessible component primitives
- Monaco Editor — VS Code's editor in the browser
- WebContainer API — Run Node.js apps directly in the browser
- Zustand — Lightweight state management
- ai-sdk — Vercel AI SDK for chat streaming
- Bun — JS workspace manager and runtime
- TypeScript — End-to-end type safety, with shared types in `packages/shared`
- Node.js v22.18.0 (pinned via `.nvmrc`)
- Bun ≥ 1.1.0
- Python ≥ 3.12 (managed automatically by `uv`)
- uv ≥ 0.4
- A Google Gemini or OpenAI API key
Copy the example env file into the API app and add your key:
```
cp .env.example apps/api/.env
$EDITOR apps/api/.env
```

Minimum config (Gemini, the default provider):

```
LLM_PROVIDER=gemini
GOOGLE_API_KEY=your_gemini_api_key
```

Or use OpenAI:

```
LLM_PROVIDER=openai
OPENAI_API_KEY=your_openai_api_key
OPENAI_MODEL=gpt-4o
```

See docs/configuration.md for the full reference and supported model IDs.
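Provider selection amounts to reading `LLM_PROVIDER` and falling back to defaults. The sketch below only mirrors the documented behavior (the real logic lives in the API's `config.py`; the `GEMINI_MODEL` variable name here is an assumption, and docs/configuration.md is the source of truth):

```typescript
// Illustrative sketch of provider/model resolution from env vars.
// Mirrors the documented defaults: gemini + gemini-2.5-flash unless
// LLM_PROVIDER=openai is set. GEMINI_MODEL is a hypothetical name.
interface LLMConfig {
  provider: "gemini" | "openai";
  model: string;
}

function resolveLLMConfig(env: Record<string, string | undefined>): LLMConfig {
  // Anything other than an explicit "openai" falls back to the default provider.
  const provider = env.LLM_PROVIDER === "openai" ? "openai" : "gemini";
  const model =
    provider === "openai"
      ? env.OPENAI_MODEL ?? "gpt-4o"
      : env.GEMINI_MODEL ?? "gemini-2.5-flash";
  return { provider, model };
}
```

So with an empty env you get `gemini` / `gemini-2.5-flash`, and `LLM_PROVIDER=openai` alone gives `openai` / `gpt-4o`.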
```
nvm use              # picks up .nvmrc -> Node 22.18.0
bun install          # JS workspaces (web + shared)
bun run api:install  # Python deps for the API (creates a uv-managed venv)
```

Start both apps in parallel:

```
bun run dev
```

- Web: http://localhost:3000
- API: http://127.0.0.1:8000

Or run them individually:

```
bun run dev:web  # Next.js only
bun run dev:api  # FastAPI only (uvicorn --reload)
```

Open http://localhost:3000, type a project description into the prompt box, and you're off. Full walkthrough in Getting Started.
```
micracode/
├── apps/
│   ├── web/                    # Next.js 15 frontend
│   │   ├── src/
│   │   │   ├── app/            # App Router pages
│   │   │   ├── components/     # React components (incl. shadcn/ui)
│   │   │   ├── lib/            # Utilities and clients
│   │   │   └── store/          # Zustand stores
│   │   └── package.json
│   │
│   └── api/                    # FastAPI backend
│       ├── src/micracode_api/
│       │   ├── agents/         # LLM orchestrator, prompts, model catalog
│       │   ├── routers/        # health, models, projects, generate
│       │   ├── schemas/        # Pydantic request/response models
│       │   ├── starter/        # Starter project templates
│       │   ├── config.py       # Settings (env vars)
│       │   ├── storage.py      # Local filesystem project storage
│       │   └── main.py         # FastAPI app entry point
│       ├── tests/
│       └── pyproject.toml
│
├── packages/
│   └── shared/                 # Shared TypeScript types (stream event contract)
│
├── docs/                       # End-user documentation
└── README.md
```
All endpoints are mounted under /v1.
| Method | Endpoint | Description |
|---|---|---|
| GET | /v1/health | Service health check |
| GET | /v1/models | List available LLM models |
| POST | /v1/generate | Stream code generation events (SSE) |
| GET | /v1/projects | List all projects |
| POST | /v1/projects | Create a new project |
| GET | /v1/projects/{id} | Get a project by id |
| DELETE | /v1/projects/{id} | Delete a project |
| GET | /v1/projects/{id}/files | List/read project files |
| PUT | /v1/projects/{id}/files | Write project files |
| GET | /v1/projects/{id}/download | Download project as archive |
| GET | /v1/projects/{id}/prompts | Get prompt history |
| POST | /v1/projects/{id}/prompts/pop-assistant | Pop last assistant message |
| GET | /v1/projects/{id}/snapshots | List project snapshots |
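POST /v1/generate responds with a `text/event-stream` body, which any client can split into `data:` payloads; since EventSource only supports GET, a POST-based stream is typically consumed with `fetch` and the text decoded incrementally. A minimal, dependency-free parser sketch (the JSON payload shape is an assumption; the real contract is the shared types in `packages/shared`):

```typescript
// Parse a raw SSE text buffer into JSON payloads, one per event.
// Assumes each event is a single `data: {...}` line and events are
// separated by blank lines, per the text/event-stream format.
function parseSSE(buffer: string): unknown[] {
  return buffer
    .split("\n\n")                             // events end with a blank line
    .flatMap((block) => block.split("\n"))     // then look at individual lines
    .filter((line) => line.startsWith("data:"))
    .map((line) => JSON.parse(line.slice(5).trim()));
}
```

In a real client this would be fed from a `fetch` ReadableStream, buffering partial chunks until a blank-line event boundary arrives.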
End-user docs live in docs/:
- Getting Started — install prerequisites, configure an API key, and run the app.
- Configuration — environment variables, switching between OpenAI and Gemini, and supported model IDs.
- Using the Workspace — the home page, chat, editor, and preview panels.
- Projects on Disk — where your generated apps live and how to work with them outside the app.
- Troubleshooting — common errors and how to fix them.
- FAQ — short answers to common questions.
```
bun run dev         # web + api in parallel
bun run dev:web     # Next.js only
bun run dev:api     # FastAPI only (uvicorn --reload, 127.0.0.1:8000)
bun run typecheck   # TS across all workspaces
bun run lint        # eslint across workspaces
bun run format      # prettier
bun run test:api    # pytest (storage + HTTP tests)
bun run api:lint    # ruff check
bun run api:format  # ruff format
```

This project is licensed under the MIT License.
Contributions are welcome! Feel free to open issues and pull requests.
Join our community Discord