- Install dependencies:

  ```bash
  npm install
  ```

- Start the dev server (auto-reloads on changes):

  ```bash
  npm run dev
  ```

  The dev server runs on http://localhost:3000 (configurable in vite.config.ts).
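If you need to check or change these defaults, the relevant fields of vite.config.ts look roughly like this (a minimal sketch, assuming a standard Vite setup; plugins and other project-specific fields are omitted):

```ts
// Minimal sketch of vite.config.ts; the real file has more fields.
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    port: 3000, // dev server port; change if 3000 is taken
  },
  build: {
    outDir: 'dist', // where `npm run build` writes the static files
  },
});
```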
Production deployment (Vercel): https://unmutte.vercel.app
Use this link to quickly verify UI changes, AI chat responses, and performance in a production-like environment. Note that environment variables for the AI provider (Gemini/NVIDIA) must be set in the Vercel project for the chat to return real model output.
If you want the AI chat to work locally (calls /api/chat), run the Netlify function runtime:

```bash
npm run dev:netlify
```

Notes:

- This uses `npx netlify-cli@17.34.0` under the hood. If you see errors, use Node 18–20 (e.g. `nvm use 20`).
- Create `.env` from `.env.example` and set `GEMINI_API_KEY` for local function dev. Never commit real secrets.
Build the production bundle:

```bash
npm run build
```

The static files are output to the dist/ directory (see `outDir` in vite.config.ts).

Preview the production build:

```bash
npm run preview
```

This serves the built site locally (default http://localhost:4173), which is useful for validating the production bundle.
The AI chat uses a Netlify Function (/.netlify/functions/gemini-chat) that proxies requests to your chosen provider so API keys stay server-side.
Supported providers:
- Google Gemini (default)
- NVIDIA API (Nemotron, etc.) via OpenAI-compatible Chat Completions
Configure via environment variables (either in `.env` for local dev or in Netlify site settings):

- `AI_PROVIDER`: `gemini` (default) or `nvidia`
- If `gemini`: `GEMINI_API_KEY=<your_gemini_key>`
  - Optional: `GEMINI_MODEL=gemini-2.5-flash`
- If `nvidia`: `NVIDIA_API_KEY=<your_nvidia_key>`
  - Optional: `NVIDIA_MODEL=nvidia/nemotron-4-9b-instruct`
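As a rough illustration of how those variables drive the function, a provider-selection helper might look like the sketch below. The endpoint URLs and fallback behavior are assumptions for illustration; the actual gemini-chat function may differ:

```ts
// Hypothetical provider selection; mirrors the env vars documented above.
type ProviderConfig = { baseUrl: string; apiKey: string; model: string };

function resolveProvider(): ProviderConfig {
  const provider = process.env.AI_PROVIDER ?? 'gemini';

  if (provider === 'nvidia') {
    return {
      // NVIDIA's OpenAI-compatible Chat Completions endpoint (assumed).
      baseUrl: 'https://integrate.api.nvidia.com/v1/chat/completions',
      apiKey: process.env.NVIDIA_API_KEY ?? '',
      model: process.env.NVIDIA_MODEL ?? 'nvidia/nemotron-4-9b-instruct',
    };
  }

  return {
    // Google Generative Language API base (assumed).
    baseUrl: 'https://generativelanguage.googleapis.com/v1beta',
    apiKey: process.env.GEMINI_API_KEY ?? '',
    model: process.env.GEMINI_MODEL ?? 'gemini-2.5-flash',
  };
}
```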
The frontend posts to /api/chat, which is redirected to the function in netlify.toml.
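On the client side that amounts to a plain JSON POST; the payload and response fields below are illustrative assumptions, not the repo's exact contract:

```ts
// Hypothetical client call; the real payload/response shape may differ.
async function sendChatMessage(message: string): Promise<string> {
  const res = await fetch('/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message }),
  });
  if (!res.ok) {
    throw new Error(`Chat request failed: ${res.status}`);
  }
  const data = await res.json();
  return data.reply; // assumed response field
}
```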
If you host on Vercel instead of Netlify, this project includes a Vercel Serverless Function at api/chat.js.
Requests to /api/chat will automatically invoke the function on Vercel.
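For orientation, a Vercel Node serverless function has roughly this shape (a stripped-down sketch, not the contents of the actual api/chat.js):

```ts
// Sketch of a Vercel Node function; the real api/chat.js does the proxying.
import type { VercelRequest, VercelResponse } from '@vercel/node';

export default async function handler(req: VercelRequest, res: VercelResponse) {
  if (req.method !== 'POST') {
    res.status(405).json({ error: 'Method not allowed' });
    return;
  }
  // The real function forwards req.body to Gemini or NVIDIA here,
  // keeping the API keys server-side.
  res.status(200).json({ reply: 'stub' });
}
```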
Set these environment variables in your Vercel project (Production and Preview as needed):

- `AI_PROVIDER`: `gemini` or `nvidia`
- `GEMINI_API_KEY` (when using Gemini)
- `GEMINI_MODEL` (optional; the function auto-discovers supported models)
- `NVIDIA_API_KEY` (when using NVIDIA)
- `NVIDIA_MODEL` (optional; the function falls back to widely available models if restricted)
- `VITE_MIXPANEL_TOKEN` (optional): your Mixpanel project token. Prefix it with `VITE_` so Vite exposes it to the client. Keep the token out of the repo; set it in the Vercel/Netlify UI.
Build settings:

- Build command: `npm run build` (detected automatically)
- Output directory: `dist` (the Vite default here)
Troubleshooting on Vercel:

- If you see "AI model not available" in the UI, it often means /api/chat returned 404/400. Ensure api/chat.js is deployed and your environment variables are configured. Check the Vercel Function logs for details.
- Some NVIDIA models may be restricted per account/region. If you get 404/403 from NVIDIA, set `NVIDIA_MODEL=meta/llama-3.1-8b-instruct` or leave it empty to let the function try safe fallbacks.
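When diagnosing those errors, it can help to hit the endpoint directly. Here is a small Node 18+ script using the built-in fetch; the URL and payload are the same assumptions as in the client example above:

```ts
// Smoke test for the deployed chat endpoint (Node 18+ has global fetch).
const BASE_URL = process.env.CHAT_BASE_URL ?? 'https://unmutte.vercel.app';

async function main() {
  const res = await fetch(`${BASE_URL}/api/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: 'ping' }),
  });
  // A 404/400 here points at deployment or env-var issues, per the notes above.
  console.log(res.status, await res.text());
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```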
- Node.js 18+ recommended
- Modern browser
- If port 3000 is taken, change `server.port` in vite.config.ts or run `npm run dev -- --port 4000`.
- After dependency changes, delete node_modules and reinstall: `rm -rf node_modules package-lock.json && npm install`.