# Introduction
Remember when building a full-stack application required expensive cloud credits, costly API keys, and a team of engineers? Those days are officially over. By 2026, developers can build, deploy, and scale a production-ready application using nothing but free tools, including the large language models (LLMs) that power its intelligence.
The landscape has shifted dramatically. Open-source models now challenge their commercial counterparts. Free AI coding assistants have grown from simple autocomplete tools to full coding agents that can architect entire features. And perhaps most importantly, you can run state-of-the-art models locally or through generous free tiers without spending a dime.
In this comprehensive article, we will build a real-world application — an AI meeting notes summarizer. Users will upload voice recordings, and our app will transcribe them, extract key points and action items, and display everything in a clean dashboard, all using completely free tools.
Whether you are a student, a bootcamp graduate, or an experienced developer looking to prototype an idea, this tutorial will show you how to leverage the best free AI tools available. Let's begin by understanding why free LLMs work so well today.
# Understanding Why Free Large Language Models Work Now
Just two years ago, building an AI-powered app meant budgeting for OpenAI API credits or renting expensive GPU instances. The economics have fundamentally shifted.
The gap between commercial and open-source LLMs has nearly disappeared. Models like GLM-4.7-Flash from Zhipu AI demonstrate that open-source models can achieve state-of-the-art performance while being completely free to use. Similarly, LFM2-2.6B-Transcript was specifically designed for meeting summarization and runs entirely on-device with cloud-level quality.
What this means for you is that you are no longer locked into a single vendor. If one model does not work for your use case, you can switch to another without changing your infrastructure.
// Joining the Self-Hosted Movement
There is a growing preference for local AI: running models on your own hardware rather than sending data to the cloud. This isn't just about cost; it is about privacy, latency, and control. With tools like Ollama and LM Studio, you can run powerful models on a laptop.
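For instance, Ollama serves any model you pull behind an OpenAI-compatible API on localhost, so the same client code works for local and cloud models alike. Here is a minimal sketch, assuming you have Ollama running and have pulled a model (the model name below is just an example):

```python
# Minimal sketch: chat with a local Ollama model through its
# OpenAI-compatible endpoint. Assumes `ollama pull llama3.2` was run first.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama server
    api_key="ollama",  # Ollama ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="llama3.2",  # example model name
    messages=[{"role": "user", "content": "Summarize: we agreed to ship on Friday."}],
)
print(response.choices[0].message.content)
```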
// Adopting the “Bring Your Own Key” Model
A new category of tools has emerged: open-source applications that are free but require you to provide your own API keys. This gives you ultimate flexibility. You can use Google’s Gemini API (which offers hundreds of free requests daily) or run entirely local models with zero ongoing costs.
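In practice, BYOK apps read the provider details from configuration rather than hard-coding them. A small sketch of that pattern (the environment variable names here are our own invention):

```python
# BYOK pattern sketch: the provider becomes a config choice, not a code change.
# LLM_BASE_URL / LLM_API_KEY are hypothetical names chosen for this example;
# the defaults fall back to a local Ollama server.
import os
from openai import OpenAI

client = OpenAI(
    base_url=os.environ.get("LLM_BASE_URL", "http://localhost:11434/v1"),
    api_key=os.environ.get("LLM_API_KEY", "ollama"),
)
```

Point LLM_BASE_URL at a cloud provider's OpenAI-compatible endpoint instead, and the rest of your code stays untouched.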
# Choosing Your Free Artificial Intelligence Stack
Let's break down the best free options for each component of our application, selecting tools that balance performance with ease of use.
// Transcription Layer: Speech-to-Text
For converting audio to text, we have excellent free speech-to-text (STT) tools.
| Tool | Type | Free Tier | Best For |
|---|---|---|---|
| OpenAI Whisper | Open-source model | Unlimited (self-hosted) | Accuracy, multiple languages |
| Whisper.cpp | Privacy-focused implementation | Unlimited (open-source) | Privacy-sensitive scenarios |
| Gemini API | Cloud API | 60 requests/minute | Quick prototyping |
For our project, we will use Whisper, which you can run locally or through free hosted options. It supports over 100 languages and produces high-quality transcripts.
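Before wiring Whisper into the backend, you can sanity-check it in isolation. A minimal sketch, assuming `pip install openai-whisper` and ffmpeg on your PATH (the file name is just an example):

```python
# Quick standalone test of Whisper transcription.
import whisper

model = whisper.load_model("base")  # "tiny" is faster; "small"/"medium" are more accurate
result = model.transcribe("meeting.mp3", fp16=False)  # fp16=False avoids a warning on CPU
print(result["text"])
```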
// Summarization and Analysis: The Large Language Model
This is where you have the most choices. All options below are completely free:
| Model | Provider | Type | Specialization |
|---|---|---|---|
| GLM-4.7-Flash | Zhipu AI | Cloud (free API) | General purpose, coding |
| LFM2-2.6B-Transcript | Liquid AI | Local/on-device | Meeting summarization |
| Gemini 1.5 Flash | Google | Cloud (free API) | Long context, free tier |
| GPT-OSS Swallow | Tokyo Tech | Local/self-hosted | Japanese/English reasoning |
For our meeting summarizer, the LFM2-2.6B-Transcript model is particularly interesting; it was literally trained for this exact use case and runs in under 3GB of RAM.
// Accelerating Development: Artificial Intelligence Coding Assistants
Before we write a single line of code, consider the tools that help us build more efficiently within the integrated development environment (IDE):
| Tool | Free Tier | Type | Key Feature |
|---|---|---|---|
| Comate | Fully free | VS Code extension | SPEC-driven, multi-agent |
| Codeium | Unlimited free | IDE extension | 70+ languages, fast inference |
| Cline | Free (BYOK) | VS Code extension | Autonomous file editing |
| Continue | Fully open-source | IDE extension | Works with any LLM |
| bolt.diy | Self-hosted | Browser IDE | Full-stack generation |
Our recommendation: For this project, we will use Codeium for its unlimited free tier and speed, and we will keep Continue as a backup for when we need to switch between different LLM providers.
// Reviewing the Traditional Free Stack
- Frontend: React (free and open-source)
- Backend: FastAPI (Python, free)
- Database: SQLite (file-based, no server needed)
- Deployment: Vercel (generous free tier) + Render (for backend)
# Reviewing the Project Plan
Here is the application workflow:
- User uploads an audio file (meeting recording, voice memo, lecture)
- The backend receives the file and passes it to Whisper for transcription
- The transcribed text is sent to an LLM for summarization
- The LLM extracts key discussion points, action items, and decisions
- Results are stored in SQLite
- The user sees a clean dashboard with transcript, summary, and action items
Flowchart of the application workflow | Image by Author
// Prerequisites
- Python 3.9+ installed
- Node.js and npm installed
- Basic familiarity with Python and React
- A code editor (VS Code recommended)
// Step 1: Setting Up the Backend with FastAPI
First, create our project directory and set up a virtual environment:
```bash
mkdir meeting-summarizer
cd meeting-summarizer
python -m venv venv
```
Activate the virtual environment:
```bash
# On Windows
venv\Scripts\activate

# On Linux/macOS
source venv/bin/activate
```
Install the required packages:
```bash
pip install fastapi uvicorn python-multipart openai-whisper transformers torch openai
```
Now, create the main.py file for our FastAPI application and add this code:
```python
from fastapi import FastAPI, File, UploadFile, HTTPException
from fastapi.middleware.cors import CORSMiddleware
import whisper
import sqlite3
import json
import os
from datetime import datetime

app = FastAPI()

# Enable CORS for the React frontend
app.add_middleware(
    CORSMiddleware,
    allow_origins=["http://localhost:3000"],
    allow_methods=["*"],
    allow_headers=["*"],
)

# Initialize the Whisper model - using "tiny" for faster CPU processing
print("Loading Whisper model (tiny)...")
model = whisper.load_model("tiny")
print("Whisper model loaded!")

# Database setup
def init_db():
    conn = sqlite3.connect('meetings.db')
    c = conn.cursor()
    c.execute('''CREATE TABLE IF NOT EXISTS meetings
                 (id INTEGER PRIMARY KEY AUTOINCREMENT,
                  filename TEXT,
                  transcript TEXT,
                  summary TEXT,
                  action_items TEXT,
                  created_at TIMESTAMP)''')
    conn.commit()
    conn.close()

init_db()

async def summarize_with_llm(transcript: str) -> dict:
    """Placeholder for LLM summarization logic (implemented in Step 2)."""
    # Return an empty result for now so the upload route still works
    return {"summary": "", "action_items": []}

@app.post("/upload")
async def upload_audio(file: UploadFile = File(...)):
    # Save the uploaded audio to a temporary file
    file_path = f"temp_{file.filename}"
    with open(file_path, "wb") as buffer:
        content = await file.read()
        buffer.write(content)
    try:
        # Step 1: Transcribe with Whisper
        result = model.transcribe(file_path, fp16=False)
        transcript = result["text"]
        # Step 2: Summarize (to be filled in Step 2)
        summary_result = await summarize_with_llm(transcript)
        # Step 3: Save to the database
        conn = sqlite3.connect('meetings.db')
        c = conn.cursor()
        c.execute(
            "INSERT INTO meetings (filename, transcript, summary, action_items, created_at) VALUES (?, ?, ?, ?, ?)",
            (file.filename, transcript, summary_result["summary"],
             json.dumps(summary_result["action_items"]), datetime.now())
        )
        conn.commit()
        meeting_id = c.lastrowid
        conn.close()
        os.remove(file_path)
        return {
            "id": meeting_id,
            "transcript": transcript,
            "summary": summary_result["summary"],
            "action_items": summary_result["action_items"],
        }
    except Exception as e:
        if os.path.exists(file_path):
            os.remove(file_path)
        raise HTTPException(status_code=500, detail=str(e))
```
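Before building the frontend, you can exercise the endpoint from a script. A minimal sketch using the requests library, assuming the server is running locally and a test recording exists:

```python
# Post a test audio file to the running FastAPI backend.
import requests

with open("meeting.mp3", "rb") as f:  # example file name
    resp = requests.post("http://localhost:8000/upload", files={"file": f})
print(resp.status_code, resp.json())
```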
// Step 2: Integrating the Free Large Language Model
Now, let’s implement the summarize_with_llm() function. We’ll show two approaches:
Option A: Using GLM-4.7-Flash API (Cloud, Free)
```python
import json
from openai import OpenAI

async def summarize_with_llm(transcript: str) -> dict:
    client = OpenAI(
        api_key="YOUR_FREE_ZHIPU_KEY",
        base_url="https://open.bigmodel.cn/api/paas/v4/"
    )
    response = client.chat.completions.create(
        model="glm-4-flash",
        messages=[
            # Ask for JSON matching the keys our database insert expects
            {"role": "system", "content": "Summarize this meeting transcript. "
                "Respond with JSON containing 'summary' (a string) and "
                "'action_items' (a list of strings)."},
            {"role": "user", "content": transcript}
        ],
        response_format={"type": "json_object"}
    )
    return json.loads(response.choices[0].message.content)
```
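A quick sanity check of the cloud path, with a made-up transcript (requires a valid Zhipu key in the code above):

```python
# Run the coroutine once and inspect the parsed JSON structure.
import asyncio

result = asyncio.run(summarize_with_llm("Alice: we ship Friday. Bob: I'll write the tests."))
print(result["summary"])
print(result["action_items"])
```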
Option B: Using Local LFM2-2.6B-Transcript (Local, Completely Free)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

async def summarize_with_llm_local(transcript: str) -> str:
    model_name = "LiquidAI/LFM2-2.6B-Transcript"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,
        device_map="auto"
    )
    prompt = f"Analyze this transcript and provide a summary and action items:\n\n{transcript}"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=500)
    # Decode only the newly generated tokens, not the echoed prompt
    new_tokens = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```
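Note that this local variant returns plain text, while our /upload route expects a dict with summary and action_items keys. A minimal adapter, using our own (admittedly crude) heuristic to split out action items, might look like this:

```python
# Adapt the local model's free-form text to the dict shape the API expects.
# The line-based heuristic below is an assumption, not part of the model's spec.
async def summarize_with_llm(transcript: str) -> dict:
    text = await summarize_with_llm_local(transcript)
    lines = [l.strip("-* ") for l in text.splitlines() if l.strip()]
    action_items = [l for l in lines if l.lower().startswith(("action", "todo", "task"))]
    summary = " ".join(l for l in lines if l not in action_items)
    return {"summary": summary, "action_items": action_items}
```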
// Step 3: Creating the React Frontend
Build a simple React frontend to interact with our API. In a new terminal, create a React app:
```bash
npx create-react-app frontend
cd frontend
npm install axios
```
Replace the contents of src/App.js with:
```jsx
import React, { useState } from 'react';
import axios from 'axios';
import './App.css';

function App() {
  const [file, setFile] = useState(null);
  const [uploading, setUploading] = useState(false);
  const [result, setResult] = useState(null);
  const [error, setError] = useState('');

  const handleUpload = async () => {
    if (!file) { setError('Please select a file'); return; }
    setError('');
    setUploading(true);
    const formData = new FormData();
    formData.append('file', file);
    try {
      const response = await axios.post('http://localhost:8000/upload', formData);
      setResult(response.data);
    } catch (err) {
      setError('Upload failed: ' + (err.response?.data?.detail || err.message));
    } finally { setUploading(false); }
  };

  return (
    <div className="App">
      <h1>AI Meeting Notes Summarizer</h1>
      <input type="file" accept="audio/*" onChange={(e) => setFile(e.target.files[0])} />
      <button onClick={handleUpload} disabled={uploading}>
        {uploading ? 'Processing...' : 'Upload'}
      </button>
      {error && <p className="error">{error}</p>}
      {result && (
        <div>
          <h2>Summary</h2>
          <p>{result.summary}</p>
          <h2>Action Items</h2>
          <ul>
            {result.action_items.map((it, i) => <li key={i}>{it}</li>)}
          </ul>
        </div>
      )}
    </div>
  );
}

export default App;
```
// Step 4: Running the Application
- Start the backend: in the main directory with your virtual environment active, run `uvicorn main:app --reload`
- Start the frontend: in a new terminal, in the frontend directory, run `npm start`
- Open http://localhost:3000 in your browser and upload a test audio file
Dashboard interface showing summary results | Image by Author
# Deploying the Application for Free
Once your app works locally, it is time to deploy it to the world — still for free. Render offers a generous free tier for web services. Push your code to a GitHub repository, create a new Web Service on Render, and use these settings:
- Environment: Python 3
- Build Command: `pip install -r requirements.txt`
- Start Command: `uvicorn main:app --host 0.0.0.0 --port $PORT`
Create a requirements.txt file:
```
fastapi
uvicorn
python-multipart
openai-whisper
transformers
torch
openai
```
Note: Whisper and Transformers require significant disk space. If you hit free tier limits, consider using a cloud API for transcription instead.
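If you take that route, one option is to send the audio to the Gemini API's free tier instead of running Whisper on the server. A sketch, assuming `pip install google-generativeai` and a free key from Google AI Studio:

```python
# Cloud transcription fallback via the Gemini API.
import google.generativeai as genai

genai.configure(api_key="YOUR_FREE_GEMINI_KEY")
audio_file = genai.upload_file("meeting.mp3")  # example file name
model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content(["Transcribe this audio recording.", audio_file])
print(response.text)
```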
// Deploying the Frontend on Vercel
Vercel is the easiest way to deploy React apps:
- Install the Vercel CLI: `npm i -g vercel`
- In your frontend directory, run `vercel`
- Update the API URL in `App.js` to point to your Render backend
// Exploring Local Deployment Alternatives
If you want to avoid cloud hosting entirely, you can deploy both frontend and backend on a local server using tools like ngrok to expose your local server temporarily.
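For example, the pyngrok wrapper can open a temporary public tunnel to your local FastAPI server. A minimal sketch, assuming `pip install pyngrok` and an ngrok authtoken already configured:

```python
# Expose the local backend on a temporary public URL via ngrok.
from pyngrok import ngrok

tunnel = ngrok.connect(8000)  # port the FastAPI server listens on
print("Public URL:", tunnel.public_url)
```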
# Conclusion
We’ve just built a production-ready AI application using nothing but free tools. Let’s recap what we accomplished:
- Transcription: Used OpenAI’s Whisper (free, open-source)
- Summarization: Leveraged GLM-4.7-Flash or LFM2-2.6B (both completely free)
- Backend: Built with FastAPI (free)
- Frontend: Created with React (free)
- Database: Used SQLite (free)
- Deployment: Deployed on Vercel and Render (free tiers)
- Development: Accelerated with free AI coding assistants like Codeium
The landscape for free AI development has never been more promising. Open-source models now compete with commercial offerings. Local AI tools give us privacy and control. And generous free tiers from providers like Google and Zhipu AI let us prototype without financial risk.
Shittu Olumide is a software engineer and technical writer passionate about leveraging cutting-edge technologies to craft compelling narratives, with a keen eye for detail and a knack for simplifying complex concepts. You can also find Shittu on Twitter.
