GPT4Free (g4f) is a free library that gives you access to powerful AI models (GPT-4o, GPT-5, Claude, Gemini, DeepSeek) by reverse-engineering the public web interfaces of their providers.
⚠️ Note: For educational and testing purposes only. May violate some services’ ToS.
## Install in 2 minutes

### Requirements

- Any computer with an internet connection
- Python 3.10+ (on Windows, check "Add Python to PATH" during install)
### One command

```bash
pip install -U "g4f[all]"
```

(The quotes matter: without them, shells like zsh treat the square brackets as a glob pattern.) Done. The library is ready.
## Run it

### Option 1: Web UI (chat in your browser)

```bash
python -m g4f.cli gui --port 8080
```

Open http://localhost:8080/chat/ in your browser.
### Option 2: Developer mode (local API)

```bash
python -m g4f --port 1337
```

Now any app that speaks the OpenAI API can connect to it.
## Your first script: 5 lines

Create `test.py`:

```python
from g4f.client import Client

client = Client()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Why is potato a state of mind?"}]
)
print(response.choices[0].message.content)
```

Run it:

```bash
python test.py
```
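For long answers you may prefer to stream tokens as they arrive instead of waiting for the full reply. The chunk-joining helper below is a minimal sketch; the `stream=True` flag and the `choices[0].delta.content` chunk shape mirror the OpenAI-style API that g4f's client follows, but treat those exact field names as an assumption and check them against your installed version.

```python
def collect_stream(chunks):
    """Join streamed delta fragments into the full reply, skipping empty ones."""
    return "".join(c for c in chunks if c)

# With the real client you would iterate the live stream (assumed OpenAI-style API):
#   stream = client.chat.completions.create(model="gpt-4o-mini",
#                                           messages=msgs, stream=True)
#   text = collect_stream(chunk.choices[0].delta.content for chunk in stream)

# Simulated chunks standing in for a live stream (None mimics empty keep-alive deltas):
print(collect_stream(["A potato ", None, "is a mood."]))  # → A potato is a mood.
```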
## Generate images

```python
from g4f.client import Client

client = Client()
img = client.images.generate(
    model="flux",
    prompt="Cyberpunk potato in a neon city",
    response_format="url"
)
print(f"Done: {img.data[0].url}")
```
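If you would rather save the image than print a link, the OpenAI-style images API also offers `response_format="b64_json"`; whether g4f supports it for every provider is an assumption, so fall back to the URL form if it fails. Decoding the payload is plain base64:

```python
import base64

def save_b64_image(b64_data: str, path: str) -> None:
    """Decode a base64-encoded image payload and write it to disk."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64_data))

# Demo with a tiny stand-in payload instead of a real API response
# (a live call would supply img.data[0].b64_json here):
fake = base64.b64encode(b"\x89PNG fake bytes").decode()
save_b64_image(fake, "potato.png")
print(open("potato.png", "rb").read()[:4])  # → b'\x89PNG'
```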
## Working models (March 2026)

| Model | Status | Best for |
|---|---|---|
| gpt-4o-mini | ✅ Stable | Fast chat, quick answers |
| gpt-4o | ✅ Stable | Complex tasks, reasoning |
| deepseek-v3 | ✅ Stable | Code, math, logic |
| gemini-2.5-pro | ⚠️ Intermittent | Multimodal tasks |
| llama-3.3-70b | ✅ Stable | Open-source alternative |
| gpt-5 | 🔶 Experimental | May not work |
💡 The list changes often. Get the live list via:

```
GET http://localhost:8080/backend-api/models
```
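If you want that list programmatically, a small parser helps. The response shape assumed below (a `data` array of objects with an `id`, mirroring the OpenAI `/models` format) is a guess; inspect the real payload from your running instance before relying on it.

```python
import json

def model_ids(payload: str) -> list[str]:
    """Extract model ids from a /models-style JSON payload.

    Assumes the OpenAI-like shape {"data": [{"id": ...}, ...]};
    falls back to a plain top-level list of objects if there is no "data" key.
    """
    parsed = json.loads(payload)
    items = parsed.get("data", parsed) if isinstance(parsed, dict) else parsed
    return [item["id"] for item in items if "id" in item]

# Sample payload standing in for the live endpoint:
sample = '{"data": [{"id": "gpt-4o-mini"}, {"id": "deepseek-v3"}]}'
print(model_ids(sample))  # → ['gpt-4o-mini', 'deepseek-v3']
```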
## Connect to any OpenAI-compatible app

After running `python -m g4f --port 1337`:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1337/v1",
    api_key="doesnt-matter"  # any non-empty value works
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Tell me a potato joke"}]
)
print(response.choices[0].message.content)
```

Works with LibreChat, Flowise, AnythingLLM, and more.
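Many apps built on the official OpenAI SDKs can be repointed without touching their code, because the SDKs read the base URL and key from the environment. A minimal setup (these variable names are the ones the official Python SDK reads; other apps may use their own):

```shell
export OPENAI_BASE_URL="http://localhost:1337/v1"
export OPENAI_API_KEY="doesnt-matter"
```

Set these before launching the app, and its OpenAI calls go to your local g4f server instead.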
## Troubleshooting

```bash
# Update the library
pip install -U g4f

# Install errors on Windows: upgrade the build tooling first
pip install --upgrade pip setuptools wheel
```
Model not responding?

- Try a different model
- Enable a VPN (some providers are region-blocked)
- Wait 10-30 seconds (some providers are slow)
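The "try a different model" advice can be automated. This sketch takes any ask function (for example, a thin wrapper around `client.chat.completions.create`; the wrapper itself is hypothetical) and walks a fallback list; the model names come from the table above.

```python
def ask_with_fallback(ask, prompt, models=("gpt-4o-mini", "gpt-4o", "deepseek-v3")):
    """Try each model in turn; return (model, answer) from the first that works."""
    last_error = None
    for model in models:
        try:
            return model, ask(model, prompt)
        except Exception as exc:  # providers fail in many ways; keep the net broad
            last_error = exc
    raise RuntimeError(f"all models failed, last error: {last_error}")

# Demo with a fake ask function: the first model "fails", the second answers.
def fake_ask(model, prompt):
    if model == "gpt-4o-mini":
        raise TimeoutError("provider busy")
    return f"{model} says: ok"

print(ask_with_fallback(fake_ask, "hi"))  # → ('gpt-4o', 'gpt-4o says: ok')
```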
Using Docker? Give the browser more shared memory:

```bash
docker run -p 8080:8080 --shm-size="2g" hlohaus789/g4f:latest
```
## Docker (for servers & advanced users)

```bash
docker run -p 8080:8080 --shm-size="2g" hlohaus789/g4f:latest
```

- Web UI: http://localhost:8080
- API: http://localhost:8080/v1