# IOPaint

A free and open-source inpainting & outpainting tool powered by state-of-the-art AI models.
Demos (images omitted): Erase (LaMa) · Replace Object (PowerPaint) · Draw Text (AnyText) · Out-painting (PowerPaint)
## Features

- Completely free and open-source, fully self-hosted, supports CPU, GPU & Apple Silicon
- OptiClean: macOS & iOS app for object erasing
- Supports various AI models to perform erase, inpainting, or outpainting tasks:
  - Erase models: remove unwanted objects, defects, watermarks, or people from images
  - Diffusion models: replace objects or perform outpainting
- Plugins:
  - Segment Anything: accurate and fast interactive object segmentation
  - RemoveBG: remove the image background or generate masks for foreground objects
  - Anime Segmentation: similar to RemoveBG, but trained specifically for anime images
  - RealESRGAN: super resolution
  - GFPGAN: face restoration
  - RestoreFormer: face restoration
- FileManager: browse your pictures conveniently and save them directly to the output directory
## Quick Start

### Start the web UI

IOPaint provides a convenient web UI for using the latest AI models to edit your images. You can install and start IOPaint by running the following commands:
```bash
# To use an NVIDIA GPU, install the CUDA build of PyTorch first.
# pip3 install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu118

# AMD GPU users: use the following command instead. It only works on Linux,
# as PyTorch with ROCm is not yet supported on Windows.
# pip3 install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/rocm5.6

pip3 install iopaint
iopaint start --model=lama --device=cpu --port=8080
```
That's it! You can start using IOPaint by visiting http://localhost:8080 in your web browser.

All models are downloaded automatically at startup. If you want to change the download directory, add `--model-dir`. More documentation can be found here.

You can see the other supported models here, and how to use a local SD ckpt/safetensors file here.
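If you launch IOPaint from a script, a small stdlib-only probe like the one below can confirm the web UI is answering before you open the browser. This is an illustrative sketch, not part of IOPaint; the URL matches the `--port=8080` example above.

```python
import urllib.request
import urllib.error


def is_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at `url` within `timeout` seconds."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False


if __name__ == "__main__":
    # Hypothetical local instance started with `iopaint start ... --port=8080`
    print("web UI reachable:", is_up("http://localhost:8080"))
```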
## Plugins

You can specify which plugins to use when starting the service; view the commands for enabling plugins with `iopaint start --help`.

More demonstrations of the plugins can be seen here.

```bash
iopaint start --enable-interactive-seg --interactive-seg-device=cuda
```
## Batch processing

You can also use IOPaint on the command line to batch-process images:

```bash
iopaint run --model=lama --device=cpu \
  --image=/path/to/image_folder \
  --mask=/path/to/mask_folder \
  --output=output_dir
```
`--image` is the folder containing input images, and `--mask` is the folder containing the corresponding mask images. When `--mask` points to a single mask file, all images are processed with that mask.
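Before a long batch run it can help to verify that every input image has a matching mask file. The sketch below pairs files by stem (filename without extension); this matching rule is an assumption for illustration, so adapt it to your folder layout.

```python
from pathlib import Path


def missing_masks(image_dir: str, mask_dir: str) -> list[str]:
    """Return stems of images in image_dir that have no same-named file in mask_dir."""
    mask_stems = {p.stem for p in Path(mask_dir).iterdir() if p.is_file()}
    return sorted(
        p.stem
        for p in Path(image_dir).iterdir()
        if p.is_file() and p.stem not in mask_stems
    )
```

Run it on your `--image` and `--mask` folders; an empty list means every image has a mask.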
You can see more information about the available models and plugins supported by IOPaint below.
## Model Recommendations
Choosing the right model depends on your use case and hardware. Here's our recommended model strategy:
### 🚀 Quick Start - For Daily Use

**LaMa** (recommended for beginners):

```bash
iopaint start --model lama --device cuda --port 8080
```
- ⚡ Fastest - Near real-time processing
- 💾 Low VRAM - Uses ~1GB GPU memory
- 🎯 Best for: Removing watermarks, people, objects from images
- ✅ Most stable and reliable
### 🎨 Creative Editing - With Prompt Control

**Stable Diffusion Inpainting**:

```bash
iopaint start --model runwayml/stable-diffusion-inpainting --device cuda --port 8080
```
- 🎨 Smart content generation - Not just removal, but intelligent filling
- 📝 Text prompts - Control what gets generated
- 🖼️ Creative flexibility - Replace objects with AI-generated content
- ✅ Official model - Well-maintained and stable
### 💎 Professional - High Quality Results

**SDXL Inpainting** (for high-resolution work):

```bash
iopaint start --model diffusers/stable-diffusion-xl-1.0-inpainting-0.1 --device cuda --low-mem --port 8080
```
- 🖼️ High resolution - Supports up to 1024x1024
- 🎨 Better details - Superior quality output
- 💎 Professional use - Best for photography and commercial work
- ⚠️ Requires more VRAM - use the `--low-mem` flag for optimization
### 📊 Model Comparison
| Model | Speed | Quality | VRAM | Use Case | Recommended |
|---|---|---|---|---|---|
| LaMa | ⚡⚡⚡⚡⚡ | ⭐⭐⭐⭐ | ~1GB | Quick erase | ⭐⭐⭐⭐⭐ |
| SD Inpainting | ⚡⚡⚡ | ⭐⭐⭐⭐⭐ | ~4GB | Creative edit | ⭐⭐⭐⭐⭐ |
| SDXL Inpainting | ⚡⚡ | ⭐⭐⭐⭐⭐ | ~8GB | Professional | ⭐⭐⭐⭐ |
| PowerPaint V2 | ⚡⚡⚡ | ⭐⭐⭐⭐ | ~5GB | Multi-task | ⭐⭐⭐⭐ |
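As a rough illustration of the table above, a helper can map available VRAM to a candidate model. The thresholds simply mirror the table's approximate figures and are not official requirements.

```python
def suggest_model(vram_gb: float) -> str:
    """Pick a model from approximate VRAM needs (figures from the table above)."""
    # LaMa ~1GB, SD Inpainting ~4GB, SDXL Inpainting ~8GB (approximate).
    if vram_gb >= 8:
        return "diffusers/stable-diffusion-xl-1.0-inpainting-0.1"
    if vram_gb >= 4:
        return "runwayml/stable-diffusion-inpainting"
    return "lama"


print(suggest_model(6))  # prints "runwayml/stable-diffusion-inpainting"
```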
### 🔧 GPU Optimization Tips

For NVIDIA GPUs with limited VRAM:

```bash
# Enable low memory mode
iopaint start --model <model_name> --device cuda --low-mem --port 8080

# Enable CPU offload for very large models
iopaint start --model <model_name> --device cuda --cpu-offload --port 8080
```
For CPU-only systems:

```bash
# LaMa works well on CPU
iopaint start --model lama --device cpu --port 8080
```
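To choose `--device` automatically in a launch script, a guarded probe like this falls back to CPU when PyTorch or CUDA is unavailable. It is a sketch; `torch.cuda.is_available()` is the standard PyTorch check, and the printed command is just the LaMa example above.

```python
def pick_device() -> str:
    """Return "cuda" when PyTorch reports a usable GPU, otherwise "cpu"."""
    try:
        import torch  # IOPaint installs PyTorch as a dependency
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"


print(f"iopaint start --model lama --device {pick_device()} --port 8080")
```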
## 📦 Installation Note

**Updated Dependencies (2025-11-28)**

This project now uses the latest stable versions of all dependencies. Install with:

```bash
# Recommended: use a mirror for faster installation (users in China)
pip3 install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com

# Or use the official PyPI
pip3 install -r requirements.txt
```
See `UPGRADE_NOTES.md` for detailed information about package updates.
## Development

Install Node.js, then install and build the frontend:

```bash
git clone https://github.com/Sanster/IOPaint.git
cd IOPaint/web_app
npm install
npm run build
cp -r dist/ ../iopaint/web_app
```
Create a `.env.local` file in `web_app` and fill in the backend IP and port:

```
VITE_BACKEND=http://127.0.0.1:8080
```
Start the frontend development environment:

```bash
npm run dev
```
Install the backend requirements and start the backend service:

```bash
pip install -r requirements.txt
python3 main.py start --model lama --port 8080
```
Then visit http://localhost:5173/ for development. The frontend code updates automatically after being modified, but the backend service must be restarted after the Python code changes.