Add example library logic
docs/en/getting-started/configuration.md
# Configuration

After installation, you need to configure services to use Pixelle-Video.

---

## LLM Configuration

The LLM (Large Language Model) is used to generate video scripts.

### Quick Preset Selection

1. Select a preset model from the dropdown:
    - Qianwen (recommended, great value)
    - GPT-4o
    - DeepSeek
    - Ollama (local, completely free)
2. The system will auto-fill `base_url` and `model`
3. Click "🔑 Get API Key" to register and obtain credentials
4. Enter your API Key
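
Once saved, these LLM settings end up in `config.yaml`. A minimal sketch of what such an entry might look like (the field names and example values are illustrative; check the generated file for the exact schema):

```yaml
# Illustrative only - the actual config.yaml schema may differ.
llm:
  base_url: https://dashscope.aliyuncs.com/compatible-mode/v1  # auto-filled by the preset
  model: qwen-plus                                             # auto-filled by the preset
  api_key: sk-xxxxxxxx                                         # your API Key
```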

---

## Image Configuration

Two options are available:

### Local Deployment (Recommended)

Using a local ComfyUI service:

1. Install and start ComfyUI
2. Enter the ComfyUI URL (default `http://127.0.0.1:8188`)
3. Click "Test Connection" to verify
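
The "Test Connection" check can also be reproduced from the command line. Here is a minimal sketch using only the Python standard library, probing ComfyUI's `/system_stats` endpoint (the helper name is ours, not part of Pixelle-Video):

```python
from urllib.parse import urljoin
from urllib.request import urlopen
from urllib.error import URLError

def comfyui_reachable(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if a ComfyUI server answers at base_url."""
    # /system_stats is a standard ComfyUI endpoint that returns server info as JSON.
    try:
        with urlopen(urljoin(base_url, "/system_stats"), timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False
```

For example, `comfyui_reachable("http://127.0.0.1:8188")` should return `True` while ComfyUI is running locally.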

### Cloud Deployment

Using the RunningHub cloud service:

1. Register for a RunningHub account
2. Obtain an API Key
3. Enter the API Key in the configuration

---

## Save Configuration

After filling in all required fields, click the "Save Configuration" button.

The configuration will be saved to the `config.yaml` file.

---

## Next Steps

- [Quick Start](quick-start.md) - Create your first video
docs/en/getting-started/installation.md
# Installation

This page will guide you through installing Pixelle-Video.

---

## System Requirements

### Required

- **Python**: 3.10 or higher
- **Operating System**: Windows, macOS, or Linux
- **Package Manager**: uv (recommended) or pip

### Optional

- **GPU**: NVIDIA GPU with 6GB+ VRAM recommended for local ComfyUI
- **Network**: Stable internet connection for LLM API and image generation services

---

## Installation Steps

### Step 1: Clone the Repository

```bash
git clone https://github.com/PixelleLab/Pixelle-Video.git
cd Pixelle-Video
```

### Step 2: Install Dependencies

!!! tip "Recommended: Use uv"
    This project uses `uv` as the package manager, which is faster and more reliable than traditional pip.

#### Using uv (Recommended)

```bash
# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Install project dependencies (uv will create a virtual environment automatically)
uv sync
```

#### Using pip

```bash
# Create a virtual environment
python -m venv venv

# Activate the virtual environment
# Windows:
venv\Scripts\activate
# macOS/Linux:
source venv/bin/activate

# Install dependencies
pip install -e .
```

---

## Verify Installation

Run the following command to verify the installation:

```bash
# Using uv
uv run streamlit run web/app.py

# Or using pip (activate the virtual environment first)
streamlit run web/app.py
```

Your browser should automatically open `http://localhost:8501` and display the Pixelle-Video web interface.

!!! success "Installation Successful!"
    If you can see the web interface, the installation was successful! Next, check out the [Configuration Guide](configuration.md) to set up your services.

---

## Optional: Install ComfyUI (Local Deployment)

If you want to run image generation locally, you'll need to install ComfyUI:

### Quick Install

```bash
# Clone ComfyUI
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# Install dependencies
pip install -r requirements.txt
```

### Start ComfyUI

```bash
python main.py
```

ComfyUI runs on `http://127.0.0.1:8188` by default.

!!! info "ComfyUI Models"
    ComfyUI requires downloading model files to work. Please refer to the [ComfyUI documentation](https://github.com/comfyanonymous/ComfyUI) for information on downloading and configuring models.

---

## Next Steps

- [Configuration](configuration.md) - Configure LLM and image generation services
- [Quick Start](quick-start.md) - Create your first video
docs/en/getting-started/quick-start.md
# Quick Start

Already installed and configured? Let's create your first video!

---

## Start the Web Interface

```bash
# Using uv
uv run streamlit run web/app.py
```

Your browser will automatically open `http://localhost:8501`.

---

## Create Your First Video

### Step 1: Check Configuration

On first use, expand the "⚙️ System Configuration" panel and confirm:

- **LLM Configuration**: Select an AI model (e.g., Qianwen, GPT) and enter your API Key
- **Image Configuration**: Configure the ComfyUI address or a RunningHub API Key

If not yet configured, see the [Configuration Guide](configuration.md).

Click "Save Configuration" when done.

---

### Step 2: Enter a Topic

In the left panel's "📝 Content Input" section:

1. Select "**AI Generate Content**" mode
2. Enter a topic in the text box, for example:

    ```
    Why develop a reading habit
    ```

3. (Optional) Set the number of scenes; the default is 5

!!! tip "Topic Examples"
    - Why develop a reading habit
    - How to improve work efficiency
    - The importance of healthy eating
    - The meaning of travel

---

### Step 3: Configure Voice and Visuals

In the middle panel:

**Voice Settings**

- Select a TTS workflow (the default Edge-TTS works well)
- For voice cloning, upload a reference audio file

**Visual Settings**

- Select an image generation workflow (the default works well)
- Set image dimensions (default 1024x1024)
- Choose a video template (portrait 1080x1920 is recommended)

---

### Step 4: Generate Video

Click the "🎬 Generate Video" button in the right panel!

The system will show real-time progress:

- Generate script
- Generate images (for each scene)
- Synthesize voice
- Compose video
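
The four stages above can be thought of as a simple sequential pipeline. Here is an illustrative sketch, not Pixelle-Video's actual code; the stage names mirror the progress list, and the per-stage work is a placeholder:

```python
from typing import Callable, Optional

# Progress stages as shown in the UI (illustrative sketch, not the real implementation).
STAGES = ["Generate script", "Generate images", "Synthesize voice", "Compose video"]

def run_pipeline(topic: str, num_scenes: int = 5,
                 on_progress: Optional[Callable[[str, float], None]] = None) -> dict:
    """Run each stage in order, reporting fractional progress before each one."""
    results = {}
    for i, stage in enumerate(STAGES):
        if on_progress:
            on_progress(stage, i / len(STAGES))
        # Placeholder for the real work (LLM call, image generation, TTS, video compose).
        results[stage] = f"{stage}: done ({num_scenes} scenes, topic={topic!r})"
    if on_progress:
        on_progress("Done", 1.0)
    return results
```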

!!! info "Generation Time"
    Generating a 5-scene video takes about 2-5 minutes, depending on LLM API response speed, image generation speed, TTS workflow type, and network conditions.

---

### Step 5: Preview Video

Once complete, the video will automatically play in the right panel!

You'll see:

- 📹 Video preview player
- ⏱️ Video duration
- 📦 File size
- 🎬 Number of scenes
- 📐 Video dimensions

The video file is saved in the `output/` folder.

---

## Next Steps

Congratulations! You've successfully created your first video 🎉

Next, you can:

- **Adjust Styles** - See the [Custom Visual Style](../tutorials/custom-style.md) tutorial
- **Clone Voices** - See the [Voice Cloning with Reference Audio](../tutorials/voice-cloning.md) tutorial
- **Use API** - See the [API Usage Guide](../user-guide/api.md)
- **Develop Templates** - See the [Template Development Guide](../user-guide/templates.md)