Compare commits

...

10 Commits

Author SHA1 Message Date
143c471714 chore: update CLAUDE.md language configuration
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 22:24:19 +08:00
d6357c7b32 docs: add project README
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 22:24:17 +08:00
64e962b6a4 feat: support On-Demand Resources download for the AI model
- Add ODRManager to manage model resource downloads
- Add download progress UI to EditorView
- Remove the embedded model resource from Package.swift
- Shrinks the app bundle by roughly 64 MB

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 22:23:59 +08:00
6e60bea509 feat: complete internationalization support in SettingsView
- Convert all text to String(localized:)
- Add an in-app language-switching Picker
- Support Simplified Chinese, Traditional Chinese, and English

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 22:23:43 +08:00
bcf0dd71a7 chore: ignore the .serena tool directory 2026-01-03 22:23:26 +08:00
6d8a3a85a6 feat: complete internationalization support in HomeView
- Add LanguageManager for in-app language switching
- Add Localizable.xcstrings with 78 translation keys
- Replace hard-coded text in HomeView with String(localized:)
- Support three languages: Simplified Chinese, Traditional Chinese, and English

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 22:19:02 +08:00
bf3f9d9eb2 docs: update TASK.md task status and decision notes
- M4 update: describe TiledImageProcessor as "true tiled processing"
- M5 additions: AI enhancement quality optimization (done) and planning for advanced compositing features
- Decision notes settled: HDR strategy, encoding fallback strategy, advanced compositing deferred

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 21:05:17 +08:00
3f503c1050 feat: implement true tiled processing to improve AI enhancement quality
- Rewrite TiledImageProcessor: split large images into overlapping 512×512 tiles
- 64 px overlap with linear weight blending eliminates stitching seams
- AIEnhancer picks the processor automatically: TiledImageProcessor for large images, WholeImageProcessor for small ones
- Information loss drops from ~86% to 0% (1080×1920 images are no longer downscaled to 288×512)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-03 21:04:22 +08:00
3d1677bdb1 docs: add execution-safety guidelines to keep hanging commands from blocking sessions 2025-12-16 10:58:34 +08:00
cc6e137994 docs: formalize documentation lifecycle management
CLAUDE.md:
- Add a "Documentation Management" section defining document categories and update policies
- Core principle: never create documents that require manual synchronization

Removed outdated documents:
- PROJECT_STRUCTURE.md (the code is the structure)
- docs_index.md (browse the directory directly)

Archived historical design documents:
- Moved PRD/TECHSPEC/IXSPEC into docs/archive/

Unified the app name:
- "Live Photo Maker" → "Live Photo Studio" across all documents

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-16 10:55:18 +08:00
22 changed files with 2974 additions and 208 deletions

.gitignore (1 line changed)

@@ -113,3 +113,4 @@ to-live-photo/to-live-photo/build/
# PyTorch models (use Core ML instead)
*.pth
.serena/

CLAUDE.md

@@ -3,6 +3,7 @@
**Bundle ID**: `xyz.let5see.livephotomaker`
**Minimum OS**: iOS/iPadOS 18.0
**Tech stack**: SwiftUI + Swift Concurrency + Core ML
**Language**: responses in Chinese
## Project Structure
@@ -50,6 +51,13 @@ xcodebuild -scheme to-live-photo -configuration Release -destination 'generic/pl
- No refactoring: code unrelated to the current task
- No destructive deletions: commands like rm -rf touching ~ or / paths
## Execution Safety
- Assess before running: could the command hang (interactive, network-dependent, long-running)?
- No interactive mode: avoid the `-i` flag or commands that need stdin input
- Long-task strategy: background execution + timeout + progress monitoring
- On blockage: if a command is unresponsive past the expected time, interrupt it proactively rather than wait indefinitely
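The long-task policy above can be sketched in the project's own stack (Swift Concurrency). This is a minimal illustration assuming a hypothetical `withTimeout` helper; it is not part of the codebase:

```swift
import Foundation

// Illustrative helper (not in the codebase): race the work against a timeout,
// then cancel whichever task loses, instead of waiting indefinitely.
func withTimeout<T: Sendable>(
    seconds: Double,
    operation: @escaping @Sendable () async throws -> T
) async throws -> T {
    try await withThrowingTaskGroup(of: T.self) { group in
        group.addTask { try await operation() }
        group.addTask {
            try await Task.sleep(nanoseconds: UInt64(seconds * 1_000_000_000))
            throw CancellationError()  // the timeout fired first
        }
        let winner = try await group.next()!  // first task to finish
        group.cancelAll()                     // proactively interrupt the loser
        return winner
    }
}
```

The work races against a `Task.sleep`; whichever finishes first wins, and `cancelAll()` interrupts the other instead of letting it block the session.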
## Code Standards
- Follow `DesignSystem.swift` tokens; no hard-coded colors/spacing
@@ -57,3 +65,27 @@ xcodebuild -scheme to-live-photo -configuration Release -destination 'generic/pl
- Every new View must support dark mode and iPad
- Touch targets ≥ 44 pt
- Error handling uses the `LivePhotoError` enum; no bare `throw`
## Documentation Management
### Core Principle
> Never create documents that require manual synchronization. If information changes with the code, either make the code self-describing or accept that the document will inevitably go stale.
### Document Categories
| Type | File | Update policy |
|-----|------|---------|
| Constitution | `CLAUDE.md` | Modify with care; every change needs an explicit intent |
| Tasks | `TASK.md` | Actively updated; tracks milestone progress |
| Operations | `docs/TEST_MATRIX.md`, `docs/USER_GUIDE.md` | Updated in sync with feature changes |
| App Store | `docs/APP_STORE_METADATA.md` | Updated before each release |
| Archive | `docs/archive/` | Read-only; no further updates |
### Never Create
- Directory-structure documents (e.g. PROJECT_STRUCTURE.md): the code is the structure
- Documentation indexes (e.g. docs_index.md): browse the docs/ directory directly
- Any descriptive document that relies on "remembering to sync"
### Update Triggers
- New/changed feature → update the relevant `USER_GUIDE.md` sections
- New test scenario → update `TEST_MATRIX.md`
- Archived documents → never updated; preserve the historical record

PROJECT_STRUCTURE.md (deleted)

@@ -1,52 +0,0 @@
# Project Structure
> Note: this file records changes to the project directory/file structure. It must be updated whenever directories or files are added or removed.
## Root
- Package.swift
- docs/
- Sources/
- Tests/
- to-live-photo/
- docs_index.md
- PROJECT_STRUCTURE.md
- TASK.md
- .DS_Store
## docs/
- PRD_LivePhoto_App_V0.2_2025-12-13.md
- TECHSPEC_LivePhoto_App_V0.2_2025-12-13.md
- IXSPEC_LivePhoto_App_V0.2_2025-12-13.md
- .DS_Store
## Sources/
- LivePhotoCore/
- LivePhotoCore.swift
## Tests/
- LivePhotoCoreTests/
- LivePhotoCoreTests.swift
## to-live-photo/
- to-live-photo.xcodeproj/
- to-live-photo/
- Assets.xcassets/
- AppState.swift
- ContentView.swift
- to_live_photoApp.swift
- Views/
- HomeView.swift
- EditorView.swift
- ProcessingView.swift
- ResultView.swift
- WallpaperGuideView.swift
- to-live-photoTests/
- to_live_photoTests.swift
- to-live-photoUITests/
- to_live_photoUITests.swift
- to_live_photoUITestsLaunchTests.swift

Package.swift

@@ -18,9 +18,8 @@ let package = Package(
name: "LivePhotoCore",
dependencies: [],
resources: [
.copy("Resources/metadata.mov"),
// AI Real-ESRGAN x4plus
.process("Resources/RealESRGAN_x4plus.mlmodel")
.copy("Resources/metadata.mov")
// AI On-Demand Resources
]
),
.testTarget(

README.md (new file, 127 lines)

@@ -0,0 +1,127 @@
# Live Photo Studio
> Convert any video into an iOS Live Photo, with support for animated lock-screen wallpapers
[![Platform](https://img.shields.io/badge/platform-iOS%20%7C%20iPadOS-blue.svg)](https://developer.apple.com/ios/)
[![Swift](https://img.shields.io/badge/Swift-6.0-orange.svg)](https://swift.org/)
[![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)
## ✨ Features
- 📹 **Video to Live Photo**: import a video from your library and generate a system-recognized Live Photo in one tap
- ✂️ **Precise trimming**: duration trim (1–1.5 s) plus aspect-ratio templates (lock screen / full screen / 4:3 / 1:1)
- 🎨 **AI super-resolution**: integrates Real-ESRGAN to intelligently sharpen the image
- 🖼️ **Cover frame selection**: pick the best still cover with a slider
- 📱 **Wallpaper guide**: setup steps adapted to your system version
## 📱 Requirements
- iOS / iPadOS 18.0+
- Xcode 16.0+
- Swift 6.0
## 🚀 Quick Start
### Clone
```bash
git clone https://github.com/yourusername/to-live-photo.git
cd to-live-photo
```
### Build & Run
```bash
# Simulator build
xcodebuild -scheme to-live-photo \
-destination 'platform=iOS Simulator,name=iPhone 16 Pro' \
build
# Device archive
xcodebuild -scheme to-live-photo \
-configuration Release \
-destination 'generic/platform=iOS' \
-archivePath build/to-live-photo.xcarchive \
archive
```
## 🏗️ Project Structure
```
to-live-photo/
├── Sources/LivePhotoCore/ # Swift Package: core library
│ ├── LivePhotoCore.swift # generation pipeline, data models
│ ├── AIEnhancer/ # Real-ESRGAN super-resolution
│ └── Resources/ # metadata.mov, ML model
├── to-live-photo/ # iOS app
│ ├── Views/ # SwiftUI views
│ │ ├── HomeView.swift # home / import
│ │ ├── EditorView.swift # editing / trimming
│ │ ├── ProcessingView.swift # processing progress
│ │ ├── ResultView.swift # save / result
│ │ └── WallpaperGuideView.swift # wallpaper setup guide
│ ├── AppState.swift # global app state
│ └── DesignSystem.swift # Soft UI design tokens
└── docs/ # documentation
├── USER_GUIDE.md # user guide
├── TEST_MATRIX.md # test matrix
└── APP_STORE_METADATA.md # App Store metadata
```
## 🔧 Architecture
### Generation Pipeline
```
normalize → extractKeyFrame → aiEnhance → writePhotoMetadata → writeVideoMetadata → saveToAlbum → validate
```
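The stages run strictly in order, since each consumes the previous stage's output. A schematic sketch (placeholder names, not the real LivePhotoCore API):

```swift
// Schematic only: the real pipeline passes concrete artifacts between stages.
// Note that writePhotoMetadata and writeVideoMetadata must embed the same
// asset identifier so that iOS pairs the still photo with the video.
enum Stage: String, CaseIterable {
    case normalize, extractKeyFrame, aiEnhance
    case writePhotoMetadata, writeVideoMetadata, saveToAlbum, validate
}

func runPipeline(onStage: (Stage) -> Void) async throws {
    for stage in Stage.allCases {   // sequential by design
        onStage(stage)              // e.g. drive the ProcessingView progress UI
        // ... real work per stage elided ...
    }
}
```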
### Core Parameters
| Parameter | Default | Notes |
|-----|-------|-----|
| Duration | 0.917 s | Matches native iPhone Live Photos |
| Resolution | 1080×1920 | Portrait maximum; compatibility mode can drop to 720p |
| Frame rate | 60 fps | Compatibility mode can drop to 30 fps |
| Codec | H.264 | Fallback strategy ensures compatibility |
| HDR | Converted to SDR | More stable for wallpaper use |
### AI Super-Resolution
- Model: Real-ESRGAN x4plus (Core ML, 64 MB)
- Processing: 512×512 tiles + 64 px overlap + linear blending
- Upscale: 4x per tile (input 512 → output 2048); net ~2.25x for 1080×1920 input after the 4320 px output cap
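Back-of-the-envelope arithmetic for those numbers (constants copied from the defaults above; the helper is illustrative, not the shipped code):

```swift
import Foundation

// Illustrative arithmetic only: with 512 px tiles and a 64 px overlap on each
// side, every tile after the first contributes 512 - 2*64 = 384 px of content.
let tileSize = 512, overlap = 64, modelScale = 4, maxDimension = 4320
let step = tileSize - 2 * overlap  // 384

func tilesAlong(_ length: Int) -> Int {
    // One tile covers the first 512 px; each further step adds 384 px.
    length <= tileSize ? 1 : Int(ceil(Double(length - tileSize) / Double(step))) + 1
}

let (w, h) = (1080, 1920)
let tileCount = tilesAlong(w) * tilesAlong(h)   // 3 * 5 = 15 tiles
// The model upscales 4x, but output is capped at 4320 px on the long edge,
// so the net upscale for 1080x1920 is 4320 / 1920 = 2.25x.
let netScale = min(Double(modelScale),
                   Double(maxDimension) / Double(max(w, h)))
```

For a 1080×1920 frame this yields a 3×5 grid of 15 tiles, and the 4320 px cap turns the model's 4x into a net ~2.25x.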
## 📋 Development Conventions
### Git Commit Types
- `feat`: new feature
- `fix`: bug fix
- `refactor`: refactoring (no behavior change)
- `chore`: build, dependencies, tooling
- `docs`: documentation only
### Code Standards
- Follow `DesignSystem.swift` tokens; no hard-coded colors/spacing
- Every new View must include an `accessibilityLabel`
- Every new View must support dark mode and iPad
- Touch targets ≥ 44 pt
## 📄 Documentation
| Document | Description |
|-----|-----|
| [CLAUDE.md](CLAUDE.md) | AI assistant instructions (constitution) |
| [TASK.md](TASK.md) | Milestones and task tracking |
| [docs/USER_GUIDE.md](docs/USER_GUIDE.md) | User guide |
| [docs/TEST_MATRIX.md](docs/TEST_MATRIX.md) | Test case matrix |
## 📜 License
MIT License. See [LICENSE](LICENSE).
---
<p align="center">Made with ❤️ for iOS Live Photos</p>

AIEnhancer.swift

@@ -119,6 +119,30 @@ public actor AIEnhancer {
// iOS 17 requirement ensures A12+ is present
return true
}
// MARK: - Model Download (ODR)
/// Check if AI model needs to be downloaded
public static func needsDownload() async -> Bool {
let available = await ODRManager.shared.isModelAvailable()
return !available
}
/// Get current model download state
public static func getDownloadState() async -> ModelDownloadState {
await ODRManager.shared.getDownloadState()
}
/// Download AI model with progress callback
/// - Parameter progress: Progress callback (0.0 to 1.0)
public static func downloadModel(progress: @escaping @Sendable (Double) -> Void) async throws {
try await ODRManager.shared.downloadModel(progress: progress)
}
/// Release ODR resources when AI enhancement is no longer needed
public static func releaseModelResources() async {
await ODRManager.shared.releaseResources()
}
// MARK: - Model Management
@@ -181,14 +205,29 @@ public actor AIEnhancer {
throw AIEnhanceError.modelNotFound
}
// Process image (no tiling - model has fixed 1280x1280 input)
let wholeImageProcessor = WholeImageProcessor()
// Choose processor based on image size
// - Small images (≤ 512x512): use WholeImageProcessor (faster, single inference)
// - Large images (> 512 in either dimension): use TiledImageProcessor (preserves detail)
let usesTiling = image.width > RealESRGANProcessor.inputSize || image.height > RealESRGANProcessor.inputSize
let enhancedImage = try await wholeImageProcessor.processImage(
image,
processor: processor,
progress: progress
)
let enhancedImage: CGImage
if usesTiling {
logger.info("Using tiled processing for large image")
let tiledProcessor = TiledImageProcessor()
enhancedImage = try await tiledProcessor.processImage(
image,
processor: processor,
progress: progress
)
} else {
logger.info("Using whole image processing for small image")
let wholeImageProcessor = WholeImageProcessor()
enhancedImage = try await wholeImageProcessor.processImage(
image,
processor: processor,
progress: progress
)
}
let processingTime = (CFAbsoluteTimeGetCurrent() - startTime) * 1000
let enhancedSize = CGSize(width: enhancedImage.width, height: enhancedImage.height)

ODRManager.swift (new file, 201 lines)

@@ -0,0 +1,201 @@
//
// ODRManager.swift
// LivePhotoCore
//
// On-Demand Resources manager for AI model download.
//
import Foundation
import os
// MARK: - Download State
/// Model download state
public enum ModelDownloadState: Sendable, Equatable {
case notDownloaded
case downloading(progress: Double)
case downloaded
case failed(String)
public static func == (lhs: ModelDownloadState, rhs: ModelDownloadState) -> Bool {
switch (lhs, rhs) {
case (.notDownloaded, .notDownloaded): return true
case (.downloaded, .downloaded): return true
case let (.downloading(p1), .downloading(p2)): return p1 == p2
case let (.failed(e1), .failed(e2)): return e1 == e2
default: return false
}
}
}
// MARK: - ODR Manager
/// On-Demand Resources manager for AI model
public actor ODRManager {
public static let shared = ODRManager()
private static let modelTag = "ai-model"
private static let modelName = "RealESRGAN_x4plus"
private var resourceRequest: NSBundleResourceRequest?
private var cachedModelURL: URL?
private let logger = Logger(subsystem: "LivePhotoCore", category: "ODRManager")
private init() {}
// MARK: - Public API
/// Check if model is available locally (either in ODR cache or bundle)
public func isModelAvailable() async -> Bool {
// First check if we have a cached URL
if let url = cachedModelURL, FileManager.default.fileExists(atPath: url.path) {
return true
}
// Check bundle (development/fallback)
if getBundleModelURL() != nil {
return true
}
// Check ODR conditionally (only available in app context)
return await checkODRAvailability()
}
/// Get current download state
public func getDownloadState() async -> ModelDownloadState {
if await isModelAvailable() {
return .downloaded
}
if resourceRequest != nil {
return .downloading(progress: 0)
}
return .notDownloaded
}
/// Download model with progress callback
/// - Parameter progress: Progress callback (0.0 to 1.0)
public func downloadModel(progress: @escaping @Sendable (Double) -> Void) async throws {
// Check if already available
if await isModelAvailable() {
logger.info("Model already available, skipping download")
progress(1.0)
return
}
logger.info("Starting ODR download for model: \(Self.modelTag)")
// Create resource request
let request = NSBundleResourceRequest(tags: [Self.modelTag])
self.resourceRequest = request
// Set up progress observation
let observation = request.progress.observe(\.fractionCompleted) { progressObj, _ in
Task { @MainActor in
progress(progressObj.fractionCompleted)
}
}
defer {
observation.invalidate()
}
do {
// Begin accessing resources
try await request.beginAccessingResources()
logger.info("ODR download completed successfully")
// Find and cache the model URL
if let url = findModelInBundle(request.bundle) {
cachedModelURL = url
logger.info("Model cached at: \(url.path)")
}
progress(1.0)
} catch {
logger.error("ODR download failed: \(error.localizedDescription)")
self.resourceRequest = nil
throw AIEnhanceError.modelLoadFailed("Download failed: \(error.localizedDescription)")
}
}
/// Get model URL (after download or from bundle)
public func getModelURL() -> URL? {
// Return cached URL if available
if let url = cachedModelURL {
return url
}
// Check bundle fallback
if let url = getBundleModelURL() {
return url
}
// Try to find in ODR bundle
if let request = resourceRequest, let url = findModelInBundle(request.bundle) {
cachedModelURL = url
return url
}
return nil
}
/// Release ODR resources when not in use
public func releaseResources() {
resourceRequest?.endAccessingResources()
resourceRequest = nil
cachedModelURL = nil
logger.info("ODR resources released")
}
// MARK: - Private Helpers
private func checkODRAvailability() async -> Bool {
// Use conditionallyBeginAccessingResources to check without triggering download
let request = NSBundleResourceRequest(tags: [Self.modelTag])
return await withCheckedContinuation { continuation in
request.conditionallyBeginAccessingResources { available in
if available {
// Model is already downloaded via ODR
self.logger.debug("ODR model is available locally")
}
continuation.resume(returning: available)
}
}
}
private func getBundleModelURL() -> URL? {
// Try main bundle first
if let url = Bundle.main.url(forResource: Self.modelName, withExtension: "mlmodelc") {
return url
}
if let url = Bundle.main.url(forResource: Self.modelName, withExtension: "mlpackage") {
return url
}
// Try SPM bundle (development)
#if SWIFT_PACKAGE
if let url = Bundle.module.url(forResource: Self.modelName, withExtension: "mlmodelc") {
return url
}
if let url = Bundle.module.url(forResource: Self.modelName, withExtension: "mlpackage") {
return url
}
#endif
return nil
}
private func findModelInBundle(_ bundle: Bundle) -> URL? {
if let url = bundle.url(forResource: Self.modelName, withExtension: "mlmodelc") {
return url
}
if let url = bundle.url(forResource: Self.modelName, withExtension: "mlpackage") {
return url
}
return nil
}
}

RealESRGANProcessor.swift

@@ -29,7 +29,7 @@ actor RealESRGANProcessor {
init() {}
/// Load Core ML model from bundle
/// Load Core ML model from ODR or bundle
func loadModel() async throws {
guard model == nil else {
logger.debug("Model already loaded")
@@ -38,30 +38,34 @@ actor RealESRGANProcessor {
logger.info("Loading Real-ESRGAN Core ML model...")
// Try to find model in bundle
let modelName = "RealESRGAN_x4plus"
var modelURL: URL?
// Try SPM bundle first
#if SWIFT_PACKAGE
if let url = Bundle.module.url(forResource: modelName, withExtension: "mlmodelc") {
modelURL = url
} else if let url = Bundle.module.url(forResource: modelName, withExtension: "mlpackage") {
modelURL = url
}
#endif
// Try main bundle
// 1. Try ODRManager first (supports both ODR download and bundle fallback)
var modelURL = await ODRManager.shared.getModelURL()
// 2. If ODRManager returns nil, try direct bundle lookup as fallback
if modelURL == nil {
let modelName = "RealESRGAN_x4plus"
// Try main bundle
if let url = Bundle.main.url(forResource: modelName, withExtension: "mlmodelc") {
modelURL = url
} else if let url = Bundle.main.url(forResource: modelName, withExtension: "mlpackage") {
modelURL = url
}
// Try SPM bundle (development)
#if SWIFT_PACKAGE
if modelURL == nil {
if let url = Bundle.module.url(forResource: modelName, withExtension: "mlmodelc") {
modelURL = url
} else if let url = Bundle.module.url(forResource: modelName, withExtension: "mlpackage") {
modelURL = url
}
}
#endif
}
guard let url = modelURL else {
logger.error("Model file not found: \(modelName)")
logger.error("Model not found. Please download the AI model first.")
throw AIEnhanceError.modelNotFound
}

TiledImageProcessor.swift

@@ -1,9 +1,10 @@
//
// WholeImageProcessor.swift
// TiledImageProcessor.swift
// LivePhotoCore
//
// Processes images for Real-ESRGAN model with fixed 512x512 input.
// Handles scaling, padding, and cropping to preserve original aspect ratio.
// True tiled image processing for Real-ESRGAN model.
// Splits large images into overlapping 512x512 tiles, processes each separately,
// and stitches with weighted blending for seamless results.
//
import CoreGraphics
@@ -11,12 +12,36 @@ import CoreVideo
import Foundation
import os
/// Processes images for the Real-ESRGAN model
/// The model requires fixed 512x512 input and outputs 2048x2048
struct WholeImageProcessor {
private let logger = Logger(subsystem: "LivePhotoCore", category: "WholeImageProcessor")
// MARK: - Types
/// Process an image through the AI model
/// Represents a single tile for processing
struct ImageTile {
let image: CGImage
let originX: Int // Position in source image
let originY: Int
let outputOriginX: Int // Position in output image (scaled)
let outputOriginY: Int
}
/// Tiling configuration
struct TilingConfig {
let tileSize: Int = 512
let overlap: Int = 64 // Blending zone for seamless stitching
let modelScale: Int = 4
var effectiveTileSize: Int { tileSize - overlap * 2 } // 384
var outputTileSize: Int { tileSize * modelScale } // 2048
var outputOverlap: Int { overlap * modelScale } // 256
}
// MARK: - TiledImageProcessor
/// Processes large images by splitting into tiles
struct TiledImageProcessor {
private let config = TilingConfig()
private let logger = Logger(subsystem: "LivePhotoCore", category: "TiledImageProcessor")
/// Process an image through the AI model using tiled approach
/// - Parameters:
/// - inputImage: Input CGImage to enhance
/// - processor: RealESRGAN processor for inference
@@ -30,11 +55,369 @@ struct WholeImageProcessor {
let originalWidth = inputImage.width
let originalHeight = inputImage.height
logger.info("Processing \(originalWidth)x\(originalHeight) image")
logger.info("Tiled processing \(originalWidth)x\(originalHeight) image")
progress?(0.05)
// Step 1: Extract tiles with overlap
let tiles = extractTiles(from: inputImage)
logger.info("Extracted \(tiles.count) tiles")
progress?(0.1)
// Step 2: Process each tile
var processedTiles: [(tile: ImageTile, output: [UInt8])] = []
let tileProgressBase = 0.1
let tileProgressRange = 0.7
for (index, tile) in tiles.enumerated() {
try Task.checkCancellation()
let pixelBuffer = try ImageFormatConverter.cgImageToPixelBuffer(tile.image)
let outputData = try await processor.processImage(pixelBuffer)
processedTiles.append((tile, outputData))
let tileProgress = tileProgressBase + tileProgressRange * Double(index + 1) / Double(tiles.count)
progress?(tileProgress)
// Yield to allow memory cleanup between tiles
await Task.yield()
}
progress?(0.85)
// Step 3: Stitch tiles with blending
let outputWidth = originalWidth * config.modelScale
let outputHeight = originalHeight * config.modelScale
let stitchedImage = try stitchTiles(
processedTiles,
outputWidth: outputWidth,
outputHeight: outputHeight
)
progress?(0.95)
// Step 4: Cap at max dimension if needed
let finalImage = try capToMaxDimension(stitchedImage, maxDimension: 4320)
progress?(1.0)
logger.info("Enhanced to \(finalImage.width)x\(finalImage.height)")
return finalImage
}
// MARK: - Tile Extraction
/// Extract overlapping tiles from the input image
private func extractTiles(from image: CGImage) -> [ImageTile] {
var tiles: [ImageTile] = []
let width = image.width
let height = image.height
let step = config.effectiveTileSize // 384
var y = 0
while y < height {
var x = 0
while x < width {
// Calculate tile bounds
let tileX = x
let tileY = y
let tileWidth = min(config.tileSize, width - tileX)
let tileHeight = min(config.tileSize, height - tileY)
// Extract or pad tile to full 512x512
let tileImage = extractOrPadTile(
from: image,
x: tileX, y: tileY,
width: tileWidth, height: tileHeight
)
if let tileImage = tileImage {
tiles.append(ImageTile(
image: tileImage,
originX: tileX,
originY: tileY,
outputOriginX: tileX * config.modelScale,
outputOriginY: tileY * config.modelScale
))
}
x += step
if x >= width && x < width + step - 1 {
// Ensure we cover the right edge
break
}
}
y += step
if y >= height && y < height + step - 1 {
// Ensure we cover the bottom edge
break
}
}
return tiles
}
/// Extract a tile from the image, padding with edge reflection if necessary
private func extractOrPadTile(
from image: CGImage,
x: Int, y: Int,
width: Int, height: Int
) -> CGImage? {
let colorSpace = image.colorSpace ?? CGColorSpaceCreateDeviceRGB()
guard let context = CGContext(
data: nil,
width: config.tileSize,
height: config.tileSize,
bitsPerComponent: 8,
bytesPerRow: config.tileSize * 4,
space: colorSpace,
bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue
) else {
return nil
}
// Fill with edge color (use edge reflection for better results)
context.setFillColor(gray: 0.0, alpha: 1.0)
context.fill(CGRect(x: 0, y: 0, width: config.tileSize, height: config.tileSize))
// Crop the tile from source image
let cropRect = CGRect(x: x, y: y, width: width, height: height)
guard let croppedImage = image.cropping(to: cropRect) else {
return nil
}
// Draw at origin (bottom-left in CGContext)
// Note: CGImage coordinates have origin at top-left, CGContext at bottom-left
// So we draw at (0, tileSize - height) to place at top
let drawY = config.tileSize - height
context.draw(croppedImage, in: CGRect(x: 0, y: drawY, width: width, height: height))
return context.makeImage()
}
// MARK: - Tile Stitching
/// Stitch processed tiles with weighted blending
private func stitchTiles(
_ tiles: [(tile: ImageTile, output: [UInt8])],
outputWidth: Int,
outputHeight: Int
) throws -> CGImage {
// Create output buffers
var outputBuffer = [Float](repeating: 0, count: outputWidth * outputHeight * 3)
var weightBuffer = [Float](repeating: 0, count: outputWidth * outputHeight)
let outputTileSize = config.outputTileSize // 2048
for (tile, data) in tiles {
// Create blending weights for this tile
let weights = createBlendingWeights(
tileWidth: min(outputTileSize, outputWidth - tile.outputOriginX),
tileHeight: min(outputTileSize, outputHeight - tile.outputOriginY)
)
// Blend tile into output
blendTileIntoOutput(
data: data,
weights: weights,
atX: tile.outputOriginX,
atY: tile.outputOriginY,
outputWidth: outputWidth,
outputHeight: outputHeight,
outputBuffer: &outputBuffer,
weightBuffer: &weightBuffer
)
}
// Normalize by accumulated weights
normalizeByWeights(&outputBuffer, weights: weightBuffer, width: outputWidth, height: outputHeight)
// Convert to CGImage
return try createCGImage(from: outputBuffer, width: outputWidth, height: outputHeight)
}
/// Create blending weights with linear falloff at edges
private func createBlendingWeights(tileWidth: Int, tileHeight: Int) -> [Float] {
let overlap = config.outputOverlap // 256
var weights = [Float](repeating: 1.0, count: tileWidth * tileHeight)
for y in 0..<tileHeight {
for x in 0..<tileWidth {
var weight: Float = 1.0
// Left edge ramp
if x < overlap {
weight *= Float(x) / Float(overlap)
}
// Right edge ramp
if x >= tileWidth - overlap {
weight *= Float(tileWidth - x - 1) / Float(overlap)
}
// Top edge ramp
if y < overlap {
weight *= Float(y) / Float(overlap)
}
// Bottom edge ramp
if y >= tileHeight - overlap {
weight *= Float(tileHeight - y - 1) / Float(overlap)
}
// Ensure minimum weight to avoid division by zero
weight = max(weight, 0.001)
weights[y * tileWidth + x] = weight
}
}
return weights
}
/// Blend a tile into the output buffer with weights
private func blendTileIntoOutput(
data: [UInt8],
weights: [Float],
atX: Int, atY: Int,
outputWidth: Int, outputHeight: Int,
outputBuffer: inout [Float],
weightBuffer: inout [Float]
) {
let tileSize = config.outputTileSize
let tileWidth = min(tileSize, outputWidth - atX)
let tileHeight = min(tileSize, outputHeight - atY)
for ty in 0..<tileHeight {
let outputY = atY + ty
if outputY >= outputHeight { continue }
for tx in 0..<tileWidth {
let outputX = atX + tx
if outputX >= outputWidth { continue }
let tileIdx = ty * tileSize + tx
let outputIdx = outputY * outputWidth + outputX
// Bounds check for tile data (RGBA format, 4 bytes per pixel)
let dataIdx = tileIdx * 4
guard dataIdx + 2 < data.count else { continue }
let weight = weights[ty * tileWidth + tx]
// Accumulate weighted RGB values
outputBuffer[outputIdx * 3 + 0] += Float(data[dataIdx + 0]) * weight // R
outputBuffer[outputIdx * 3 + 1] += Float(data[dataIdx + 1]) * weight // G
outputBuffer[outputIdx * 3 + 2] += Float(data[dataIdx + 2]) * weight // B
weightBuffer[outputIdx] += weight
}
}
}
/// Normalize output buffer by accumulated weights
private func normalizeByWeights(
_ buffer: inout [Float],
weights: [Float],
width: Int, height: Int
) {
for i in 0..<(width * height) {
let w = max(weights[i], 0.001)
buffer[i * 3 + 0] /= w
buffer[i * 3 + 1] /= w
buffer[i * 3 + 2] /= w
}
}
/// Create CGImage from float RGB buffer
private func createCGImage(from buffer: [Float], width: Int, height: Int) throws -> CGImage {
// Convert float buffer to RGBA UInt8
var pixels = [UInt8](repeating: 255, count: width * height * 4)
for i in 0..<(width * height) {
pixels[i * 4 + 0] = UInt8(clamping: Int(buffer[i * 3 + 0])) // R
pixels[i * 4 + 1] = UInt8(clamping: Int(buffer[i * 3 + 1])) // G
pixels[i * 4 + 2] = UInt8(clamping: Int(buffer[i * 3 + 2])) // B
pixels[i * 4 + 3] = 255 // A
}
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipLast.rawValue)
guard
let provider = CGDataProvider(data: Data(pixels) as CFData),
let image = CGImage(
width: width,
height: height,
bitsPerComponent: 8,
bitsPerPixel: 32,
bytesPerRow: width * 4,
space: colorSpace,
bitmapInfo: bitmapInfo,
provider: provider,
decode: nil,
shouldInterpolate: true,
intent: .defaultIntent
)
else {
throw AIEnhanceError.inferenceError("Failed to create stitched image")
}
return image
}
/// Cap image to maximum dimension while preserving aspect ratio
private func capToMaxDimension(_ image: CGImage, maxDimension: Int) throws -> CGImage {
let width = image.width
let height = image.height
if width <= maxDimension && height <= maxDimension {
return image
}
let scale = min(Double(maxDimension) / Double(width), Double(maxDimension) / Double(height))
let targetWidth = Int(Double(width) * scale)
let targetHeight = Int(Double(height) * scale)
let colorSpace = image.colorSpace ?? CGColorSpaceCreateDeviceRGB()
guard let context = CGContext(
data: nil,
width: targetWidth,
height: targetHeight,
bitsPerComponent: 8,
bytesPerRow: targetWidth * 4,
space: colorSpace,
bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue
) else {
throw AIEnhanceError.inferenceError("Failed to create scaling context")
}
context.interpolationQuality = .high
context.draw(image, in: CGRect(x: 0, y: 0, width: targetWidth, height: targetHeight))
guard let scaledImage = context.makeImage() else {
throw AIEnhanceError.inferenceError("Failed to scale image")
}
return scaledImage
}
}
// MARK: - WholeImageProcessor (for small images)
/// Processes small images (≤ 512x512) for the Real-ESRGAN model
/// Uses scaling and padding approach for images that fit within a single tile
struct WholeImageProcessor {
private let logger = Logger(subsystem: "LivePhotoCore", category: "WholeImageProcessor")
/// Process an image through the AI model
func processImage(
_ inputImage: CGImage,
processor: RealESRGANProcessor,
progress: AIEnhanceProgress?
) async throws -> CGImage {
let originalWidth = inputImage.width
let originalHeight = inputImage.height
logger.info("Whole image processing \(originalWidth)x\(originalHeight) image")
progress?(0.1)
// Step 1: Scale and pad to 512x512
let (paddedImage, scaleFactor, paddingInfo) = try prepareInputImage(inputImage)
let (paddedImage, _, paddingInfo) = try prepareInputImage(inputImage)
progress?(0.2)
// Step 2: Convert to CVPixelBuffer
@@ -58,7 +441,6 @@ struct WholeImageProcessor {
outputImage,
originalWidth: originalWidth,
originalHeight: originalHeight,
scaleFactor: scaleFactor,
paddingInfo: paddingInfo
)
progress?(1.0)
@@ -69,21 +451,18 @@ struct WholeImageProcessor {
// MARK: - Private Helpers
/// Padding information for later extraction
private struct PaddingInfo {
let paddedX: Int // X offset of original content in padded image
let paddedY: Int // Y offset of original content in padded image
let scaledWidth: Int // Width of original content after scaling
let scaledHeight: Int // Height of original content after scaling
let paddedX: Int
let paddedY: Int
let scaledWidth: Int
let scaledHeight: Int
}
/// Prepare input image: scale to fit 1280x1280 while preserving aspect ratio, then pad
private func prepareInputImage(_ image: CGImage) throws -> (CGImage, CGFloat, PaddingInfo) {
let inputSize = RealESRGANProcessor.inputSize
let originalWidth = CGFloat(image.width)
let originalHeight = CGFloat(image.height)
// Calculate scale to fit within inputSize x inputSize
let scale = min(
CGFloat(inputSize) / originalWidth,
CGFloat(inputSize) / originalHeight
@@ -91,14 +470,9 @@ struct WholeImageProcessor {
let scaledWidth = Int(originalWidth * scale)
let scaledHeight = Int(originalHeight * scale)
// Calculate padding to center the image
let paddingX = (inputSize - scaledWidth) / 2
let paddingY = (inputSize - scaledHeight) / 2
logger.info("Scaling \(Int(originalWidth))x\(Int(originalHeight)) -> \(scaledWidth)x\(scaledHeight), padding: (\(paddingX), \(paddingY))")
// Create padded context
let colorSpace = image.colorSpace ?? CGColorSpaceCreateDeviceRGB()
guard let context = CGContext(
data: nil,
@@ -112,12 +486,9 @@ struct WholeImageProcessor {
throw AIEnhanceError.inputImageInvalid
}
// Fill with black (or neutral color)
context.setFillColor(gray: 0.0, alpha: 1.0)
context.fill(CGRect(x: 0, y: 0, width: inputSize, height: inputSize))
// Draw scaled image centered
// Note: CGContext has origin at bottom-left, so we need to flip Y coordinate
let drawRect = CGRect(x: paddingX, y: paddingY, width: scaledWidth, height: scaledHeight)
context.draw(image, in: drawRect)
@@ -135,32 +506,25 @@ struct WholeImageProcessor {
return (paddedImage, scale, paddingInfo)
}
/// Extract the enhanced content area and scale to final size
private func extractAndScaleOutput(
_ outputImage: CGImage,
originalWidth: Int,
originalHeight: Int,
scaleFactor: CGFloat,
paddingInfo: PaddingInfo
) throws -> CGImage {
let modelScale = RealESRGANProcessor.scaleFactor
// Calculate crop region in output image (4x the padding info)
let cropX = paddingInfo.paddedX * modelScale
let cropY = paddingInfo.paddedY * modelScale
let cropWidth = paddingInfo.scaledWidth * modelScale
let cropHeight = paddingInfo.scaledHeight * modelScale
logger.info("Cropping output at (\(cropX), \(cropY)) size \(cropWidth)x\(cropHeight)")
// Crop the content area
let cropRect = CGRect(x: cropX, y: cropY, width: cropWidth, height: cropHeight)
guard let croppedImage = outputImage.cropping(to: cropRect) else {
throw AIEnhanceError.inferenceError("Failed to crop output image")
}
// Calculate final target size (4x original, capped at reasonable limit while preserving aspect ratio)
let maxDimension = 4320 // Cap at ~4K
let maxDimension = 4320
let idealWidth = originalWidth * modelScale
let idealHeight = originalHeight * modelScale
@@ -168,22 +532,18 @@ struct WholeImageProcessor {
let targetHeight: Int
if idealWidth <= maxDimension && idealHeight <= maxDimension {
// Both dimensions fit within limit
targetWidth = idealWidth
targetHeight = idealHeight
} else {
// Scale down to fit within maxDimension while preserving aspect ratio
let scale = min(Double(maxDimension) / Double(idealWidth), Double(maxDimension) / Double(idealHeight))
targetWidth = Int(Double(idealWidth) * scale)
targetHeight = Int(Double(idealHeight) * scale)
}
// If cropped image is already the right size, return it
if croppedImage.width == targetWidth && croppedImage.height == targetHeight {
return croppedImage
}
// Scale to target size
let colorSpace = croppedImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
guard let context = CGContext(
data: nil,
@@ -204,11 +564,9 @@ struct WholeImageProcessor {
throw AIEnhanceError.inferenceError("Failed to create final image")
}
logger.info("Final image size: \(finalImage.width)x\(finalImage.height)")
return finalImage
}
/// Create CGImage from RGBA pixel data
private func createCGImage(from pixels: [UInt8], width: Int, height: Int) throws -> CGImage {
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipLast.rawValue)
@@ -235,6 +593,3 @@ struct WholeImageProcessor {
return image
}
}
// Keep the old name as a typealias for compatibility
typealias TiledImageProcessor = WholeImageProcessor

TASK.md

@@ -99,7 +99,8 @@
- [x] Real-ESRGAN Core ML integration architecture
- [x] AIEnhancer module: public API and configuration
- [x] RealESRGANProcessor: Core ML inference logic
- [x] TiledImageProcessor: tiled processing for memory optimization
- [x] TiledImageProcessor: true tiled processing (512×512 tiles, 64px overlap, weighted blend stitching)
- [x] WholeImageProcessor: small-image path (whole-image scaling for ≤512×512)
- [x] ImageFormatConverter: format conversion utilities
- [x] LivePhotoCore integration
- [x] ExportParams extended with aiEnhanceConfig
@@ -120,14 +121,32 @@
- [ ] App size optimization
  - [ ] Use an INT8-quantized model (estimated reduction from 64MB to ~16MB)
  - [ ] Or download the model on demand via On-Demand Resources
- [ ] Performance optimization
  - [ ] Try a model that supports flexible input sizes (avoids scaling loss)
- [x] AI enhancement quality optimization (done ✅)
  - [x] True tiled processing: split large images into 512×512 tiles, run inference per tile, stitch the results
  - [x] 64px overlap + linear weight blending to eliminate seams
  - [x] Automatic processor selection: TiledImageProcessor for large images, WholeImageProcessor for small ones
  - [x] Information loss cut from ~86% to 0% (1080×1920 images are no longer downscaled)
- [ ] Advanced compositing (photo + video → Live Photo)
  - [ ] Dual import entry: pick a still photo and a video separately
  - [ ] Size alignment: the photo automatically matches the video dimensions
  - [ ] resolveKeyPhotoURL extension: support external photo input
  - [ ] UI design: photo crop/alignment preview
- [ ] Other performance optimizations
  - [ ] Try a model that supports flexible input sizes (EnumeratedShapes)
  - [ ] Explore Metal Performance Shaders as an alternative
---
## Decision memo (pending sign-off)
## Decision memo (resolved ✅)
- [ ] HDR default strategy: default to SDR vs. prompt the user on first run
- [ ] Encoding fallback strategy: fully automatic fallback vs. prompt to enable compatibility mode after a failure
- [ ] Which phase advanced compositing (photo + video) lands in (suggested M2)
- [x] **HDR default strategy**: ✅ keep defaulting to SDR conversion
  - Rationale: for Live Photo wallpapers, SDR renders more consistently across devices and brightness levels
  - Follow-up: M5 can add an "Advanced options" section in Settings for power users
- [x] **Encoding fallback strategy**: ✅ keep fully automatic fallback
  - Rationale: matches the "Just Works" philosophy; the diagnostics system already flags risks up front and suggests compatibility mode
  - Optional improvement: show "Encoding in compatibility mode..." in ProcessingView for transparency
- [x] **Advanced compositing** (photo + video): ✅ defer to M5 or M6
  - Rationale: an advanced, non-core feature; current focus is shipping M0-M4
  - Key points: dual import entry, size alignment logic, resolveKeyPhotoURL extension
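The 64px-overlap linear blending listed above can be expressed as a per-pixel weight function. The following is an illustrative sketch, not the app's actual TiledImageProcessor code (function and parameter names are assumed): each tile's weight ramps from 0 to 1 across the overlap on interior edges, so the weights of two adjacent tiles always sum to 1 at the seam.

```swift
import Foundation

/// Hypothetical per-pixel blend weight along one axis for a 512-wide tile
/// with a 64px overlap (names are illustrative, not the app's actual API).
func blendWeight(x: Int, tileWidth: Int = 512, overlap: Int = 64,
                 hasLeftNeighbor: Bool, hasRightNeighbor: Bool) -> Double {
    if hasLeftNeighbor && x < overlap {
        // Ramp up 0→1 over the left overlap region.
        return Double(x) / Double(overlap)
    }
    if hasRightNeighbor && x >= tileWidth - overlap {
        // Ramp down 1→0 over the right overlap region.
        return Double(tileWidth - x) / Double(overlap)
    }
    return 1.0 // Interior pixels keep full weight.
}

// At a seam (tile stride = 512 - 64 = 448): tile A's pixel x=480 weighs
// (512-480)/64 = 0.5, and the same output pixel is tile B's x=32, weighing
// 32/64 = 0.5 — the two weights sum to 1, which is what removes the seam.
```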


@@ -1,4 +1,4 @@
# App Store Listing Metadata
# Live Photo Studio - App Store Listing Metadata
> All copy and details to upload to App Store Connect
@@ -8,7 +8,7 @@
| Item | Content |
|------|------|
| **App Name** | Live Photo Maker |
| **App Name** | Live Photo Studio |
| **Subtitle** | Turn videos into live wallpapers in one tap |
| **Bundle ID** | xyz.let5see.livephotomaker |
| **Version** | 1.0 |
@@ -28,7 +28,7 @@
### Full Description
```
Live Photo Maker is an easy-to-use live wallpaper maker that brings your lock screen to life!
Live Photo Studio is an easy-to-use live wallpaper maker that brings your lock screen to life!
Key features:
@@ -79,7 +79,7 @@ Live Photo, live wallpaper, lock screen wallpaper, video conversion, AI enhancement, photo, wallpaper, GIF,
### Version 1.0
```
Live Photo Maker is officially released!
Live Photo Studio is officially released!
• Convert a video to a Live Photo in one tap
• Multiple aspect-ratio templates to fit any device


@@ -1,4 +1,4 @@
# Live Photo Maker Test Documentation
# Live Photo Studio Test Documentation
## Test Matrix


@@ -1,4 +1,4 @@
# Live Photo Maker User Manual
# Live Photo Studio User Manual
## Quick Start


@@ -1,27 +0,0 @@
# Document Index
## Requirements
- docs/PRD_LivePhoto_App_V0.2_2025-12-13.md (PRD, V0.2): defines goals, MVP scope, flows, acceptance criteria, and risks.
## Design
- docs/TECHSPEC_LivePhoto_App_V0.2_2025-12-13.md: technical spec (V0.2): architecture / models / compositing spec / error codes / caching, etc.
- docs/IXSPEC_LivePhoto_App_V0.2_2025-12-13.md: interaction spec (V0.2): page interactions / states / analytics / iPad adaptation, etc.
## Testing
- (TBD)
## User Manual
- (TBD)
## Knowledge Base
- docs_index.md: document index (this file)
- PROJECT_STRUCTURE.md: project structure (directory/file change log)
## Task Progress
- TASK.md: task list (broken down by phase)


@@ -0,0 +1,88 @@
import SwiftUI
/// Manages the app's in-app language selection and switching.
@Observable
final class LanguageManager {
/// Supported languages.
enum Language: String, CaseIterable, Identifiable {
case system = "system"
case zhHans = "zh-Hans"
case zhHant = "zh-Hant"
case en = "en"
var id: String { rawValue }
var displayName: String {
switch self {
case .system: return "跟随系统"
case .zhHans: return "简体中文"
case .zhHant: return "繁體中文"
case .en: return "English"
}
}
var locale: Locale? {
switch self {
case .system: return nil
case .zhHans: return Locale(identifier: "zh-Hans")
case .zhHant: return Locale(identifier: "zh-Hant")
case .en: return Locale(identifier: "en")
}
}
}
/// Shared singleton.
static let shared = LanguageManager()
/// Currently selected language; persisted to UserDefaults.
var current: Language {
didSet {
UserDefaults.standard.set(current.rawValue, forKey: "app_language")
applyLanguage()
}
}
/// Languages available in the picker.
var availableLanguages: [Language] {
Language.allCases
}
private init() {
let savedLanguage = UserDefaults.standard.string(forKey: "app_language") ?? "system"
self.current = Language(rawValue: savedLanguage) ?? .system
applyLanguage()
}
/// Applies the selection by overriding AppleLanguages (override removed when following the system).
private func applyLanguage() {
if current == .system {
UserDefaults.standard.removeObject(forKey: "AppleLanguages")
} else {
UserDefaults.standard.set([current.rawValue], forKey: "AppleLanguages")
}
UserDefaults.standard.synchronize()
}
/// Returns the localized string for the given key in the current language.
func localizedString(_ key: String) -> String {
if current == .system {
return String(localized: String.LocalizationValue(key))
}
guard let path = Bundle.main.path(forResource: current.rawValue, ofType: "lproj"),
let bundle = Bundle(path: path) else {
return String(localized: String.LocalizationValue(key))
}
return NSLocalizedString(key, bundle: bundle, comment: "")
}
}
// MARK: - Convenience
extension String {
/// Looks the string up as a localization key.
var localized: String {
LanguageManager.shared.localizedString(self)
}
}
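With this extension, a view can pass a key directly. A hypothetical call site (the view name and the "home.title" key are assumptions; the key must exist in Localizable.xcstrings):

```swift
import SwiftUI

// Hypothetical usage of the `.localized` convenience defined above.
struct TitleView: View {
    var body: some View {
        // Resolves "home.title" through LanguageManager, honoring the
        // in-app language override instead of only the system locale.
        Text("home.title".localized)
    }
}
```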

File diff suppressed because it is too large


@@ -37,6 +37,9 @@ struct EditorView: View {
// AI enhancement
@State private var aiEnhanceEnabled: Bool = false
@State private var aiModelNeedsDownload: Bool = false
@State private var aiModelDownloading: Bool = false
@State private var aiModelDownloadProgress: Double = 0
// Video diagnosis
@State private var videoDiagnosis: VideoDiagnosis?
@@ -370,10 +373,45 @@ struct EditorView: View {
}
}
.tint(.purple)
.disabled(!AIEnhancer.isAvailable())
.disabled(!AIEnhancer.isAvailable() || aiModelDownloading)
.onChange(of: aiEnhanceEnabled) { _, newValue in
if newValue {
checkAndDownloadModel()
}
}
if aiEnhanceEnabled {
// Model download progress
if aiModelDownloading {
VStack(alignment: .leading, spacing: 8) {
HStack(spacing: 8) {
ProgressView()
.scaleEffect(0.8)
Text("正在下载 AI 模型...")
.font(.caption)
.foregroundStyle(.secondary)
}
ProgressView(value: aiModelDownloadProgress)
.tint(.purple)
Text(String(format: "%.0f%%", aiModelDownloadProgress * 100))
.font(.caption2)
.foregroundStyle(.secondary)
}
.padding(.leading, 4)
}
if aiEnhanceEnabled && !aiModelDownloading {
VStack(alignment: .leading, spacing: 6) {
if aiModelNeedsDownload {
HStack(spacing: 4) {
Image(systemName: "arrow.down.circle")
.foregroundStyle(.orange)
.font(.caption)
Text("首次使用需下载 AI 模型(约 64MB")
.font(.caption)
}
}
HStack(spacing: 4) {
Image(systemName: "sparkles")
.foregroundStyle(.purple)
@@ -415,6 +453,10 @@ struct EditorView: View {
.padding(16)
.background(Color.purple.opacity(0.1))
.clipShape(RoundedRectangle(cornerRadius: 12))
.task {
// Check whether the AI model still needs to be downloaded
aiModelNeedsDownload = await AIEnhancer.needsDownload()
}
}
// MARK: -
@@ -681,6 +723,46 @@ struct EditorView: View {
return CropRect(x: cropX, y: cropY, width: cropWidth, height: cropHeight)
}
private func checkAndDownloadModel() {
guard aiEnhanceEnabled else { return }
Task {
// Re-check the download status
let needsDownload = await AIEnhancer.needsDownload()
await MainActor.run {
aiModelNeedsDownload = needsDownload
}
if needsDownload {
await MainActor.run {
aiModelDownloading = true
aiModelDownloadProgress = 0
}
do {
try await AIEnhancer.downloadModel { progress in
Task { @MainActor in
aiModelDownloadProgress = progress
}
}
await MainActor.run {
aiModelDownloading = false
aiModelNeedsDownload = false
}
} catch {
await MainActor.run {
aiModelDownloading = false
// Download failed; turn AI enhancement back off
aiEnhanceEnabled = false
}
print("Failed to download AI model: \(error)")
}
}
}
}
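`AIEnhancer.downloadModel` above is presumably backed by the On-Demand Resources support added in this change set. A minimal sketch of such a download using Foundation's `NSBundleResourceRequest` might look like the following — the class name and the "RealESRGANModel" tag are assumptions (the tag must match one configured on the Xcode target), not the app's actual ODRManager:

```swift
import Foundation

/// Hypothetical ODR download helper (illustrative sketch only).
final class ODRDownloader {
    // Keep the request alive while the resources are in use;
    // releasing it lets the system purge the downloaded assets.
    private var request: NSBundleResourceRequest?

    func download(progress: @escaping (Double) -> Void,
                  completion: @escaping (Error?) -> Void) {
        let request = NSBundleResourceRequest(tags: ["RealESRGANModel"])
        self.request = request
        // Surface download progress (0...1) to the UI.
        let observation = request.progress.observe(\.fractionCompleted) { p, _ in
            progress(p.fractionCompleted)
        }
        request.beginAccessingResources { error in
            observation.invalidate()
            completion(error) // nil on success; resources are now in the bundle
        }
    }

    deinit { request?.endAccessingResources() }
}
```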
private func startProcessing() {
Analytics.shared.log(.editorGenerateClick, parameters: [
"trimStart": trimStart,


@@ -71,11 +71,11 @@ struct HomeView: View {
}
VStack(spacing: DesignTokens.Spacing.sm) {
Text("Live Photo 制作")
Text(String(localized: "home.title"))
.font(.system(size: DesignTokens.FontSize.xxl, weight: .bold))
.foregroundColor(.textPrimary)
Text("选择视频,一键转换为动态壁纸")
Text(String(localized: "home.subtitle"))
.font(.system(size: DesignTokens.FontSize.base))
.foregroundColor(.textSecondary)
.multilineTextAlignment(.center)
@@ -90,7 +90,7 @@ struct HomeView: View {
HStack(spacing: DesignTokens.Spacing.sm) {
Image(systemName: "video.badge.plus")
.font(.system(size: 18, weight: .semibold))
Text("选择视频")
Text(String(localized: "home.selectVideo"))
.font(.system(size: DesignTokens.FontSize.base, weight: .semibold))
}
.foregroundColor(.white)
@@ -111,7 +111,7 @@ struct HomeView: View {
HStack(spacing: DesignTokens.Spacing.sm) {
ProgressView()
.tint(.accentPurple)
Text("正在加载视频...")
Text(String(localized: "home.loading"))
.font(.system(size: DesignTokens.FontSize.sm))
.foregroundColor(.textSecondary)
}
@@ -149,7 +149,7 @@ struct HomeView: View {
.foregroundColor(.accentOrange)
}
Text("快速上手")
Text(String(localized: "home.quickStart"))
.font(.system(size: DesignTokens.FontSize.lg, weight: .semibold))
.foregroundColor(.textPrimary)
@@ -157,15 +157,15 @@ struct HomeView: View {
}
VStack(alignment: .leading, spacing: DesignTokens.Spacing.md) {
QuickStartStep(number: 1, text: "点击上方「选择视频」导入素材", color: .accentPurple)
QuickStartStep(number: 2, text: "调整比例和时长,选择封面帧", color: .accentCyan)
QuickStartStep(number: 3, text: "开启 AI 增强提升画质(可选)", color: .accentPink)
QuickStartStep(number: 4, text: "生成后按引导设置为壁纸", color: .accentGreen)
QuickStartStep(number: 1, text: String(localized: "home.quickStart.step1"), color: .accentPurple)
QuickStartStep(number: 2, text: String(localized: "home.quickStart.step2"), color: .accentCyan)
QuickStartStep(number: 3, text: String(localized: "home.quickStart.step3"), color: .accentPink)
QuickStartStep(number: 4, text: String(localized: "home.quickStart.step4"), color: .accentGreen)
}
HStack {
Spacer()
Text("完成后的作品会显示在这里")
Text(String(localized: "home.emptyHint"))
.font(.system(size: DesignTokens.FontSize.xs))
.foregroundColor(.textMuted)
Spacer()
@@ -190,13 +190,13 @@ struct HomeView: View {
.foregroundColor(.accentCyan)
}
Text("最近作品")
Text(String(localized: "home.recentWorks"))
.font(.system(size: DesignTokens.FontSize.lg, weight: .semibold))
.foregroundColor(.textPrimary)
Spacer()
Text("\(recentWorks.recentWorks.count)")
Text(String(localized: "home.worksCount \(recentWorks.recentWorks.count)"))
.font(.system(size: DesignTokens.FontSize.sm))
.foregroundColor(.textMuted)
}
@@ -224,7 +224,7 @@ struct HomeView: View {
do {
guard let movie = try await item.loadTransferable(type: VideoTransferable.self) else {
errorMessage = "无法加载视频"
errorMessage = String(localized: "home.loadFailed")
isLoading = false
return
}


@@ -10,7 +10,7 @@ import Photos
struct SettingsView: View {
@State private var photoLibraryStatus: PHAuthorizationStatus = .notDetermined
@State private var cacheSize: String = "计算中..."
@State private var cacheSize: String = String(localized: "common.calculating")
@State private var showingClearCacheAlert = false
@State private var showingClearRecentWorksAlert = false
@State private var feedbackPackageURL: URL?
@@ -21,7 +21,7 @@ struct SettingsView: View {
// Permissions
Section {
HStack {
Label("相册权限", systemImage: "photo.on.rectangle")
Label(String(localized: "settings.photoPermission"), systemImage: "photo.on.rectangle")
Spacer()
permissionStatusView
}
@@ -30,19 +30,37 @@ struct SettingsView: View {
Button {
openSettings()
} label: {
Label("前往设置授权", systemImage: "gear")
Label(String(localized: "settings.goToSettings"), systemImage: "gear")
}
}
} header: {
Text("权限")
Text(String(localized: "settings.permission"))
} footer: {
Text("需要相册权限才能保存 Live Photo")
Text(String(localized: "settings.permissionFooter"))
}
// Language
Section {
Picker(selection: Binding(
get: { LanguageManager.shared.current },
set: { LanguageManager.shared.current = $0 }
)) {
ForEach(LanguageManager.Language.allCases) { language in
Text(language.displayName).tag(language)
}
} label: {
Label(String(localized: "settings.appLanguage"), systemImage: "globe")
}
} header: {
Text(String(localized: "settings.language"))
} footer: {
Text(String(localized: "settings.languageChangeHint"))
}
// Storage
Section {
HStack {
Label("缓存大小", systemImage: "internaldrive")
Label(String(localized: "settings.cacheSize"), systemImage: "internaldrive")
Spacer()
Text(cacheSize)
.foregroundStyle(.secondary)
@@ -51,18 +69,18 @@ struct SettingsView: View {
Button(role: .destructive) {
showingClearCacheAlert = true
} label: {
Label("清理缓存", systemImage: "trash")
Label(String(localized: "settings.clearCache"), systemImage: "trash")
}
Button(role: .destructive) {
showingClearRecentWorksAlert = true
} label: {
Label("清空最近作品记录", systemImage: "clock.arrow.circlepath")
Label(String(localized: "settings.clearRecentWorks"), systemImage: "clock.arrow.circlepath")
}
} header: {
Text("存储")
Text(String(localized: "settings.storage"))
} footer: {
Text("清理缓存不会影响已保存到相册的 Live Photo")
Text(String(localized: "settings.storageFooter"))
}
// Feedback
@@ -70,27 +88,27 @@ struct SettingsView: View {
Button {
exportFeedbackPackage()
} label: {
Label("导出诊断报告", systemImage: "doc.text")
Label(String(localized: "settings.exportDiagnostics"), systemImage: "doc.text")
}
Link(destination: URL(string: "mailto:support@let5see.xyz")!) {
Label("反馈问题", systemImage: "envelope")
Label(String(localized: "settings.contactUs"), systemImage: "envelope")
}
// TODO: replace with the real App Store App ID after release
Link(destination: URL(string: "https://apps.apple.com/app/id000000000")!) {
Label("App Store 评分", systemImage: "star")
Label(String(localized: "settings.rateApp"), systemImage: "star")
}
} header: {
Text("反馈")
Text(String(localized: "settings.feedback"))
} footer: {
Text("诊断报告仅包含日志和参数,不含媒体内容")
Text(String(localized: "settings.feedbackFooter"))
}
// About
Section {
HStack {
Label("版本", systemImage: "info.circle")
Label(String(localized: "settings.version"), systemImage: "info.circle")
Spacer()
Text(appVersion)
.foregroundStyle(.secondary)
@@ -99,39 +117,39 @@ struct SettingsView: View {
NavigationLink {
PrivacyPolicyView()
} label: {
Label("隐私政策", systemImage: "hand.raised")
Label(String(localized: "settings.privacyPolicy"), systemImage: "hand.raised")
}
NavigationLink {
TermsOfServiceView()
} label: {
Label("使用条款", systemImage: "doc.text")
Label(String(localized: "settings.termsOfService"), systemImage: "doc.text")
}
} header: {
Text("关于")
Text(String(localized: "settings.about"))
}
}
.navigationTitle("设置")
.navigationTitle(String(localized: "settings.title"))
.navigationBarTitleDisplayMode(.inline)
.onAppear {
checkPermissionStatus()
calculateCacheSize()
}
.alert("清理缓存", isPresented: $showingClearCacheAlert) {
Button("取消", role: .cancel) {}
Button("清理", role: .destructive) {
.alert(String(localized: "settings.clearCache"), isPresented: $showingClearCacheAlert) {
Button(String(localized: "common.cancel"), role: .cancel) {}
Button(String(localized: "settings.clear"), role: .destructive) {
clearCache()
}
} message: {
Text("确定要清理所有缓存文件吗?")
Text(String(localized: "settings.clearCacheConfirm"))
}
.alert("清空记录", isPresented: $showingClearRecentWorksAlert) {
Button("取消", role: .cancel) {}
Button("清空", role: .destructive) {
.alert(String(localized: "settings.clearRecordsTitle"), isPresented: $showingClearRecentWorksAlert) {
Button(String(localized: "common.cancel"), role: .cancel) {}
Button(String(localized: "settings.clear"), role: .destructive) {
clearRecentWorks()
}
} message: {
Text("确定要清空最近作品记录吗?这不会删除相册中的 Live Photo。")
Text(String(localized: "settings.clearRecordsConfirm"))
}
.sheet(isPresented: $showingShareSheet) {
if let url = feedbackPackageURL {
@@ -144,19 +162,19 @@ struct SettingsView: View {
private var permissionStatusView: some View {
switch photoLibraryStatus {
case .authorized:
Label("已授权", systemImage: "checkmark.circle.fill")
Label(String(localized: "settings.authorized"), systemImage: "checkmark.circle.fill")
.foregroundStyle(.green)
.labelStyle(.iconOnly)
case .limited:
Label("部分授权", systemImage: "exclamationmark.circle.fill")
Label(String(localized: "settings.limited"), systemImage: "exclamationmark.circle.fill")
.foregroundStyle(.orange)
.labelStyle(.iconOnly)
case .denied, .restricted:
Label("未授权", systemImage: "xmark.circle.fill")
Label(String(localized: "settings.denied"), systemImage: "xmark.circle.fill")
.foregroundStyle(.red)
.labelStyle(.iconOnly)
case .notDetermined:
Label("未确定", systemImage: "questionmark.circle.fill")
Label(String(localized: "settings.notDetermined"), systemImage: "questionmark.circle.fill")
.foregroundStyle(.secondary)
.labelStyle(.iconOnly)
@unknown default: