# Compare commits

**10 commits**: `d97152b5df` ... `143c471714`

- `143c471714`
- `d6357c7b32`
- `64e962b6a4`
- `6e60bea509`
- `bcf0dd71a7`
- `6d8a3a85a6`
- `bf3f9d9eb2`
- `3f503c1050`
- `3d1677bdb1`
- `cc6e137994`
## .gitignore (vendored, 1 line changed)
```diff
@@ -113,3 +113,4 @@ to-live-photo/to-live-photo/build/
 
 # PyTorch models (use Core ML instead)
 *.pth
+.serena/
```
## CLAUDE.md (32 lines changed)
```diff
@@ -3,6 +3,7 @@
 **Bundle ID**: `xyz.let5see.livephotomaker`
 **Minimum supported OS**: iOS/iPadOS 18.0
 **Tech stack**: SwiftUI + Swift Concurrency + Core ML
+**Language**: responses in Chinese
 
 ## Project structure
 
```
```diff
@@ -50,6 +51,13 @@ xcodebuild -scheme to-live-photo -configuration Release -destination 'generic/pl
 - No refactoring: code unrelated to the current task
 - Never run: destructive deletion commands (e.g. rm -rf involving ~ or / paths)
 
+## Execution safety
+
+- Assess before running: could the command hang (interactive, network-dependent, long-running)?
+- No interactive commands: do not use `-i` flags or commands that require stdin input
+- Long-task strategy: run in the background + set a timeout + monitor progress
+- Handle blocking: if a command is unresponsive past its expected time, interrupt it proactively rather than wait indefinitely
+
 ## Code conventions
 
 - Follow `DesignSystem.swift` tokens; no hard-coded colors/spacing
```
```diff
@@ -57,3 +65,27 @@ xcodebuild -scheme to-live-photo -configuration Release -destination 'generic/pl
 - New Views must support dark mode and iPad
 - Touch targets ≥ 44pt
 - Error handling uses the `LivePhotoError` enum; no bare `throw`
+
+## Documentation management
+
+### Core principle
+> Do not create documents that need manual synchronization. If information changes with the code, either make the code self-describing or accept that the document will inevitably go stale.
+
+### Document categories
+| Type | File | Update policy |
+|-----|------|---------|
+| Constitution | `CLAUDE.md` | Modify with care; every change needs an explicit intent |
+| Tasks | `TASK.md` | Actively updated; tracks milestone progress |
+| Operations | `docs/TEST_MATRIX.md`, `docs/USER_GUIDE.md` | Updated in sync with feature changes |
+| Release | `docs/APP_STORE_METADATA.md` | Updated before each version release |
+| Archive | `docs/archive/` | Read-only; no longer updated |
+
+### Never create
+- Directory-structure documents (e.g. PROJECT_STRUCTURE.md): the code is the structure
+- Documentation indexes (e.g. docs_index.md): browse the docs/ directory directly
+- Any descriptive document that relies on "remembering to sync" it
+
+### Update triggers
+- Feature added/changed → sync the relevant sections of `USER_GUIDE.md`
+- New test scenario → sync `TEST_MATRIX.md`
+- Archived documents → never updated; preserved as a historical record
```
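The long-task strategy added above (background execution, timeout, proactive interrupt) can be sketched language-agnostically. This Python helper is illustrative only and is not part of the repository; the 60-second default budget is an assumption:

```python
import subprocess

def run_safely(cmd, timeout_s=60):
    """Run a command non-interactively with a hard time budget.

    Returns (returncode, stdout); returncode is None when the command
    was interrupted after exceeding the budget.
    """
    try:
        result = subprocess.run(
            cmd,
            capture_output=True,
            text=True,
            timeout=timeout_s,
            stdin=subprocess.DEVNULL,  # never block on interactive input
        )
        return result.returncode, result.stdout
    except subprocess.TimeoutExpired:
        # Interrupt proactively instead of waiting indefinitely
        return None, ""
```

`subprocess.run` kills the child process itself when the timeout expires, so a hung command cannot outlive the helper.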
## PROJECT_STRUCTURE.md (deleted)

```diff
@@ -1,52 +0,0 @@
-# Project structure
-
-> Note: this file records changes to the project directory/file structure. It must be updated whenever directories or files are added or removed.
-
-## Root directory
-
-- Package.swift
-- docs/
-- Sources/
-- Tests/
-- to-live-photo/
-- docs_index.md
-- PROJECT_STRUCTURE.md
-- TASK.md
-- .DS_Store
-
-## docs/
-
-- PRD_LivePhoto_App_V0.2_2025-12-13.md
-- TECHSPEC_LivePhoto_App_V0.2_2025-12-13.md
-- IXSPEC_LivePhoto_App_V0.2_2025-12-13.md
-- .DS_Store
-
-## Sources/
-
-- LivePhotoCore/
-  - LivePhotoCore.swift
-
-## Tests/
-
-- LivePhotoCoreTests/
-  - LivePhotoCoreTests.swift
-
-## to-live-photo/
-
-- to-live-photo.xcodeproj/
-- to-live-photo/
-  - Assets.xcassets/
-  - AppState.swift
-  - ContentView.swift
-  - to_live_photoApp.swift
-  - Views/
-    - HomeView.swift
-    - EditorView.swift
-    - ProcessingView.swift
-    - ResultView.swift
-    - WallpaperGuideView.swift
-- to-live-photoTests/
-  - to_live_photoTests.swift
-- to-live-photoUITests/
-  - to_live_photoUITests.swift
-  - to_live_photoUITestsLaunchTests.swift
```
## Package.swift

```diff
@@ -18,9 +18,8 @@ let package = Package(
             name: "LivePhotoCore",
             dependencies: [],
             resources: [
-                .copy("Resources/metadata.mov"),
-                // AI super-resolution model (Real-ESRGAN x4plus)
-                .process("Resources/RealESRGAN_x4plus.mlmodel")
+                .copy("Resources/metadata.mov")
+                // AI model moved to On-Demand Resources; downloaded on demand
             ]
         ),
         .testTarget(
```
## README.md (new file, 127 lines)
```diff
@@ -0,0 +1,127 @@
+# Live Photo Studio
+
+> Convert any video into an iOS Live Photo, with support for animated lock-screen wallpapers
+
+[](https://developer.apple.com/ios/)
+[](https://swift.org/)
+[](LICENSE)
+
+## ✨ Features
+
+- 📹 **Video to Live Photo**: import a video from your photo library and generate a system-recognized Live Photo in one tap
+- ✂️ **Precise trimming**: duration trim (1~1.5s) plus aspect-ratio templates (lock screen / edge-to-edge / 4:3 / 1:1)
+- 🎨 **AI super-resolution**: integrated Real-ESRGAN for smart sharpness enhancement
+- 🖼️ **Cover-frame selection**: a slider to pick the best still cover
+- 📱 **Wallpaper guide**: setup walkthrough adapted to your system version
+
+## 📱 Requirements
+
+- iOS / iPadOS 18.0+
+- Xcode 16.0+
+- Swift 6.0
+
+## 🚀 Quick start
+
+### Clone the project
+
+```bash
+git clone https://github.com/yourusername/to-live-photo.git
+cd to-live-photo
+```
+
+### Build and run
+
+```bash
+# Simulator build
+xcodebuild -scheme to-live-photo \
+  -destination 'platform=iOS Simulator,name=iPhone 16 Pro' \
+  build
+
+# Device archive
+xcodebuild -scheme to-live-photo \
+  -configuration Release \
+  -destination 'generic/platform=iOS' \
+  -archivePath build/to-live-photo.xcarchive \
+  archive
+```
+
+## 🏗️ Project structure
+
+```
+to-live-photo/
+├── Sources/LivePhotoCore/       # Swift Package: core library
+│   ├── LivePhotoCore.swift      # Generation pipeline, data models
+│   ├── AIEnhancer/              # Real-ESRGAN super-resolution
+│   └── Resources/               # metadata.mov, ML models
+├── to-live-photo/               # iOS app
+│   ├── Views/                   # SwiftUI views
+│   │   ├── HomeView.swift       # Home / import
+│   │   ├── EditorView.swift     # Editing and trimming
+│   │   ├── ProcessingView.swift # Processing progress
+│   │   ├── ResultView.swift     # Save results
+│   │   └── WallpaperGuideView.swift # Wallpaper guide
+│   ├── AppState.swift           # Global state management
+│   └── DesignSystem.swift       # Soft UI design tokens
+└── docs/                        # Documentation
+    ├── USER_GUIDE.md            # User manual
+    ├── TEST_MATRIX.md           # Test matrix
+    └── APP_STORE_METADATA.md    # App Store metadata
+```
+
+## 🔧 Architecture
+
+### Generation pipeline
+
+```
+normalize → extractKeyFrame → aiEnhance → writePhotoMetadata → writeVideoMetadata → saveToAlbum → validate
+```
+
+### Core parameters
+
+| Parameter | Default | Notes |
+|-----|-------|-----|
+| Duration | 0.917s | Matches native iPhone Live Photos |
+| Resolution | 1080×1920 | Portrait maximum; compatibility mode can drop to 720p |
+| Frame rate | 60fps | Compatibility mode can drop to 30fps |
+| Codec | H.264 | Fallback strategy for broad compatibility |
+| HDR | Converted to SDR | More stable for the wallpaper use case |
+
+### AI super-resolution
+
+- Model: Real-ESRGAN x4plus (Core ML, 64MB)
+- Processing: 512×512 tiles + 64px overlap + linear blending
+- Upscale: about 2.25× effective (input 512 → output 2048 per tile)
+
+## 📋 Development conventions
+
+### Git commit types
+
+- `feat`: new feature
+- `fix`: bug fix
+- `refactor`: refactoring (no behavior change)
+- `chore`: build, dependencies, tooling
+- `docs`: documentation only
+
+### Code conventions
+
+- Follow `DesignSystem.swift` tokens; no hard-coded colors/spacing
+- New Views must include `accessibilityLabel`
+- New Views must support dark mode and iPad
+- Touch targets ≥ 44pt
+
+## 📄 Documentation
+
+| Document | Description |
+|-----|-----|
+| [CLAUDE.md](CLAUDE.md) | AI assistant instructions (constitution document) |
+| [TASK.md](TASK.md) | Milestones and task tracking |
+| [docs/USER_GUIDE.md](docs/USER_GUIDE.md) | User manual |
+| [docs/TEST_MATRIX.md](docs/TEST_MATRIX.md) | Test case matrix |
+
+## 📜 License
+
+MIT License; see [LICENSE](LICENSE)
+
+---
+
+<p align="center">Made with ❤️ for iOS Live Photos</p>
```
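The "about 2.25×" figure can look inconsistent with a 4× model (512 → 2048 per tile). One way to reconcile it, based on the 4320px cap the tiled processor applies via `capToMaxDimension`: the stitched 4× output is downscaled so its longest side never exceeds 4320px. A sketch of that arithmetic (the function name is illustrative):

```python
def effective_scale(width, height, model_scale=4, max_dim=4320):
    # The model upscales each tile 4x; the stitched output is then
    # capped at max_dim on its longest side, reducing the net scale.
    out_long = max(width, height) * model_scale
    if out_long <= max_dim:
        return float(model_scale)
    return model_scale * max_dim / out_long

# For the default 1080x1920 pipeline output: 4 * 4320 / 7680 = 2.25
```

Small inputs keep the full 4×; only large frames like the 1080×1920 default are reduced to 2.25×.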
```diff
@@ -120,6 +120,30 @@ public actor AIEnhancer {
         return true
     }
 
+    // MARK: - Model Download (ODR)
+
+    /// Check if AI model needs to be downloaded
+    public static func needsDownload() async -> Bool {
+        let available = await ODRManager.shared.isModelAvailable()
+        return !available
+    }
+
+    /// Get current model download state
+    public static func getDownloadState() async -> ModelDownloadState {
+        await ODRManager.shared.getDownloadState()
+    }
+
+    /// Download AI model with progress callback
+    /// - Parameter progress: Progress callback (0.0 to 1.0)
+    public static func downloadModel(progress: @escaping @Sendable (Double) -> Void) async throws {
+        try await ODRManager.shared.downloadModel(progress: progress)
+    }
+
+    /// Release ODR resources when AI enhancement is no longer needed
+    public static func releaseModelResources() async {
+        await ODRManager.shared.releaseResources()
+    }
+
     // MARK: - Model Management
 
     /// Preload the model (call during app launch or settings change)
```
```diff
@@ -181,14 +205,29 @@ public actor AIEnhancer {
             throw AIEnhanceError.modelNotFound
         }
 
-        // Process image (no tiling - model has fixed 1280x1280 input)
-        let wholeImageProcessor = WholeImageProcessor()
-        let enhancedImage = try await wholeImageProcessor.processImage(
-            image,
-            processor: processor,
-            progress: progress
-        )
+        // Choose processor based on image size
+        // - Small images (≤ 512x512): use WholeImageProcessor (faster, single inference)
+        // - Large images (> 512 in either dimension): use TiledImageProcessor (preserves detail)
+        let usesTiling = image.width > RealESRGANProcessor.inputSize || image.height > RealESRGANProcessor.inputSize
+
+        let enhancedImage: CGImage
+        if usesTiling {
+            logger.info("Using tiled processing for large image")
+            let tiledProcessor = TiledImageProcessor()
+            enhancedImage = try await tiledProcessor.processImage(
+                image,
+                processor: processor,
+                progress: progress
+            )
+        } else {
+            logger.info("Using whole image processing for small image")
+            let wholeImageProcessor = WholeImageProcessor()
+            enhancedImage = try await wholeImageProcessor.processImage(
+                image,
+                processor: processor,
+                progress: progress
+            )
+        }
 
         let processingTime = (CFAbsoluteTimeGetCurrent() - startTime) * 1000
         let enhancedSize = CGSize(width: enhancedImage.width, height: enhancedImage.height)
```
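The dispatch rule in the hunk above reduces to a one-line predicate. A sketch, assuming `RealESRGANProcessor.inputSize` is 512 per the comment's "≤ 512x512" threshold:

```python
INPUT_SIZE = 512  # assumed value of RealESRGANProcessor.inputSize

def uses_tiling(width, height, input_size=INPUT_SIZE):
    # Tile only when either dimension exceeds the model's fixed input;
    # smaller images take the single-inference whole-image path.
    return width > input_size or height > input_size
```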
## Sources/LivePhotoCore/AIEnhancer/ODRManager.swift (new file, 201 lines)
```diff
@@ -0,0 +1,201 @@
+//
+//  ODRManager.swift
+//  LivePhotoCore
+//
+//  On-Demand Resources manager for AI model download.
+//
+
+import Foundation
+import os
+
+// MARK: - Download State
+
+/// Model download state
+public enum ModelDownloadState: Sendable, Equatable {
+    case notDownloaded
+    case downloading(progress: Double)
+    case downloaded
+    case failed(String)
+
+    public static func == (lhs: ModelDownloadState, rhs: ModelDownloadState) -> Bool {
+        switch (lhs, rhs) {
+        case (.notDownloaded, .notDownloaded): return true
+        case (.downloaded, .downloaded): return true
+        case let (.downloading(p1), .downloading(p2)): return p1 == p2
+        case let (.failed(e1), .failed(e2)): return e1 == e2
+        default: return false
+        }
+    }
+}
+
+// MARK: - ODR Manager
+
+/// On-Demand Resources manager for AI model
+public actor ODRManager {
+    public static let shared = ODRManager()
+
+    private static let modelTag = "ai-model"
+    private static let modelName = "RealESRGAN_x4plus"
+
+    private var resourceRequest: NSBundleResourceRequest?
+    private var cachedModelURL: URL?
+    private let logger = Logger(subsystem: "LivePhotoCore", category: "ODRManager")
+
+    private init() {}
+
+    // MARK: - Public API
+
+    /// Check if model is available locally (either in ODR cache or bundle)
+    public func isModelAvailable() async -> Bool {
+        // First check if we have a cached URL
+        if let url = cachedModelURL, FileManager.default.fileExists(atPath: url.path) {
+            return true
+        }
+
+        // Check bundle (development/fallback)
+        if getBundleModelURL() != nil {
+            return true
+        }
+
+        // Check ODR conditionally (only available in app context)
+        return await checkODRAvailability()
+    }
+
+    /// Get current download state
+    public func getDownloadState() async -> ModelDownloadState {
+        if await isModelAvailable() {
+            return .downloaded
+        }
+
+        if resourceRequest != nil {
+            return .downloading(progress: 0)
+        }
+
+        return .notDownloaded
+    }
+
+    /// Download model with progress callback
+    /// - Parameter progress: Progress callback (0.0 to 1.0)
+    public func downloadModel(progress: @escaping @Sendable (Double) -> Void) async throws {
+        // Check if already available
+        if await isModelAvailable() {
+            logger.info("Model already available, skipping download")
+            progress(1.0)
+            return
+        }
+
+        logger.info("Starting ODR download for model: \(Self.modelTag)")
+
+        // Create resource request
+        let request = NSBundleResourceRequest(tags: [Self.modelTag])
+        self.resourceRequest = request
+
+        // Set up progress observation
+        let observation = request.progress.observe(\.fractionCompleted) { progressObj, _ in
+            Task { @MainActor in
+                progress(progressObj.fractionCompleted)
+            }
+        }
+
+        defer {
+            observation.invalidate()
+        }
+
+        do {
+            // Begin accessing resources
+            try await request.beginAccessingResources()
+
+            logger.info("ODR download completed successfully")
+
+            // Find and cache the model URL
+            if let url = findModelInBundle(request.bundle) {
+                cachedModelURL = url
+                logger.info("Model cached at: \(url.path)")
+            }
+
+            progress(1.0)
+        } catch {
+            logger.error("ODR download failed: \(error.localizedDescription)")
+            self.resourceRequest = nil
+            throw AIEnhanceError.modelLoadFailed("Download failed: \(error.localizedDescription)")
+        }
+    }
+
+    /// Get model URL (after download or from bundle)
+    public func getModelURL() -> URL? {
+        // Return cached URL if available
+        if let url = cachedModelURL {
+            return url
+        }
+
+        // Check bundle fallback
+        if let url = getBundleModelURL() {
+            return url
+        }
+
+        // Try to find in ODR bundle
+        if let request = resourceRequest, let url = findModelInBundle(request.bundle) {
+            cachedModelURL = url
+            return url
+        }
+
+        return nil
+    }
+
+    /// Release ODR resources when not in use
+    public func releaseResources() {
+        resourceRequest?.endAccessingResources()
+        resourceRequest = nil
+        cachedModelURL = nil
+        logger.info("ODR resources released")
+    }
+
+    // MARK: - Private Helpers
+
+    private func checkODRAvailability() async -> Bool {
+        // Use conditionallyBeginAccessingResources to check without triggering download
+        let request = NSBundleResourceRequest(tags: [Self.modelTag])
+
+        return await withCheckedContinuation { continuation in
+            request.conditionallyBeginAccessingResources { available in
+                if available {
+                    // Model is already downloaded via ODR
+                    self.logger.debug("ODR model is available locally")
+                }
+                continuation.resume(returning: available)
+            }
+        }
+    }
+
+    private func getBundleModelURL() -> URL? {
+        // Try main bundle first
+        if let url = Bundle.main.url(forResource: Self.modelName, withExtension: "mlmodelc") {
+            return url
+        }
+        if let url = Bundle.main.url(forResource: Self.modelName, withExtension: "mlpackage") {
+            return url
+        }
+
+        // Try SPM bundle (development)
+        #if SWIFT_PACKAGE
+        if let url = Bundle.module.url(forResource: Self.modelName, withExtension: "mlmodelc") {
+            return url
+        }
+        if let url = Bundle.module.url(forResource: Self.modelName, withExtension: "mlpackage") {
+            return url
+        }
+        #endif
+
+        return nil
+    }
+
+    private func findModelInBundle(_ bundle: Bundle) -> URL? {
+        if let url = bundle.url(forResource: Self.modelName, withExtension: "mlmodelc") {
+            return url
+        }
+        if let url = bundle.url(forResource: Self.modelName, withExtension: "mlpackage") {
+            return url
+        }
+        return nil
+    }
+}
```
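`ODRManager.getDownloadState()` resolves the state by priority: local availability wins, then an in-flight request, else not downloaded. The decision logic, mirrored as a sketch:

```python
def download_state(model_available, request_in_flight):
    # Priority order mirrors ODRManager.getDownloadState(): an available
    # model always reports "downloaded", even if a stale request object
    # still exists; live progress is reported separately via callback.
    if model_available:
        return "downloaded"
    if request_in_flight:
        return "downloading"
    return "notDownloaded"
```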
```diff
@@ -29,7 +29,7 @@ actor RealESRGANProcessor {
 
     init() {}
 
-    /// Load Core ML model from bundle
+    /// Load Core ML model from ODR or bundle
    func loadModel() async throws {
         guard model == nil else {
             logger.debug("Model already loaded")
@@ -38,30 +38,34 @@ actor RealESRGANProcessor {
 
         logger.info("Loading Real-ESRGAN Core ML model...")
 
-        // Try to find model in bundle
-        let modelName = "RealESRGAN_x4plus"
-        var modelURL: URL?
-
-        // Try SPM bundle first
-        #if SWIFT_PACKAGE
-        if let url = Bundle.module.url(forResource: modelName, withExtension: "mlmodelc") {
-            modelURL = url
-        } else if let url = Bundle.module.url(forResource: modelName, withExtension: "mlpackage") {
-            modelURL = url
-        }
-        #endif
-
-        // Try main bundle
-        if modelURL == nil {
-            if let url = Bundle.main.url(forResource: modelName, withExtension: "mlmodelc") {
-                modelURL = url
-            } else if let url = Bundle.main.url(forResource: modelName, withExtension: "mlpackage") {
-                modelURL = url
-            }
-        }
+        // 1. Try ODRManager first (supports both ODR download and bundle fallback)
+        var modelURL = await ODRManager.shared.getModelURL()
+
+        // 2. If ODRManager returns nil, try direct bundle lookup as fallback
+        if modelURL == nil {
+            let modelName = "RealESRGAN_x4plus"
+
+            // Try main bundle
+            if let url = Bundle.main.url(forResource: modelName, withExtension: "mlmodelc") {
+                modelURL = url
+            } else if let url = Bundle.main.url(forResource: modelName, withExtension: "mlpackage") {
+                modelURL = url
+            }
+
+            // Try SPM bundle (development)
+            #if SWIFT_PACKAGE
+            if modelURL == nil {
+                if let url = Bundle.module.url(forResource: modelName, withExtension: "mlmodelc") {
+                    modelURL = url
+                } else if let url = Bundle.module.url(forResource: modelName, withExtension: "mlpackage") {
+                    modelURL = url
+                }
+            }
+            #endif
+        }
 
         guard let url = modelURL else {
-            logger.error("Model file not found: \(modelName)")
+            logger.error("Model not found. Please download the AI model first.")
             throw AIEnhanceError.modelNotFound
         }
 
```
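`loadModel()` now resolves the model URL as an ordered fallback chain (ODR cache, then main bundle, then SPM bundle). The shape of that lookup, with hypothetical callables standing in for the bundle APIs:

```python
def resolve_model_url(*lookups):
    # Return the first non-None result from an ordered list of lookups,
    # e.g. (odr_cache, main_bundle, spm_bundle). A final None maps to
    # AIEnhanceError.modelNotFound in the Swift code.
    for lookup in lookups:
        url = lookup()
        if url is not None:
            return url
    return None
```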
## WholeImageProcessor.swift → TiledImageProcessor.swift (renamed)

```diff
@@ -1,9 +1,10 @@
 //
-//  WholeImageProcessor.swift
+//  TiledImageProcessor.swift
 //  LivePhotoCore
 //
-//  Processes images for Real-ESRGAN model with fixed 512x512 input.
-//  Handles scaling, padding, and cropping to preserve original aspect ratio.
+//  True tiled image processing for Real-ESRGAN model.
+//  Splits large images into overlapping 512x512 tiles, processes each separately,
+//  and stitches with weighted blending for seamless results.
 //
 
 import CoreGraphics
@@ -11,12 +12,36 @@ import CoreVideo
 import Foundation
 import os
 
-/// Processes images for the Real-ESRGAN model
-/// The model requires fixed 512x512 input and outputs 2048x2048
-struct WholeImageProcessor {
-    private let logger = Logger(subsystem: "LivePhotoCore", category: "WholeImageProcessor")
+// MARK: - Types
 
-    /// Process an image through the AI model
+/// Represents a single tile for processing
+struct ImageTile {
+    let image: CGImage
+    let originX: Int       // Position in source image
+    let originY: Int
+    let outputOriginX: Int // Position in output image (scaled)
+    let outputOriginY: Int
+}
+
+/// Tiling configuration
+struct TilingConfig {
+    let tileSize: Int = 512
+    let overlap: Int = 64  // Blending zone for seamless stitching
+    let modelScale: Int = 4
+
+    var effectiveTileSize: Int { tileSize - overlap * 2 } // 384
+    var outputTileSize: Int { tileSize * modelScale }     // 2048
+    var outputOverlap: Int { overlap * modelScale }       // 256
+}
+
+// MARK: - TiledImageProcessor
+
+/// Processes large images by splitting into tiles
+struct TiledImageProcessor {
+    private let config = TilingConfig()
+    private let logger = Logger(subsystem: "LivePhotoCore", category: "TiledImageProcessor")
+
+    /// Process an image through the AI model using tiled approach
     /// - Parameters:
     ///   - inputImage: Input CGImage to enhance
     ///   - processor: RealESRGAN processor for inference
```
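`TilingConfig`'s derived values follow directly from its three constants; checking the arithmetic annotated in the comments:

```python
TILE_SIZE, OVERLAP, MODEL_SCALE = 512, 64, 4

effective_tile_size = TILE_SIZE - OVERLAP * 2  # stride between tile origins
output_tile_size = TILE_SIZE * MODEL_SCALE     # model output per tile
output_overlap = OVERLAP * MODEL_SCALE         # blending zone in output space
```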
```diff
@@ -30,11 +55,369 @@ struct WholeImageProcessor {
         let originalWidth = inputImage.width
         let originalHeight = inputImage.height
 
-        logger.info("Processing \(originalWidth)x\(originalHeight) image")
+        logger.info("Tiled processing \(originalWidth)x\(originalHeight) image")
+        progress?(0.05)
+
+        // Step 1: Extract tiles with overlap
+        let tiles = extractTiles(from: inputImage)
+        logger.info("Extracted \(tiles.count) tiles")
+        progress?(0.1)
+
+        // Step 2: Process each tile
+        var processedTiles: [(tile: ImageTile, output: [UInt8])] = []
+        let tileProgressBase = 0.1
+        let tileProgressRange = 0.7
+
+        for (index, tile) in tiles.enumerated() {
+            try Task.checkCancellation()
+
+            let pixelBuffer = try ImageFormatConverter.cgImageToPixelBuffer(tile.image)
+            let outputData = try await processor.processImage(pixelBuffer)
+            processedTiles.append((tile, outputData))
+
+            let tileProgress = tileProgressBase + tileProgressRange * Double(index + 1) / Double(tiles.count)
+            progress?(tileProgress)
+
+            // Yield to allow memory cleanup between tiles
+            await Task.yield()
+        }
+
+        progress?(0.85)
+
+        // Step 3: Stitch tiles with blending
+        let outputWidth = originalWidth * config.modelScale
+        let outputHeight = originalHeight * config.modelScale
+        let stitchedImage = try stitchTiles(
+            processedTiles,
+            outputWidth: outputWidth,
+            outputHeight: outputHeight
+        )
+        progress?(0.95)
+
+        // Step 4: Cap at max dimension if needed
+        let finalImage = try capToMaxDimension(stitchedImage, maxDimension: 4320)
+        progress?(1.0)
+
+        logger.info("Enhanced to \(finalImage.width)x\(finalImage.height)")
+        return finalImage
+    }
+
+    // MARK: - Tile Extraction
+
+    /// Extract overlapping tiles from the input image
+    private func extractTiles(from image: CGImage) -> [ImageTile] {
+        var tiles: [ImageTile] = []
+        let width = image.width
+        let height = image.height
+        let step = config.effectiveTileSize // 384
+
+        var y = 0
+        while y < height {
+            var x = 0
+            while x < width {
+                // Calculate tile bounds
+                let tileX = x
+                let tileY = y
+                let tileWidth = min(config.tileSize, width - tileX)
+                let tileHeight = min(config.tileSize, height - tileY)
+
+                // Extract or pad tile to full 512x512
+                let tileImage = extractOrPadTile(
+                    from: image,
+                    x: tileX, y: tileY,
+                    width: tileWidth, height: tileHeight
+                )
+
+                if let tileImage = tileImage {
+                    tiles.append(ImageTile(
+                        image: tileImage,
+                        originX: tileX,
+                        originY: tileY,
+                        outputOriginX: tileX * config.modelScale,
+                        outputOriginY: tileY * config.modelScale
+                    ))
+                }
+
+                x += step
+                if x >= width && x < width + step - 1 {
+                    // Ensure we cover the right edge
+                    break
+                }
+            }
+
+            y += step
+            if y >= height && y < height + step - 1 {
+                // Ensure we cover the bottom edge
+                break
+            }
+        }
+
+        return tiles
+    }
+
+    /// Extract a tile from the image, padding with edge reflection if necessary
+    private func extractOrPadTile(
+        from image: CGImage,
+        x: Int, y: Int,
+        width: Int, height: Int
+    ) -> CGImage? {
+        let colorSpace = image.colorSpace ?? CGColorSpaceCreateDeviceRGB()
+
+        guard let context = CGContext(
+            data: nil,
+            width: config.tileSize,
+            height: config.tileSize,
+            bitsPerComponent: 8,
+            bytesPerRow: config.tileSize * 4,
+            space: colorSpace,
+            bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue
+        ) else {
+            return nil
+        }
+
+        // Fill with edge color (use edge reflection for better results)
+        context.setFillColor(gray: 0.0, alpha: 1.0)
+        context.fill(CGRect(x: 0, y: 0, width: config.tileSize, height: config.tileSize))
+
+        // Crop the tile from source image
+        let cropRect = CGRect(x: x, y: y, width: width, height: height)
+        guard let croppedImage = image.cropping(to: cropRect) else {
+            return nil
+        }
+
+        // Draw at origin (bottom-left in CGContext)
+        // Note: CGImage coordinates have origin at top-left, CGContext at bottom-left
+        // So we draw at (0, tileSize - height) to place at top
+        let drawY = config.tileSize - height
+        context.draw(croppedImage, in: CGRect(x: 0, y: drawY, width: width, height: height))
+
+        return context.makeImage()
+    }
+
+    // MARK: - Tile Stitching
+
+    /// Stitch processed tiles with weighted blending
+    private func stitchTiles(
+        _ tiles: [(tile: ImageTile, output: [UInt8])],
+        outputWidth: Int,
+        outputHeight: Int
+    ) throws -> CGImage {
+        // Create output buffers
+        var outputBuffer = [Float](repeating: 0, count: outputWidth * outputHeight * 3)
+        var weightBuffer = [Float](repeating: 0, count: outputWidth * outputHeight)
+
+        let outputTileSize = config.outputTileSize // 2048
+
+        for (tile, data) in tiles {
+            // Create blending weights for this tile
+            let weights = createBlendingWeights(
+                tileWidth: min(outputTileSize, outputWidth - tile.outputOriginX),
+                tileHeight: min(outputTileSize, outputHeight - tile.outputOriginY)
+            )
+
+            // Blend tile into output
+            blendTileIntoOutput(
+                data: data,
+                weights: weights,
+                atX: tile.outputOriginX,
+                atY: tile.outputOriginY,
+                outputWidth: outputWidth,
+                outputHeight: outputHeight,
+                outputBuffer: &outputBuffer,
+                weightBuffer: &weightBuffer
+            )
+        }
+
+        // Normalize by accumulated weights
+        normalizeByWeights(&outputBuffer, weights: weightBuffer, width: outputWidth, height: outputHeight)
+
+        // Convert to CGImage
+        return try createCGImage(from: outputBuffer, width: outputWidth, height: outputHeight)
+    }
+
+    /// Create blending weights with linear falloff at edges
+    private func createBlendingWeights(tileWidth: Int, tileHeight: Int) -> [Float] {
+        let overlap = config.outputOverlap // 256
+        var weights = [Float](repeating: 1.0, count: tileWidth * tileHeight)
+
+        for y in 0..<tileHeight {
+            for x in 0..<tileWidth {
+                var weight: Float = 1.0
+
+                // Left edge ramp
+                if x < overlap {
+                    weight *= Float(x) / Float(overlap)
+                }
+                // Right edge ramp
+                if x >= tileWidth - overlap {
+                    weight *= Float(tileWidth - x - 1) / Float(overlap)
+                }
+                // Top edge ramp
+                if y < overlap {
+                    weight *= Float(y) / Float(overlap)
+                }
+                // Bottom edge ramp
+                if y >= tileHeight - overlap {
+                    weight *= Float(tileHeight - y - 1) / Float(overlap)
+                }
+
+                // Ensure minimum weight to avoid division by zero
+                weight = max(weight, 0.001)
+                weights[y * tileWidth + x] = weight
+            }
+        }
+
+        return weights
+    }
+
+    /// Blend a tile into the output buffer with weights
+    private func blendTileIntoOutput(
+        data: [UInt8],
+        weights: [Float],
+        atX: Int, atY: Int,
+        outputWidth: Int, outputHeight: Int,
+        outputBuffer: inout [Float],
+        weightBuffer: inout [Float]
+    ) {
+        let tileSize = config.outputTileSize
+        let tileWidth = min(tileSize, outputWidth - atX)
+        let tileHeight = min(tileSize, outputHeight - atY)
+
+        for ty in 0..<tileHeight {
+            let outputY = atY + ty
+            if outputY >= outputHeight { continue }
+
+            for tx in 0..<tileWidth {
+                let outputX = atX + tx
+                if outputX >= outputWidth { continue }
+
+                let tileIdx = ty * tileSize + tx
+                let outputIdx = outputY * outputWidth + outputX
+
+                // Bounds check for tile data (RGBA format, 4 bytes per pixel)
+                let dataIdx = tileIdx * 4
+                guard dataIdx + 2 < data.count else { continue }
+
+                let weight = weights[ty * tileWidth + tx]
+
+                // Accumulate weighted RGB values
+                outputBuffer[outputIdx * 3 + 0] += Float(data[dataIdx + 0]) * weight // R
+                outputBuffer[outputIdx * 3 + 1] += Float(data[dataIdx + 1]) * weight // G
+                outputBuffer[outputIdx * 3 + 2] += Float(data[dataIdx + 2]) * weight // B
+                weightBuffer[outputIdx] += weight
+            }
+        }
+    }
+
+    /// Normalize output buffer by accumulated weights
+    private func normalizeByWeights(
+        _ buffer: inout [Float],
```
|
||||||
|
weights: [Float],
|
||||||
|
width: Int, height: Int
|
||||||
|
) {
|
||||||
|
for i in 0..<(width * height) {
|
||||||
|
let w = max(weights[i], 0.001)
|
||||||
|
buffer[i * 3 + 0] /= w
|
||||||
|
buffer[i * 3 + 1] /= w
|
||||||
|
buffer[i * 3 + 2] /= w
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Create CGImage from float RGB buffer
|
||||||
|
private func createCGImage(from buffer: [Float], width: Int, height: Int) throws -> CGImage {
|
||||||
|
// Convert float buffer to RGBA UInt8
|
||||||
|
var pixels = [UInt8](repeating: 255, count: width * height * 4)
|
||||||
|
|
||||||
|
for i in 0..<(width * height) {
|
||||||
|
pixels[i * 4 + 0] = UInt8(clamping: Int(buffer[i * 3 + 0])) // R
|
||||||
|
pixels[i * 4 + 1] = UInt8(clamping: Int(buffer[i * 3 + 1])) // G
|
||||||
|
pixels[i * 4 + 2] = UInt8(clamping: Int(buffer[i * 3 + 2])) // B
|
||||||
|
pixels[i * 4 + 3] = 255 // A
|
||||||
|
}
|
||||||
|
|
||||||
|
let colorSpace = CGColorSpaceCreateDeviceRGB()
|
||||||
|
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipLast.rawValue)
|
||||||
|
|
||||||
|
guard
|
||||||
|
let provider = CGDataProvider(data: Data(pixels) as CFData),
|
||||||
|
let image = CGImage(
|
||||||
|
width: width,
|
||||||
|
height: height,
|
||||||
|
bitsPerComponent: 8,
|
||||||
|
bitsPerPixel: 32,
|
||||||
|
bytesPerRow: width * 4,
|
||||||
|
space: colorSpace,
|
||||||
|
bitmapInfo: bitmapInfo,
|
||||||
|
provider: provider,
|
||||||
|
decode: nil,
|
||||||
|
shouldInterpolate: true,
|
||||||
|
intent: .defaultIntent
|
||||||
|
)
|
||||||
|
else {
|
||||||
|
throw AIEnhanceError.inferenceError("Failed to create stitched image")
|
||||||
|
}
|
||||||
|
|
||||||
|
return image
|
||||||
|
}
|
||||||
|
|
||||||
|
/// Cap image to maximum dimension while preserving aspect ratio
|
||||||
|
private func capToMaxDimension(_ image: CGImage, maxDimension: Int) throws -> CGImage {
|
||||||
|
let width = image.width
|
||||||
|
let height = image.height
|
||||||
|
|
||||||
|
if width <= maxDimension && height <= maxDimension {
|
||||||
|
return image
|
||||||
|
}
|
||||||
|
|
||||||
|
let scale = min(Double(maxDimension) / Double(width), Double(maxDimension) / Double(height))
|
||||||
|
let targetWidth = Int(Double(width) * scale)
|
||||||
|
let targetHeight = Int(Double(height) * scale)
|
||||||
|
|
||||||
|
let colorSpace = image.colorSpace ?? CGColorSpaceCreateDeviceRGB()
|
||||||
|
guard let context = CGContext(
|
||||||
|
data: nil,
|
||||||
|
width: targetWidth,
|
||||||
|
height: targetHeight,
|
||||||
|
bitsPerComponent: 8,
|
||||||
|
bytesPerRow: targetWidth * 4,
|
||||||
|
space: colorSpace,
|
||||||
|
bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue
|
||||||
|
) else {
|
||||||
|
throw AIEnhanceError.inferenceError("Failed to create scaling context")
|
||||||
|
}
|
||||||
|
|
||||||
|
context.interpolationQuality = .high
|
||||||
|
context.draw(image, in: CGRect(x: 0, y: 0, width: targetWidth, height: targetHeight))
|
||||||
|
|
||||||
|
guard let scaledImage = context.makeImage() else {
|
||||||
|
throw AIEnhanceError.inferenceError("Failed to scale image")
|
||||||
|
}
|
||||||
|
|
||||||
|
return scaledImage
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// MARK: - WholeImageProcessor (for small images)
|
||||||
|
|
||||||
|
/// Processes small images (< 512x512) for the Real-ESRGAN model
|
||||||
|
/// Uses scaling and padding approach for images that fit within a single tile
|
||||||
|
struct WholeImageProcessor {
|
||||||
|
private let logger = Logger(subsystem: "LivePhotoCore", category: "WholeImageProcessor")
|
||||||
|
|
||||||
|
/// Process an image through the AI model
|
||||||
|
func processImage(
|
||||||
|
_ inputImage: CGImage,
|
||||||
|
processor: RealESRGANProcessor,
|
||||||
|
progress: AIEnhanceProgress?
|
||||||
|
) async throws -> CGImage {
|
||||||
|
let originalWidth = inputImage.width
|
||||||
|
let originalHeight = inputImage.height
|
||||||
|
|
||||||
|
logger.info("Whole image processing \(originalWidth)x\(originalHeight) image")
|
||||||
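The seam-free stitching above relies on two pieces: per-pixel linear edge ramps and accumulate-then-normalize blending. A minimal, language-neutral Python sketch of the same math (a 1-D simplification of the Swift code; the function names are illustrative, not part of the project):

```python
def ramp_weight(x, width, overlap):
    """1-D version of the Swift edge ramp: linear falloff inside the
    overlap band on each side, floored to avoid division by zero."""
    w = 1.0
    if x < overlap:
        w *= x / overlap                    # leading-edge ramp
    if x >= width - overlap:
        w *= (width - x - 1) / overlap      # trailing-edge ramp
    return max(w, 0.001)

def stitch_1d(tiles, out_len, tile_len, overlap):
    """tiles: list of (origin, samples). Accumulate value*weight and the
    weight itself, then divide -- the same per-channel scheme as
    blendTileIntoOutput followed by normalizeByWeights."""
    acc = [0.0] * out_len
    wsum = [0.0] * out_len
    for origin, samples in tiles:
        for i, v in enumerate(samples):
            w = ramp_weight(i, tile_len, overlap)
            acc[origin + i] += v * w
            wsum[origin + i] += w
    return [a / max(w, 0.001) for a, w in zip(acc, wsum)]

# Two 8-sample tiles of a constant-100 signal, overlapping by 4 samples:
out = stitch_1d([(0, [100.0] * 8), (4, [100.0] * 8)], 12, 8, 4)
```

Because every output sample divides by the summed weights, a constant input comes back constant across the seam; that is why the cross-fade in the overlap band leaves no visible banding.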
         progress?(0.1)

         // Step 1: Scale and pad to 512x512
-        let (paddedImage, scaleFactor, paddingInfo) = try prepareInputImage(inputImage)
+        let (paddedImage, _, paddingInfo) = try prepareInputImage(inputImage)
         progress?(0.2)

         // Step 2: Convert to CVPixelBuffer
@@ -58,7 +441,6 @@ struct WholeImageProcessor {
             outputImage,
             originalWidth: originalWidth,
             originalHeight: originalHeight,
-            scaleFactor: scaleFactor,
             paddingInfo: paddingInfo
         )
         progress?(1.0)
@@ -69,21 +451,18 @@ struct WholeImageProcessor {

     // MARK: - Private Helpers

-    /// Padding information for later extraction
     private struct PaddingInfo {
-        let paddedX: Int // X offset of original content in padded image
-        let paddedY: Int // Y offset of original content in padded image
-        let scaledWidth: Int // Width of original content after scaling
-        let scaledHeight: Int // Height of original content after scaling
+        let paddedX: Int
+        let paddedY: Int
+        let scaledWidth: Int
+        let scaledHeight: Int
     }

-    /// Prepare input image: scale to fit 1280x1280 while preserving aspect ratio, then pad
     private func prepareInputImage(_ image: CGImage) throws -> (CGImage, CGFloat, PaddingInfo) {
         let inputSize = RealESRGANProcessor.inputSize
         let originalWidth = CGFloat(image.width)
         let originalHeight = CGFloat(image.height)

-        // Calculate scale to fit within inputSize x inputSize
         let scale = min(
             CGFloat(inputSize) / originalWidth,
             CGFloat(inputSize) / originalHeight
@@ -91,14 +470,9 @@ struct WholeImageProcessor {

         let scaledWidth = Int(originalWidth * scale)
         let scaledHeight = Int(originalHeight * scale)

-        // Calculate padding to center the image
         let paddingX = (inputSize - scaledWidth) / 2
         let paddingY = (inputSize - scaledHeight) / 2

-        logger.info("Scaling \(Int(originalWidth))x\(Int(originalHeight)) -> \(scaledWidth)x\(scaledHeight), padding: (\(paddingX), \(paddingY))")
-
-        // Create padded context
         let colorSpace = image.colorSpace ?? CGColorSpaceCreateDeviceRGB()
         guard let context = CGContext(
             data: nil,
@@ -112,12 +486,9 @@ struct WholeImageProcessor {
             throw AIEnhanceError.inputImageInvalid
         }

-        // Fill with black (or neutral color)
         context.setFillColor(gray: 0.0, alpha: 1.0)
         context.fill(CGRect(x: 0, y: 0, width: inputSize, height: inputSize))

-        // Draw scaled image centered
-        // Note: CGContext has origin at bottom-left, so we need to flip Y coordinate
         let drawRect = CGRect(x: paddingX, y: paddingY, width: scaledWidth, height: scaledHeight)
         context.draw(image, in: drawRect)

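The padding bookkeeping in `prepareInputImage` is plain symmetric arithmetic: fit the frame inside the square model input, center it, and remember the offsets so the content can later be cropped out of the upscaled output at model scale. A hedged sketch of that geometry (the 512 input size and 4× model scale come from the surrounding comments; the helper names are made up for illustration):

```python
def scale_and_pad(orig_w, orig_h, input_size=512):
    """Fit (orig_w, orig_h) inside an input_size x input_size square,
    centered. Returns the scaled content size and the padding offsets
    that get recorded in PaddingInfo."""
    scale = min(input_size / orig_w, input_size / orig_h)
    scaled_w, scaled_h = int(orig_w * scale), int(orig_h * scale)
    pad_x = (input_size - scaled_w) // 2
    pad_y = (input_size - scaled_h) // 2
    return scaled_w, scaled_h, pad_x, pad_y

def output_crop(pad_x, pad_y, scaled_w, scaled_h, model_scale=4):
    """Crop rect of the real content in the model's upscaled output:
    everything simply multiplies by the model scale."""
    return (pad_x * model_scale, pad_y * model_scale,
            scaled_w * model_scale, scaled_h * model_scale)

sw, sh, px, py = scale_and_pad(1024, 2048)
crop = output_crop(px, py, sw, sh)
```

For a 1024×2048 frame the content occupies a 256×512 strip starting at x = 128 inside the padded square, so the real pixels in the 2048×2048 model output live in the rect (512, 0, 1024, 2048).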
@@ -135,32 +506,25 @@ struct WholeImageProcessor {
         return (paddedImage, scale, paddingInfo)
     }

-    /// Extract the enhanced content area and scale to final size
     private func extractAndScaleOutput(
         _ outputImage: CGImage,
         originalWidth: Int,
         originalHeight: Int,
-        scaleFactor: CGFloat,
         paddingInfo: PaddingInfo
     ) throws -> CGImage {
         let modelScale = RealESRGANProcessor.scaleFactor

-        // Calculate crop region in output image (4x the padding info)
         let cropX = paddingInfo.paddedX * modelScale
         let cropY = paddingInfo.paddedY * modelScale
         let cropWidth = paddingInfo.scaledWidth * modelScale
         let cropHeight = paddingInfo.scaledHeight * modelScale

-        logger.info("Cropping output at (\(cropX), \(cropY)) size \(cropWidth)x\(cropHeight)")
-
-        // Crop the content area
         let cropRect = CGRect(x: cropX, y: cropY, width: cropWidth, height: cropHeight)
         guard let croppedImage = outputImage.cropping(to: cropRect) else {
             throw AIEnhanceError.inferenceError("Failed to crop output image")
         }

-        // Calculate final target size (4x original, capped at reasonable limit while preserving aspect ratio)
-        let maxDimension = 4320 // Cap at ~4K
+        let maxDimension = 4320
         let idealWidth = originalWidth * modelScale
         let idealHeight = originalHeight * modelScale

@@ -168,22 +532,18 @@ struct WholeImageProcessor {
         let targetHeight: Int

         if idealWidth <= maxDimension && idealHeight <= maxDimension {
-            // Both dimensions fit within limit
             targetWidth = idealWidth
             targetHeight = idealHeight
         } else {
-            // Scale down to fit within maxDimension while preserving aspect ratio
             let scale = min(Double(maxDimension) / Double(idealWidth), Double(maxDimension) / Double(idealHeight))
             targetWidth = Int(Double(idealWidth) * scale)
             targetHeight = Int(Double(idealHeight) * scale)
         }

-        // If cropped image is already the right size, return it
         if croppedImage.width == targetWidth && croppedImage.height == targetHeight {
             return croppedImage
         }

-        // Scale to target size
         let colorSpace = croppedImage.colorSpace ?? CGColorSpaceCreateDeviceRGB()
         guard let context = CGContext(
             data: nil,
@@ -204,11 +564,9 @@ struct WholeImageProcessor {
             throw AIEnhanceError.inferenceError("Failed to create final image")
         }

-        logger.info("Final image size: \(finalImage.width)x\(finalImage.height)")
         return finalImage
     }

-    /// Create CGImage from RGBA pixel data
     private func createCGImage(from pixels: [UInt8], width: Int, height: Int) throws -> CGImage {
         let colorSpace = CGColorSpaceCreateDeviceRGB()
         let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipLast.rawValue)
@@ -235,6 +593,3 @@ struct WholeImageProcessor {
         return image
     }
 }
-
-// Keep the old name as a typealias for compatibility
-typealias TiledImageProcessor = WholeImageProcessor
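The 512×512-tile / 64 px-overlap split described in the TASK.md entries of this change set implies a simple origin grid on the input side: step by tileSize − overlap and clamp the final tile to the image edge so every pixel is covered. A possible sketch (the function is illustrative, not the project's API):

```python
def tile_origins(length, tile, overlap):
    """Tile origins along one axis: step by (tile - overlap) and add a
    final tile flush with the edge so coverage is complete."""
    if length <= tile:
        return [0]                       # whole axis fits in one tile
    step = tile - overlap
    origins = list(range(0, length - tile, step))
    origins.append(length - tile)        # last tile ends exactly at the edge
    return origins

# 1920 px axis, 512 px tiles, 64 px overlap:
rows = tile_origins(1920, 512, 64)
# -> [0, 448, 896, 1344, 1408]; consecutive tiles overlap by at least 64 px
```

Every pair of neighboring tiles shares at least the 64 px blending band, which is the region the weighted ramps above cross-fade.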
33  TASK.md
@@ -99,7 +99,8 @@
 - [x] Real-ESRGAN Core ML 集成架构
 - [x] AIEnhancer 模块:公共 API 和配置
 - [x] RealESRGANProcessor:Core ML 推理逻辑
-- [x] TiledImageProcessor:分块处理(内存优化)
+- [x] TiledImageProcessor:真正的分块处理(512×512 tiles,64px 重叠,加权混合拼接)
+- [x] WholeImageProcessor:小图处理(≤512×512 使用整图缩放)
 - [x] ImageFormatConverter:格式转换工具
 - [x] LivePhotoCore 集成
 - [x] ExportParams 扩展 aiEnhanceConfig
@@ -120,14 +121,32 @@
 - [ ] 包体积优化
   - [ ] 使用 INT8 量化模型(预估可从 64MB 降至 ~16MB)
   - [ ] 或使用 On-Demand Resources 按需下载模型
-- [ ] 性能优化
-  - [ ] 尝试使用支持灵活输入尺寸的模型(避免缩放损失)
+- [x] AI 增强质量优化(已完成 ✅)
+  - [x] 真正的分块处理:将大图拆分为 512×512 tiles,分别推理后拼接
+  - [x] 64px 重叠区域 + 线性权重混合,消除接缝
+  - [x] 自动选择处理器:大图用 TiledImageProcessor,小图用 WholeImageProcessor
+  - [x] 信息损失从 ~86% 降至 0%(1080×1920 图像不再压缩)
+- [ ] 高级合成功能(照片+视频合成 Live Photo)
+  - [ ] 双导入入口:支持分别选择静态照片和视频
+  - [ ] 尺寸对齐逻辑:照片自动 match 视频尺寸
+  - [ ] resolveKeyPhotoURL 扩展:支持外部照片输入
+  - [ ] UI 设计:照片裁剪/对齐预览
+- [ ] 其他性能优化
+  - [ ] 尝试使用支持灵活输入尺寸的模型(EnumeratedShapes)
   - [ ] 探索 Metal Performance Shaders 替代方案

 ---

-## 决策备忘(后续需要你拍板)
+## 决策备忘(已完成 ✅)

-- [ ] HDR 默认策略:默认转 SDR vs 首次提示用户选择
-- [ ] 编码兜底策略:完全自动兜底 vs 失败后提示开启兼容模式
-- [ ] 高级合成(照片+视频)进入哪个阶段(建议 M2)
+- [x] **HDR 默认策略**:✅ 保持默认转 SDR
+  - 理由:Live Photo 壁纸场景下 SDR 显示更稳定,避免 HDR 在不同设备/亮度下显示不一致
+  - 后续:M5 可在设置页添加"高级选项"供专业用户切换
+
+- [x] **编码兜底策略**:✅ 保持完全自动兜底
+  - 理由:符合"Just Works"理念,诊断系统已能提前识别风险并建议兼容模式
+  - 可选改进:ProcessingView 显示"使用兼容模式编码中..."提升透明度
+
+- [x] **高级合成功能**(照片+视频):✅ 延后到 M5 或 M6
+  - 理由:属于高级功能,非核心需求,当前专注上线 M0-M4
+  - 技术要点:双导入入口、尺寸对齐逻辑、resolveKeyPhotoURL 扩展
@@ -1,4 +1,4 @@
-# App Store 上架元数据
+# Live Photo Studio - App Store 上架元数据

 > 准备上传到 App Store Connect 的所有文案和信息

@@ -8,7 +8,7 @@

 | 项目 | 内容 |
 |------|------|
-| **应用名称** | Live Photo Maker |
+| **应用名称** | Live Photo Studio |
 | **副标题** | 视频一键转动态壁纸 |
 | **Bundle ID** | xyz.let5see.livephotomaker |
 | **版本号** | 1.0 |
@@ -28,7 +28,7 @@

 ### 完整描述
 ```
-Live Photo Maker 是一款简单易用的动态壁纸制作工具,让你的锁屏动起来!
+Live Photo Studio 是一款简单易用的动态壁纸制作工具,让你的锁屏动起来!

 主要功能:

@@ -79,7 +79,7 @@ Live Photo,动态壁纸,锁屏壁纸,视频转换,AI增强,照片,壁纸,动图,

 ### 1.0 版本
 ```
-Live Photo Maker 正式发布!
+Live Photo Studio 正式发布!

 • 视频一键转换为 Live Photo
 • 多种比例模板,适配各种设备
@@ -1,4 +1,4 @@
-# Live Photo Maker 测试文档
+# Live Photo Studio 测试文档

 ## 测试矩阵

@@ -1,4 +1,4 @@
-# Live Photo Maker 用户手册
+# Live Photo Studio 用户手册

 ## 快速开始

@@ -1,27 +0,0 @@
-# 文档索引
-
-## 需求
-
-- docs/PRD_LivePhoto_App_V0.2_2025-12-13.md:PRD(V0.2),定义目标、MVP范围、流程、验收与风险。
-
-## 设计
-
-- docs/TECHSPEC_LivePhoto_App_V0.2_2025-12-13.md:技术规格(V0.2),架构/模型/合成规范/错误码/缓存等。
-- docs/IXSPEC_LivePhoto_App_V0.2_2025-12-13.md:交互规格(V0.2),页面交互/状态/埋点/iPad适配等。
-
-## 测试
-
-- (待补充)
-
-## 用户手册
-
-- (待补充)
-
-## 知识库
-
-- docs_index.md:文档索引(本文件)
-- PROJECT_STRUCTURE.md:项目结构(目录/文件结构变更记录)
-
-## 任务进度
-
-- TASK.md:任务清单(按阶段拆解)
88  to-live-photo/to-live-photo/LanguageManager.swift  Normal file
@@ -0,0 +1,88 @@
+import SwiftUI
+
+/// 语言管理器:支持应用内动态切换语言
+@Observable
+final class LanguageManager {
+
+    /// 支持的语言
+    enum Language: String, CaseIterable, Identifiable {
+        case system = "system"
+        case zhHans = "zh-Hans"
+        case zhHant = "zh-Hant"
+        case en = "en"
+
+        var id: String { rawValue }
+
+        var displayName: String {
+            switch self {
+            case .system: return "跟随系统"
+            case .zhHans: return "简体中文"
+            case .zhHant: return "繁體中文"
+            case .en: return "English"
+            }
+        }
+
+        var locale: Locale? {
+            switch self {
+            case .system: return nil
+            case .zhHans: return Locale(identifier: "zh-Hans")
+            case .zhHant: return Locale(identifier: "zh-Hant")
+            case .en: return Locale(identifier: "en")
+            }
+        }
+    }
+
+    /// 单例
+    static let shared = LanguageManager()
+
+    /// 当前选择的语言
+    var current: Language {
+        didSet {
+            UserDefaults.standard.set(current.rawValue, forKey: "app_language")
+            applyLanguage()
+        }
+    }
+
+    /// 可用语言列表
+    var availableLanguages: [Language] {
+        Language.allCases
+    }
+
+    private init() {
+        let savedLanguage = UserDefaults.standard.string(forKey: "app_language") ?? "system"
+        self.current = Language(rawValue: savedLanguage) ?? .system
+        applyLanguage()
+    }
+
+    /// 应用语言设置
+    private func applyLanguage() {
+        if current == .system {
+            UserDefaults.standard.removeObject(forKey: "AppleLanguages")
+        } else {
+            UserDefaults.standard.set([current.rawValue], forKey: "AppleLanguages")
+        }
+        UserDefaults.standard.synchronize()
+    }
+
+    /// 获取本地化字符串
+    func localizedString(_ key: String) -> String {
+        if current == .system {
+            return String(localized: String.LocalizationValue(key))
+        }
+
+        guard let path = Bundle.main.path(forResource: current.rawValue, ofType: "lproj"),
+              let bundle = Bundle(path: path) else {
+            return String(localized: String.LocalizationValue(key))
+        }
+
+        return NSLocalizedString(key, bundle: bundle, comment: "")
+    }
+}
+
+// MARK: - 便捷扩展
+extension String {
+    /// 本地化字符串
+    var localized: String {
+        LanguageManager.shared.localizedString(self)
+    }
+}
1880  to-live-photo/to-live-photo/Localizable.xcstrings  Normal file
File diff suppressed because it is too large
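`localizedString(_:)` in LanguageManager above is a two-level lookup: try the explicitly selected `.lproj` bundle first, and fall back to the system-resolved string when the bundle or the key is missing. The same shape as a language-neutral Python sketch (the tables and their contents are made up for illustration):

```python
# System-resolved strings (what String(localized:) would return).
SYSTEM_TABLE = {"home.title": "Live Photo Studio"}

# Per-language tables (what the .lproj bundles would provide).
TABLES = {
    "zh-Hans": {"home.title": "Live Photo 制作"},
    "en": {"home.title": "Live Photo Studio"},
}

def localized_string(key, current="system"):
    """Prefer the explicitly selected language table; fall back to the
    system lookup when no table exists or the key is missing -- the same
    fallback as the guard in the Swift code."""
    if current == "system":
        return SYSTEM_TABLE.get(key, key)
    table = TABLES.get(current)
    if table is None or key not in table:
        return SYSTEM_TABLE.get(key, key)
    return table[key]
```

Returning the key itself as the last resort mirrors `NSLocalizedString`'s behavior for unknown keys.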
@@ -37,6 +37,9 @@ struct EditorView: View {

     // AI 超分辨率
     @State private var aiEnhanceEnabled: Bool = false
+    @State private var aiModelNeedsDownload: Bool = false
+    @State private var aiModelDownloading: Bool = false
+    @State private var aiModelDownloadProgress: Double = 0

     // 视频诊断
     @State private var videoDiagnosis: VideoDiagnosis?
@@ -370,10 +373,45 @@ struct EditorView: View {
                 }
             }
             .tint(.purple)
-            .disabled(!AIEnhancer.isAvailable())
+            .disabled(!AIEnhancer.isAvailable() || aiModelDownloading)
+            .onChange(of: aiEnhanceEnabled) { _, newValue in
+                if newValue {
+                    checkAndDownloadModel()
+                }
+            }

-            if aiEnhanceEnabled {
+            // 模型下载进度
+            if aiModelDownloading {
+                VStack(alignment: .leading, spacing: 8) {
+                    HStack(spacing: 8) {
+                        ProgressView()
+                            .scaleEffect(0.8)
+                        Text("正在下载 AI 模型...")
+                            .font(.caption)
+                            .foregroundStyle(.secondary)
+                    }
+
+                    ProgressView(value: aiModelDownloadProgress)
+                        .tint(.purple)
+
+                    Text(String(format: "%.0f%%", aiModelDownloadProgress * 100))
+                        .font(.caption2)
+                        .foregroundStyle(.secondary)
+                }
+                .padding(.leading, 4)
+            }
+
+            if aiEnhanceEnabled && !aiModelDownloading {
                 VStack(alignment: .leading, spacing: 6) {
+                    if aiModelNeedsDownload {
+                        HStack(spacing: 4) {
+                            Image(systemName: "arrow.down.circle")
+                                .foregroundStyle(.orange)
+                                .font(.caption)
+                            Text("首次使用需下载 AI 模型(约 64MB)")
+                                .font(.caption)
+                        }
+                    }
                     HStack(spacing: 4) {
                         Image(systemName: "sparkles")
                             .foregroundStyle(.purple)
@@ -415,6 +453,10 @@ struct EditorView: View {
         .padding(16)
         .background(Color.purple.opacity(0.1))
         .clipShape(RoundedRectangle(cornerRadius: 12))
+        .task {
+            // 检查模型是否需要下载
+            aiModelNeedsDownload = await AIEnhancer.needsDownload()
+        }
     }

     // MARK: - 兼容模式开关
@@ -681,6 +723,46 @@ struct EditorView: View {
         return CropRect(x: cropX, y: cropY, width: cropWidth, height: cropHeight)
     }

+    private func checkAndDownloadModel() {
+        guard aiEnhanceEnabled else { return }
+
+        Task {
+            // 检查是否需要下载
+            let needsDownload = await AIEnhancer.needsDownload()
+
+            await MainActor.run {
+                aiModelNeedsDownload = needsDownload
+            }
+
+            if needsDownload {
+                await MainActor.run {
+                    aiModelDownloading = true
+                    aiModelDownloadProgress = 0
+                }
+
+                do {
+                    try await AIEnhancer.downloadModel { progress in
+                        Task { @MainActor in
+                            aiModelDownloadProgress = progress
+                        }
+                    }
+
+                    await MainActor.run {
+                        aiModelDownloading = false
+                        aiModelNeedsDownload = false
+                    }
+                } catch {
+                    await MainActor.run {
+                        aiModelDownloading = false
+                        // 下载失败时禁用 AI 增强
+                        aiEnhanceEnabled = false
+                    }
+                    print("Failed to download AI model: \(error)")
+                }
+            }
+        }
+    }
+
     private func startProcessing() {
         Analytics.shared.log(.editorGenerateClick, parameters: [
             "trimStart": trimStart,
@@ -71,11 +71,11 @@ struct HomeView: View {
|
|||||||
}
|
}
|
||||||
|
|
||||||
VStack(spacing: DesignTokens.Spacing.sm) {
|
VStack(spacing: DesignTokens.Spacing.sm) {
|
||||||
Text("Live Photo 制作")
|
Text(String(localized: "home.title"))
|
||||||
.font(.system(size: DesignTokens.FontSize.xxl, weight: .bold))
|
.font(.system(size: DesignTokens.FontSize.xxl, weight: .bold))
|
||||||
.foregroundColor(.textPrimary)
|
.foregroundColor(.textPrimary)
|
||||||
|
|
||||||
Text("选择视频,一键转换为动态壁纸")
|
Text(String(localized: "home.subtitle"))
|
||||||
.font(.system(size: DesignTokens.FontSize.base))
|
.font(.system(size: DesignTokens.FontSize.base))
|
||||||
.foregroundColor(.textSecondary)
|
.foregroundColor(.textSecondary)
|
||||||
.multilineTextAlignment(.center)
|
.multilineTextAlignment(.center)
|
||||||
@@ -90,7 +90,7 @@ struct HomeView: View {
|
|||||||
HStack(spacing: DesignTokens.Spacing.sm) {
|
HStack(spacing: DesignTokens.Spacing.sm) {
|
||||||
Image(systemName: "video.badge.plus")
|
Image(systemName: "video.badge.plus")
|
||||||
.font(.system(size: 18, weight: .semibold))
|
.font(.system(size: 18, weight: .semibold))
|
||||||
Text("选择视频")
|
Text(String(localized: "home.selectVideo"))
|
||||||
.font(.system(size: DesignTokens.FontSize.base, weight: .semibold))
|
.font(.system(size: DesignTokens.FontSize.base, weight: .semibold))
|
||||||
}
|
}
|
||||||
.foregroundColor(.white)
|
.foregroundColor(.white)
|
||||||
@@ -111,7 +111,7 @@ struct HomeView: View {
|
|||||||
HStack(spacing: DesignTokens.Spacing.sm) {
|
HStack(spacing: DesignTokens.Spacing.sm) {
|
||||||
ProgressView()
|
ProgressView()
|
||||||
.tint(.accentPurple)
|
.tint(.accentPurple)
|
||||||
Text("正在加载视频...")
|
Text(String(localized: "home.loading"))
|
||||||
.font(.system(size: DesignTokens.FontSize.sm))
|
.font(.system(size: DesignTokens.FontSize.sm))
|
||||||
.foregroundColor(.textSecondary)
|
.foregroundColor(.textSecondary)
|
||||||
}
|
}
|
||||||
@@ -149,7 +149,7 @@ struct HomeView: View {
|
|||||||
.foregroundColor(.accentOrange)
|
.foregroundColor(.accentOrange)
|
||||||
}
|
}
|
||||||
|
|
||||||
Text("快速上手")
|
Text(String(localized: "home.quickStart"))
|
||||||
.font(.system(size: DesignTokens.FontSize.lg, weight: .semibold))
|
.font(.system(size: DesignTokens.FontSize.lg, weight: .semibold))
|
||||||
.foregroundColor(.textPrimary)
|
.foregroundColor(.textPrimary)
|
||||||
|
|
||||||
@@ -157,15 +157,15 @@ struct HomeView: View {
|
|||||||
}
|
}
|
||||||
|
|
||||||
VStack(alignment: .leading, spacing: DesignTokens.Spacing.md) {
|
VStack(alignment: .leading, spacing: DesignTokens.Spacing.md) {
|
||||||
QuickStartStep(number: 1, text: "点击上方「选择视频」导入素材", color: .accentPurple)
|
QuickStartStep(number: 1, text: String(localized: "home.quickStart.step1"), color: .accentPurple)
|
||||||
QuickStartStep(number: 2, text: "调整比例和时长,选择封面帧", color: .accentCyan)
|
QuickStartStep(number: 2, text: String(localized: "home.quickStart.step2"), color: .accentCyan)
|
||||||
QuickStartStep(number: 3, text: "开启 AI 增强提升画质(可选)", color: .accentPink)
|
QuickStartStep(number: 3, text: String(localized: "home.quickStart.step3"), color: .accentPink)
|
||||||
QuickStartStep(number: 4, text: "生成后按引导设置为壁纸", color: .accentGreen)
|
QuickStartStep(number: 4, text: String(localized: "home.quickStart.step4"), color: .accentGreen)
|
||||||
}
|
}
|
||||||
|
|
||||||
HStack {
|
HStack {
|
||||||
Spacer()
|
Spacer()
|
||||||
Text("完成后的作品会显示在这里")
|
Text(String(localized: "home.emptyHint"))
|
||||||
.font(.system(size: DesignTokens.FontSize.xs))
|
.font(.system(size: DesignTokens.FontSize.xs))
|
||||||
.foregroundColor(.textMuted)
|
.foregroundColor(.textMuted)
|
||||||
Spacer()
|
Spacer()
|
||||||
@@ -190,13 +190,13 @@ struct HomeView: View {
|
|||||||
.foregroundColor(.accentCyan)
|
.foregroundColor(.accentCyan)
|
||||||
}
|
}
|
||||||
|
|
||||||
Text("最近作品")
|
Text(String(localized: "home.recentWorks"))
|
||||||
.font(.system(size: DesignTokens.FontSize.lg, weight: .semibold))
|
.font(.system(size: DesignTokens.FontSize.lg, weight: .semibold))
|
||||||
.foregroundColor(.textPrimary)
|
.foregroundColor(.textPrimary)
|
||||||
|
|
||||||
Spacer()
|
Spacer()
|
||||||
|
|
||||||
Text("\(recentWorks.recentWorks.count) 个")
|
Text(String(localized: "home.worksCount \(recentWorks.recentWorks.count)"))
|
||||||
.font(.system(size: DesignTokens.FontSize.sm))
|
.font(.system(size: DesignTokens.FontSize.sm))
|
||||||
.foregroundColor(.textMuted)
|
.foregroundColor(.textMuted)
|
||||||
}
|
}
|
||||||
@@ -224,7 +224,7 @@ struct HomeView: View {

         do {
             guard let movie = try await item.loadTransferable(type: VideoTransferable.self) else {
-                errorMessage = "无法加载视频"
+                errorMessage = String(localized: "home.loadFailed")
                 isLoading = false
                 return
             }
SettingsView.swift:

@@ -10,7 +10,7 @@ import Photos

 struct SettingsView: View {
     @State private var photoLibraryStatus: PHAuthorizationStatus = .notDetermined
-    @State private var cacheSize: String = "计算中..."
+    @State private var cacheSize: String = String(localized: "common.calculating")
     @State private var showingClearCacheAlert = false
     @State private var showingClearRecentWorksAlert = false
     @State private var feedbackPackageURL: URL?
@@ -21,7 +21,7 @@ struct SettingsView: View {
             // 权限状态
             Section {
                 HStack {
-                    Label("相册权限", systemImage: "photo.on.rectangle")
+                    Label(String(localized: "settings.photoPermission"), systemImage: "photo.on.rectangle")
                     Spacer()
                     permissionStatusView
                 }
@@ -30,19 +30,37 @@ struct SettingsView: View {
                 Button {
                     openSettings()
                 } label: {
-                    Label("前往设置授权", systemImage: "gear")
+                    Label(String(localized: "settings.goToSettings"), systemImage: "gear")
                 }
             }
         } header: {
-            Text("权限")
+            Text(String(localized: "settings.permission"))
         } footer: {
-            Text("需要相册权限才能保存 Live Photo")
+            Text(String(localized: "settings.permissionFooter"))
+        }
+
+        // 语言设置
+        Section {
+            Picker(selection: Binding(
+                get: { LanguageManager.shared.current },
+                set: { LanguageManager.shared.current = $0 }
+            )) {
+                ForEach(LanguageManager.Language.allCases) { language in
+                    Text(language.displayName).tag(language)
+                }
+            } label: {
+                Label(String(localized: "settings.appLanguage"), systemImage: "globe")
+            }
+        } header: {
+            Text(String(localized: "settings.language"))
+        } footer: {
+            Text(String(localized: "settings.languageChangeHint"))
         }

         // 存储
         Section {
             HStack {
-                Label("缓存大小", systemImage: "internaldrive")
+                Label(String(localized: "settings.cacheSize"), systemImage: "internaldrive")
                 Spacer()
                 Text(cacheSize)
                     .foregroundStyle(.secondary)
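`LanguageManager` is introduced by this change set, but its implementation is not part of the diff. For the Picker above to compile, it plausibly looks something like the following sketch; only the names used at the call sites (`shared`, `current`, `Language.allCases`, `displayName`) are taken from the diff, and everything else — the raw values, the `UserDefaults` keys, the `AppleLanguages` override — is an assumption:

```swift
import Foundation

// Hypothetical sketch inferred from SettingsView's call sites. The
// AppleLanguages override is the conventional in-app language-switch
// approach and typically takes effect on next launch, which would match
// the settings.languageChangeHint footer.
final class LanguageManager {
    static let shared = LanguageManager()

    enum Language: String, CaseIterable, Identifiable {
        case system
        case english = "en"
        case chinese = "zh-Hans"

        var id: String { rawValue }

        var displayName: String {
            switch self {
            case .system:  return String(localized: "settings.languageSystem")
            case .english: return "English"
            case .chinese: return "简体中文"
            }
        }
    }

    var current: Language {
        get {
            let raw = UserDefaults.standard.string(forKey: "appLanguage") ?? "system"
            return Language(rawValue: raw) ?? .system
        }
        set {
            UserDefaults.standard.set(newValue.rawValue, forKey: "appLanguage")
            if newValue == .system {
                // Fall back to the device language on next launch.
                UserDefaults.standard.removeObject(forKey: "AppleLanguages")
            } else {
                UserDefaults.standard.set([newValue.rawValue], forKey: "AppleLanguages")
            }
        }
    }
}
```

The two-way `Binding(get:set:)` in the Picker exists because `LanguageManager.shared` is a plain singleton rather than `@Observable` state, so SwiftUI cannot derive a binding from it directly.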
@@ -51,18 +69,18 @@ struct SettingsView: View {
             Button(role: .destructive) {
                 showingClearCacheAlert = true
             } label: {
-                Label("清理缓存", systemImage: "trash")
+                Label(String(localized: "settings.clearCache"), systemImage: "trash")
             }

             Button(role: .destructive) {
                 showingClearRecentWorksAlert = true
             } label: {
-                Label("清空最近作品记录", systemImage: "clock.arrow.circlepath")
+                Label(String(localized: "settings.clearRecentWorks"), systemImage: "clock.arrow.circlepath")
             }
         } header: {
-            Text("存储")
+            Text(String(localized: "settings.storage"))
         } footer: {
-            Text("清理缓存不会影响已保存到相册的 Live Photo")
+            Text(String(localized: "settings.storageFooter"))
         }

         // 反馈
@@ -70,27 +88,27 @@ struct SettingsView: View {
             Button {
                 exportFeedbackPackage()
             } label: {
-                Label("导出诊断报告", systemImage: "doc.text")
+                Label(String(localized: "settings.exportDiagnostics"), systemImage: "doc.text")
             }

             Link(destination: URL(string: "mailto:support@let5see.xyz")!) {
-                Label("反馈问题", systemImage: "envelope")
+                Label(String(localized: "settings.contactUs"), systemImage: "envelope")
             }

             // TODO: App Store 上架后替换为实际的 App ID
             Link(destination: URL(string: "https://apps.apple.com/app/id000000000")!) {
-                Label("App Store 评分", systemImage: "star")
+                Label(String(localized: "settings.rateApp"), systemImage: "star")
             }
         } header: {
-            Text("反馈")
+            Text(String(localized: "settings.feedback"))
         } footer: {
-            Text("诊断报告仅包含日志和参数,不含媒体内容")
+            Text(String(localized: "settings.feedbackFooter"))
         }

         // 关于
         Section {
             HStack {
-                Label("版本", systemImage: "info.circle")
+                Label(String(localized: "settings.version"), systemImage: "info.circle")
                 Spacer()
                 Text(appVersion)
                     .foregroundStyle(.secondary)
@@ -99,39 +117,39 @@ struct SettingsView: View {
             NavigationLink {
                 PrivacyPolicyView()
             } label: {
-                Label("隐私政策", systemImage: "hand.raised")
+                Label(String(localized: "settings.privacyPolicy"), systemImage: "hand.raised")
             }

             NavigationLink {
                 TermsOfServiceView()
             } label: {
-                Label("使用条款", systemImage: "doc.text")
+                Label(String(localized: "settings.termsOfService"), systemImage: "doc.text")
             }
         } header: {
-            Text("关于")
+            Text(String(localized: "settings.about"))
         }
     }
-    .navigationTitle("设置")
+    .navigationTitle(String(localized: "settings.title"))
     .navigationBarTitleDisplayMode(.inline)
     .onAppear {
         checkPermissionStatus()
         calculateCacheSize()
     }
-    .alert("清理缓存", isPresented: $showingClearCacheAlert) {
+    .alert(String(localized: "settings.clearCache"), isPresented: $showingClearCacheAlert) {
-        Button("取消", role: .cancel) {}
+        Button(String(localized: "common.cancel"), role: .cancel) {}
-        Button("清理", role: .destructive) {
+        Button(String(localized: "settings.clear"), role: .destructive) {
             clearCache()
         }
     } message: {
-        Text("确定要清理所有缓存文件吗?")
+        Text(String(localized: "settings.clearCacheConfirm"))
     }
-    .alert("清空记录", isPresented: $showingClearRecentWorksAlert) {
+    .alert(String(localized: "settings.clearRecordsTitle"), isPresented: $showingClearRecentWorksAlert) {
-        Button("取消", role: .cancel) {}
+        Button(String(localized: "common.cancel"), role: .cancel) {}
-        Button("清空", role: .destructive) {
+        Button(String(localized: "settings.clear"), role: .destructive) {
             clearRecentWorks()
         }
     } message: {
-        Text("确定要清空最近作品记录吗?这不会删除相册中的 Live Photo。")
+        Text(String(localized: "settings.clearRecordsConfirm"))
     }
     .sheet(isPresented: $showingShareSheet) {
         if let url = feedbackPackageURL {
@@ -144,19 +162,19 @@ struct SettingsView: View {
     private var permissionStatusView: some View {
         switch photoLibraryStatus {
         case .authorized:
-            Label("已授权", systemImage: "checkmark.circle.fill")
+            Label(String(localized: "settings.authorized"), systemImage: "checkmark.circle.fill")
                 .foregroundStyle(.green)
                 .labelStyle(.iconOnly)
         case .limited:
-            Label("部分授权", systemImage: "exclamationmark.circle.fill")
+            Label(String(localized: "settings.limited"), systemImage: "exclamationmark.circle.fill")
                 .foregroundStyle(.orange)
                 .labelStyle(.iconOnly)
         case .denied, .restricted:
-            Label("未授权", systemImage: "xmark.circle.fill")
+            Label(String(localized: "settings.denied"), systemImage: "xmark.circle.fill")
                 .foregroundStyle(.red)
                 .labelStyle(.iconOnly)
         case .notDetermined:
-            Label("未确定", systemImage: "questionmark.circle.fill")
+            Label(String(localized: "settings.notDetermined"), systemImage: "questionmark.circle.fill")
                 .foregroundStyle(.secondary)
                 .labelStyle(.iconOnly)
         @unknown default:
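The `permissionStatusView` hunk switches over every `PHAuthorizationStatus` case; the `@unknown default:` arm, whose body is cut off at the end of the hunk, is what keeps the switch compiling if Photos adds a new status in a future SDK. A minimal standalone sketch of the same pattern, reduced to a plain string so it runs outside SwiftUI (the icon names mirror the diff; the `@unknown default` body is an assumption, since it is truncated above):

```swift
import Photos

// Exhaustive switch over PHAuthorizationStatus, as in permissionStatusView.
// @unknown default catches cases added by future SDKs without a compile error.
func statusIcon(for status: PHAuthorizationStatus) -> String {
    switch status {
    case .authorized:          return "checkmark.circle.fill"
    case .limited:             return "exclamationmark.circle.fill"
    case .denied, .restricted: return "xmark.circle.fill"
    case .notDetermined:       return "questionmark.circle.fill"
    @unknown default:          return "questionmark.circle.fill"
    }
}
```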