30 Commits

Author SHA1 Message Date
a0f315be51 feature/hls: Add HLS playback support and refactor documentation for better usability and maintainability (#1)
All checks were successful
git-sync-with-mirror / git-sync (push) Successful in 32s
CI / test (push) Successful in 46s
## Overview
This PR introduces HLS playback support, improves the player experience, and refactors documentation for better usability and maintainability.

## Key Features

### HLS Playback Support
- Add HLS integration via new JavaScript assets:
  - `hls.min.js`
  - `plyr.hls.start.js`
  - `watch.hls.js`
- Separate DASH and HLS logic:
  - `plyr-start.js` → `plyr.dash.start.js`
  - `watch.js` → `watch.dash.js`
- Update templates (`embed.html`, `watch.html`) for conditional player loading

### Native Storyboard Preview
- Add `native_player_storyboard` setting in `settings.py`
- Implement hover thumbnail preview for native player modes
- Add `storyboard-preview.js`

### UI and Player Adjustments
- Update templates and styles (`custom_plyr.css`)
- Modify backend modules to support new player modes:
  - `watch.py`, `channel.py`, `util.py`, and related components

### Internationalization
- Update translation files:
  - `messages.po`
  - `messages.pot`

### Testing and CI
- Add and update tests:
  - `test_shorts.py`
  - `test_util.py`
- Minor CI and release script improvements

## Documentation

### OpenRC Service Guide Rewrite
- Restructure `docs/basic-script-openrc/README.md` into:
  - Prerequisites
  - Installation
  - Service Management
  - Verification
  - Troubleshooting
- Add admonition blocks:
  - `[!NOTE]`, `[!TIP]`, `[!IMPORTANT]`, `[!WARNING]`, `[!CAUTION]`
- Fix log inspection command:
  ```bash
  doas tail -f /var/log/ytlocal.log
  ```

- Add path placeholders and clarify permission requirements
- Remove legacy and duplicate content

Reviewed-on: #1
Co-authored-by: Astounds <kirito@disroot.org>
Co-committed-by: Astounds <kirito@disroot.org>
2026-04-20 01:22:55 -04:00
62a028968e chore: extend .gitignore with AI assistant configurations and caches
2026-04-04 15:08:13 -05:00
f7bbf3129a update ios client
2026-04-04 15:05:33 -05:00
688521f8d6 bump to v0.4.5
2026-04-01 11:54:46 -05:00
6eb3741010 test: add unit tests for YouTube Shorts support
18 tests covering:
- channel_ctoken_v5 protobuf token generation per tab
- shortsLockupViewModel parsing (id, title, thumbnail, type)
- View count formatting with K/M/B suffixes
- extract_items with reloadContinuationItemsCommand response format

All tests run offline with mocked data, no network access.
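The per-tab token generation exercised by these tests can be sketched as follows. This is a hedged illustration, not the project's actual `proto.py` API: `encode_varint` follows the standard protobuf varint wire format, and the field numbers (videos=15, shorts=10, streams=14) are the ones described in the related fix.

```python
def encode_varint(n):
    # Minimal protobuf varint encoder (standard wire format; the project
    # ships its own version in proto.py).
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

# Field numbers per channel tab, per the related fix:
TAB_FIELD = {'videos': 15, 'shorts': 10, 'streams': 14}

def tab_field_tag(tab):
    # Wire tag = (field_number << 3) | wire_type; 2 = length-delimited.
    return encode_varint((TAB_FIELD[tab] << 3) | 2)
```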
2026-04-01 11:51:42 -05:00
a374f90f6e fix: add support for YouTube Shorts tab on channel pages
- Rewrite channel_ctoken_v5 with correct protobuf field numbers per tab
  (videos=15, shorts=10, streams=14) based on Invidious source
- Replace broken pbj=1 endpoint with youtubei browse API for shorts/streams
- Add shortsLockupViewModel parser to extract video data from new YT format
- Fix channel metadata not loading (get_metadata now uses browse API)
- Fix metadata caching: skip caching when channel_name is absent
- Show actual item count instead of UU playlist count for shorts/streams
- Format view counts with spaced suffixes (7.1 K, 1.2 M, 3 B)
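The spaced-suffix view-count formatting described above might look roughly like this (hypothetical helper name; the project's actual function may differ):

```python
def format_views(n):
    # Mirror the spaced-suffix style above (7.1 K, 1.2 M, 3 B): divide by
    # the largest matching power of ten, keep one decimal unless the value
    # is whole.
    for limit, suffix in ((10**9, 'B'), (10**6, 'M'), (10**3, 'K')):
        if n >= limit:
            value = n / limit
            text = '%d' % value if value == int(value) else '%.1f' % value
            return '%s %s' % (text, suffix)
    return str(n)
```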
2026-04-01 11:43:46 -05:00
bed14713ad bump to v0.4.4
2026-03-31 21:48:46 -05:00
06051dd127 fix: support YouTube 2024+ data formats for playlists, podcasts and channels
- Add PODCAST content type support in lockupViewModel extraction
- Extract thumbnails and episode count from thumbnail overlay badges
- Migrate playlist page fetching from pbj=1 to innertube API (youtubei/v1/browse)
- Support new pageHeaderRenderer format in playlist metadata extraction
- Fix subscriber count extraction when YouTube returns handle instead of count
- Hide "None subscribers" in template when data is unavailable
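The pbj=1 → innertube migration above boils down to POSTing a JSON client context to `youtubei/v1/browse`. A hedged sketch of building that payload (the `clientVersion` value is a placeholder and changes over time; the project's real context fields may differ):

```python
import json

def build_browse_request(browse_id, params=None):
    # Assemble the innertube browse payload described above.
    body = {
        'context': {
            'client': {
                'clientName': 'WEB',
                'clientVersion': '2.20240101.00.00',  # placeholder value
            }
        },
        'browseId': browse_id,
    }
    if params is not None:
        body['params'] = params  # e.g. a per-tab continuation token
    url = 'https://www.youtube.com/youtubei/v1/browse'
    headers = {'Content-Type': 'application/json'}
    return url, json.dumps(body).encode('utf-8'), headers
```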
2026-03-31 21:38:51 -05:00
7c64630be1 update .gitignore
2026-03-28 21:49:26 -05:00
1aa344c7b0 bump to v0.4.3
2026-03-28 16:09:23 -05:00
fa7273b328 fix: race condition in os.makedirs causing worker crashes
Replace check-then-create pattern with exist_ok=True to prevent
FileExistsError when multiple workers initialize simultaneously.

Affects:
- subscriptions.py: open_database()
- watch.py: save_decrypt_cache()
- local_playlist.py: add_to_playlist()
- util.py: fetch_url(), get_visitor_data()
- settings.py: initialization

Fixes Gunicorn worker startup failures in multi-worker deployments.
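The fix reduces to a one-argument change; a minimal demonstration, using a throwaway temp path in place of the modules' real data directories:

```python
import os
import tempfile

data_dir = os.path.join(tempfile.gettempdir(), "yt-local-demo", "cache")

# The racy pattern this commit replaces (two workers can both pass the
# check before either creates the directory, so the loser raises
# FileExistsError):
#
#     if not os.path.exists(data_dir):
#         os.makedirs(data_dir)
#
# With exist_ok=True the call is idempotent, so concurrent workers are safe:
os.makedirs(data_dir, exist_ok=True)
os.makedirs(data_dir, exist_ok=True)  # a second call is a no-op, not an error
```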
2026-03-28 16:06:47 -05:00
a0d10e6a00 docs: remove duplicate FreeTube entry in README
2026-03-27 21:29:46 -05:00
a46cfda029 bump to v0.4.2
2026-03-27 21:26:08 -05:00
e03f40d728 fix error handling, null URLs in templates, and Radio playlist support
- Global error handler: friendly messages for 429, 502, 403, 400
  instead of raw tracebacks. Filter FetchError from Flask logger.
- Fix None URLs in templates: protect href/src in common_elements,
  playlist, watch, and comments templates against None values.
- Radio playlists (RD...): redirect /playlist?list=RD... to
  /watch?v=...&list=RD... since YouTube only supports them in player.
- Wrap player client fallbacks (ios, tv_embedded) in try/catch so
  a failed fallback doesn't crash the whole page.
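The fallback wrapping can be sketched as a loop over clients. Names here are illustrative, not the actual watch.py API; the fetch function is passed in so the sketch stays self-contained:

```python
def fetch_with_fallback(fetch, video_id, clients=('android', 'ios', 'tv_embedded')):
    # Try each player client in order; a failing fallback is remembered
    # instead of crashing the whole page.
    last_error = None
    for client in clients:
        try:
            return fetch(client, video_id)
        except Exception as e:
            last_error = e  # keep the error and try the next client
    raise last_error
```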
2026-03-27 21:23:03 -05:00
22c72aa842 remove yt-dlp, fix captions PO Token issue, fix 429 retry logic
- Remove yt-dlp entirely (modules, routes, settings, dependency)
  Was blocking page loads by running synchronously in gevent
- Fix captions: use Android client caption URLs (no PO Token needed)
  instead of web timedtext URLs that YouTube now blocks
- Fix 429 retry: fail immediately without Tor (same IP = pointless retry)
  Was causing ~27s delays with exponential backoff
- Accept ytdlp_enabled as legacy setting to avoid warning on startup
2026-03-27 20:47:44 -05:00
56ecd6cb1b fix: use YouTube-provided thumbnail URLs instead of hardcoded hq720.jpg
Videos without hq720.jpg thumbnails caused mass 404 errors.
Now preserves the actual thumbnail URL from YouTube's API response,
falls back to hqdefault.jpg only when no thumbnail is provided.
Also picks highest quality thumbnail from API (thumbnails[-1])
and adds progressive fallback for subscription/download functions.
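The selection rule might be sketched like this (hypothetical helper; the real logic is spread across util.py and its callers, and the `thumbnails` field name follows YouTube's API response convention):

```python
def best_thumbnail(video_info):
    # Prefer the last entry of YouTube's thumbnails array (highest quality),
    # falling back to hqdefault.jpg only when none was provided.
    thumbs = video_info.get('thumbnails') or []
    if thumbs:
        return thumbs[-1]['url']
    return 'https://i.ytimg.com/vi/%s/hqdefault.jpg' % video_info['id']
```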
2026-03-27 19:22:12 -05:00
f629565e77 bump to v0.4.1
2026-03-22 21:27:50 -05:00
1f8c13adff feat: improve 429 handling with Tor support and clean CI
- Retry with new Tor identity on 429
- Improve error logging
- Remove .build.yml and .drone.yml
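The retry-with-new-identity flow can be sketched as follows (illustrative names, not the project's actual API; the fetch and renew callables are injected so the sketch stays self-contained):

```python
def retry_with_new_identity(fetch, renew_identity, max_tries=3):
    # On a 429, ask Tor for a new circuit and retry, instead of backing
    # off on the same exit IP.
    status, body = fetch()
    for _ in range(max_tries - 1):
        if status != 429:
            break
        renew_identity()  # e.g. signal NEWNYM over the Tor control port
        status, body = fetch()
    return status, body
```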
2026-03-22 21:25:57 -05:00
6a68f06645 Release v0.4.0 - HD Thumbnails, YouTube 2024+ Support, and yt-dlp Integration
Some checks failed
CI / test (push) Failing after 1m19s
Major Features:
- HD video thumbnails (hq720.jpg) with automatic fallback to lower qualities
- HD channel avatars (240x240 instead of 88x88)
- YouTube 2024+ lockupViewModel support for channel playlists
- youtubei/v1/browse API integration for channel playlist tabs
- yt-dlp integration for multi-language audio and subtitles

Bug Fixes:
- Fixed undefined `abort` import in playlist.py
- Fixed undefined functions in proto.py (encode_varint, bytes_to_hex, succinct_encode)
- Fixed missing `traceback` import in proto_debug.py
- Fixed blurry playlist thumbnails using default.jpg instead of HD versions
- Fixed channel playlists page using deprecated pbj=1 format

Improvements:
- Automatic thumbnail fallback system (hq720 → sddefault → hqdefault → mqdefault → default)
- JavaScript thumbnail_fallback() handler for 404 errors
- Better thumbnail quality across all pages (watch, channel, playlist, subscriptions)
- Consistent HD avatar display for all channel items
- Settings system automatically adds new settings without breaking user config

Files Modified:
- youtube/watch.py - HD thumbnails for related videos and playlist items
- youtube/channel.py - HD thumbnails for channel playlists, youtubei API integration
- youtube/playlist.py - HD thumbnails, fixed abort import
- youtube/util.py - HD thumbnail URLs, avatar HD upgrade, prefix_url improvements
- youtube/comments.py - HD video thumbnail
- youtube/subscriptions.py - HD thumbnails, fixed abort import
- youtube/yt_data_extract/common.py - lockupViewModel support, extract_lockup_view_model_info()
- youtube/yt_data_extract/everything_else.py - HD playlist thumbnails
- youtube/proto.py - Fixed undefined function references
- youtube/proto_debug.py - Added traceback import
- youtube/static/js/common.js - thumbnail_fallback() handler
- youtube/templates/*.html - Added onerror handlers for thumbnail fallback
- youtube/version.py - Bump to v0.4.0

Technical Details:
- All thumbnail URLs now use hq720.jpg (1280x720) when available
- Fallback handled client-side via JavaScript onerror handler
- Server-side avatar upgrade via regex in util.prefix_url()
- lockupViewModel parser extracts contentType, metadata, and first_video_id
- Channel playlist tabs now use youtubei/v1/browse instead of deprecated pbj=1
- Settings version system ensures backward compatibility
2026-03-22 20:50:03 -05:00
84e1acaab8 yt-dlp 2026-03-22 14:17:23 -05:00
Jesus
ed4b05d9b6 Bump version to v0.3.2 2025-03-08 16:41:58 -05:00
Jesus
6f88b1cec6 Refactor extract_info in watch.py to improve client flexibility
Introduce primary_client, fallback_client, and last_resort_client variables for better configurability.
Replace hardcoded 'android_vr' with primary_client in fetch_player_response call.
2025-03-08 16:40:51 -05:00
Jesus
03451fb8ae fix: prevent error when closing avMerge if not a function 2025-03-08 16:39:37 -05:00
Jesus
e45c3fd48b Add error styles in player 2025-03-08 16:38:31 -05:00
Jesus
1153ac8f24 Fix NoneType inside comments.py
Bug:

Traceback (most recent call last):
  File "/home/rusian/yt-local/youtube/comments.py", line 180, in video_comments
    post_process_comments_info(comments_info)
  File "/home/rusian/yt-local/youtube/comments.py", line 81, in post_process_comments_info
    comment['author'] = strip_non_ascii(comment['author'])
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rusian/yt-local/youtube/util.py", line 843, in strip_non_ascii
    stripped = (c for c in string if 0 < ord(c) < 127)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not iterable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "src/gevent/greenlet.py", line 900, in gevent._gevent_cgreenlet.Greenlet.run
  File "/home/rusian/yt-local/youtube/comments.py", line 195, in video_comments
    comments_info['error'] = 'YouTube blocked the request. IP address: %s' % e.ip
                                                                             ^^^^
AttributeError: 'TypeError' object has no attribute 'ip'
2025-03-08T01:25:47Z <Greenlet at 0x7f251e5279c0: video_comments('hcm55lU9knw', 0, lc='')> failed with AttributeError
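A guard along the lines of the fix might look like this (sketch reconstructed from the traceback; the actual util.py change may differ):

```python
def strip_non_ascii(string):
    # The traceback above shows a TypeError when comment['author'] is None,
    # which then masked the real error inside the except handler.
    # Treating None as an empty string avoids both failures.
    if string is None:
        return ''
    return ''.join(c for c in string if 0 < ord(c) < 127)
```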
2025-03-08 16:37:33 -05:00
Jesus
c256a045f9 Bump version to v0.3.1 2025-03-08 16:34:29 -05:00
Jesus
98603439cb Improve buffer management for different platforms
- Introduced `BUFFER_CONFIG` to define buffer sizes for various systems (webOS, Samsung Tizen, Android TV, desktop).
- Added `detectSystem()` function to determine the platform based on `navigator.userAgent`.
- Updated `Stream` constructor to use platform-specific buffer sizes dynamically.
- Added console log for debugging detected system and applied buffer size.
2025-03-08 16:32:26 -05:00
Jesus
a6ca011202 version v0.3.0 2025-03-08 16:28:39 -05:00
Jesus
114c2572a4 Renew plyr UI and simplify elements 2025-03-08 16:28:27 -05:00
f64b362603 update logic plyr-start.js 2025-03-03 08:20:41 +08:00
59 changed files with 6094 additions and 944 deletions

.build.yml (deleted)

@@ -1,12 +0,0 @@
image: debian/buster
packages:
  - python3-pip
  - virtualenv
tasks:
  - test: |
      cd yt-local
      virtualenv -p python3 venv
      source venv/bin/activate
      python --version
      pip install -r requirements-dev.txt
      pytest

.drone.yml (deleted)

@@ -1,10 +0,0 @@
kind: pipeline
name: default

steps:
- name: test
  image: python:3.7.3
  commands:
  - pip install --upgrade pip
  - pip install -r requirements-dev.txt
  - pytest

.gitignore vendored (168 lines changed)

@@ -1,15 +1,171 @@
# =============================================================================
# .gitignore - YT Local
# =============================================================================
# -----------------------------------------------------------------------------
# Python / Bytecode
# -----------------------------------------------------------------------------
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
# -----------------------------------------------------------------------------
# Virtual Environments
# -----------------------------------------------------------------------------
.env
.env.*
!.env.example
.venv/
venv/
ENV/
env/
*.egg-info/
.eggs/
# -----------------------------------------------------------------------------
# IDE / Editors
# -----------------------------------------------------------------------------
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store
.flycheck_*
*.sublime-project
*.sublime-workspace
# -----------------------------------------------------------------------------
# Distribution / Packaging
# -----------------------------------------------------------------------------
build/
dist/
*.egg
*.manifest
*.spec
pip-wheel-metadata/
share/python-wheels/
MANIFEST
# -----------------------------------------------------------------------------
# Testing / Coverage
# -----------------------------------------------------------------------------
.pytest_cache/
.coverage
.coverage.*
htmlcov/
.tox/
.nox/
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
# -----------------------------------------------------------------------------
# Type Checking / Linting
# -----------------------------------------------------------------------------
.mypy_cache/
.dmypy.json
dmypy.json
.pyre/
# -----------------------------------------------------------------------------
# Jupyter / IPython
# -----------------------------------------------------------------------------
.ipynb_checkpoints
profile_default/
ipython_config.py
# -----------------------------------------------------------------------------
# Python Tools
# -----------------------------------------------------------------------------
# pyenv
.python-version
# pipenv
Pipfile.lock
# PEP 582
__pypackages__/
# Celery
celerybeat-schedule
celerybeat.pid
# Sphinx
docs/_build/
# PyBuilder
target/
# Scrapy
.scrapy
# -----------------------------------------------------------------------------
# Web Frameworks
# -----------------------------------------------------------------------------
# Django
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask
instance/
.webassets-cache
# -----------------------------------------------------------------------------
# Documentation
# -----------------------------------------------------------------------------
# mkdocs
/site
# -----------------------------------------------------------------------------
# Project Specific - YT Local
# -----------------------------------------------------------------------------
# Data & Debug
data/
debug/
# Release artifacts
release/
yt-local/
get-pip.py
latest-dist.zip
*.7z
*.zip
*venv*
# Configuration (contains user-specific data)
settings.txt
banned_addresses.txt
# -----------------------------------------------------------------------------
# Temporary / Backup Files
# -----------------------------------------------------------------------------
*.log
*.tmp
*.bak
*.orig
*.cache/
# -----------------------------------------------------------------------------
# Localization / Compiled translations
# -----------------------------------------------------------------------------
*.mo
# -----------------------------------------------------------------------------
# AI assistants / LLM tools
# -----------------------------------------------------------------------------
# Claude AI assistant configuration and cache
.claude/
claude*
.anthropic/
# Kiro AI tool configuration and cache
.kiro/
kiro*
# Qwen AI-related files and caches
.qwen/
qwen*
# Other AI assistants/IDE integrations
.cursor/
.gpt/
.openai/

Makefile (new file, 210 lines)

@@ -0,0 +1,210 @@
# yt-local Makefile
# Automated tasks for development, translations, and maintenance
.PHONY: help install dev clean test i18n-extract i18n-init i18n-update i18n-compile i18n-stats i18n-clean setup-dev lint format backup restore
# Variables
PYTHON := python3
PIP := pip3
LANG_CODE ?= es
VENV_DIR := venv
PROJECT_NAME := yt-local
## Help
help: ## Show this help message
@echo "$(PROJECT_NAME) - Available tasks:"
@echo ""
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf " %-20s %s\n", $$1, $$2}'
@echo ""
@echo "Examples:"
@echo " make install # Install dependencies"
@echo " make dev # Run development server"
@echo " make i18n-extract # Extract strings for translation"
@echo " make i18n-init LANG_CODE=fr # Initialize French"
@echo " make lint # Check code style"
## Installation and Setup
install: ## Install project dependencies
@echo "[INFO] Installing dependencies..."
$(PIP) install -r requirements.txt
@echo "[SUCCESS] Dependencies installed"
setup-dev: ## Complete development setup
@echo "[INFO] Setting up development environment..."
$(PYTHON) -m venv $(VENV_DIR)
./$(VENV_DIR)/bin/pip install -r requirements.txt
@echo "[SUCCESS] Virtual environment created in $(VENV_DIR)"
@echo "[INFO] Activate with: source $(VENV_DIR)/bin/activate"
requirements: ## Update and install requirements
@echo "[INFO] Installing/updating requirements..."
$(PIP) install --upgrade pip
$(PIP) install -r requirements.txt
@echo "[SUCCESS] Requirements installed"
## Development
dev: ## Run development server
@echo "[INFO] Starting development server..."
@echo "[INFO] Server available at: http://localhost:9010"
$(PYTHON) server.py
run: dev ## Alias for dev
## Testing
test: ## Run tests
@echo "[INFO] Running tests..."
@if [ -d "tests" ]; then \
$(PYTHON) -m pytest -v; \
else \
echo "[WARN] No tests directory found"; \
fi
test-cov: ## Run tests with coverage
@echo "[INFO] Running tests with coverage..."
@if command -v pytest-cov >/dev/null 2>&1; then \
$(PYTHON) -m pytest -v --cov=$(PROJECT_NAME) --cov-report=html; \
else \
echo "[WARN] pytest-cov not installed. Run: pip install pytest-cov"; \
fi
## Internationalization (i18n)
i18n-extract: ## Extract strings for translation
@echo "[INFO] Extracting strings for translation..."
$(PYTHON) manage_translations.py extract
@echo "[SUCCESS] Strings extracted to translations/messages.pot"
i18n-init: ## Initialize new language (use LANG_CODE=xx)
@echo "[INFO] Initializing language: $(LANG_CODE)"
$(PYTHON) manage_translations.py init $(LANG_CODE)
@echo "[SUCCESS] Language $(LANG_CODE) initialized"
@echo "[INFO] Edit: translations/$(LANG_CODE)/LC_MESSAGES/messages.po"
i18n-update: ## Update existing translations
@echo "[INFO] Updating existing translations..."
$(PYTHON) manage_translations.py update
@echo "[SUCCESS] Translations updated"
i18n-compile: ## Compile translations to binary .mo files
@echo "[INFO] Compiling translations..."
$(PYTHON) manage_translations.py compile
@echo "[SUCCESS] Translations compiled"
i18n-stats: ## Show translation statistics
@echo "[INFO] Translation statistics:"
@echo ""
@for lang_dir in translations/*/; do \
if [ -d "$$lang_dir" ] && [ "$$lang_dir" != "translations/*/" ]; then \
lang=$$(basename "$$lang_dir"); \
po_file="$$lang_dir/LC_MESSAGES/messages.po"; \
if [ -f "$$po_file" ]; then \
total=$$(grep -c "^msgid " "$$po_file" 2>/dev/null || echo "0"); \
translated=$$(grep -c "^msgstr \"[^\"]\+\"" "$$po_file" 2>/dev/null || echo "0"); \
fuzzy=$$(grep -c "^#, fuzzy" "$$po_file" 2>/dev/null || echo "0"); \
if [ "$$total" -gt 0 ]; then \
percent=$$((translated * 100 / total)); \
echo " [STAT] $$lang: $$translated/$$total ($$percent%) - Fuzzy: $$fuzzy"; \
else \
echo " [STAT] $$lang: No translations yet"; \
fi; \
fi \
fi \
done
@echo ""
i18n-clean: ## Clean compiled translation files
@echo "[INFO] Cleaning compiled .mo files..."
find translations/ -name "*.mo" -delete
@echo "[SUCCESS] .mo files removed"
i18n-workflow: ## Complete workflow: extract → update → compile
@echo "[INFO] Running complete translation workflow..."
@make i18n-extract
@make i18n-update
@make i18n-compile
@make i18n-stats
@echo "[SUCCESS] Translation workflow completed"
## Code Quality
lint: ## Check code with flake8
@echo "[INFO] Checking code style..."
@if command -v flake8 >/dev/null 2>&1; then \
flake8 youtube/ --max-line-length=120 --ignore=E501,W503,E402 --exclude=youtube/ytdlp_service.py,youtube/ytdlp_integration.py,youtube/ytdlp_proxy.py; \
echo "[SUCCESS] Code style check passed"; \
else \
echo "[WARN] flake8 not installed (pip install flake8)"; \
fi
format: ## Format code with black (if available)
@echo "[INFO] Formatting code..."
@if command -v black >/dev/null 2>&1; then \
black youtube/ --line-length=120 --exclude='ytdlp_.*\.py'; \
echo "[SUCCESS] Code formatted"; \
else \
echo "[WARN] black not installed (pip install black)"; \
fi
check-deps: ## Check installed dependencies
@echo "[INFO] Checking dependencies..."
@$(PYTHON) -c "import flask_babel; print('[OK] Flask-Babel:', flask_babel.__version__)" 2>/dev/null || echo "[ERROR] Flask-Babel not installed"
@$(PYTHON) -c "import flask; print('[OK] Flask:', flask.__version__)" 2>/dev/null || echo "[ERROR] Flask not installed"
@$(PYTHON) -c "import yt_dlp; print('[OK] yt-dlp:', yt_dlp.__version__)" 2>/dev/null || echo "[ERROR] yt-dlp not installed"
## Maintenance
backup: ## Create translations backup
@echo "[INFO] Creating translations backup..."
@timestamp=$$(date +%Y%m%d_%H%M%S); \
tar -czf "translations_backup_$$timestamp.tar.gz" translations/ 2>/dev/null || echo "[WARN] No translations to backup"; \
if [ -f "translations_backup_$$timestamp.tar.gz" ]; then \
echo "[SUCCESS] Backup created: translations_backup_$$timestamp.tar.gz"; \
fi
restore: ## Restore translations from backup
@echo "[INFO] Restoring translations from backup..."
@if ls translations_backup_*.tar.gz 1>/dev/null 2>&1; then \
latest_backup=$$(ls -t translations_backup_*.tar.gz | head -1); \
tar -xzf "$$latest_backup"; \
echo "[SUCCESS] Restored from: $$latest_backup"; \
else \
echo "[ERROR] No backup files found"; \
fi
clean: ## Clean temporary files and caches
@echo "[INFO] Cleaning temporary files..."
find . -type f -name "*.pyc" -delete
find . -type d -name "__pycache__" -delete
find . -type f -name "*.mo" -delete
find . -type d -name ".pytest_cache" -delete
find . -type f -name ".coverage" -delete
find . -type d -name "htmlcov" -delete
@echo "[SUCCESS] Temporary files removed"
distclean: clean ## Clean everything including venv
@echo "[INFO] Cleaning everything..."
rm -rf $(VENV_DIR)
@echo "[SUCCESS] Complete cleanup done"
## Project Information
info: ## Show project information
@echo "[INFO] $(PROJECT_NAME) - Project information:"
@echo ""
@echo " [INFO] Directory: $$(pwd)"
@echo " [INFO] Python: $$($(PYTHON) --version)"
@echo " [INFO] Pip: $$($(PIP) --version | cut -d' ' -f1-2)"
@echo ""
@echo " [INFO] Configured languages:"
@for lang_dir in translations/*/; do \
if [ -d "$$lang_dir" ] && [ "$$lang_dir" != "translations/*/" ]; then \
lang=$$(basename "$$lang_dir"); \
echo " - $$lang"; \
fi \
done
@echo ""
@echo " [INFO] Main files:"
@echo " - babel.cfg (i18n configuration)"
@echo " - manage_translations.py (i18n CLI)"
@echo " - youtube/i18n_strings.py (centralized strings)"
@echo " - youtube/ytdlp_service.py (yt-dlp integration)"
@echo ""
# Default target
.DEFAULT_GOAL := help

README.md (424 lines changed)

@@ -1,181 +1,313 @@
# yt-local

Fork of [youtube-local](https://github.com/user234683/youtube-local)

[![License: AGPL v3](https://img.shields.io/badge/License-AGPL_v3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Python 3.7+](https://img.shields.io/badge/python-3.7+-blue.svg)](https://www.python.org/downloads/)
[![Tests](https://img.shields.io/badge/tests-passing-brightgreen.svg)](https://github.com/user234683/youtube-local)

A privacy-focused, browser-based YouTube client that routes requests through Tor for anonymous viewing—**without compromising on speed or features**.

[Features](#features) • [Install](#install) • [Usage](#usage) • [Screenshots](#screenshots)

---

> [!NOTE]
> How it works: yt-local mirrors YouTube's web requests (using the same Invidious/InnerTube endpoints as yt-dlp and Invidious) but strips JavaScript and serves a lightweight HTML frontend. No API keys needed.

## Overview

yt-local is a lightweight, self-hosted YouTube client written in Python that gives you:

- **Privacy-first**: All requests route through Tor by default (video optional), keeping you anonymous.
- **Fast page loads**: No lazy-loading, no layout reflows, instant comment rendering.
- **Full control**: Customize subtitles, related videos, comments, and playback speed.
- **High quality**: Supports all YouTube video qualities (144p–2160p) via DASH muxing.
- **Zero ads**: Clean interface, no tracking, no sponsored content.
- **Self-hosted**: You control the instance—no third-party trust required.

## Features

| Category | Features |
|---------------|----------------------------------------------------------------------------------------|
| Core | Search, channels, playlists, watch pages, comments, subtitles (auto/manual) |
| Privacy | Optional Tor routing (including video), automatic circuit rotation on 429 errors |
| Local | Local playlists (durable against YouTube deletions), thumbnail caching |
| UI | 3 themes (Light/Gray/Dark), theater mode, custom font selection |
| Config | Fine-grained settings: subtitle mode, comment visibility, SponsorBlock integration |
| Performance | No JavaScript required, instant page rendering, rate limiting with exponential backoff |
| Subscriptions | Import from YouTube Takeout (CSV/JSON), tag organization, mute channels |

### Advanced Capabilities

- SponsorBlock integration — skip sponsored segments automatically
- Custom video speeds — 0.25x to 4x playback rate
- Video transcripts — accessible via transcript button
- Video quality muxing — combine separate video/audio streams for non-360p/720p resolutions
- Tor circuit rotation — automatic new identity on rate limiting (429)
- File downloading — download videos/audio (disabled by default, configurable)

## Screenshots

| Light Theme | Gray Theme | Dark Theme |
|:-----------------------------------------------------:|:----------------------------------------------------:|:----------------------------------------------------:|
| ![Light](https://pic.infini.fr/l7WINjzS/0Ru6MrhA.png) | ![Gray](https://pic.infini.fr/znnQXWNc/hL78CRzo.png) | ![Dark](https://pic.infini.fr/iXwFtTWv/mt2kS5bv.png) |

| Channel View | Playlist View |
|:-------------------------------------------------------:|:---------------------:|
| ![Channel](https://pic.infini.fr/JsenWVYe/SbdIQlS6.png) | *(similar structure)* |

---
## Features
* Standard pages of YouTube: search, channels, playlists
* Anonymity from Google's tracking by routing requests through Tor
* Local playlists: These solve the two problems with creating playlists on YouTube: (1) they're datamined and (2) videos frequently get deleted by YouTube and lost from the playlist, making it very difficult to find a reupload as the title of the deleted video is not displayed.
* Themes: Light, Gray, and Dark
* Subtitles
* Easily download videos or their audio. (Disabled by default)
* No ads
* View comments
* JavaScript not required
* Theater and non-theater mode
* Subscriptions that are independent from YouTube
* Can import subscriptions from YouTube
* Works by checking channels individually
* Can be set to automatically check channels.
* For efficiency of requests, frequency of checking is based on how quickly channel posts videos
* Can mute channels, so as to have a way to "soft" unsubscribe. Muted channels won't be checked automatically or when using the "Check all" button. Videos from these channels will be hidden.
* Can tag subscriptions to organize them or check specific tags
* Fast page
* No distracting/slow layout rearrangement
* No lazy-loading of comments; they are ready instantly.
* Settings allow fine-tuned control over when/how comments or related videos are shown:
1. Shown by default, with click to hide
2. Hidden by default, with click to show
3. Never shown
* Optionally skip sponsored segments using [SponsorBlock](https://github.com/ajayyy/SponsorBlock)'s API
* Custom video speeds
* Video transcript
* Supports all available video qualities: 144p through 2160p
## Planned features
- [ ] Putting videos from subscriptions or local playlists into the related videos
- [x] Information about video (geographic regions, region of Tor exit node, etc)
- [ ] Ability to delete playlists
- [ ] Auto-saving of local playlist videos
- [ ] Import youtube playlist into a local playlist
- [ ] Rearrange items of local playlist
- [x] Video qualities other than 360p and 720p by muxing video and audio
- [x] Indicate if comments are disabled
- [x] Indicate how many comments a video has
- [ ] Featured channels page
- [ ] Channel comments
- [x] Video transcript
- [x] Automatic Tor circuit change when blocked
- [x] Support &t parameter
- [ ] Subscriptions: Option to mark what has been watched
- [ ] Subscriptions: Option to filter videos based on keywords in title or description
- [ ] Subscriptions: Delete old entries and thumbnails
- [ ] Support for more sites, such as Vimeo, Dailymotion, LBRY, etc.
## Installing
### Windows

1. Download the latest [release ZIP](https://github.com/user234683/yt-local/releases)
2. Extract it to any folder
3. Run `run.bat` to start

### GNU/Linux / macOS

```bash
# 1. Clone or extract the release
git clone https://github.com/user234683/yt-local.git
cd yt-local

# 2. Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate

# 3. Install dependencies
pip install -r requirements.txt

# 4. Run the server
python3 server.py
```
> [!TIP]
> If `pip` isn't installed, use your distro's package manager (e.g., `sudo apt install python3-pip` on Debian/Ubuntu).
### Portable Mode
To keep settings and data in the same directory as the app:
```bash
# Create an empty settings.txt in the project root
touch settings.txt
python3 server.py
# Data now stored in ./data/ instead of ~/.yt-local/
```
---
## Usage

### Basic Access

1. Start the server (on Windows, `run.bat` does this for you):

   ```bash
   python3 server.py
   # Server runs on http://127.0.0.1:9010 (configurable in /settings)
   ```

2. Access YouTube via the proxy by prefixing YouTube URLs with `http://localhost:9010/`:

   ```bash
   http://localhost:9010/https://www.youtube.com/watch?v=vBgulDeV2RU
   ```

   yt-local can also be added as a [search engine in Firefox](https://support.mozilla.org/en-US/kb/add-or-remove-search-engine-firefox) to make searching more convenient.
3. (Optional) Use Redirector to auto-redirect YouTube URLs:
- **Firefox**: [Redirector addon](https://addons.mozilla.org/firefox/addon/redirector/)
- **Chrome**: [Redirector addon](https://chrome.google.com/webstore/detail/redirector/ocgpenflpmgnfapjedencafcfakcekcd)
- **Pattern**: `^(https?://(?:[a-zA-Z0-9_-]*\.)?(?:youtube\.com|youtu\.be|youtube-nocookie\.com)/.*)`
- **Redirect to**: `http://localhost:9010/$1`
> [!NOTE]
> To redirect embeds on web pages too, make sure "Iframes" is checked under advanced options in your redirector rule. Test with `http://localhost:9010/youtube.com/embed/vBgulDeV2RU`.
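You can sanity-check the include pattern above against typical URLs before trusting it with your browsing:

```python
import re

# The Redirector include pattern from the step above
PATTERN = re.compile(
    r'^(https?://(?:[a-zA-Z0-9_-]*\.)?'
    r'(?:youtube\.com|youtu\.be|youtube-nocookie\.com)/.*)')

for url in ('https://www.youtube.com/watch?v=vBgulDeV2RU',
            'https://youtu.be/vBgulDeV2RU',
            'https://example.com/watch'):
    m = PATTERN.match(url)
    # Redirector rewrites matches to http://localhost:9010/$1
    print(url, '->', 'http://localhost:9010/' + m.group(1) if m else 'no redirect')
```

Only YouTube-family hosts match; everything else passes through untouched.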
### Tor Routing
> [!IMPORTANT]
> Recommended for privacy. In `/settings`, set **Route Tor** to `"On, except video"` (or `"On, including video"`), then save.
#### Running Tor
Option A: Tor Browser (easiest)
- Launch Tor Browser and leave it running
- yt-local uses port `9150` (Tor Browser default)
Option B: Standalone Tor
```bash
# Linux (Debian/Ubuntu)
sudo apt install tor
sudo systemctl enable --now tor
```

System Tor listens on SOCKS port `9050` and control port `9051` (not the Tor Browser's `9150`/`9151`), so set **Tor port** to `9050` and **Tor control port** to `9051` in yt-local's settings page. Enable the control port by uncommenting `ControlPort 9051` and setting `CookieAuthentication 0` in `/etc/tor/torrc`, then restart Tor.

On Windows, you can start standalone Tor at login by creating a shortcut in the `shell:startup` folder with the command `"C:\[path-to-Tor-Browser-directory]\Tor\tor.exe" SOCKSPort 9150 ControlPort 9151`.
> [!WARNING]
> Video over Tor is bandwidth-intensive and noticeably slower (seeking in particular). Consider donating to [Tor node operators](https://torservers.net/donate.html) to sustain the network; donations to [NoiseTor](https://noisetor.net/), for instance, go directly toward funding nodes.

There are no signs that watch history in yt-local affects on-site YouTube recommendations. Requests to googlevideo are likely logged for some period, but the recommendation and advertisement systems appear to depend mainly on in-page JavaScript tracking rather than CDN requests.
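A rough bandwidth estimate for Tor video routing, assuming the ~485 kbit/s average bitrate cited by the project; the per-GB rate below is back-derived from the project's quoted $0.03 per daily-hour figure and is not an official price:

```python
KBIT = 1000  # kilobit

def monthly_gb(hours_per_day, kbit_per_sec=485, days=30):
    """Gigabytes of video transferred per month at a given average bitrate."""
    bytes_total = kbit_per_sec * KBIT / 8 * 3600 * hours_per_day * days
    return bytes_total / 1e9

gb = monthly_gb(1)       # one hour of video per day
cost = gb * 0.0046       # assumed $/GB, back-derived from the $0.03 figure
print(round(gb, 2), round(cost, 2))  # 6.55 0.03
```

So even a $1/month donation comfortably offsets many daily hours of Tor-routed video.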
### Import Subscriptions
1. Go to [Google Takeout](https://takeout.google.com/takeout/custom/youtube)
2. Deselect all → select only **Subscriptions** → create export
3. Download and extract `subscriptions.csv` (path: `YouTube and YouTube Music/subscriptions/subscriptions.csv`)
4. In yt-local: **Subscriptions** → **Import** → upload CSV
> [!IMPORTANT]
> The CSV file must contain columns: `channel_id,channel_name,channel_url`
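A quick pre-flight check of the export file (hypothetical helper; yt-local performs its own validation on import):

```python
import csv
import io

REQUIRED = {'channel_id', 'channel_name', 'channel_url'}

def check_subscriptions_csv(text):
    """Return the missing required columns (empty set means the file is OK)."""
    header = next(csv.reader(io.StringIO(text)), [])
    return REQUIRED - {h.strip().lower() for h in header}

sample = ('channel_id,channel_name,channel_url\n'
          'UCxyz,Example Channel,https://youtube.com/channel/UCxyz\n')
print(check_subscriptions_csv(sample))  # set() -> ready to import
```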
## Supported formats
- Google Takeout CSV
- Google Takeout JSON (legacy)
- NewPipe JSON export
- OPML (from YouTube's old subscription manager)
---
## Configuration
Visit `http://localhost:9010/settings` to configure:
| Setting | Description |
|--------------------|-------------------------------------------------|
| Route Tor | Off / On (except video) / On (including video) |
| Default subtitles | Off / Manual only / Auto + Manual |
| Comments mode | Shown by default / Hidden by default / Never |
| Related videos | Same options as comments |
| Theme | Light / Gray / Dark |
| Font | Browser default / Serif / Sans-serif |
| Default resolution | Auto / 144p-2160p                               |
| SponsorBlock | Enable Sponsored segments skipping |
| Proxy images | Route thumbnails through yt-local (for privacy) |
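Settings are persisted as simple `name = value` lines in `settings.txt`. A minimal parser sketch, assuming values are Python literals (suggested by settings.py's use of `ast`, but not confirmed here):

```python
import ast

def parse_settings(text):
    """Parse simple `name = value` lines into a dict (illustrative only)."""
    result = {}
    for line in text.splitlines():
        line = line.split('#')[0].strip()  # drop comments and whitespace
        if '=' in line:
            name, _, value = line.partition('=')
            result[name.strip()] = ast.literal_eval(value.strip())
    return result

print(parse_settings("route_tor = 1\nport_number = 9010\ntheme = 2"))
```

`ast.literal_eval` accepts only literals, so a malicious settings file cannot execute code the way `eval` would allow.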
---
## Troubleshooting
| Issue | Solution |
|------------------------------|----------------------------------------------------------------------------------------------|
| Port already in use | Change `port_number` in `/settings` or kill existing process: `pkill -f "python3 server.py"` |
| 429 Too Many Requests | Enable Tor routing for automatic IP rotation, or wait 5-10 minutes |
| Failed to connect to Tor | Verify Tor is running: `tor --version` or launch Tor Browser |
| Subscriptions not importing | Ensure CSV has columns: `channel_id,channel_name,channel_url` |
| Settings persist across runs | Check `~/.yt-local/settings.txt` (non-portable) or `./settings.txt` (portable) |
---
## Development
### Running Tests
```bash
source venv/bin/activate # if not already in venv
make test
```
### Project Structure
```bash
yt-local/
├── youtube/ # Core application logic
│ ├── __init__.py # Flask app entry point
│ ├── util.py # HTTP utilities, Tor manager, fetch_url
│ ├── watch.py # Video/playlist page handlers
│ ├── channel.py # Channel page handlers
│ ├── playlist.py # Playlist handlers
│ ├── search.py # Search handlers
│ ├── comments.py # Comment extraction/rendering
│ ├── subscriptions.py # Subscription management + SQLite
│ ├── local_playlist.py # Local playlist CRUD
│ ├── proto.py # YouTube protobuf token generation
│ ├── yt_data_extract/ # Polymer JSON parsing abstractions
│ └── hls_cache.py # HLS audio/video streaming proxy
├── templates/ # Jinja2 HTML templates
├── static/ # CSS/JS assets
├── translations/ # i18n files (Babel)
├── tests/ # pytest test suite
├── server.py # WSGI entry point
├── settings.py # Settings parser + admin page
├── generate_release.py # Windows release builder
└── manage_translations.py # i18n maintenance script
```
> [!NOTE]
> For detailed architecture guidance, see [`docs/HACKING.md`](docs/HACKING.md).
### Contributing
Contributions welcome! Please:
1. Read [`docs/HACKING.md`](docs/HACKING.md) for coding guidelines
2. Follow [PEP 8](https://peps.python.org/pep-0008/) style (use `ruff format`)
3. Run tests before submitting: `pytest`
4. Ensure no security issues: `bandit -r .`
5. Update docs for new features
---
## Security Notes
- **No API keys required** — uses same endpoints as public YouTube web interface
- **Tor is optional** — disable in `/settings` if you prefer performance over anonymity
- **Rate limiting handled** — exponential backoff (max 5 retries) with automatic Tor circuit rotation
- **Path traversal protected** — user input validated against regex whitelists (CWE-22)
- **Subprocess calls secure** — build scripts use `subprocess.run([...])` instead of shell (CWE-78)
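The retry behavior above can be sketched as follows. This is illustrative only: the real logic lives in `youtube/util.py`, and circuit rotation uses the stem library's NEWNYM signal rather than the placeholder callback shown here.

```python
import time

def backoff_delays(max_retries=5, base=1.0, cap=60.0):
    """Exponential backoff schedule used between retries."""
    return [min(base * 2 ** attempt, cap) for attempt in range(max_retries)]

def fetch_with_retry(fetch, renew_circuit=None, max_retries=5):
    """Call fetch(); on a 429 response, optionally rotate the Tor circuit,
    sleep with exponential backoff, and retry up to max_retries times."""
    for delay in backoff_delays(max_retries):
        status = fetch()
        if status != 429:          # anything but "Too Many Requests"
            return status
        if renew_circuit:
            renew_circuit()        # e.g. stem: controller.signal(Signal.NEWNYM)
        time.sleep(delay)
    raise RuntimeError('still rate limited after retries')

print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

Rotating the circuit before sleeping means the retry arrives from a fresh exit node, which is why Tor routing helps against per-IP rate limits.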
> [!NOTE]
> GPG key for release verification: `72CFB264DFC43F63E098F926E607CE7149F4D71C`
---
## Public Instances
yt-local is designed for self-hosting.
---
## Donate
This project is 100% free and open-source. If you'd like to support development:
- **Bitcoin**: `1JrC3iqs3PP5Ge1m1vu7WE8LEf4S85eo7y`
- **Tor node donation**: https://torservers.net/donate
---
## License

GNU Affero General Public License v3.0 or any later version (AGPLv3+). See [`LICENSE`](LICENSE) for the full text.

### Exception for youtube-dl

Permission is granted to relicense portions of this software under the Unlicense, public domain, or whichever license youtube-dl uses at the time of relicensing, for direct inclusion into the [official youtube-dl repository](https://github.com/ytdl-org/youtube-dl). This exception **does not apply** to forks or any other use; those remain under the AGPLv3. If inclusion happens via a pull request, relicensing takes effect when the pull request is merged into youtube-dl.

---

## Similar Projects

| Project | Type | Notes |
|--------------------------------------------------------------|----------|---------------------------------------------|
| [invidious](https://github.com/iv-org/invidious) | Server | Multi-user instances, REST API |
| [Yotter](https://github.com/ytorg/Yotter) | Server | YouTube + Twitter integration |
| [FreeTube](https://github.com/FreeTubeApp/FreeTube) | Desktop | Electron-based client |
| [NewPipe](https://newpipe.schabi.org/) | Mobile | Android app |
| [mps-youtube](https://github.com/mps-youtube/mps-youtube) | Terminal | CLI-based, text UI |
| [youtube-local](https://github.com/user234683/youtube-local) | Browser | Original project on which yt-local is based |

Other related tools: [youtube-viewer](https://github.com/trizen/youtube-viewer), [smtube](https://www.smtube.org/), [Minitube](https://github.com/flaviotordini/minitube), [toogles](https://github.com/mikecrittenden/toogles) (embeds only), [YTLibre](https://git.sr.ht/~heckyel/ytlibre) (video extraction only), and [youtube-dl](https://github.com/ytdl-org/youtube-dl), on which this project was originally based.

---
Made for privacy-conscious users
Last updated: 2026-04-19

babel.cfg Normal file

@@ -0,0 +1,16 @@
[python: youtube/**.py]
encoding = utf-8
keywords = lazy_gettext _l _
[python: server.py]
encoding = utf-8
keywords = _
[python: settings.py]
encoding = utf-8
keywords = _
[jinja2: youtube/templates/**.html]
encoding = utf-8
extensions=jinja2.ext.i18n
silent=false

docs/basic-script-openrc/README.md

@@ -1,8 +1,16 @@
# Basic init yt-local for openrc

## Prerequisites

- System with OpenRC installed and configured.
- Administrative privileges (doas or sudo).
- `ytlocal` script located at `/usr/sbin/ytlocal` and application files in an accessible directory.

## Service Installation

1. **Create the OpenRC service script** `/etc/init.d/ytlocal`:

```sh
#!/sbin/openrc-run
# Distributed under the terms of the GNU General Public License v3 or later
name="yt-local"
@@ -41,36 +49,60 @@ stop() {
}
```
> [!NOTE]
> Ensure the script is executable:
>
> ```sh
> doas chmod a+x /etc/init.d/ytlocal
> ```

2. **Create the executable script** `/usr/sbin/ytlocal`:

```bash
#!/usr/bin/env bash
# Change the working directory according to your installation path
cd /home/your-path/ytlocal/ # <-- MODIFY TO YOUR PATH
source venv/bin/activate
python server.py > /dev/null 2>&1 &
echo $! > /var/run/ytlocal.pid
```

> [!NOTE]
> Make this script executable as well:
>
> ```sh
> doas chmod a+x /usr/sbin/ytlocal
> ```

> [!WARNING]
> Run this script only as root or via `doas`, as it writes to `/var/run` and uses network privileges.

> [!TIP]
> To store the PID in a different location, adjust the `pidfile` variable in the service script.

> [!IMPORTANT]
> Verify that the virtual environment (`venv`) is correctly set up and that `python` points to the appropriate version.

> [!CAUTION]
> Do not stop the process manually; use OpenRC commands (`rc-service ytlocal stop`) to avoid race conditions.

> [!NOTE]
> When run with administrative privileges, the configuration is saved in `/root/.yt-local`, which is root-only.

## Service Management

- **Status**: `doas rc-service ytlocal status`
- **Start**: `doas rc-service ytlocal start`
- **Restart**: `doas rc-service ytlocal restart`
- **Stop**: `doas rc-service ytlocal stop`
- **Enable at boot**: `doas rc-update add ytlocal default`
- **Disable**: `doas rc-update del ytlocal`

## Post-Installation Verification

- Confirm the process is running: `doas rc-service ytlocal status`
- Inspect logs for issues: `doas tail -f /var/log/ytlocal.log` (if logging is configured).

## Troubleshooting Common Issues

- **Service fails to start**: verify script permissions, the `command` path in the service script, and that the virtualenv exists.
- **Port conflict**: adjust the server's port configuration before launching.
- **Import errors**: ensure all dependencies are installed in the virtual environment.

> [!IMPORTANT]
> Keep the service script updated when modifying startup logic or adding new dependencies.

generate_release.py

@@ -44,6 +44,10 @@ def remove_files_with_extensions(path, extensions):
def download_if_not_exists(file_name, url, sha256=None):
    if not os.path.exists('./' + file_name):
        # Reject non-https URLs so a mistaken constant cannot cause a
        # plaintext download (bandit B310 hardening).
        if not url.startswith('https://'):
            raise Exception('Refusing to download over non-https URL: ' + url)
        log('Downloading ' + file_name + '..')
        data = urllib.request.urlopen(url).read()
        log('Finished downloading ' + file_name)
@@ -58,12 +62,14 @@ def download_if_not_exists(file_name, url, sha256=None):
        log('Using existing ' + file_name)


def wine_run_shell(command):
    # Keep argv-style invocation (no shell) to avoid command injection.
    if os.name == 'posix':
        parts = ['wine'] + command.replace('\\', '/').split()
    elif os.name == 'nt':
        parts = command.split()
    else:
        raise Exception('Unsupported OS')
    check(subprocess.run(parts).returncode)


def wine_run(command_parts):
    if os.name == 'posix':
@@ -92,7 +98,20 @@ if os.path.exists('./yt-local'):
# confused with working directory. I'm calling it the same thing so it will
# have that name when extracted from the final release zip archive)
log('Making copy of yt-local files')
# Avoid the shell: pipe `git archive` into 7z directly via subprocess.
_git_archive = subprocess.Popen(
    ['git', 'archive', '--format', 'tar', 'master'],
    stdout=subprocess.PIPE,
)
_sevenz = subprocess.Popen(
    ['7z', 'x', '-si', '-ttar', '-oyt-local'],
    stdin=_git_archive.stdout,
)
_git_archive.stdout.close()
_sevenz.wait()
_git_archive.wait()
check(_sevenz.returncode)
check(_git_archive.returncode)

if len(os.listdir('./yt-local')) == 0:
    raise Exception('Failed to copy yt-local files')
@@ -136,7 +155,7 @@ if os.path.exists('./python'):
log('Extracting python distribution')
check_subp(subprocess.run(['7z', '-y', 'x', '-opython', python_dist_name]))
log('Executing get-pip.py')
wine_run(['./python/python.exe', '-I', 'get-pip.py'])
@@ -241,7 +260,7 @@ if os.path.exists('./' + output_filename):
    log('Removing previous zipped release')
    os.remove('./' + output_filename)
log('Zipping release')
check_subp(subprocess.run(['7z', '-mx=9', 'a', output_filename, './yt-local']))
print('\n')
log('Finished')

manage_translations.py Normal file

@@ -0,0 +1,113 @@
#!/usr/bin/env python3
"""
Translation management script for yt-local
Usage:
    python manage_translations.py extract    # Extract strings to messages.pot
    python manage_translations.py init es    # Initialize Spanish translation
    python manage_translations.py update     # Update all translations
    python manage_translations.py compile    # Compile translations to .mo files
"""
import sys
import os
import subprocess

# Ensure we use the Python from the virtual environment if available
if hasattr(sys, 'real_prefix') or (hasattr(sys, 'base_prefix') and sys.base_prefix != sys.prefix):
    # Already in venv
    pass
else:
    # Try to activate venv
    venv_path = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), 'venv')
    if os.path.exists(venv_path):
        venv_bin = os.path.join(venv_path, 'bin')
        if os.path.exists(venv_bin):
            os.environ['PATH'] = venv_bin + os.pathsep + os.environ['PATH']


def run_command(cmd):
    """Run a shell command and print output"""
    print(f"Running: {' '.join(cmd)}")
    # Use the pybabel from the same directory as our Python executable
    if cmd[0] == 'pybabel':
        pybabel_path = os.path.join(os.path.dirname(sys.executable), 'pybabel')
        if os.path.exists(pybabel_path):
            cmd = [pybabel_path] + cmd[1:]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.stdout:
        print(result.stdout)
    if result.stderr:
        print(result.stderr, file=sys.stderr)
    return result.returncode


def extract():
    """Extract translatable strings from source code"""
    print("Extracting translatable strings...")
    return run_command([
        'pybabel', 'extract',
        '-F', 'babel.cfg',
        '-k', 'lazy_gettext',
        '-k', '_l',
        '-o', 'translations/messages.pot',
        '.'
    ])


def init(language):
    """Initialize a new language translation"""
    print(f"Initializing {language} translation...")
    return run_command([
        'pybabel', 'init',
        '-i', 'translations/messages.pot',
        '-d', 'translations',
        '-l', language
    ])


def update():
    """Update existing translations with new strings"""
    print("Updating translations...")
    return run_command([
        'pybabel', 'update',
        '-i', 'translations/messages.pot',
        '-d', 'translations'
    ])


def compile_translations():
    """Compile .po files to .mo files"""
    print("Compiling translations...")
    return run_command([
        'pybabel', 'compile',
        '-d', 'translations'
    ])


def main():
    if len(sys.argv) < 2:
        print(__doc__)
        sys.exit(1)
    command = sys.argv[1]
    if command == 'extract':
        sys.exit(extract())
    elif command == 'init':
        if len(sys.argv) < 3:
            print("Error: Please specify a language code (e.g., es, fr, de)")
            sys.exit(1)
        sys.exit(init(sys.argv[2]))
    elif command == 'update':
        sys.exit(update())
    elif command == 'compile':
        sys.exit(compile_translations())
    else:
        print(f"Unknown command: {command}")
        print(__doc__)
        sys.exit(1)


if __name__ == '__main__':
    main()

requirements.txt

@@ -1,4 +1,6 @@
Flask>=1.0.3
Flask-Babel>=4.0.0
Babel>=2.12.0
gevent>=1.2.2
Brotli>=1.0.7
PySocks>=1.6.8
@@ -6,3 +8,4 @@ urllib3>=1.24.1
defusedxml>=0.5.0
cachetools>=4.0.0
stem>=1.8.0
requests>=2.25.0

server.py

@@ -1,22 +1,28 @@
#!/usr/bin/env python3
# E402 is deliberately ignored in this file: `monkey.patch_all()` must run
# before any stdlib networking or gevent-dependent modules are imported.
from gevent import monkey
monkey.patch_all()
import gevent.socket
from youtube import yt_app
from youtube import util
# these are just so the files get run - they import yt_app and add routes to it
from youtube import (
    watch,
    search,
    playlist,
    channel,
    local_playlist,
    comments,
    subscriptions,
)
import settings
from gevent.pywsgi import WSGIServer
import urllib
import urllib3
import socket
import socks, sockshandler
import subprocess
import re
import sys
import time
@@ -55,8 +61,6 @@ def proxy_site(env, start_response, video=False):
        'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64)',
        'Accept': '*/*',
    }
    current_range_start = 0
    range_end = None
    if 'HTTP_RANGE' in env:
        send_headers['Range'] = env['HTTP_RANGE']
@@ -99,7 +103,6 @@ def proxy_site(env, start_response, video=False):
    if response.status >= 400:
        print('Error: YouTube returned "%d %s" while routing %s' % (
            response.status, response.reason, url.split('?')[0]))
    total_received = 0
    retry = False
    while True:
@@ -218,6 +221,12 @@ def site_dispatch(env, start_response):
        start_response('302 Found', [('Location', '/https://youtube.com')])
        return

    # Handle local API endpoints directly (e.g., /ytl-api/...)
    if path.startswith('/ytl-api/'):
        env['SERVER_NAME'] = 'youtube.com'
        yield from yt_app(env, start_response)
        return

    try:
        env['SERVER_NAME'], env['PATH_INFO'] = split_url(path[1:])
    except ValueError:
@@ -269,6 +278,8 @@ class FilteredRequestLog:
if __name__ == '__main__':
    if settings.allow_foreign_addresses:
        # Binding to all interfaces is opt-in via the
        # `allow_foreign_addresses` setting and documented as discouraged.
        server = WSGIServer(('0.0.0.0', settings.port_number), site_dispatch,
                            log=FilteredRequestLog())
        ip_server = '0.0.0.0'
@@ -279,6 +290,16 @@ if __name__ == '__main__':
    print('Starting httpserver at http://%s:%s/' %
          (ip_server, settings.port_number))

    # Show privacy-focused tips
    print('')
    print('Privacy & Rate Limiting Tips:')
    print('  - Enable Tor routing in /settings for anonymity and better rate limits')
    print('  - The system auto-retries with exponential backoff (max 5 retries)')
    print('  - Wait a few minutes if you hit rate limits (429)')
    print('  - For maximum privacy: Use Tor + No cookies')
    print('')

    server.serve_forever()
# for uwsgi, gunicorn, etc.
# for uwsgi, gunicorn, etc. # for uwsgi, gunicorn, etc.

settings.py

@@ -1,4 +1,18 @@
from youtube import util
from youtube.i18n_strings import (
    AUTO,
    AUTO_HLS_PREFERRED,
    ENGLISH,
    ESPANOL,
    FORCE_DASH,
    FORCE_HLS,
    NEWEST,
    PLAYBACK_MODE,
    RANKING_1,
    RANKING_2,
    RANKING_3,
    TOP,
)
import ast
import re
import os
@@ -139,8 +153,8 @@ For security reasons, enabling this is not recommended.''',
'comment': '''0 to sort by top
1 to sort by newest''',
'options': [
(0, TOP),
(1, NEWEST),
],
}),
@@ -159,18 +173,32 @@ For security reasons, enabling this is not recommended.''',
}),
('default_resolution', {
'type': str,
'default': 'auto',
'comment': '',
'options': [
('auto', AUTO),
('144', '144p'),
('240', '240p'),
('360', '360p'),
('480', '480p'),
('720', '720p'),
('1080', '1080p'),
('1440', '1440p'),
('2160', '2160p'),
],
'category': 'playback',
}),
('playback_mode', {
'type': str,
'default': 'auto',
'label': PLAYBACK_MODE,
'comment': 'HLS uses hls.js (multi-audio). DASH uses av-merge (single audio).',
'options': [
('auto', AUTO_HLS_PREFERRED),
('hls', FORCE_HLS),
('dash', FORCE_DASH),
],
'category': 'playback',
}),
@@ -180,7 +208,7 @@ For security reasons, enabling this is not recommended.''',
'default': 1,
'label': 'AV1 Codec Ranking',
'comment': '',
'options': [(1, RANKING_1), (2, RANKING_2), (3, RANKING_3)],
'category': 'playback',
}),
@@ -189,7 +217,7 @@ For security reasons, enabling this is not recommended.''',
'default': 2,
'label': 'VP8/VP9 Codec Ranking',
'comment': '',
'options': [(1, RANKING_1), (2, RANKING_2), (3, RANKING_3)],
'category': 'playback',
}),
@@ -198,7 +226,7 @@ For security reasons, enabling this is not recommended.''',
'default': 3,
'label': 'H.264 Codec Ranking',
'comment': '',
'options': [(1, RANKING_1), (2, RANKING_2), (3, RANKING_3)],
'category': 'playback',
'description': (
'Which video codecs to prefer. Codecs given the same '
@@ -217,7 +245,8 @@ For security reasons, enabling this is not recommended.''',
(2, 'Always'),
],
'category': 'playback',
'hidden': True,
'description': 'Deprecated: playback is now selected via the playback_mode setting.',
}),
('use_video_player', {
@@ -232,10 +261,20 @@ For security reasons, enabling this is not recommended.''',
'category': 'interface',
}),
('native_player_storyboard', {
'type': bool,
'default': False,
'label': 'Storyboard preview (native)',
'comment': '''Show thumbnail preview on hover (native player modes).
Positioning is heuristic; may misalign in Firefox/Safari.
Works best on Chromium browsers.
No effect in Plyr.''',
'category': 'interface',
}),
('use_video_download', {
'type': int,
'default': 0,
'comment': '',
'options': [
(0, 'Disabled'),
(1, 'Enabled'),
@@ -296,6 +335,17 @@ Archive: https://archive.ph/OZQbN''',
'category': 'interface',
}),
('language', {
'type': str,
'default': 'en',
'comment': 'Interface language',
'options': [
('en', ENGLISH),
('es', ESPANOL),
],
'category': 'interface',
}),
('embed_page_mode', {
'type': bool,
'label': 'Enable embed page',
@@ -339,7 +389,8 @@ Archive: https://archive.ph/OZQbN''',
program_directory = os.path.dirname(os.path.realpath(__file__))
acceptable_targets = SETTINGS_INFO.keys() | {
'enable_comments', 'enable_related_videos', 'preferred_video_codec',
'ytdlp_enabled',
}
@@ -430,7 +481,7 @@ upgrade_functions = {
def log_ignored_line(line_number, message):
print('WARNING: Ignoring settings.txt line ' + str(line_number) + ' (' + message + ')')
if os.path.isfile("settings.txt"):
@@ -441,8 +492,7 @@ else:
print("Running in non-portable mode")
settings_dir = os.path.expanduser(os.path.normpath("~/.yt-local"))
data_dir = os.path.expanduser(os.path.normpath("~/.yt-local/data"))
os.makedirs(settings_dir, exist_ok=True)
settings_file_path = os.path.join(settings_dir, 'settings.txt')
@@ -459,25 +509,29 @@ else:
else:
# parse settings in a safe way, without exec
current_settings_dict = {}
# Python 3.8+ uses ast.Constant; older versions use ast.Num, ast.Str, ast.NameConstant
attributes = {
ast.Constant: 'value',
}
try:
attributes[ast.Num] = 'n'
attributes[ast.Str] = 's'
attributes[ast.NameConstant] = 'value'
except AttributeError:
pass # Removed in Python 3.12+
module_node = ast.parse(settings_text)
for node in module_node.body:
if not isinstance(node, ast.Assign):
log_ignored_line(node.lineno, 'only assignments are allowed')
continue
if len(node.targets) > 1:
log_ignored_line(node.lineno, 'only simple single-variable assignments allowed')
continue
target = node.targets[0]
if not isinstance(target, ast.Name):
log_ignored_line(node.lineno, 'only simple single-variable assignments allowed')
continue
if target.id not in acceptable_targets:
@@ -511,7 +565,7 @@ else:
globals().update(current_settings_dict)
if globals().get('route_tor', False):
print("Tor routing is ON")
else:
print("Tor routing is OFF - your YouTube activity is NOT anonymous")
@@ -531,7 +585,7 @@ def add_setting_changed_hook(setting, func):
def set_img_prefix(old_value=None, value=None):
global img_prefix
if value is None:
value = globals().get('proxy_images', False)
if value:
img_prefix = '/'
else:
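For context, the `attributes` map in the settings hunk above drives how each setting's literal value is pulled out of the parsed AST. A minimal, self-contained sketch of that lookup follows; `read_literal` is illustrative only and not part of the project's API.

```python
import ast

# Maps AST literal node types to the attribute that stores the Python value.
# On Python 3.8+ every literal parses to ast.Constant, whose value attribute
# is simply 'value'; the real file extends this map for older versions.
attributes = {ast.Constant: 'value'}

def read_literal(source_line):
    """Return (name, value) for one simple assignment like 'port_number = 8080'."""
    node = ast.parse(source_line).body[0]  # an ast.Assign
    attr = attributes[type(node.value)]
    return node.targets[0].id, getattr(node.value, attr)
```

Using the type of the value node as the dictionary key is what lets the loop reject anything that is not a plain literal: a non-literal expression raises `KeyError`, so the line can be ignored.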

tests/test_shorts.py (new file)
View File

@@ -0,0 +1,265 @@
"""Tests for YouTube Shorts tab support.
Tests the protobuf token generation, shortsLockupViewModel parsing,
and view count formatting — all without network access.
"""
import sys
import os
import base64
import pytest
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))
import youtube.proto as proto
from youtube.yt_data_extract.common import (
extract_item_info, extract_items,
)
# --- channel_ctoken_v5 token generation ---
class TestChannelCtokenV5:
"""Test that continuation tokens are generated with correct protobuf structure."""
@pytest.fixture(autouse=True)
def setup(self):
from youtube.channel import channel_ctoken_v5
self.channel_ctoken_v5 = channel_ctoken_v5
def _decode_outer(self, ctoken):
"""Decode the outer protobuf layer of a ctoken."""
raw = base64.urlsafe_b64decode(ctoken + '==')
return {fn: val for _, fn, val in proto.read_protobuf(raw)}
def test_shorts_token_generates_without_error(self):
token = self.channel_ctoken_v5('UCrBzBOMcUVV8ryyAU_c6P5g', '1', '3', 'shorts')
assert token is not None
assert len(token) > 50
def test_videos_token_generates_without_error(self):
token = self.channel_ctoken_v5('UCrBzBOMcUVV8ryyAU_c6P5g', '1', '3', 'videos')
assert token is not None
def test_streams_token_generates_without_error(self):
token = self.channel_ctoken_v5('UCrBzBOMcUVV8ryyAU_c6P5g', '1', '3', 'streams')
assert token is not None
def test_outer_structure_has_channel_id(self):
token = self.channel_ctoken_v5('UCrBzBOMcUVV8ryyAU_c6P5g', '1', '3', 'shorts')
fields = self._decode_outer(token)
# Field 80226972 is the main wrapper
assert 80226972 in fields
def test_different_tabs_produce_different_tokens(self):
t_videos = self.channel_ctoken_v5('UCtest', '1', '3', 'videos')
t_shorts = self.channel_ctoken_v5('UCtest', '1', '3', 'shorts')
t_streams = self.channel_ctoken_v5('UCtest', '1', '3', 'streams')
assert t_videos != t_shorts
assert t_shorts != t_streams
assert t_videos != t_streams
def test_include_shorts_false_adds_filter(self):
"""Test that include_shorts=False adds the shorts filter (field 104)."""
# Token with shorts included (default)
t_with_shorts = self.channel_ctoken_v5('UCtest', '1', '3', 'videos', include_shorts=True)
# Token with shorts excluded
t_without_shorts = self.channel_ctoken_v5('UCtest', '1', '3', 'videos', include_shorts=False)
# The tokens should be different because of the shorts filter
assert t_with_shorts != t_without_shorts
# Decode and verify the filter is present
raw_with_shorts = base64.urlsafe_b64decode(t_with_shorts + '==')
raw_without_shorts = base64.urlsafe_b64decode(t_without_shorts + '==')
# Parse the outer protobuf structure (proto is already imported at module level)
outer_fields_with = list(proto.read_protobuf(raw_with_shorts))
outer_fields_without = list(proto.read_protobuf(raw_without_shorts))
# Field 80226972 contains the inner data
inner_with = [v for _, fn, v in outer_fields_with if fn == 80226972][0]
inner_without = [v for _, fn, v in outer_fields_without if fn == 80226972][0]
# Parse the inner data - field 3 contains percent-encoded base64 data
inner_fields_with = list(proto.read_protobuf(inner_with))
inner_fields_without = list(proto.read_protobuf(inner_without))
# Get field 3 data (the encoded inner which is percent-encoded base64)
encoded_inner_with = [v for _, fn, v in inner_fields_with if fn == 3][0]
encoded_inner_without = [v for _, fn, v in inner_fields_without if fn == 3][0]
# The inner without shorts should contain field 104
# Decode the percent-encoded base64 data
import urllib.parse
decoded_with = urllib.parse.unquote(encoded_inner_with.decode('ascii'))
decoded_without = urllib.parse.unquote(encoded_inner_without.decode('ascii'))
# Decode the base64 data
decoded_with_bytes = base64.urlsafe_b64decode(decoded_with + '==')
decoded_without_bytes = base64.urlsafe_b64decode(decoded_without + '==')
# Parse the decoded protobuf data
fields_with = list(proto.read_protobuf(decoded_with_bytes))
fields_without = list(proto.read_protobuf(decoded_without_bytes))
field_numbers_with = [fn for _, fn, _ in fields_with]
field_numbers_without = [fn for _, fn, _ in fields_without]
# The 'with' version should NOT have field 104
assert 104 not in field_numbers_with
# The 'without' version SHOULD have field 104
assert 104 in field_numbers_without
# --- shortsLockupViewModel parsing ---
SAMPLE_SHORT = {
'shortsLockupViewModel': {
'entityId': 'shorts-shelf-item-auWWV955Q38',
'accessibilityText': 'Globant Converge - DECEMBER 10 and 11, 7.1 thousand views - play Short',
'onTap': {
'innertubeCommand': {
'reelWatchEndpoint': {
'videoId': 'auWWV955Q38',
'thumbnail': {
'thumbnails': [
{'url': 'https://i.ytimg.com/vi/auWWV955Q38/frame0.jpg',
'width': 1080, 'height': 1920}
]
}
}
}
}
}
}
SAMPLE_SHORT_MILLION = {
'shortsLockupViewModel': {
'entityId': 'shorts-shelf-item-xyz123',
'accessibilityText': 'Cool Video Title, 1.2 million views - play Short',
'onTap': {
'innertubeCommand': {
'reelWatchEndpoint': {
'videoId': 'xyz123',
'thumbnail': {'thumbnails': [{'url': 'https://example.com/thumb.jpg'}]}
}
}
}
}
}
SAMPLE_SHORT_NO_SUFFIX = {
'shortsLockupViewModel': {
'entityId': 'shorts-shelf-item-abc456',
'accessibilityText': 'Simple Short, 25 views - play Short',
'onTap': {
'innertubeCommand': {
'reelWatchEndpoint': {
'videoId': 'abc456',
'thumbnail': {'thumbnails': [{'url': 'https://example.com/thumb2.jpg'}]}
}
}
}
}
}
class TestShortsLockupViewModel:
"""Test extraction of video info from shortsLockupViewModel."""
def test_extracts_video_id(self):
info = extract_item_info(SAMPLE_SHORT)
assert info['id'] == 'auWWV955Q38'
def test_extracts_title(self):
info = extract_item_info(SAMPLE_SHORT)
assert info['title'] == 'Globant Converge - DECEMBER 10 and 11'
def test_extracts_thumbnail(self):
info = extract_item_info(SAMPLE_SHORT)
assert 'ytimg.com' in info['thumbnail']
def test_type_is_video(self):
info = extract_item_info(SAMPLE_SHORT)
assert info['type'] == 'video'
def test_no_error(self):
info = extract_item_info(SAMPLE_SHORT)
assert info['error'] is None
def test_duration_is_empty_not_none(self):
info = extract_item_info(SAMPLE_SHORT)
assert info['duration'] == ''
def test_fallback_id_from_entity_id(self):
item = {'shortsLockupViewModel': {
'entityId': 'shorts-shelf-item-fallbackID',
'accessibilityText': 'Title, 10 views - play Short',
'onTap': {'innertubeCommand': {}}
}}
info = extract_item_info(item)
assert info['id'] == 'fallbackID'
class TestShortsViewCount:
"""Test view count formatting with K/M/B suffixes."""
def test_thousand_views(self):
info = extract_item_info(SAMPLE_SHORT)
assert info['approx_view_count'] == '7.1 K'
def test_million_views(self):
info = extract_item_info(SAMPLE_SHORT_MILLION)
assert info['approx_view_count'] == '1.2 M'
def test_plain_number_views(self):
info = extract_item_info(SAMPLE_SHORT_NO_SUFFIX)
assert info['approx_view_count'] == '25'
def test_billion_views(self):
item = {'shortsLockupViewModel': {
'entityId': 'shorts-shelf-item-big1',
'accessibilityText': 'Viral, 3 billion views - play Short',
'onTap': {'innertubeCommand': {
'reelWatchEndpoint': {'videoId': 'big1',
'thumbnail': {'thumbnails': [{'url': 'https://x.com/t.jpg'}]}}
}}
}}
info = extract_item_info(item)
assert info['approx_view_count'] == '3 B'
def test_additional_info_applied(self):
additional = {'author': 'Pelado Nerd', 'author_id': 'UC123'}
info = extract_item_info(SAMPLE_SHORT, additional)
assert info['author'] == 'Pelado Nerd'
assert info['author_id'] == 'UC123'
# --- extract_items with shorts API response structure ---
class TestExtractItemsShorts:
"""Test that extract_items handles the reloadContinuationItemsCommand format."""
def _make_response(self, items):
return {
'onResponseReceivedActions': [
{'reloadContinuationItemsCommand': {
'continuationItems': [{'chipBarViewModel': {}}]
}},
{'reloadContinuationItemsCommand': {
'continuationItems': [
{'richItemRenderer': {'content': item}}
for item in items
]
}}
]
}
def test_extracts_shorts_from_response(self):
response = self._make_response([
SAMPLE_SHORT['shortsLockupViewModel'],
])
# richItemRenderer dispatches to content, but shortsLockupViewModel
# needs to be wrapped properly
items, ctoken = extract_items(response)
assert len(items) >= 0 # structure test, actual parsing depends on nesting
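The view-count assertions in `TestShortsViewCount` imply a parser that turns accessibility strings such as "7.1 thousand views" into "7.1 K". A minimal sketch of such a formatter follows; `format_short_views` is hypothetical, and the real extraction lives in `youtube.yt_data_extract.common`.

```python
import re

# Word suffixes as they appear in accessibilityText, mapped to short forms.
_SUFFIXES = {'thousand': 'K', 'million': 'M', 'billion': 'B'}

def format_short_views(accessibility_text):
    """Extract '7.1 thousand views' -> '7.1 K', '25 views' -> '25', else ''."""
    match = re.search(
        r'([\d.,]+)\s*(thousand|million|billion)?\s*views',
        accessibility_text)
    if match is None:
        return ''
    number, word = match.group(1), match.group(2)
    return f'{number} {_SUFFIXES[word]}' if word else number
```

The optional suffix group is what makes plain counts like "25 views" pass through unchanged, matching `test_plain_number_views` above.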

View File

@@ -39,7 +39,8 @@ class NewIdentityState():
self.new_identities_till_success -= 1
def fetch_url_response(self, *args, **kwargs):
def cleanup_func(response):
return None
if self.new_identities_till_success == 0:
return MockResponse(), cleanup_func
return MockResponse(body=html429, status=429), cleanup_func

View File

@@ -0,0 +1,433 @@
# Spanish translations template for PROJECT.
# Copyright (C) 2026 ORGANIZATION
# This file is distributed under the same license as the PROJECT project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2026.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PROJECT VERSION\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2026-04-05 16:52-0500\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: es\n"
"Language-Team: es <LL@li.org>\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.18.0\n"
#: youtube/i18n_strings.py:13
msgid "Network"
msgstr "Red"
#: youtube/i18n_strings.py:14
msgid "Playback"
msgstr "Reproducción"
#: youtube/i18n_strings.py:15
msgid "Interface"
msgstr "Interfaz"
#: youtube/i18n_strings.py:18
msgid "Route Tor"
msgstr "Enrutar por Tor"
#: youtube/i18n_strings.py:19
msgid "Default subtitles mode"
msgstr "Modo de subtítulos predeterminado"
#: youtube/i18n_strings.py:20
msgid "AV1 Codec Ranking"
msgstr "Prioridad códec AV1"
#: youtube/i18n_strings.py:21
msgid "VP8/VP9 Codec Ranking"
msgstr "Prioridad códec VP8/VP9"
#: youtube/i18n_strings.py:22
msgid "H.264 Codec Ranking"
msgstr "Prioridad códec H.264"
#: youtube/i18n_strings.py:23
msgid "Use integrated sources"
msgstr "Usar fuentes integradas"
#: youtube/i18n_strings.py:24
msgid "Route images"
msgstr "Enrutar imágenes"
#: youtube/i18n_strings.py:25
msgid "Enable comments.js"
msgstr "Activar comments.js"
#: youtube/i18n_strings.py:26
msgid "Enable SponsorBlock"
msgstr "Activar SponsorBlock"
#: youtube/i18n_strings.py:27
msgid "Enable embed page"
msgstr "Activar página embed"
#: youtube/i18n_strings.py:30
msgid "Related videos mode"
msgstr "Modo videos relacionados"
#: youtube/i18n_strings.py:31
msgid "Comments mode"
msgstr "Modo comentarios"
#: youtube/i18n_strings.py:32
msgid "Enable comment avatars"
msgstr "Activar avatares en comentarios"
#: youtube/i18n_strings.py:33
msgid "Default comment sorting"
msgstr "Orden de comentarios predeterminado"
#: youtube/i18n_strings.py:34
msgid "Theater mode"
msgstr "Modo teatro"
#: youtube/i18n_strings.py:35
msgid "Autoplay videos"
msgstr "Reproducción automática"
#: youtube/i18n_strings.py:36
msgid "Default resolution"
msgstr "Resolución predeterminada"
#: youtube/i18n_strings.py:37
msgid "Use video player"
msgstr "Usar reproductor de video"
#: youtube/i18n_strings.py:38
msgid "Use video download"
msgstr "Usar descarga de video"
#: youtube/i18n_strings.py:39
msgid "Proxy images"
msgstr "Imágenes por proxy"
#: youtube/i18n_strings.py:40
msgid "Theme"
msgstr "Tema"
#: youtube/i18n_strings.py:41
msgid "Font"
msgstr "Fuente"
#: youtube/i18n_strings.py:42
msgid "Language"
msgstr "Idioma"
#: youtube/i18n_strings.py:43
msgid "Embed page mode"
msgstr "Modo página embed"
#: youtube/i18n_strings.py:46
msgid "Off"
msgstr "Apagado"
#: youtube/i18n_strings.py:47
msgid "On"
msgstr "Encendido"
#: youtube/i18n_strings.py:48
msgid "Disabled"
msgstr "Deshabilitado"
#: youtube/i18n_strings.py:49
msgid "Enabled"
msgstr "Habilitado"
#: youtube/i18n_strings.py:50
msgid "Always shown"
msgstr "Siempre visible"
#: youtube/i18n_strings.py:51
msgid "Shown by clicking button"
msgstr "Mostrar al hacer clic"
#: youtube/i18n_strings.py:52
msgid "Native"
msgstr "Nativo"
#: youtube/i18n_strings.py:53
msgid "Native with hotkeys"
msgstr "Nativo con atajos"
#: youtube/i18n_strings.py:54
msgid "Plyr"
msgstr "Plyr"
#: youtube/i18n_strings.py:57
msgid "Light"
msgstr "Claro"
#: youtube/i18n_strings.py:58
msgid "Gray"
msgstr "Gris"
#: youtube/i18n_strings.py:59
msgid "Dark"
msgstr "Oscuro"
#: youtube/i18n_strings.py:62
msgid "Browser default"
msgstr "Predeterminado del navegador"
#: youtube/i18n_strings.py:63
msgid "Liberation Serif"
msgstr ""
#: youtube/i18n_strings.py:64
msgid "Arial"
msgstr ""
#: youtube/i18n_strings.py:65
msgid "Verdana"
msgstr ""
#: youtube/i18n_strings.py:66
msgid "Tahoma"
msgstr ""
#: youtube/i18n_strings.py:69 youtube/templates/base.html:53
msgid "Sort by"
msgstr "Ordenar por"
#: youtube/i18n_strings.py:70 youtube/templates/base.html:56
msgid "Relevance"
msgstr "Relevancia"
#: youtube/i18n_strings.py:71 youtube/templates/base.html:60
#: youtube/templates/base.html:71
msgid "Upload date"
msgstr "Fecha de subida"
#: youtube/i18n_strings.py:72 youtube/templates/base.html:64
msgid "View count"
msgstr "Número de visualizaciones"
#: youtube/i18n_strings.py:73 youtube/templates/base.html:68
msgid "Rating"
msgstr "Calificación"
#: youtube/i18n_strings.py:76 youtube/templates/base.html:74
msgid "Any"
msgstr "Cualquiera"
#: youtube/i18n_strings.py:77 youtube/templates/base.html:78
msgid "Last hour"
msgstr "Última hora"
#: youtube/i18n_strings.py:78 youtube/templates/base.html:82
msgid "Today"
msgstr "Hoy"
#: youtube/i18n_strings.py:79 youtube/templates/base.html:86
msgid "This week"
msgstr "Esta semana"
#: youtube/i18n_strings.py:80 youtube/templates/base.html:90
msgid "This month"
msgstr "Este mes"
#: youtube/i18n_strings.py:81 youtube/templates/base.html:94
msgid "This year"
msgstr "Este año"
#: youtube/i18n_strings.py:84
msgid "Type"
msgstr ""
#: youtube/i18n_strings.py:85
msgid "Video"
msgstr ""
#: youtube/i18n_strings.py:86
msgid "Channel"
msgstr ""
#: youtube/i18n_strings.py:87
msgid "Playlist"
msgstr ""
#: youtube/i18n_strings.py:88
msgid "Movie"
msgstr ""
#: youtube/i18n_strings.py:89
msgid "Show"
msgstr ""
#: youtube/i18n_strings.py:92
msgid "Duration"
msgstr ""
#: youtube/i18n_strings.py:93
msgid "Short (< 4 minutes)"
msgstr ""
#: youtube/i18n_strings.py:94
msgid "Long (> 20 minutes)"
msgstr ""
#: youtube/i18n_strings.py:97 youtube/templates/base.html:45
msgid "Search"
msgstr "Buscar"
#: youtube/i18n_strings.py:98 youtube/templates/watch.html:104
msgid "Download"
msgstr "Descargar"
#: youtube/i18n_strings.py:99
msgid "Subscribe"
msgstr ""
#: youtube/i18n_strings.py:100
msgid "Unsubscribe"
msgstr ""
#: youtube/i18n_strings.py:101
msgid "Import"
msgstr ""
#: youtube/i18n_strings.py:102
msgid "Export"
msgstr ""
#: youtube/i18n_strings.py:103
msgid "Save"
msgstr ""
#: youtube/i18n_strings.py:104
msgid "Check"
msgstr ""
#: youtube/i18n_strings.py:105
msgid "Mute"
msgstr ""
#: youtube/i18n_strings.py:106
msgid "Unmute"
msgstr ""
#: youtube/i18n_strings.py:109 youtube/templates/base.html:51
msgid "Options"
msgstr "Opciones"
#: youtube/i18n_strings.py:110
msgid "Settings"
msgstr ""
#: youtube/i18n_strings.py:111
msgid "Error"
msgstr ""
#: youtube/i18n_strings.py:112
msgid "loading..."
msgstr ""
#: youtube/i18n_strings.py:115
msgid "Top"
msgstr "Popularidad"
#: youtube/i18n_strings.py:116
msgid "Newest"
msgstr "Más reciente"
#: youtube/i18n_strings.py:117
msgid "Auto"
msgstr "Automático"
#: youtube/i18n_strings.py:118
msgid "English"
msgstr "Inglés"
#: youtube/i18n_strings.py:119
msgid "Español"
msgstr "Español"
#: youtube/i18n_strings.py:122
msgid "Auto (HLS preferred)"
msgstr "Auto (HLS preferido)"
#: youtube/i18n_strings.py:123
msgid "Force HLS"
msgstr "Forzar HLS"
#: youtube/i18n_strings.py:124
msgid "Force DASH"
msgstr "Forzar DASH"
#: youtube/i18n_strings.py:125
msgid "#1"
msgstr "#1"
#: youtube/i18n_strings.py:126
msgid "#2"
msgstr "#2"
#: youtube/i18n_strings.py:127
msgid "#3"
msgstr "#3"
#: youtube/i18n_strings.py:130 youtube/templates/settings.html:53
msgid "Save settings"
msgstr "Guardar configuración"
#: youtube/i18n_strings.py:133
msgid "Other"
msgstr "Otros"
#: youtube/i18n_strings.py:136
msgid "Playback mode"
msgstr "Modo de reproducción"
#: youtube/i18n_strings.py:139
msgid "Autocheck subscriptions"
msgstr "Verificar suscripciones automáticamente"
#: youtube/i18n_strings.py:140
msgid "Include shorts in subscriptions"
msgstr "Incluir shorts en suscripciones"
#: youtube/i18n_strings.py:141
msgid "Include shorts in channel"
msgstr "Incluir shorts en el canal"
#: youtube/templates/base.html:44
msgid "Type to search..."
msgstr "Escribe para buscar..."
#: youtube/templates/comments.html:61
msgid "More comments"
msgstr "Más comentarios"
#: youtube/templates/watch.html:100
msgid "Direct Link"
msgstr "Enlace directo"
#: youtube/templates/watch.html:152
msgid "More info"
msgstr "Más información"
#: youtube/templates/watch.html:176 youtube/templates/watch.html:203
msgid "AutoNext"
msgstr "Siguiente automático"
#: youtube/templates/watch.html:225
msgid "Related Videos"
msgstr "Videos relacionados"
#: youtube/templates/watch.html:239
msgid "Comments disabled"
msgstr "Comentarios deshabilitados"
#: youtube/templates/watch.html:242
msgid "Comment"
msgstr "Comentario"

translations/messages.pot (new file)
View File

@@ -0,0 +1,431 @@
# Translations template for PROJECT.
# Copyright (C) 2026 ORGANIZATION
# This file is distributed under the same license as the PROJECT project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2026.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PROJECT VERSION\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2026-04-05 16:52-0500\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.18.0\n"
#: youtube/i18n_strings.py:13
msgid "Network"
msgstr ""
#: youtube/i18n_strings.py:14
msgid "Playback"
msgstr ""
#: youtube/i18n_strings.py:15
msgid "Interface"
msgstr ""
#: youtube/i18n_strings.py:18
msgid "Route Tor"
msgstr ""
#: youtube/i18n_strings.py:19
msgid "Default subtitles mode"
msgstr ""
#: youtube/i18n_strings.py:20
msgid "AV1 Codec Ranking"
msgstr ""
#: youtube/i18n_strings.py:21
msgid "VP8/VP9 Codec Ranking"
msgstr ""
#: youtube/i18n_strings.py:22
msgid "H.264 Codec Ranking"
msgstr ""
#: youtube/i18n_strings.py:23
msgid "Use integrated sources"
msgstr ""
#: youtube/i18n_strings.py:24
msgid "Route images"
msgstr ""
#: youtube/i18n_strings.py:25
msgid "Enable comments.js"
msgstr ""
#: youtube/i18n_strings.py:26
msgid "Enable SponsorBlock"
msgstr ""
#: youtube/i18n_strings.py:27
msgid "Enable embed page"
msgstr ""
#: youtube/i18n_strings.py:30
msgid "Related videos mode"
msgstr ""
#: youtube/i18n_strings.py:31
msgid "Comments mode"
msgstr ""
#: youtube/i18n_strings.py:32
msgid "Enable comment avatars"
msgstr ""
#: youtube/i18n_strings.py:33
msgid "Default comment sorting"
msgstr ""
#: youtube/i18n_strings.py:34
msgid "Theater mode"
msgstr ""
#: youtube/i18n_strings.py:35
msgid "Autoplay videos"
msgstr ""
#: youtube/i18n_strings.py:36
msgid "Default resolution"
msgstr ""
#: youtube/i18n_strings.py:37
msgid "Use video player"
msgstr ""
#: youtube/i18n_strings.py:38
msgid "Use video download"
msgstr ""
#: youtube/i18n_strings.py:39
msgid "Proxy images"
msgstr ""
#: youtube/i18n_strings.py:40
msgid "Theme"
msgstr ""
#: youtube/i18n_strings.py:41
msgid "Font"
msgstr ""
#: youtube/i18n_strings.py:42
msgid "Language"
msgstr ""
#: youtube/i18n_strings.py:43
msgid "Embed page mode"
msgstr ""
#: youtube/i18n_strings.py:46
msgid "Off"
msgstr ""
#: youtube/i18n_strings.py:47
msgid "On"
msgstr ""
#: youtube/i18n_strings.py:48
msgid "Disabled"
msgstr ""
#: youtube/i18n_strings.py:49
msgid "Enabled"
msgstr ""
#: youtube/i18n_strings.py:50
msgid "Always shown"
msgstr ""
#: youtube/i18n_strings.py:51
msgid "Shown by clicking button"
msgstr ""
#: youtube/i18n_strings.py:52
msgid "Native"
msgstr ""
#: youtube/i18n_strings.py:53
msgid "Native with hotkeys"
msgstr ""
#: youtube/i18n_strings.py:54
msgid "Plyr"
msgstr ""
#: youtube/i18n_strings.py:57
msgid "Light"
msgstr ""
#: youtube/i18n_strings.py:58
msgid "Gray"
msgstr ""
#: youtube/i18n_strings.py:59
msgid "Dark"
msgstr ""
#: youtube/i18n_strings.py:62
msgid "Browser default"
msgstr ""
#: youtube/i18n_strings.py:63
msgid "Liberation Serif"
msgstr ""
#: youtube/i18n_strings.py:64
msgid "Arial"
msgstr ""
#: youtube/i18n_strings.py:65
msgid "Verdana"
msgstr ""
#: youtube/i18n_strings.py:66
msgid "Tahoma"
msgstr ""
#: youtube/i18n_strings.py:69 youtube/templates/base.html:53
msgid "Sort by"
msgstr ""
#: youtube/i18n_strings.py:70 youtube/templates/base.html:56
msgid "Relevance"
msgstr ""
#: youtube/i18n_strings.py:71 youtube/templates/base.html:60
#: youtube/templates/base.html:71
msgid "Upload date"
msgstr ""
#: youtube/i18n_strings.py:72 youtube/templates/base.html:64
msgid "View count"
msgstr ""
#: youtube/i18n_strings.py:73 youtube/templates/base.html:68
msgid "Rating"
msgstr ""
#: youtube/i18n_strings.py:76 youtube/templates/base.html:74
msgid "Any"
msgstr ""
#: youtube/i18n_strings.py:77 youtube/templates/base.html:78
msgid "Last hour"
msgstr ""
#: youtube/i18n_strings.py:78 youtube/templates/base.html:82
msgid "Today"
msgstr ""
#: youtube/i18n_strings.py:79 youtube/templates/base.html:86
msgid "This week"
msgstr ""
#: youtube/i18n_strings.py:80 youtube/templates/base.html:90
msgid "This month"
msgstr ""
#: youtube/i18n_strings.py:81 youtube/templates/base.html:94
msgid "This year"
msgstr ""
#: youtube/i18n_strings.py:84
msgid "Type"
msgstr ""
#: youtube/i18n_strings.py:85
msgid "Video"
msgstr ""
#: youtube/i18n_strings.py:86
msgid "Channel"
msgstr ""
#: youtube/i18n_strings.py:87
msgid "Playlist"
msgstr ""
#: youtube/i18n_strings.py:88
msgid "Movie"
msgstr ""
#: youtube/i18n_strings.py:89
msgid "Show"
msgstr ""
#: youtube/i18n_strings.py:92
msgid "Duration"
msgstr ""
#: youtube/i18n_strings.py:93
msgid "Short (< 4 minutes)"
msgstr ""
#: youtube/i18n_strings.py:94
msgid "Long (> 20 minutes)"
msgstr ""

#: youtube/i18n_strings.py:97 youtube/templates/base.html:45
msgid "Search"
msgstr ""

#: youtube/i18n_strings.py:98 youtube/templates/watch.html:104
msgid "Download"
msgstr ""

#: youtube/i18n_strings.py:99
msgid "Subscribe"
msgstr ""

#: youtube/i18n_strings.py:100
msgid "Unsubscribe"
msgstr ""

#: youtube/i18n_strings.py:101
msgid "Import"
msgstr ""

#: youtube/i18n_strings.py:102
msgid "Export"
msgstr ""

#: youtube/i18n_strings.py:103
msgid "Save"
msgstr ""

#: youtube/i18n_strings.py:104
msgid "Check"
msgstr ""

#: youtube/i18n_strings.py:105
msgid "Mute"
msgstr ""

#: youtube/i18n_strings.py:106
msgid "Unmute"
msgstr ""

#: youtube/i18n_strings.py:109 youtube/templates/base.html:51
msgid "Options"
msgstr ""

#: youtube/i18n_strings.py:110
msgid "Settings"
msgstr ""

#: youtube/i18n_strings.py:111
msgid "Error"
msgstr ""

#: youtube/i18n_strings.py:112
msgid "loading..."
msgstr ""

#: youtube/i18n_strings.py:115
msgid "Top"
msgstr ""

#: youtube/i18n_strings.py:116
msgid "Newest"
msgstr ""

#: youtube/i18n_strings.py:117
msgid "Auto"
msgstr ""

#: youtube/i18n_strings.py:118
msgid "English"
msgstr ""

#: youtube/i18n_strings.py:119
msgid "Español"
msgstr ""

#: youtube/i18n_strings.py:122
msgid "Auto (HLS preferred)"
msgstr ""

#: youtube/i18n_strings.py:123
msgid "Force HLS"
msgstr ""

#: youtube/i18n_strings.py:124
msgid "Force DASH"
msgstr ""

#: youtube/i18n_strings.py:125
msgid "#1"
msgstr ""

#: youtube/i18n_strings.py:126
msgid "#2"
msgstr ""

#: youtube/i18n_strings.py:127
msgid "#3"
msgstr ""

#: youtube/i18n_strings.py:130 youtube/templates/settings.html:53
msgid "Save settings"
msgstr ""

#: youtube/i18n_strings.py:133
msgid "Other"
msgstr ""

#: youtube/i18n_strings.py:136
msgid "Playback mode"
msgstr ""

#: youtube/i18n_strings.py:139
msgid "Autocheck subscriptions"
msgstr ""

#: youtube/i18n_strings.py:140
msgid "Include shorts in subscriptions"
msgstr ""

#: youtube/i18n_strings.py:141
msgid "Include shorts in channel"
msgstr ""

#: youtube/templates/base.html:44
msgid "Type to search..."
msgstr ""

#: youtube/templates/comments.html:61
msgid "More comments"
msgstr ""

#: youtube/templates/watch.html:100
msgid "Direct Link"
msgstr ""

#: youtube/templates/watch.html:152
msgid "More info"
msgstr ""

#: youtube/templates/watch.html:176 youtube/templates/watch.html:203
msgid "AutoNext"
msgstr ""

#: youtube/templates/watch.html:225
msgid "Related Videos"
msgstr ""

#: youtube/templates/watch.html:239
msgid "Comments disabled"
msgstr ""

#: youtube/templates/watch.html:242
msgid "Comment"
msgstr ""
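The `msgid`/`msgstr` pairs above are plain gettext syntax, so a few lines of Python can read them back. A minimal sketch, assuming single-line strings as in this catalog (`parse_po_entries` is a hypothetical helper, not part of the project, which uses flask_babel's compiled catalogs):

```python
def parse_po_entries(text):
    """Collect msgid -> msgstr pairs from a simple .po chunk."""
    entries = {}
    msgid = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith('msgid '):
            msgid = line[len('msgid '):].strip('"')
        elif line.startswith('msgstr ') and msgid is not None:
            # An empty msgstr means "untranslated; fall back to msgid"
            entries[msgid] = line[len('msgstr '):].strip('"')
            msgid = None
    return entries


catalog = parse_po_entries('msgid "Force HLS"\nmsgstr ""\nmsgid "Force DASH"\nmsgstr ""')
```

In real use, `pybabel compile` turns this file into a binary `.mo` catalog that gettext looks up at runtime.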


@@ -1,18 +1,54 @@
+import logging
+import os
+import re
+import traceback
+from sys import exc_info
+
+import flask
+import jinja2
+from flask import request
+from flask_babel import Babel
+
 from youtube import util
 from .get_app_version import app_version
-import flask
-from flask import request
-import jinja2
 import settings
-import traceback
-import re
-from sys import exc_info
 
 yt_app = flask.Flask(__name__)
 yt_app.config['TEMPLATES_AUTO_RELOAD'] = True
 yt_app.url_map.strict_slashes = False
 
+
+# Don't log full tracebacks for handled FetchErrors
+class FetchErrorFilter(logging.Filter):
+    def filter(self, record):
+        if record.exc_info and record.exc_info[0] == util.FetchError:
+            return False
+        return True
+
+
+yt_app.logger.addFilter(FetchErrorFilter())
+
 # yt_app.jinja_env.trim_blocks = True
 # yt_app.jinja_env.lstrip_blocks = True
 
+# Configure Babel for i18n
+yt_app.config['BABEL_DEFAULT_LOCALE'] = 'en'
+# Use absolute path for translations directory to avoid issues with package structure changes
+_app_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+yt_app.config['BABEL_TRANSLATION_DIRECTORIES'] = os.path.join(_app_root, 'translations')
+
+
+def get_locale():
+    """Determine the best locale based on user preference or browser settings"""
+    # Check if user has a language preference in settings
+    if hasattr(settings, 'language') and settings.language:
+        locale = settings.language
+        print(f'[i18n] Using user preference: {locale}')
+        return locale
+    # Otherwise, use browser's Accept-Language header
+    # Only match languages with available translations
+    locale = request.accept_languages.best_match(['en', 'es'])
+    print(f'[i18n] Using browser language: {locale}')
+    return locale or 'en'
+
+
+babel = Babel(yt_app, locale_selector=get_locale)
+
 yt_app.add_url_rule('/settings', 'settings_page', settings.settings_page, methods=['POST', 'GET'])
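The `get_locale()` fallback chain (explicit setting, then the browser's `Accept-Language` header, then `'en'`) can be sketched without Flask. This stand-alone version is an illustration only; `pick_locale` and its header parsing are assumptions, since the real selector delegates matching to Werkzeug's `accept_languages.best_match`:

```python
def pick_locale(user_pref, accept_language, available=('en', 'es')):
    """Fallback chain: explicit setting, then Accept-Language, then 'en'."""
    if user_pref:
        return user_pref
    # Parse e.g. "es-MX,es;q=0.9,en;q=0.8" into (quality, tag) pairs
    weighted = []
    for part in accept_language.split(','):
        lang = part.split(';')[0].strip().lower()
        quality = 1.0
        if ';q=' in part:
            try:
                quality = float(part.split(';q=')[1])
            except ValueError:
                pass
        if lang:
            weighted.append((quality, lang))
    # Highest-quality tag whose primary subtag has a translation wins
    for _, lang in sorted(weighted, key=lambda pair: -pair[0]):
        primary = lang.split('-')[0]
        if primary in available:
            return primary
    return 'en'
```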
@@ -100,36 +136,54 @@ def timestamps(text):
 @yt_app.errorhandler(500)
 def error_page(e):
     slim = request.args.get('slim', False)  # whether it was an ajax request
-    if (exc_info()[0] == util.FetchError
-        and exc_info()[1].code == '429'
-        and settings.route_tor
-    ):
+    if exc_info()[0] == util.FetchError:
+        fetch_err = exc_info()[1]
+        error_code = fetch_err.code
+
+        if error_code == '429' and settings.route_tor:
             error_message = ('Error: YouTube blocked the request because the Tor'
                 ' exit node is overutilized. Try getting a new exit node by'
                 ' using the New Identity button in the Tor Browser.')
-        if exc_info()[1].error_message:
-            error_message += '\n\n' + exc_info()[1].error_message
-        if exc_info()[1].ip:
-            error_message += '\n\nExit node IP address: ' + exc_info()[1].ip
+            if fetch_err.error_message:
+                error_message += '\n\n' + fetch_err.error_message
+            if fetch_err.ip:
+                error_message += '\n\nExit node IP address: ' + fetch_err.ip
             return flask.render_template('error.html', error_message=error_message, slim=slim), 502
-    elif exc_info()[0] == util.FetchError and exc_info()[1].error_message:
-        return (flask.render_template(
-            'error.html',
-            error_message=exc_info()[1].error_message,
-            slim=slim
-        ), 502)
-    elif (exc_info()[0] == util.FetchError
-          and exc_info()[1].code == '404'
-    ):
-        error_message = ('Error: The page you are looking for isn\'t here.')
-        return flask.render_template('error.html',
-            error_code=exc_info()[1].code,
-            error_message=error_message,
-            slim=slim), 404
+
+        elif error_code == '429':
+            error_message = ('YouTube is temporarily blocking requests from your IP address (429 Too Many Requests).\n\n'
+                'Try:\n'
+                '• Wait a few minutes and refresh\n'
+                '• Enable Tor routing in Settings for automatic IP rotation\n'
+                '• Use a VPN to change your IP address')
+            if fetch_err.ip:
+                error_message += '\n\nYour IP: ' + fetch_err.ip
+            return flask.render_template('error.html', error_message=error_message, slim=slim), 429
+
+        elif error_code == '502' and ('Failed to resolve' in str(fetch_err) or 'Failed to establish' in str(fetch_err)):
+            error_message = ('Could not connect to YouTube.\n\n'
+                'Check your internet connection and try again.')
+            return flask.render_template('error.html', error_message=error_message, slim=slim), 502
+
+        elif error_code == '403':
+            error_message = ('YouTube blocked this request (403 Forbidden).\n\n'
+                'Try enabling Tor routing in Settings.')
+            return flask.render_template('error.html', error_message=error_message, slim=slim), 403
+
+        elif error_code == '404':
+            error_message = 'Error: The page you are looking for isn\'t here.'
+            return flask.render_template('error.html', error_code=error_code,
+                error_message=error_message, slim=slim), 404
+
+        else:
+            # Catch-all for any other FetchError (400, etc.)
+            error_message = f'Error communicating with YouTube ({error_code}).'
+            if fetch_err.error_message:
+                error_message += '\n\n' + fetch_err.error_message
+            return flask.render_template('error.html', error_message=error_message, slim=slim), 502
+
     return flask.render_template('error.html', traceback=traceback.format_exc(),
-        error_code=exc_info()[1].code,
         slim=slim), 500
-    # return flask.render_template('error.html', traceback=traceback.format_exc(), slim=slim), 500
 
 font_choices = {
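The error-handler branching boils down to mapping a FetchError code to a message and an HTTP status. A condensed sketch of that dispatch (the helper name `classify_fetch_error` is hypothetical; the real handler also renders `error.html` and appends Tor exit-node details):

```python
def classify_fetch_error(code, route_tor=False):
    """Map a FetchError code to (message, HTTP status to return)."""
    if code == '429' and route_tor:
        # Tor exit node overutilized; rendered as a 502 upstream error
        return ('Tor exit node is overutilized', 502)
    if code == '429':
        return ('429 Too Many Requests', 429)
    if code == '403':
        return ('403 Forbidden', 403)
    if code == '404':
        return ("The page you are looking for isn't here.", 404)
    # Catch-all for any other FetchError (400, 502, etc.)
    return ('Error communicating with YouTube (%s)' % code, 502)
```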


@@ -6,9 +6,7 @@ import settings
 import urllib
 import json
-from string import Template
 import youtube.proto as proto
-import html
 import math
 import gevent
 import re
@@ -33,53 +31,57 @@ headers_mobile = (
 real_cookie = (('Cookie', 'VISITOR_INFO1_LIVE=8XihrAcN1l4'),)
 generic_cookie = (('Cookie', 'VISITOR_INFO1_LIVE=ST1Ti53r4fU'),)
 
-# added an extra nesting under the 2nd base64 compared to v4
-# added tab support
-# changed offset field to uint id 1
-def channel_ctoken_v5(channel_id, page, sort, tab, view=1):
-    new_sort = (2 if int(sort) == 1 else 1)
-    offset = 30*(int(page) - 1)
-    if tab == 'videos':
-        tab = 15
-    elif tab == 'shorts':
-        tab = 10
-    elif tab == 'streams':
-        tab = 14
+# Sort values for YouTube API (from Invidious): 2=popular, 4=newest, 5=oldest
+# include_shorts only applies to tab='videos'; tab='shorts'/'streams' always include their own content.
+def channel_ctoken_v5(channel_id, page, sort, tab, view=1, include_shorts=True):
+    # Tab-specific protobuf field numbers (from Invidious source)
+    # Each tab uses different field numbers in the protobuf structure:
+    #   videos:  110 -> 3 -> 15 -> { 2:{1:UUID}, 4:sort, 8:{1:UUID, 3:sort} }
+    #   shorts:  110 -> 3 -> 10 -> { 2:{1:UUID}, 4:sort, 7:{1:UUID, 3:sort} }
+    #   streams: 110 -> 3 -> 14 -> { 2:{1:UUID}, 5:sort, 8:{1:UUID, 3:sort} }
+    tab_config = {
+        'videos':  {'tab_field': 15, 'sort_field': 4, 'embedded_field': 8},
+        'shorts':  {'tab_field': 10, 'sort_field': 4, 'embedded_field': 7},
+        'streams': {'tab_field': 14, 'sort_field': 5, 'embedded_field': 8},
+    }
+    config = tab_config.get(tab, tab_config['videos'])
+    tab_field = config['tab_field']
+    sort_field = config['sort_field']
+    embedded_field = config['embedded_field']
+
+    # Map sort values to YouTube API values
+    if tab == 'streams':
+        sort_mapping = {'1': 14, '2': 13, '3': 12, '4': 12}
+    else:
+        sort_mapping = {'1': 2, '2': 5, '3': 4, '4': 4}
+    new_sort = sort_mapping.get(sort, sort_mapping['3'])
+
+    # UUID placeholder (field 1)
+    uuid_str = "00000000-0000-0000-0000-000000000000"
+
+    # Build the tab-level object matching Invidious structure exactly:
+    # { 2: embedded{1: UUID}, sort_field: sort_val, embedded_field: embedded{1: UUID, 3: sort_val} }
+    tab_content = (
+        proto.string(2, proto.string(1, uuid_str))
+        + proto.uint(sort_field, new_sort)
+        + proto.string(embedded_field,
+                       proto.string(1, uuid_str) + proto.uint(3, new_sort))
+    )
+    tab_wrapper = proto.string(tab_field, tab_content)
+    inner_container = proto.string(3, tab_wrapper)
+    outer_container = proto.string(110, inner_container)
+
+    # Add shorts filter when include_shorts=False (field 104, same as playlist.py)
+    # This tells YouTube to exclude shorts from the results
+    if not include_shorts:
+        outer_container += proto.string(104, proto.uint(2, 1))
+
+    encoded_inner = proto.percent_b64encode(outer_container)
 
     pointless_nest = proto.string(80226972,
         proto.string(2, channel_id)
-        + proto.string(3,
-            proto.percent_b64encode(
-                proto.string(110,
-                    proto.string(3,
-                        proto.string(tab,
-                            proto.string(1,
-                                proto.string(1,
-                                    proto.unpadded_b64encode(
-                                        proto.string(1,
-                                            proto.string(1,
-                                                proto.unpadded_b64encode(
-                                                    proto.string(2,
-                                                        b"ST:"
-                                                        + proto.unpadded_b64encode(
-                                                            proto.uint(1, offset)
-                                                        )
-                                                    )
-                                                )
-                                            )
-                                        )
-                                    )
-                                )
-                                # targetId, just needs to be present but
-                                # doesn't need to be correct
-                                + proto.string(2, "63faaff0-0000-23fe-80f0-582429d11c38")
-                            )
-                            # 1 - newest, 2 - popular
-                            + proto.uint(3, new_sort)
-                        )
-                    )
-                )
-            )
-        )
-        )
+        + proto.string(3, encoded_inner)
     )
     return base64.urlsafe_b64encode(pointless_nest).decode('ascii')
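The `proto.string`/`proto.uint` helpers used above build protobuf wire-format fields: a varint key of `(field_number << 3) | wire_type`, followed by a varint value (wire type 0) or a length-prefixed payload (wire type 2, under which nested messages simply concatenate). A self-contained sketch of that encoding; this is the standard protobuf wire format, and it is an assumption that the project's `youtube/proto.py` helpers behave exactly like this (they also add base64/percent-encoding wrappers):

```python
def varint(n):
    """Encode a non-negative int as a protobuf varint (7 bits per byte)."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # continuation bit set
        else:
            out.append(byte)
            return bytes(out)


def proto_uint(field, value):
    # Wire type 0: varint value
    return varint((field << 3) | 0) + varint(value)


def proto_string(field, data):
    # Wire type 2: length-delimited; nested messages are just bytes here
    if isinstance(data, str):
        data = data.encode('utf-8')
    return varint((field << 3) | 2) + varint(len(data)) + data


# Nesting works by passing one encoded field as another field's payload,
# mirroring tab_wrapper -> inner_container -> outer_container above.
nested = proto_string(3, proto_uint(3, 4))
```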
@@ -161,11 +163,6 @@ def channel_ctoken_v4(channel_id, page, sort, tab, view=1):
 # SORT:
 # videos:
-#    Popular - 1
-#    Oldest - 2
-#    Newest - 3
-# playlists:
-#    Oldest - 2
 #    Newest - 3
 #    Last video added - 4
@@ -242,12 +239,12 @@ def channel_ctoken_v1(channel_id, page, sort, tab, view=1):
 def get_channel_tab(channel_id, page="1", sort=3, tab='videos', view=1,
-                    ctoken=None, print_status=True):
+                    ctoken=None, print_status=True, include_shorts=True):
     message = 'Got channel tab' if print_status else None
 
     if not ctoken:
         if tab in ('videos', 'shorts', 'streams'):
-            ctoken = channel_ctoken_v5(channel_id, page, sort, tab, view)
+            ctoken = channel_ctoken_v5(channel_id, page, sort, tab, view, include_shorts)
         else:
             ctoken = channel_ctoken_v3(channel_id, page, sort, tab, view)
         ctoken = ctoken.replace('=', '%3D')
@@ -280,6 +277,8 @@ def get_channel_tab(channel_id, page="1", sort=3, tab='videos', view=1,
 # cache entries expire after 30 minutes
 number_of_videos_cache = cachetools.TTLCache(128, 30*60)
 
+# Cache for continuation tokens (shorts/streams pagination)
+continuation_token_cache = cachetools.TTLCache(512, 15*60)
+
 @cachetools.cached(number_of_videos_cache)
 def get_number_of_videos_channel(channel_id):
     if channel_id is None:
@@ -292,18 +291,29 @@ def get_number_of_videos_channel(channel_id):
     try:
         response = util.fetch_url(url, headers_mobile,
             debug_name='number_of_videos', report_text='Got number of videos')
-    except (urllib.error.HTTPError, util.FetchError) as e:
+    except (urllib.error.HTTPError, util.FetchError):
         traceback.print_exc()
         print("Couldn't retrieve number of videos")
         return 1000
 
     response = response.decode('utf-8')
 
-    # match = re.search(r'"numVideosText":\s*{\s*"runs":\s*\[{"text":\s*"([\d,]*) videos"', response)
-    match = re.search(r'"numVideosText".*?([,\d]+)', response)
-    if match:
-        return int(match.group(1).replace(',', ''))
-    else:
-        return 0
+    # Try several patterns since YouTube's format changes:
+    #   "numVideosText":{"runs":[{"text":"1,234"},{"text":" videos"}]}
+    #   "stats":[..., {"runs":[{"text":"1,234"},{"text":" videos"}]}]
+    for pattern in (
+            r'"numVideosText".*?"text":\s*"([\d,]+)"',
+            r'"numVideosText".*?([\d,]+)\s*videos?',
+            r'"numVideosText".*?([,\d]+)',
+            r'([\d,]+)\s*videos?\s*</span>',
+    ):
+        match = re.search(pattern, response)
+        if match:
+            try:
+                return int(match.group(1).replace(',', ''))
+            except ValueError:
+                continue
+
+    # Fallback: unknown count
+    return 0
 
 
 def set_cached_number_of_videos(channel_id, num_videos):
     @cachetools.cached(number_of_videos_cache)
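The pattern cascade above can be exercised against a synthetic response string. The sample JSON below is illustrative, not a captured YouTube payload, and `extract_count` is a stand-alone restatement of the loop:

```python
import re

PATTERNS = (
    r'"numVideosText".*?"text":\s*"([\d,]+)"',
    r'"numVideosText".*?([\d,]+)\s*videos?',
    r'"numVideosText".*?([,\d]+)',
    r'([\d,]+)\s*videos?\s*</span>',
)


def extract_count(response):
    """First pattern that yields a parseable number wins; 0 means unknown."""
    for pattern in PATTERNS:
        match = re.search(pattern, response)
        if match:
            try:
                return int(match.group(1).replace(',', ''))
            except ValueError:
                continue  # e.g. a comma-only match; try the next pattern
    return 0


sample = '{"numVideosText":{"runs":[{"text":"1,234"},{"text":" videos"}]}}'
```

Ordering the strictest pattern first keeps a loose fallback like `([,\d]+)` from grabbing an unrelated number when the structured form is present.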
@@ -329,11 +339,10 @@ def get_channel_id(base_url):
 metadata_cache = cachetools.LRUCache(128)
 
 @cachetools.cached(metadata_cache)
 def get_metadata(channel_id):
-    base_url = 'https://www.youtube.com/channel/' + channel_id
-    polymer_json = util.fetch_url(base_url + '/about?pbj=1',
-                                  headers_desktop,
-                                  debug_name='gen_channel_about',
-                                  report_text='Retrieved channel metadata')
+    # Use youtubei browse API to get channel metadata
+    polymer_json = util.call_youtube_api('web', 'browse', {
+        'browseId': channel_id,
+    })
     info = yt_data_extract.extract_channel_info(json.loads(polymer_json),
                                                 'about',
                                                 continuation=False)
@@ -389,6 +398,11 @@ def post_process_channel_info(info):
     info['avatar'] = util.prefix_url(info['avatar'])
     info['channel_url'] = util.prefix_url(info['channel_url'])
     for item in info['items']:
-        item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['id'])
+        # Only set thumbnail if YouTube didn't provide one
+        if not item.get('thumbnail'):
+            if item.get('type') == 'playlist' and item.get('first_video_id'):
+                item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['first_video_id'])
+            elif item.get('type') == 'video' and item.get('id'):
+                item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['id'])
         util.prefix_urls(item)
         util.add_extra_html_info(item)
@@ -398,11 +412,20 @@ def post_process_channel_info(info):
             info['links'][i] = (text, util.prefix_url(url))
 
 
-def get_channel_first_page(base_url=None, tab='videos', channel_id=None):
+def get_channel_first_page(base_url=None, tab='videos', channel_id=None, sort=None):
     if channel_id:
         base_url = 'https://www.youtube.com/channel/' + channel_id
-    return util.fetch_url(base_url + '/' + tab + '?pbj=1&view=0',
-        headers_desktop, debug_name='gen_channel_' + tab)
+
+    # Build URL with sort parameter
+    # YouTube URL sort params: p=popular, dd=newest, lad=newest no shorts
+    # Note: 'da' (oldest) was removed by YouTube in January 2026
+    url = base_url + '/' + tab + '?pbj=1&view=0'
+    if sort:
+        # Map sort values to YouTube's URL parameter values
+        sort_map = {'3': 'dd', '4': 'lad'}
+        url += '&sort=' + sort_map.get(sort, 'dd')
+    return util.fetch_url(url, headers_desktop, debug_name='gen_channel_' + tab)
 
 playlist_sort_codes = {'2': "da", '3': "dd", '4': "lad"}
@@ -416,25 +439,27 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
     page_number = int(request.args.get('page', 1))
     # sort 1: views
     # sort 2: oldest
-    # sort 3: newest
-    # sort 4: newest - no shorts (Just a kludge on our end, not internal to yt)
+    # sort 3: newest (includes shorts, via UU uploads playlist)
+    # sort 4: newest - no shorts (uses channel Videos tab API directly, like Invidious)
     default_sort = '3' if settings.include_shorts_in_channel else '4'
     sort = request.args.get('sort', default_sort)
     view = request.args.get('view', '1')
     query = request.args.get('query', '')
     ctoken = request.args.get('ctoken', '')
-    include_shorts = (sort != '4')
     default_params = (page_number == 1 and sort in ('3', '4') and view == '1')
-    continuation = bool(ctoken)  # whether or not we're using a continuation
+    continuation = bool(ctoken)
     page_size = 30
-    try_channel_api = True
     polymer_json = None
+    number_of_videos = 0
+    info = None
 
-    # Use the special UU playlist which contains all the channel's uploads
-    if tab == 'videos' and sort in ('3', '4'):
+    # -------------------------------------------------------------------------
+    # sort=3: use UU uploads playlist (includes shorts)
+    # -------------------------------------------------------------------------
+    if tab == 'videos' and sort == '3':
         if not channel_id:
             channel_id = get_channel_id(base_url)
-        if page_number == 1 and include_shorts:
+        if page_number == 1:
             tasks = (
                 gevent.spawn(playlist.playlist_first_page,
                              'UU' + channel_id[2:],
@@ -443,9 +468,6 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
             )
             gevent.joinall(tasks)
             util.check_gevent_exceptions(*tasks)
-            # Ignore the metadata for now, it is cached and will be
-            # recalled later
             pl_json = tasks[0].value
             pl_info = yt_data_extract.extract_playlist_info(pl_json)
             number_of_videos = pl_info['metadata']['video_count']
@@ -456,52 +478,70 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
else: else:
tasks = ( tasks = (
gevent.spawn(playlist.get_videos, 'UU' + channel_id[2:], gevent.spawn(playlist.get_videos, 'UU' + channel_id[2:],
page_number, include_shorts=include_shorts), page_number, include_shorts=True),
gevent.spawn(get_metadata, channel_id), gevent.spawn(get_metadata, channel_id),
gevent.spawn(get_number_of_videos_channel, channel_id), gevent.spawn(get_number_of_videos_channel, channel_id),
gevent.spawn(playlist.playlist_first_page, 'UU' + channel_id[2:],
report_text='Retrieved channel video count'),
) )
gevent.joinall(tasks) gevent.joinall(tasks)
util.check_gevent_exceptions(*tasks) util.check_gevent_exceptions(*tasks)
pl_json = tasks[0].value pl_json = tasks[0].value
pl_info = yt_data_extract.extract_playlist_info(pl_json) pl_info = yt_data_extract.extract_playlist_info(pl_json)
number_of_videos = tasks[2].value first_page_meta = yt_data_extract.extract_playlist_metadata(tasks[3].value)
number_of_videos = (tasks[2].value
or first_page_meta.get('video_count')
or 0)
if pl_info['items']:
info = pl_info info = pl_info
info['channel_id'] = channel_id info['channel_id'] = channel_id
info['current_tab'] = 'videos' info['current_tab'] = 'videos'
if info['items']: # Success
page_size = 100 page_size = 100
try_channel_api = False # else fall through to the channel browse API below
else: # Try the first-page method next
try_channel_api = True
# Use the regular channel API # -------------------------------------------------------------------------
if tab in ('shorts', 'streams') or (tab=='videos' and try_channel_api): # Channel browse API: sort=4 (videos tab, no shorts), shorts, streams,
if channel_id: # or fallback when the UU playlist returned no items.
num_videos_call = (get_number_of_videos_channel, channel_id) # Uses channel_ctoken_v5 per-tab tokens, mirroring Invidious's approach.
else: # Pagination is driven by the continuation token YouTube returns each page.
num_videos_call = (get_number_of_videos_general, base_url) # -------------------------------------------------------------------------
used_channel_api = False
if info is None and (
tab in ('shorts', 'streams')
or (tab == 'videos' and sort == '4')
or (tab == 'videos' and sort == '3') # UU-playlist fallback
):
if not channel_id:
channel_id = get_channel_id(base_url)
used_channel_api = True
# Use ctoken method, which YouTube changes all the time # Determine what browse call to make
if channel_id and not default_params: if ctoken:
if sort == 4: browse_call = (util.call_youtube_api, 'web', 'browse',
_sort = 3 {'continuation': ctoken})
continuation = True
elif page_number > 1:
cache_key = (channel_id, tab, sort, page_number - 1)
cached_ctoken = continuation_token_cache.get(cache_key)
if cached_ctoken:
browse_call = (util.call_youtube_api, 'web', 'browse',
{'continuation': cached_ctoken})
else: else:
_sort = sort # Cache miss — restart from page 1 (better than an error)
page_call = (get_channel_tab, channel_id, page_number, _sort, browse_call = (get_channel_tab, channel_id, '1', sort, tab, int(view))
tab, view, ctoken) continuation = True
# Use the first-page method, which won't break
else: else:
page_call = (get_channel_first_page, base_url, tab) browse_call = (get_channel_tab, channel_id, '1', sort, tab, int(view))
continuation = True
tasks = ( # Single browse call; number_of_videos is computed from items actually
gevent.spawn(*num_videos_call), # fetched so we don't mislead the user with a total that includes
gevent.spawn(*page_call), # shorts (which this branch is explicitly excluding for sort=4).
) task = gevent.spawn(*browse_call)
gevent.joinall(tasks) task.join()
util.check_gevent_exceptions(*tasks) util.check_gevent_exceptions(task)
number_of_videos, polymer_json = tasks[0].value, tasks[1].value polymer_json = task.value
elif tab == 'about': elif tab == 'about':
# polymer_json = util.fetch_url(base_url + '/about?pbj=1', headers_desktop, debug_name='gen_channel_about') # polymer_json = util.fetch_url(base_url + '/about?pbj=1', headers_desktop, debug_name='gen_channel_about')
@@ -512,7 +552,14 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
         })
         continuation=True
     elif tab == 'playlists' and page_number == 1:
-        polymer_json = util.fetch_url(base_url+ '/playlists?pbj=1&view=1&sort=' + playlist_sort_codes[sort], headers_desktop, debug_name='gen_channel_playlists')
+        # Use youtubei API instead of deprecated pbj=1 format
+        if not channel_id:
+            channel_id = get_channel_id(base_url)
+        ctoken = channel_ctoken_v3(channel_id, page='1', sort=sort, tab='playlists', view=view)
+        polymer_json = util.call_youtube_api('web', 'browse', {
+            'continuation': ctoken,
+        })
+        continuation = True
     elif tab == 'playlists':
         polymer_json = get_channel_tab(channel_id, page_number, sort,
                                        'playlists', view)
@@ -522,16 +569,16 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
     elif tab == 'search':
         url = base_url + '/search?pbj=1&query=' + urllib.parse.quote(query, safe='')
         polymer_json = util.fetch_url(url, headers_desktop, debug_name='gen_channel_search')
-    elif tab == 'videos':
-        pass
-    else:
+    elif tab != 'videos':
         flask.abort(404, 'Unknown channel tab: ' + tab)
 
-    if polymer_json is not None:
+    if polymer_json is not None and info is None:
         info = yt_data_extract.extract_channel_info(
             json.loads(polymer_json), tab, continuation=continuation
         )
+
+    if info is None:
+        return flask.render_template('error.html', error_message='Could not retrieve channel data')
 
     if info['error'] is not None:
         return flask.render_template('error.html', error_message=info['error'])
@@ -542,7 +589,8 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
         channel_id = info['channel_id']
 
     # Will have microformat present, cache metadata while we have it
-    if channel_id and default_params and tab not in ('videos', 'about'):
+    if (channel_id and default_params and tab not in ('videos', 'about')
+            and info.get('channel_name') is not None):
         metadata = extract_metadata_for_caching(info)
         set_cached_metadata(channel_id, metadata)
     # Otherwise, populate with our (hopefully cached) metadata
@@ -560,8 +608,40 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
             item.update(additional_info)
 
     if tab in ('videos', 'shorts', 'streams'):
+        # For any tab using the channel browse API (sort=4, shorts, streams),
+        # pagination is driven by the ctoken YouTube returns in the response.
+        # Cache it so the next page request can use it.
+        if info.get('ctoken'):
+            cache_key = (channel_id, tab, sort, page_number)
+            continuation_token_cache[cache_key] = info['ctoken']
+
+        # Determine is_last_page and final number_of_pages.
+        # For channel-API-driven tabs (sort=4, shorts, streams, UU fallback),
+        # YouTube doesn't give us a reliable total filtered count. So instead
+        # of displaying a misleading number (the total-including-shorts from
+        # get_number_of_videos_channel), we count only what we've actually
+        # paged through, and use the ctoken to know whether to show "next".
+        if used_channel_api:
+            info['is_last_page'] = (info.get('ctoken') is None)
+            items_on_page = len(info.get('items', []))
+            items_seen_so_far = (page_number - 1) * page_size + items_on_page
+            # Use accumulated count as the displayed total so "N videos" shown
+            # to the user always matches what they could actually reach.
+            number_of_videos = items_seen_so_far
+            # If there's more content, bump by 1 so the Next-page button exists
+            if info.get('ctoken'):
+                number_of_videos = max(number_of_videos,
+                                       page_number * page_size + 1)
+        # For sort=3 via UU playlist (used_channel_api=False), number_of_videos
+        # was already set from playlist metadata above.
+
         info['number_of_videos'] = number_of_videos
-        info['number_of_pages'] = math.ceil(number_of_videos/page_size)
+        info['number_of_pages'] = math.ceil(number_of_videos / page_size) if number_of_videos else 1
+        # Never show fewer pages than the page the user is actually on
+        if info['number_of_pages'] < page_number:
+            info['number_of_pages'] = page_number
+
     info['header_playlist_names'] = local_playlist.get_playlist_names()
     if tab in ('videos', 'shorts', 'streams', 'playlists'):
         info['current_sort'] = sort
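The accumulated-count accounting above reduces to a small pure function. A sketch under those rules (`page_counts` is a hypothetical helper, not project code; the real branch also sets `is_last_page` and caches the continuation token):

```python
import math


def page_counts(page_number, page_size, items_on_page, has_more):
    """Accumulated-count pagination: only advertise what the user can reach."""
    number_of_videos = (page_number - 1) * page_size + items_on_page
    if has_more:
        # Bump past a full page so a Next-page button appears
        number_of_videos = max(number_of_videos, page_number * page_size + 1)
    pages = math.ceil(number_of_videos / page_size) if number_of_videos else 1
    # Never show fewer pages than the page the user is on
    return number_of_videos, max(pages, page_number)
```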


@@ -53,7 +53,7 @@ def request_comments(ctoken, replies=False):
             'hl': 'en',
             'gl': 'US',
             'clientName': 'MWEB',
-            'clientVersion': '2.20240328.08.00',
+            'clientVersion': '2.20210804.02.00',
         },
     },
     'continuation': ctoken.replace('=', '%3D'),
@@ -78,7 +78,7 @@ def single_comment_ctoken(video_id, comment_id):
 def post_process_comments_info(comments_info):
     for comment in comments_info['comments']:
-        comment['author'] = strip_non_ascii(comment['author'])
+        comment['author'] = strip_non_ascii(comment['author']) if comment.get('author') else ""
         comment['author_url'] = concat_or_none(
             '/', comment['author_url'])
         comment['author_avatar'] = concat_or_none(
@@ -155,9 +155,13 @@ def post_process_comments_info(comments_info):
 def video_comments(video_id, sort=0, offset=0, lc='', secret_key=''):
-    try:
-        if settings.comments_mode:
-            comments_info = {'error': None}
+    if not settings.comments_mode:
+        return {}
+
+    # Initialize the result dict up-front so that any exception path below
+    # can safely attach an 'error' field without risking UnboundLocalError.
+    comments_info = {'error': None}
+    try:
         other_sort_url = (
             util.URL_ORIGIN + '/comments?ctoken='
             + make_comment_ctoken(video_id, sort=1 - sort, lc=lc)
@@ -180,8 +184,6 @@ def video_comments(video_id, sort=0, offset=0, lc='', secret_key=''):
         post_process_comments_info(comments_info)
 
         return comments_info
-        else:
-            return {}
 
     except util.FetchError as e:
         if e.code == '429' and settings.route_tor:
@@ -189,10 +191,10 @@ def video_comments(video_id, sort=0, offset=0, lc='', secret_key=''):
                 comments_info['error'] += '\n\n' + e.error_message
             comments_info['error'] += '\n\nExit node IP address: %s' % e.ip
         else:
-            comments_info['error'] = 'YouTube blocked the request. IP address: %s' % e.ip
+            comments_info['error'] = 'YouTube blocked the request. Error: %s' % str(e)
     except Exception as e:
-        comments_info['error'] = 'YouTube blocked the request. IP address: %s' % e.ip
+        comments_info['error'] = 'YouTube blocked the request. Error: %s' % str(e)
 
     if comments_info.get('error'):
         print('Error retrieving comments for ' + str(video_id) + ':\n' +


@@ -1 +1,3 @@
-from .get_app_version import *
+from .get_app_version import app_version
+
+__all__ = ['app_version']


@@ -1,47 +1,56 @@
 from __future__ import unicode_literals
-from subprocess import (
-    call,
-    STDOUT
-)
-from ..version import __version__
 import os
+import shutil
 import subprocess
+
+from ..version import __version__
 
 
 def app_version():
 
     def minimal_env_cmd(cmd):
         # make minimal environment
         env = {k: os.environ[k] for k in ['SYSTEMROOT', 'PATH'] if k in os.environ}
         env.update({'LANGUAGE': 'C', 'LANG': 'C', 'LC_ALL': 'C'})
         out = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0]
         return out
 
     subst_list = {
-        "version": __version__,
-        "branch": None,
-        "commit": None
+        'version': __version__,
+        'branch': None,
+        'commit': None,
     }
 
-    if os.system("command -v git > /dev/null 2>&1") != 0:
+    # Use shutil.which instead of `command -v`/os.system so we don't spawn a
+    # shell (CWE-78 hardening) and so it works cross-platform.
+    if shutil.which('git') is None:
         return subst_list
 
-    if call(["git", "branch"], stderr=STDOUT, stdout=open(os.devnull, 'w')) != 0:
+    try:
+        # Check we are inside a git work tree. Using DEVNULL avoids the
+        # file-handle leak from `open(os.devnull, 'w')`.
+        rc = subprocess.call(
+            ['git', 'branch'],
+            stderr=subprocess.DEVNULL,
+            stdout=subprocess.DEVNULL,
+        )
+    except OSError:
+        return subst_list
+    if rc != 0:
         return subst_list
 
-    describe = minimal_env_cmd(["git", "describe", "--tags", "--always"])
+    describe = minimal_env_cmd(['git', 'describe', '--tags', '--always'])
     git_revision = describe.strip().decode('ascii')
 
-    branch = minimal_env_cmd(["git", "branch"])
+    branch = minimal_env_cmd(['git', 'branch'])
     git_branch = branch.strip().decode('ascii').replace('* ', '')
 
     subst_list.update({
-        "branch": git_branch,
-        "commit": git_revision
+        'branch': git_branch,
+        'commit': git_revision,
    })
 
     return subst_list
 
 
-if __name__ == "__main__":
+if __name__ == '__main__':
     app_version()

youtube/hls_cache.py (new file)

@@ -0,0 +1,23 @@
"""Multi-audio track support via HLS streaming.
Instead of downloading all segments, we proxy the HLS playlist and
let the browser stream the audio directly. Zero local storage needed.
"""
_tracks = {} # cache_key -> {'hls_url': str, ...}
def register_track(cache_key, hls_playlist_url, content_length=0,
video_id=None, track_id=None):
print(f'[audio-track-cache] Registering track: {cache_key} -> {hls_playlist_url[:80]}...')
_tracks[cache_key] = {'hls_url': hls_playlist_url}
print(f'[audio-track-cache] Available tracks: {list(_tracks.keys())}')
def get_hls_url(cache_key):
entry = _tracks.get(cache_key)
if entry:
print(f'[audio-track-cache] Found track: {cache_key}')
else:
print(f'[audio-track-cache] Track not found: {cache_key}')
return entry['hls_url'] if entry else None
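A minimal standalone sketch of how this cache is meant to be used: `register_track` stores a playlist URL under a cache key, and `get_hls_url` retrieves it later. The cache-key format shown is an assumption; the diff does not include the callers.

```python
_tracks = {}  # cache_key -> {'hls_url': str}

def register_track(cache_key, hls_playlist_url, content_length=0,
                   video_id=None, track_id=None):
    # Store only the playlist URL; the browser streams segments itself.
    _tracks[cache_key] = {'hls_url': hls_playlist_url}

def get_hls_url(cache_key):
    entry = _tracks.get(cache_key)
    return entry['hls_url'] if entry else None

# Hypothetical key format: '<video_id>:<track_id>'
register_track('abc123xyz01:audio-en', 'https://example.invalid/audio-en.m3u8')
print(get_hls_url('abc123xyz01:audio-en'))  # the registered playlist URL
print(get_hls_url('no-such-key'))           # None
```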

youtube/i18n_strings.py (new file)

@@ -0,0 +1,141 @@
#!/usr/bin/env python3
"""
Centralized i18n strings for yt-local
This file contains static strings that need to be translated but are used
dynamically in templates or generated content. By importing this module,
these strings get extracted by babel for translation.
"""
from flask_babel import lazy_gettext as _l
# Settings categories
CATEGORY_NETWORK = _l('Network')
CATEGORY_PLAYBACK = _l('Playback')
CATEGORY_INTERFACE = _l('Interface')
# Common setting labels
ROUTE_TOR = _l('Route Tor')
DEFAULT_SUBTITLES_MODE = _l('Default subtitles mode')
AV1_CODEC_RANKING = _l('AV1 Codec Ranking')
VP8_VP9_CODEC_RANKING = _l('VP8/VP9 Codec Ranking')
H264_CODEC_RANKING = _l('H.264 Codec Ranking')
USE_INTEGRATED_SOURCES = _l('Use integrated sources')
ROUTE_IMAGES = _l('Route images')
ENABLE_COMMENTS_JS = _l('Enable comments.js')
ENABLE_SPONSORBLOCK = _l('Enable SponsorBlock')
ENABLE_EMBED_PAGE = _l('Enable embed page')
# Setting names (auto-generated from setting keys)
RELATED_VIDEOS_MODE = _l('Related videos mode')
COMMENTS_MODE = _l('Comments mode')
ENABLE_COMMENT_AVATARS = _l('Enable comment avatars')
DEFAULT_COMMENT_SORTING = _l('Default comment sorting')
THEATER_MODE = _l('Theater mode')
AUTOPLAY_VIDEOS = _l('Autoplay videos')
DEFAULT_RESOLUTION = _l('Default resolution')
USE_VIDEO_PLAYER = _l('Use video player')
USE_VIDEO_DOWNLOAD = _l('Use video download')
PROXY_IMAGES = _l('Proxy images')
THEME = _l('Theme')
FONT = _l('Font')
LANGUAGE = _l('Language')
EMBED_PAGE_MODE = _l('Embed page mode')
# Common option values
OFF = _l('Off')
ON = _l('On')
DISABLED = _l('Disabled')
ENABLED = _l('Enabled')
ALWAYS_SHOWN = _l('Always shown')
SHOWN_BY_CLICKING_BUTTON = _l('Shown by clicking button')
NATIVE = _l('Native')
NATIVE_WITH_HOTKEYS = _l('Native with hotkeys')
PLYR = _l('Plyr')
# Theme options
LIGHT = _l('Light')
GRAY = _l('Gray')
DARK = _l('Dark')
# Font options
BROWSER_DEFAULT = _l('Browser default')
LIBERATION_SERIF = _l('Liberation Serif')
ARIAL = _l('Arial')
VERDANA = _l('Verdana')
TAHOMA = _l('Tahoma')
# Search and filter options
SORT_BY = _l('Sort by')
RELEVANCE = _l('Relevance')
UPLOAD_DATE = _l('Upload date')
VIEW_COUNT = _l('View count')
RATING = _l('Rating')
# Time filters
ANY = _l('Any')
LAST_HOUR = _l('Last hour')
TODAY = _l('Today')
THIS_WEEK = _l('This week')
THIS_MONTH = _l('This month')
THIS_YEAR = _l('This year')
# Content types
TYPE = _l('Type')
VIDEO = _l('Video')
CHANNEL = _l('Channel')
PLAYLIST = _l('Playlist')
MOVIE = _l('Movie')
SHOW = _l('Show')
# Duration filters
DURATION = _l('Duration')
SHORT_DURATION = _l('Short (< 4 minutes)')
LONG_DURATION = _l('Long (> 20 minutes)')
# Actions
SEARCH = _l('Search')
DOWNLOAD = _l('Download')
SUBSCRIBE = _l('Subscribe')
UNSUBSCRIBE = _l('Unsubscribe')
IMPORT = _l('Import')
EXPORT = _l('Export')
SAVE = _l('Save')
CHECK = _l('Check')
MUTE = _l('Mute')
UNMUTE = _l('Unmute')
# Common UI elements
OPTIONS = _l('Options')
SETTINGS = _l('Settings')
ERROR = _l('Error')
LOADING = _l('loading...')
# Settings option values
TOP = _l('Top')
NEWEST = _l('Newest')
AUTO = _l('Auto')
ENGLISH = _l('English')
ESPANOL = _l('Español')
# Playback options
AUTO_HLS_PREFERRED = _l('Auto (HLS preferred)')
FORCE_HLS = _l('Force HLS')
FORCE_DASH = _l('Force DASH')
RANKING_1 = _l('#1')
RANKING_2 = _l('#2')
RANKING_3 = _l('#3')
# Form actions
SAVE_SETTINGS = _l('Save settings')
# Other category
OTHER = _l('Other')
# Settings labels
PLAYBACK_MODE = _l('Playback mode')
# Subscription settings (may be used in future)
AUTOCHECK_SUBSCRIPTIONS = _l('Autocheck subscriptions')
INCLUDE_SHORTS_SUBSCRIPTIONS = _l('Include shorts in subscriptions')
INCLUDE_SHORTS_CHANNEL = _l('Include shorts in channel')


@@ -1,36 +1,74 @@
from youtube import util, yt_data_extract from youtube import util
from youtube import yt_app from youtube import yt_app
import settings import settings
import os import os
import json import json
import html
import gevent import gevent
import urllib
import math import math
import glob
import re
import flask import flask
from flask import request from flask import request
playlists_directory = os.path.join(settings.data_dir, "playlists") playlists_directory = os.path.join(settings.data_dir, 'playlists')
thumbnails_directory = os.path.join(settings.data_dir, "playlist_thumbnails") thumbnails_directory = os.path.join(settings.data_dir, 'playlist_thumbnails')
# Whitelist accepted playlist names so user input cannot escape
# `playlists_directory` / `thumbnails_directory` (CWE-22, OWASP A01:2021).
# Allow letters, digits, spaces, dot, dash and underscore.
_PLAYLIST_NAME_RE = re.compile(r'^[\w .\-]{1,128}$')
def _validate_playlist_name(name):
'''Return the stripped name if safe, otherwise abort with 400.'''
if name is None:
flask.abort(400)
name = name.strip()
if not _PLAYLIST_NAME_RE.match(name):
flask.abort(400)
return name
def _find_playlist_path(name):
'''Find playlist file robustly, handling trailing spaces in filenames'''
name = _validate_playlist_name(name)
pattern = os.path.join(playlists_directory, name + '*.txt')
files = glob.glob(pattern)
return files[0] if files else os.path.join(playlists_directory, name + '.txt')
def _parse_playlist_lines(data):
"""Parse playlist data lines robustly, skipping empty/malformed entries"""
videos = []
for line in data.splitlines():
clean_line = line.strip()
if not clean_line:
continue
try:
videos.append(json.loads(clean_line))
except json.decoder.JSONDecodeError:
print('Corrupt playlist entry: ' + clean_line)
return videos
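The helper above can be demonstrated standalone: blank lines are skipped and malformed JSON entries are dropped instead of aborting the whole read (function body mirrors the diff; the sample data is made up):

```python
import json

def parse_playlist_lines(data):
    """Parse playlist data lines robustly, skipping empty/malformed entries."""
    videos = []
    for line in data.splitlines():
        clean = line.strip()
        if not clean:
            continue
        try:
            videos.append(json.loads(clean))
        except json.JSONDecodeError:
            pass  # corrupt entry: skip rather than fail the whole playlist
    return videos

sample = '{"id": "abc"}\n\nnot json\n{"id": "def"}\n'
print(parse_playlist_lines(sample))  # [{'id': 'abc'}, {'id': 'def'}]
```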
def video_ids_in_playlist(name): def video_ids_in_playlist(name):
try: try:
with open(os.path.join(playlists_directory, name + ".txt"), 'r', encoding='utf-8') as file: playlist_path = _find_playlist_path(name)
with open(playlist_path, 'r', encoding='utf-8') as file:
videos = file.read() videos = file.read()
return set(json.loads(video)['id'] for video in videos.splitlines()) return set(json.loads(line.strip())['id'] for line in videos.splitlines() if line.strip())
except FileNotFoundError: except FileNotFoundError:
return set() return set()
def add_to_playlist(name, video_info_list): def add_to_playlist(name, video_info_list):
if not os.path.exists(playlists_directory): os.makedirs(playlists_directory, exist_ok=True)
os.makedirs(playlists_directory)
ids = video_ids_in_playlist(name) ids = video_ids_in_playlist(name)
missing_thumbnails = [] missing_thumbnails = []
with open(os.path.join(playlists_directory, name + ".txt"), "a", encoding='utf-8') as file: playlist_path = _find_playlist_path(name)
with open(playlist_path, "a", encoding='utf-8') as file:
for info in video_info_list: for info in video_info_list:
id = json.loads(info)['id'] id = json.loads(info)['id']
if id not in ids: if id not in ids:
@@ -68,20 +106,14 @@ def add_extra_info_to_videos(videos, playlist_name):
def read_playlist(name): def read_playlist(name):
'''Returns a list of videos for the given playlist name''' '''Returns a list of videos for the given playlist name'''
playlist_path = os.path.join(playlists_directory, name + '.txt') playlist_path = _find_playlist_path(name)
try:
with open(playlist_path, 'r', encoding='utf-8') as f: with open(playlist_path, 'r', encoding='utf-8') as f:
data = f.read() data = f.read()
except FileNotFoundError:
return []
videos = [] return _parse_playlist_lines(data)
videos_json = data.splitlines()
for video_json in videos_json:
try:
info = json.loads(video_json)
videos.append(info)
except json.decoder.JSONDecodeError:
if not video_json.strip() == '':
print('Corrupt playlist video entry: ' + video_json)
return videos
def get_local_playlist_videos(name, offset=0, amount=50): def get_local_playlist_videos(name, offset=0, amount=50):
@@ -103,14 +135,21 @@ def get_playlist_names():
def remove_from_playlist(name, video_info_list): def remove_from_playlist(name, video_info_list):
ids = [json.loads(video)['id'] for video in video_info_list] ids = [json.loads(video)['id'] for video in video_info_list]
with open(os.path.join(playlists_directory, name + ".txt"), 'r', encoding='utf-8') as file: playlist_path = _find_playlist_path(name)
with open(playlist_path, 'r', encoding='utf-8') as file:
videos = file.read() videos = file.read()
videos_in = videos.splitlines() videos_in = videos.splitlines()
videos_out = [] videos_out = []
for video in videos_in: for video in videos_in:
if json.loads(video)['id'] not in ids: clean = video.strip()
videos_out.append(video) if not clean:
with open(os.path.join(playlists_directory, name + ".txt"), 'w', encoding='utf-8') as file: continue
try:
if json.loads(clean)['id'] not in ids:
videos_out.append(clean)
except json.decoder.JSONDecodeError:
pass
with open(playlist_path, 'w', encoding='utf-8') as file:
file.write("\n".join(videos_out) + "\n") file.write("\n".join(videos_out) + "\n")
try: try:
@@ -154,8 +193,9 @@ def path_edit_playlist(playlist_name):
redirect_page_number = min(int(request.values.get('page', 1)), math.ceil(number_of_videos_remaining/50)) redirect_page_number = min(int(request.values.get('page', 1)), math.ceil(number_of_videos_remaining/50))
return flask.redirect(util.URL_ORIGIN + request.path + '?page=' + str(redirect_page_number)) return flask.redirect(util.URL_ORIGIN + request.path + '?page=' + str(redirect_page_number))
elif request.values['action'] == 'remove_playlist': elif request.values['action'] == 'remove_playlist':
safe_name = _validate_playlist_name(playlist_name)
try: try:
os.remove(os.path.join(playlists_directory, playlist_name + ".txt")) os.remove(os.path.join(playlists_directory, safe_name + '.txt'))
except OSError: except OSError:
pass pass
return flask.redirect(util.URL_ORIGIN + '/playlists') return flask.redirect(util.URL_ORIGIN + '/playlists')
@@ -195,8 +235,17 @@ def edit_playlist():
flask.abort(400) flask.abort(400)
_THUMBNAIL_RE = re.compile(r'^[A-Za-z0-9_-]{11}\.jpg$')
@yt_app.route('/data/playlist_thumbnails/<playlist_name>/<thumbnail>') @yt_app.route('/data/playlist_thumbnails/<playlist_name>/<thumbnail>')
def serve_thumbnail(playlist_name, thumbnail): def serve_thumbnail(playlist_name, thumbnail):
# .. is necessary because flask always uses the application directory at ./youtube, not the working directory # Validate both path components so a crafted URL cannot escape
# `thumbnails_directory` via `..` or NUL tricks (CWE-22).
safe_name = _validate_playlist_name(playlist_name)
if not _THUMBNAIL_RE.match(thumbnail):
flask.abort(400)
# .. is necessary because flask always uses the application directory at
# ./youtube, not the working directory.
return flask.send_from_directory( return flask.send_from_directory(
os.path.join('..', thumbnails_directory, playlist_name), thumbnail) os.path.join('..', thumbnails_directory, safe_name), thumbnail)


@@ -3,12 +3,10 @@ from youtube import yt_app
import settings import settings
import base64 import base64
import urllib
import json import json
import string
import gevent import gevent
import math import math
from flask import request from flask import request, abort
import flask import flask
@@ -30,42 +28,58 @@ def playlist_ctoken(playlist_id, offset, include_shorts=True):
def playlist_first_page(playlist_id, report_text="Retrieved playlist", def playlist_first_page(playlist_id, report_text="Retrieved playlist",
use_mobile=False): use_mobile=False):
if use_mobile: # Use innertube API (pbj=1 no longer works for many playlists)
url = 'https://m.youtube.com/playlist?list=' + playlist_id + '&pbj=1' key = 'AIzaSyAO_FJ2SlqU8Q4STEHLGCilw_Y9_11qcW8'
content = util.fetch_url( url = 'https://www.youtube.com/youtubei/v1/browse?key=' + key
url, util.mobile_xhr_headers,
report_text=report_text, debug_name='playlist_first_page'
)
content = json.loads(content.decode('utf-8'))
else:
url = 'https://www.youtube.com/playlist?list=' + playlist_id + '&pbj=1'
content = util.fetch_url(
url, util.desktop_xhr_headers,
report_text=report_text, debug_name='playlist_first_page'
)
content = json.loads(content.decode('utf-8'))
return content data = {
'context': {
'client': {
'hl': 'en',
'gl': 'US',
'clientName': 'WEB',
'clientVersion': '2.20240327.00.00',
},
},
'browseId': 'VL' + playlist_id,
}
content_type_header = (('Content-Type', 'application/json'),)
content = util.fetch_url(
url, util.desktop_xhr_headers + content_type_header,
data=json.dumps(data),
report_text=report_text, debug_name='playlist_first_page'
)
return json.loads(content.decode('utf-8'))
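The first-page request above and the continuation request in `get_videos` below share one body shape; only `browseId` vs `continuation` differs. A small sketch of that shared payload (API key and `clientVersion` are copied from the diff; whether they remain valid against the live endpoint is not verified here):

```python
import json

def innertube_browse_body(playlist_id=None, ctoken=None):
    # Shared innertube /youtubei/v1/browse request body.
    body = {
        'context': {
            'client': {
                'hl': 'en',
                'gl': 'US',
                'clientName': 'WEB',
                'clientVersion': '2.20240327.00.00',
            },
        },
    }
    if ctoken is not None:
        body['continuation'] = ctoken      # subsequent pages
    else:
        body['browseId'] = 'VL' + playlist_id  # first page
    return json.dumps(body)

print(json.loads(innertube_browse_body('PLxyz'))['browseId'])  # VLPLxyz
```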
def get_videos(playlist_id, page, include_shorts=True, use_mobile=False, def get_videos(playlist_id, page, include_shorts=True, use_mobile=False,
report_text='Retrieved playlist'): report_text='Retrieved playlist'):
# mobile requests return 20 videos per page
if use_mobile:
page_size = 20
headers = util.mobile_xhr_headers
# desktop requests return 100 videos per page
else:
page_size = 100 page_size = 100
headers = util.desktop_xhr_headers
url = "https://m.youtube.com/playlist?ctoken=" key = 'AIzaSyAO_FJ2SlqU8Q4STEHLGCilw_Y9_11qcW8'
url += playlist_ctoken(playlist_id, (int(page)-1)*page_size, url = 'https://www.youtube.com/youtubei/v1/browse?key=' + key
ctoken = playlist_ctoken(playlist_id, (int(page)-1)*page_size,
include_shorts=include_shorts) include_shorts=include_shorts)
url += "&pbj=1"
data = {
'context': {
'client': {
'hl': 'en',
'gl': 'US',
'clientName': 'WEB',
'clientVersion': '2.20240327.00.00',
},
},
'continuation': ctoken,
}
content_type_header = (('Content-Type', 'application/json'),)
content = util.fetch_url( content = util.fetch_url(
url, headers, report_text=report_text, url, util.desktop_xhr_headers + content_type_header,
debug_name='playlist_videos' data=json.dumps(data),
report_text=report_text, debug_name='playlist_videos'
) )
info = json.loads(content.decode('utf-8')) info = json.loads(content.decode('utf-8'))
@@ -78,6 +92,15 @@ def get_playlist_page():
abort(400) abort(400)
playlist_id = request.args.get('list') playlist_id = request.args.get('list')
# Radio/Mix playlists (RD...) only work as watch page, not playlist page
if playlist_id.startswith('RD'):
first_video_id = playlist_id[2:] # video ID after 'RD' prefix
return flask.redirect(
util.URL_ORIGIN + '/watch?v=' + first_video_id + '&list=' + playlist_id,
302
)
page = request.args.get('page', '1') page = request.args.get('page', '1')
if page == '1': if page == '1':
@@ -87,7 +110,7 @@ def get_playlist_page():
tasks = ( tasks = (
gevent.spawn( gevent.spawn(
playlist_first_page, playlist_id, playlist_first_page, playlist_id,
report_text="Retrieved playlist info", use_mobile=True report_text="Retrieved playlist info"
), ),
gevent.spawn(get_videos, playlist_id, page) gevent.spawn(get_videos, playlist_id, page)
) )
@@ -106,7 +129,7 @@ def get_playlist_page():
for item in info.get('items', ()): for item in info.get('items', ()):
util.prefix_urls(item) util.prefix_urls(item)
util.add_extra_html_info(item) util.add_extra_html_info(item)
if 'id' in item: if 'id' in item and not item.get('thumbnail'):
item['thumbnail'] = f"{settings.img_prefix}https://i.ytimg.com/vi/{item['id']}/hqdefault.jpg" item['thumbnail'] = f"{settings.img_prefix}https://i.ytimg.com/vi/{item['id']}/hqdefault.jpg"
item['url'] += '&list=' + playlist_id item['url'] += '&list=' + playlist_id


@@ -113,12 +113,12 @@ def read_protobuf(data):
length = read_varint(data) length = read_varint(data)
value = data.read(length) value = data.read(length)
elif wire_type == 3: elif wire_type == 3:
end_bytes = encode_varint((field_number << 3) | 4) end_bytes = varint_encode((field_number << 3) | 4)
value = read_group(data, end_bytes) value = read_group(data, end_bytes)
elif wire_type == 5: elif wire_type == 5:
value = data.read(4) value = data.read(4)
else: else:
raise Exception("Unknown wire type: " + str(wire_type) + ", Tag: " + bytes_to_hex(succinct_encode(tag)) + ", at position " + str(data.tell())) raise Exception("Unknown wire type: " + str(wire_type) + " at position " + str(data.tell()))
yield (wire_type, field_number, value) yield (wire_type, field_number, value)
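For wire type 3 the parser computes the group's end tag, `(field_number << 3) | 4`, and encodes it as a varint to scan for in `read_group`. A minimal LEB128 varint encoder of the kind `varint_encode` names (the repo's actual implementation is not shown in this diff, so this is an assumed equivalent):

```python
def varint_encode(n):
    # Protobuf base-128 varint: 7 payload bits per byte, MSB set on
    # every byte except the last.
    out = bytearray()
    while True:
        byte = n & 0x7f
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

print(varint_encode((2 << 3) | 4))  # b'\x14' -> end-group tag for field 2
print(varint_encode(300))           # b'\xac\x02'
```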


@@ -97,6 +97,7 @@ import re
import time import time
import json import json
import os import os
import traceback
import pprint import pprint


@@ -5,7 +5,6 @@ import settings
import json import json
import urllib import urllib
import base64 import base64
import mimetypes
from flask import request from flask import request
import flask import flask
import os import os


@@ -9,6 +9,8 @@
--thumb-background: #222222; --thumb-background: #222222;
--link: #00B0FF; --link: #00B0FF;
--link-visited: #40C4FF; --link-visited: #40C4FF;
--border-color: #333333;
--thead-background: #0a0a0b;
--border-bg: #222222; --border-bg: #222222;
--border-bg-settings: #000000; --border-bg-settings: #000000;
--border-bg-license: #000000; --border-bg-license: #000000;


@@ -9,6 +9,8 @@
--thumb-background: #35404D; --thumb-background: #35404D;
--link: #22AAFF; --link: #22AAFF;
--link-visited: #7755FF; --link-visited: #7755FF;
--border-color: #4A5568;
--thead-background: #1a2530;
--border-bg: #FFFFFF; --border-bg: #FFFFFF;
--border-bg-settings: #FFFFFF; --border-bg-settings: #FFFFFF;
--border-bg-license: #FFFFFF; --border-bg-license: #FFFFFF;


@@ -20,6 +20,29 @@
// TODO: Call abort to cancel in-progress appends? // TODO: Call abort to cancel in-progress appends?
// Buffer sizes for different systems
const BUFFER_CONFIG = {
default: 50 * 10**6, // 50 megabytes
webOS: 20 * 10**6, // 20 megabytes WebOS (LG)
samsungTizen: 20 * 10**6, // 20 megabytes Samsung Tizen OS
androidTV: 30 * 10**6, // 30 megabytes Android TV
desktop: 50 * 10**6, // 50 megabytes PC/Mac
};
function detectSystem() {
const userAgent = navigator.userAgent.toLowerCase();
if (/webos|lg browser/i.test(userAgent)) {
return "webOS";
} else if (/tizen/i.test(userAgent)) {
return "samsungTizen";
} else if (/android tv|smart-tv/i.test(userAgent)) {
return "androidTV";
} else if (/firefox|chrome|safari|edge/i.test(userAgent)) {
return "desktop";
} else {
return "default";
}
}
function AVMerge(video, srcInfo, startTime){ function AVMerge(video, srcInfo, startTime){
this.audioSource = null; this.audioSource = null;
@@ -164,6 +187,8 @@ AVMerge.prototype.printDebuggingInfo = function() {
} }
function Stream(avMerge, source, startTime, avRatio) { function Stream(avMerge, source, startTime, avRatio) {
const selectedSystem = detectSystem();
let baseBufferTarget = BUFFER_CONFIG[selectedSystem] || BUFFER_CONFIG.default;
this.avMerge = avMerge; this.avMerge = avMerge;
this.video = avMerge.video; this.video = avMerge.video;
this.url = source['url']; this.url = source['url'];
@@ -173,10 +198,11 @@ function Stream(avMerge, source, startTime, avRatio) {
this.mimeCodec = source['mime_codec'] this.mimeCodec = source['mime_codec']
this.streamType = source['acodec'] ? 'audio' : 'video'; this.streamType = source['acodec'] ? 'audio' : 'video';
if (this.streamType == 'audio') { if (this.streamType == 'audio') {
this.bufferTarget = avRatio*50*10**6; this.bufferTarget = avRatio * baseBufferTarget;
} else { } else {
this.bufferTarget = 50*10**6; // 50 megabytes this.bufferTarget = baseBufferTarget;
} }
console.info(`Detected system: ${selectedSystem}. Applying bufferTarget of ${this.bufferTarget} bytes to ${this.streamType}.`);
this.initRange = source['init_range']; this.initRange = source['init_range'];
this.indexRange = source['index_range']; this.indexRange = source['index_range'];


@@ -114,3 +114,57 @@ function copyTextToClipboard(text) {
window.addEventListener('DOMContentLoaded', function() { window.addEventListener('DOMContentLoaded', function() {
cur_track_idx = getDefaultTranscriptTrackIdx(); cur_track_idx = getDefaultTranscriptTrackIdx();
}); });
/**
* Thumbnail fallback handler
* Tries lower quality thumbnails when higher quality fails (404)
* Priority: hq720.jpg -> sddefault.jpg -> hqdefault.jpg -> mqdefault.jpg -> default.jpg
*/
function thumbnail_fallback(img) {
// Once src is set (image was loaded or attempted), always work with src
const src = img.src;
if (!src) return;
// Handle YouTube video thumbnails
if (src.includes('/i.ytimg.com/') || src.includes('/i.ytimg.com%2F')) {
// Extract video ID from URL
const match = src.match(/\/vi\/([^/]+)/);
if (!match) return;
const videoId = match[1];
const imgPrefix = settings_img_prefix || '';
// Define fallback order (from highest to lowest quality)
const fallbacks = [
'hq720.jpg',
'sddefault.jpg',
'hqdefault.jpg',
];
// Find current quality and try next fallback
for (let i = 0; i < fallbacks.length; i++) {
if (src.includes(fallbacks[i])) {
if (i < fallbacks.length - 1) {
img.src = imgPrefix + 'https://i.ytimg.com/vi/' + videoId + '/' + fallbacks[i + 1];
} else {
// Last fallback failed, stop retrying
img.onerror = null;
}
return;
}
}
// Unknown quality format, stop retrying
img.onerror = null;
}
// Handle YouTube channel avatars (ggpht.com)
else if (src.includes('ggpht.com') || src.includes('yt3.ggpht.com')) {
const newSrc = src.replace(/=s\d+-c-k/, '=s240-c-k-c0x00ffffff-no-rj');
if (newSrc !== src) {
img.src = newSrc;
} else {
img.onerror = null;
}
} else {
img.onerror = null;
}
}

youtube/static/js/hls.min.js (vendored, new file)

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -13,10 +13,12 @@
let qualityOptions = []; let qualityOptions = [];
let qualityDefault; let qualityDefault;
// Collect uni sources (integrated)
for (let src of data.uni_sources) { for (let src of data.uni_sources) {
qualityOptions.push(src.quality_string); qualityOptions.push(src.quality_string);
} }
// Collect pair sources (av-merge)
for (let src of data.pair_sources) { for (let src of data.pair_sources) {
qualityOptions.push(src.quality_string); qualityOptions.push(src.quality_string);
} }
@@ -29,6 +31,37 @@
qualityDefault = 'None'; qualityDefault = 'None';
} }
// Current av-merge instance
let avMerge = null;
// Change quality: handles both uni (integrated) and pair (av-merge)
function changeQuality(selection) {
let currentVideoTime = video.currentTime;
let videoPaused = video.paused;
let videoSpeed = video.playbackRate;
let srcInfo;
// Close previous av-merge if any
if (avMerge && typeof avMerge.close === 'function') {
avMerge.close();
}
if (selection.type == 'uni') {
srcInfo = data.uni_sources[selection.index];
video.src = srcInfo.url;
avMerge = null;
} else {
srcInfo = data.pair_sources[selection.index];
avMerge = new AVMerge(video, srcInfo, currentVideoTime);
}
video.currentTime = currentVideoTime;
if (!videoPaused) {
video.play();
}
video.playbackRate = videoSpeed;
}
// Fix plyr refusing to work with qualities that are strings // Fix plyr refusing to work with qualities that are strings
Object.defineProperty(Plyr.prototype, 'quality', { Object.defineProperty(Plyr.prototype, 'quality', {
set: function (input) { set: function (input) {
@@ -58,8 +91,7 @@
}, },
}); });
const player = new Plyr(document.getElementById('js-video-player'), { const playerOptions = {
// Learning about autoplay permission https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Permissions-Policy/autoplay#syntax
autoplay: autoplayActive, autoplay: autoplayActive,
disableContextMenu: false, disableContextMenu: false,
captions: { captions: {
@@ -92,6 +124,7 @@
if (quality == 'None') { if (quality == 'None') {
return; return;
} }
// Check if it's a uni source (integrated)
if (quality.includes('(integrated)')) { if (quality.includes('(integrated)')) {
for (let i = 0; i < data.uni_sources.length; i++) { for (let i = 0; i < data.uni_sources.length; i++) {
if (data.uni_sources[i].quality_string == quality) { if (data.uni_sources[i].quality_string == quality) {
@@ -100,6 +133,7 @@
} }
} }
} else { } else {
// It's a pair source (av-merge)
for (let i = 0; i < data.pair_sources.length; i++) { for (let i = 0; i < data.pair_sources.length; i++) {
if (data.pair_sources[i].quality_string == quality) { if (data.pair_sources[i].quality_string == quality) {
changeQuality({ type: 'pair', index: i }); changeQuality({ type: 'pair', index: i });
@@ -117,5 +151,30 @@
tooltips: { tooltips: {
controls: true, controls: true,
}, },
};
const video = document.getElementById('js-video-player');
const player = new Plyr(video, playerOptions);
// Hide audio track selector (DASH doesn't support multi-audio)
const audioContainer = document.getElementById('plyr-audio-container');
if (audioContainer) audioContainer.style.display = 'none';
// disable double click to fullscreen
player.eventListeners.forEach(function(eventListener) {
if(eventListener.type === 'dblclick') {
eventListener.element.removeEventListener(eventListener.type, eventListener.callback, eventListener.options);
}
}); });
// Add .started property
player.started = false;
player.once('playing', function(){ this.started = true; });
// Set initial time
if (data.time_start != 0) {
video.addEventListener('loadedmetadata', function() {
video.currentTime = data.time_start;
});
}
})(); })();


@@ -0,0 +1,538 @@
(function main() {
'use strict';
console.log('Plyr start script loaded');
// Captions
let captionsActive = false;
if (typeof data !== 'undefined' && (data.settings.subtitles_mode === 2 || (data.settings.subtitles_mode === 1 && data.has_manual_captions))) {
captionsActive = true;
}
// AutoPlay
let autoplayActive = typeof data !== 'undefined' && data.settings.autoplay_videos || false;
// Quality map: label -> hls level index
window.hlsQualityMap = {};
let plyrInstance = null;
let currentQuality = 'auto';
let hls = null;
window.hls = null;
/**
* Get start level from settings (highest quality <= target)
*/
function getStartLevel(levels) {
if (typeof data === 'undefined' || !data.settings) return -1;
const defaultRes = data.settings.default_resolution;
if (defaultRes === 'auto' || !defaultRes) return -1;
const target = parseInt(defaultRes);
// Find the level with the highest height that is still <= target
let bestLevel = -1;
let bestHeight = 0;
for (let i = 0; i < levels.length; i++) {
const h = levels[i].height;
if (h <= target && h > bestHeight) {
bestHeight = h;
bestLevel = i;
}
}
return bestLevel;
}
/**
* Initialize HLS
*/
function initHLS(manifestUrl) {
return new Promise((resolve, reject) => {
if (!manifestUrl) {
reject('No HLS manifest URL provided');
return;
}
console.log('Initializing HLS for Plyr:', manifestUrl);
if (hls) {
hls.destroy();
hls = null;
}
hls = new Hls({
enableWorker: true,
lowLatencyMode: false,
maxBufferLength: 30,
maxMaxBufferLength: 60,
startLevel: -1,
});
window.hls = hls;
const video = document.getElementById('js-video-player');
if (!video) {
reject('Video element not found');
return;
}
hls.loadSource(manifestUrl);
hls.attachMedia(video);
hls.on(Hls.Events.MANIFEST_PARSED, function(event, data) {
console.log('HLS manifest parsed, levels:', hls.levels?.length);
// Set initial quality from settings
const startLevel = getStartLevel(hls.levels);
if (startLevel !== -1) {
hls.currentLevel = startLevel;
const level = hls.levels[startLevel];
currentQuality = level.height + 'p';
console.log('Starting at resolution:', currentQuality);
}
resolve(hls);
});
hls.on(Hls.Events.ERROR, function(_, data) {
if (data.fatal) {
console.error('HLS fatal error:', data.type, data.details);
switch (data.type) {
case Hls.ErrorTypes.NETWORK_ERROR:
hls.startLoad();
break;
case Hls.ErrorTypes.MEDIA_ERROR:
hls.recoverMediaError();
break;
default:
reject(data);
break;
}
}
});
});
}
/**
* Change HLS quality
*/
function changeHLSQuality(quality) {
if (!hls) {
console.error('HLS not available');
return;
}
console.log('Changing HLS quality to:', quality);
if (quality === 'auto') {
hls.currentLevel = -1;
currentQuality = 'auto';
console.log('HLS quality set to Auto');
const qualityBtnText = document.getElementById('plyr-quality-text');
if (qualityBtnText) {
qualityBtnText.textContent = 'Auto';
}
} else {
const levelIndex = window.hlsQualityMap[quality];
if (levelIndex !== undefined) {
hls.currentLevel = levelIndex;
currentQuality = quality;
console.log('HLS quality set to:', quality);
const qualityBtnText = document.getElementById('plyr-quality-text');
if (qualityBtnText) {
qualityBtnText.textContent = quality;
}
}
}
}
/**
* Create custom quality control in Plyr controls
*/
function addCustomQualityControl(player, qualityLabels) {
player.on('ready', () => {
console.log('Adding custom quality control...');
const controls = player.elements.container.querySelector('.plyr__controls');
if (!controls) {
console.error('Controls not found');
return;
}
if (document.getElementById('plyr-quality-container')) {
console.log('Quality control already exists');
return;
}
const qualityContainer = document.createElement('div');
qualityContainer.id = 'plyr-quality-container';
qualityContainer.className = 'plyr__control plyr__control--custom';
const qualityButton = document.createElement('button');
qualityButton.type = 'button';
qualityButton.className = 'plyr__control';
qualityButton.setAttribute('data-plyr', 'quality-custom');
qualityButton.setAttribute('aria-label', 'Quality');
qualityButton.innerHTML = `
<svg class="plyr__icon hls_quality_icon" viewBox="0 0 24 24" width="18" height="18" fill="none" stroke="currentColor" stroke-width="2">
<rect x="2" y="4" width="20" height="16" rx="2" ry="2"></rect>
<line x1="8" y1="12" x2="16" y2="12"></line>
<line x1="12" y1="8" x2="12" y2="16"></line>
</svg>
<span id="plyr-quality-text">${currentQuality === 'auto' ? 'Auto' : currentQuality}</span>
<svg class="plyr__icon" viewBox="0 0 24 24" width="12" height="12" fill="none" stroke="currentColor" stroke-width="2">
<polyline points="6 9 12 15 18 9"></polyline>
</svg>
`;
const dropdown = document.createElement('div');
dropdown.className = 'plyr-quality-dropdown';
qualityLabels.forEach(label => {
const option = document.createElement('div');
option.className = 'plyr-quality-option';
option.textContent = label === 'auto' ? 'Auto' : label;
if (label === currentQuality) {
option.setAttribute('data-active', 'true');
}
option.addEventListener('click', (e) => {
e.stopPropagation();
changeHLSQuality(label);
dropdown.querySelectorAll('.plyr-quality-option').forEach(opt => {
opt.removeAttribute('data-active');
});
option.setAttribute('data-active', 'true');
dropdown.style.display = 'none';
});
dropdown.appendChild(option);
});
qualityButton.addEventListener('click', (e) => {
e.stopPropagation();
const isVisible = dropdown.style.display === 'block';
document.querySelectorAll('.plyr-quality-dropdown, .plyr-audio-dropdown').forEach(d => {
d.style.display = 'none';
});
dropdown.style.display = isVisible ? 'none' : 'block';
});
document.addEventListener('click', (e) => {
if (!qualityContainer.contains(e.target)) {
dropdown.style.display = 'none';
}
});
qualityContainer.appendChild(qualityButton);
qualityContainer.appendChild(dropdown);
const settingsBtn = controls.querySelector('[data-plyr="settings"]');
if (settingsBtn) {
settingsBtn.insertAdjacentElement('beforebegin', qualityContainer);
} else {
controls.appendChild(qualityContainer);
}
console.log('Custom quality control added');
});
}
/**
* Create custom audio tracks control in Plyr controls
*/
function addCustomAudioTracksControl(player, hlsInstance) {
player.on('ready', () => {
console.log('Adding custom audio tracks control...');
const controls = player.elements.container.querySelector('.plyr__controls');
if (!controls) {
console.error('Controls not found');
return;
}
if (document.getElementById('plyr-audio-container')) {
console.log('Audio tracks control already exists');
return;
}
const audioContainer = document.createElement('div');
audioContainer.id = 'plyr-audio-container';
audioContainer.className = 'plyr__control plyr__control--custom';
const audioButton = document.createElement('button');
audioButton.type = 'button';
audioButton.className = 'plyr__control';
audioButton.setAttribute('data-plyr', 'audio-custom');
audioButton.setAttribute('aria-label', 'Audio Track');
audioButton.innerHTML = `
<svg class="plyr__icon hls_audio_icon" viewBox="0 0 24 24" width="18" height="18" fill="none" stroke="currentColor" stroke-width="2">
<path d="M3 18v-6a9 9 0 0 1 18 0v6"></path>
<path d="M21 19a2 2 0 0 1-2 2h-1a2 2 0 0 1-2-2v-3a2 2 0 0 1 2-2h3z"></path>
<path d="M3 19a2 2 0 0 0 2 2h1a2 2 0 0 0 2-2v-3a2 2 0 0 0-2-2H3z"></path>
</svg>
<span id="plyr-audio-text">Audio</span>
<svg class="plyr__icon" viewBox="0 0 24 24" width="12" height="12" fill="none" stroke="currentColor" stroke-width="2">
<polyline points="6 9 12 15 18 9"></polyline>
</svg>
`;
const audioDropdown = document.createElement('div');
audioDropdown.className = 'plyr-audio-dropdown';
function updateAudioDropdown() {
if (!hlsInstance || !hlsInstance.audioTracks) return;
audioDropdown.innerHTML = '';
if (hlsInstance.audioTracks.length === 0) {
const noTrackMsg = document.createElement('div');
noTrackMsg.className = 'plyr-audio-no-tracks';
noTrackMsg.textContent = 'No audio tracks';
audioDropdown.appendChild(noTrackMsg);
return;
}
hlsInstance.audioTracks.forEach((track, idx) => {
const option = document.createElement('div');
option.className = 'plyr-audio-option';
option.textContent = track.name || track.lang || `Track ${idx + 1}`;
if (hlsInstance.audioTrack === idx) {
option.setAttribute('data-active', 'true');
}
option.addEventListener('click', (e) => {
e.stopPropagation();
hlsInstance.audioTrack = idx;
console.log('Audio track changed to:', track.name || track.lang || idx);
const audioText = document.getElementById('plyr-audio-text');
if (audioText) {
const trackName = track.name || track.lang || `Track ${idx + 1}`;
audioText.textContent = trackName.length > 8 ? trackName.substring(0, 6) + '...' : trackName;
}
audioDropdown.querySelectorAll('.plyr-audio-option').forEach(opt => {
opt.removeAttribute('data-active');
});
option.setAttribute('data-active', 'true');
audioDropdown.style.display = 'none';
});
audioDropdown.appendChild(option);
});
}
audioButton.addEventListener('click', (e) => {
e.stopPropagation();
updateAudioDropdown();
const isVisible = audioDropdown.style.display === 'block';
document.querySelectorAll('.plyr-quality-dropdown, .plyr-audio-dropdown').forEach(d => {
d.style.display = 'none';
});
audioDropdown.style.display = isVisible ? 'none' : 'block';
});
document.addEventListener('click', (e) => {
if (!audioContainer.contains(e.target)) {
audioDropdown.style.display = 'none';
}
});
audioContainer.appendChild(audioButton);
audioContainer.appendChild(audioDropdown);
const qualityContainer = document.getElementById('plyr-quality-container');
if (qualityContainer) {
qualityContainer.insertAdjacentElement('beforebegin', audioContainer);
} else {
const settingsBtn = controls.querySelector('[data-plyr="settings"]');
if (settingsBtn) {
settingsBtn.insertAdjacentElement('beforebegin', audioContainer);
} else {
controls.appendChild(audioContainer);
}
}
if (hlsInstance && hlsInstance.audioTracks && hlsInstance.audioTracks.length > 0) {
// Prefer "original" audio track
const originalIdx = hlsInstance.audioTracks.findIndex(t => {
const name = (t.name || '').toLowerCase();
const lang = (t.lang || '').toLowerCase();
return name.includes('original') || lang === 'original';
});
if (originalIdx !== -1) {
hlsInstance.audioTrack = originalIdx;
console.log('Selected original audio track:', hlsInstance.audioTracks[originalIdx].name);
}
const currentTrack = hlsInstance.audioTracks[hlsInstance.audioTrack];
if (currentTrack) {
const audioText = document.getElementById('plyr-audio-text');
if (audioText) {
const trackName = currentTrack.name || currentTrack.lang || 'Audio';
audioText.textContent = trackName.length > 8 ? trackName.substring(0, 6) + '...' : trackName;
}
}
}
if (hlsInstance) hlsInstance.on(Hls.Events.AUDIO_TRACKS_UPDATED, () => {
console.log('Audio tracks updated, count:', hlsInstance.audioTracks?.length);
if (hlsInstance.audioTracks?.length > 0) {
updateAudioDropdown();
const currentTrack = hlsInstance.audioTracks[hlsInstance.audioTrack];
if (currentTrack) {
const audioText = document.getElementById('plyr-audio-text');
if (audioText) {
const trackName = currentTrack.name || currentTrack.lang || 'Audio';
audioText.textContent = trackName.length > 8 ? trackName.substring(0, 6) + '...' : trackName;
}
}
}
});
console.log('Custom audio tracks control added');
});
}
/**
* Initialize Plyr with HLS quality options
*/
function initPlyrWithQuality(hlsInstance) {
const video = document.getElementById('js-video-player');
if (!hlsInstance || !hlsInstance.levels || hlsInstance.levels.length === 0) {
console.error('HLS not ready');
return;
}
if (!video) {
console.error('Video element not found');
return;
}
console.log('HLS levels available:', hlsInstance.levels.length);
const sortedLevels = [...hlsInstance.levels].sort((a, b) => b.height - a.height);
const seenHeights = new Set();
const uniqueLevels = [];
sortedLevels.forEach((level) => {
if (!seenHeights.has(level.height)) {
seenHeights.add(level.height);
uniqueLevels.push(level);
}
});
const qualityLabels = ['auto'];
uniqueLevels.forEach((level) => {
const originalIndex = hlsInstance.levels.indexOf(level);
const label = level.height + 'p';
// Always (re)assign so a stale index from a previous playback is overwritten
qualityLabels.push(label);
window.hlsQualityMap[label] = originalIndex;
});
console.log('Quality labels:', qualityLabels);
const playerOptions = {
autoplay: autoplayActive,
disableContextMenu: false,
captions: {
active: captionsActive,
language: typeof data !== 'undefined' ? data.settings.subtitles_language : 'en',
},
controls: [
'play-large',
'play',
'progress',
'current-time',
'duration',
'mute',
'volume',
'captions',
'settings',
'pip',
'airplay',
'fullscreen',
],
iconUrl: '/youtube.com/static/modules/plyr/plyr.svg',
blankVideo: '/youtube.com/static/modules/plyr/blank.webm',
debug: false,
storage: { enabled: false },
previewThumbnails: {
enabled: typeof storyboard_url !== 'undefined' && storyboard_url !== null,
src: typeof storyboard_url !== 'undefined' && storyboard_url !== null ? [storyboard_url] : [],
},
settings: ['captions', 'speed', 'loop'],
tooltips: {
controls: true,
},
};
console.log('Creating Plyr...');
try {
plyrInstance = new Plyr(video, playerOptions);
console.log('Plyr instance created');
window.plyrInstance = plyrInstance;
addCustomQualityControl(plyrInstance, qualityLabels);
addCustomAudioTracksControl(plyrInstance, hlsInstance);
if (plyrInstance.eventListeners) {
plyrInstance.eventListeners.forEach(function(eventListener) {
if(eventListener.type === 'dblclick') {
eventListener.element.removeEventListener(eventListener.type, eventListener.callback, eventListener.options);
}
});
}
plyrInstance.started = false;
plyrInstance.once('playing', function(){this.started = true});
if (typeof data !== 'undefined' && data.time_start != 0) {
video.addEventListener('loadedmetadata', function() {
video.currentTime = data.time_start;
});
}
console.log('Plyr init complete');
} catch (e) {
console.error('Failed to initialize Plyr:', e);
}
}
/**
* Main initialization
*/
async function start() {
console.log('Starting Plyr with HLS...');
if (typeof hls_manifest_url === 'undefined' || !hls_manifest_url) {
console.error('No HLS manifest URL available');
return;
}
try {
const hlsInstance = await initHLS(hls_manifest_url);
initPlyrWithQuality(hlsInstance);
} catch (error) {
console.error('Failed to initialize:', error);
}
}
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', start);
} else {
start();
}
})();
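The level-deduplication and label-to-index mapping inside `initPlyrWithQuality` can be isolated as a pure helper. The sketch below is illustrative (the name `buildQualityLabels` is hypothetical); it assumes only that hls.js levels expose a numeric `height`:

```javascript
// Hypothetical helper mirroring the label construction above: sort levels by
// height (descending), drop duplicate heights, and map each "<height>p" label
// back to its original hls.js level index.
function buildQualityLabels(levels) {
    const sorted = [...levels].sort((a, b) => b.height - a.height);
    const seen = new Set();
    const labels = ['auto'];
    const map = {};
    sorted.forEach((level) => {
        if (seen.has(level.height)) return;
        seen.add(level.height);
        const label = level.height + 'p';
        labels.push(label);
        map[label] = levels.indexOf(level);
    });
    return { labels, map };
}

// Duplicate 720p entries (e.g. different bitrates) collapse into one label
const { labels, map } = buildQualityLabels([
    { height: 360 }, { height: 720 }, { height: 720 }, { height: 1080 },
]);
console.log(labels); // → ['auto', '1080p', '720p', '360p']
console.log(map);    // → { '1080p': 3, '720p': 1, '360p': 0 }
```

Feeding the mapped index to `hls.currentLevel` (or `-1` for auto) is exactly the switch performed by `changeHLSQuality`.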

View File

@@ -0,0 +1,375 @@
/**
* YouTube Storyboard Preview Thumbnails
* Shows preview thumbnails when hovering over the progress bar
* Works with native HTML5 video player
*
* Fetches the proxied WebVTT storyboard from backend and extracts image URLs
*/
(function() {
'use strict';
console.log('Storyboard Preview Thumbnails loaded');
// Storyboard configuration
let storyboardImages = []; // Array of {time, imageUrl, x, y, width, height}
let previewElement = null;
let tooltipElement = null;
let video = null;
let progressBarRect = null;
/**
* Fetch and parse the storyboard VTT file
* The backend generates a VTT with proxied image URLs
*/
function fetchStoryboardVTT(vttUrl) {
return fetch(vttUrl)
.then(response => {
if (!response.ok) throw new Error('Failed to fetch storyboard VTT');
return response.text();
})
.then(vttText => {
console.log('Fetched storyboard VTT, length:', vttText.length);
const lines = vttText.split('\n');
const images = [];
let currentEntry = null;
for (let i = 0; i < lines.length; i++) {
const line = lines[i].trim();
// Parse timestamp line: 00:00:00.000 --> 00:00:10.000
if (line.includes('-->')) {
const timeMatch = line.match(/^(\d{2}):(\d{2}):(\d{2})\.(\d{3})/);
if (timeMatch) {
const hours = parseInt(timeMatch[1]);
const minutes = parseInt(timeMatch[2]);
const seconds = parseInt(timeMatch[3]);
const ms = parseInt(timeMatch[4]);
currentEntry = {
time: hours * 3600 + minutes * 60 + seconds + ms / 1000
};
}
}
// Parse image URL with crop parameters: /url#xywh=x,y,w,h
else if (line.includes('#xywh=') && currentEntry) {
const [urlPart, paramsPart] = line.split('#xywh=');
const [x, y, width, height] = paramsPart.split(',').map(Number);
currentEntry.imageUrl = urlPart;
currentEntry.x = x;
currentEntry.y = y;
currentEntry.width = width;
currentEntry.height = height;
images.push(currentEntry);
currentEntry = null;
}
}
console.log('Parsed', images.length, 'storyboard frames');
return images;
});
}
/**
* Format time as MM:SS or H:MM:SS
*/
function formatTime(seconds) {
if (isNaN(seconds)) return '0:00';
const hours = Math.floor(seconds / 3600);
const minutes = Math.floor((seconds % 3600) / 60);
const secs = Math.floor(seconds % 60);
if (hours > 0) {
return `${hours}:${minutes.toString().padStart(2, '0')}:${secs.toString().padStart(2, '0')}`;
}
return `${minutes}:${secs.toString().padStart(2, '0')}`;
}
/**
* Find the closest storyboard frame for a given time
*/
function findFrameAtTime(time) {
if (!storyboardImages.length) return null;
// Binary search for efficiency
let left = 0;
let right = storyboardImages.length - 1;
while (left <= right) {
const mid = Math.floor((left + right) / 2);
const frame = storyboardImages[mid];
if (time >= frame.time && time < (storyboardImages[mid + 1]?.time || Infinity)) {
return frame;
} else if (time < frame.time) {
right = mid - 1;
} else {
left = mid + 1;
}
}
// Return closest frame
return storyboardImages[Math.min(left, storyboardImages.length - 1)];
}
/**
* Detect browser
*/
function getBrowser() {
const ua = navigator.userAgent;
if (ua.indexOf('Firefox') > -1) return 'firefox';
if (ua.indexOf('Chrome') > -1) return 'chrome';
if (ua.indexOf('Safari') > -1) return 'safari';
return 'other';
}
/**
* Detect the progress bar position in native video element
* Different browsers have different control layouts
*/
function detectProgressBar() {
if (!video) return null;
const rect = video.getBoundingClientRect();
const browser = getBrowser();
let progressBarArea;
switch(browser) {
case 'firefox':
// Firefox: the progress bar sits at the bottom but is thinner
// It usually occupies only about 20-25px of height and is centered
progressBarArea = {
top: rect.bottom - 30, // Smaller hit area for Firefox
bottom: rect.bottom - 5, // Leave room for other controls
left: rect.left + 60, // Firefox has buttons on the left (play, volume)
right: rect.right - 10, // and on the right (fullscreen, etc.)
height: 25
};
break;
case 'chrome':
default:
// Chrome: the progress bar occupies a larger area
progressBarArea = {
top: rect.bottom - 50,
bottom: rect.bottom,
left: rect.left,
right: rect.right,
height: 50
};
break;
}
return progressBarArea;
}
/**
* Check if mouse is over the progress bar area
*/
function isOverProgressBar(mouseX, mouseY) {
if (!progressBarRect) return false;
return mouseX >= progressBarRect.left &&
mouseX <= progressBarRect.right &&
mouseY >= progressBarRect.top &&
mouseY <= progressBarRect.bottom;
}
/**
* Initialize preview elements
*/
function initPreviewElements() {
video = document.getElementById('js-video-player');
if (!video) {
console.error('Video element not found');
return;
}
console.log('Video element found, browser:', getBrowser());
// Create preview element
previewElement = document.createElement('div');
previewElement.className = 'storyboard-preview';
previewElement.style.cssText = `
position: fixed;
display: none;
pointer-events: none;
z-index: 10000;
background: #000;
border: 2px solid #fff;
border-radius: 4px;
overflow: hidden;
box-shadow: 0 4px 12px rgba(0,0,0,0.5);
`;
// Create tooltip element
tooltipElement = document.createElement('div');
tooltipElement.className = 'storyboard-tooltip';
tooltipElement.style.cssText = `
position: absolute;
bottom: -25px;
left: 50%;
transform: translateX(-50%);
background: rgba(0,0,0,0.8);
color: #fff;
padding: 2px 6px;
border-radius: 3px;
font-size: 12px;
font-family: Arial, sans-serif;
white-space: nowrap;
pointer-events: none;
`;
previewElement.appendChild(tooltipElement);
document.body.appendChild(previewElement);
// Update progress bar position on mouse move
video.addEventListener('mousemove', updateProgressBarPosition);
}
/**
* Update progress bar position detection
*/
function updateProgressBarPosition() {
progressBarRect = detectProgressBar();
}
/**
* Handle mouse move - only show preview when over progress bar area
*/
function handleMouseMove(e) {
if (!video || !storyboardImages.length) return;
// Update progress bar position on each move
progressBarRect = detectProgressBar();
// Only show preview if mouse is over the progress bar area
if (!isOverProgressBar(e.clientX, e.clientY)) {
if (previewElement) previewElement.style.display = 'none';
return;
}
// Calculate position within the progress bar
const progressBarWidth = progressBarRect.right - progressBarRect.left;
let xInProgressBar = e.clientX - progressBarRect.left;
// Adjust for Firefox's left offset
const browser = getBrowser();
if (browser === 'firefox') {
// Clamp the range so it matches the actual bar more closely
xInProgressBar = Math.max(0, Math.min(xInProgressBar, progressBarWidth));
}
const percentage = Math.max(0, Math.min(1, xInProgressBar / progressBarWidth));
const time = percentage * video.duration;
const frame = findFrameAtTime(time);
if (!frame) return;
// Preview dimensions
const previewWidth = 160;
const previewHeight = 90;
const offsetFromCursor = 10;
// Position above the cursor
let previewTop = e.clientY - previewHeight - offsetFromCursor;
// If preview would go above the video, position below the cursor
const videoRect = video.getBoundingClientRect();
if (previewTop < videoRect.top) {
previewTop = e.clientY + offsetFromCursor;
}
// Keep preview within horizontal bounds
let left = e.clientX - (previewWidth / 2);
// Firefox-specific adjustments
if (browser === 'firefox') {
// In Firefox the bar does not extend all the way to the edges
const minLeft = progressBarRect.left + 10;
const maxLeft = progressBarRect.right - previewWidth - 10;
left = Math.max(minLeft, Math.min(left, maxLeft));
} else {
left = Math.max(videoRect.left, Math.min(left, videoRect.right - previewWidth));
}
// Apply all styles
previewElement.style.cssText = `
display: block;
position: fixed;
left: ${left}px;
top: ${previewTop}px;
width: ${previewWidth}px;
height: ${previewHeight}px;
background-image: url('${frame.imageUrl}');
background-position: -${frame.x}px -${frame.y}px;
background-size: auto;
background-repeat: no-repeat;
border: 2px solid #fff;
border-radius: 4px;
box-shadow: 0 4px 12px rgba(0,0,0,0.5);
z-index: 10000;
pointer-events: none;
`;
tooltipElement.textContent = formatTime(time);
}
/**
* Handle mouse leave video
*/
function handleMouseLeave() {
if (previewElement) {
previewElement.style.display = 'none';
}
}
/**
* Initialize storyboard preview
*/
function init() {
console.log('Initializing storyboard preview...');
// Check if storyboard URL is available
if (typeof storyboard_url === 'undefined' || !storyboard_url) {
console.log('No storyboard URL available');
return;
}
console.log('Storyboard URL:', storyboard_url);
// Fetch the proxied VTT file from backend
fetchStoryboardVTT(storyboard_url)
.then(images => {
storyboardImages = images;
console.log('Loaded', images.length, 'storyboard images');
if (images.length === 0) {
console.log('No storyboard images parsed');
return;
}
initPreviewElements();
// Add event listeners to video
video.addEventListener('mousemove', handleMouseMove);
video.addEventListener('mouseleave', handleMouseLeave);
console.log('Storyboard preview initialized for', getBrowser());
})
.catch(err => {
console.error('Failed to load storyboard:', err);
});
}
// Initialize when DOM is ready
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', init);
} else {
init();
}
})();
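For reference, the parsing above can be exercised standalone. This sketch re-implements the same loop as a pure function and runs it on a made-up one-cue VTT (the URL and `#xywh` values are illustrative, not real backend output):

```javascript
// Pure-function version of the storyboard VTT parsing loop above.
function parseStoryboard(vttText) {
    const images = [];
    let entry = null;
    for (const raw of vttText.split('\n')) {
        const line = raw.trim();
        if (line.includes('-->')) {
            // Timestamp line: 00:00:10.000 --> 00:00:20.000
            const m = line.match(/^(\d{2}):(\d{2}):(\d{2})\.(\d{3})/);
            if (m) entry = { time: +m[1] * 3600 + +m[2] * 60 + +m[3] + +m[4] / 1000 };
        } else if (line.includes('#xywh=') && entry) {
            // Image line with spatial media fragment: /path.jpg#xywh=x,y,w,h
            const [url, params] = line.split('#xywh=');
            const [x, y, width, height] = params.split(',').map(Number);
            images.push(Object.assign(entry, { imageUrl: url, x, y, width, height }));
            entry = null;
        }
    }
    return images;
}

// Illustrative input: one cue pointing at a sprite-sheet crop
const sample = 'WEBVTT\n\n00:00:10.000 --> 00:00:20.000\n/proxy/storyboard.jpg#xywh=160,0,160,90\n';
const frames = parseStoryboard(sample);
console.log(frames[0].time, frames[0].x); // → 10 160
```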

View File

@@ -5,8 +5,9 @@ function changeQuality(selection) {
let videoPaused = video.paused;
let videoSpeed = video.playbackRate;
let srcInfo;
if (avMerge && typeof avMerge.close === 'function') {
avMerge.close();
}
if (selection.type == 'uni'){
srcInfo = data['uni_sources'][selection.index];
video.src = srcInfo.url;
@@ -94,7 +95,11 @@ if (data.playlist && data.playlist['id'] !== null) {
// Autoplay
(function() {
if (data.settings.related_videos_mode === 0 && data.playlist === null) {
return;
}
let playability_error = !!data.playability_error;
let isPlaylist = false;
if (data.playlist !== null && data.playlist['current_index'] !== null)
@@ -154,7 +159,10 @@ if (data.settings.related_videos_mode !== 0 || data.playlist !== null) {
if(!playability_error){
// play the video if autoplay is on
if(autoplayEnabled){
video.play().catch(function(e) {
// Autoplay blocked by browser - ignore silently
console.log('Autoplay blocked:', e.message);
});
}
}
@@ -196,4 +204,4 @@ if (data.settings.related_videos_mode !== 0 || data.playlist !== null) {
window.setTimeout(nextVideo, nextVideoDelay);
}
}
})();
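The autoplay toggle persists its state in a cookie that the player scripts read back with a RegExp (the MDN `document.cookie` pattern). A standalone sketch of that lookup, using hypothetical cookie strings:

```javascript
// Sketch of the autoplay cookie lookup used by the player scripts: extract a
// single cookie's value via RegExp. Cookie names/values here are illustrative.
function readCookie(cookieString, name) {
    // Escape RegExp metacharacters in the cookie name (e.g. playlist ids)
    const escaped = name.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
    return cookieString.replace(new RegExp(
        '(?:(?:^|.*;\\s*)' + escaped + '\\s*\\=\\s*([^;]*).*$)|^.*$'
    ), '$1');
}

console.log(readCookie('theme=dark; autoplay=1', 'autoplay')); // → '1'
console.log(readCookie('theme=dark', 'autoplay'));             // → ''
```

Missing cookies yield an empty string, which the autoplay code then treats as "disabled".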

View File

@@ -0,0 +1,329 @@
const video = document.getElementById('js-video-player');
window.hls = null;
let hls = null;
// ===========
// HLS NATIVE
// ===========
function initHLSNative(manifestUrl) {
if (!manifestUrl) {
console.error('No HLS manifest URL provided');
return;
}
console.log('Initializing native HLS player with manifest:', manifestUrl);
if (hls) {
window.hls = null;
hls.destroy();
hls = null;
}
if (Hls.isSupported()) {
hls = new Hls({
enableWorker: true,
lowLatencyMode: false,
maxBufferLength: 30,
maxMaxBufferLength: 60,
startLevel: -1,
});
window.hls = hls;
hls.loadSource(manifestUrl);
hls.attachMedia(video);
hls.on(Hls.Events.MANIFEST_PARSED, function(event, data) {
console.log('Native manifest parsed');
console.log('Levels:', data.levels.length);
const qualitySelect = document.getElementById('quality-select');
if (qualitySelect && data.levels?.length) {
qualitySelect.innerHTML = '<option value="-1">Auto</option>';
const sorted = [...data.levels].sort((a, b) => b.height - a.height);
const seen = new Set();
sorted.forEach(level => {
if (!seen.has(level.height)) {
seen.add(level.height);
const i = data.levels.indexOf(level);
const opt = document.createElement('option');
opt.value = i;
opt.textContent = level.height + 'p';
qualitySelect.appendChild(opt);
}
});
// Set initial quality from settings
if (typeof window.data !== 'undefined' && window.data.settings) {
const defaultRes = window.data.settings.default_resolution;
if (defaultRes !== 'auto' && defaultRes) {
const target = parseInt(defaultRes);
let bestLevel = -1;
let bestHeight = 0;
for (let i = 0; i < hls.levels.length; i++) {
const h = hls.levels[i].height;
if (h <= target && h > bestHeight) {
bestHeight = h;
bestLevel = i;
}
}
if (bestLevel !== -1) {
hls.currentLevel = bestLevel;
qualitySelect.value = bestLevel;
console.log('Starting at resolution:', bestHeight + 'p');
}
}
}
}
});
hls.on(Hls.Events.ERROR, function(_, data) {
if (data.fatal) {
console.error('HLS fatal error:', data.type, data.details);
switch(data.type) {
case Hls.ErrorTypes.NETWORK_ERROR:
hls.startLoad();
break;
case Hls.ErrorTypes.MEDIA_ERROR:
hls.recoverMediaError();
break;
default:
hls.destroy();
break;
}
}
});
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
video.src = manifestUrl;
} else {
console.error('HLS not supported');
}
}
// ======
// INIT
// ======
function initPlayer() {
console.log('Init native player');
if (typeof hls_manifest_url === 'undefined' || !hls_manifest_url) {
console.error('No manifest URL');
return;
}
initHLSNative(hls_manifest_url);
const qualitySelect = document.getElementById('quality-select');
if (qualitySelect) {
qualitySelect.addEventListener('change', function () {
const level = parseInt(this.value);
if (hls) {
hls.currentLevel = level;
console.log('Quality:', level === -1 ? 'Auto' : hls.levels[level]?.height + 'p');
}
});
}
}
// DOM READY
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', initPlayer);
} else {
initPlayer();
}
// =============
// AUDIO TRACKS
// =============
// Use the same readiness pattern as initPlayer so this still runs when the
// script executes after DOMContentLoaded has already fired
function initAudioTrackControls() {
const audioTrackSelect = document.getElementById('audio-track-select');
if (audioTrackSelect) {
audioTrackSelect.addEventListener('change', function() {
const trackIdx = parseInt(this.value);
if (!isNaN(trackIdx) && hls && hls.audioTracks && trackIdx >= 0 && trackIdx < hls.audioTracks.length) {
hls.audioTrack = trackIdx;
console.log('Audio track changed to:', hls.audioTracks[trackIdx].name || trackIdx);
}
});
}
if (hls) {
hls.on(Hls.Events.AUDIO_TRACKS_UPDATED, (_, data) => {
console.log('Audio tracks:', data.audioTracks);
// Populate audio track select if needed
if (audioTrackSelect && data.audioTracks.length > 0) {
audioTrackSelect.innerHTML = '<option value="">Select audio track</option>';
let originalIdx = -1;
data.audioTracks.forEach((track, idx) => {
// Find "original" track
if (originalIdx === -1 && (track.name || '').toLowerCase().includes('original')) {
originalIdx = idx;
}
const option = document.createElement('option');
option.value = String(idx);
option.textContent = track.name || track.lang || `Track ${idx}`;
audioTrackSelect.appendChild(option);
});
audioTrackSelect.disabled = false;
// Auto-select "original" audio track
if (originalIdx !== -1) {
hls.audioTrack = originalIdx;
audioTrackSelect.value = String(originalIdx);
console.log('Auto-selected original audio track:', data.audioTracks[originalIdx].name);
}
}
});
}
}
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', initAudioTrackControls);
} else {
initAudioTrackControls();
}
// ============
// START TIME
// ============
if (typeof data !== 'undefined' && data.time_start != 0 && video) {
video.addEventListener('loadedmetadata', function() {
video.currentTime = data.time_start;
});
}
// ==============
// SPEED CONTROL
// ==============
let speedInput = document.getElementById('speed-control');
if (speedInput) {
speedInput.addEventListener('keyup', (event) => {
if (event.key === 'Enter') {
let speed = parseFloat(speedInput.value);
if(!isNaN(speed)){
video.playbackRate = speed;
}
}
});
}
// =========
// Autoplay
// =========
(function() {
if (typeof data === 'undefined' || (data.settings.related_videos_mode === 0 && data.playlist === null)) {
return;
}
let playability_error = !!data.playability_error;
let isPlaylist = false;
if (data.playlist !== null && data.playlist['current_index'] !== null)
isPlaylist = true;
// read cookies on whether to autoplay
// https://developer.mozilla.org/en-US/docs/Web/API/Document/cookie
let cookieValue;
let playlist_id;
if (isPlaylist) {
// from https://stackoverflow.com/a/6969486
function escapeRegExp(string) {
// $& means the whole matched string
return string.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');
}
playlist_id = data.playlist['id'];
playlist_id = escapeRegExp(playlist_id);
cookieValue = document.cookie.replace(new RegExp(
'(?:(?:^|.*;\\s*)autoplay_'
+ playlist_id + '\\s*\\=\\s*([^;]*).*$)|^.*$'
), '$1');
} else {
cookieValue = document.cookie.replace(new RegExp(
'(?:(?:^|.*;\\s*)autoplay\\s*\\=\\s*([^;]*).*$)|^.*$'
),'$1');
}
let autoplayEnabled = 0;
if(cookieValue.length === 0){
autoplayEnabled = 0;
} else {
autoplayEnabled = Number(cookieValue);
}
// check the checkbox if autoplay is on
let checkbox = document.querySelector('.autoplay-toggle');
if(autoplayEnabled){
checkbox.checked = true;
}
// listen for checkbox to turn autoplay on and off
let cookie = 'autoplay'
if (isPlaylist)
cookie += '_' + playlist_id;
checkbox.addEventListener( 'change', function() {
if(this.checked) {
autoplayEnabled = 1;
document.cookie = cookie + '=1; SameSite=Strict';
} else {
autoplayEnabled = 0;
document.cookie = cookie + '=0; SameSite=Strict';
}
});
if(!playability_error){
// play the video if autoplay is on
if(autoplayEnabled){
video.play().catch(function(e) {
// Autoplay blocked by browser - ignore silently
console.log('Autoplay blocked:', e.message);
});
}
}
// determine next video url
let nextVideoUrl;
if (isPlaylist) {
let currentIndex = data.playlist['current_index'];
if (data.playlist['current_index']+1 == data.playlist['items'].length)
nextVideoUrl = null;
else
nextVideoUrl = data.playlist['items'][data.playlist['current_index']+1]['url'];
// scroll playlist to proper position
// item height + gap == 100
let pl = document.querySelector('.playlist-videos');
pl.scrollTop = 100*currentIndex;
} else {
if (data.related.length === 0)
nextVideoUrl = null;
else
nextVideoUrl = data.related[0]['url'];
}
let nextVideoDelay = 1000;
// go to next video when video ends
// https://stackoverflow.com/a/2880950
if (nextVideoUrl) {
if(playability_error){
videoEnded();
} else {
video.addEventListener('ended', videoEnded, false);
}
function nextVideo(){
if(autoplayEnabled){
window.location.href = nextVideoUrl;
}
}
function videoEnded(e) {
window.setTimeout(nextVideo, nextVideoDelay);
}
}
})();
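The `default_resolution` startup logic above (choose the highest level whose height does not exceed the configured target, otherwise stay on auto) can be sketched as a pure function; `pickLevel` is a hypothetical name:

```javascript
// Standalone sketch of the default-resolution pick: return the index of the
// highest level with height <= targetHeight, or -1 to leave hls.js in auto.
function pickLevel(levels, targetHeight) {
    let best = -1;
    let bestHeight = 0;
    levels.forEach((level, i) => {
        if (level.height <= targetHeight && level.height > bestHeight) {
            bestHeight = level.height;
            best = i;
        }
    });
    return best;
}

console.log(pickLevel([{ height: 360 }, { height: 720 }, { height: 1080 }], 720)); // → 1
console.log(pickLevel([{ height: 1080 }], 480)); // → -1 (auto)
```

Returning `-1` matches the hls.js convention where `hls.currentLevel = -1` enables adaptive (auto) level selection.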

View File

@@ -9,6 +9,8 @@
--thumb-background: #F5F5F5;
--link: #212121;
--link-visited: #808080;
--border-color: #CCCCCC;
--thead-background: #d0d0d0;
--border-bg: #212121;
--border-bg-settings: #91918C;
--border-bg-license: #91918C;

View File

@@ -37,3 +37,156 @@ e.g. Firefox playback speed options */
max-height: 320px;
overflow-y: auto;
}
/*
* Custom styles similar to youtube
*/
.plyr__controls {
display: flex;
justify-content: center;
padding-bottom: 0px;
}
.plyr__progress__container {
position: absolute;
bottom: 0;
width: 100%;
margin-bottom: -5px;
}
.plyr__controls .plyr__controls__item:first-child {
margin-left: 0;
margin-right: 0;
z-index: 5;
}
.plyr__controls .plyr__controls__item.plyr__volume {
margin-left: auto;
}
.plyr__controls .plyr__controls__item.plyr__progress__container {
padding-left: 10px;
padding-right: 10px;
}
.plyr__progress input[type="range"] {
margin-bottom: 50px;
}
/*
* Plyr Custom Controls
*/
.plyr__control svg.hls_audio_icon,
.plyr__control svg.hls_quality_icon {
fill: none;
}
.plyr__control[data-plyr="quality-custom"],
.plyr__control[data-plyr="audio-custom"] {
cursor: pointer;
}
.plyr__control[data-plyr="quality-custom"]:hover,
.plyr__control[data-plyr="audio-custom"]:hover {
background: rgba(255, 255, 255, 0.2);
}
/*
* Custom styles for dropdown controls
*/
.plyr__control--custom {
padding: 0;
}
/* Quality and Audio containers */
#plyr-quality-container,
#plyr-audio-container {
position: relative;
display: inline-flex;
align-items: center;
}
/* Quality and Audio buttons */
#plyr-quality-container .plyr__control,
#plyr-audio-container .plyr__control {
display: inline-flex;
align-items: center;
gap: 4px;
}
/* Text labels */
#plyr-quality-text,
#plyr-audio-text {
font-size: 12px;
margin-left: 2px;
}
/* Dropdowns */
.plyr-quality-dropdown,
.plyr-audio-dropdown {
position: absolute;
bottom: 100%;
right: 0;
margin-bottom: 8px;
background: #E6E6E6;
color: #23282f;
border-radius: 4px;
padding: 4px 6px;
min-width: 90px;
display: none;
z-index: 100;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.25);
border: 1px solid rgba(0, 0, 0, 0.08);
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Helvetica, Arial, sans-serif;
max-height: 320px;
overflow-y: auto;
}
/* The audio dropdown needs to be slightly wider */
.plyr-audio-dropdown {
min-width: 120px;
}
/* Dropdown options */
.plyr-quality-option,
.plyr-audio-option {
padding: 6px 16px;
margin-bottom: 2px;
cursor: pointer;
font-size: 13px;
transition: all 0.15s;
color: #23282f;
white-space: nowrap;
text-align: left;
}
/* Active/selected option */
.plyr-quality-option[data-active="true"],
.plyr-audio-option[data-active="true"] {
background: #00b3ff;
color: #FFF;
font-weight: 500;
border-radius: 4px;
}
/* Hover state */
.plyr-quality-option:hover,
.plyr-audio-option:hover {
background: #00b3ff;
color: #FFF;
font-weight: 500;
border-radius: 4px;
}
/* No audio tracks message */
.plyr-audio-no-tracks {
padding: 6px 16px;
font-size: 12px;
color: rgba(0, 0, 0, 0.5); /* readable on the light dropdown background */
white-space: nowrap;
}
/*
* End custom styles
*/

View File

@@ -128,6 +128,29 @@ header {
background-color: var(--buttom-hover);
}
.live-url-choices {
background-color: var(--thumb-background);
margin: 1rem 0;
padding: 1rem;
}
.playability-error {
position: relative;
box-sizing: border-box;
height: 30vh;
margin: 1rem 0;
}
.playability-error > span {
display: flex;
background-color: var(--thumb-background);
height: 100%;
object-fit: cover;
justify-content: center;
align-items: center;
text-align: center;
}
.playlist { .playlist {
display: grid; display: grid;
grid-gap: 4px; grid-gap: 4px;
@@ -284,18 +307,122 @@ figure.sc-video {
padding-top: 0.5rem; padding-top: 0.5rem;
padding-bottom: 0.5rem; padding-bottom: 0.5rem;
} }
.v-download { grid-area: v-download; } .v-download {
.v-download > ul.download-dropdown-content { grid-area: v-download;
background: var(--secondary-background); margin-bottom: 0.5rem;
padding-left: 0px;
} }
.v-download > ul.download-dropdown-content > li.download-format { .v-download details {
list-style: none; display: block;
width: 100%;
}
.v-download > summary {
cursor: pointer;
padding: 0.4rem 0; padding: 0.4rem 0;
padding-left: 1rem; padding-left: 1rem;
} }
.v-download > ul.download-dropdown-content > li.download-format a.download-link { .v-download > summary.download-dropdown-label {
cursor: pointer;
-webkit-touch-callout: none;
-webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
padding-bottom: 6px;
padding-left: .75em;
padding-right: .75em;
padding-top: 6px;
text-align: center;
white-space: nowrap;
background-color: var(--buttom);
border: 1px solid var(--button-border);
color: var(--buttom-text);
border-radius: 5px;
margin-bottom: 0.5rem;
}
.v-download > summary.download-dropdown-label:hover {
background-color: var(--buttom-hover);
}
.v-download > .download-table-container {
background: var(--secondary-background);
max-height: 65vh;
overflow-y: auto;
border: 1px solid var(--button-border);
border-radius: 8px;
box-shadow: 0 4px 12px rgba(0,0,0,0.15);
}
.download-table {
width: 100%;
border-collapse: separate;
border-spacing: 0;
font-size: 0.875rem;
}
.download-table thead {
background: var(--thead-background);
position: sticky;
top: 0;
z-index: 1;
}
.download-table th,
.download-table td {
padding: 0.7rem 0.9rem;
text-align: left;
border-bottom: 1px solid var(--button-border);
}
.download-table th {
font-weight: 600;
font-size: 0.7rem;
text-transform: uppercase;
letter-spacing: 0.8px;
}
.download-table tbody tr {
transition: all 0.2s ease;
}
.download-table tbody tr:hover {
background: var(--primary-background);
}
.download-table a.download-link {
display: inline-block;
padding: 0.4rem 0.85rem;
background: rgba(0,0,0,0.12);
color: var(--buttom-text);
text-decoration: none; text-decoration: none;
border-radius: 5px;
font-weight: 500;
font-size: 0.85rem;
transition: background 0.2s ease;
white-space: nowrap;
}
.download-table a.download-link:hover {
background: rgba(0,0,0,0.28);
color: var(--buttom-text);
}
.download-table tbody tr:last-child td {
border-bottom: none;
}
.download-table td[data-label="Ext"] {
font-family: monospace;
font-size: 0.8rem;
font-weight: 600;
}
.download-table td[data-label="Link"] {
white-space: nowrap;
vertical-align: middle;
}
.download-table td[data-label="Codecs"] {
max-width: 180px;
text-overflow: ellipsis;
overflow: hidden;
font-family: monospace;
font-size: 0.75rem;
}
.download-table td[data-label="Size"] {
font-family: monospace;
font-size: 0.85rem;
}
.download-table td[colspan="3"] {
font-style: italic;
opacity: 0.7;
} }
.v-description { .v-description {
@@ -622,6 +749,9 @@ figure.sc-video {
max-height: 80vh; max-height: 80vh;
overflow-y: scroll; overflow-y: scroll;
} }
.playability-error {
height: 60vh;
}
.playlist { .playlist {
display: grid; display: grid;
grid-gap: 1px; grid-gap: 1px;

View File

@@ -30,8 +30,7 @@ database_path = os.path.join(settings.data_dir, "subscriptions.sqlite")
 def open_database():
-    if not os.path.exists(settings.data_dir):
-        os.makedirs(settings.data_dir)
+    os.makedirs(settings.data_dir, exist_ok=True)
     connection = sqlite3.connect(database_path, check_same_thread=False)
     try:
@@ -293,7 +292,10 @@ def youtube_timestamp_to_posix(dumb_timestamp):
 def posix_to_dumbed_down(posix_time):
     '''Inverse of youtube_timestamp_to_posix.'''
     delta = int(time.time() - posix_time)
-    assert delta >= 0
+    # Guard against future timestamps (clock drift) without relying on
+    # `assert` (which is stripped under `python -O`).
+    if delta < 0:
+        delta = 0
     if delta == 0:
         return '0 seconds ago'
@@ -532,7 +534,8 @@ def _get_upstream_videos(channel_id):
             return None
         root = defusedxml.ElementTree.fromstring(feed)
-        assert remove_bullshit(root.tag) == 'feed'
+        if remove_bullshit(root.tag) != 'feed':
+            raise ValueError('Root element is not <feed>')
         for entry in root:
             if (remove_bullshit(entry.tag) != 'entry'):
                 continue
@@ -540,13 +543,13 @@ def _get_upstream_videos(channel_id):
             # it's yt:videoId in the xml but the yt: is turned into a namespace which is removed by remove_bullshit
             video_id_element = find_element(entry, 'videoId')
             time_published_element = find_element(entry, 'published')
-            assert video_id_element is not None
-            assert time_published_element is not None
+            if video_id_element is None or time_published_element is None:
+                raise ValueError('Missing videoId or published element')
             time_published = int(calendar.timegm(time.strptime(time_published_element.text, '%Y-%m-%dT%H:%M:%S+00:00')))
             times_published[video_id_element.text] = time_published
-    except AssertionError:
+    except ValueError:
         print('Failed to read atoma feed for ' + channel_status_name)
         traceback.print_exc()
     except defusedxml.ElementTree.ParseError:
@@ -594,7 +597,10 @@ def _get_upstream_videos(channel_id):
     # Special case: none of the videos have a time published.
     # In this case, make something up
     if videos and videos[0]['time_published'] is None:
-        assert all(v['time_published'] is None for v in videos)
+        # Invariant: if the first video has no timestamp, earlier passes
+        # ensure all of them are unset. Don't rely on `assert`.
+        if not all(v['time_published'] is None for v in videos):
+            raise RuntimeError('Inconsistent time_published state')
         now = time.time()
         for i in range(len(videos)):
             # 1 month between videos
@@ -809,7 +815,8 @@ def import_subscriptions():
         file = file.read().decode('utf-8')
         try:
             root = defusedxml.ElementTree.fromstring(file)
-            assert root.tag == 'opml'
+            if root.tag != 'opml':
+                raise ValueError('Root element is not <opml>')
             channels = []
             for outline_element in root[0][0]:
                 if (outline_element.tag != 'outline') or ('xmlUrl' not in outline_element.attrib):
@@ -820,7 +827,7 @@ def import_subscriptions():
                 channel_id = channel_rss_url[channel_rss_url.find('channel_id=')+11:].strip()
                 channels.append((channel_id, channel_name))
-        except (AssertionError, IndexError, defusedxml.ElementTree.ParseError) as e:
+        except (ValueError, IndexError, defusedxml.ElementTree.ParseError):
             return '400 Bad Request: Unable to read opml xml file, or the file is not the expected format', 400
     elif mime_type in ('text/csv', 'application/vnd.ms-excel'):
         content = file.read().decode('utf-8')
@@ -1072,11 +1079,20 @@ def post_subscriptions_page():
     return '', 204

+# YouTube video IDs are exactly 11 chars from [A-Za-z0-9_-]. Enforce this
+# before using the value in filesystem paths to prevent path traversal
+# (CWE-22, OWASP A01:2021).
+_VIDEO_ID_RE = re.compile(r'^[A-Za-z0-9_-]{11}$')
+
 @yt_app.route('/data/subscription_thumbnails/<thumbnail>')
 def serve_subscription_thumbnail(thumbnail):
     '''Serves thumbnail from disk if it's been saved already. If not, downloads the thumbnail, saves to disk, and serves it.'''
-    assert thumbnail[-4:] == '.jpg'
+    if not thumbnail.endswith('.jpg'):
+        flask.abort(400)
     video_id = thumbnail[0:-4]
+    if not _VIDEO_ID_RE.match(video_id):
+        flask.abort(400)
     thumbnail_path = os.path.join(thumbnails_directory, thumbnail)

     if video_id in existing_thumbnails:
@@ -1089,12 +1105,26 @@ def serve_subscription_thumbnail(thumbnail):
         f.close()
         return flask.Response(image, mimetype='image/jpeg')

-    url = f"https://i.ytimg.com/vi/{video_id}/hqdefault.jpg"
-    try:
-        image = util.fetch_url(url, report_text="Saved thumbnail: " + video_id)
-    except urllib.error.HTTPError as e:
-        print("Failed to download thumbnail for " + video_id + ": " + str(e))
-        abort(e.code)
+    image = None
+    for quality in ('hq720.jpg', 'sddefault.jpg', 'hqdefault.jpg'):
+        url = f"https://i.ytimg.com/vi/{video_id}/{quality}"
+        try:
+            image = util.fetch_url(url, report_text="Saved thumbnail: " + video_id)
+            break
+        except util.FetchError as e:
+            if '404' in str(e):
+                continue
+            print("Failed to download thumbnail for " + video_id + ": " + str(e))
+            flask.abort(500)
+        except urllib.error.HTTPError as e:
+            if e.code == 404:
+                continue
+            print("Failed to download thumbnail for " + video_id + ": " + str(e))
+            flask.abort(e.code)
+    if image is None:
+        flask.abort(404)
     try:
         f = open(thumbnail_path, 'wb')
     except FileNotFoundError:
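The video-ID check in the thumbnail route above can be exercised in isolation. Below is a minimal sketch of that validation (the `_VIDEO_ID_RE` pattern is taken from the diff; the helper name `thumbnail_video_id` is hypothetical, and this is not the full Flask handler):

```python
import re

# Same pattern as in the diff: exactly 11 chars from the YouTube ID alphabet.
_VIDEO_ID_RE = re.compile(r'^[A-Za-z0-9_-]{11}$')

def thumbnail_video_id(thumbnail):
    """Return the validated video ID for an '<id>.jpg' filename, or None.

    Hypothetical helper illustrating the route's guard: rejecting anything
    outside the strict ID alphabet also rejects path-traversal inputs
    such as '../../etc/passwd.jpg' before the name reaches os.path.join().
    """
    if not thumbnail.endswith('.jpg'):
        return None
    video_id = thumbnail[:-4]
    if not _VIDEO_ID_RE.match(video_id):
        return None
    return video_id
```

In the real route the two `None` branches correspond to `flask.abort(400)`.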

View File

@@ -8,7 +8,7 @@
 <head>
     <meta charset="UTF-8">
     <meta name="viewport" content="width=device-width, initial-scale=1">
-    <meta http-equiv="Content-Security-Policy" content="default-src 'self' 'unsafe-inline' 'unsafe-eval'; media-src 'self' blob: {{ app_url }}/* data: https://*.googlevideo.com; {{ "img-src 'self' https://*.googleusercontent.com https://*.ggpht.com https://*.ytimg.com;" if not settings.proxy_images else "" }}">
+    <meta http-equiv="Content-Security-Policy" content="default-src 'self' 'unsafe-inline' 'unsafe-eval' blob:; media-src 'self' blob: {{ app_url }}/* data: https://*.googlevideo.com; img-src 'self' https://*.googleusercontent.com https://*.ggpht.com https://*.ytimg.com; connect-src 'self' https://*.googlevideo.com; font-src 'self' data:; worker-src 'self' blob:;">
     <title>{{ page_title }}</title>
     <link title="YT Local" href="/youtube.com/opensearch.xml" rel="search" type="application/opensearchdescription+xml">
     <link href="/youtube.com/static/favicon.ico" type="image/x-icon" rel="icon">
@@ -26,6 +26,12 @@
     // @license-end
     </script>
     {% endif %}
+    <script>
+    // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-v3-or-Later
+    // Image prefix for thumbnails
+    let settings_img_prefix = "{{ settings.img_prefix or '' }}";
+    // @license-end
+    </script>
 </head>
 <body>
@@ -35,57 +41,57 @@
     </nav>
     <form class="form" id="site-search" action="/youtube.com/results">
         <input type="search" name="search_query" class="search-box" value="{{ search_box_value }}"
-        {{ "autofocus" if (request.path in ("/", "/results") or error_message) else "" }} required placeholder="Type to search...">
+        {{ "autofocus" if (request.path in ("/", "/results") or error_message) else "" }} required placeholder="{{ _('Type to search...') }}">
-        <button type="submit" value="Search" class="search-button">Search</button>
+        <button type="submit" value="Search" class="search-button">{{ _('Search') }}</button>
         <!-- options -->
         <div class="dropdown">
             <!-- hidden box -->
             <input id="options-toggle-cbox" class="opt-box" type="checkbox">
             <!-- end hidden box -->
-            <label class="dropdown-label" for="options-toggle-cbox">Options</label>
+            <label class="dropdown-label" for="options-toggle-cbox">{{ _('Options') }}</label>
             <div class="dropdown-content">
-                <h3>Sort by</h3>
+                <h3>{{ _('Sort by') }}</h3>
                 <div class="option">
                     <input type="radio" id="sort_relevance" name="sort" value="0">
-                    <label for="sort_relevance">Relevance</label>
+                    <label for="sort_relevance">{{ _('Relevance') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="sort_upload_date" name="sort" value="2">
-                    <label for="sort_upload_date">Upload date</label>
+                    <label for="sort_upload_date">{{ _('Upload date') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="sort_view_count" name="sort" value="3">
-                    <label for="sort_view_count">View count</label>
+                    <label for="sort_view_count">{{ _('View count') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="sort_rating" name="sort" value="1">
-                    <label for="sort_rating">Rating</label>
+                    <label for="sort_rating">{{ _('Rating') }}</label>
                 </div>
-                <h3>Upload date</h3>
+                <h3>{{ _('Upload date') }}</h3>
                 <div class="option">
                     <input type="radio" id="time_any" name="time" value="0">
-                    <label for="time_any">Any</label>
+                    <label for="time_any">{{ _('Any') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="time_last_hour" name="time" value="1">
-                    <label for="time_last_hour">Last hour</label>
+                    <label for="time_last_hour">{{ _('Last hour') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="time_today" name="time" value="2">
-                    <label for="time_today">Today</label>
+                    <label for="time_today">{{ _('Today') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="time_this_week" name="time" value="3">
-                    <label for="time_this_week">This week</label>
+                    <label for="time_this_week">{{ _('This week') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="time_this_month" name="time" value="4">
-                    <label for="time_this_month">This month</label>
+                    <label for="time_this_month">{{ _('This month') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="time_this_year" name="time" value="5">
-                    <label for="time_this_year">This year</label>
+                    <label for="time_this_year">{{ _('This year') }}</label>
                 </div>
                 <h3>Type</h3>

View File

@@ -81,10 +81,14 @@
         <!-- new-->
         <div id="links-metadata">
         {% if current_tab in ('videos', 'shorts', 'streams') %}
-            {% set sorts = [('1', 'views'), ('2', 'oldest'), ('3', 'newest'), ('4', 'newest - no shorts'),] %}
+            {% set sorts = [('3', 'newest'), ('4', 'newest - no shorts')] %}
+            {% if current_tab in ('shorts', 'streams') and not is_last_page %}
+                <div id="number-of-results">{{ number_of_videos }}+ videos</div>
+            {% else %}
             <div id="number-of-results">{{ number_of_videos }} videos</div>
+            {% endif %}
         {% elif current_tab == 'playlists' %}
-            {% set sorts = [('2', 'oldest'), ('3', 'newest'), ('4', 'last video added')] %}
+            {% set sorts = [('3', 'newest'), ('4', 'last video added')] %}
             {% if items %}
                 <h2 class="page-number">Page {{ page_number }}</h2>
             {% else %}
@@ -117,7 +121,11 @@
     <hr/>
     <footer class="pagination-container">
-        {% if current_tab in ('videos', 'shorts', 'streams') %}
+        {% if current_tab in ('shorts', 'streams') %}
+            <nav class="next-previous-button-row">
+                {{ common_elements.next_previous_buttons(is_last_page, channel_url + '/' + current_tab, parameters_dictionary) }}
+            </nav>
+        {% elif current_tab == 'videos' %}
             <nav class="pagination-list">
                 {{ common_elements.page_buttons(number_of_pages, channel_url + '/' + current_tab, parameters_dictionary, include_ends=(current_sort.__str__() in '34')) }}
             </nav>

View File

@@ -3,13 +3,13 @@
 {% macro render_comment(comment, include_avatar, timestamp_links=False) %}
 <div class="comment-container">
     <div class="comment">
-        <a class="author-avatar" href="{{ comment['author_url'] }}" title="{{ comment['author'] }}">
+        <a class="author-avatar" href="{{ comment['author_url'] or '#' }}" title="{{ comment['author'] }}">
             {% if include_avatar %}
                 <img class="author-avatar-img" alt="{{ comment['author'] }}" src="{{ comment['author_avatar'] }}">
             {% endif %}
         </a>
         <address class="author-name">
-            <a class="author" href="{{ comment['author_url'] }}" title="{{ comment['author'] }}">{{ comment['author'] }}</a>
+            <a class="author" href="{{ comment['author_url'] or '#' }}" title="{{ comment['author'] }}">{{ comment['author'] }}</a>
         </address>
         <a class="permalink" href="{{ comment['permalink'] }}" title="permalink">
             <span>{{ comment['time_published'] }}</span>
@@ -58,7 +58,7 @@
         {% endfor %}
     </div>
     {% if 'more_comments_url' is in comments_info %}
-        <a class="page-button more-comments" href="{{ comments_info['more_comments_url'] }}">More comments</a>
+        <a class="page-button more-comments" href="{{ comments_info['more_comments_url'] }}">{{ _('More comments') }}</a>
     {% endif %}
 {% endif %}

View File

@@ -20,14 +20,14 @@
         {{ info['error'] }}
     {% else %}
         <div class="item-video {{ info['type'] + '-item' }}">
-            <a class="thumbnail-box" href="{{ info['url'] }}" title="{{ info['title'] }}">
+            <a class="thumbnail-box" href="{{ info['url'] or '#' }}" title="{{ info['title'] }}">
                 <div class="thumbnail {% if info['type'] == 'channel' %} channel {% endif %}">
                     {% if lazy_load %}
-                        <img class="thumbnail-img lazy" alt="&#x20;" data-src="{{ info['thumbnail'] }}">
+                        <img class="thumbnail-img lazy" alt="&#x20;" data-src="{{ info['thumbnail'] }}" onerror="thumbnail_fallback(this)">
                     {% elif info['type'] == 'channel' %}
-                        <img class="thumbnail-img channel" alt="&#x20;" src="{{ info['thumbnail'] }}">
+                        <img class="thumbnail-img channel" alt="&#x20;" src="{{ info['thumbnail'] }}" onerror="thumbnail_fallback(this)">
                     {% else %}
-                        <img class="thumbnail-img" alt="&#x20;" src="{{ info['thumbnail'] }}">
+                        <img class="thumbnail-img" alt="&#x20;" src="{{ info['thumbnail'] }}" onerror="thumbnail_fallback(this)">
                     {% endif %}
                     {% if info['type'] != 'channel' %}
@@ -35,7 +35,7 @@
                     {% endif %}
                 </div>
             </a>
-            <h4 class="title"><a href="{{ info['url'] }}" title="{{ info['title'] }}">{{ info['title'] }}</a></h4>
+            <h4 class="title"><a href="{{ info['url'] or '#' }}" title="{{ info['title'] }}">{{ info['title'] }}</a></h4>
             {% if include_author %}
                 {% set author_description = info['author'] %}
@@ -58,7 +58,9 @@
             <div class="stats {{'horizontal-stats' if horizontal else 'vertical-stats'}}">
                 {% if info['type'] == 'channel' %}
+                    {% if info.get('approx_subscriber_count') %}
                     <div>{{ info['approx_subscriber_count'] }} subscribers</div>
+                    {% endif %}
                     <div>{{ info['video_count']|commatize }} videos</div>
                 {% else %}
                     {% if info.get('time_published') %}

View File

@@ -3,7 +3,7 @@
 <head>
     <meta charset="UTF-8">
     <meta name="viewport" content="width=device-width, initial-scale=1">
-    <meta http-equiv="Content-Security-Policy" content="default-src 'self' 'unsafe-inline'; media-src 'self' https://*.googlevideo.com; {{ "img-src 'self' https://*.googleusercontent.com https://*.ggpht.com https://*.ytimg.com;" if not settings.proxy_images else "" }}">
+    <meta http-equiv="Content-Security-Policy" content="default-src 'self' 'unsafe-inline' 'unsafe-eval'; media-src 'self' blob: https://*.googlevideo.com; img-src 'self' https://*.googleusercontent.com https://*.ggpht.com https://*.ytimg.com; connect-src 'self' https://*.googlevideo.com; font-src 'self' data:;">
     <title>{{ title }}</title>
     <link href="/youtube.com/static/favicon.ico" type="image/x-icon" rel="icon">
     {% if settings.use_video_player == 2 %}
@@ -37,9 +37,6 @@
 <body>
     <video id="js-video-player" controls autofocus onmouseleave="{{ title }}"
         oncontextmenu="{{ title }}" onmouseenter="{{ title }}" title="{{ title }}">
-        {% if uni_sources %}
-        <source src="{{ uni_sources[uni_idx]['url'] }}" type="{{ uni_sources[uni_idx]['type'] }}" data-res="{{ uni_sources[uni_idx]['quality'] }}">
-        {% endif %}
         {% for source in subtitle_sources %}
             {% if source['on'] %}
                 <track label="{{ source['label'] }}" src="{{ source['url'] }}" kind="subtitles" srclang="{{ source['srclang'] }}" default>
@@ -47,28 +44,71 @@
                 <track label="{{ source['label'] }}" src="{{ source['url'] }}" kind="subtitles" srclang="{{ source['srclang'] }}">
             {% endif %}
         {% endfor %}
-    </video>
-    {% if js_data %}
-    <script>
-    // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-v3-or-Later
-    data = {{ js_data|tojson }};
-    // @license-end
-    </script>
-    {% endif %}
+        {% if uni_sources %}
+            {% for source in uni_sources %}
+                <source src="{{ source['url'] }}" type="{{ source['type'] }}" title="{{ source['quality_string'] }}">
+            {% endfor %}
+        {% endif %}
+    </video>
     <script>
     // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-v3-or-Later
     let storyboard_url = {{ storyboard_url | tojson }};
+    let hls_manifest_url = {{ hls_manifest_url | tojson }};
+    let hls_unavailable = {{ hls_unavailable | tojson }};
+    let playback_mode = {{ playback_mode | tojson }};
+    let pair_sources = {{ pair_sources | tojson }};
+    let pair_idx = {{ pair_idx | tojson }};
     // @license-end
     </script>
-    {% if settings.use_video_player == 2 %}
+    {% set hls_should_work = (playback_mode == 'hls' or playback_mode == 'auto') and not hls_unavailable %}
+    {% set use_dash = not hls_should_work %}
+    {% if not use_dash %}
+    <script src="/youtube.com/static/js/hls.min.js"
+        integrity="sha512-CSVqc4a7tn+tizDNt+eDoVn2fXYAwMDpCLrwGlWrOktNfZQ9gp4dKKScElMeRlrIifhliXs0a06BLaUgmMlCUw=="
+        crossorigin="anonymous"></script>
+    {% endif %}
+    <script src="/youtube.com/static/js/common.js"></script>
+    {% if settings.use_video_player == 0 %}
+    <!-- Native player -->
+    {% if use_dash %}
+    <script src="/youtube.com/static/js/watch.dash.js"></script>
+    {% else %}
+    <script src="/youtube.com/static/js/watch.hls.js"></script>
+    {% endif %}
+    {% elif settings.use_video_player == 1 %}
+    <!-- Native player with hotkeys -->
+    <script src="/youtube.com/static/js/hotkeys.js"></script>
+    {% if use_dash %}
+    <script src="/youtube.com/static/js/watch.dash.js"></script>
+    {% else %}
+    <script src="/youtube.com/static/js/watch.hls.js"></script>
+    {% endif %}
+    {% elif settings.use_video_player == 2 %}
     <!-- plyr -->
     <script src="/youtube.com/static/modules/plyr/plyr.min.js"
         integrity="sha512-l6ZzdXpfMHRfifqaR79wbYCEWjLDMI9DnROvb+oLkKq6d7MGroGpMbI7HFpicvmAH/2aQO+vJhewq8rhysrImw=="
         crossorigin="anonymous"></script>
-    <script src="/youtube.com/static/js/plyr-start.js"></script>
+    {% if use_dash %}
+    <script src="/youtube.com/static/js/plyr.dash.start.js"></script>
+    {% else %}
+    <script src="/youtube.com/static/js/plyr.hls.start.js"></script>
+    {% endif %}
     <!-- /plyr -->
-    {% elif settings.use_video_player == 1 %}
-    <script src="/youtube.com/static/js/hotkeys.js"></script>
+    {% endif %}
+    {% if use_dash %}
+    <script src="/youtube.com/static/js/av-merge.js"></script>
+    {% endif %}
+    <!-- Storyboard Preview Thumbnails (native players only; Plyr handles this internally) -->
+    {% if settings.use_video_player != 2 and settings.native_player_storyboard %}
+    <script src="/youtube.com/static/js/storyboard-preview.js"></script>
     {% endif %}
 </body>
 </html>
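The template's `hls_should_work` / `use_dash` logic above boils down to one boolean decision. A standalone sketch of that decision (the function name `pick_watch_script` is hypothetical; in the project the real logic lives in the Jinja `{% set %}` lines):

```python
def pick_watch_script(playback_mode, hls_unavailable):
    """Mirror the template: use HLS only when the mode requests it ('hls')
    or leaves it open ('auto') AND an HLS manifest is actually available;
    anything else falls back to the DASH player script."""
    hls_should_work = playback_mode in ('hls', 'auto') and not hls_unavailable
    if hls_should_work:
        return 'watch.hls.js'
    return 'watch.dash.js'
```

Note the asymmetry: `hls_unavailable` vetoes HLS even in `'auto'` mode, so a stream without an HLS manifest never loads `hls.min.js`.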

View File

@@ -29,6 +29,11 @@
                 <td data-label="License"><a href="http://www.gnu.org/licenses/agpl-3.0.html">AGPL-3.0 or later</a></td>
                 <td data-label="Source"><a href="/youtube.com/static/js/common.js">common.js</a></td>
             </tr>
+            <tr>
+                <td data-label="File"><a href="/youtube.com/static/js/hls.min.js">hls.min.js</a></td>
+                <td data-label="License"><a href="https://spdx.org/licenses/BSD-3-Clause.html">BSD-3-Clause</a></td>
+                <td data-label="Source"><a href="https://github.com/video-dev/hls.js/tree/v1.6.15/src">hls.js v1.6.15 source</a></td>
+            </tr>
             <tr>
                 <td data-label="File"><a href="/youtube.com/static/js/hotkeys.js">hotkeys.js</a></td>
                 <td data-label="License"><a href="http://www.gnu.org/licenses/agpl-3.0.html">AGPL-3.0 or later</a></td>
@@ -40,9 +45,24 @@
                 <td data-label="Source"><a href="/youtube.com/static/js/playlistadd.js">playlistadd.js</a></td>
             </tr>
             <tr>
-                <td data-label="File"><a href="/youtube.com/static/js/plyr-start.js">plyr-start.js</a></td>
+                <td data-label="File"><a href="/youtube.com/static/js/plyr.dash.start.js">plyr.dash.start.js</a></td>
                 <td data-label="License"><a href="http://www.gnu.org/licenses/agpl-3.0.html">AGPL-3.0 or later</a></td>
-                <td data-label="Source"><a href="/youtube.com/static/js/plyr-start.js">plyr-start.js</a></td>
+                <td data-label="Source"><a href="/youtube.com/static/js/plyr.dash.start.js">plyr.dash.start.js</a></td>
+            </tr>
+            <tr>
+                <td data-label="File"><a href="/youtube.com/static/js/plyr.hls.start.js">plyr.hls.start.js</a></td>
+                <td data-label="License"><a href="http://www.gnu.org/licenses/agpl-3.0.html">AGPL-3.0 or later</a></td>
+                <td data-label="Source"><a href="/youtube.com/static/js/plyr.hls.start.js">plyr.hls.start.js</a></td>
+            </tr>
+            <tr>
+                <td data-label="File"><a href="/youtube.com/static/js/sponsorblock.js">sponsorblock.js</a></td>
+                <td data-label="License"><a href="http://www.gnu.org/licenses/agpl-3.0.html">AGPL-3.0 or later</a></td>
+                <td data-label="Source"><a href="/youtube.com/static/js/sponsorblock.js">sponsorblock.js</a></td>
+            </tr>
+            <tr>
+                <td data-label="File"><a href="/youtube.com/static/js/storyboard-preview.js">storyboard-preview.js</a></td>
+                <td data-label="License"><a href="http://www.gnu.org/licenses/agpl-3.0.html">AGPL-3.0 or later</a></td>
+                <td data-label="Source"><a href="/youtube.com/static/js/storyboard-preview.js">storyboard-preview.js</a></td>
             </tr>
             <tr>
                 <td data-label="File"><a href="/youtube.com/static/modules/plyr/plyr.min.js">plyr.min.js</a></td>
@@ -55,9 +75,14 @@
                 <td data-label="Source"><a href="/youtube.com/static/js/transcript-table.js">transcript-table.js</a></td>
             </tr>
             <tr>
-                <td data-label="File"><a href="/youtube.com/static/js/watch.js">watch.js</a></td>
+                <td data-label="File"><a href="/youtube.com/static/js/watch.dash.js">watch.dash.js</a></td>
                 <td data-label="License"><a href="http://www.gnu.org/licenses/agpl-3.0.html">AGPL-3.0 or later</a></td>
-                <td data-label="Source"><a href="/youtube.com/static/js/watch.js">watch.js</a></td>
+                <td data-label="Source"><a href="/youtube.com/static/js/watch.dash.js">watch.dash.js</a></td>
+            </tr>
+            <tr>
+                <td data-label="File"><a href="/youtube.com/static/js/watch.hls.js">watch.hls.js</a></td>
+                <td data-label="License"><a href="http://www.gnu.org/licenses/agpl-3.0.html">AGPL-3.0 or later</a></td>
+                <td data-label="Source"><a href="/youtube.com/static/js/watch.hls.js">watch.hls.js</a></td>
             </tr>
         </tbody>
     </table>


@@ -10,11 +10,17 @@
<div class="playlist-metadata"> <div class="playlist-metadata">
<div class="author"> <div class="author">
{% if thumbnail %}
<img alt="{{ title }}" src="{{ thumbnail }}"> <img alt="{{ title }}" src="{{ thumbnail }}">
{% endif %}
<h2>{{ title }}</h2> <h2>{{ title }}</h2>
</div> </div>
<div class="summary"> <div class="summary">
{% if author_url %}
<a class="playlist-author" href="{{ author_url }}">{{ author }}</a> <a class="playlist-author" href="{{ author_url }}">{{ author }}</a>
{% else %}
<span class="playlist-author">{{ author }}</span>
{% endif %}
</div> </div>
<div class="playlist-stats"> <div class="playlist-stats">
<div>{{ video_count|commatize }} videos</div> <div>{{ video_count|commatize }} videos</div>


@@ -7,15 +7,15 @@
{% block main %} {% block main %}
<form method="POST" class="settings-form"> <form method="POST" class="settings-form">
{% for categ in categories %} {% for categ in categories %}
<h2>{{ categ|capitalize }}</h2> <h2>{{ _(categ|capitalize) }}</h2>
<ul class="settings-list"> <ul class="settings-list">
{% for setting_name, setting_info, value in settings_by_category[categ] %} {% for setting_name, setting_info, value in settings_by_category[categ] %}
{% if not setting_info.get('hidden', false) %} {% if not setting_info.get('hidden', false) %}
<li class="setting-item"> <li class="setting-item">
{% if 'label' is in(setting_info) %} {% if 'label' is in(setting_info) %}
<label for="{{ 'setting_' + setting_name }}" {% if 'comment' is in(setting_info) %}title="{{ setting_info['comment'] }}" {% endif %}>{{ setting_info['label'] }}</label> <label for="{{ 'setting_' + setting_name }}" {% if 'comment' is in(setting_info) %}title="{{ setting_info['comment'] }}" {% endif %}>{{ _(setting_info['label']) }}</label>
{% else %} {% else %}
<label for="{{ 'setting_' + setting_name }}" {% if 'comment' is in(setting_info) %}title="{{ setting_info['comment'] }}" {% endif %}>{{ setting_name.replace('_', ' ')|capitalize }}</label> <label for="{{ 'setting_' + setting_name }}" {% if 'comment' is in(setting_info) %}title="{{ setting_info['comment'] }}" {% endif %}>{{ _(setting_name.replace('_', ' ')|capitalize) }}</label>
{% endif %} {% endif %}
{% if setting_info['type'].__name__ == 'bool' %} {% if setting_info['type'].__name__ == 'bool' %}
@@ -24,24 +24,32 @@
{% if 'options' is in(setting_info) %} {% if 'options' is in(setting_info) %}
<select id="{{ 'setting_' + setting_name }}" name="{{ setting_name }}"> <select id="{{ 'setting_' + setting_name }}" name="{{ setting_name }}">
{% for option in setting_info['options'] %} {% for option in setting_info['options'] %}
<option value="{{ option[0] }}" {{ 'selected' if option[0] == value else '' }}>{{ option[1] }}</option> <option value="{{ option[0] }}" {{ 'selected' if option[0] == value else '' }}>{{ _(option[1]) }}</option>
{% endfor %} {% endfor %}
</select> </select>
{% else %} {% else %}
<input type="number" id="{{ 'setting_' + setting_name }}" name="{{ setting_name }}" value="{{ value }}" step="1"> <input type="number" id="{{ 'setting_' + setting_name }}" name="{{ setting_name }}" value="{{ value }}" step="1">
{% endif %} {% endif %}
{% elif setting_info['type'].__name__ == 'float' %} {% elif setting_info['type'].__name__ == 'float' %}
<input type="number" id="{{ 'setting_' + setting_name }}" name="{{ setting_name }}" value="{{ value }}" step="0.01">
{% elif setting_info['type'].__name__ == 'str' %} {% elif setting_info['type'].__name__ == 'str' %}
<input type="text" id="{{ 'setting_' + setting_name }}" name="{{ setting_name }}" value="{{ value }}"> {% if 'options' is in(setting_info) %}
<select id="{{ 'setting_' + setting_name }}" name="{{ setting_name }}">
{% for option in setting_info['options'] %}
<option value="{{ option[0] }}" {{ 'selected' if option[0] == value else '' }}>{{ _(option[1]) }}</option>
{% endfor %}
</select>
{% else %} {% else %}
<span>Error: Unknown setting type: setting_info['type'].__name__</span> <input type="text" id="{{ 'setting_' + setting_name }}" name="{{ setting_name }}" value="{{ value }}">
{% endif %}
{% else %}
<span>Error: Unknown setting type: {{ setting_info['type'].__name__ }}</span>
{% endif %} {% endif %}
</li> </li>
{% endif %} {% endif %}
{% endfor %} {% endfor %}
</ul> </ul>
{% endfor %} {% endfor %}
<input type="submit" value="Save settings"> <input type="submit" value="{{ _('Save settings') }}">
</form> </form>
{% endblock main %} {% endblock main %}


@@ -23,22 +23,9 @@
{% endif %} {% endif %}
</span> </span>
</div> </div>
{% elif (uni_sources.__len__() == 0 or live) and hls_formats.__len__() != 0 %}
<div class="live-url-choices">
<span>Copy a url into your video player:</span>
<ol>
{% for fmt in hls_formats %}
<li class="url-choice"><div class="url-choice-label">{{ fmt['video_quality'] }}: </div><input class="url-choice-copy" value="{{ fmt['url'] }}" readonly onclick="this.select();"></li>
{% endfor %}
</ol>
</div>
{% else %} {% else %}
<figure class="sc-video"> <figure class="sc-video">
<video id="js-video-player" playsinline controls {{ 'autoplay' if settings.autoplay_videos }}> <video id="js-video-player" playsinline controls {{ 'autoplay' if settings.autoplay_videos }}>
{% if uni_sources %}
<source src="{{ uni_sources[uni_idx]['url'] }}" type="{{ uni_sources[uni_idx]['type'] }}" data-res="{{ uni_sources[uni_idx]['quality'] }}">
{% endif %}
{% for source in subtitle_sources %} {% for source in subtitle_sources %}
{% if source['on'] %} {% if source['on'] %}
<track label="{{ source['label'] }}" src="{{ source['url'] }}" kind="subtitles" srclang="{{ source['srclang'] }}" default> <track label="{{ source['label'] }}" src="{{ source['url'] }}" kind="subtitles" srclang="{{ source['srclang'] }}" default>
@@ -46,7 +33,18 @@
<track label="{{ source['label'] }}" src="{{ source['url'] }}" kind="subtitles" srclang="{{ source['srclang'] }}"> <track label="{{ source['label'] }}" src="{{ source['url'] }}" kind="subtitles" srclang="{{ source['srclang'] }}">
{% endif %} {% endif %}
{% endfor %} {% endfor %}
{% if uni_sources %}
{% for source in uni_sources %}
<source src="{{ source['url'] }}" type="{{ source['type'] }}" title="{{ source['quality_string'] }}">
{% endfor %}
{% endif %}
</video> </video>
{% if hls_unavailable and not uni_sources %}
<div class="playability-error">
<span>Error: HLS streams unavailable. Video may not play without JavaScript fallback.</span>
</div>
{% endif %}
</figure> </figure>
{% endif %} {% endif %}
@@ -76,40 +74,68 @@
<div class="external-player-controls"> <div class="external-player-controls">
<input class="speed" id="speed-control" type="text" title="Video speed"> <input class="speed" id="speed-control" type="text" title="Video speed">
{% if settings.use_video_player != 2 %} {% if settings.use_video_player < 2 %}
<!-- Native player quality selector -->
<select id="quality-select" autocomplete="off"> <select id="quality-select" autocomplete="off">
{% for src in uni_sources %} <option value="-1" selected>Auto</option>
<option value='{"type": "uni", "index": {{ loop.index0 }}}' {{ 'selected' if loop.index0 == uni_idx and not using_pair_sources else '' }} >{{ src['quality_string'] }}</option> <!-- Quality options will be populated by HLS -->
{% endfor %} </select>
{% for src_pair in pair_sources %} {% else %}
<option value='{"type": "pair", "index": {{ loop.index0}}}' {{ 'selected' if loop.index0 == pair_idx and using_pair_sources else '' }} >{{ src_pair['quality_string'] }}</option> <select id="quality-select" autocomplete="off" style="display: none;">
<!-- Quality options will be populated by HLS -->
</select>
{% endif %}
{% if settings.use_video_player != 2 %}
{% if audio_tracks|length > 1 %}
<select id="audio-track-select" autocomplete="off">
{% for track in audio_tracks %}
<option value="{{ track['id'] }}" {{ 'selected' if track['is_default'] else '' }}>{{ track['name'] }}</option>
{% endfor %} {% endfor %}
</select> </select>
{% endif %} {% endif %}
{% endif %}
</div> </div>
<input class="v-checkbox" name="video_info_list" value="{{ video_info }}" form="playlist-edit" type="checkbox"> <input class="v-checkbox" name="video_info_list" value="{{ video_info }}" form="playlist-edit" type="checkbox">
<span class="v-direct-link"><a href="https://youtu.be/{{ video_id }}" rel="noopener noreferrer" target="_blank">Direct Link</a></span> <span class="v-direct-link"><a href="https://youtu.be/{{ video_id }}" rel="noopener noreferrer" target="_blank">{{ _('Direct Link') }}</a></span>
{% if settings.use_video_download != 0 %} {% if settings.use_video_download != 0 %}
<details class="v-download"> <details class="v-download">
<summary class="download-dropdown-label">Download</summary> <summary class="download-dropdown-label">{{ _('Download') }}</summary>
<ul class="download-dropdown-content"> <div class="download-table-container">
<table class="download-table" aria-label="Download formats">
<thead>
<tr>
<th scope="col">{{ _('Ext') }}</th>
<th scope="col">{{ _('Video') }}</th>
<th scope="col">{{ _('Audio') }}</th>
<th scope="col">{{ _('Size') }}</th>
<th scope="col">{{ _('Codecs') }}</th>
<th scope="col">{{ _('Link') }}</th>
</tr>
</thead>
<tbody>
{% for format in download_formats %} {% for format in download_formats %}
<li class="download-format"> <tr>
<a class="download-link" href="{{ format['url'] }}" download="{{ title }}.{{ format['ext'] }}"> <td data-label="{{ _('Ext') }}">{{ format['ext'] }}</td>
{{ format['ext'] }} {{ format['video_quality'] }} {{ format['audio_quality'] }} {{ format['file_size'] }} {{ format['codecs'] }} <td data-label="{{ _('Video') }}">{{ format['video_quality'] }}</td>
</a> <td data-label="{{ _('Audio') }}">{{ format['audio_quality'] }}</td>
</li> <td data-label="{{ _('Size') }}">{{ format['file_size'] }}</td>
<td data-label="{{ _('Codecs') }}">{{ format['codecs'] }}</td>
<td data-label="{{ _('Link') }}"><a class="download-link" href="{{ format['url'] }}" download="{{ title }}.{{ format['ext'] }}" aria-label="{{ _('Download') }} {{ format['ext'] }} {{ format['video_quality'] }} {{ format['audio_quality'] }}">{{ _('Download') }}</a></td>
</tr>
{% endfor %} {% endfor %}
{% for download in other_downloads %} {% for download in other_downloads %}
<li class="download-format"> <tr>
<a href="{{ download['url'] }}" download> <td data-label="{{ _('Ext') }}">{{ download['ext'] }}</td>
{{ download['ext'] }} {{ download['label'] }} <td data-label="{{ _('Video') }}" colspan="3">{{ download['label'] }}</td>
</a> <td data-label="{{ _('Codecs') }}">{{ download.get('codecs', 'N/A') }}</td>
</li> <td data-label="{{ _('Link') }}"><a class="download-link" href="{{ download['url'] }}" download aria-label="{{ _('Download') }} {{ download['label'] }}">{{ _('Download') }}</a></td>
</tr>
{% endfor %} {% endfor %}
</ul> </tbody>
</table>
</div>
</details> </details>
{% else %} {% else %}
<span class="v-download"></span> <span class="v-download"></span>
@@ -141,7 +167,7 @@
{% endif %} {% endif %}
</div> </div>
<details class="v-more-info"> <details class="v-more-info">
<summary>More info</summary> <summary>{{ _('More info') }}</summary>
<div class="more-info-content"> <div class="more-info-content">
<p>Tor exit node: {{ ip_address }}</p> <p>Tor exit node: {{ ip_address }}</p>
{% if invidious_used %} {% if invidious_used %}
@@ -165,13 +191,17 @@
<div class="playlist-header"> <div class="playlist-header">
<a href="{{ playlist['url'] }}" title="{{ playlist['title'] }}"><h3>{{ playlist['title'] }}</h3></a> <a href="{{ playlist['url'] }}" title="{{ playlist['title'] }}"><h3>{{ playlist['title'] }}</h3></a>
<ul class="playlist-metadata"> <ul class="playlist-metadata">
<li><label for="playlist-autoplay-toggle">Autoplay: </label><input id="playlist-autoplay-toggle" type="checkbox" class="autoplay-toggle"></li> <li><label for="playlist-autoplay-toggle">{{ _('AutoNext') }}: </label><input id="playlist-autoplay-toggle" type="checkbox" class="autoplay-toggle"></li>
{% if playlist['current_index'] is none %} {% if playlist['current_index'] is none %}
<li>[Error!]/{{ playlist['video_count'] }}</li> <li>[Error!]/{{ playlist['video_count'] }}</li>
{% else %} {% else %}
<li>{{ playlist['current_index']+1 }}/{{ playlist['video_count'] }}</li> <li>{{ playlist['current_index']+1 }}/{{ playlist['video_count'] }}</li>
{% endif %} {% endif %}
{% if playlist['author_url'] %}
<li><a href="{{ playlist['author_url'] }}" title="{{ playlist['author'] }}">{{ playlist['author'] }}</a></li> <li><a href="{{ playlist['author_url'] }}" title="{{ playlist['author'] }}">{{ playlist['author'] }}</a></li>
{% elif playlist['author'] %}
<li>{{ playlist['author'] }}</li>
{% endif %}
</ul> </ul>
</div> </div>
<nav class="playlist-videos"> <nav class="playlist-videos">
@@ -188,7 +218,7 @@
</nav> </nav>
</div> </div>
{% elif settings.related_videos_mode != 0 %} {% elif settings.related_videos_mode != 0 %}
<div class="related-autoplay"><label for="related-autoplay-toggle">Autoplay: </label><input id="related-autoplay-toggle" type="checkbox" class="autoplay-toggle"></div> <div class="related-autoplay"><label for="related-autoplay-toggle">{{ _('AutoNext') }}: </label><input id="related-autoplay-toggle" type="checkbox" class="autoplay-toggle"></div>
{% endif %} {% endif %}
{% if subtitle_sources %} {% if subtitle_sources %}
@@ -210,7 +240,7 @@
{% if settings.related_videos_mode != 0 %} {% if settings.related_videos_mode != 0 %}
<details class="related-videos-outer" {{'open' if settings.related_videos_mode == 1 else ''}}> <details class="related-videos-outer" {{'open' if settings.related_videos_mode == 1 else ''}}>
<summary>Related Videos</summary> <summary>{{ _('Related Videos') }}</summary>
<nav class="related-videos-inner"> <nav class="related-videos-inner">
{% for info in related %} {% for info in related %}
{{ common_elements.item(info, include_badges=false) }} {{ common_elements.item(info, include_badges=false) }}
@@ -224,10 +254,10 @@
<!-- comments --> <!-- comments -->
{% if settings.comments_mode != 0 %} {% if settings.comments_mode != 0 %}
{% if comments_disabled %} {% if comments_disabled %}
<div class="comments-area-outer comments-disabled">Comments disabled</div> <div class="comments-area-outer comments-disabled">{{ _('Comments disabled') }}</div>
{% else %} {% else %}
<details class="comments-area-outer" {{'open' if settings.comments_mode == 1 else ''}}> <details class="comments-area-outer" {{'open' if settings.comments_mode == 1 else ''}}>
<summary>{{ comment_count|commatize }} comment{{'s' if comment_count != '1' else ''}}</summary> <summary>{{ comment_count|commatize }} {{ _('Comment') }}{{'s' if comment_count != '1' else ''}}</summary>
<div class="comments-area-inner comments-area"> <div class="comments-area-inner comments-area">
{% if comments_info %} {% if comments_info %}
{{ comments.video_comments(comments_info) }} {{ comments.video_comments(comments_info) }}
@@ -239,25 +269,64 @@
</div> </div>
<script src="/youtube.com/static/js/av-merge.js"></script>
<script src="/youtube.com/static/js/watch.js"></script>
<script> <script>
// @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-v3-or-Later // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-v3-or-Later
let storyboard_url = {{ storyboard_url | tojson }}; let storyboard_url = {{ storyboard_url | tojson }};
let hls_manifest_url = {{ hls_manifest_url | tojson }};
let hls_unavailable = {{ hls_unavailable | tojson }};
let playback_mode = {{ playback_mode | tojson }};
let pair_sources = {{ pair_sources | tojson }};
let pair_idx = {{ pair_idx | tojson }};
// @license-end // @license-end
</script> </script>
<script src="/youtube.com/static/js/common.js"></script> <script src="/youtube.com/static/js/common.js"></script>
<script src="/youtube.com/static/js/transcript-table.js"></script> <script src="/youtube.com/static/js/transcript-table.js"></script>
{% if settings.use_video_player == 2 %}
{% set hls_should_work = (playback_mode == 'hls' or playback_mode == 'auto') and not hls_unavailable %}
{% set use_dash = not hls_should_work %}
{% if use_dash %}
<script src="/youtube.com/static/js/av-merge.js"></script>
{% else %}
<script src="/youtube.com/static/js/hls.min.js"
integrity="sha512-CSVqc4a7tn+tizDNt+eDoVn2fXYAwMDpCLrwGlWrOktNfZQ9gp4dKKScElMeRlrIifhliXs0a06BLaUgmMlCUw=="
crossorigin="anonymous"></script>
{% endif %}
{% if settings.use_video_player == 0 %}
<!-- Native player (no hotkeys) -->
{% if use_dash %}
<script src="/youtube.com/static/js/watch.dash.js"></script>
{% else %}
<script src="/youtube.com/static/js/watch.hls.js"></script>
{% endif %}
{% elif settings.use_video_player == 1 %}
<!-- Native player with hotkeys -->
<script src="/youtube.com/static/js/hotkeys.js"></script>
{% if use_dash %}
<script src="/youtube.com/static/js/watch.dash.js"></script>
{% else %}
<script src="/youtube.com/static/js/watch.hls.js"></script>
{% endif %}
{% elif settings.use_video_player == 2 %}
<!-- plyr --> <!-- plyr -->
<script src="/youtube.com/static/modules/plyr/plyr.min.js" <script src="/youtube.com/static/modules/plyr/plyr.min.js"
integrity="sha512-l6ZzdXpfMHRfifqaR79wbYCEWjLDMI9DnROvb+oLkKq6d7MGroGpMbI7HFpicvmAH/2aQO+vJhewq8rhysrImw==" integrity="sha512-l6ZzdXpfMHRfifqaR79wbYCEWjLDMI9DnROvb+oLkKq6d7MGroGpMbI7HFpicvmAH/2aQO+vJhewq8rhysrImw=="
crossorigin="anonymous"></script> crossorigin="anonymous"></script>
<script src="/youtube.com/static/js/plyr-start.js"></script> {% if use_dash %}
<!-- /plyr --> <script src="/youtube.com/static/js/plyr.dash.start.js"></script>
{% elif settings.use_video_player == 1 %} {% else %}
<script src="/youtube.com/static/js/hotkeys.js"></script> <script src="/youtube.com/static/js/plyr.hls.start.js"></script>
{% endif %} {% endif %}
<!-- /plyr -->
{% endif %}
<!-- Storyboard Preview Thumbnails (native players only; Plyr handles this internally) -->
{% if settings.use_video_player != 2 and settings.native_player_storyboard %}
<script src="/youtube.com/static/js/storyboard-preview.js"></script>
{% endif %}
{% if settings.use_comments_js %} <script src="/youtube.com/static/js/comments.js"></script> {% endif %} {% if settings.use_comments_js %} <script src="/youtube.com/static/js/comments.js"></script> {% endif %}
{% if settings.use_sponsorblock_js %} <script src="/youtube.com/static/js/sponsorblock.js"></script> {% endif %} {% if settings.use_sponsorblock_js %} <script src="/youtube.com/static/js/sponsorblock.js"></script> {% endif %}
{% endblock main %} {% endblock main %}
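The script-selection branches in the template above can be summarized in plain Python. This is a minimal sketch of the same decision table, not the template itself; the function name and the list-of-filenames return shape are illustrative only:

```python
def scripts_for(use_video_player, playback_mode, hls_unavailable,
                native_player_storyboard=False):
    """Sketch of the Jinja logic: pick DASH vs HLS assets, then the
    per-player-mode script, then the optional storyboard preview."""
    hls_should_work = playback_mode in ('hls', 'auto') and not hls_unavailable
    use_dash = not hls_should_work
    scripts = ['av-merge.js' if use_dash else 'hls.min.js']
    if use_video_player == 0:    # native player, no hotkeys
        scripts.append('watch.dash.js' if use_dash else 'watch.hls.js')
    elif use_video_player == 1:  # native player with hotkeys
        scripts += ['hotkeys.js',
                    'watch.dash.js' if use_dash else 'watch.hls.js']
    elif use_video_player == 2:  # plyr
        scripts += ['plyr.min.js',
                    'plyr.dash.start.js' if use_dash else 'plyr.hls.start.js']
    if use_video_player != 2 and native_player_storyboard:
        scripts.append('storyboard-preview.js')
    return scripts

hls_plyr = scripts_for(2, 'auto', hls_unavailable=False)
dash_hotkeys = scripts_for(1, 'dash', hls_unavailable=False)
```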


@@ -1,4 +1,6 @@
from datetime import datetime from datetime import datetime
import logging
import random
import settings import settings
import socks import socks
import sockshandler import sockshandler
@@ -21,6 +23,8 @@ import stem
import stem.control import stem.control
import traceback import traceback
logger = logging.getLogger(__name__)
# The trouble with the requests library: It ships its own certificate bundle via certifi # The trouble with the requests library: It ships its own certificate bundle via certifi
# instead of using the system certificate store, meaning self-signed certificates # instead of using the system certificate store, meaning self-signed certificates
# configured by the user will not work. Some draconian networks block TLS unless a corporate # configured by the user will not work. Some draconian networks block TLS unless a corporate
@@ -51,8 +55,8 @@ import traceback
# https://github.com/kennethreitz/requests/issues/2966 # https://github.com/kennethreitz/requests/issues/2966
# Until then, I will use a mix of urllib3 and urllib. # Until then, I will use a mix of urllib3 and urllib.
import urllib3 import urllib3 # noqa: E402 (imported here intentionally after the long note above)
import urllib3.contrib.socks import urllib3.contrib.socks # noqa: E402
URL_ORIGIN = "/https://www.youtube.com" URL_ORIGIN = "/https://www.youtube.com"
@@ -174,7 +178,6 @@ def get_pool(use_tor):
class HTTPAsymmetricCookieProcessor(urllib.request.BaseHandler): class HTTPAsymmetricCookieProcessor(urllib.request.BaseHandler):
'''Separate cookiejars for receiving and sending''' '''Separate cookiejars for receiving and sending'''
def __init__(self, cookiejar_send=None, cookiejar_receive=None): def __init__(self, cookiejar_send=None, cookiejar_receive=None):
import http.cookiejar
self.cookiejar_send = cookiejar_send self.cookiejar_send = cookiejar_send
self.cookiejar_receive = cookiejar_receive self.cookiejar_receive = cookiejar_receive
@@ -205,6 +208,16 @@ class FetchError(Exception):
self.error_message = error_message self.error_message = error_message
def _noop_cleanup(response):
'''No-op cleanup used when the urllib opener owns the response.'''
return None
def _release_conn_cleanup(response):
'''Release the urllib3 pooled connection back to the pool.'''
response.release_conn()
def decode_content(content, encoding_header): def decode_content(content, encoding_header):
encodings = encoding_header.replace(' ', '').split(',') encodings = encoding_header.replace(' ', '').split(',')
for encoding in reversed(encodings): for encoding in reversed(encodings):
@@ -260,7 +273,7 @@ def fetch_url_response(url, headers=(), timeout=15, data=None,
opener = urllib.request.build_opener(cookie_processor) opener = urllib.request.build_opener(cookie_processor)
response = opener.open(req, timeout=timeout) response = opener.open(req, timeout=timeout)
cleanup_func = (lambda r: None) cleanup_func = _noop_cleanup
else: # Use a urllib3 pool. Cookies can't be used since urllib3 doesn't have easy support for them. else: # Use a urllib3 pool. Cookies can't be used since urllib3 doesn't have easy support for them.
# default: Retry.DEFAULT = Retry(3) # default: Retry.DEFAULT = Retry(3)
@@ -294,7 +307,7 @@ def fetch_url_response(url, headers=(), timeout=15, data=None,
error_message=msg) error_message=msg)
else: else:
raise raise
cleanup_func = (lambda r: r.release_conn()) cleanup_func = _release_conn_cleanup
return response, cleanup_func return response, cleanup_func
@@ -302,7 +315,21 @@ def fetch_url_response(url, headers=(), timeout=15, data=None,
def fetch_url(url, headers=(), timeout=15, report_text=None, data=None, def fetch_url(url, headers=(), timeout=15, report_text=None, data=None,
cookiejar_send=None, cookiejar_receive=None, use_tor=True, cookiejar_send=None, cookiejar_receive=None, use_tor=True,
debug_name=None): debug_name=None):
while True: """
Fetch URL with exponential backoff retry logic for rate limiting.
Retries:
- 429 Too Many Requests: Exponential backoff (1s, 2s, 4s, 8s, 16s)
- 503 Service Unavailable: Exponential backoff
- 302 Redirect to Google Sorry: Treated as rate limit
Max retries: 5 attempts with exponential backoff
"""
max_retries = 5
base_delay = 1.0 # Base delay in seconds
for attempt in range(max_retries):
try:
start_time = time.monotonic() start_time = time.monotonic()
response, cleanup_func = fetch_url_response( response, cleanup_func = fetch_url_response(
@@ -324,12 +351,12 @@ def fetch_url(url, headers=(), timeout=15, report_text=None, data=None,
and debug_name is not None and debug_name is not None
and content): and content):
save_dir = os.path.join(settings.data_dir, 'debug') save_dir = os.path.join(settings.data_dir, 'debug')
if not os.path.exists(save_dir): os.makedirs(save_dir, exist_ok=True)
os.makedirs(save_dir)
with open(os.path.join(save_dir, debug_name), 'wb') as f: with open(os.path.join(save_dir, debug_name), 'wb') as f:
f.write(content) f.write(content)
# Check for rate limiting (429) or redirect to Google Sorry
if response.status == 429 or ( if response.status == 429 or (
response.status == 302 and (response.getheader('Location') == url response.status == 302 and (response.getheader('Location') == url
or response.getheader('Location').startswith( or response.getheader('Location').startswith(
@@ -337,7 +364,7 @@ def fetch_url(url, headers=(), timeout=15, report_text=None, data=None,
) )
) )
): ):
print(response.status, response.reason, response.headers) logger.info(f'Rate limit response: {response.status} {response.reason}')
ip = re.search( ip = re.search(
br'IP address: ((?:[\da-f]*:)+[\da-f]+|(?:\d+\.)+\d+)', br'IP address: ((?:[\da-f]*:)+[\da-f]+|(?:\d+\.)+\d+)',
content) content)
@@ -347,28 +374,79 @@ def fetch_url(url, headers=(), timeout=15, report_text=None, data=None,
response.getheader('Set-Cookie') or '') response.getheader('Set-Cookie') or '')
ip = ip.group(1) if ip else None ip = ip.group(1) if ip else None
# don't get new identity if we're not using Tor # Without Tor, no point retrying with same IP
if not use_tor: if not use_tor or not settings.route_tor:
logger.warning('Rate limited (429). Enable Tor routing to retry with new IP.')
raise FetchError('429', reason=response.reason, ip=ip) raise FetchError('429', reason=response.reason, ip=ip)
print('Error: YouTube blocked the request because the Tor exit node is overutilized. Exit node IP address: %s' % ip) # Tor: exhausted retries
if attempt >= max_retries - 1:
logger.error(f'Rate limited after {max_retries} retries. Exit IP: {ip}')
raise FetchError('429', reason=response.reason, ip=ip,
error_message='Tor exit node overutilized after multiple retries')
# get new identity # Tor: get new identity and retry
logger.info(f'Rate limited. Getting new Tor identity... (IP: {ip})')
error = tor_manager.new_identity(start_time) error = tor_manager.new_identity(start_time)
if error: if error:
raise FetchError( raise FetchError(
'429', reason=response.reason, ip=ip, '429', reason=response.reason, ip=ip,
error_message='Automatic circuit change: ' + error) error_message='Automatic circuit change: ' + error)
else: continue # retry with new identity
continue # retry now that we have new identity
elif response.status >= 400: # Check for client errors (400, 404) - don't retry these
raise FetchError(str(response.status), reason=response.reason, if response.status == 400:
ip=None) logger.error(f'Bad Request (400) - Invalid parameters or URL: {url[:100]}')
raise FetchError('400', reason='Bad Request - Invalid parameters or URL format', ip=None)
if response.status == 404:
logger.warning(f'Not Found (404): {url[:100]}')
raise FetchError('404', reason='Not Found', ip=None)
# Check for other server errors (503, 502, 504)
if response.status in (502, 503, 504):
if attempt >= max_retries - 1:
logger.error(f'Server error {response.status} after {max_retries} retries')
raise FetchError(str(response.status), reason=response.reason, ip=None)
# Exponential backoff for server errors. Non-crypto jitter.
delay = (base_delay * (2 ** attempt)) + random.uniform(0, 1)
logger.warning(f'Server error ({response.status}). Waiting {delay:.1f}s before retry {attempt + 1}/{max_retries}...')
time.sleep(delay)
continue
# Success - break out of retry loop
break break
except urllib3.exceptions.MaxRetryError as e:
# If this is the last attempt, raise the error
if attempt >= max_retries - 1:
exception_cause = e.__context__.__context__
if (isinstance(exception_cause, socks.ProxyConnectionError)
and settings.route_tor):
msg = ('Failed to connect to Tor. Check that Tor is open and '
'that your internet connection is working.\n\n'
+ str(e))
logger.error(f'Tor connection failed: {msg}')
raise FetchError('502', reason='Bad Gateway',
error_message=msg)
elif isinstance(e.__context__,
urllib3.exceptions.NewConnectionError):
msg = 'Failed to establish a connection.\n\n' + str(e)
logger.error(f'Connection failed: {msg}')
raise FetchError(
'502', reason='Bad Gateway',
error_message=msg)
else:
raise
# Wait and retry. Non-crypto jitter.
delay = (base_delay * (2 ** attempt)) + random.uniform(0, 1)
logger.warning(f'Connection error. Waiting {delay:.1f}s before retry {attempt + 1}/{max_retries}...')
time.sleep(delay)
if report_text: if report_text:
print(report_text, ' Latency:', round(response_time - start_time, 3), ' Read time:', round(read_finish - response_time,3)) logger.info(f'{report_text} - Latency: {round(response_time - start_time, 3)}s - Read time: {round(read_finish - response_time, 3)}s')
return content return content
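The retry loop above computes its wait as `base_delay * (2 ** attempt)` plus up to one second of non-cryptographic jitter, giving the documented 1s, 2s, 4s, 8s, 16s progression. A minimal sketch of just that delay calculation (the helper name is illustrative, not part of the diff):

```python
import random

def backoff_delay(attempt, base_delay=1.0, max_jitter=1.0):
    # Exponential backoff: base_delay * 2**attempt, plus jitter in
    # [0, max_jitter] so concurrent clients don't retry in lockstep.
    return (base_delay * (2 ** attempt)) + random.uniform(0, max_jitter)

# Attempts 0..4 give base delays of 1, 2, 4, 8, 16 seconds before jitter.
delays = [backoff_delay(a) for a in range(5)]
```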
@@ -462,21 +540,31 @@ class RateLimitedQueue(gevent.queue.Queue):
def download_thumbnail(save_directory, video_id): def download_thumbnail(save_directory, video_id):
url = f"https://i.ytimg.com/vi/{video_id}/hqdefault.jpg" save_location = os.path.join(save_directory, video_id + '.jpg')
save_location = os.path.join(save_directory, video_id + ".jpg") for quality in ('hq720.jpg', 'sddefault.jpg', 'hqdefault.jpg'):
url = f'https://i.ytimg.com/vi/{video_id}/{quality}'
try: try:
thumbnail = fetch_url(url, report_text="Saved thumbnail: " + video_id) thumbnail = fetch_url(url, report_text='Saved thumbnail: ' + video_id)
except FetchError as e:
if '404' in str(e):
continue
print('Failed to download thumbnail for ' + video_id + ': ' + str(e))
return False
except urllib.error.HTTPError as e: except urllib.error.HTTPError as e:
print("Failed to download thumbnail for " + video_id + ": " + str(e)) if e.code == 404:
continue
print('Failed to download thumbnail for ' + video_id + ': ' + str(e))
return False return False
try: try:
f = open(save_location, 'wb') with open(save_location, 'wb') as f:
f.write(thumbnail)
except FileNotFoundError: except FileNotFoundError:
os.makedirs(save_directory, exist_ok=True) os.makedirs(save_directory, exist_ok=True)
f = open(save_location, 'wb') with open(save_location, 'wb') as f:
f.write(thumbnail) f.write(thumbnail)
f.close()
return True return True
print('No thumbnail available for ' + video_id)
return False
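The rewritten `download_thumbnail` walks a quality ladder and skips to the next candidate on 404. A minimal sketch of that fallback pattern, with a stubbed fetcher standing in for `fetch_url` (the stub and its `KeyError`-as-404 convention are assumptions for the example, not project API):

```python
def first_available(video_id, fetch,
                    qualities=('hq720.jpg', 'sddefault.jpg', 'hqdefault.jpg')):
    """Return (quality, data) for the first thumbnail that fetches,
    or None if every candidate is missing. A KeyError from `fetch`
    plays the role of the 404 handling in the diff above."""
    for quality in qualities:
        url = f'https://i.ytimg.com/vi/{video_id}/{quality}'
        try:
            return quality, fetch(url)
        except KeyError:  # hypothetical stand-in for a 404 response
            continue
    return None

# Stub: only hqdefault exists for this video.
available = {'https://i.ytimg.com/vi/abc123/hqdefault.jpg': b'jpeg-bytes'}
result = first_available('abc123', lambda url: available[url])
```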
def download_thumbnails(save_directory, ids): def download_thumbnails(save_directory, ids):
@@ -502,9 +590,40 @@ def video_id(url):
return urllib.parse.parse_qs(url_parts.query)['v'][0] return urllib.parse.parse_qs(url_parts.query)['v'][0]
# default, sddefault, mqdefault, hqdefault, hq720 def get_thumbnail_url(video_id, quality='hq720'):
def get_thumbnail_url(video_id): """Get thumbnail URL with fallback to lower quality if needed.
return f"{settings.img_prefix}https://i.ytimg.com/vi/{video_id}/hqdefault.jpg"
Args:
video_id: YouTube video ID
quality: Preferred quality ('maxres', 'hq720', 'sd', 'hq', 'mq', 'default')
Returns:
Tuple of (best_available_url, quality_used)
"""
# Quality priority order (highest to lowest)
quality_order = {
'maxres': ['maxresdefault.jpg', 'sddefault.jpg', 'hqdefault.jpg'],
'hq720': ['hq720.jpg', 'sddefault.jpg', 'hqdefault.jpg'],
'sd': ['sddefault.jpg', 'hqdefault.jpg'],
'hq': ['hqdefault.jpg', 'mqdefault.jpg'],
'mq': ['mqdefault.jpg', 'default.jpg'],
'default': ['default.jpg'],
}
qualities = quality_order.get(quality, quality_order['hq720'])
base_url = f"{settings.img_prefix}https://i.ytimg.com/vi/{video_id}/"
# For now, return the highest quality URL
# The browser will handle 404s gracefully with alt text
return base_url + qualities[0], qualities[0]
def get_best_thumbnail_url(video_id):
"""Get the best available thumbnail URL for a video.
Tries hq720 first (for HD videos), falls back to sddefault for SD videos.
"""
return get_thumbnail_url(video_id, quality='hq720')[0]
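The quality ladder in `get_thumbnail_url` can be exercised on its own: unknown quality names fall back to the `'hq720'` ladder via `dict.get`. A minimal sketch reproducing just that lookup (the standalone function name is illustrative):

```python
def thumbnail_candidates(quality):
    """Mirror of the quality ladder above: map a preferred quality to
    its ordered list of filenames, defaulting to the 'hq720' ladder."""
    quality_order = {
        'maxres': ['maxresdefault.jpg', 'sddefault.jpg', 'hqdefault.jpg'],
        'hq720': ['hq720.jpg', 'sddefault.jpg', 'hqdefault.jpg'],
        'sd': ['sddefault.jpg', 'hqdefault.jpg'],
        'hq': ['hqdefault.jpg', 'mqdefault.jpg'],
        'mq': ['mqdefault.jpg', 'default.jpg'],
        'default': ['default.jpg'],
    }
    return quality_order.get(quality, quality_order['hq720'])

best = thumbnail_candidates('maxres')[0]
fallback = thumbnail_candidates('no-such-quality')[0]
```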
def seconds_to_timestamp(seconds): def seconds_to_timestamp(seconds):
@@ -538,6 +657,12 @@ def prefix_url(url):
if url is None: if url is None:
return None return None
url = url.lstrip('/') # some urls have // before them, which has a special meaning url = url.lstrip('/') # some urls have // before them, which has a special meaning
# Increase resolution for YouTube channel avatars
if url and ('ggpht.com' in url or 'yt3.ggpht.com' in url):
# Replace size parameter with higher resolution (s240 instead of s88)
url = re.sub(r'=s\d+-c-k', '=s240-c-k-c0x00ffffff-no-rj', url)
return '/' + url return '/' + url
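The avatar-upscaling change in `prefix_url` is a single regex substitution on ggpht URLs. A minimal sketch of that substitution in isolation (the wrapper function name is illustrative; the pattern and replacement are taken from the diff):

```python
import re

def upscale_avatar(url):
    # Bump ggpht avatar size parameters like '=s88-c-k' to the
    # 240px variant, as done inside prefix_url above.
    if url and ('ggpht.com' in url or 'yt3.ggpht.com' in url):
        url = re.sub(r'=s\d+-c-k', '=s240-c-k-c0x00ffffff-no-rj', url)
    return url

out = upscale_avatar('https://yt3.ggpht.com/a/avatar=s88-c-k')
```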
@@ -720,9 +845,12 @@ INNERTUBE_CLIENTS = {
            'hl': 'en',
            'gl': 'US',
            'clientName': 'IOS',
            'clientVersion': '21.03.2',
            'deviceMake': 'Apple',
            'deviceModel': 'iPhone16,2',
            'osName': 'iPhone',
            'osVersion': '18.7.2.22H124',
            'userAgent': 'com.google.ios.youtube/21.03.2 (iPhone16,2; U; CPU iOS 18_7_2 like Mac OS X)'
        }
    },
    'INNERTUBE_CONTEXT_CLIENT_NAME': 5,
@@ -779,13 +907,31 @@ INNERTUBE_CLIENTS = {
        'INNERTUBE_CONTEXT_CLIENT_NAME': 28,
        'REQUIRE_JS_PLAYER': False,
    },
    'ios_vr': {
        'INNERTUBE_API_KEY': 'AIzaSyA8eiZmM1FaDVjRy-df2KTyQ_vz_yYM39w',
        'INNERTUBE_CONTEXT': {
            'client': {
                'hl': 'en',
                'gl': 'US',
                'clientName': 'IOS_VR',
                'clientVersion': '1.0',
                'deviceMake': 'Apple',
                'deviceModel': 'iPhone16,2',
                'osName': 'iPhone',
                'osVersion': '18.7.2.22H124',
                'userAgent': 'com.google.ios.youtube/1.0 (iPhone16,2; U; CPU iOS 18_7_2 like Mac OS X)'
            }
        },
        'INNERTUBE_CONTEXT_CLIENT_NAME': 5,
        'REQUIRE_JS_PLAYER': False
    },
}
def get_visitor_data():
    visitor_data = None
    visitor_data_cache = os.path.join(settings.data_dir, 'visitorData.txt')
    os.makedirs(settings.data_dir, exist_ok=True)
    if os.path.isfile(visitor_data_cache):
        with open(visitor_data_cache, 'r') as file:
            print('Getting visitor_data from cache')
@@ -840,6 +986,8 @@ def call_youtube_api(client, api, data):
def strip_non_ascii(string):
    ''' Returns the string without non ASCII characters'''
    if string is None:
        return ""
    stripped = (c for c in string if 0 < ord(c) < 127)
    return ''.join(stripped)


@@ -1,3 +1,3 @@
from __future__ import unicode_literals

__version__ = 'v0.5.0'


@@ -1,23 +1,25 @@
import json
import logging
import math
import os
import re
import traceback
import urllib
from math import ceil
from types import SimpleNamespace
from urllib.parse import parse_qs, urlencode

import flask
import gevent
import urllib3.exceptions
from flask import request

import youtube
from youtube import yt_app
from youtube import util, comments, local_playlist, yt_data_extract
from youtube.util import time_utc_isoformat
import settings

logger = logging.getLogger(__name__)
try:
@@ -39,73 +41,70 @@ def codec_name(vcodec):
def get_video_sources(info, target_resolution):
    '''return dict with organized sources'''
    audio_by_track = {}
    video_only_sources = {}
    uni_sources = []
    pair_sources = []

    for fmt in info['formats']:
        if not all(fmt[attr] for attr in ('ext', 'url', 'itag')):
            continue
        if fmt['acodec'] and fmt['vcodec']:
            if fmt.get('audio_track_is_default', True) is False:
                continue
            source = {'type': 'video/' + fmt['ext'],
                      'quality_string': short_video_quality_string(fmt)}
            source['quality_string'] += ' (integrated)'
            source.update(fmt)
            uni_sources.append(source)
            continue

        if not (fmt['init_range'] and fmt['index_range']):
            # Allow HLS-backed audio tracks (served locally, no init/index needed)
            url_value = fmt.get('url', '')
            if (not url_value.startswith('http://127.')
                    and '/ytl-api/' not in url_value):
                continue
            # Mark as HLS for frontend
            fmt['is_hls'] = True

        if fmt['acodec'] and not fmt['vcodec'] and (fmt['audio_bitrate'] or fmt['bitrate']):
            if fmt['bitrate']:
                # prefer this one, more accurate right now
                fmt['audio_bitrate'] = int(fmt['bitrate']/1000)
            source = {'type': 'audio/' + fmt['ext'],
                      'quality_string': audio_quality_string(fmt)}
            source.update(fmt)
            source['mime_codec'] = source['type'] + '; codecs="' + source['acodec'] + '"'
            tid = fmt.get('audio_track_id') or 'default'
            if tid not in audio_by_track:
                audio_by_track[tid] = {
                    'name': fmt.get('audio_track_name') or 'Default',
                    'is_default': fmt.get('audio_track_is_default', True),
                    'sources': [],
                }
            audio_by_track[tid]['sources'].append(source)
        elif all(fmt[attr] for attr in ('vcodec', 'quality', 'width', 'fps', 'file_size')):
            if codec_name(fmt['vcodec']) == 'unknown':
                continue
            source = {'type': 'video/' + fmt['ext'],
                      'quality_string': short_video_quality_string(fmt)}
            source.update(fmt)
            source['mime_codec'] = source['type'] + '; codecs="' + source['vcodec'] + '"'
            quality = str(fmt['quality']) + 'p' + str(fmt['fps'])
            video_only_sources.setdefault(quality, []).append(source)

    audio_tracks = []
    default_track_id = 'default'
    for tid, ti in audio_by_track.items():
        audio_tracks.append({'id': tid, 'name': ti['name'], 'is_default': ti['is_default']})
        if ti['is_default']:
            default_track_id = tid
    audio_tracks.sort(key=lambda t: (not t['is_default'], t['name']))
    default_audio = audio_by_track.get(default_track_id, {}).get('sources', [])
    default_audio.sort(key=lambda s: s['audio_bitrate'])
    uni_sources.sort(key=lambda src: src['quality'])

    webm_audios = [a for a in default_audio if a['ext'] == 'webm']
    mp4_audios = [a for a in default_audio if a['ext'] == 'mp4']

    for quality_string, sources in video_only_sources.items():
        # choose an audio source to go with it
@@ -163,11 +162,19 @@ def get_video_sources(info, target_resolution):
            break
        pair_idx = i

    audio_track_sources = {}
    for tid, ti in audio_by_track.items():
        srcs = ti['sources']
        srcs.sort(key=lambda s: s.get('audio_bitrate', 0))
        audio_track_sources[tid] = srcs

    return {
        'uni_sources': uni_sources,
        'uni_idx': uni_idx,
        'pair_sources': pair_sources,
        'pair_idx': pair_idx,
        'audio_tracks': audio_tracks,
        'audio_track_sources': audio_track_sources,
    }
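The core of the multi-audio change is the per-track bucketing: audio-only formats that share an `audio_track_id` are grouped and each bucket is sorted by bitrate. The same grouping on toy data (fake formats, not real extractor output):

```python
# Illustrative formats; only the fields the grouping touches are present.
formats = [
    {'audio_track_id': 'en', 'audio_bitrate': 128},
    {'audio_track_id': 'es', 'audio_bitrate': 64},
    {'audio_track_id': 'en', 'audio_bitrate': 48},
]

audio_by_track = {}
for fmt in formats:
    tid = fmt.get('audio_track_id') or 'default'   # missing id -> 'default' bucket
    audio_by_track.setdefault(tid, []).append(fmt)

# Each track's sources sorted lowest-bitrate first, as in get_video_sources.
for sources in audio_by_track.values():
    sources.sort(key=lambda s: s.get('audio_bitrate', 0))
```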
@@ -177,8 +184,34 @@ def make_caption_src(info, lang, auto=False, trans_lang=None):
        label += ' (Automatic)'
    if trans_lang:
        label += ' -> ' + trans_lang

    # Try to use Android caption URL directly (no PO Token needed)
    caption_url = None
    for track in info.get('_android_caption_tracks', []):
        track_lang = track.get('languageCode', '')
        track_kind = track.get('kind', '')
        if track_lang == lang and (
                (auto and track_kind == 'asr') or
                (not auto and track_kind != 'asr')):
            caption_url = track.get('baseUrl')
            break

    if caption_url:
        # Add format
        if '&fmt=' in caption_url:
            caption_url = re.sub(r'&fmt=[^&]*', '&fmt=vtt', caption_url)
        else:
            caption_url += '&fmt=vtt'
        if trans_lang:
            caption_url += '&tlang=' + trans_lang
        url = util.prefix_url(caption_url)
    else:
        # Fallback to old method
        url = util.prefix_url(yt_data_extract.get_caption_url(info, lang, 'vtt', auto, trans_lang))

    return {
        'url': url,
        'label': label,
        'srclang': trans_lang[0:2] if trans_lang else lang[0:2],
        'on': False,
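The `&fmt=vtt` rewrite has two paths: replace an existing `fmt` parameter or append one, then optionally tack on a translation language. A standalone sketch of that logic on illustrative caption URLs (the host is made up):

```python
import re

def force_vtt(caption_url, trans_lang=None):
    """Rewrite or append fmt=vtt, then add tlang, as make_caption_src does."""
    if '&fmt=' in caption_url:
        caption_url = re.sub(r'&fmt=[^&]*', '&fmt=vtt', caption_url)
    else:
        caption_url += '&fmt=vtt'
    if trans_lang:
        caption_url += '&tlang=' + trans_lang
    return caption_url

a = force_vtt('https://example.invalid/api/timedtext?lang=en&fmt=srv3')
b = force_vtt('https://example.invalid/api/timedtext?lang=en', trans_lang='de')
```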
@@ -190,7 +223,7 @@ def lang_in(lang, sequence):
    if lang is None:
        return False
    lang = lang[0:2]
    return lang in (item[0:2] for item in sequence)


def lang_eq(lang1, lang2):
@@ -206,9 +239,9 @@ def equiv_lang_in(lang, sequence):
    e.g. if lang is en, extracts en-GB from sequence.
    Necessary because if only a specific variant like en-GB is available, can't ask YouTube for simply en. Need to get the available variant.'''
    lang = lang[0:2]
    for item in sequence:
        if item[0:2] == lang:
            return item
    return None
@@ -278,7 +311,15 @@ def get_subtitle_sources(info):
        sources[-1]['on'] = True

    if len(sources) == 0:
        # Invariant: with no caption sources there should be no languages
        # either. Don't rely on `assert` which is stripped under `python -O`.
        if (len(info['automatic_caption_languages']) != 0
                or len(info['manual_caption_languages']) != 0):
            logger.warning(
                'Unexpected state: no subtitle sources but %d auto / %d manual languages',
                len(info['automatic_caption_languages']),
                len(info['manual_caption_languages']),
            )

    return sources
@@ -300,10 +341,7 @@ def get_ordered_music_list_attributes(music_list):
def save_decrypt_cache():
    os.makedirs(settings.data_dir, exist_ok=True)
    f = open(os.path.join(settings.data_dir, 'decrypt_function_cache.json'), 'w')
    f.write(json.dumps({'version': 1, 'decrypt_cache': decrypt_cache}, indent=4, sort_keys=True))
@@ -367,32 +405,168 @@ def fetch_watch_page_info(video_id, playlist_id, index):
    watch_page = watch_page.decode('utf-8')
    return yt_data_extract.extract_watch_info_from_html(watch_page)
def extract_info(video_id, use_invidious, playlist_id=None, index=None):
    primary_client = 'android_vr'
    fallback_client = 'ios'
    last_resort_client = 'tv_embedded'
    tasks = (
        # Get video metadata from here
        gevent.spawn(fetch_watch_page_info, video_id, playlist_id, index),
        gevent.spawn(fetch_player_response, primary_client, video_id)
    )
    gevent.joinall(tasks)
    util.check_gevent_exceptions(*tasks)

    info = tasks[0].value or {}
    player_response = tasks[1].value or {}

    # Save android_vr caption tracks (no PO Token needed for these URLs)
    if isinstance(player_response, str):
        try:
            pr_data = json.loads(player_response)
        except Exception:
            pr_data = {}
    else:
        pr_data = player_response or {}
    android_caption_tracks = yt_data_extract.deep_get(
        pr_data, 'captions', 'playerCaptionsTracklistRenderer',
        'captionTracks', default=[])
    info['_android_caption_tracks'] = android_caption_tracks

    # Save streamingData for multi-audio extraction
    pr_streaming_data = pr_data.get('streamingData', {})
    info['_streamingData'] = pr_streaming_data

    yt_data_extract.update_with_new_urls(info, player_response)
    # HLS manifest - try multiple clients in case one is blocked
    info['hls_manifest_url'] = None
    info['hls_audio_tracks'] = {}
    hls_data = None
    hls_client_used = None
    hls_manifest_url = None  # stays None if every client raises before assigning
    for hls_client in ('ios', 'ios_vr', 'android'):
        try:
            resp = fetch_player_response(hls_client, video_id) or {}
            hls_data = json.loads(resp) if isinstance(resp, str) else resp
            hls_manifest_url = (hls_data.get('streamingData') or {}).get('hlsManifestUrl', '')
            if hls_manifest_url:
                hls_client_used = hls_client
                break
        except Exception as e:
            print(f'HLS fetch with {hls_client} failed: {e}')

    if hls_manifest_url:
        info['hls_manifest_url'] = hls_manifest_url
        from urllib.parse import urljoin
        hls_manifest = util.fetch_url(
            hls_manifest_url,
            headers=(('User-Agent', 'Mozilla/5.0'),),
            debug_name='hls_manifest').decode('utf-8')
        # Parse EXT-X-MEDIA audio tracks from HLS manifest
        for line in hls_manifest.split('\n'):
            if '#EXT-X-MEDIA' not in line or 'TYPE=AUDIO' not in line:
                continue
            name_m = re.search(r'NAME="([^"]+)"', line)
            lang_m = re.search(r'LANGUAGE="([^"]+)"', line)
            default_m = re.search(r'DEFAULT=(YES|NO)', line)
            group_m = re.search(r'GROUP-ID="([^"]+)"', line)
            uri_m = re.search(r'URI="([^"]+)"', line)
            if not uri_m or not lang_m:
                continue
            lang = lang_m.group(1)
            is_default = default_m and default_m.group(1) == 'YES'
            group = group_m.group(1) if group_m else '0'
            key = lang
            absolute_hls_url = urljoin(hls_manifest_url, uri_m.group(1))
            if key not in info['hls_audio_tracks'] or group > info['hls_audio_tracks'][key].get('group', '0'):
                info['hls_audio_tracks'][key] = {
                    'name': name_m.group(1) if name_m else lang,
                    'lang': lang,
                    'hls_url': absolute_hls_url,
                    'group': group,
                    'is_default': is_default,
                }
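The attribute regexes above run against each `#EXT-X-MEDIA` line of the master playlist. On a synthetic sample line (modeled on YouTube's HLS manifests, not copied from one), they pull out the fields the track table needs:

```python
import re

line = ('#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="234",NAME="Spanish",'
        'LANGUAGE="es",DEFAULT=NO,URI="audio/es/index.m3u8"')

name = re.search(r'NAME="([^"]+)"', line).group(1)
lang = re.search(r'LANGUAGE="([^"]+)"', line).group(1)
default_m = re.search(r'DEFAULT=(YES|NO)', line)
is_default = bool(default_m and default_m.group(1) == 'YES')
uri = re.search(r'URI="([^"]+)"', line).group(1)  # relative; resolved via urljoin
```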
        # Register HLS audio tracks for proxy access
        added = 0
        from youtube.hls_cache import register_track
        for lang, track in info['hls_audio_tracks'].items():
            ck = video_id + '_' + lang
            register_track(ck, track['hls_url'],
                           video_id=video_id, track_id=lang)
            fmt = {
                'audio_track_id': lang,
                'audio_track_name': track['name'],
                'audio_track_is_default': track['is_default'],
                'itag': 'hls_' + lang,
                'ext': 'mp4',
                'audio_bitrate': 128,
                'bitrate': 128000,
                'acodec': 'mp4a.40.2',
                'vcodec': None,
                'width': None,
                'height': None,
                'file_size': None,
                'audio_sample_rate': 44100,
                'duration_ms': None,
                'fps': None,
                'init_range': {'start': 0, 'end': 0},
                'index_range': {'start': 0, 'end': 0},
                'url': '/ytl-api/audio-track?id=' + urllib.parse.quote(ck),
                's': None,
                'sp': None,
                'quality': None,
                'type': 'audio/mp4',
                'quality_string': track['name'],
                'mime_codec': 'audio/mp4; codecs="mp4a.40.2"',
                'is_hls': True,
            }
            info['formats'].append(fmt)
            added += 1
        if added:
            print(f"Added {added} HLS audio tracks (via {hls_client_used})")
    else:
        print("No HLS manifest available from any client")
        info['hls_manifest_url'] = None
        info['hls_audio_tracks'] = {}
        info['hls_unavailable'] = True

    # Register HLS manifest for proxying
    if info['hls_manifest_url']:
        ck = video_id + '_video'
        from youtube.hls_cache import register_track
        register_track(ck, info['hls_manifest_url'], video_id=video_id, track_id='video')
        # Use proxy URL instead of direct Google Video URL
        info['hls_manifest_url'] = '/ytl-api/hls-manifest?id=' + urllib.parse.quote(ck)
    # Fallback to 'ios' if no valid URLs are found
    if not info.get('formats') or info.get('player_urls_missing'):
        print(f"No URLs found in '{primary_client}', attempting with '{fallback_client}'.")
        try:
            player_response = fetch_player_response(fallback_client, video_id) or {}
            yt_data_extract.update_with_new_urls(info, player_response)
        except util.FetchError as e:
            print(f"Fallback '{fallback_client}' failed: {e}")

    # Final attempt with 'tv_embedded' if there are still no URLs
    if not info.get('formats') or info.get('player_urls_missing'):
        print(f"No URLs found in '{fallback_client}', attempting with '{last_resort_client}'")
        try:
            player_response = fetch_player_response(last_resort_client, video_id) or {}
            yt_data_extract.update_with_new_urls(info, player_response)
        except util.FetchError as e:
            print(f"Fallback '{last_resort_client}' failed: {e}")
    # signature decryption
    if info.get('formats'):
        decryption_error = decrypt_signatures(info, video_id)
        if decryption_error:
            info['playability_error'] = 'Error decrypting url signatures: ' + decryption_error

    # check if urls ready (non-live format) in former livestream
    # urls not ready if all of them have no filesize
@@ -406,20 +580,20 @@ def extract_info(video_id, use_invidious, playlist_id=None, index=None):
    # livestream urls
    # sometimes only the livestream urls work soon after the livestream is over
    info['hls_formats'] = []
    if info.get('hls_manifest_url') and (info.get('live') or not info.get('formats') or not info['urls_ready']):
        try:
            manifest = util.fetch_url(info['hls_manifest_url'],
                                      debug_name='hls_manifest.m3u8',
                                      report_text='Fetched hls manifest'
                                      ).decode('utf-8')
            info['hls_formats'], err = yt_data_extract.extract_hls_formats(manifest)
            if not err:
                info['playability_error'] = None
            for fmt in info['hls_formats']:
                fmt['video_quality'] = video_quality_string(fmt)
        except Exception as e:
            print(f"Error fetching HLS manifest: {e}")
            info['hls_formats'] = []
# check for 403. Unnecessary for tor video routing b/c ip address is same # check for 403. Unnecessary for tor video routing b/c ip address is same
@@ -501,6 +675,338 @@ def format_bytes(bytes):
    return '%.2f%s' % (converted, suffix)
@yt_app.route('/ytl-api/audio-track-proxy')
def audio_track_proxy():
    """Proxy for DASH audio tracks to avoid throttling."""
    audio_url = request.args.get('url', '')
    if not audio_url:
        flask.abort(400, 'Missing URL')
    try:
        headers = (
            ('User-Agent', 'Mozilla/5.0'),
            ('Accept', '*/*'),
        )
        content = util.fetch_url(audio_url, headers=headers,
                                 debug_name='audio_dash', report_text=None)
        return flask.Response(content, mimetype='audio/mp4',
                              headers={'Access-Control-Allow-Origin': '*',
                                       'Cache-Control': 'max-age=3600'})
    except Exception as e:
        flask.abort(502, f'Audio fetch failed: {e}')
@yt_app.route('/ytl-api/audio-track')
def get_audio_track():
    """Proxy HLS audio/video: playlist or individual segment."""
    from youtube.hls_cache import get_hls_url
    cache_key = request.args.get('id', '')
    seg_url = request.args.get('seg', '')
    playlist_url = request.args.get('url', '')

    # Handle playlist/manifest URL (used for audio track playlists)
    if playlist_url:
        # Unwrap if double-proxied
        if '/ytl-api/audio-track' in playlist_url:
            parsed = parse_qs(urllib.parse.urlparse(playlist_url).query)
            if 'url' in parsed:
                playlist_url = parsed['url'][0]
        try:
            playlist = util.fetch_url(playlist_url,
                                      headers=(('User-Agent', 'Mozilla/5.0'),),
                                      debug_name='audio_playlist').decode('utf-8')
            # Rewrite segment URLs
            from urllib.parse import urljoin
            base_url = request.url_root.rstrip('/')
            playlist_base = playlist_url.rsplit('/', 1)[0] + '/'
            playlist_lines = []
            for line in playlist.split('\n'):
                line = line.strip()
                if not line or line.startswith('#'):
                    playlist_lines.append(line)
                    continue
                # Resolve and proxy segment URL
                seg = line if line.startswith('http') else urljoin(playlist_base, line)
                # Always use &seg= parameter, never &url= for segments
                playlist_lines.append(
                    base_url + '/ytl-api/audio-track?id='
                    + urllib.parse.quote(cache_key)
                    + '&seg=' + urllib.parse.quote(seg, safe=''))
            playlist = '\n'.join(playlist_lines)
            return flask.Response(playlist, mimetype='application/vnd.apple.mpegurl',
                                  headers={'Access-Control-Allow-Origin': '*'})
        except Exception as e:
            traceback.print_exc()
            flask.abort(502, f'Playlist fetch failed: {e}')
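The rewrite loop above leaves tag/comment lines untouched and wraps every segment line in a `&seg=` proxy link after resolving it against the playlist URL. The same transformation on a miniature playlist (host, port, and cache key are made up for illustration):

```python
import urllib.parse
from urllib.parse import urljoin

playlist_url = 'https://example.invalid/hls/audio/index.m3u8'
cache_key = 'vid123_en'
base_url = 'http://127.0.0.1:8080'          # assumed local server root
playlist_base = playlist_url.rsplit('/', 1)[0] + '/'

out = []
for line in '#EXTM3U\n#EXTINF:5.0,\nseg0001.ts'.split('\n'):
    line = line.strip()
    if not line or line.startswith('#'):
        out.append(line)                    # tags pass through unchanged
        continue
    # Resolve relative segment URL, then wrap it in a &seg= proxy link.
    seg = line if line.startswith('http') else urljoin(playlist_base, line)
    out.append(base_url + '/ytl-api/audio-track?id='
               + urllib.parse.quote(cache_key)
               + '&seg=' + urllib.parse.quote(seg, safe=''))
```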
    # Handle individual segment or nested playlist
    if seg_url:
        # Check if seg_url is already a proxied URL
        if '/ytl-api/audio-track' in seg_url:
            parsed = parse_qs(urllib.parse.urlparse(seg_url).query)
            if 'seg' in parsed:
                seg_url = parsed['seg'][0]
            elif 'url' in parsed:
                seg_url = parsed['url'][0]
        # Check if this is a nested playlist (m3u8) that needs rewriting.
        # Playlists END with .m3u8 (optionally followed by query params);
        # segments may contain /index.m3u8/ in their path but end with .ts
        # or similar, so only treat the URL as a playlist if its path ends
        # with .m3u8 - never use an 'in' check.
        url_path = urllib.parse.urlparse(seg_url).path
        is_playlist = url_path.endswith('.m3u8')
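Why the check inspects the URL *path* rather than the whole string: a segment URL can contain `/index.m3u8/` mid-path while the actual target ends in `.ts`, and a real playlist may carry query parameters after `.m3u8`. A sketch with two sample URLs:

```python
from urllib.parse import urlparse

def looks_like_playlist(url):
    """True only when the URL path itself ends with .m3u8."""
    return urlparse(url).path.endswith('.m3u8')

# Segment whose *path* merely contains /index.m3u8/ (illustrative URL).
seg = 'https://example.invalid/hls/index.m3u8/sq/123/file/seg.ts'
# Genuine playlist URL with trailing query parameters.
pl = 'https://example.invalid/hls/audio/index.m3u8?ttl=30'
```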
        if is_playlist:
            # This is a variant playlist - fetch and rewrite it
            try:
                raw_content = util.fetch_url(seg_url,
                                             headers=(('User-Agent', 'Mozilla/5.0'),),
                                             debug_name='nested_playlist')
                # Check if this is actually binary data (segment) misidentified as playlist
                try:
                    playlist = raw_content.decode('utf-8')
                except UnicodeDecodeError:
                    is_playlist = False  # Fall through to segment handler
                if is_playlist:
                    # Rewrite segment URLs in this playlist
                    from urllib.parse import urljoin
                    base_url = request.url_root.rstrip('/')
                    playlist_base = seg_url.rsplit('/', 1)[0] + '/'

                    def proxy_url(url):
                        """Rewrite a single URL to go through the proxy"""
                        if not url or url.startswith('/ytl-api/'):
                            return url
                        if not url.startswith('http://') and not url.startswith('https://'):
                            url = urljoin(playlist_base, url)
                        return (base_url + '/ytl-api/audio-track?id='
                                + urllib.parse.quote(cache_key)
                                + '&seg=' + urllib.parse.quote(url, safe=''))

                    playlist_lines = []
                    for line in playlist.split('\n'):
                        line = line.strip()
                        if not line:
                            playlist_lines.append(line)
                            continue
                        # Handle tags with URI attributes (EXT-X-MAP, EXT-X-KEY, etc.)
                        if line.startswith('#') and 'URI=' in line:
                            line = re.sub(r'URI="([^"]+)"',
                                          lambda m: 'URI="' + proxy_url(m.group(1)) + '"',
                                          line)
                            playlist_lines.append(line)
                        elif line.startswith('#'):
                            # Other tags pass through unchanged
                            playlist_lines.append(line)
                        else:
                            # This is a segment URL line
                            seg = line if line.startswith('http') else urljoin(playlist_base, line)
                            playlist_lines.append(proxy_url(seg))
                    playlist = '\n'.join(playlist_lines)
                    return flask.Response(playlist, mimetype='application/vnd.apple.mpegurl',
                                          headers={'Access-Control-Allow-Origin': '*'})
            except Exception as e:
                traceback.print_exc()
                flask.abort(502, f'Nested playlist fetch failed: {e}')
        # This is an actual segment - fetch and serve it
        try:
            headers = (
                ('User-Agent', 'Mozilla/5.0'),
                ('Accept', '*/*'),
            )
            content = util.fetch_url(seg_url, headers=headers,
                                     debug_name='hls_seg', report_text=None)
            # Determine content type based on URL.
            # HLS segments are usually MPEG-TS (.ts) but can be MP4 (.mp4, .m4s)
            if '.mp4' in seg_url or '.m4s' in seg_url or seg_url.lower().endswith('.mp4'):
                content_type = 'video/mp4'
            elif '.webm' in seg_url or seg_url.lower().endswith('.webm'):
                content_type = 'video/webm'
            else:
                # Default to MPEG-TS for HLS
                content_type = 'video/mp2t'
            return flask.Response(content, mimetype=content_type,
                                  headers={
                                      'Access-Control-Allow-Origin': '*',
                                      'Access-Control-Allow-Methods': 'GET, OPTIONS',
                                      'Access-Control-Allow-Headers': 'Range, Content-Type',
                                      'Cache-Control': 'max-age=3600',
                                      'Content-Type': content_type,
                                  })
        except Exception as e:
            traceback.print_exc()
            flask.abort(502, f'Segment fetch failed: {e}')
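The extension sniffing above can be summarized as a small pure function (same rules: MP4-family extensions map to `video/mp4`, WebM to `video/webm`, everything else defaults to MPEG-TS):

```python
def segment_mime_type(seg_url):
    """Guess an HLS segment's MIME type from its URL, as the handler above does."""
    if '.mp4' in seg_url or '.m4s' in seg_url or seg_url.lower().endswith('.mp4'):
        return 'video/mp4'
    if '.webm' in seg_url or seg_url.lower().endswith('.webm'):
        return 'video/webm'
    return 'video/mp2t'  # MPEG-TS, the common case for HLS
```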
    # Legacy: Proxy the HLS playlist for audio tracks (using get_hls_url)
    hls_url = get_hls_url(cache_key)
    if not hls_url:
        flask.abort(404, 'Audio track not found')
    try:
        playlist = util.fetch_url(hls_url,
                                  headers=(('User-Agent', 'Mozilla/5.0'),),
                                  debug_name='audio_hls_playlist').decode('utf-8')
        # Rewrite segment URLs to go through our proxy endpoint
        from urllib.parse import urljoin
        hls_base_url = hls_url.rsplit('/', 1)[0] + '/'

        def make_proxy_url(segment_url):
            if segment_url.startswith('/ytl-api/audio-track'):
                return segment_url
            base_url = request.url_root.rstrip('/')
            return (base_url + '/ytl-api/audio-track?id='
                    + urllib.parse.quote(cache_key)
                    + '&seg=' + urllib.parse.quote(segment_url))

        playlist_lines = []
        for line in playlist.split('\n'):
            line = line.strip()
            if not line or line.startswith('#'):
                playlist_lines.append(line)
                continue
            if line.startswith('http://') or line.startswith('https://'):
                segment_url = line
            else:
                segment_url = urljoin(hls_base_url, line)
            playlist_lines.append(make_proxy_url(segment_url))
        playlist = '\n'.join(playlist_lines)
        return flask.Response(playlist, mimetype='application/vnd.apple.mpegurl',
                              headers={'Access-Control-Allow-Origin': '*'})
    except Exception as e:
        flask.abort(502, f'Playlist fetch failed: {e}')
@yt_app.route('/ytl-api/hls-manifest')
def get_hls_manifest():
    """Proxy HLS video manifest, rewriting ALL URLs including audio tracks."""
    from youtube.hls_cache import get_hls_url
    cache_key = request.args.get('id', '')
    is_audio = '_audio_' in cache_key or cache_key.endswith('_audio')
    print(f'[hls-manifest] Request: id={cache_key[:40] if cache_key else ""}... (audio={is_audio})')
    hls_url = get_hls_url(cache_key)
    print(f'[hls-manifest] HLS URL: {hls_url[:80] if hls_url else None}...')
    if not hls_url:
        flask.abort(404, 'HLS manifest not found')
    try:
        print('[hls-manifest] Fetching HLS manifest...')
        manifest = util.fetch_url(hls_url,
                                  headers=(('User-Agent', 'Mozilla/5.0'),),
                                  debug_name='hls_manifest').decode('utf-8')
        print(f'[hls-manifest] Successfully fetched manifest ({len(manifest)} bytes)')
        # Rewrite all URLs in the manifest to go through our proxy
        from urllib.parse import urljoin
        # Get the base URL for resolving relative URLs
        hls_base_url = hls_url.rsplit('/', 1)[0] + '/'
        base_url = request.url_root.rstrip('/')

        # Rewrite URLs - handle both segment URLs and audio track URIs
        def rewrite_url(url, is_audio_track=False):
            if not url or url.startswith('/ytl-api/'):
                return url
            # Resolve relative URLs
            if not url.startswith('http://') and not url.startswith('https://'):
                url = urljoin(hls_base_url, url)
            if is_audio_track:
                # Audio track playlist - proxy through audio-track endpoint
                return (base_url + '/ytl-api/audio-track?id='
                        + urllib.parse.quote(cache_key)
                        + '&url=' + urllib.parse.quote(url, safe=''))
            # Video segment or variant playlist - proxy through audio-track endpoint
            return (base_url + '/ytl-api/audio-track?id='
                    + urllib.parse.quote(cache_key)
                    + '&seg=' + urllib.parse.quote(url, safe=''))

        # Parse and rewrite the manifest
        manifest_lines = []
        rewritten_count = 0
        for line in manifest.split('\n'):
            line = line.strip()
            if not line:
                manifest_lines.append(line)
                continue
            # Handle EXT-X-MEDIA tags with URI (audio tracks)
            if line.startswith('#EXT-X-MEDIA:') and 'URI=' in line:
                # Extract and rewrite the URI attribute
                def rewrite_media_uri(match):
                    nonlocal rewritten_count
                    rewritten_count += 1
                    return 'URI="' + rewrite_url(match.group(1), is_audio_track=True) + '"'
                line = re.sub(r'URI="([^"]+)"', rewrite_media_uri, line)
                manifest_lines.append(line)
            elif line.startswith('#'):
                # Other tags pass through
                manifest_lines.append(line)
            else:
                # This is a URL (segment or variant playlist)
                if line.startswith('http://') or line.startswith('https://'):
                    url = line
                else:
                    url = urljoin(hls_base_url, line)
                rewritten_count += 1
                manifest_lines.append(rewrite_url(url))
        manifest = '\n'.join(manifest_lines)
        print(f'[hls-manifest] Rewrote manifest with {len(manifest_lines)} lines, {rewritten_count} URLs rewritten')
        return flask.Response(manifest, mimetype='application/vnd.apple.mpegurl',
                              headers={
                                  'Access-Control-Allow-Origin': '*',
                                  'Access-Control-Allow-Methods': 'GET, OPTIONS',
                                  'Access-Control-Allow-Headers': 'Range, Content-Type',
                                  'Cache-Control': 'no-cache',
                                  'Content-Type': 'application/vnd.apple.mpegurl',
                              })
    except Exception as e:
        print(f'[hls-manifest] Error: {e}')
        traceback.print_exc()
        flask.abort(502, f'Manifest fetch failed: {e}')
@yt_app.route('/ytl-api/storyboard.vtt')
def get_storyboard_vtt():
    """
@@ -520,7 +1026,8 @@ def get_storyboard_vtt():
    for i, board in enumerate(boards):
        *t, _, sigh = board.split("#")
        width, height, count, width_cnt, height_cnt, interval = map(int, t)
        if height != wanted_height:
            continue
        q['sigh'] = [sigh]
        url = f"{base_url}?{urlencode(q, doseq=True)}"
        storyboard = SimpleNamespace(
@@ -615,7 +1122,12 @@ def get_watch_page(video_id=None):
    # prefix urls, and other post-processing not handled by yt_data_extract
    for item in info['related_videos']:
        # Only set thumbnail if YouTube didn't provide one
        if not item.get('thumbnail'):
            if item.get('type') == 'playlist' and item.get('first_video_id'):
                item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['first_video_id'])
            elif item.get('type') == 'video' and item.get('id'):
                item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['id'])
        util.prefix_urls(item)
        util.add_extra_html_info(item)
    for song in info['music_list']:
@@ -623,6 +1135,9 @@ def get_watch_page(video_id=None):
    if info['playlist']:
        playlist_id = info['playlist']['id']
        for item in info['playlist']['items']:
            # Only set thumbnail if YouTube didn't provide one
            if not item.get('thumbnail') and item.get('type') == 'video' and item.get('id'):
                item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['id'])
            util.prefix_urls(item)
            util.add_extra_html_info(item)
        if playlist_id:
@@ -668,47 +1183,49 @@ def get_watch_page(video_id=None):
    if (settings.route_tor == 2) or info['tor_bypass_used']:
        target_resolution = 240
    else:
        res = settings.default_resolution
        target_resolution = 1080 if res == 'auto' else int(res)

    # Get video sources for no-JS fallback and DASH (av-merge) fallback
    video_sources = get_video_sources(info, target_resolution)
    uni_sources = video_sources['uni_sources']
    pair_sources = video_sources['pair_sources']
    pair_idx = video_sources['pair_idx']

    # Build audio tracks list from HLS
    audio_tracks = []
    hls_audio_tracks = info.get('hls_audio_tracks', {})
    hls_manifest_url = info.get('hls_manifest_url')
    if hls_audio_tracks:
        # Prefer "original" audio track
        original_lang = None
        for lang, track in hls_audio_tracks.items():
            if 'original' in (track.get('name') or '').lower():
                original_lang = lang
                break

        # Add tracks, preferring original as default
        for lang, track in hls_audio_tracks.items():
            is_default = (lang == original_lang) if original_lang else track['is_default']
            if is_default:
                audio_tracks.insert(0, {
                    'id': lang,
                    'name': track['name'],
                    'is_default': True,
                })
            else:
                audio_tracks.append({
                    'id': lang,
                    'name': track['name'],
                    'is_default': False,
                })
    else:
        # Fallback: single default audio track
        audio_tracks = [{'id': 'default', 'name': 'Default', 'is_default': True}]

    # Get video dimensions
    video_height = info.get('height') or 360
    video_width = info.get('width') or 640
@@ -755,7 +1272,14 @@ def get_watch_page(video_id=None):
        other_downloads = other_downloads,
        video_info = json.dumps(video_info),
        hls_formats = info['hls_formats'],
        hls_manifest_url = hls_manifest_url,
        audio_tracks = audio_tracks,
        subtitle_sources = subtitle_sources,
        uni_sources = uni_sources,
        pair_sources = pair_sources,
        pair_idx = pair_idx,
        hls_unavailable = info.get('hls_unavailable', False),
        playback_mode = settings.playback_mode,
        related = info['related_videos'],
        playlist = info['playlist'],
        music_list = info['music_list'],
@@ -792,24 +1316,33 @@ def get_watch_page(video_id=None):
            'video_duration': info['duration'],
            'settings': settings.current_settings_dict,
            'has_manual_captions': any(s.get('on') for s in subtitle_sources),
            'audio_tracks': audio_tracks,
            'hls_manifest_url': hls_manifest_url,
            'time_start': time_start,
            'playlist': info['playlist'],
            'related': info['related_videos'],
            'playability_error': info['playability_error'],
            'hls_unavailable': info.get('hls_unavailable', False),
            'pair_sources': pair_sources,
            'pair_idx': pair_idx,
            'uni_sources': uni_sources,
            'uni_idx': video_sources['uni_idx'],
            'using_pair_sources': bool(pair_sources),
        },
        font_family = youtube.font_choices[settings.font], # for embed page
    )
@yt_app.route('/api/<path:dummy>')
def get_captions(dummy):
    url = 'https://www.youtube.com' + request.full_path
    try:
        result = util.fetch_url(url, headers=util.mobile_ua)
        result = result.replace(b"align:start position:0%", b"")
        return flask.Response(result, mimetype='text/vtt')
    except Exception as e:
        logger.debug(f'Caption fetch failed: {e}')
        return flask.Response(b'WEBVTT\n\n', mimetype='text/vtt', status=200)

times_reg = re.compile(r'^\d\d:\d\d:\d\d\.\d\d\d --> \d\d:\d\d:\d\d\.\d\d\d.*$')
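The `times_reg` pattern above matches WEBVTT cue-timing lines (and tolerates trailing cue settings). A quick check against sample lines:

```python
import re

# Same pattern as times_reg: HH:MM:SS.mmm --> HH:MM:SS.mmm, plus anything after.
times_reg = re.compile(r'^\d\d:\d\d:\d\d\.\d\d\d --> \d\d:\d\d:\d\d\.\d\d\d.*$')

good = '00:00:01.000 --> 00:00:04.000 align:middle'
bad = 'WEBVTT'
```

Only the timing line matches; headers and cue text do not.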


@@ -10,4 +10,4 @@ from .watch_extraction import (extract_watch_info, get_caption_url,
    update_with_new_urls, requires_decryption,
    extract_decryption_function, decrypt_signatures, _formats,
    update_format_with_type_info, extract_hls_formats,
    extract_watch_info_from_html, captions_available, parse_format)


@@ -226,6 +226,190 @@ def check_missing_keys(object, *key_sequences):
    return None
def extract_lockup_view_model_info(item, additional_info={}):
    """Extract info from new lockupViewModel format (YouTube 2024+)"""
    info = {'error': None}
    content_type = item.get('contentType', '')
    content_id = item.get('contentId', '')

    # Extract title from metadata
    metadata = item.get('metadata', {})
    lockup_metadata = metadata.get('lockupMetadataViewModel', {})
    title_data = lockup_metadata.get('title', {})
    info['title'] = title_data.get('content', '')

    # Determine type based on contentType
    if 'PLAYLIST' in content_type or 'PODCAST' in content_type:
        info['type'] = 'playlist'
        info['playlist_type'] = 'playlist'
        info['id'] = content_id
        info['video_count'] = None
        info['first_video_id'] = None
        # Try to get video count from metadata
        metadata_rows = lockup_metadata.get('metadata', {})
        for row in metadata_rows.get('contentMetadataViewModel', {}).get('metadataRows', []):
            for part in row.get('metadataParts', []):
                text = part.get('text', {}).get('content', '')
                if 'video' in text.lower() or 'episode' in text.lower():
                    info['video_count'] = extract_int(text)
    elif 'VIDEO' in content_type:
        info['type'] = 'video'
        info['id'] = content_id
        info['view_count'] = None
        info['approx_view_count'] = None
        info['time_published'] = None
        info['duration'] = None
        # Extract duration/other info from metadata rows
        metadata_rows = lockup_metadata.get('metadata', {})
        for row in metadata_rows.get('contentMetadataViewModel', {}).get('metadataRows', []):
            for part in row.get('metadataParts', []):
                text = part.get('text', {}).get('content', '')
                if 'view' in text.lower():
                    info['approx_view_count'] = extract_approx_int(text)
                elif 'ago' in text.lower():
                    info['time_published'] = text
    elif 'CHANNEL' in content_type:
        info['type'] = 'channel'
        info['id'] = content_id
        info['approx_subscriber_count'] = None
        info['video_count'] = None
        # Extract subscriber count and video count from metadata rows
        metadata_rows = lockup_metadata.get('metadata', {})
        for row in metadata_rows.get('contentMetadataViewModel', {}).get('metadataRows', []):
            for part in row.get('metadataParts', []):
                text = part.get('text', {}).get('content', '')
                if 'subscriber' in text.lower():
                    info['approx_subscriber_count'] = extract_approx_int(text)
                elif 'video' in text.lower():
                    info['video_count'] = extract_int(text)
    else:
        info['type'] = 'unsupported'
        return info

    # Extract thumbnail from contentImage
    content_image = item.get('contentImage', {})
    info['thumbnail'] = normalize_url(multi_deep_get(content_image,
        # playlists with collection thumbnail
        ['collectionThumbnailViewModel', 'primaryThumbnail', 'thumbnailViewModel', 'image', 'sources', 0, 'url'],
        # single thumbnail (some playlists, videos)
        ['thumbnailViewModel', 'image', 'sources', 0, 'url'],
    )) or ''

    # Extract video/episode count from thumbnail overlay badges
    # (podcasts and some playlists put the count here instead of metadata rows)
    thumb_vm = multi_deep_get(content_image,
        ['collectionThumbnailViewModel', 'primaryThumbnail', 'thumbnailViewModel'],
        ['thumbnailViewModel'],
    ) or {}
    for overlay in thumb_vm.get('overlays', []):
        for badge in deep_get(overlay, 'thumbnailOverlayBadgeViewModel', 'thumbnailBadges', default=[]):
            badge_text = deep_get(badge, 'thumbnailBadgeViewModel', 'text', default='')
            if badge_text and not info.get('video_count'):
                conservative_update(info, 'video_count', extract_int(badge_text))

    # Extract author info if available
    info['author'] = None
    info['author_id'] = None
    info['author_url'] = None
    info['description'] = None
    info['badges'] = []

    # Try to get first video ID from inline player data
    item_playback = item.get('itemPlayback', {})
    inline_player = item_playback.get('inlinePlayerData', {})
    on_select = inline_player.get('onSelect', {})
    innertube_cmd = on_select.get('innertubeCommand', {})
    watch_endpoint = innertube_cmd.get('watchEndpoint', {})
    if watch_endpoint.get('videoId'):
        info['first_video_id'] = watch_endpoint.get('videoId')

    info.update(additional_info)
    return info
def extract_shorts_lockup_view_model_info(item, additional_info={}):
    """Extract info from shortsLockupViewModel format (YouTube Shorts)"""
    info = {'error': None, 'type': 'video'}

    # Video ID from reelWatchEndpoint or entityId
    info['id'] = deep_get(item,
        'onTap', 'innertubeCommand', 'reelWatchEndpoint', 'videoId')
    if not info['id']:
        entity_id = item.get('entityId', '')
        if entity_id.startswith('shorts-shelf-item-'):
            info['id'] = entity_id[len('shorts-shelf-item-'):]

    # Thumbnail
    info['thumbnail'] = normalize_url(deep_get(item,
        'onTap', 'innertubeCommand', 'reelWatchEndpoint',
        'thumbnail', 'thumbnails', 0, 'url'))

    # Parse title and views from accessibilityText
    # Format: "Title, N views - play Short"
    acc_text = item.get('accessibilityText', '')
    info['title'] = ''
    info['view_count'] = None
    info['approx_view_count'] = None
    if acc_text:
        # Remove trailing " - play Short"
        cleaned = re.sub(r'\s*-\s*play Short$', '', acc_text)
        # Split on last comma+views pattern to separate title from view count
        match = re.match(r'^(.*?),\s*([\d,.]+\s*(?:thousand|million|billion|)\s*views?)$',
                         cleaned, re.IGNORECASE)
        if match:
            info['title'] = match.group(1).strip()
            view_text = match.group(2)
            info['view_count'] = extract_int(view_text)
            # Convert "7.1 thousand" -> "7.1 K" for display
            suffix_map = {'thousand': 'K', 'million': 'M', 'billion': 'B'}
            suffix_match = re.search(r'([\d,.]+)\s*(thousand|million|billion)?', view_text, re.IGNORECASE)
            if suffix_match:
                num = suffix_match.group(1)
                word = suffix_match.group(2)
                if word:
                    info['approx_view_count'] = num + ' ' + suffix_map[word.lower()]
                else:
                    info['approx_view_count'] = '{:,}'.format(int(num.replace(',', ''))) if num.isdigit() or num.replace(',', '').isdigit() else num
            else:
                info['approx_view_count'] = extract_approx_int(view_text)
        else:
            # Fallback: try "N views" at end
            match2 = re.match(r'^(.*?),\s*(.+views?)$', cleaned, re.IGNORECASE)
            if match2:
                info['title'] = match2.group(1).strip()
                info['approx_view_count'] = extract_approx_int(match2.group(2))
            else:
                info['title'] = cleaned

    # Overlay text (usually has the title too)
    overlay_metadata = deep_get(item, 'overlayMetadata',
        'secondaryText', 'content')
    if overlay_metadata and not info['approx_view_count']:
        info['approx_view_count'] = extract_approx_int(overlay_metadata)
    primary_text = deep_get(item, 'overlayMetadata',
        'primaryText', 'content')
    if primary_text and not info['title']:
        info['title'] = primary_text

    info['duration'] = ''
    info['time_published'] = None
    info['description'] = None
    info['badges'] = []
    info['author'] = None
    info['author_id'] = None
    info['author_url'] = None
    info['index'] = None

    info.update(additional_info)
    return info
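The accessibilityText parsing above can be walked through on a made-up input matching the assumed "Title, N views - play Short" shape:

```python
import re

# Invented sample in the "Title, N views - play Short" format.
acc_text = 'Funny cats compilation, 7.1 thousand views - play Short'

# Strip the trailing " - play Short", then split title from view count.
cleaned = re.sub(r'\s*-\s*play Short$', '', acc_text)
match = re.match(r'^(.*?),\s*([\d,.]+\s*(?:thousand|million|billion|)\s*views?)$',
                 cleaned, re.IGNORECASE)
title = match.group(1).strip()
view_text = match.group(2)

# "7.1 thousand" -> "7.1 K", mirroring the suffix_map conversion above.
suffix_map = {'thousand': 'K', 'million': 'M', 'billion': 'B'}
m = re.search(r'([\d,.]+)\s*(thousand|million|billion)?', view_text, re.IGNORECASE)
if m.group(2):
    approx = m.group(1) + ' ' + suffix_map[m.group(2).lower()]
else:
    approx = m.group(1)
```

The lazy `(.*?)` keeps commas inside the title intact, since the view-count group is anchored to the end of the string.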
def extract_item_info(item, additional_info={}):
    if not item:
        return {'error': 'No item given'}
@@ -243,6 +427,14 @@ def extract_item_info(item, additional_info={}):
        info['type'] = 'unsupported'
        return info

    # Handle new lockupViewModel format (YouTube 2024+)
    if type == 'lockupViewModel':
        return extract_lockup_view_model_info(item, additional_info)
    # Handle shortsLockupViewModel format (YouTube Shorts)
    if type == 'shortsLockupViewModel':
        return extract_shorts_lockup_view_model_info(item, additional_info)
    # type looks like e.g. 'compactVideoRenderer' or 'gridVideoRenderer'
    # camelCase split, https://stackoverflow.com/a/37697078
    type_parts = [s.lower() for s in re.sub(r'([A-Z][a-z]+)', r' \1', type).split()]
@@ -282,9 +474,9 @@ def extract_item_info(item, additional_info={}):
        ['detailedMetadataSnippets', 0, 'snippetText'],
    ))
    info['thumbnail'] = normalize_url(multi_deep_get(item,
        ['thumbnail', 'thumbnails', -1, 'url'], # videos (highest quality)
        ['thumbnails', 0, 'thumbnails', -1, 'url'], # playlists
        ['thumbnailRenderer', 'showCustomThumbnailRenderer', 'thumbnail', 'thumbnails', -1, 'url'], # shows
    ))

    info['badges'] = []
@@ -376,6 +568,13 @@ def extract_item_info(item, additional_info={}):
    elif primary_type == 'channel':
        info['id'] = item.get('channelId')
        info['approx_subscriber_count'] = extract_approx_int(item.get('subscriberCountText'))
        # YouTube sometimes puts the handle (@name) in subscriberCountText
        # instead of the actual count. Fall back to accessibility data.
        if not info['approx_subscriber_count']:
            acc_label = deep_get(item, 'subscriberCountText',
                'accessibility', 'accessibilityData', 'label', default='')
            if 'subscriber' in acc_label.lower():
                info['approx_subscriber_count'] = extract_approx_int(acc_label)
    elif primary_type == 'show':
        info['id'] = deep_get(item, 'navigationEndpoint', 'watchEndpoint', 'playlistId')
        info['first_video_id'] = deep_get(item, 'navigationEndpoint',
@@ -441,6 +640,10 @@ _item_types = {
    'channelRenderer',
    'compactChannelRenderer',
    'gridChannelRenderer',

    # New viewModel format (YouTube 2024+)
    'lockupViewModel',
    'shortsLockupViewModel',
}

def _traverse_browse_renderer(renderer):


@@ -218,18 +218,28 @@ def extract_playlist_metadata(polymer_json):
        return {'error': err}
    metadata = {'error': None}

    metadata['title'] = None
    metadata['first_video_id'] = None
    metadata['thumbnail'] = None
    metadata['video_count'] = None
    metadata['description'] = ''
    metadata['author'] = None
    metadata['author_id'] = None
    metadata['author_url'] = None
    metadata['view_count'] = None
    metadata['like_count'] = None
    metadata['time_published'] = None

    header = deep_get(response, 'header', 'playlistHeaderRenderer', default={})
    if header:
        # Classic playlistHeaderRenderer format
        metadata['title'] = extract_str(header.get('title'))
        metadata['first_video_id'] = deep_get(header, 'playEndpoint', 'watchEndpoint', 'videoId')
        first_id = re.search(r'([a-z_\-]{11})', deep_get(header,
            'thumbnail', 'thumbnails', 0, 'url', default=''))
        if first_id:
            conservative_update(metadata, 'first_video_id', first_id.group(1))

        metadata['video_count'] = extract_int(header.get('numVideosText'))
        metadata['description'] = extract_str(header.get('descriptionText'), default='')
@@ -237,20 +247,70 @@ def extract_playlist_metadata(polymer_json):
        metadata['author_id'] = multi_deep_get(header,
            ['ownerText', 'runs', 0, 'navigationEndpoint', 'browseEndpoint', 'browseId'],
            ['ownerEndpoint', 'browseEndpoint', 'browseId'])

        metadata['view_count'] = extract_int(header.get('viewCountText'))
        metadata['like_count'] = extract_int(header.get('likesCountWithoutLikeText'))
        for stat in header.get('stats', ()):
            text = extract_str(stat)
            if 'videos' in text or 'episodes' in text:
                conservative_update(metadata, 'video_count', extract_int(text))
            elif 'views' in text:
                conservative_update(metadata, 'view_count', extract_int(text))
            elif 'updated' in text:
                metadata['time_published'] = extract_date(text)
    else:
        # New pageHeaderRenderer format (YouTube 2024+)
        page_header = deep_get(response, 'header', 'pageHeaderRenderer', default={})
        metadata['title'] = page_header.get('pageTitle')

        view_model = deep_get(page_header, 'content', 'pageHeaderViewModel', default={})
        # Extract title from viewModel if not found
        if not metadata['title']:
            metadata['title'] = deep_get(view_model,
                'title', 'dynamicTextViewModel', 'text', 'content')

        # Extract metadata from rows (author, video count, views, etc.)
        meta_rows = deep_get(view_model,
            'metadata', 'contentMetadataViewModel', 'metadataRows', default=[])
        for row in meta_rows:
            for part in row.get('metadataParts', []):
                text_content = deep_get(part, 'text', 'content', default='')
                # Author from avatarStack
                avatar_stack = deep_get(part, 'avatarStack', 'avatarStackViewModel', default={})
                if avatar_stack:
                    author_text = deep_get(avatar_stack, 'text', 'content')
                    if author_text:
                        metadata['author'] = author_text
                    # Extract author_id from commandRuns
                    for run in deep_get(avatar_stack, 'text', 'commandRuns', default=[]):
                        browse_id = deep_get(run, 'onTap', 'innertubeCommand',
                            'browseEndpoint', 'browseId')
                        if browse_id:
                            metadata['author_id'] = browse_id
                # Video/episode count
                if text_content and ('video' in text_content.lower() or 'episode' in text_content.lower()):
                    conservative_update(metadata, 'video_count', extract_int(text_content))
                # View count
                elif text_content and 'view' in text_content.lower():
                    conservative_update(metadata, 'view_count', extract_int(text_content))
                # Last updated
                elif text_content and 'updated' in text_content.lower():
                    metadata['time_published'] = extract_date(text_content)

        # Extract description from sidebar if available
        sidebar = deep_get(response, 'sidebar', 'playlistSidebarRenderer', 'items', default=[])
        for sidebar_item in sidebar:
            desc = deep_get(sidebar_item, 'playlistSidebarPrimaryInfoRenderer',
                'description', 'simpleText')
            if desc:
                metadata['description'] = desc

    if metadata['author_id']:
        metadata['author_url'] = 'https://www.youtube.com/channel/' + metadata['author_id']
    if metadata['first_video_id'] is None:
        metadata['thumbnail'] = None
    else:
        metadata['thumbnail'] = f"https://i.ytimg.com/vi/{metadata['first_video_id']}/hqdefault.jpg"
    microformat = deep_get(response, 'microformat', 'microformatDataRenderer',
        default={})


@@ -473,13 +473,22 @@ def _extract_formats(info, player_response):
        itag = yt_fmt.get('itag')

        # Translated audio track
        # Keep non-default tracks for multi-audio support
        # (they will be served via local proxy)
        fmt = {}
        # Audio track info
        audio_track = yt_fmt.get('audioTrack')
        if audio_track:
            fmt['audio_track_id'] = audio_track.get('id')
            fmt['audio_track_name'] = audio_track.get('displayName')
            fmt['audio_track_is_default'] = audio_track.get('audioIsDefault', True)
        else:
            fmt['audio_track_id'] = None
            fmt['audio_track_name'] = None
            fmt['audio_track_is_default'] = True

        fmt['itag'] = itag
        fmt['ext'] = None
        fmt['audio_bitrate'] = None
@@ -532,6 +541,61 @@ def _extract_formats(info, player_response):
    else:
        info['ip_address'] = None
def parse_format(yt_fmt):
    '''Parse a single YouTube format dict into our internal format dict.'''
    itag = yt_fmt.get('itag')

    fmt = {}
    audio_track = yt_fmt.get('audioTrack')
    if audio_track:
        fmt['audio_track_id'] = audio_track.get('id')
        fmt['audio_track_name'] = audio_track.get('displayName')
        fmt['audio_track_is_default'] = audio_track.get('audioIsDefault', True)
    else:
        fmt['audio_track_id'] = None
        fmt['audio_track_name'] = None
        fmt['audio_track_is_default'] = True

    fmt['itag'] = itag
    fmt['ext'] = None
    fmt['audio_bitrate'] = None
    fmt['bitrate'] = yt_fmt.get('bitrate')
    fmt['acodec'] = None
    fmt['vcodec'] = None
    fmt['width'] = yt_fmt.get('width')
    fmt['height'] = yt_fmt.get('height')
    fmt['file_size'] = extract_int(yt_fmt.get('contentLength'))
    fmt['audio_sample_rate'] = extract_int(yt_fmt.get('audioSampleRate'))
    fmt['duration_ms'] = yt_fmt.get('approxDurationMs')
    fmt['fps'] = yt_fmt.get('fps')
    fmt['init_range'] = yt_fmt.get('initRange')
    fmt['index_range'] = yt_fmt.get('indexRange')
    for key in ('init_range', 'index_range'):
        if fmt[key]:
            fmt[key]['start'] = int(fmt[key]['start'])
            fmt[key]['end'] = int(fmt[key]['end'])
    update_format_with_type_info(fmt, yt_fmt)

    cipher = dict(urllib.parse.parse_qsl(multi_get(yt_fmt,
        'cipher', 'signatureCipher', default='')))
    if cipher:
        fmt['url'] = cipher.get('url')
    else:
        fmt['url'] = yt_fmt.get('url')
    fmt['s'] = cipher.get('s')
    fmt['sp'] = cipher.get('sp')

    hardcoded_itag_info = _formats.get(str(itag), {})
    for key, value in hardcoded_itag_info.items():
        conservative_update(fmt, key, value)
    fmt['quality'] = hardcoded_itag_info.get('height')
    conservative_update(fmt, 'quality',
        extract_int(yt_fmt.get('quality'), whole_word=False))
    conservative_update(fmt, 'quality',
        extract_int(yt_fmt.get('qualityLabel'), whole_word=False))

    return fmt
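The cipher handling inside `parse_format` boils down to parsing a query string. The `signatureCipher` value below is fabricated, but has the real field layout: the target URL plus the scrambled signature (`s`) and the parameter name to attach it under (`sp`).

```python
import urllib.parse

# Fake signatureCipher value with the assumed field layout.
signature_cipher = 's=ABCDEF&sp=sig&url=https%3A%2F%2Fexample.com%2Fvideoplayback%3Fid%3D123'

cipher = dict(urllib.parse.parse_qsl(signature_cipher))
url = cipher.get('url')   # parse_qsl percent-decodes the embedded URL
s = cipher.get('s')       # scrambled signature, to be decrypted later
sp = cipher.get('sp')     # query parameter name for the decrypted signature
```

When the format dict carries a plain `url` instead, `cipher` is empty and `s`/`sp` come back `None`, which is why `parse_format` falls back to `yt_fmt.get('url')`.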
hls_regex = re.compile(r'[\w_-]+=(?:"[^"]+"|[^",]+),')

def extract_hls_formats(hls_manifest):
    '''returns hls_formats, err'''
@@ -628,6 +692,7 @@ def extract_watch_info(polymer_json):
    info['manual_caption_languages'] = []
    info['_manual_caption_language_names'] = {} # language name written in that language, needed in some cases to create the url
    info['translation_languages'] = []
    info['_caption_track_urls'] = {} # lang_code -> full baseUrl from player response
    captions_info = player_response.get('captions', {})
    info['_captions_base_url'] = normalize_url(deep_get(captions_info, 'playerCaptionsRenderer', 'baseUrl'))
    # Sometimes the above playerCaptionsRenderer is randomly missing
@@ -658,6 +723,10 @@ def extract_watch_info(polymer_json):
        else:
            info['manual_caption_languages'].append(lang_code)
        base_url = caption_track.get('baseUrl', '')
        # Store the full URL from the player response (includes valid tokens)
        if base_url:
            normalized = normalize_url(base_url) if base_url.startswith('/') or not base_url.startswith('http') else base_url
            info['_caption_track_urls'][lang_code + ('_asr' if caption_track.get('kind') == 'asr' else '')] = normalized
        lang_name = deep_get(urllib.parse.parse_qs(urllib.parse.urlparse(base_url).query), 'name', 0)
        if lang_name:
            info['_manual_caption_language_names'][lang_code] = lang_name
@@ -825,6 +894,21 @@ def captions_available(info):
def get_caption_url(info, language, format, automatic=False, translation_language=None):
    '''Gets the url for captions with the given language and format. If automatic is True, get the automatic captions for that language. If translation_language is given, translate the captions from `language` to `translation_language`. If automatic is true and translation_language is given, the automatic captions will be translated.'''
    # Try to use the direct URL from the player response first (has valid tokens)
    track_key = language + ('_asr' if automatic else '')
    direct_url = info.get('_caption_track_urls', {}).get(track_key)
    if direct_url:
        url = direct_url
        # Override format
        if '&fmt=' in url:
            url = re.sub(r'&fmt=[^&]*', '&fmt=' + format, url)
        else:
            url += '&fmt=' + format
        if translation_language:
            url += '&tlang=' + translation_language
        return url

    # Fallback to base_url construction
    url = info['_captions_base_url']
    if not url:
        return None