16 Commits

Author SHA1 Message Date
1aa344c7b0 bump to v0.4.3
All checks were successful
git-sync-with-mirror / git-sync (push) Successful in 13s
CI / test (push) Successful in 46s
2026-03-28 16:09:23 -05:00
fa7273b328 fix: race condition in os.makedirs causing worker crashes
All checks were successful
git-sync-with-mirror / git-sync (push) Successful in 13s
CI / test (push) Successful in 47s
Replace check-then-create pattern with exist_ok=True to prevent
FileExistsError when multiple workers initialize simultaneously.

Affects:
- subscriptions.py: open_database()
- watch.py: save_decrypt_cache()
- local_playlist.py: add_to_playlist()
- util.py: fetch_url(), get_visitor_data()
- settings.py: initialization

Fixes Gunicorn worker startup failures in multi-worker deployments.
2026-03-28 16:06:47 -05:00
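The check-then-create race described above can be sketched as follows (illustrative only; `ensure_dir` is a hypothetical stand-in for the listed call sites in subscriptions.py, watch.py, etc.):

```python
import os
import tempfile

def ensure_dir(path):
    # Old racy pattern (removed by this commit):
    #     if not os.path.exists(path):
    #         os.makedirs(path)
    # Between the exists() check and makedirs(), a second Gunicorn worker
    # can win the race, so makedirs() raises FileExistsError in the first.
    # exist_ok=True makes the call idempotent: every worker succeeds.
    os.makedirs(path, exist_ok=True)

tmp = tempfile.mkdtemp()
target = os.path.join(tmp, 'data', 'cache')
ensure_dir(target)
ensure_dir(target)  # simulates a second worker: no FileExistsError
```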
a0d10e6a00 docs: remove duplicate FreeTube entry in README
All checks were successful
git-sync-with-mirror / git-sync (push) Successful in 13s
CI / test (push) Successful in 44s
2026-03-27 21:29:46 -05:00
a46cfda029 bump to v0.4.2
All checks were successful
git-sync-with-mirror / git-sync (push) Successful in 12s
CI / test (push) Successful in 46s
2026-03-27 21:26:08 -05:00
e03f40d728 fix error handling, null URLs in templates, and Radio playlist support
All checks were successful
git-sync-with-mirror / git-sync (push) Successful in 13s
CI / test (push) Successful in 49s
- Global error handler: friendly messages for 429, 502, 403, 400
  instead of raw tracebacks. Filter FetchError from Flask logger.
- Fix None URLs in templates: protect href/src in common_elements,
  playlist, watch, and comments templates against None values.
- Radio playlists (RD...): redirect /playlist?list=RD... to
  /watch?v=...&list=RD... since YouTube only supports them in player.
- Wrap player client fallbacks (ios, tv_embedded) in try/catch so
  a failed fallback doesn't crash the whole page.
2026-03-27 21:23:03 -05:00
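The fallback wrapping described in the last bullet can be sketched like this (hypothetical helper names; the real client list and fetch logic live in yt-local's watch.py):

```python
def fetch_with_fallbacks(fetch, clients=('android_vr', 'ios', 'tv_embedded')):
    # Try each player client in order; a failed fallback is swallowed and
    # the next client is tried, so one bad client no longer crashes the page.
    last_error = None
    for client in clients:
        try:
            return fetch(client)
        except Exception as e:
            last_error = e
    raise last_error  # only if every client failed

def fake_fetch(client):
    # Stand-in for the real player-response fetch: only tv_embedded works.
    if client != 'tv_embedded':
        raise RuntimeError(client + ' blocked')
    return {'client': client, 'playable': True}

result = fetch_with_fallbacks(fake_fetch)
```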
22c72aa842 remove yt-dlp, fix captions PO Token issue, fix 429 retry logic
All checks were successful
git-sync-with-mirror / git-sync (push) Successful in 13s
CI / test (push) Successful in 52s
- Remove yt-dlp entirely (modules, routes, settings, dependency)
  Was blocking page loads by running synchronously in gevent
- Fix captions: use Android client caption URLs (no PO Token needed)
  instead of web timedtext URLs that YouTube now blocks
- Fix 429 retry: fail immediately without Tor (same IP = pointless retry)
  Was causing ~27s delays with exponential backoff
- Accept ytdlp_enabled as legacy setting to avoid warning on startup
2026-03-27 20:47:44 -05:00
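The 429 retry change can be sketched as the following policy (an illustration of the idea, not the actual util.py code):

```python
import time

class RateLimited(Exception):
    pass

def fetch_with_retries(do_fetch, route_tor, max_attempts=5):
    # Without Tor, every retry comes from the same IP, so YouTube answers
    # 429 again; failing immediately avoids ~27s of exponential backoff.
    for attempt in range(max_attempts):
        try:
            return do_fetch()
        except RateLimited:
            if not route_tor:
                raise  # same IP: retrying is pointless
            # With Tor, a new identity would be requested here, then backoff:
            time.sleep(2 ** attempt)
    raise RateLimited('gave up after %d attempts' % max_attempts)

attempts = []
def always_429():
    attempts.append(1)
    raise RateLimited()

try:
    fetch_with_retries(always_429, route_tor=False)
except RateLimited:
    pass
```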
56ecd6cb1b fix: use YouTube-provided thumbnail URLs instead of hardcoded hq720.jpg
All checks were successful
git-sync-with-mirror / git-sync (push) Successful in 15s
CI / test (push) Successful in 58s
Videos without hq720.jpg thumbnails caused mass 404 errors.
Now preserves the actual thumbnail URL from YouTube's API response,
falls back to hqdefault.jpg only when no thumbnail is provided.
Also picks highest quality thumbnail from API (thumbnails[-1])
and adds progressive fallback for subscription/download functions.
2026-03-27 19:22:12 -05:00
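The selection logic can be sketched as follows (the URL shapes are YouTube's standard thumbnail names; `pick_thumbnail` itself is a hypothetical condensation of the fix):

```python
def pick_thumbnail(video_id, api_thumbnails):
    # Prefer what YouTube's API actually returned; entries are ordered by
    # quality, so the last one is the best. Only when no thumbnail is
    # provided at all do we fall back to the hqdefault.jpg naming scheme.
    if api_thumbnails:
        return api_thumbnails[-1]['url']
    return 'https://i.ytimg.com/vi/%s/hqdefault.jpg' % video_id

thumbs = [
    {'url': 'https://i.ytimg.com/vi/abc123/mqdefault.jpg', 'width': 320},
    {'url': 'https://i.ytimg.com/vi/abc123/sddefault.jpg', 'width': 640},
]
best = pick_thumbnail('abc123', thumbs)
fallback = pick_thumbnail('xyz789', [])
```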
f629565e77 bump to v0.4.1
All checks were successful
git-sync-with-mirror / git-sync (push) Successful in 13s
CI / test (push) Successful in 48s
2026-03-22 21:27:50 -05:00
1f8c13adff feat: improve 429 handling with Tor support and clean CI
All checks were successful
git-sync-with-mirror / git-sync (push) Successful in 11s
CI / test (push) Successful in 50s
- Retry with new Tor identity on 429
- Improve error logging
- Remove .build.yml and .drone.yml
2026-03-22 21:25:57 -05:00
6a68f06645 Release v0.4.0 - HD Thumbnails, YouTube 2024+ Support, and yt-dlp Integration
Some checks failed
CI / test (push) Failing after 1m19s
Major Features:
- HD video thumbnails (hq720.jpg) with automatic fallback to lower qualities
- HD channel avatars (240x240 instead of 88x88)
- YouTube 2024+ lockupViewModel support for channel playlists
- youtubei/v1/browse API integration for channel playlist tabs
- yt-dlp integration for multi-language audio and subtitles

Bug Fixes:
- Fixed undefined `abort` import in playlist.py
- Fixed undefined functions in proto.py (encode_varint, bytes_to_hex, succinct_encode)
- Fixed missing `traceback` import in proto_debug.py
- Fixed blurry playlist thumbnails using default.jpg instead of HD versions
- Fixed channel playlists page using deprecated pbj=1 format

Improvements:
- Automatic thumbnail fallback system (hq720 → sddefault → hqdefault → mqdefault → default)
- JavaScript thumbnail_fallback() handler for 404 errors
- Better thumbnail quality across all pages (watch, channel, playlist, subscriptions)
- Consistent HD avatar display for all channel items
- Settings system automatically adds new settings without breaking user config

Files Modified:
- youtube/watch.py - HD thumbnails for related videos and playlist items
- youtube/channel.py - HD thumbnails for channel playlists, youtubei API integration
- youtube/playlist.py - HD thumbnails, fixed abort import
- youtube/util.py - HD thumbnail URLs, avatar HD upgrade, prefix_url improvements
- youtube/comments.py - HD video thumbnail
- youtube/subscriptions.py - HD thumbnails, fixed abort import
- youtube/yt_data_extract/common.py - lockupViewModel support, extract_lockup_view_model_info()
- youtube/yt_data_extract/everything_else.py - HD playlist thumbnails
- youtube/proto.py - Fixed undefined function references
- youtube/proto_debug.py - Added traceback import
- youtube/static/js/common.js - thumbnail_fallback() handler
- youtube/templates/*.html - Added onerror handlers for thumbnail fallback
- youtube/version.py - Bump to v0.4.0

Technical Details:
- All thumbnail URLs now use hq720.jpg (1280x720) when available
- Fallback handled client-side via JavaScript onerror handler
- Server-side avatar upgrade via regex in util.prefix_url()
- lockupViewModel parser extracts contentType, metadata, and first_video_id
- Channel playlist tabs now use youtubei/v1/browse instead of deprecated pbj=1
- Settings version system ensures backward compatibility
2026-03-22 20:50:03 -05:00
84e1acaab8 yt-dlp 2026-03-22 14:17:23 -05:00
Jesus
ed4b05d9b6 Bump version to v0.3.2 2025-03-08 16:41:58 -05:00
Jesus
6f88b1cec6 Refactor extract_info in watch.py to improve client flexibility
Introduce primary_client, fallback_client, and last_resort_client variables for better configurability.
Replace hardcoded 'android_vr' with primary_client in fetch_player_response call.
2025-03-08 16:40:51 -05:00
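The refactor can be sketched as below (variable names taken from the commit message; `fetch_player_response` is stubbed for illustration):

```python
primary_client = 'android_vr'   # previously hardcoded at the call site
fallback_client = 'ios'
last_resort_client = 'tv_embedded'

def extract_info(fetch_player_response):
    # Walk the configurable client chain instead of a hardcoded string.
    for client in (primary_client, fallback_client, last_resort_client):
        response = fetch_player_response(client)
        if response is not None:
            return client, response
    return None, None

# Stub: pretend only the fallback client returns playable data.
used_client, info = extract_info(
    lambda c: {'playable': True} if c == 'ios' else None)
```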
Jesus
03451fb8ae fix: prevent error when closing avMerge if not a function 2025-03-08 16:39:37 -05:00
Jesus
e45c3fd48b Add error styles in player 2025-03-08 16:38:31 -05:00
Jesus
1153ac8f24 Fix NoneType inside comments.py
Bug:

Traceback (most recent call last):
  File "/home/rusian/yt-local/youtube/comments.py", line 180, in video_comments
    post_process_comments_info(comments_info)
  File "/home/rusian/yt-local/youtube/comments.py", line 81, in post_process_comments_info
    comment['author'] = strip_non_ascii(comment['author'])
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rusian/yt-local/youtube/util.py", line 843, in strip_non_ascii
    stripped = (c for c in string if 0 < ord(c) < 127)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not iterable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "src/gevent/greenlet.py", line 900, in gevent._gevent_cgreenlet.Greenlet.run
  File "/home/rusian/yt-local/youtube/comments.py", line 195, in video_comments
    comments_info['error'] = 'YouTube blocked the request. IP address: %s' % e.ip
                                                                             ^^^^
AttributeError: 'TypeError' object has no attribute 'ip'
2025-03-08T01:25:47Z <Greenlet at 0x7f251e5279c0: video_comments('hcm55lU9knw', 0, lc='')> failed with AttributeError
2025-03-08 16:37:33 -05:00
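A minimal guarded version of `strip_non_ascii`, reconstructed from the traceback above (the actual fix in util.py may differ in detail):

```python
def strip_non_ascii(string):
    # comment['author'] can be None (e.g. deleted accounts); iterating None
    # raised TypeError, which the except-handler then mistook for a
    # FetchError and crashed again on the missing e.ip attribute.
    if string is None:
        return ''
    stripped = (c for c in string if 0 < ord(c) < 127)
    return ''.join(stripped)

cleaned = strip_non_ascii('héllo')   # non-ASCII characters dropped
empty = strip_non_ascii(None)        # no longer raises TypeError
```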
36 changed files with 1520 additions and 282 deletions


@@ -1,12 +0,0 @@
image: debian/buster
packages:
- python3-pip
- virtualenv
tasks:
- test: |
cd yt-local
virtualenv -p python3 venv
source venv/bin/activate
python --version
pip install -r requirements-dev.txt
pytest


@@ -1,10 +0,0 @@
kind: pipeline
name: default
steps:
- name: test
image: python:3.7.3
commands:
- pip install --upgrade pip
- pip install -r requirements-dev.txt
- pytest

.gitignore

@@ -1,5 +1,128 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
Pipfile.lock
# PEP 582
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
*venv*
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# Project specific
debug/
data/
python/
@@ -11,5 +134,17 @@ get-pip.py
latest-dist.zip
*.7z
*.zip
*venv*
# Editor specific
flycheck_*
.vscode/
.idea/
*.swp
*.swo
*~
.DS_Store
# Temporary files
*.tmp
*.bak
*.orig

Makefile

@@ -0,0 +1,210 @@
# yt-local Makefile
# Automated tasks for development, translations, and maintenance
.PHONY: help install dev clean test i18n-extract i18n-init i18n-update i18n-compile i18n-stats i18n-clean setup-dev lint format backup restore
# Variables
PYTHON := python3
PIP := pip3
LANG_CODE ?= es
VENV_DIR := venv
PROJECT_NAME := yt-local
## Help
help: ## Show this help message
@echo "$(PROJECT_NAME) - Available tasks:"
@echo ""
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf " %-20s %s\n", $$1, $$2}'
@echo ""
@echo "Examples:"
@echo " make install # Install dependencies"
@echo " make dev # Run development server"
@echo " make i18n-extract # Extract strings for translation"
@echo " make i18n-init LANG_CODE=fr # Initialize French"
@echo " make lint # Check code style"
## Installation and Setup
install: ## Install project dependencies
@echo "[INFO] Installing dependencies..."
$(PIP) install -r requirements.txt
@echo "[SUCCESS] Dependencies installed"
setup-dev: ## Complete development setup
@echo "[INFO] Setting up development environment..."
$(PYTHON) -m venv $(VENV_DIR)
./$(VENV_DIR)/bin/pip install -r requirements.txt
@echo "[SUCCESS] Virtual environment created in $(VENV_DIR)"
@echo "[INFO] Activate with: source $(VENV_DIR)/bin/activate"
requirements: ## Update and install requirements
@echo "[INFO] Installing/updating requirements..."
$(PIP) install --upgrade pip
$(PIP) install -r requirements.txt
@echo "[SUCCESS] Requirements installed"
## Development
dev: ## Run development server
@echo "[INFO] Starting development server..."
@echo "[INFO] Server available at: http://localhost:9010"
$(PYTHON) server.py
run: dev ## Alias for dev
## Testing
test: ## Run tests
@echo "[INFO] Running tests..."
@if [ -d "tests" ]; then \
$(PYTHON) -m pytest -v; \
else \
echo "[WARN] No tests directory found"; \
fi
test-cov: ## Run tests with coverage
@echo "[INFO] Running tests with coverage..."
@if command -v pytest-cov >/dev/null 2>&1; then \
$(PYTHON) -m pytest -v --cov=$(PROJECT_NAME) --cov-report=html; \
else \
echo "[WARN] pytest-cov not installed. Run: pip install pytest-cov"; \
fi
## Internationalization (i18n)
i18n-extract: ## Extract strings for translation
@echo "[INFO] Extracting strings for translation..."
$(PYTHON) manage_translations.py extract
@echo "[SUCCESS] Strings extracted to translations/messages.pot"
i18n-init: ## Initialize new language (use LANG_CODE=xx)
@echo "[INFO] Initializing language: $(LANG_CODE)"
$(PYTHON) manage_translations.py init $(LANG_CODE)
@echo "[SUCCESS] Language $(LANG_CODE) initialized"
@echo "[INFO] Edit: translations/$(LANG_CODE)/LC_MESSAGES/messages.po"
i18n-update: ## Update existing translations
@echo "[INFO] Updating existing translations..."
$(PYTHON) manage_translations.py update
@echo "[SUCCESS] Translations updated"
i18n-compile: ## Compile translations to binary .mo files
@echo "[INFO] Compiling translations..."
$(PYTHON) manage_translations.py compile
@echo "[SUCCESS] Translations compiled"
i18n-stats: ## Show translation statistics
@echo "[INFO] Translation statistics:"
@echo ""
@for lang_dir in translations/*/; do \
if [ -d "$$lang_dir" ] && [ "$$lang_dir" != "translations/*/" ]; then \
lang=$$(basename "$$lang_dir"); \
po_file="$$lang_dir/LC_MESSAGES/messages.po"; \
if [ -f "$$po_file" ]; then \
total=$$(grep -c "^msgid " "$$po_file" 2>/dev/null || echo "0"); \
translated=$$(grep -c "^msgstr \"[^\"]\+\"" "$$po_file" 2>/dev/null || echo "0"); \
fuzzy=$$(grep -c "^#, fuzzy" "$$po_file" 2>/dev/null || echo "0"); \
if [ "$$total" -gt 0 ]; then \
percent=$$((translated * 100 / total)); \
echo " [STAT] $$lang: $$translated/$$total ($$percent%) - Fuzzy: $$fuzzy"; \
else \
echo " [STAT] $$lang: No translations yet"; \
fi; \
fi \
fi \
done
@echo ""
i18n-clean: ## Clean compiled translation files
@echo "[INFO] Cleaning compiled .mo files..."
find translations/ -name "*.mo" -delete
@echo "[SUCCESS] .mo files removed"
i18n-workflow: ## Complete workflow: extract → update → compile
@echo "[INFO] Running complete translation workflow..."
@make i18n-extract
@make i18n-update
@make i18n-compile
@make i18n-stats
@echo "[SUCCESS] Translation workflow completed"
## Code Quality
lint: ## Check code with flake8
@echo "[INFO] Checking code style..."
@if command -v flake8 >/dev/null 2>&1; then \
flake8 youtube/ --max-line-length=120 --ignore=E501,W503,E402 --exclude=youtube/ytdlp_service.py,youtube/ytdlp_integration.py,youtube/ytdlp_proxy.py; \
echo "[SUCCESS] Code style check passed"; \
else \
echo "[WARN] flake8 not installed (pip install flake8)"; \
fi
format: ## Format code with black (if available)
@echo "[INFO] Formatting code..."
@if command -v black >/dev/null 2>&1; then \
black youtube/ --line-length=120 --exclude='ytdlp_.*\.py'; \
echo "[SUCCESS] Code formatted"; \
else \
echo "[WARN] black not installed (pip install black)"; \
fi
check-deps: ## Check installed dependencies
@echo "[INFO] Checking dependencies..."
@$(PYTHON) -c "import flask_babel; print('[OK] Flask-Babel:', flask_babel.__version__)" 2>/dev/null || echo "[ERROR] Flask-Babel not installed"
@$(PYTHON) -c "import flask; print('[OK] Flask:', flask.__version__)" 2>/dev/null || echo "[ERROR] Flask not installed"
@$(PYTHON) -c "import yt_dlp; print('[OK] yt-dlp:', yt_dlp.__version__)" 2>/dev/null || echo "[ERROR] yt-dlp not installed"
## Maintenance
backup: ## Create translations backup
@echo "[INFO] Creating translations backup..."
@timestamp=$$(date +%Y%m%d_%H%M%S); \
tar -czf "translations_backup_$$timestamp.tar.gz" translations/ 2>/dev/null || echo "[WARN] No translations to backup"; \
if [ -f "translations_backup_$$timestamp.tar.gz" ]; then \
echo "[SUCCESS] Backup created: translations_backup_$$timestamp.tar.gz"; \
fi
restore: ## Restore translations from backup
@echo "[INFO] Restoring translations from backup..."
@if ls translations_backup_*.tar.gz 1>/dev/null 2>&1; then \
latest_backup=$$(ls -t translations_backup_*.tar.gz | head -1); \
tar -xzf "$$latest_backup"; \
echo "[SUCCESS] Restored from: $$latest_backup"; \
else \
echo "[ERROR] No backup files found"; \
fi
clean: ## Clean temporary files and caches
@echo "[INFO] Cleaning temporary files..."
find . -type f -name "*.pyc" -delete
find . -type d -name "__pycache__" -delete
find . -type f -name "*.mo" -delete
find . -type d -name ".pytest_cache" -delete
find . -type f -name ".coverage" -delete
find . -type d -name "htmlcov" -delete
@echo "[SUCCESS] Temporary files removed"
distclean: clean ## Clean everything including venv
@echo "[INFO] Cleaning everything..."
rm -rf $(VENV_DIR)
@echo "[SUCCESS] Complete cleanup done"
## Project Information
info: ## Show project information
@echo "[INFO] $(PROJECT_NAME) - Project information:"
@echo ""
@echo " [INFO] Directory: $$(pwd)"
@echo " [INFO] Python: $$($(PYTHON) --version)"
@echo " [INFO] Pip: $$($(PIP) --version | cut -d' ' -f1-2)"
@echo ""
@echo " [INFO] Configured languages:"
@for lang_dir in translations/*/; do \
if [ -d "$$lang_dir" ] && [ "$$lang_dir" != "translations/*/" ]; then \
lang=$$(basename "$$lang_dir"); \
echo " - $$lang"; \
fi \
done
@echo ""
@echo " [INFO] Main files:"
@echo " - babel.cfg (i18n configuration)"
@echo " - manage_translations.py (i18n CLI)"
@echo " - youtube/i18n_strings.py (centralized strings)"
@echo " - youtube/ytdlp_service.py (yt-dlp integration)"
@echo ""
# Default target
.DEFAULT_GOAL := help


@@ -173,7 +173,6 @@ This project is completely free/Libre and will always be.
 - [NewPipe](https://newpipe.schabi.org/) (app for android)
 - [mps-youtube](https://github.com/mps-youtube/mps-youtube) (terminal-only program)
 - [youtube-viewer](https://github.com/trizen/youtube-viewer)
-- [FreeTube](https://github.com/FreeTubeApp/FreeTube) (Similar to this project, but is an electron app outside the browser)
 - [smtube](https://www.smtube.org/)
 - [Minitube](https://flavio.tordini.org/minitube), [github here](https://github.com/flaviotordini/minitube)
 - [toogles](https://github.com/mikecrittenden/toogles) (only embeds videos, doesn't use mp4)

babel.cfg

@@ -0,0 +1,7 @@
[python: youtube/**.py]
keywords = lazy_gettext:1,2 _l:1,2
[python: server.py]
[python: settings.py]
[jinja2: youtube/templates/**.html]
extensions=jinja2.ext.i18n
encoding = utf-8

manage_translations.py

@@ -0,0 +1,113 @@
#!/usr/bin/env python3
"""
Translation management script for yt-local
Usage:
    python manage_translations.py extract     # Extract strings to messages.pot
    python manage_translations.py init es     # Initialize Spanish translation
    python manage_translations.py update      # Update all translations
    python manage_translations.py compile     # Compile translations to .mo files
"""
import sys
import os
import subprocess

# Ensure we use the Python from the virtual environment if available
if hasattr(sys, 'real_prefix') or (hasattr(sys, 'base_prefix') and sys.base_prefix != sys.prefix):
    # Already in venv
    pass
else:
    # Try to activate venv
    venv_path = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), 'venv')
    if os.path.exists(venv_path):
        venv_bin = os.path.join(venv_path, 'bin')
        if os.path.exists(venv_bin):
            os.environ['PATH'] = venv_bin + os.pathsep + os.environ['PATH']


def run_command(cmd):
    """Run a shell command and print output"""
    print(f"Running: {' '.join(cmd)}")
    # Use the pybabel from the same directory as our Python executable
    if cmd[0] == 'pybabel':
        import os
        pybabel_path = os.path.join(os.path.dirname(sys.executable), 'pybabel')
        if os.path.exists(pybabel_path):
            cmd = [pybabel_path] + cmd[1:]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.stdout:
        print(result.stdout)
    if result.stderr:
        print(result.stderr, file=sys.stderr)
    return result.returncode


def extract():
    """Extract translatable strings from source code"""
    print("Extracting translatable strings...")
    return run_command([
        'pybabel', 'extract',
        '-F', 'babel.cfg',
        '-k', 'lazy_gettext',
        '-k', '_l',
        '-o', 'translations/messages.pot',
        '.'
    ])


def init(language):
    """Initialize a new language translation"""
    print(f"Initializing {language} translation...")
    return run_command([
        'pybabel', 'init',
        '-i', 'translations/messages.pot',
        '-d', 'translations',
        '-l', language
    ])


def update():
    """Update existing translations with new strings"""
    print("Updating translations...")
    return run_command([
        'pybabel', 'update',
        '-i', 'translations/messages.pot',
        '-d', 'translations'
    ])


def compile_translations():
    """Compile .po files to .mo files"""
    print("Compiling translations...")
    return run_command([
        'pybabel', 'compile',
        '-d', 'translations'
    ])


def main():
    if len(sys.argv) < 2:
        print(__doc__)
        sys.exit(1)
    command = sys.argv[1]
    if command == 'extract':
        sys.exit(extract())
    elif command == 'init':
        if len(sys.argv) < 3:
            print("Error: Please specify a language code (e.g., es, fr, de)")
            sys.exit(1)
        sys.exit(init(sys.argv[2]))
    elif command == 'update':
        sys.exit(update())
    elif command == 'compile':
        sys.exit(compile_translations())
    else:
        print(f"Unknown command: {command}")
        print(__doc__)
        sys.exit(1)


if __name__ == '__main__':
    main()


@@ -1,4 +1,6 @@
Flask>=1.0.3
Flask-Babel>=4.0.0
Babel>=2.12.0
gevent>=1.2.2
Brotli>=1.0.7
PySocks>=1.6.8
@@ -6,3 +8,4 @@ urllib3>=1.24.1
defusedxml>=0.5.0
cachetools>=4.0.0
stem>=1.8.0
requests>=2.25.0


@@ -99,7 +99,6 @@ def proxy_site(env, start_response, video=False):
if response.status >= 400:
    print('Error: YouTube returned "%d %s" while routing %s' % (
        response.status, response.reason, url.split('?')[0]))
total_received = 0
retry = False
while True:
@@ -279,6 +278,16 @@ if __name__ == '__main__':
print('Starting httpserver at http://%s:%s/' %
      (ip_server, settings.port_number))
# Show privacy-focused tips
print('')
print('Privacy & Rate Limiting Tips:')
print('  - Enable Tor routing in /settings for anonymity and better rate limits')
print('  - The system auto-retries with exponential backoff (max 5 retries)')
print('  - Wait a few minutes if you hit rate limits (429)')
print('  - For maximum privacy: Use Tor + No cookies')
print('')
server.serve_forever()
# for uwsgi, gunicorn, etc.


@@ -296,6 +296,17 @@ Archive: https://archive.ph/OZQbN''',
    'category': 'interface',
}),
('language', {
    'type': str,
    'default': 'en',
    'comment': 'Interface language',
    'options': [
        ('en', 'English'),
        ('es', 'Español'),
    ],
    'category': 'interface',
}),
('embed_page_mode', {
    'type': bool,
    'label': 'Enable embed page',
@@ -339,7 +350,8 @@
program_directory = os.path.dirname(os.path.realpath(__file__))
acceptable_targets = SETTINGS_INFO.keys() | {
-    'enable_comments', 'enable_related_videos', 'preferred_video_codec'
+    'enable_comments', 'enable_related_videos', 'preferred_video_codec',
+    'ytdlp_enabled',
}
@@ -441,8 +453,7 @@ else:
print("Running in non-portable mode")
settings_dir = os.path.expanduser(os.path.normpath("~/.yt-local"))
data_dir = os.path.expanduser(os.path.normpath("~/.yt-local/data"))
-if not os.path.exists(settings_dir):
-    os.makedirs(settings_dir)
+os.makedirs(settings_dir, exist_ok=True)
settings_file_path = os.path.join(settings_dir, 'settings.txt')


@@ -0,0 +1,74 @@
# Spanish translations for yt-local.
# Copyright (C) 2026 yt-local
# This file is distributed under the same license as the yt-local project.
#
msgid ""
msgstr ""
"Project-Id-Version: PROJECT VERSION\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2026-03-22 15:05-0500\n"
"PO-Revision-Date: 2026-03-22 15:06-0500\n"
"Last-Translator: \n"
"Language: es\n"
"Language-Team: es <LL@li.org>\n"
"Plural-Forms: nplurals=2; plural=(n != 1);\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.18.0\n"
#: youtube/templates/base.html:38
msgid "Type to search..."
msgstr "Escribe para buscar..."
#: youtube/templates/base.html:39
msgid "Search"
msgstr "Buscar"
#: youtube/templates/base.html:45
msgid "Options"
msgstr "Opciones"
#: youtube/templates/base.html:47
msgid "Sort by"
msgstr "Ordenar por"
#: youtube/templates/base.html:50
msgid "Relevance"
msgstr "Relevancia"
#: youtube/templates/base.html:54 youtube/templates/base.html:65
msgid "Upload date"
msgstr "Fecha de subida"
#: youtube/templates/base.html:58
msgid "View count"
msgstr "Número de visualizaciones"
#: youtube/templates/base.html:62
msgid "Rating"
msgstr "Calificación"
#: youtube/templates/base.html:68
msgid "Any"
msgstr "Cualquiera"
#: youtube/templates/base.html:72
msgid "Last hour"
msgstr "Última hora"
#: youtube/templates/base.html:76
msgid "Today"
msgstr "Hoy"
#: youtube/templates/base.html:80
msgid "This week"
msgstr "Esta semana"
#: youtube/templates/base.html:84
msgid "This month"
msgstr "Este mes"
#: youtube/templates/base.html:88
msgid "This year"
msgstr "Este año"

translations/messages.pot

@@ -0,0 +1,75 @@
# Translations template for PROJECT.
# Copyright (C) 2026 ORGANIZATION
# This file is distributed under the same license as the PROJECT project.
# FIRST AUTHOR <EMAIL@ADDRESS>, 2026.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: PROJECT VERSION\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2026-03-22 15:05-0500\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=utf-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Generated-By: Babel 2.18.0\n"
#: youtube/templates/base.html:38
msgid "Type to search..."
msgstr ""
#: youtube/templates/base.html:39
msgid "Search"
msgstr ""
#: youtube/templates/base.html:45
msgid "Options"
msgstr ""
#: youtube/templates/base.html:47
msgid "Sort by"
msgstr ""
#: youtube/templates/base.html:50
msgid "Relevance"
msgstr ""
#: youtube/templates/base.html:54 youtube/templates/base.html:65
msgid "Upload date"
msgstr ""
#: youtube/templates/base.html:58
msgid "View count"
msgstr ""
#: youtube/templates/base.html:62
msgid "Rating"
msgstr ""
#: youtube/templates/base.html:68
msgid "Any"
msgstr ""
#: youtube/templates/base.html:72
msgid "Last hour"
msgstr ""
#: youtube/templates/base.html:76
msgid "Today"
msgstr ""
#: youtube/templates/base.html:80
msgid "This week"
msgstr ""
#: youtube/templates/base.html:84
msgid "This month"
msgstr ""
#: youtube/templates/base.html:88
msgid "This year"
msgstr ""


@@ -5,14 +5,48 @@ from flask import request
import jinja2
import settings
import traceback
import logging
import re
from sys import exc_info
from flask_babel import Babel

yt_app = flask.Flask(__name__)
yt_app.config['TEMPLATES_AUTO_RELOAD'] = True
yt_app.url_map.strict_slashes = False

# Don't log full tracebacks for handled FetchErrors
class FetchErrorFilter(logging.Filter):
    def filter(self, record):
        if record.exc_info and record.exc_info[0] == util.FetchError:
            return False
        return True

yt_app.logger.addFilter(FetchErrorFilter())
# yt_app.jinja_env.trim_blocks = True
# yt_app.jinja_env.lstrip_blocks = True

# Configure Babel for i18n
import os
yt_app.config['BABEL_DEFAULT_LOCALE'] = 'en'
# Use absolute path for translations directory to avoid issues with package structure changes
_app_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
yt_app.config['BABEL_TRANSLATION_DIRECTORIES'] = os.path.join(_app_root, 'translations')

def get_locale():
    """Determine the best locale based on user preference or browser settings"""
    # Check if user has a language preference in settings
    if hasattr(settings, 'language') and settings.language:
        locale = settings.language
        print(f'[i18n] Using user preference: {locale}')
        return locale
    # Otherwise, use browser's Accept-Language header
    # Only match languages with available translations
    locale = request.accept_languages.best_match(['en', 'es'])
    print(f'[i18n] Using browser language: {locale}')
    return locale or 'en'

babel = Babel(yt_app, locale_selector=get_locale)

yt_app.add_url_rule('/settings', 'settings_page', settings.settings_page, methods=['POST', 'GET'])
@@ -100,36 +134,54 @@ def timestamps(text):
@yt_app.errorhandler(500) @yt_app.errorhandler(500)
def error_page(e): def error_page(e):
 slim = request.args.get('slim', False)  # whether it was an ajax request
-if (exc_info()[0] == util.FetchError
-        and exc_info()[1].code == '429'
-        and settings.route_tor
-):
-    error_message = ('Error: YouTube blocked the request because the Tor'
-                     ' exit node is overutilized. Try getting a new exit node by'
-                     ' using the New Identity button in the Tor Browser.')
-    if exc_info()[1].error_message:
-        error_message += '\n\n' + exc_info()[1].error_message
-    if exc_info()[1].ip:
-        error_message += '\n\nExit node IP address: ' + exc_info()[1].ip
-    return flask.render_template('error.html', error_message=error_message, slim=slim), 502
-elif exc_info()[0] == util.FetchError and exc_info()[1].error_message:
-    return (flask.render_template(
-        'error.html',
-        error_message=exc_info()[1].error_message,
-        slim=slim
-    ), 502)
-elif (exc_info()[0] == util.FetchError
-        and exc_info()[1].code == '404'
-):
-    error_message = 'Error: The page you are looking for isn\'t here.'
-    return flask.render_template('error.html',
-                                 error_code=exc_info()[1].code,
-                                 error_message=error_message,
-                                 slim=slim), 404
+if exc_info()[0] == util.FetchError:
+    fetch_err = exc_info()[1]
+    error_code = fetch_err.code
+
+    if error_code == '429' and settings.route_tor:
+        error_message = ('Error: YouTube blocked the request because the Tor'
+                         ' exit node is overutilized. Try getting a new exit node by'
+                         ' using the New Identity button in the Tor Browser.')
+        if fetch_err.error_message:
+            error_message += '\n\n' + fetch_err.error_message
+        if fetch_err.ip:
+            error_message += '\n\nExit node IP address: ' + fetch_err.ip
+        return flask.render_template('error.html', error_message=error_message, slim=slim), 502
+
+    elif error_code == '429':
+        error_message = ('YouTube is temporarily blocking requests from your IP address (429 Too Many Requests).\n\n'
+                         'Try:\n'
+                         '• Wait a few minutes and refresh\n'
+                         '• Enable Tor routing in Settings for automatic IP rotation\n'
+                         '• Use a VPN to change your IP address')
+        if fetch_err.ip:
+            error_message += '\n\nYour IP: ' + fetch_err.ip
+        return flask.render_template('error.html', error_message=error_message, slim=slim), 429
+
+    elif error_code == '502' and ('Failed to resolve' in str(fetch_err) or 'Failed to establish' in str(fetch_err)):
+        error_message = ('Could not connect to YouTube.\n\n'
+                         'Check your internet connection and try again.')
+        return flask.render_template('error.html', error_message=error_message, slim=slim), 502
+
+    elif error_code == '403':
+        error_message = ('YouTube blocked this request (403 Forbidden).\n\n'
+                         'Try enabling Tor routing in Settings.')
+        return flask.render_template('error.html', error_message=error_message, slim=slim), 403
+
+    elif error_code == '404':
+        error_message = 'Error: The page you are looking for isn\'t here.'
+        return flask.render_template('error.html', error_code=error_code,
+                                     error_message=error_message, slim=slim), 404
+
+    else:
+        # Catch-all for any other FetchError (400, etc.)
+        error_message = f'Error communicating with YouTube ({error_code}).'
+        if fetch_err.error_message:
+            error_message += '\n\n' + fetch_err.error_message
+        return flask.render_template('error.html', error_message=error_message, slim=slim), 502
+
 return flask.render_template('error.html', traceback=traceback.format_exc(),
-                             error_code=exc_info()[1].code,
                              slim=slim), 500
 # return flask.render_template('error.html', traceback=traceback.format_exc(), slim=slim), 500
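The dispatch above can be reduced to a small, testable mapping from error code to HTTP status. This is a minimal sketch, not the handler itself: the `FetchError` class below is a stand-in for `util.FetchError` (the real one carries more state), and `classify` only mirrors the branch structure.

```python
# Stand-in for util.FetchError; only the fields the handler inspects.
class FetchError(Exception):
    def __init__(self, code, error_message='', ip=None):
        self.code = code
        self.error_message = error_message
        self.ip = ip

def classify(err, route_tor=False):
    """Return (http_status, short_message), mirroring the handler's branches."""
    if err.code == '429' and route_tor:
        return 502, 'Tor exit node is overutilized'
    if err.code == '429':
        return 429, 'YouTube is temporarily blocking requests from your IP address'
    if err.code == '403':
        return 403, 'YouTube blocked this request (403 Forbidden)'
    if err.code == '404':
        return 404, "The page you are looking for isn't here."
    # Catch-all for any other FetchError (400, 502, etc.)
    return 502, 'Error communicating with YouTube (%s)' % err.code

print(classify(FetchError('429'))[0])        # 429
print(classify(FetchError('429'), True)[0])  # 502 (Tor branch takes precedence)
print(classify(FetchError('400'))[0])        # 502
```

Note the ordering: the Tor-specific 429 branch must be checked before the generic 429 branch, exactly as in the handler.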
font_choices = {

View File

@@ -33,53 +33,75 @@ headers_mobile = (
 real_cookie = (('Cookie', 'VISITOR_INFO1_LIVE=8XihrAcN1l4'),)
 generic_cookie = (('Cookie', 'VISITOR_INFO1_LIVE=ST1Ti53r4fU'),)

-# added an extra nesting under the 2nd base64 compared to v4
-# added tab support
-# changed offset field to uint id 1
+# FIXED 2026: YouTube changed continuation token structure (from Invidious commit a9f8127)
+# Sort values for YouTube API (from Invidious): 2=popular, 4=newest, 5=oldest
 def channel_ctoken_v5(channel_id, page, sort, tab, view=1):
-    new_sort = (2 if int(sort) == 1 else 1)
+    # Map sort values to YouTube API values (Invidious values)
+    # Input: sort=3 (newest), sort=4 (newest no shorts)
+    # YouTube expects: 4=newest
+    sort_mapping = {'1': 2, '2': 5, '3': 4, '4': 4}  # 4 is newest without shorts
+    new_sort = sort_mapping.get(sort, 4)
     offset = 30*(int(page) - 1)
     if tab == 'videos':
         tab = 15
     elif tab == 'shorts':
         tab = 10
     elif tab == 'streams':
         tab = 14
+
+    # Build continuation token using Invidious structure
+    # The structure is: base64(protobuf({
+    #     80226972: {
+    #         2: channel_id,
+    #         3: base64(protobuf({
+    #             110: {
+    #                 3: {
+    #                     tab: {
+    #                         1: {
+    #                             1: base64(protobuf({
+    #                                 1: base64(protobuf({
+    #                                     2: "ST:" + base64(offset_varint)
+    #                                 }))
+    #                             }))
+    #                         },
+    #                         2: base64(protobuf({1: UUID}))
+    #                         4: sort_value
+    #                         8: base64(protobuf({
+    #                             1: UUID
+    #                             3: sort_value
+    #                         }))
+    #                     }
+    #                 }
+    #             }
+    #         }))
+    #     }
+    # }))
+
+    # UUID placeholder
+    uuid_proto = proto.string(1, "00000000-0000-0000-0000-000000000000")
+    # Offset encoding
+    offset_varint = proto.uint(1, offset)
+    offset_encoded = proto.string(2, proto.unpadded_b64encode(offset_varint))
+    offset_wrapper = proto.string(1, proto.unpadded_b64encode(offset_encoded))
+    offset_base = proto.string(1, proto.unpadded_b64encode(offset_wrapper))
+    # Sort value varint
+    sort_varint = proto.uint(4, new_sort)
+    # Embedded message with UUID and sort
+    embedded_inner = uuid_proto + proto.uint(3, new_sort)
+    embedded_encoded = proto.string(8, proto.unpadded_b64encode(embedded_inner))
+    # Combine: offset wrapper + UUID + sort varint + embedded message
+    tab_inner_content = offset_base + uuid_proto + sort_varint + embedded_encoded
+    tab_inner = proto.string(1, proto.unpadded_b64encode(tab_inner_content))
+    tab_wrapper = proto.string(tab, tab_inner)
+    inner_container = proto.string(3, tab_wrapper)
+    outer_container = proto.string(110, inner_container)
+    encoded_inner = proto.percent_b64encode(outer_container)
     pointless_nest = proto.string(80226972,
         proto.string(2, channel_id)
-        + proto.string(3,
-            proto.percent_b64encode(
-                proto.string(110,
-                    proto.string(3,
-                        proto.string(tab,
-                            proto.string(1,
-                                proto.string(1,
-                                    proto.unpadded_b64encode(
-                                        proto.string(1,
-                                            proto.string(1,
-                                                proto.unpadded_b64encode(
-                                                    proto.string(2,
-                                                        b"ST:"
-                                                        + proto.unpadded_b64encode(
-                                                            proto.uint(1, offset)
-                                                        )
-                                                    )
-                                                )
-                                            )
-                                        )
-                                    )
-                                )
-                                # targetId, just needs to be present but
-                                # doesn't need to be correct
-                                + proto.string(2, "63faaff0-0000-23fe-80f0-582429d11c38")
-                            )
-                            # 1 - newest, 2 - popular
-                            + proto.uint(3, new_sort)
-                        )
-                    )
-                )
-            )
-        )
+        + proto.string(3, encoded_inner)
     )
     return base64.urlsafe_b64encode(pointless_nest).decode('ascii')
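The token builder above leans on a few `proto` helpers. This is a rough sketch of what those helpers do (the real implementations live in the project's `proto` module and may differ in detail): protobuf varint encoding, length-delimited string fields, and the unpadded base64 used for each nesting level. Field numbers and values here are illustrative.

```python
import base64

def varint(n):
    """Protobuf base-128 varint encoding of a non-negative integer."""
    out = b''
    while True:
        byte = n & 0x7f
        n >>= 7
        if n:
            out += bytes((byte | 0x80,))
        else:
            return out + bytes((byte,))

def string(field_number, data):
    """Length-delimited field (wire type 2): tag varint, length varint, payload."""
    if isinstance(data, str):
        data = data.encode('utf-8')
    return varint((field_number << 3) | 2) + varint(len(data)) + data

def uint(field_number, value):
    """Varint field (wire type 0): tag varint followed by the value varint."""
    return varint(field_number << 3) + varint(value)

def unpadded_b64encode(data):
    """URL-safe base64 with the trailing '=' padding stripped."""
    return base64.urlsafe_b64encode(data).replace(b'=', b'')

# One level of the nesting used above: a varint offset, base64'd,
# wrapped in a string field, then the whole message base64'd again.
offset_varint = uint(1, 30)
inner = string(2, b'ST:' + unpadded_b64encode(offset_varint))
token = base64.urlsafe_b64encode(string(80226972, inner)).decode('ascii')
```

Each `unpadded_b64encode` layer turns a binary submessage into text so it can be embedded as a string field of the enclosing message, which is why the token is "base64 inside base64".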
@@ -161,11 +183,6 @@ def channel_ctoken_v4(channel_id, page, sort, tab, view=1):
 # SORT:
 #  videos:
-#     Popular - 1
-#     Oldest - 2
-#     Newest - 3
-#  playlists:
-#     Oldest - 2
 #     Newest - 3
 #     Last video added - 4
@@ -389,7 +406,12 @@ def post_process_channel_info(info):
     info['avatar'] = util.prefix_url(info['avatar'])
     info['channel_url'] = util.prefix_url(info['channel_url'])
     for item in info['items']:
-        item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['id'])
+        # Only set thumbnail if YouTube didn't provide one
+        if not item.get('thumbnail'):
+            if item.get('type') == 'playlist' and item.get('first_video_id'):
+                item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['first_video_id'])
+            elif item.get('type') == 'video' and item.get('id'):
+                item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['id'])
         util.prefix_urls(item)
         util.add_extra_html_info(item)
     if info['current_tab'] == 'about':
@@ -398,11 +420,20 @@ def post_process_channel_info(info):
             info['links'][i] = (text, util.prefix_url(url))

-def get_channel_first_page(base_url=None, tab='videos', channel_id=None):
+def get_channel_first_page(base_url=None, tab='videos', channel_id=None, sort=None):
     if channel_id:
         base_url = 'https://www.youtube.com/channel/' + channel_id
-    return util.fetch_url(base_url + '/' + tab + '?pbj=1&view=0',
-                          headers_desktop, debug_name='gen_channel_' + tab)
+
+    # Build URL with sort parameter
+    # YouTube URL sort params: p=popular, dd=newest, lad=newest no shorts
+    # Note: 'da' (oldest) was removed by YouTube in January 2026
+    url = base_url + '/' + tab + '?pbj=1&view=0'
+    if sort:
+        # Map sort values to YouTube's URL parameter values
+        sort_map = {'3': 'dd', '4': 'lad'}
+        url += '&sort=' + sort_map.get(sort, 'dd')
+    return util.fetch_url(url, headers_desktop, debug_name='gen_channel_' + tab)
playlist_sort_codes = {'2': "da", '3': "dd", '4': "lad"}
@@ -416,7 +447,6 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
     page_number = int(request.args.get('page', 1))
     # sort 1: views
     # sort 2: oldest
-    # sort 3: newest
     # sort 4: newest - no shorts (Just a kludge on our end, not internal to yt)
     default_sort = '3' if settings.include_shorts_in_channel else '4'
     sort = request.args.get('sort', default_sort)
@@ -483,17 +513,15 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
     else:
         num_videos_call = (get_number_of_videos_general, base_url)

-    # Use ctoken method, which YouTube changes all the time
-    if channel_id and not default_params:
-        if sort == 4:
-            _sort = 3
-        else:
-            _sort = sort
-        page_call = (get_channel_tab, channel_id, page_number, _sort,
-                     tab, view, ctoken)
-    # Use the first-page method, which won't break
+    # For page 1, always use the first-page method, which won't break.
+    # Pass the sort parameter directly (2=oldest, 3=newest, etc.)
+    if page_number == 1:
+        page_call = (get_channel_first_page, base_url, tab, None, sort)
     else:
-        page_call = (get_channel_first_page, base_url, tab)
+        # For page 2+, we can't paginate without continuation tokens.
+        # This is a YouTube limitation, not our bug.
+        flask.abort(404, 'Pagination not available for this sort option. YouTube removed this feature.')

     tasks = (
         gevent.spawn(*num_videos_call),
@@ -512,7 +540,14 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
         })
         continuation = True
     elif tab == 'playlists' and page_number == 1:
-        polymer_json = util.fetch_url(base_url + '/playlists?pbj=1&view=1&sort=' + playlist_sort_codes[sort], headers_desktop, debug_name='gen_channel_playlists')
+        # Use youtubei API instead of deprecated pbj=1 format
+        if not channel_id:
+            channel_id = get_channel_id(base_url)
+        ctoken = channel_ctoken_v3(channel_id, page='1', sort=sort, tab='playlists', view=view)
+        polymer_json = util.call_youtube_api('web', 'browse', {
+            'continuation': ctoken,
+        })
+        continuation = True
     elif tab == 'playlists':
         polymer_json = get_channel_tab(channel_id, page_number, sort,
                                        'playlists', view)

View File

@@ -53,7 +53,7 @@ def request_comments(ctoken, replies=False):
             'hl': 'en',
             'gl': 'US',
             'clientName': 'MWEB',
-            'clientVersion': '2.20240328.08.00',
+            'clientVersion': '2.20210804.02.00',
         },
     },
     'continuation': ctoken.replace('=', '%3D'),
@@ -78,7 +78,7 @@ def single_comment_ctoken(video_id, comment_id):
 def post_process_comments_info(comments_info):
     for comment in comments_info['comments']:
-        comment['author'] = strip_non_ascii(comment['author'])
+        comment['author'] = strip_non_ascii(comment['author']) if comment.get('author') else ''
         comment['author_url'] = concat_or_none(
             '/', comment['author_url'])
         comment['author_avatar'] = concat_or_none(
@@ -189,10 +189,10 @@ def video_comments(video_id, sort=0, offset=0, lc='', secret_key=''):
                 comments_info['error'] += '\n\n' + e.error_message
                 comments_info['error'] += '\n\nExit node IP address: %s' % e.ip
             else:
-                comments_info['error'] = 'YouTube blocked the request. IP address: %s' % e.ip
+                comments_info['error'] = 'YouTube blocked the request. Error: %s' % str(e)
     except Exception as e:
-        comments_info['error'] = 'YouTube blocked the request. IP address: %s' % e.ip
+        comments_info['error'] = 'YouTube blocked the request. Error: %s' % str(e)

     if comments_info.get('error'):
         print('Error retrieving comments for ' + str(video_id) + ':\n' +

youtube/i18n_strings.py Normal file
View File

@@ -0,0 +1,112 @@
#!/usr/bin/env python3
"""
Centralized i18n strings for yt-local
This file contains static strings that need to be translated but are used
dynamically in templates or generated content. By importing this module,
these strings get extracted by babel for translation.
"""
from flask_babel import lazy_gettext as _l
# Settings categories
CATEGORY_NETWORK = _l('Network')
CATEGORY_PLAYBACK = _l('Playback')
CATEGORY_INTERFACE = _l('Interface')
# Common setting labels
ROUTE_TOR = _l('Route Tor')
DEFAULT_SUBTITLES_MODE = _l('Default subtitles mode')
AV1_CODEC_RANKING = _l('AV1 Codec Ranking')
VP8_VP9_CODEC_RANKING = _l('VP8/VP9 Codec Ranking')
H264_CODEC_RANKING = _l('H.264 Codec Ranking')
USE_INTEGRATED_SOURCES = _l('Use integrated sources')
ROUTE_IMAGES = _l('Route images')
ENABLE_COMMENTS_JS = _l('Enable comments.js')
ENABLE_SPONSORBLOCK = _l('Enable SponsorBlock')
ENABLE_EMBED_PAGE = _l('Enable embed page')
# Setting names (auto-generated from setting keys)
RELATED_VIDEOS_MODE = _l('Related videos mode')
COMMENTS_MODE = _l('Comments mode')
ENABLE_COMMENT_AVATARS = _l('Enable comment avatars')
DEFAULT_COMMENT_SORTING = _l('Default comment sorting')
THEATER_MODE = _l('Theater mode')
AUTOPLAY_VIDEOS = _l('Autoplay videos')
DEFAULT_RESOLUTION = _l('Default resolution')
USE_VIDEO_PLAYER = _l('Use video player')
USE_VIDEO_DOWNLOAD = _l('Use video download')
PROXY_IMAGES = _l('Proxy images')
THEME = _l('Theme')
FONT = _l('Font')
LANGUAGE = _l('Language')
EMBED_PAGE_MODE = _l('Embed page mode')
# Common option values
OFF = _l('Off')
ON = _l('On')
DISABLED = _l('Disabled')
ENABLED = _l('Enabled')
ALWAYS_SHOWN = _l('Always shown')
SHOWN_BY_CLICKING_BUTTON = _l('Shown by clicking button')
NATIVE = _l('Native')
NATIVE_WITH_HOTKEYS = _l('Native with hotkeys')
PLYR = _l('Plyr')
# Theme options
LIGHT = _l('Light')
GRAY = _l('Gray')
DARK = _l('Dark')
# Font options
BROWSER_DEFAULT = _l('Browser default')
LIBERATION_SERIF = _l('Liberation Serif')
ARIAL = _l('Arial')
VERDANA = _l('Verdana')
TAHOMA = _l('Tahoma')
# Search and filter options
SORT_BY = _l('Sort by')
RELEVANCE = _l('Relevance')
UPLOAD_DATE = _l('Upload date')
VIEW_COUNT = _l('View count')
RATING = _l('Rating')
# Time filters
ANY = _l('Any')
LAST_HOUR = _l('Last hour')
TODAY = _l('Today')
THIS_WEEK = _l('This week')
THIS_MONTH = _l('This month')
THIS_YEAR = _l('This year')
# Content types
TYPE = _l('Type')
VIDEO = _l('Video')
CHANNEL = _l('Channel')
PLAYLIST = _l('Playlist')
MOVIE = _l('Movie')
SHOW = _l('Show')
# Duration filters
DURATION = _l('Duration')
SHORT_DURATION = _l('Short (< 4 minutes)')
LONG_DURATION = _l('Long (> 20 minutes)')
# Actions
SEARCH = _l('Search')
DOWNLOAD = _l('Download')
SUBSCRIBE = _l('Subscribe')
UNSUBSCRIBE = _l('Unsubscribe')
IMPORT = _l('Import')
EXPORT = _l('Export')
SAVE = _l('Save')
CHECK = _l('Check')
MUTE = _l('Mute')
UNMUTE = _l('Unmute')
# Common UI elements
OPTIONS = _l('Options')
SETTINGS = _l('Settings')
ERROR = _l('Error')
LOADING = _l('loading...')

View File

@@ -26,8 +26,7 @@ def video_ids_in_playlist(name):
 def add_to_playlist(name, video_info_list):
-    if not os.path.exists(playlists_directory):
-        os.makedirs(playlists_directory)
+    os.makedirs(playlists_directory, exist_ok=True)
     ids = video_ids_in_playlist(name)
     missing_thumbnails = []
     with open(os.path.join(playlists_directory, name + ".txt"), "a", encoding='utf-8') as file:
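The reason for this one-line change is worth spelling out: with several Gunicorn workers starting at once, two processes can both see the directory missing and race to create it, and the loser of the race crashes with `FileExistsError` under the old check-then-create pattern. `exist_ok=True` makes the call idempotent. The path below is illustrative, not the app's real data directory.

```python
import os
import tempfile

d = os.path.join(tempfile.mkdtemp(), 'playlists')

# Old pattern (racy): between exists() and makedirs(), another
# worker may create d, making makedirs() raise FileExistsError.
if not os.path.exists(d):
    os.makedirs(d)

# New pattern: safe to call from any number of workers, any number of times.
os.makedirs(d, exist_ok=True)
os.makedirs(d, exist_ok=True)  # no FileExistsError on repeat
print(os.path.isdir(d))  # True
```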

View File

@@ -8,7 +8,7 @@ import json
 import string
 import gevent
 import math
-from flask import request
+from flask import request, abort
 import flask
@@ -78,6 +78,15 @@ def get_playlist_page():
        abort(400)

    playlist_id = request.args.get('list')

    # Radio/Mix playlists (RD...) only work as a watch page, not a playlist page
    if playlist_id.startswith('RD'):
        first_video_id = playlist_id[2:]  # video ID after the 'RD' prefix
        return flask.redirect(
            util.URL_ORIGIN + '/watch?v=' + first_video_id + '&list=' + playlist_id,
            302
        )

    page = request.args.get('page', '1')
    if page == '1':
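The redirect above in miniature: Mix playlist IDs are `'RD'` plus the seed video ID, and YouTube only serves them through the player, so `/playlist` requests are mapped onto `/watch`. The `URL_ORIGIN` value below is a stand-in for `util.URL_ORIGIN`, and the video ID is just an example.

```python
URL_ORIGIN = '/https://www.youtube.com'  # stand-in for util.URL_ORIGIN

def radio_redirect_target(playlist_id):
    """Build the /watch URL that a Radio/Mix playlist ID redirects to."""
    assert playlist_id.startswith('RD')
    first_video_id = playlist_id[2:]  # video ID after the 'RD' prefix
    return URL_ORIGIN + '/watch?v=' + first_video_id + '&list=' + playlist_id

print(radio_redirect_target('RDdQw4w9WgXcQ'))
# /https://www.youtube.com/watch?v=dQw4w9WgXcQ&list=RDdQw4w9WgXcQ
```

One caveat the simple slice glosses over: some Mix variants (e.g. `RDMM...`, `RDCLAK...`) put extra characters between `RD` and the seed ID, so the slice is a best-effort extraction.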
@@ -106,7 +115,7 @@ def get_playlist_page():
     for item in info.get('items', ()):
         util.prefix_urls(item)
         util.add_extra_html_info(item)
-        if 'id' in item:
+        if 'id' in item and not item.get('thumbnail'):
             item['thumbnail'] = f"{settings.img_prefix}https://i.ytimg.com/vi/{item['id']}/hqdefault.jpg"
         item['url'] += '&list=' + playlist_id

View File

@@ -113,12 +113,12 @@ def read_protobuf(data):
         length = read_varint(data)
         value = data.read(length)
     elif wire_type == 3:
-        end_bytes = encode_varint((field_number << 3) | 4)
+        end_bytes = varint_encode((field_number << 3) | 4)
         value = read_group(data, end_bytes)
     elif wire_type == 5:
         value = data.read(4)
     else:
-        raise Exception("Unknown wire type: " + str(wire_type) + ", Tag: " + bytes_to_hex(succinct_encode(tag)) + ", at position " + str(data.tell()))
+        raise Exception("Unknown wire type: " + str(wire_type) + " at position " + str(data.tell()))
     yield (wire_type, field_number, value)
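For readers unfamiliar with the wire format this hunk parses: every protobuf field starts with a varint tag whose low three bits are the wire type and whose remaining bits are the field number. A minimal sketch of that decoding (the project's real reader in its `proto` module handles more cases):

```python
import io

def read_varint(stream):
    """Decode one base-128 varint from a binary stream."""
    result, shift = 0, 0
    while True:
        byte = stream.read(1)[0]
        result |= (byte & 0x7f) << shift
        if not byte & 0x80:
            return result
        shift += 7

# b'\x08' is the tag (field 1, wire type 0); b'\xac\x02' is the varint 300
data = io.BytesIO(b'\x08\xac\x02')
tag = read_varint(data)
wire_type = tag & 7        # low 3 bits
field_number = tag >> 3    # remaining bits
value = read_varint(data)  # wire type 0 payload is itself a varint
print(field_number, wire_type, value)  # 1 0 300
```

Wire type 2 (length-delimited) instead reads a length varint and that many bytes, and the deprecated group wire types 3/4 are matched by scanning for the end-group tag, which is what the `read_group` call above does.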

View File

@@ -97,6 +97,7 @@ import re
import time
import json
import os
import traceback
import pprint

View File

@@ -114,3 +114,57 @@ function copyTextToClipboard(text) {
window.addEventListener('DOMContentLoaded', function() {
    cur_track_idx = getDefaultTranscriptTrackIdx();
});
/**
 * Thumbnail fallback handler
 * Tries lower quality thumbnails when a higher quality fails (404)
 * Priority: hq720.jpg -> sddefault.jpg -> hqdefault.jpg
 */
function thumbnail_fallback(img) {
// Once src is set (image was loaded or attempted), always work with src
const src = img.src;
if (!src) return;
// Handle YouTube video thumbnails
if (src.includes('/i.ytimg.com/') || src.includes('/i.ytimg.com%2F')) {
// Extract video ID from URL
const match = src.match(/\/vi\/([^/]+)/);
if (!match) return;
const videoId = match[1];
        const imgPrefix = (typeof settings_img_prefix !== 'undefined') ? settings_img_prefix : '';
// Define fallback order (from highest to lowest quality)
const fallbacks = [
'hq720.jpg',
'sddefault.jpg',
'hqdefault.jpg',
];
// Find current quality and try next fallback
for (let i = 0; i < fallbacks.length; i++) {
if (src.includes(fallbacks[i])) {
if (i < fallbacks.length - 1) {
img.src = imgPrefix + 'https://i.ytimg.com/vi/' + videoId + '/' + fallbacks[i + 1];
} else {
// Last fallback failed, stop retrying
img.onerror = null;
}
return;
}
}
// Unknown quality format, stop retrying
img.onerror = null;
}
// Handle YouTube channel avatars (ggpht.com)
else if (src.includes('ggpht.com') || src.includes('yt3.ggpht.com')) {
const newSrc = src.replace(/=s\d+-c-k/, '=s240-c-k-c0x00ffffff-no-rj');
if (newSrc !== src) {
img.src = newSrc;
} else {
img.onerror = null;
}
} else {
img.onerror = null;
}
}

View File

@@ -5,8 +5,9 @@ function changeQuality(selection) {
     let videoPaused = video.paused;
     let videoSpeed = video.playbackRate;
     let srcInfo;
-    if (avMerge)
-        avMerge.close();
+    if (avMerge && typeof avMerge.close === 'function') {
+        avMerge.close();
+    }
     if (selection.type == 'uni'){
         srcInfo = data['uni_sources'][selection.index];
         video.src = srcInfo.url;

View File

@@ -128,6 +128,29 @@ header {
    background-color: var(--buttom-hover);
}
.live-url-choices {
background-color: var(--thumb-background);
margin: 1rem 0;
padding: 1rem;
}
.playability-error {
position: relative;
box-sizing: border-box;
height: 30vh;
margin: 1rem 0;
}
.playability-error > span {
display: flex;
background-color: var(--thumb-background);
height: 100%;
object-fit: cover;
justify-content: center;
align-items: center;
text-align: center;
}
.playlist {
    display: grid;
    grid-gap: 4px;
@@ -622,6 +645,9 @@ figure.sc-video {
        max-height: 80vh;
        overflow-y: scroll;
    }
.playability-error {
height: 60vh;
}
    .playlist {
        display: grid;
        grid-gap: 1px;

View File

@@ -30,8 +30,7 @@ database_path = os.path.join(settings.data_dir, "subscriptions.sqlite")
 def open_database():
-    if not os.path.exists(settings.data_dir):
-        os.makedirs(settings.data_dir)
+    os.makedirs(settings.data_dir, exist_ok=True)
     connection = sqlite3.connect(database_path, check_same_thread=False)
     try:
@@ -1089,12 +1088,26 @@ def serve_subscription_thumbnail(thumbnail):
         f.close()
         return flask.Response(image, mimetype='image/jpeg')

-    url = f"https://i.ytimg.com/vi/{video_id}/hqdefault.jpg"
-    try:
-        image = util.fetch_url(url, report_text="Saved thumbnail: " + video_id)
-    except urllib.error.HTTPError as e:
-        print("Failed to download thumbnail for " + video_id + ": " + str(e))
-        abort(e.code)
+    image = None
+    for quality in ('hq720.jpg', 'sddefault.jpg', 'hqdefault.jpg'):
+        url = f"https://i.ytimg.com/vi/{video_id}/{quality}"
+        try:
+            image = util.fetch_url(url, report_text="Saved thumbnail: " + video_id)
+            break
+        except util.FetchError as e:
+            if '404' in str(e):
+                continue
+            print("Failed to download thumbnail for " + video_id + ": " + str(e))
+            flask.abort(500)
+        except urllib.error.HTTPError as e:
+            if e.code == 404:
+                continue
+            print("Failed to download thumbnail for " + video_id + ": " + str(e))
+            flask.abort(e.code)
+    if image is None:
+        flask.abort(404)

     try:
         f = open(thumbnail_path, 'wb')
     except FileNotFoundError:
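The new fetch loop, isolated from Flask: try each thumbnail quality in descending order and keep the first fetch that doesn't 404, falling through to the next quality otherwise. `fetch` below is a stub standing in for `util.fetch_url`, and raising `IOError` with `'404'` in the message imitates how the real error surfaces.

```python
def fetch(url):
    """Stub for util.fetch_url: pretend only hqdefault exists for this video."""
    if 'hqdefault' not in url:
        raise IOError('404')
    return b'jpeg-bytes'

def get_thumbnail(video_id):
    for quality in ('hq720.jpg', 'sddefault.jpg', 'hqdefault.jpg'):
        url = f"https://i.ytimg.com/vi/{video_id}/{quality}"
        try:
            return fetch(url)
        except IOError as e:
            if '404' in str(e):
                continue  # this quality doesn't exist; try the next one down
            raise     # any other failure is a real error
    return None       # the caller turns this into a 404 response

print(get_thumbnail('abc123'))  # b'jpeg-bytes'
```

The key design point is that only a 404 advances the loop; transient network errors still propagate immediately instead of being masked as "no thumbnail".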

View File

@@ -26,6 +26,12 @@
        // @license-end
    </script>
    {% endif %}
    <script>
        // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-v3-or-Later
        // Image prefix for thumbnails
        let settings_img_prefix = "{{ settings.img_prefix or '' }}";
        // @license-end
    </script>
</head>
<body>
@@ -35,57 +41,57 @@
     </nav>
     <form class="form" id="site-search" action="/youtube.com/results">
         <input type="search" name="search_query" class="search-box" value="{{ search_box_value }}"
-               {{ "autofocus" if (request.path in ("/", "/results") or error_message) else "" }} required placeholder="Type to search...">
-        <button type="submit" value="Search" class="search-button">Search</button>
+               {{ "autofocus" if (request.path in ("/", "/results") or error_message) else "" }} required placeholder="{{ _('Type to search...') }}">
+        <button type="submit" value="Search" class="search-button">{{ _('Search') }}</button>
         <!-- options -->
         <div class="dropdown">
             <!-- hidden box -->
             <input id="options-toggle-cbox" class="opt-box" type="checkbox">
             <!-- end hidden box -->
-            <label class="dropdown-label" for="options-toggle-cbox">Options</label>
+            <label class="dropdown-label" for="options-toggle-cbox">{{ _('Options') }}</label>
             <div class="dropdown-content">
-                <h3>Sort by</h3>
+                <h3>{{ _('Sort by') }}</h3>
                 <div class="option">
                     <input type="radio" id="sort_relevance" name="sort" value="0">
-                    <label for="sort_relevance">Relevance</label>
+                    <label for="sort_relevance">{{ _('Relevance') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="sort_upload_date" name="sort" value="2">
-                    <label for="sort_upload_date">Upload date</label>
+                    <label for="sort_upload_date">{{ _('Upload date') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="sort_view_count" name="sort" value="3">
-                    <label for="sort_view_count">View count</label>
+                    <label for="sort_view_count">{{ _('View count') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="sort_rating" name="sort" value="1">
-                    <label for="sort_rating">Rating</label>
+                    <label for="sort_rating">{{ _('Rating') }}</label>
                 </div>
-                <h3>Upload date</h3>
+                <h3>{{ _('Upload date') }}</h3>
                 <div class="option">
                     <input type="radio" id="time_any" name="time" value="0">
-                    <label for="time_any">Any</label>
+                    <label for="time_any">{{ _('Any') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="time_last_hour" name="time" value="1">
-                    <label for="time_last_hour">Last hour</label>
+                    <label for="time_last_hour">{{ _('Last hour') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="time_today" name="time" value="2">
-                    <label for="time_today">Today</label>
+                    <label for="time_today">{{ _('Today') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="time_this_week" name="time" value="3">
-                    <label for="time_this_week">This week</label>
+                    <label for="time_this_week">{{ _('This week') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="time_this_month" name="time" value="4">
-                    <label for="time_this_month">This month</label>
+                    <label for="time_this_month">{{ _('This month') }}</label>
                 </div>
                 <div class="option">
                     <input type="radio" id="time_this_year" name="time" value="5">
-                    <label for="time_this_year">This year</label>
+                    <label for="time_this_year">{{ _('This year') }}</label>
                 </div>
                 <h3>Type</h3>

View File

@@ -81,10 +81,10 @@
     <!-- new-->
     <div id="links-metadata">
         {% if current_tab in ('videos', 'shorts', 'streams') %}
-            {% set sorts = [('1', 'views'), ('2', 'oldest'), ('3', 'newest'), ('4', 'newest - no shorts'),] %}
+            {% set sorts = [('3', 'newest'), ('4', 'newest - no shorts')] %}
             <div id="number-of-results">{{ number_of_videos }} videos</div>
         {% elif current_tab == 'playlists' %}
-            {% set sorts = [('2', 'oldest'), ('3', 'newest'), ('4', 'last video added')] %}
+            {% set sorts = [('3', 'newest'), ('4', 'last video added')] %}
             {% if items %}
                 <h2 class="page-number">Page {{ page_number }}</h2>
             {% else %}

View File

@@ -3,13 +3,13 @@
 {% macro render_comment(comment, include_avatar, timestamp_links=False) %}
     <div class="comment-container">
         <div class="comment">
-            <a class="author-avatar" href="{{ comment['author_url'] }}" title="{{ comment['author'] }}">
+            <a class="author-avatar" href="{{ comment['author_url'] or '#' }}" title="{{ comment['author'] }}">
                 {% if include_avatar %}
                     <img class="author-avatar-img" alt="{{ comment['author'] }}" src="{{ comment['author_avatar'] }}">
                 {% endif %}
             </a>
             <address class="author-name">
-                <a class="author" href="{{ comment['author_url'] }}" title="{{ comment['author'] }}">{{ comment['author'] }}</a>
+                <a class="author" href="{{ comment['author_url'] or '#' }}" title="{{ comment['author'] }}">{{ comment['author'] }}</a>
             </address>
             <a class="permalink" href="{{ comment['permalink'] }}" title="permalink">
                 <span>{{ comment['time_published'] }}</span>

View File

@@ -20,14 +20,14 @@
         {{ info['error'] }}
     {% else %}
         <div class="item-video {{ info['type'] + '-item' }}">
-            <a class="thumbnail-box" href="{{ info['url'] }}" title="{{ info['title'] }}">
+            <a class="thumbnail-box" href="{{ info['url'] or '#' }}" title="{{ info['title'] }}">
                 <div class="thumbnail {% if info['type'] == 'channel' %} channel {% endif %}">
                     {% if lazy_load %}
-                        <img class="thumbnail-img lazy" alt="&#x20;" data-src="{{ info['thumbnail'] }}">
+                        <img class="thumbnail-img lazy" alt="&#x20;" data-src="{{ info['thumbnail'] }}" onerror="thumbnail_fallback(this)">
                     {% elif info['type'] == 'channel' %}
-                        <img class="thumbnail-img channel" alt="&#x20;" src="{{ info['thumbnail'] }}">
+                        <img class="thumbnail-img channel" alt="&#x20;" src="{{ info['thumbnail'] }}" onerror="thumbnail_fallback(this)">
                     {% else %}
-                        <img class="thumbnail-img" alt="&#x20;" src="{{ info['thumbnail'] }}">
+                        <img class="thumbnail-img" alt="&#x20;" src="{{ info['thumbnail'] }}" onerror="thumbnail_fallback(this)">
                     {% endif %}
                     {% if info['type'] != 'channel' %}
@@ -35,7 +35,7 @@
                     {% endif %}
                 </div>
             </a>
-            <h4 class="title"><a href="{{ info['url'] }}" title="{{ info['title'] }}">{{ info['title'] }}</a></h4>
+            <h4 class="title"><a href="{{ info['url'] or '#' }}" title="{{ info['title'] }}">{{ info['title'] }}</a></h4>
             {% if include_author %}
                 {% set author_description = info['author'] %}
View File

@@ -10,11 +10,17 @@
<div class="playlist-metadata"> <div class="playlist-metadata">
<div class="author"> <div class="author">
{% if thumbnail %}
<img alt="{{ title }}" src="{{ thumbnail }}"> <img alt="{{ title }}" src="{{ thumbnail }}">
{% endif %}
<h2>{{ title }}</h2> <h2>{{ title }}</h2>
</div> </div>
<div class="summary"> <div class="summary">
{% if author_url %}
<a class="playlist-author" href="{{ author_url }}">{{ author }}</a> <a class="playlist-author" href="{{ author_url }}">{{ author }}</a>
{% else %}
<span class="playlist-author">{{ author }}</span>
{% endif %}
</div> </div>
<div class="playlist-stats"> <div class="playlist-stats">
<div>{{ video_count|commatize }} videos</div> <div>{{ video_count|commatize }} videos</div>

View File

@@ -31,11 +31,19 @@
 <input type="number" id="{{ 'setting_' + setting_name }}" name="{{ setting_name }}" value="{{ value }}" step="1">
 {% endif %}
 {% elif setting_info['type'].__name__ == 'float' %}
 <input type="number" id="{{ 'setting_' + setting_name }}" name="{{ setting_name }}" value="{{ value }}" step="0.01">
 {% elif setting_info['type'].__name__ == 'str' %}
-<input type="text" id="{{ 'setting_' + setting_name }}" name="{{ setting_name }}" value="{{ value }}">
+{% if 'options' is in(setting_info) %}
+<select id="{{ 'setting_' + setting_name }}" name="{{ setting_name }}">
+{% for option in setting_info['options'] %}
+<option value="{{ option[0] }}" {{ 'selected' if option[0] == value else '' }}>{{ option[1] }}</option>
+{% endfor %}
+</select>
+{% else %}
+<input type="text" id="{{ 'setting_' + setting_name }}" name="{{ setting_name }}" value="{{ value }}">
+{% endif %}
 {% else %}
-<span>Error: Unknown setting type: setting_info['type'].__name__</span>
+<span>Error: Unknown setting type: {{ setting_info['type'].__name__ }}</span>
 {% endif %}
 </li>
 {% endif %}


@@ -85,6 +85,7 @@
 <option value='{"type": "pair", "index": {{ loop.index0}}}' {{ 'selected' if loop.index0 == pair_idx and using_pair_sources else '' }} >{{ src_pair['quality_string'] }}</option>
 {% endfor %}
 </select>
 {% endif %}
 </div>
 <input class="v-checkbox" name="video_info_list" value="{{ video_info }}" form="playlist-edit" type="checkbox">
@@ -171,7 +172,11 @@
 {% else %}
 <li>{{ playlist['current_index']+1 }}/{{ playlist['video_count'] }}</li>
 {% endif %}
+{% if playlist['author_url'] %}
 <li><a href="{{ playlist['author_url'] }}" title="{{ playlist['author'] }}">{{ playlist['author'] }}</a></li>
+{% elif playlist['author'] %}
+<li>{{ playlist['author'] }}</li>
+{% endif %}
 </ul>
 </div>
 <nav class="playlist-videos">
@@ -246,6 +251,7 @@
 let storyboard_url = {{ storyboard_url | tojson }};
 // @license-end
 </script>
 <script src="/youtube.com/static/js/common.js"></script>
 <script src="/youtube.com/static/js/transcript-table.js"></script>
 {% if settings.use_video_player == 2 %}


@@ -1,4 +1,5 @@
 from datetime import datetime
+import logging
 import settings
 import socks
 import sockshandler
@@ -18,6 +19,8 @@ import gevent.queue
 import gevent.lock
 import collections
 import stem
+logger = logging.getLogger(__name__)
 import stem.control
 import traceback
@@ -302,73 +305,140 @@ def fetch_url_response(url, headers=(), timeout=15, data=None,
 def fetch_url(url, headers=(), timeout=15, report_text=None, data=None,
         cookiejar_send=None, cookiejar_receive=None, use_tor=True,
         debug_name=None):
-    while True:
-        start_time = time.monotonic()
-
-        response, cleanup_func = fetch_url_response(
-            url, headers, timeout=timeout, data=data,
-            cookiejar_send=cookiejar_send, cookiejar_receive=cookiejar_receive,
-            use_tor=use_tor)
-        response_time = time.monotonic()
-
-        content = response.read()
-
-        read_finish = time.monotonic()
-
-        cleanup_func(response)  # release_connection for urllib3
-        content = decode_content(
-            content,
-            response.headers.get('Content-Encoding', default='identity'))
-
-        if (settings.debugging_save_responses
-                and debug_name is not None
-                and content):
-            save_dir = os.path.join(settings.data_dir, 'debug')
-            if not os.path.exists(save_dir):
-                os.makedirs(save_dir)
-
-            with open(os.path.join(save_dir, debug_name), 'wb') as f:
-                f.write(content)
-
-        if response.status == 429 or (
-                response.status == 302 and (response.getheader('Location') == url
-                    or response.getheader('Location').startswith(
-                        'https://www.google.com/sorry/index'
-                    )
-                )
-        ):
-            print(response.status, response.reason, response.headers)
-            ip = re.search(
-                br'IP address: ((?:[\da-f]*:)+[\da-f]+|(?:\d+\.)+\d+)',
-                content)
-            ip = ip.group(1).decode('ascii') if ip else None
-            if not ip:
-                ip = re.search(r'IP=((?:\d+\.)+\d+)',
-                    response.getheader('Set-Cookie') or '')
-                ip = ip.group(1) if ip else None
-
-            # don't get new identity if we're not using Tor
-            if not use_tor:
-                raise FetchError('429', reason=response.reason, ip=ip)
-
-            print('Error: YouTube blocked the request because the Tor exit node is overutilized. Exit node IP address: %s' % ip)
-
-            # get new identity
-            error = tor_manager.new_identity(start_time)
-            if error:
-                raise FetchError(
-                    '429', reason=response.reason, ip=ip,
-                    error_message='Automatic circuit change: ' + error)
-            else:
-                continue  # retry now that we have new identity
-
-        elif response.status >= 400:
-            raise FetchError(str(response.status), reason=response.reason,
-                ip=None)
-        break
+    """
+    Fetch URL with exponential backoff retry logic for rate limiting.
+
+    Retries:
+    - 429 Too Many Requests: Exponential backoff (1s, 2s, 4s, 8s, 16s)
+    - 503 Service Unavailable: Exponential backoff
+    - 302 Redirect to Google Sorry: Treated as rate limit
+
+    Max retries: 5 attempts with exponential backoff
+    """
+    import random
+    max_retries = 5
+    base_delay = 1.0  # Base delay in seconds
+
+    for attempt in range(max_retries):
+        try:
+            start_time = time.monotonic()
+
+            response, cleanup_func = fetch_url_response(
+                url, headers, timeout=timeout, data=data,
+                cookiejar_send=cookiejar_send, cookiejar_receive=cookiejar_receive,
+                use_tor=use_tor)
+            response_time = time.monotonic()
+
+            content = response.read()
+            read_finish = time.monotonic()
+
+            cleanup_func(response)  # release_connection for urllib3
+            content = decode_content(
+                content,
+                response.headers.get('Content-Encoding', default='identity'))
+
+            if (settings.debugging_save_responses
+                    and debug_name is not None
+                    and content):
+                save_dir = os.path.join(settings.data_dir, 'debug')
+                os.makedirs(save_dir, exist_ok=True)
+                with open(os.path.join(save_dir, debug_name), 'wb') as f:
+                    f.write(content)
+
+            # Check for rate limiting (429) or redirect to Google Sorry
+            if response.status == 429 or (
+                    response.status == 302 and (response.getheader('Location') == url
+                        or response.getheader('Location').startswith(
+                            'https://www.google.com/sorry/index'
+                        )
+                    )
+            ):
+                logger.info(f'Rate limit response: {response.status} {response.reason}')
+                ip = re.search(
+                    br'IP address: ((?:[\da-f]*:)+[\da-f]+|(?:\d+\.)+\d+)',
+                    content)
+                ip = ip.group(1).decode('ascii') if ip else None
+                if not ip:
+                    ip = re.search(r'IP=((?:\d+\.)+\d+)',
+                        response.getheader('Set-Cookie') or '')
+                    ip = ip.group(1) if ip else None
+
+                # Without Tor, no point retrying with same IP
+                if not use_tor or not settings.route_tor:
+                    logger.warning('Rate limited (429). Enable Tor routing to retry with new IP.')
+                    raise FetchError('429', reason=response.reason, ip=ip)
+
+                # Tor: exhausted retries
+                if attempt >= max_retries - 1:
+                    logger.error(f'Rate limited after {max_retries} retries. Exit IP: {ip}')
+                    raise FetchError('429', reason=response.reason, ip=ip,
+                        error_message='Tor exit node overutilized after multiple retries')
+
+                # Tor: get new identity and retry
+                logger.info(f'Rate limited. Getting new Tor identity... (IP: {ip})')
+                error = tor_manager.new_identity(start_time)
+                if error:
+                    raise FetchError(
+                        '429', reason=response.reason, ip=ip,
+                        error_message='Automatic circuit change: ' + error)
+                continue  # retry with new identity
+
+            # Check for client errors (400, 404) - don't retry these
+            if response.status == 400:
+                logger.error(f'Bad Request (400) - Invalid parameters or URL: {url[:100]}')
+                raise FetchError('400', reason='Bad Request - Invalid parameters or URL format', ip=None)
+            if response.status == 404:
+                logger.warning(f'Not Found (404): {url[:100]}')
+                raise FetchError('404', reason='Not Found', ip=None)
+
+            # Check for other server errors (503, 502, 504)
+            if response.status in (502, 503, 504):
+                if attempt >= max_retries - 1:
+                    logger.error(f'Server error {response.status} after {max_retries} retries')
+                    raise FetchError(str(response.status), reason=response.reason, ip=None)
+                # Exponential backoff for server errors
+                delay = (base_delay * (2 ** attempt)) + random.uniform(0, 1)
+                logger.warning(f'Server error ({response.status}). Waiting {delay:.1f}s before retry {attempt + 1}/{max_retries}...')
+                time.sleep(delay)
+                continue
+
+            # Success - break out of retry loop
+            break
+        except urllib3.exceptions.MaxRetryError as e:
+            # If this is the last attempt, raise the error
+            if attempt >= max_retries - 1:
+                exception_cause = e.__context__.__context__
+                if (isinstance(exception_cause, socks.ProxyConnectionError)
+                        and settings.route_tor):
+                    msg = ('Failed to connect to Tor. Check that Tor is open and '
+                           'that your internet connection is working.\n\n'
+                           + str(e))
+                    logger.error(f'Tor connection failed: {msg}')
+                    raise FetchError('502', reason='Bad Gateway',
+                        error_message=msg)
+                elif isinstance(e.__context__,
+                        urllib3.exceptions.NewConnectionError):
+                    msg = 'Failed to establish a connection.\n\n' + str(e)
+                    logger.error(f'Connection failed: {msg}')
+                    raise FetchError(
+                        '502', reason='Bad Gateway',
+                        error_message=msg)
+                else:
+                    raise
+            # Wait and retry
+            delay = (base_delay * (2 ** attempt)) + random.uniform(0, 1)
+            logger.warning(f'Connection error. Waiting {delay:.1f}s before retry {attempt + 1}/{max_retries}...')
+            time.sleep(delay)

     if report_text:
-        print(report_text, '    Latency:', round(response_time - start_time, 3), '    Read time:', round(read_finish - response_time, 3))
+        logger.info(f'{report_text} - Latency: {round(response_time - start_time, 3)}s - Read time: {round(read_finish - response_time, 3)}s')

     return content
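
As an aside, the backoff schedule used above can be sketched in isolation (a minimal standalone sketch, not project code; `backoff_delays` is an illustrative name):

```python
import random

def backoff_delays(max_retries=5, base_delay=1.0, jitter=True):
    """Delay before each retry: base_delay * 2**attempt, plus up to 1s
    of random jitter so parallel workers don't retry in lockstep."""
    delays = []
    for attempt in range(max_retries):
        delay = base_delay * (2 ** attempt)
        if jitter:
            delay += random.uniform(0, 1)
        delays.append(delay)
    return delays
```

Without jitter the schedule is 1s, 2s, 4s, 8s, 16s, matching the docstring in the new `fetch_url`.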
@@ -462,21 +532,31 @@ class RateLimitedQueue(gevent.queue.Queue):
 def download_thumbnail(save_directory, video_id):
-    url = f"https://i.ytimg.com/vi/{video_id}/hqdefault.jpg"
     save_location = os.path.join(save_directory, video_id + ".jpg")
-    try:
-        thumbnail = fetch_url(url, report_text="Saved thumbnail: " + video_id)
-    except urllib.error.HTTPError as e:
-        print("Failed to download thumbnail for " + video_id + ": " + str(e))
-        return False
-    try:
-        f = open(save_location, 'wb')
-    except FileNotFoundError:
-        os.makedirs(save_directory, exist_ok=True)
-        f = open(save_location, 'wb')
-    f.write(thumbnail)
-    f.close()
-    return True
+    for quality in ('hq720.jpg', 'sddefault.jpg', 'hqdefault.jpg'):
+        url = f"https://i.ytimg.com/vi/{video_id}/{quality}"
+        try:
+            thumbnail = fetch_url(url, report_text="Saved thumbnail: " + video_id)
+        except FetchError as e:
+            if '404' in str(e):
+                continue
+            print("Failed to download thumbnail for " + video_id + ": " + str(e))
+            return False
+        except urllib.error.HTTPError as e:
+            if e.code == 404:
+                continue
+            print("Failed to download thumbnail for " + video_id + ": " + str(e))
+            return False
+        try:
+            f = open(save_location, 'wb')
+        except FileNotFoundError:
+            os.makedirs(save_directory, exist_ok=True)
+            f = open(save_location, 'wb')
+        f.write(thumbnail)
+        f.close()
+        return True
+    print("No thumbnail available for " + video_id)
+    return False
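
The 404-fallback chain in the new `download_thumbnail` reduces to picking the first quality that exists. A hypothetical standalone helper (`pick_thumbnail` is not part of the codebase):

```python
def pick_thumbnail(available, qualities=('hq720.jpg', 'sddefault.jpg', 'hqdefault.jpg')):
    """Return the first candidate quality present in `available`,
    mirroring the try/404/continue loop above."""
    for quality in qualities:
        if quality in available:
            return quality
    return None  # every candidate 404'd
```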
 def download_thumbnails(save_directory, ids):
@@ -502,9 +582,40 @@ def video_id(url):
     return urllib.parse.parse_qs(url_parts.query)['v'][0]


-# default, sddefault, mqdefault, hqdefault, hq720
-def get_thumbnail_url(video_id):
-    return f"{settings.img_prefix}https://i.ytimg.com/vi/{video_id}/hqdefault.jpg"
+def get_thumbnail_url(video_id, quality='hq720'):
+    """Get thumbnail URL with fallback to lower quality if needed.
+
+    Args:
+        video_id: YouTube video ID
+        quality: Preferred quality ('maxres', 'hq720', 'sd', 'hq', 'mq', 'default')
+
+    Returns:
+        Tuple of (best_available_url, quality_used)
+    """
+    # Quality priority order (highest to lowest)
+    quality_order = {
+        'maxres': ['maxresdefault.jpg', 'sddefault.jpg', 'hqdefault.jpg'],
+        'hq720': ['hq720.jpg', 'sddefault.jpg', 'hqdefault.jpg'],
+        'sd': ['sddefault.jpg', 'hqdefault.jpg'],
+        'hq': ['hqdefault.jpg', 'mqdefault.jpg'],
+        'mq': ['mqdefault.jpg', 'default.jpg'],
+        'default': ['default.jpg'],
+    }
+    qualities = quality_order.get(quality, quality_order['hq720'])
+    base_url = f"{settings.img_prefix}https://i.ytimg.com/vi/{video_id}/"
+    # For now, return the highest quality URL
+    # The browser will handle 404s gracefully with alt text
+    return base_url + qualities[0], qualities[0]
+
+
+def get_best_thumbnail_url(video_id):
+    """Get the best available thumbnail URL for a video.
+
+    Tries hq720 first (for HD videos), falls back to sddefault for SD videos.
+    """
+    return get_thumbnail_url(video_id, quality='hq720')[0]


 def seconds_to_timestamp(seconds):
@@ -538,6 +649,12 @@ def prefix_url(url):
     if url is None:
         return None
     url = url.lstrip('/')  # some urls have // before them, which has a special meaning
+    # Increase resolution for YouTube channel avatars
+    if url and ('ggpht.com' in url or 'yt3.ggpht.com' in url):
+        # Replace size parameter with higher resolution (s240 instead of s88)
+        url = re.sub(r'=s\d+-c-k', '=s240-c-k-c0x00ffffff-no-rj', url)
     return '/' + url
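
The avatar-resolution substitution added to `prefix_url` can be exercised on its own (`upscale_avatar` is an illustrative wrapper, not project code):

```python
import re

def upscale_avatar(url):
    """Rewrite a ggpht avatar size parameter from e.g. s88 to s240,
    using the same regex as the prefix_url change above."""
    return re.sub(r'=s\d+-c-k', '=s240-c-k-c0x00ffffff-no-rj', url)
```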
@@ -784,8 +901,7 @@ INNERTUBE_CLIENTS = {
 def get_visitor_data():
     visitor_data = None
     visitor_data_cache = os.path.join(settings.data_dir, 'visitorData.txt')
-    if not os.path.exists(settings.data_dir):
-        os.makedirs(settings.data_dir)
+    os.makedirs(settings.data_dir, exist_ok=True)
     if os.path.isfile(visitor_data_cache):
         with open(visitor_data_cache, 'r') as file:
             print('Getting visitor_data from cache')
@@ -840,6 +956,8 @@ def call_youtube_api(client, api, data):
 def strip_non_ascii(string):
     ''' Returns the string without non ASCII characters'''
+    if string is None:
+        return ""
     stripped = (c for c in string if 0 < ord(c) < 127)
     return ''.join(stripped)
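
Run standalone, the patched `strip_non_ascii` behaves like this (same code as the new version of the function):

```python
def strip_non_ascii(string):
    ''' Returns the string without non ASCII characters'''
    # None-guard added by this commit; previously this raised TypeError
    if string is None:
        return ""
    stripped = (c for c in string if 0 < ord(c) < 127)
    return ''.join(stripped)
```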


@@ -1,3 +1,3 @@
 from __future__ import unicode_literals
-__version__ = 'v0.3.1'
+__version__ = 'v0.4.3'


@@ -6,6 +6,9 @@ import settings
 from flask import request
 import flask
+import logging
+logger = logging.getLogger(__name__)
 import json
 import gevent
@@ -177,8 +180,34 @@ def make_caption_src(info, lang, auto=False, trans_lang=None):
         label += ' (Automatic)'
     if trans_lang:
         label += ' -> ' + trans_lang
+
+    # Try to use Android caption URL directly (no PO Token needed)
+    caption_url = None
+    for track in info.get('_android_caption_tracks', []):
+        track_lang = track.get('languageCode', '')
+        track_kind = track.get('kind', '')
+        if track_lang == lang and (
+                (auto and track_kind == 'asr') or
+                (not auto and track_kind != 'asr')
+        ):
+            caption_url = track.get('baseUrl')
+            break
+
+    if caption_url:
+        # Add format
+        if '&fmt=' in caption_url:
+            caption_url = re.sub(r'&fmt=[^&]*', '&fmt=vtt', caption_url)
+        else:
+            caption_url += '&fmt=vtt'
+        if trans_lang:
+            caption_url += '&tlang=' + trans_lang
+        url = util.prefix_url(caption_url)
+    else:
+        # Fallback to old method
+        url = util.prefix_url(yt_data_extract.get_caption_url(info, lang, 'vtt', auto, trans_lang))
+
     return {
-        'url': util.prefix_url(yt_data_extract.get_caption_url(info, lang, 'vtt', auto, trans_lang)),
+        'url': url,
         'label': label,
         'srclang': trans_lang[0:2] if trans_lang else lang[0:2],
         'on': False,
@@ -300,11 +329,8 @@ def get_ordered_music_list_attributes(music_list):
 def save_decrypt_cache():
-    try:
-        f = open(os.path.join(settings.data_dir, 'decrypt_function_cache.json'), 'w')
-    except FileNotFoundError:
-        os.makedirs(settings.data_dir)
-        f = open(os.path.join(settings.data_dir, 'decrypt_function_cache.json'), 'w')
+    os.makedirs(settings.data_dir, exist_ok=True)
+    f = open(os.path.join(settings.data_dir, 'decrypt_function_cache.json'), 'w')

     f.write(json.dumps({'version': 1, 'decrypt_cache':decrypt_cache}, indent=4, sort_keys=True))
     f.close()
@@ -367,32 +393,61 @@ def fetch_watch_page_info(video_id, playlist_id, index):
     watch_page = watch_page.decode('utf-8')
     return yt_data_extract.extract_watch_info_from_html(watch_page)


 def extract_info(video_id, use_invidious, playlist_id=None, index=None):
+    primary_client = 'android_vr'
+    fallback_client = 'ios'
+    last_resort_client = 'tv_embedded'
     tasks = (
         # Get video metadata from here
         gevent.spawn(fetch_watch_page_info, video_id, playlist_id, index),
-        gevent.spawn(fetch_player_response, 'android_vr', video_id)
+        gevent.spawn(fetch_player_response, primary_client, video_id)
     )
     gevent.joinall(tasks)
     util.check_gevent_exceptions(*tasks)
-    info, player_response = tasks[0].value, tasks[1].value
+    info = tasks[0].value or {}
+    player_response = tasks[1].value or {}
+
+    # Save android_vr caption tracks (no PO Token needed for these URLs)
+    if isinstance(player_response, str):
+        try:
+            pr_data = json.loads(player_response)
+        except Exception:
+            pr_data = {}
+    else:
+        pr_data = player_response or {}
+    android_caption_tracks = yt_data_extract.deep_get(
+        pr_data, 'captions', 'playerCaptionsTracklistRenderer',
+        'captionTracks', default=[])
+    info['_android_caption_tracks'] = android_caption_tracks

     yt_data_extract.update_with_new_urls(info, player_response)

-    # Age restricted video, retry
-    if info['age_restricted'] or info['player_urls_missing']:
-        if info['age_restricted']:
-            print('Age restricted video, retrying')
-        else:
-            print('Player urls missing, retrying')
-        player_response = fetch_player_response('tv_embedded', video_id)
-        yt_data_extract.update_with_new_urls(info, player_response)
+    # Fallback to 'ios' if no valid URLs are found
+    if not info.get('formats') or info.get('player_urls_missing'):
+        print(f"No URLs found in '{primary_client}', attempting with '{fallback_client}'.")
+        try:
+            player_response = fetch_player_response(fallback_client, video_id) or {}
+            yt_data_extract.update_with_new_urls(info, player_response)
+        except util.FetchError as e:
+            print(f"Fallback '{fallback_client}' failed: {e}")
+
+    # Final attempt with 'tv_embedded' if there are still no URLs
+    if not info.get('formats') or info.get('player_urls_missing'):
+        print(f"No URLs found in '{fallback_client}', attempting with '{last_resort_client}'")
+        try:
+            player_response = fetch_player_response(last_resort_client, video_id) or {}
+            yt_data_extract.update_with_new_urls(info, player_response)
+        except util.FetchError as e:
+            print(f"Fallback '{last_resort_client}' failed: {e}")

     # signature decryption
-    decryption_error = decrypt_signatures(info, video_id)
-    if decryption_error:
-        decryption_error = 'Error decrypting url signatures: ' + decryption_error
-        info['playability_error'] = decryption_error
+    if info.get('formats'):
+        decryption_error = decrypt_signatures(info, video_id)
+        if decryption_error:
+            info['playability_error'] = 'Error decrypting url signatures: ' + decryption_error

     # check if urls ready (non-live format) in former livestream
     # urls not ready if all of them have no filesize
@@ -406,21 +461,21 @@ def extract_info(video_id, use_invidious, playlist_id=None, index=None):
     # livestream urls
     # sometimes only the livestream urls work soon after the livestream is over
-    if (info['hls_manifest_url']
-            and (info['live'] or not info['formats'] or not info['urls_ready'])
-            ):
-        manifest = util.fetch_url(info['hls_manifest_url'],
-            debug_name='hls_manifest.m3u8',
-            report_text='Fetched hls manifest'
-            ).decode('utf-8')
-        info['hls_formats'], err = yt_data_extract.extract_hls_formats(manifest)
-        if not err:
-            info['playability_error'] = None
-        for fmt in info['hls_formats']:
-            fmt['video_quality'] = video_quality_string(fmt)
-    else:
-        info['hls_formats'] = []
+    info['hls_formats'] = []
+    if info.get('hls_manifest_url') and (info.get('live') or not info.get('formats') or not info['urls_ready']):
+        try:
+            manifest = util.fetch_url(info['hls_manifest_url'],
+                debug_name='hls_manifest.m3u8',
+                report_text='Fetched hls manifest'
+                ).decode('utf-8')
+            info['hls_formats'], err = yt_data_extract.extract_hls_formats(manifest)
+            if not err:
+                info['playability_error'] = None
+            for fmt in info['hls_formats']:
+                fmt['video_quality'] = video_quality_string(fmt)
+        except Exception as e:
+            print(f"Error fetching HLS manifest: {e}")
+            info['hls_formats'] = []

     # check for 403. Unnecessary for tor video routing b/c ip address is same
     info['invidious_used'] = False
@@ -615,7 +670,12 @@ def get_watch_page(video_id=None):
     # prefix urls, and other post-processing not handled by yt_data_extract
     for item in info['related_videos']:
-        item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['id'])  # set HQ relateds thumbnail videos
+        # Only set thumbnail if YouTube didn't provide one
+        if not item.get('thumbnail'):
+            if item.get('type') == 'playlist' and item.get('first_video_id'):
+                item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['first_video_id'])
+            elif item.get('type') == 'video' and item.get('id'):
+                item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['id'])
         util.prefix_urls(item)
         util.add_extra_html_info(item)
     for song in info['music_list']:
@@ -623,6 +683,9 @@ def get_watch_page(video_id=None):
     if info['playlist']:
         playlist_id = info['playlist']['id']
         for item in info['playlist']['items']:
+            # Only set thumbnail if YouTube didn't provide one
+            if not item.get('thumbnail') and item.get('type') == 'video' and item.get('id'):
+                item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['id'])
             util.prefix_urls(item)
             util.add_extra_html_info(item)
     if playlist_id:
@@ -807,9 +870,14 @@ def get_watch_page(video_id=None):
 @yt_app.route('/api/<path:dummy>')
 def get_captions(dummy):
-    result = util.fetch_url('https://www.youtube.com' + request.full_path)
-    result = result.replace(b"align:start position:0%", b"")
-    return result
+    url = 'https://www.youtube.com' + request.full_path
+    try:
+        result = util.fetch_url(url, headers=util.mobile_ua)
+        result = result.replace(b"align:start position:0%", b"")
+        return flask.Response(result, mimetype='text/vtt')
+    except Exception as e:
+        logger.debug(f'Caption fetch failed: {e}')
+        return flask.Response(b'WEBVTT\n\n', mimetype='text/vtt', status=200)


 times_reg = re.compile(r'^\d\d:\d\d:\d\d\.\d\d\d --> \d\d:\d\d:\d\d\.\d\d\d.*$')


@@ -226,6 +226,89 @@ def check_missing_keys(object, *key_sequences):
     return None


+def extract_lockup_view_model_info(item, additional_info={}):
+    """Extract info from new lockupViewModel format (YouTube 2024+)"""
+    info = {'error': None}
+    content_type = item.get('contentType', '')
+    content_id = item.get('contentId', '')
+
+    # Extract title from metadata
+    metadata = item.get('metadata', {})
+    lockup_metadata = metadata.get('lockupMetadataViewModel', {})
+    title_data = lockup_metadata.get('title', {})
+    info['title'] = title_data.get('content', '')
+
+    # Determine type based on contentType
+    if 'PLAYLIST' in content_type:
+        info['type'] = 'playlist'
+        info['playlist_type'] = 'playlist'
+        info['id'] = content_id
+        info['video_count'] = None
+        info['first_video_id'] = None
+        # Try to get video count from metadata
+        metadata_rows = lockup_metadata.get('metadata', {})
+        for row in metadata_rows.get('contentMetadataViewModel', {}).get('metadataRows', []):
+            for part in row.get('metadataParts', []):
+                text = part.get('text', {}).get('content', '')
+                if 'video' in text.lower():
+                    info['video_count'] = extract_int(text)
+    elif 'VIDEO' in content_type:
+        info['type'] = 'video'
+        info['id'] = content_id
+        info['view_count'] = None
+        info['approx_view_count'] = None
+        info['time_published'] = None
+        info['duration'] = None
+        # Extract duration/other info from metadata rows
+        metadata_rows = lockup_metadata.get('metadata', {})
+        for row in metadata_rows.get('contentMetadataViewModel', {}).get('metadataRows', []):
+            for part in row.get('metadataParts', []):
+                text = part.get('text', {}).get('content', '')
+                if 'view' in text.lower():
+                    info['approx_view_count'] = extract_approx_int(text)
+                elif 'ago' in text.lower():
+                    info['time_published'] = text
+    elif 'CHANNEL' in content_type:
+        info['type'] = 'channel'
+        info['id'] = content_id
+        info['approx_subscriber_count'] = None
+    else:
+        info['type'] = 'unsupported'
+        return info
+
+    # Extract thumbnail from contentImage
+    content_image = item.get('contentImage', {})
+    collection_thumb = content_image.get('collectionThumbnailViewModel', {})
+    primary_thumb = collection_thumb.get('primaryThumbnail', {})
+    thumb_vm = primary_thumb.get('thumbnailViewModel', {})
+    image_sources = thumb_vm.get('image', {}).get('sources', [])
+    if image_sources:
+        info['thumbnail'] = image_sources[0].get('url', '')
+    else:
+        info['thumbnail'] = ''
+
+    # Extract author info if available
+    info['author'] = None
+    info['author_id'] = None
+    info['author_url'] = None
+
+    # Try to get first video ID from inline player data
+    item_playback = item.get('itemPlayback', {})
+    inline_player = item_playback.get('inlinePlayerData', {})
+    on_select = inline_player.get('onSelect', {})
+    innertube_cmd = on_select.get('innertubeCommand', {})
+    watch_endpoint = innertube_cmd.get('watchEndpoint', {})
+    if watch_endpoint.get('videoId'):
+        info['first_video_id'] = watch_endpoint.get('videoId')
+
+    info.update(additional_info)
+    return info
+
+
 def extract_item_info(item, additional_info={}):
     if not item:
         return {'error': 'No item given'}
@@ -243,6 +326,10 @@ def extract_item_info(item, additional_info={}):
         info['type'] = 'unsupported'
         return info

+    # Handle new lockupViewModel format (YouTube 2024+)
+    if type == 'lockupViewModel':
+        return extract_lockup_view_model_info(item, additional_info)
+
     # type looks like e.g. 'compactVideoRenderer' or 'gridVideoRenderer'
     # camelCase split, https://stackoverflow.com/a/37697078
     type_parts = [s.lower() for s in re.sub(r'([A-Z][a-z]+)', r' \1', type).split()]
@@ -282,9 +369,9 @@ def extract_item_info(item, additional_info={}):
         ['detailedMetadataSnippets', 0, 'snippetText'],
     ))
     info['thumbnail'] = normalize_url(multi_deep_get(item,
-        ['thumbnail', 'thumbnails', 0, 'url'],  # videos
-        ['thumbnails', 0, 'thumbnails', 0, 'url'],  # playlists
-        ['thumbnailRenderer', 'showCustomThumbnailRenderer', 'thumbnail', 'thumbnails', 0, 'url'],  # shows
+        ['thumbnail', 'thumbnails', -1, 'url'],  # videos (highest quality)
+        ['thumbnails', 0, 'thumbnails', -1, 'url'],  # playlists
+        ['thumbnailRenderer', 'showCustomThumbnailRenderer', 'thumbnail', 'thumbnails', -1, 'url'],  # shows
     ))

     info['badges'] = []
@@ -441,6 +528,9 @@ _item_types = {
     'channelRenderer',
     'compactChannelRenderer',
     'gridChannelRenderer',
+    # New viewModel format (YouTube 2024+)
+    'lockupViewModel',
 }


 def _traverse_browse_renderer(renderer):


@@ -628,6 +628,7 @@ def extract_watch_info(polymer_json):
     info['manual_caption_languages'] = []
     info['_manual_caption_language_names'] = {}  # language name written in that language, needed in some cases to create the url
     info['translation_languages'] = []
+    info['_caption_track_urls'] = {}  # lang_code -> full baseUrl from player response
     captions_info = player_response.get('captions', {})
     info['_captions_base_url'] = normalize_url(deep_get(captions_info, 'playerCaptionsRenderer', 'baseUrl'))
     # Sometimes the above playerCaptionsRender is randomly missing
@@ -658,6 +659,10 @@ def extract_watch_info(polymer_json):
         else:
             info['manual_caption_languages'].append(lang_code)
         base_url = caption_track.get('baseUrl', '')
+        # Store the full URL from the player response (includes valid tokens)
+        if base_url:
+            normalized = normalize_url(base_url) if base_url.startswith('/') or not base_url.startswith('http') else base_url
+            info['_caption_track_urls'][lang_code + ('_asr' if caption_track.get('kind') == 'asr' else '')] = normalized
         lang_name = deep_get(urllib.parse.parse_qs(urllib.parse.urlparse(base_url).query), 'name', 0)
         if lang_name:
             info['_manual_caption_language_names'][lang_code] = lang_name
@@ -825,6 +830,21 @@ def captions_available(info):
 def get_caption_url(info, language, format, automatic=False, translation_language=None):
     '''Gets the url for captions with the given language and format. If automatic is True, get the automatic captions for that language. If translation_language is given, translate the captions from `language` to `translation_language`. If automatic is true and translation_language is given, the automatic captions will be translated.'''
+    # Try to use the direct URL from the player response first (has valid tokens)
+    track_key = language + ('_asr' if automatic else '')
+    direct_url = info.get('_caption_track_urls', {}).get(track_key)
+    if direct_url:
+        url = direct_url
+        # Override format
+        if '&fmt=' in url:
+            url = re.sub(r'&fmt=[^&]*', '&fmt=' + format, url)
+        else:
+            url += '&fmt=' + format
+        if translation_language:
+            url += '&tlang=' + translation_language
+        return url
+
+    # Fallback to base_url construction
     url = info['_captions_base_url']
     if not url:
         return None
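
The fmt/tlang rewriting used by both `make_caption_src` and `get_caption_url` can be factored into a standalone sketch (`set_caption_format` is a hypothetical helper, not project code):

```python
import re

def set_caption_format(url, fmt='vtt', tlang=None):
    """Force the caption format query parameter, replacing any existing
    &fmt= value, then optionally append a translation language."""
    if '&fmt=' in url:
        url = re.sub(r'&fmt=[^&]*', '&fmt=' + fmt, url)
    else:
        url += '&fmt=' + fmt
    if tlang:
        url += '&tlang=' + tlang
    return url
```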