# yt-local

Fork of [youtube-local](https://github.com/user234683/youtube-local)

[License: AGPL v3](https://www.gnu.org/licenses/agpl-3.0) • [Python 3](https://www.python.org/downloads/) • [Based on youtube-local](https://github.com/user234683/youtube-local)
A privacy-focused, browser-based YouTube client that routes requests through Tor for anonymous viewing.

yt-local is a browser-based client written in Python for watching YouTube anonymously and without the lag of YouTube's slow pages. One of its primary features is that all requests are routed through Tor, except for the video file at googlevideo.com (Tor video routing is also available as an option). This is analogous to what HookTube (defunct) and Invidious do, except that you do not have to trust a third party to respect your privacy. The assumption is that Google won't put in the effort to incorporate video-file requests into its tracking, since the tiny number of privacy-conscious users isn't worth pursuing. Tor has high latency, so this will not be as fast network-wise as regular YouTube; however, Tor is optional, and without it video pages may load faster than YouTube's own, depending on your browser.

[Features](#features) • [Install](#install) • [Usage](#usage) • [Screenshots](#screenshots)

---

> [!NOTE]
> How it works: yt-local mirrors YouTube's web requests (using the same InnerTube endpoints as yt-dlp and Invidious) but strips JavaScript and serves a lightweight HTML frontend. No API keys are needed.
## Overview

yt-local is a lightweight, self-hosted YouTube client written in Python that gives you:

- **Privacy first**: all requests route through Tor by default (video optional), keeping you anonymous.
- **Fast page loads**: no lazy loading, no layout reflows, instant comment rendering.
- **Full control**: customize subtitles, related videos, comments, and playback speed.
- **High quality**: supports all YouTube video qualities (144p–2160p) via DASH muxing.
- **Zero ads**: clean interface, no tracking, no sponsored content.
- **Self-hosted**: you control the instance; no third-party trust required.
## Features

| Category | Features |
|---|---|
| Core | Search, channels, playlists, watch pages, comments, subtitles (auto/manual) |
| Privacy | Optional Tor routing (including video), automatic circuit rotation on 429 errors |
| Local | Local playlists (durable against YouTube deletions), thumbnail caching |
| UI | 3 themes (Light/Gray/Dark), theater mode, custom font selection |
| Config | Fine-grained settings: subtitle mode, comment visibility, SponsorBlock integration |
| Performance | No JavaScript required, instant page rendering, rate limiting with exponential backoff |
| Subscriptions | Import from YouTube Takeout (CSV/JSON), tag organization, mute channels |

### Advanced Capabilities

- SponsorBlock integration: skip sponsored segments automatically
- Custom video speeds: 0.25x to 4x playback rate
- Video transcripts: accessible via the transcript button
- Video quality muxing: combine separate video/audio streams for resolutions other than 360p/720p
- Tor circuit rotation: automatic new identity on rate limiting (HTTP 429)
- File downloading: download videos or audio (disabled by default, configurable)
## Screenshots

- [Light theme](https://pic.infini.fr/l7WINjzS/0Ru6MrhA.png)
- [Gray theme](https://pic.infini.fr/znnQXWNc/hL78CRzo.png)
- [Dark theme](https://pic.infini.fr/iXwFtTWv/mt2kS5bv.png)
- [Channel view](https://pic.infini.fr/JsenWVYe/SbdIQlS6.png)

---
### Feature details

* Local playlists solve the two problems with creating playlists on YouTube: (1) they're datamined, and (2) videos frequently get deleted by YouTube and lost from the playlist, making it very difficult to find a reupload since the title of the deleted video is not displayed.
* Subscriptions are independent from YouTube and work by checking channels individually:
  * Channels can be checked automatically; for efficiency, the checking frequency is based on how quickly a channel posts videos.
  * Channels can be muted as a way to "soft" unsubscribe: muted channels won't be checked automatically or by the "Check all" button, and their videos are hidden.
  * Subscriptions can be tagged to organize them or to check specific tags.
* Settings allow fine-tuned control over when and how comments or related videos are shown:
  1. Shown by default, with click to hide
  2. Hidden by default, with click to show
  3. Never shown
* Optionally skip sponsored segments using [SponsorBlock](https://github.com/ajayyy/SponsorBlock)'s API.
## Planned features

- [ ] Put videos from subscriptions or local playlists into the related videos
- [x] Information about the video (geographic regions, region of Tor exit node, etc.)
- [ ] Ability to delete playlists
- [ ] Auto-saving of local playlist videos
- [ ] Import a YouTube playlist into a local playlist
- [ ] Rearrange items of a local playlist
- [x] Video qualities other than 360p and 720p by muxing video and audio
- [x] Indicate if comments are disabled
- [x] Indicate how many comments a video has
- [ ] Featured channels page
- [ ] Channel comments
- [x] Video transcript
- [x] Automatic Tor circuit change when blocked
- [x] Support the `&t` parameter
- [ ] Subscriptions: option to mark what has been watched
- [ ] Subscriptions: option to filter videos based on keywords in title or description
- [ ] Subscriptions: delete old entries and thumbnails
- [ ] Support for more sites, such as Vimeo, Dailymotion, LBRY, etc.
## Install

### Windows

1. Download the latest [release ZIP](https://github.com/user234683/yt-local/releases)
2. Extract it to any folder
3. Run `run.bat` to start

### GNU/Linux / macOS

Download the tarball under the Releases page and extract it, or clone the repository. Then:

```bash
# 1. Clone or extract the release
git clone https://github.com/user234683/yt-local.git
cd yt-local

# 2. Create and activate a virtual environment
python3 -m venv venv
source venv/bin/activate

# 3. Install dependencies
pip install -r requirements.txt

# 4. Run the server
python3 server.py
```

> [!TIP]
> If `pip` isn't installed, install it from your package manager, making sure you get the Python 3 version (e.g., `sudo apt install python3-pip` on Debian/Ubuntu, not `python-pip`). If your package manager doesn't provide it, follow [this answer](https://unix.stackexchange.com/a/182467), but run `python3 get-pip.py` instead of `python get-pip.py`.
### Portable Mode

To keep settings and data in the same directory as the app:

```bash
# Create an empty settings.txt in the project root
touch settings.txt
python3 server.py
# Data is now stored in ./data/ instead of ~/.yt-local/
```

---
## Usage

If you wish to run yt-local in portable mode, create an empty file named `settings.txt` in the program's main directory; settings and data will then be stored in the same directory as the program. Otherwise, settings and data are stored in `C:\Users\[your username]\.yt-local` on Windows and `~/.yt-local` on GNU/Linux and macOS.
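The lookup order just described can be sketched in a few lines of Python. This is an illustration of the documented behavior, not yt-local's actual code; `data_dir` is a hypothetical helper:

```python
import os

def data_dir(program_dir="."):
    """Sketch of the documented lookup: portable mode if ./settings.txt
    exists next to the program, otherwise the per-user ~/.yt-local."""
    if os.path.isfile(os.path.join(program_dir, "settings.txt")):
        return program_dir  # portable: data lives next to the program
    return os.path.expanduser("~/.yt-local")
```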
### Basic Access

To run the program on Windows, open `run.bat`. On GNU/Linux and macOS, run `python3 server.py`.

Access YouTube URLs by prefixing them with `http://localhost:9010/`.
For instance, `http://localhost:9010/https://www.youtube.com/watch?v=vBgulDeV2RU`

You can use an addon such as Redirector ([Firefox](https://addons.mozilla.org/en-US/firefox/addon/redirector/) | [Chrome](https://chrome.google.com/webstore/detail/redirector/ocgpenflpmgnfapjedencafcfakcekcd)) to automatically redirect YouTube URLs to yt-local. Use the include pattern `^(https?://(?:[a-zA-Z0-9_-]*\.)?(?:youtube\.com|youtu\.be|youtube-nocookie\.com)/.*)` and the redirect pattern `http://localhost:9010/$1` (make sure regular-expression mode is enabled).
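You can sanity-check the include pattern with a few lines of Python (illustrative only; Redirector itself applies the regex inside the browser):

```python
import re

# The include pattern from the Redirector rule described above
PATTERN = re.compile(
    r"^(https?://(?:[a-zA-Z0-9_-]*\.)?"
    r"(?:youtube\.com|youtu\.be|youtube-nocookie\.com)/.*)"
)

def redirect(url):
    """Return the yt-local URL for a matching YouTube URL, else None."""
    m = PATTERN.match(url)
    return "http://localhost:9010/" + m.group(1) if m else None

print(redirect("https://www.youtube.com/watch?v=vBgulDeV2RU"))
# -> http://localhost:9010/https://www.youtube.com/watch?v=vBgulDeV2RU
```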
If you want embeds on the web to also redirect to yt-local, make sure "Iframes" is checked under the advanced options of your Redirector rule. To test, try `http://localhost:9010/youtube.com/embed/vBgulDeV2RU`.

yt-local can be added as a search engine in Firefox to make searching more convenient. See [Mozilla's documentation](https://support.mozilla.org/en-US/kb/add-or-remove-search-engine-firefox) on Firefox search engines.
### Using Tor

In the settings page, set "Route Tor" to "On, except video" (the second option). Be sure to save the settings.

Ensure Tor is listening for SOCKS5 connections on port 9150. A simple way to accomplish this is to open the Tor Browser Bundle and leave it open. Note that you will not access yt-local (at http://localhost:9010) through the Tor Browser; you will use your regular browser for that. The Tor Browser is just a quick way to give the program access to Tor routing.
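To verify that something is actually listening on the SOCKS port before enabling Tor routing, a quick check like the following works. This is an illustrative snippet; 9150 is the Tor Browser's default SOCKS port:

```python
import socket

def tor_listening(host="127.0.0.1", port=9150, timeout=1.0):
    """Return True if a TCP listener accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```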
### Standalone Tor

If you don't want to waste system resources leaving the Tor Browser open alongside your regular browser, you can run standalone Tor instead.

On Windows, to run standalone Tor at startup, press Windows Key + R and type `shell:startup` to open the Startup folder. Create a shortcut there with the command `"C:\[path-to-Tor-Browser-directory]\Tor\tor.exe" SOCKSPort 9150 ControlPort 9151`, then launch the shortcut to start Tor. If something isn't working, open `cmd.exe`, go to `C:\[path-to-Tor-Browser-directory]\Tor`, and run `tor SOCKSPort 9150 ControlPort 9151 | more`; the `| more` works around a Windows cmd bug where tor displays no output. You can stop Tor from the Task Manager.

On Debian/Ubuntu, run `sudo apt install tor` to install the command-line version of Tor, then `sudo systemctl start tor` to run it as a background service that will also start at boot. Note that command-line Tor listens on port `9050` by default (rather than the Tor Browser's 9150), so set "Tor port" to `9050` and "Tor control port" to `9051` in the yt-local settings page. You will also need to enable the control port by uncommenting `ControlPort 9051` and setting `CookieAuthentication` to 0 in `/etc/tor/torrc`. If no Tor package is available for your distro, configure the `tor` binary at `./Browser/TorBrowser/Tor/tor` inside the Tor Browser installation to run at startup, or create a service for it.
### Tor video routing

If you wish to route the video through Tor, set "Route Tor" to "On, including video". Because this is bandwidth-intensive, you are strongly encouraged to donate to the [consortium of Tor node operators](https://torservers.net/donate.html). For instance, donations to [NoiseTor](https://noisetor.net/) go straight towards funding nodes. Using their numbers for bandwidth costs, together with an average of 485 kbit/s for a diverse sample of videos, and assuming n hours of video watched per day, gives $0.03n/month. A $1/month donation is a very generous amount that will not only offset losses but help keep the network healthy.
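The transfer-volume arithmetic behind that estimate (485 kbit/s average bitrate, n hours per day, over a 30-day month) works out as follows; NoiseTor's per-GB price is not reproduced here:

```python
def gb_per_month(hours_per_day):
    """Monthly transfer volume at a 485 kbit/s average bitrate."""
    bytes_per_hour = 485_000 / 8 * 3600   # ~218 MB per hour of video
    return bytes_per_hour * hours_per_day * 30 / 1e9

print(round(gb_per_month(1), 1))  # about 6.5 GB/month at one hour per day
```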
In general, Tor video routing will be slower; for instance, seeking within the video is quite slow. I've never seen any sign that watch history in yt-local affects on-site YouTube recommendations. Requests to googlevideo are likely logged for some period of time, but probably aren't integrated into YouTube's larger advertisement/recommendation systems, since those presumably depend more heavily on in-page JavaScript tracking than on CDN requests to googlevideo.
### Importing subscriptions

1. Go to the [Google Takeout manager](https://takeout.google.com/takeout/custom/youtube).
2. Log in if asked.
3. Click "All data included", then "Deselect all", then select only "subscriptions" and click "OK".
4. Click "Next step", then "Create export".
5. Click the "Download" button once it appears.
6. Extract the .csv file from the downloaded Takeout zip. It is usually located under `YouTube and YouTube Music/subscriptions/subscriptions.csv`.
7. Go to the subscriptions manager in yt-local. In the import area, select your .csv file, then press import.

Supported subscription import formats:

- Google Takeout CSV (columns: `channel_id,channel_name,channel_url`)
- Old Google Takeout JSON
- NewPipe subscriptions export JSON
- OPML from the now-removed YouTube subscriptions manager
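For reference, a Takeout-style CSV can be inspected with Python's `csv` module. The header below matches the columns yt-local expects; the sample row is made up:

```python
import csv
import io

# Hypothetical sample row in the expected column layout
sample = (
    "channel_id,channel_name,channel_url\n"
    "UC123,Example Channel,https://www.youtube.com/channel/UC123\n"
)

rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    print(row["channel_id"], row["channel_name"])
```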
## GPG public key

GPG key fingerprint for release verification:

```
72CFB264DFC43F63E098F926E607CE7149F4D71C
```

## Public instances

yt-local is not designed to run in public mode; however, there is one public instance (with fewer features):

- <https://m.fridu.us/https://youtube.com>
---

## Configuration

Visit `http://localhost:9010/settings` to configure:

| Setting | Description |
|---|---|
| Route Tor | Off / On (except video) / On (including video) |
| Default subtitles | Off / Manual only / Auto + Manual |
| Comments mode | Shown by default / Hidden by default / Never |
| Related videos | Same options as comments |
| Theme | Light / Gray / Dark |
| Font | Browser default / Serif / Sans-serif |
| Default resolution | Auto / 144p–2160p |
| SponsorBlock | Enable skipping of sponsored segments |
| Proxy images | Route thumbnails through yt-local (for privacy) |
## Troubleshooting

| Issue | Solution |
|---|---|
| Port already in use | Change `port_number` in `/settings`, or kill the existing process: `pkill -f "python3 server.py"` |
| 429 Too Many Requests | Enable Tor routing for automatic IP rotation, or wait 5–10 minutes |
| Failed to connect to Tor | Verify Tor is running (`tor --version`), or launch the Tor Browser |
| Subscriptions not importing | Ensure the CSV has the columns `channel_id,channel_name,channel_url` |
| Can't find your settings | Check `~/.yt-local/settings.txt` (default) or `./settings.txt` (portable mode) |

---
## Development

### Running Tests

```bash
source venv/bin/activate  # if not already in the venv
make test
```

### Project Structure

```
yt-local/
├── youtube/                 # Core application logic
│   ├── __init__.py          # Flask app entry point
│   ├── util.py              # HTTP utilities, Tor manager, fetch_url
│   ├── watch.py             # Video/playlist page handlers
│   ├── channel.py           # Channel page handlers
│   ├── playlist.py          # Playlist handlers
│   ├── search.py            # Search handlers
│   ├── comments.py          # Comment extraction/rendering
│   ├── subscriptions.py     # Subscription management + SQLite
│   ├── local_playlist.py    # Local playlist CRUD
│   ├── proto.py             # YouTube protobuf token generation
│   ├── yt_data_extract/     # Polymer JSON parsing abstractions
│   └── hls_cache.py         # HLS audio/video streaming proxy
├── templates/               # Jinja2 HTML templates
├── static/                  # CSS/JS assets
├── translations/            # i18n files (Babel)
├── tests/                   # pytest test suite
├── server.py                # WSGI entry point
├── settings.py              # Settings parser + admin page
├── generate_release.py      # Windows release builder
└── manage_translations.py   # i18n maintenance script
```

> [!NOTE]
> For detailed architecture guidance, see [`docs/HACKING.md`](docs/HACKING.md).

### Contributing

Pull requests and issues are welcome. Please:

1. Read [`docs/HACKING.md`](docs/HACKING.md) for coding guidelines
2. Follow [PEP 8](https://peps.python.org/pep-0008/) style (use `ruff format`)
3. Run tests before submitting: `pytest`
4. Check for security issues: `bandit -r .`
5. Update docs for new features
---
## Security Notes

- **No API keys required**: uses the same endpoints as the public YouTube web interface
- **Tor is optional**: disable it in `/settings` if you prefer performance over anonymity
- **Rate limiting handled**: exponential backoff (max 5 retries) with automatic Tor circuit rotation
- **Path traversal protected**: user input is validated against regex whitelists (CWE-22)
- **Subprocess calls secured**: build scripts use `subprocess.run([...])` instead of the shell (CWE-78)
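As an illustration of the capped exponential backoff described above (a sketch, not yt-local's actual retry code):

```python
def backoff_delays(retries=5, base=1.0):
    """The delay before each retry doubles, capped at `retries` attempts."""
    return [base * (2 ** i) for i in range(retries)]

print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0, 16.0]
```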
---
## Donate

This project is 100% free/libre and always will be. If you'd like to support development:

- **Bitcoin**: `1JrC3iqs3PP5Ge1m1vu7WE8LEf4S85eo7y`
- **Tor node donation**: <https://torservers.net/donate>
---
## License

This project is licensed under the GNU Affero General Public License v3 (GNU AGPLv3) or any later version. See [`LICENSE`](LICENSE) for the full text.

### Exception for youtube-dl

Permission is hereby granted to the youtube-dl project at [https://github.com/ytdl-org/youtube-dl](https://github.com/ytdl-org/youtube-dl) to relicense any portion of this software under the Unlicense, public domain, or whichever license is in use by youtube-dl at the time of relicensing, for the purpose of including that portion in youtube-dl. Relicensing permission is not granted for any purpose outside of direct inclusion into the [official repository](https://github.com/ytdl-org/youtube-dl) of youtube-dl. If inclusion happens through a pull request, relicensing happens at the moment the pull request is merged into youtube-dl; until that moment, any cloned repositories of youtube-dl that make use of this software are subject to the terms of the GNU AGPLv3.
## Similar projects

- [invidious](https://github.com/iv-org/invidious): similar to this project, but can also be hosted as a server to serve many users
- [Yotter](https://github.com/ytorg/Yotter): similar to this project and to Invidious; also supports Twitter
- [FreeTube](https://github.com/FreeTubeApp/FreeTube): similar to this project, but an Electron app outside the browser
- [youtube-local](https://github.com/user234683/youtube-local): the project yt-local is based on
- [NewPipe](https://newpipe.schabi.org/): Android app
- [mps-youtube](https://github.com/mps-youtube/mps-youtube): terminal-only program
- [youtube-viewer](https://github.com/trizen/youtube-viewer)
- [smtube](https://www.smtube.org/)
- [Minitube](https://flavio.tordini.org/minitube) ([GitHub](https://github.com/flaviotordini/minitube))
- [toogles](https://github.com/mikecrittenden/toogles): only embeds videos
- [YTLibre](https://git.sr.ht/~heckyel/ytlibre): only extracts the video
- [youtube-dl](https://rg3.github.io/youtube-dl/): the downloader this project builds on
---

Made for privacy-conscious users

Last updated: 2026-04-19
# Basic init yt-local for OpenRC

## Prerequisites

- A system with OpenRC installed and configured.
- Administrative privileges (doas or sudo).
- The `ytlocal` script located at `/usr/sbin/ytlocal` and the application files in an accessible directory.

## Service Installation

1. **Create the OpenRC service script** `/etc/init.d/ytlocal`:

```sh
#!/sbin/openrc-run
# Distributed under the terms of the GNU General Public License v3 or later
name="yt-local"
pidfile="/var/run/ytlocal.pid"
command="/usr/sbin/ytlocal"

depend() {
    use net
}

start_pre() {
    if [ ! -f /usr/sbin/ytlocal ]; then
        eerror "Please create the ytlocal script at '/usr/sbin/ytlocal'"
        return 1
    else
        return 0
    fi
}

start() {
    ebegin "Starting yt-local"
    start-stop-daemon --start --exec "${command}" --pidfile "${pidfile}"
    eend $?
}

reload() {
    ebegin "Reloading ${name}"
    start-stop-daemon --signal HUP --pidfile "${pidfile}"
    eend $?
}

stop() {
    ebegin "Stopping ${name}"
    start-stop-daemon --quiet --stop --exec "${command}" --pidfile "${pidfile}"
    eend $?
}
```

> [!NOTE]
> Ensure the script is executable:
>
> ```sh
> doas chmod a+x /etc/init.d/ytlocal
> ```

> [!TIP]
> To store the PID in a different location, adjust the `pidfile` variable in the service script.

2. **Create the executable script** `/usr/sbin/ytlocal`:

```sh
#!/usr/bin/env bash
# Change the working directory according to your installation path
cd /home/your-path/ytlocal/  # <-- MODIFY TO YOUR PATH
source venv/bin/activate
python server.py > /dev/null 2>&1 &
echo $! > /var/run/ytlocal.pid
```

> [!NOTE]
> Make this script executable as well:
>
> ```sh
> doas chmod a+x /usr/sbin/ytlocal
> ```

> [!IMPORTANT]
> Verify that the virtual environment (`venv`) is correctly set up and that `python` points to the appropriate version.

> [!WARNING]
> Run this script only as root or via `doas`, as it writes to `/var/run` and uses network privileges.

## Service Management

- **Status**: `doas rc-service ytlocal status`
- **Start**: `doas rc-service ytlocal start`
- **Restart**: `doas rc-service ytlocal restart`
- **Stop**: `doas rc-service ytlocal stop`
- **Enable at boot**: `doas rc-update add ytlocal default`
- **Disable**: `doas rc-update del ytlocal`

> [!CAUTION]
> Do not stop the process manually; use OpenRC commands (`rc-service ytlocal stop`) to avoid race conditions.

> [!NOTE]
> When yt-local runs with administrative privileges, its configuration is stored in `/root/.yt-local`, which is readable by root only.

## Post-Installation Verification

- Confirm the process is running: `doas rc-service ytlocal status`
- Inspect logs for issues: `doas tail -f /var/log/ytlocal.log` (if logging is configured)

## Troubleshooting Common Issues

- **Service fails to start**: verify the script permissions, the `command=` path, and that the virtualenv exists.
- **Port conflict**: adjust the server's port configuration before launching.
- **Import errors**: ensure all dependencies are installed in the virtual environment.

> [!IMPORTANT]
> Keep the service script updated when modifying startup logic or adding new dependencies.
|
||||
```diff
@@ -44,6 +44,10 @@ def remove_files_with_extensions(path, extensions):


 def download_if_not_exists(file_name, url, sha256=None):
     if not os.path.exists('./' + file_name):
+        # Reject non-https URLs so a mistaken constant cannot cause a
+        # plaintext download (bandit B310 hardening).
+        if not url.startswith('https://'):
+            raise Exception('Refusing to download over non-https URL: ' + url)
         log('Downloading ' + file_name + '..')
         data = urllib.request.urlopen(url).read()
         log('Finished downloading ' + file_name)
```
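The hunk above adds an https-only guard, and the `sha256=None` parameter suggests the script can also pin a digest per download. A hedged sketch of how both checks fit together; the function names here are illustrative, not the release script's actual helpers:

```python
import hashlib
import urllib.request


def verify_sha256(data, expected_sha256):
    # Compare the actual digest against the pinned one, if provided.
    if expected_sha256 is None:
        return
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise ValueError('sha256 mismatch: expected %s, got %s'
                         % (expected_sha256, actual))


def fetch_verified(url, expected_sha256=None):
    # Refuse plaintext URLs, mirroring the B310 hardening above.
    if not url.startswith('https://'):
        raise ValueError('Refusing to download over non-https URL: ' + url)
    data = urllib.request.urlopen(url).read()
    verify_sha256(data, expected_sha256)
    return data
```

The digest check runs after the full body is read, so a tampered mirror fails loudly instead of producing a corrupt release.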
```diff
@@ -58,12 +62,14 @@ def download_if_not_exists(file_name, url, sha256=None):
         log('Using existing ' + file_name)


 def wine_run_shell(command):
+    # Keep argv-style invocation (no shell) to avoid command injection.
     if os.name == 'posix':
-        check(os.system('wine ' + command.replace('\\', '/')))
+        parts = ['wine'] + command.replace('\\', '/').split()
     elif os.name == 'nt':
-        check(os.system(command))
+        parts = command.split()
     else:
         raise Exception('Unsupported OS')
+    check(subprocess.run(parts).returncode)


 def wine_run(command_parts):
     if os.name == 'posix':
```
```diff
@@ -92,7 +98,20 @@ if os.path.exists('./yt-local'):
 # confused with working directory. I'm calling it the same thing so it will
 # have that name when extracted from the final release zip archive)
 log('Making copy of yt-local files')
-check(os.system('git archive --format tar master | 7z x -si -ttar -oyt-local'))
+# Avoid the shell: pipe `git archive` into 7z directly via subprocess.
+_git_archive = subprocess.Popen(
+    ['git', 'archive', '--format', 'tar', 'master'],
+    stdout=subprocess.PIPE,
+)
+_sevenz = subprocess.Popen(
+    ['7z', 'x', '-si', '-ttar', '-oyt-local'],
+    stdin=_git_archive.stdout,
+)
+_git_archive.stdout.close()
+_sevenz.wait()
+_git_archive.wait()
+check(_sevenz.returncode)
+check(_git_archive.returncode)

 if len(os.listdir('./yt-local')) == 0:
     raise Exception('Failed to copy yt-local files')
```
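The shell-free pipe above generalizes to any two-stage pipeline. A small sketch of the same `Popen`-to-`Popen` pattern; the helper name is made up for illustration:

```python
import subprocess
import sys


def run_pipeline(first_cmd, second_cmd):
    """Pipe first_cmd's stdout into second_cmd without invoking a shell."""
    p1 = subprocess.Popen(first_cmd, stdout=subprocess.PIPE)
    p2 = subprocess.Popen(second_cmd, stdin=p1.stdout,
                          stdout=subprocess.PIPE)
    # Close our copy of the pipe so p1 receives SIGPIPE if p2 exits early.
    p1.stdout.close()
    out, _ = p2.communicate()
    p1.wait()
    if p1.returncode != 0 or p2.returncode != 0:
        raise RuntimeError('pipeline failed: %s, %s'
                           % (p1.returncode, p2.returncode))
    return out
```

Closing the parent's copy of `p1.stdout` is the easy-to-miss step: without it, the first process never sees a broken pipe and the pipeline can hang.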
```diff
@@ -136,7 +155,7 @@ if os.path.exists('./python'):

 log('Extracting python distribution')

-check(os.system(r'7z -y x -opython ' + python_dist_name))
+check_subp(subprocess.run(['7z', '-y', 'x', '-opython', python_dist_name]))

 log('Executing get-pip.py')
 wine_run(['./python/python.exe', '-I', 'get-pip.py'])
@@ -241,7 +260,7 @@ if os.path.exists('./' + output_filename):
     log('Removing previous zipped release')
     os.remove('./' + output_filename)
 log('Zipping release')
-check(os.system(r'7z -mx=9 a ' + output_filename + ' ./yt-local'))
+check_subp(subprocess.run(['7z', '-mx=9', 'a', output_filename, './yt-local']))

 print('\n')
 log('Finished')
```
**server.py** (20 changed lines)
```diff
@@ -1,22 +1,28 @@
 #!/usr/bin/env python3
+# E402 is deliberately ignored in this file: `monkey.patch_all()` must run
+# before any stdlib networking or gevent-dependent modules are imported.
 from gevent import monkey
 monkey.patch_all()
 import gevent.socket

 from youtube import yt_app
 from youtube import util

 # these are just so the files get run - they import yt_app and add routes to it
-from youtube import watch, search, playlist, channel, local_playlist, comments, subscriptions
+from youtube import (
+    watch,
+    search,
+    playlist,
+    channel,
+    local_playlist,
+    comments,
+    subscriptions,
+)

 import settings

 from gevent.pywsgi import WSGIServer
 import urllib
 import urllib3
 import socket
 import socks, sockshandler
 import subprocess
 import re
 import sys
 import time
```
```diff
@@ -55,8 +61,6 @@ def proxy_site(env, start_response, video=False):
         'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64)',
         'Accept': '*/*',
     }
-    current_range_start = 0
-    range_end = None
     if 'HTTP_RANGE' in env:
         send_headers['Range'] = env['HTTP_RANGE']
```
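`proxy_site` forwards the client's `Range` header to googlevideo.com verbatim, which is what makes seeking work through the proxy. For context, a tiny sketch of what such a header contains; yt-local itself just passes the raw value through, so this parser is illustrative only:

```python
import re

# A single byte-range request looks like "bytes=500-999" or "bytes=500-"
# (open-ended). Multi-range and suffix forms are not handled here.
_RANGE_RE = re.compile(r'^bytes=(\d+)-(\d*)$')


def parse_range_header(value):
    """Return (start, end) with end=None for open-ended ranges, else None."""
    match = _RANGE_RE.match(value.strip())
    if not match:
        return None
    start = int(match.group(1))
    end = int(match.group(2)) if match.group(2) else None
    return start, end
```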
```diff
@@ -274,6 +278,8 @@ class FilteredRequestLog:

 if __name__ == '__main__':
     if settings.allow_foreign_addresses:
+        # Binding to all interfaces is opt-in via the
+        # `allow_foreign_addresses` setting and documented as discouraged.
         server = WSGIServer(('0.0.0.0', settings.port_number), site_dispatch,
                             log=FilteredRequestLog())
         ip_server = '0.0.0.0'
```
**settings.py** (34 changed lines)
```diff
@@ -261,10 +261,20 @@ For security reasons, enabling this is not recommended.''',
         'category': 'interface',
     }),

+    ('native_player_storyboard', {
+        'type': bool,
+        'default': False,
+        'label': 'Storyboard preview (native)',
+        'comment': '''Show thumbnail preview on hover (native player modes).
+Positioning is heuristic; may misalign in Firefox/Safari.
+Works best on Chromium browsers.
+No effect in Plyr.''',
+        'category': 'interface',
+    }),

     ('use_video_download', {
         'type': int,
         'default': 0,
         'comment': '',
         'options': [
             (0, 'Disabled'),
             (1, 'Enabled'),
```
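Each entry in the settings list above is a `(name, metadata)` tuple whose metadata carries `type`, `default`, and optionally `label`, `comment`, `options`, and `category`. A sketch of how a consumer might apply the default and type-check an override; the helper names are hypothetical, not functions from settings.py:

```python
# One entry in the style of the settings list above.
SETTING_SPEC = ('native_player_storyboard', {
    'type': bool,
    'default': False,
    'label': 'Storyboard preview (native)',
    'category': 'interface',
})


def coerce_setting(spec, raw_value):
    # Reject values of the wrong type rather than silently converting them.
    name, meta = spec
    if not isinstance(raw_value, meta['type']):
        raise TypeError('setting %r expects %s' % (name, meta['type'].__name__))
    return raw_value


def setting_or_default(spec, overrides):
    # Fall back to the declared default when the user did not set the option.
    name, meta = spec
    return coerce_setting(spec, overrides.get(name, meta['default']))
```

Note that `isinstance(1, bool)` is false in Python, so an integer `1` from a hand-edited settings.txt would be rejected here rather than treated as `True`.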
```diff
@@ -471,7 +481,7 @@ upgrade_functions = {


 def log_ignored_line(line_number, message):
-    print("WARNING: Ignoring settings.txt line " + str(node.lineno) + " (" + message + ")")
+    print('WARNING: Ignoring settings.txt line ' + str(line_number) + ' (' + message + ')')


 if os.path.isfile("settings.txt"):
```
```diff
@@ -499,25 +509,29 @@ else:
     else:
         # parse settings in a safe way, without exec
         current_settings_dict = {}
+        # Python 3.8+ uses ast.Constant; older versions use ast.Num, ast.Str, ast.NameConstant
         attributes = {
             ast.Constant: 'value',
-            ast.NameConstant: 'value',
-            ast.Num: 'n',
-            ast.Str: 's',
         }
+        try:
+            attributes[ast.Num] = 'n'
+            attributes[ast.Str] = 's'
+            attributes[ast.NameConstant] = 'value'
+        except AttributeError:
+            pass  # Removed in Python 3.12+
         module_node = ast.parse(settings_text)
         for node in module_node.body:
-            if type(node) != ast.Assign:
-                log_ignored_line(node.lineno, "only assignments are allowed")
+            if not isinstance(node, ast.Assign):
+                log_ignored_line(node.lineno, 'only assignments are allowed')
                 continue

             if len(node.targets) > 1:
-                log_ignored_line(node.lineno, "only simple single-variable assignments allowed")
+                log_ignored_line(node.lineno, 'only simple single-variable assignments allowed')
                 continue

             target = node.targets[0]
-            if type(target) != ast.Name:
-                log_ignored_line(node.lineno, "only simple single-variable assignments allowed")
+            if not isinstance(target, ast.Name):
+                log_ignored_line(node.lineno, 'only simple single-variable assignments allowed')
                 continue

             if target.id not in acceptable_targets:
```
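The `ast`-based parser above accepts only simple `name = literal` assignments, which is what makes loading settings.txt safe without `exec`. A condensed, self-contained version of the same idea; it uses `ast.literal_eval` instead of the file's attribute table, which is a simplification, not the project's exact mechanism:

```python
import ast


def parse_simple_settings(text, acceptable=('port_number', 'route_tor')):
    """Collect top-level `name = literal` assignments; report skipped lines."""
    result = {}
    ignored = []
    for node in ast.parse(text).body:
        # Only single-target assignments to a bare name are allowed.
        if not isinstance(node, ast.Assign) or len(node.targets) != 1:
            ignored.append(node.lineno)
            continue
        target = node.targets[0]
        if not isinstance(target, ast.Name) or target.id not in acceptable:
            ignored.append(node.lineno)
            continue
        try:
            # literal_eval accepts only constants/containers, never calls.
            result[target.id] = ast.literal_eval(node.value)
        except ValueError:
            ignored.append(node.lineno)
    return result, ignored
```

Function calls, imports, and other statements simply land in `ignored`, so a malicious settings.txt cannot execute anything.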
```diff
@@ -11,8 +11,7 @@ import pytest
 sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..'))
 import youtube.proto as proto
 from youtube.yt_data_extract.common import (
-    extract_item_info, extract_items, extract_shorts_lockup_view_model_info,
-    extract_approx_int,
+    extract_item_info, extract_items,
 )
```
```diff
@@ -58,6 +57,59 @@ class TestChannelCtokenV5:
         assert t_shorts != t_streams
         assert t_videos != t_streams

+    def test_include_shorts_false_adds_filter(self):
+        """Test that include_shorts=False adds the shorts filter (field 104)."""
+        # Token with shorts included (default)
+        t_with_shorts = self.channel_ctoken_v5('UCtest', '1', '3', 'videos', include_shorts=True)
+        # Token with shorts excluded
+        t_without_shorts = self.channel_ctoken_v5('UCtest', '1', '3', 'videos', include_shorts=False)
+
+        # The tokens should be different because of the shorts filter
+        assert t_with_shorts != t_without_shorts
+
+        # Decode and verify the filter is present
+        raw_with_shorts = base64.urlsafe_b64decode(t_with_shorts + '==')
+        raw_without_shorts = base64.urlsafe_b64decode(t_without_shorts + '==')
+
+        # Parse the outer protobuf structure
+        import youtube.proto as proto
+        outer_fields_with = list(proto.read_protobuf(raw_with_shorts))
+        outer_fields_without = list(proto.read_protobuf(raw_without_shorts))
+
+        # Field 80226972 contains the inner data
+        inner_with = [v for _, fn, v in outer_fields_with if fn == 80226972][0]
+        inner_without = [v for _, fn, v in outer_fields_without if fn == 80226972][0]
+
+        # Parse the inner data - field 3 contains percent-encoded base64 data
+        inner_fields_with = list(proto.read_protobuf(inner_with))
+        inner_fields_without = list(proto.read_protobuf(inner_without))
+
+        # Get field 3 data (the encoded inner which is percent-encoded base64)
+        encoded_inner_with = [v for _, fn, v in inner_fields_with if fn == 3][0]
+        encoded_inner_without = [v for _, fn, v in inner_fields_without if fn == 3][0]
+
+        # The inner without shorts should contain field 104.
+        # Decode the percent-encoded base64 data
+        import urllib.parse
+        decoded_with = urllib.parse.unquote(encoded_inner_with.decode('ascii'))
+        decoded_without = urllib.parse.unquote(encoded_inner_without.decode('ascii'))
+
+        # Decode the base64 data
+        decoded_with_bytes = base64.urlsafe_b64decode(decoded_with + '==')
+        decoded_without_bytes = base64.urlsafe_b64decode(decoded_without + '==')
+
+        # Parse the decoded protobuf data
+        fields_with = list(proto.read_protobuf(decoded_with_bytes))
+        fields_without = list(proto.read_protobuf(decoded_without_bytes))
+
+        field_numbers_with = [fn for _, fn, _ in fields_with]
+        field_numbers_without = [fn for _, fn, _ in fields_without]
+
+        # The 'with' version should NOT have field 104
+        assert 104 not in field_numbers_with
+        # The 'without' version SHOULD have field 104
+        assert 104 in field_numbers_without
+

 # --- shortsLockupViewModel parsing ---
```
```diff
@@ -39,7 +39,8 @@ class NewIdentityState():
         self.new_identities_till_success -= 1

     def fetch_url_response(self, *args, **kwargs):
-        cleanup_func = (lambda r: None)
+        def cleanup_func(response):
+            return None
         if self.new_identities_till_success == 0:
             return MockResponse(), cleanup_func
         return MockResponse(body=html429, status=429), cleanup_func
```
```diff
@@ -1,14 +1,17 @@
+import logging
+import os
+import re
+import traceback
+from sys import exc_info
+
+import flask
+import jinja2
+from flask import request
+from flask_babel import Babel
+
 from youtube import util
 from .get_app_version import app_version
-import flask
-from flask import request
-import jinja2
 import settings
-import traceback
-import logging
-import re
-from sys import exc_info
-from flask_babel import Babel

 yt_app = flask.Flask(__name__)
 yt_app.config['TEMPLATES_AUTO_RELOAD'] = True
@@ -26,7 +29,6 @@ yt_app.logger.addFilter(FetchErrorFilter())
 # yt_app.jinja_env.lstrip_blocks = True

 # Configure Babel for i18n
-import os
 yt_app.config['BABEL_DEFAULT_LOCALE'] = 'en'
 # Use absolute path for translations directory to avoid issues with package structure changes
 _app_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
```
```diff
@@ -6,9 +6,7 @@ import settings

 import urllib
 import json
-from string import Template
 import youtube.proto as proto
 import html
 import math
 import gevent
 import re
```
```diff
@@ -33,9 +31,9 @@ headers_mobile = (
 real_cookie = (('Cookie', 'VISITOR_INFO1_LIVE=8XihrAcN1l4'),)
 generic_cookie = (('Cookie', 'VISITOR_INFO1_LIVE=ST1Ti53r4fU'),)

 # FIXED 2026: YouTube changed continuation token structure (from Invidious commit a9f8127)
 # Sort values for YouTube API (from Invidious): 2=popular, 4=newest, 5=oldest
-def channel_ctoken_v5(channel_id, page, sort, tab, view=1):
+# include_shorts only applies to tab='videos'; tab='shorts'/'streams' always include their own content.
+def channel_ctoken_v5(channel_id, page, sort, tab, view=1, include_shorts=True):
     # Tab-specific protobuf field numbers (from Invidious source)
     # Each tab uses different field numbers in the protobuf structure:
     #   videos: 110 -> 3 -> 15 -> { 2:{1:UUID}, 4:sort, 8:{1:UUID, 3:sort} }
@@ -74,6 +72,11 @@ def channel_ctoken_v5(channel_id, page, sort, tab, view=1):
     inner_container = proto.string(3, tab_wrapper)
     outer_container = proto.string(110, inner_container)

+    # Add shorts filter when include_shorts=False (field 104, same as playlist.py)
+    # This tells YouTube to exclude shorts from the results
+    if not include_shorts:
+        outer_container += proto.string(104, proto.uint(2, 1))
+
     encoded_inner = proto.percent_b64encode(outer_container)

     pointless_nest = proto.string(80226972,
```
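The ctoken builders above lean on `proto.string` and `proto.uint` helpers from the project's `youtube.proto` module. A minimal sketch of the protobuf wire encoding they imply (field key = `field_number << 3 | wire_type`, all integers varint-encoded); these stand-in functions are written from the protobuf wire-format spec, not copied from the repository:

```python
def _varint(value):
    # Protobuf base-128 varint: 7 data bits per byte, high bit = "more".
    out = b''
    while True:
        byte = value & 0x7f
        value >>= 7
        if value:
            out += bytes((byte | 0x80,))
        else:
            return out + bytes((byte,))


def uint(field_number, value):
    # Wire type 0: varint-encoded scalar.
    return _varint(field_number << 3) + _varint(value)


def string(field_number, data):
    # Wire type 2: length-delimited payload (nested message or bytes).
    if isinstance(data, str):
        data = data.encode('utf-8')
    return _varint((field_number << 3) | 2) + _varint(len(data)) + data
```

Under this encoding, the shorts filter `string(104, uint(2, 1))` appended above is just a five-byte length-delimited field nested in the outer container.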
```diff
@@ -236,12 +239,12 @@ def channel_ctoken_v1(channel_id, page, sort, tab, view=1):


 def get_channel_tab(channel_id, page="1", sort=3, tab='videos', view=1,
-                    ctoken=None, print_status=True):
+                    ctoken=None, print_status=True, include_shorts=True):
     message = 'Got channel tab' if print_status else None

     if not ctoken:
         if tab in ('videos', 'shorts', 'streams'):
-            ctoken = channel_ctoken_v5(channel_id, page, sort, tab, view)
+            ctoken = channel_ctoken_v5(channel_id, page, sort, tab, view, include_shorts)
         else:
             ctoken = channel_ctoken_v3(channel_id, page, sort, tab, view)
         ctoken = ctoken.replace('=', '%3D')
```
```diff
@@ -288,19 +291,30 @@ def get_number_of_videos_channel(channel_id):
     try:
         response = util.fetch_url(url, headers_mobile,
             debug_name='number_of_videos', report_text='Got number of videos')
-    except (urllib.error.HTTPError, util.FetchError) as e:
+    except (urllib.error.HTTPError, util.FetchError):
         traceback.print_exc()
         print("Couldn't retrieve number of videos")
         return 1000

     response = response.decode('utf-8')

-    # match = re.search(r'"numVideosText":\s*{\s*"runs":\s*\[{"text":\s*"([\d,]*) videos"', response)
-    match = re.search(r'"numVideosText".*?([,\d]+)', response)
-    if match:
-        return int(match.group(1).replace(',',''))
-    else:
-        return 0
+    # Try several patterns since YouTube's format changes:
+    #   "numVideosText":{"runs":[{"text":"1,234"},{"text":" videos"}]}
+    #   "stats":[..., {"runs":[{"text":"1,234"},{"text":" videos"}]}]
+    for pattern in (
+        r'"numVideosText".*?"text":\s*"([\d,]+)"',
+        r'"numVideosText".*?([\d,]+)\s*videos?',
+        r'"numVideosText".*?([,\d]+)',
+        r'([\d,]+)\s*videos?\s*</span>',
+    ):
+        match = re.search(pattern, response)
+        if match:
+            try:
+                return int(match.group(1).replace(',', ''))
+            except ValueError:
+                continue
+    # Fallback: unknown count
+    return 0


 def set_cached_number_of_videos(channel_id, num_videos):
     @cachetools.cached(number_of_videos_cache)
     def dummy_func_using_same_cache(channel_id):
```
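The pattern cascade above is easy to exercise in isolation. A condensed version of the same idea, with a trimmed pattern list, which shows why the most specific pattern must come first:

```python
import re

# Candidate patterns, most specific first; the first one that yields a
# parseable integer wins.
PATTERNS = (
    r'"numVideosText".*?"text":\s*"([\d,]+)"',
    r'"numVideosText".*?([\d,]+)\s*videos?',
    r'([\d,]+)\s*videos?\s*</span>',
)


def extract_video_count(page_text):
    for pattern in PATTERNS:
        match = re.search(pattern, page_text)
        if match:
            try:
                return int(match.group(1).replace(',', ''))
            except ValueError:
                continue
    return 0  # unknown
```

If the loose HTML pattern came first, a stray number elsewhere on the page could shadow the real count embedded in the JSON blob.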
```diff
@@ -425,24 +439,27 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
     page_number = int(request.args.get('page', 1))
     # sort 1: views
     # sort 2: oldest
-    # sort 4: newest - no shorts (Just a kludge on our end, not internal to yt)
     # sort 3: newest (includes shorts, via UU uploads playlist)
+    # sort 4: newest - no shorts (uses channel Videos tab API directly, like Invidious)
     default_sort = '3' if settings.include_shorts_in_channel else '4'
     sort = request.args.get('sort', default_sort)
     view = request.args.get('view', '1')
     query = request.args.get('query', '')
     ctoken = request.args.get('ctoken', '')
-    include_shorts = (sort != '4')
     default_params = (page_number == 1 and sort in ('3', '4') and view == '1')
-    continuation = bool(ctoken)  # whether or not we're using a continuation
+    continuation = bool(ctoken)
     page_size = 30
-    try_channel_api = True
-    polymer_json = None
     number_of_videos = 0
+    info = None

-    # Use the special UU playlist which contains all the channel's uploads
-    if tab == 'videos' and sort in ('3', '4'):
+    # -------------------------------------------------------------------------
+    # sort=3: use UU uploads playlist (includes shorts)
+    # -------------------------------------------------------------------------
+    if tab == 'videos' and sort == '3':
         if not channel_id:
             channel_id = get_channel_id(base_url)
-        if page_number == 1 and include_shorts:
+        if page_number == 1:
             tasks = (
                 gevent.spawn(playlist.playlist_first_page,
                     'UU' + channel_id[2:],
@@ -451,9 +468,6 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
             )
             gevent.joinall(tasks)
             util.check_gevent_exceptions(*tasks)

-            # Ignore the metadata for now, it is cached and will be
-            # recalled later
             pl_json = tasks[0].value
             pl_info = yt_data_extract.extract_playlist_info(pl_json)
             number_of_videos = pl_info['metadata']['video_count']
```
```diff
@@ -464,86 +478,70 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
         else:
             tasks = (
                 gevent.spawn(playlist.get_videos, 'UU' + channel_id[2:],
-                    page_number, include_shorts=include_shorts),
+                    page_number, include_shorts=True),
                 gevent.spawn(get_metadata, channel_id),
                 gevent.spawn(get_number_of_videos_channel, channel_id),
+                gevent.spawn(playlist.playlist_first_page, 'UU' + channel_id[2:],
+                    report_text='Retrieved channel video count'),
             )
             gevent.joinall(tasks)
             util.check_gevent_exceptions(*tasks)

             pl_json = tasks[0].value
             pl_info = yt_data_extract.extract_playlist_info(pl_json)
-            number_of_videos = tasks[2].value
+            first_page_meta = yt_data_extract.extract_playlist_metadata(tasks[3].value)
+            number_of_videos = (tasks[2].value
+                                or first_page_meta.get('video_count')
+                                or 0)

-            info = pl_info
-            info['channel_id'] = channel_id
-            info['current_tab'] = 'videos'
-            if info['items']:  # Success
+            if pl_info['items']:
+                info = pl_info
+                info['channel_id'] = channel_id
+                info['current_tab'] = 'videos'
                 page_size = 100
-                try_channel_api = False
-            else:  # Try the first-page method next
-                try_channel_api = True
+            # else fall through to the channel browse API below

-    # Use the regular channel API
-    if tab in ('shorts', 'streams') or (tab=='videos' and try_channel_api):
+    # -------------------------------------------------------------------------
+    # Channel browse API: sort=4 (videos tab, no shorts), shorts, streams,
+    # or fallback when the UU playlist returned no items.
+    # Uses channel_ctoken_v5 per-tab tokens, mirroring Invidious's approach.
+    # Pagination is driven by the continuation token YouTube returns each page.
+    # -------------------------------------------------------------------------
+    used_channel_api = False
+    if info is None and (
+            tab in ('shorts', 'streams')
+            or (tab == 'videos' and sort == '4')
+            or (tab == 'videos' and sort == '3')  # UU-playlist fallback
+    ):
         if not channel_id:
             channel_id = get_channel_id(base_url)
+        used_channel_api = True

-        # For shorts/streams, use continuation token from cache or request
-        if tab in ('shorts', 'streams'):
-            if ctoken:
-                # Use ctoken directly from request (passed via pagination)
-                polymer_json = util.call_youtube_api('web', 'browse', {
-                    'continuation': ctoken,
-                })
-                continuation = True
-            elif page_number > 1:
-                # For page 2+, get ctoken from cache
-                cache_key = (channel_id, tab, sort, page_number - 1)
-                cached_ctoken = continuation_token_cache.get(cache_key)
-                if cached_ctoken:
-                    polymer_json = util.call_youtube_api('web', 'browse', {
-                        'continuation': cached_ctoken,
-                    })
-                    continuation = True
-                else:
-                    # Fallback: generate fresh ctoken
-                    page_call = (get_channel_tab, channel_id, str(page_number), sort, tab, int(view))
-                    continuation = True
-                    polymer_json = gevent.spawn(*page_call)
-                    polymer_json.join()
-                    if polymer_json.exception:
-                        raise polymer_json.exception
-                    polymer_json = polymer_json.value
-            else:
-                page_call = (get_channel_tab, channel_id, str(page_number), sort, tab, int(view))
-                continuation = True
-                polymer_json = gevent.spawn(*page_call)
-                polymer_json.join()
-                if polymer_json.exception:
-                    raise polymer_json.exception
-                polymer_json = polymer_json.value
-        else:
-            # videos tab - original logic
-            page_call = (get_channel_tab, channel_id, str(page_number), sort,
-                         tab, int(view))
+        # Determine what browse call to make
+        if ctoken:
+            browse_call = (util.call_youtube_api, 'web', 'browse',
+                           {'continuation': ctoken})
+            continuation = True
+        elif page_number > 1:
+            cache_key = (channel_id, tab, sort, page_number - 1)
+            cached_ctoken = continuation_token_cache.get(cache_key)
+            if cached_ctoken:
+                browse_call = (util.call_youtube_api, 'web', 'browse',
+                               {'continuation': cached_ctoken})
+            else:
+                # Cache miss — restart from page 1 (better than an error)
+                browse_call = (get_channel_tab, channel_id, '1', sort, tab, int(view))
+            continuation = True
+        else:
+            # Page 1: generate fresh ctoken
+            browse_call = (get_channel_tab, channel_id, '1', sort, tab, int(view))
+            continuation = True

-        if tab == 'videos':
-            # Only need video count for the videos tab
-            if channel_id:
-                num_videos_call = (get_number_of_videos_channel, channel_id)
-            else:
-                num_videos_call = (get_number_of_videos_general, base_url)
-            tasks = (
-                gevent.spawn(*num_videos_call),
-                gevent.spawn(*page_call),
-            )
-            gevent.joinall(tasks)
-            util.check_gevent_exceptions(*tasks)
-            number_of_videos, polymer_json = tasks[0].value, tasks[1].value
-        # For shorts/streams, polymer_json is already set above, nothing to do here
+        # Single browse call; number_of_videos is computed from items actually
+        # fetched so we don't mislead the user with a total that includes
+        # shorts (which this branch is explicitly excluding for sort=4).
+        task = gevent.spawn(*browse_call)
+        task.join()
+        util.check_gevent_exceptions(task)
+        polymer_json = task.value

     elif tab == 'about':
         # polymer_json = util.fetch_url(base_url + '/about?pbj=1', headers_desktop, debug_name='gen_channel_about')
```
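The hunk above stores each page's returned continuation token in `continuation_token_cache`, keyed by `(channel_id, tab, sort, page_number)`, so the next page request can replay it. The repository appears to use a `cachetools` cache for this; a stdlib-only sketch of an equivalent TTL cache, with made-up size and TTL values:

```python
import time


class ContinuationTokenCache:
    """Tiny TTL cache keyed by (channel_id, tab, sort, page_number)."""

    def __init__(self, ttl_seconds=600, max_entries=512):
        self._ttl = ttl_seconds
        self._max = max_entries
        self._store = {}  # key -> (expires_at, ctoken)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, ctoken = entry
        if time.monotonic() > expires_at:
            # Lazily evict stale tokens; YouTube ctokens go stale anyway.
            del self._store[key]
            return None
        return ctoken

    def put(self, key, ctoken):
        if len(self._store) >= self._max:
            # Drop the soonest-expiring entry; cheap and good enough here.
            oldest = min(self._store, key=lambda k: self._store[k][0])
            del self._store[oldest]
        self._store[key] = (time.monotonic() + self._ttl, ctoken)
```

A TTL matters because continuation tokens embed server-side state; replaying a very old one tends to return an empty or erroring response, which is exactly the "cache miss, restart from page 1" case the hunk handles.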
```diff
@@ -571,16 +569,16 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
     elif tab == 'search':
         url = base_url + '/search?pbj=1&query=' + urllib.parse.quote(query, safe='')
         polymer_json = util.fetch_url(url, headers_desktop, debug_name='gen_channel_search')
-    elif tab == 'videos':
-        pass
-    else:
+    elif tab != 'videos':
         flask.abort(404, 'Unknown channel tab: ' + tab)

-    if polymer_json is not None:
+    if polymer_json is not None and info is None:
         info = yt_data_extract.extract_channel_info(
             json.loads(polymer_json), tab, continuation=continuation
         )

+    if info is None:
+        return flask.render_template('error.html', error_message='Could not retrieve channel data')
     if info['error'] is not None:
         return flask.render_template('error.html', error_message=info['error'])
```
```diff
@@ -610,16 +608,40 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
             item.update(additional_info)

-    if tab in ('shorts', 'streams'):
-        # For shorts/streams, use ctoken to determine pagination
-        info['is_last_page'] = (info.get('ctoken') is None)
-        number_of_videos = len(info.get('items', []))
-        # Cache the ctoken for next page
-        if info.get('ctoken'):
-            cache_key = (channel_id, tab, sort, page_number)
-            continuation_token_cache[cache_key] = info['ctoken']
+    if tab in ('videos', 'shorts', 'streams'):
+        # For any tab using the channel browse API (sort=4, shorts, streams),
+        # pagination is driven by the ctoken YouTube returns in the response.
+        # Cache it so the next page request can use it.
+        if info.get('ctoken'):
+            cache_key = (channel_id, tab, sort, page_number)
+            continuation_token_cache[cache_key] = info['ctoken']
+
+        # Determine is_last_page and final number_of_pages.
+        # For channel-API-driven tabs (sort=4, shorts, streams, UU fallback),
+        # YouTube doesn't give us a reliable total filtered count. So instead
+        # of displaying a misleading number (the total-including-shorts from
+        # get_number_of_videos_channel), we count only what we've actually
+        # paged through, and use the ctoken to know whether to show "next".
+        if used_channel_api:
+            info['is_last_page'] = (info.get('ctoken') is None)
+            items_on_page = len(info.get('items', []))
+            items_seen_so_far = (page_number - 1) * page_size + items_on_page
+
+            # Use accumulated count as the displayed total so "N videos" shown
+            # to the user always matches what they could actually reach.
+            number_of_videos = items_seen_so_far
+
+            # If there's more content, bump by 1 so the Next-page button exists
+            if info.get('ctoken'):
+                number_of_videos = max(number_of_videos,
+                                       page_number * page_size + 1)
+        # For sort=3 via UU playlist (used_channel_api=False), number_of_videos
+        # was already set from playlist metadata above.

     info['number_of_videos'] = number_of_videos
-    info['number_of_pages'] = math.ceil(number_of_videos/page_size) if number_of_videos else 1
+    info['number_of_pages'] = math.ceil(number_of_videos / page_size) if number_of_videos else 1
+    # Never show fewer pages than the page the user is actually on
+    if info['number_of_pages'] < page_number:
+        info['number_of_pages'] = page_number
     info['header_playlist_names'] = local_playlist.get_playlist_names()
     if tab in ('videos', 'shorts', 'streams', 'playlists'):
         info['current_sort'] = sort
```
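The accumulated-count arithmetic above factors cleanly into a pure function, which makes the edge cases easy to check. This helper is illustrative, not code from the repository:

```python
import math


def page_count(page_number, page_size, items_on_page, has_more):
    """Derive a display total and page count from what was actually fetched."""
    items_seen = (page_number - 1) * page_size + items_on_page
    total = items_seen
    if has_more:
        # Ensure at least one extra item is claimed so a Next link appears.
        total = max(total, page_number * page_size + 1)
    pages = math.ceil(total / page_size) if total else 1
    # Never report fewer pages than the page the user is on.
    return total, max(pages, page_number)
```

The `has_more` flag plays the role of the ctoken: a full page with a continuation token yields one page more than has been seen, while a short final page caps the total at exactly the items reached.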
```diff
@@ -155,33 +155,35 @@ def post_process_comments_info(comments_info):


 def video_comments(video_id, sort=0, offset=0, lc='', secret_key=''):
+    if not settings.comments_mode:
+        return {}
+
+    # Initialize the result dict up-front so that any exception path below
+    # can safely attach an 'error' field without risking UnboundLocalError.
+    comments_info = {'error': None}
     try:
-        if settings.comments_mode:
-            comments_info = {'error': None}
-            other_sort_url = (
-                util.URL_ORIGIN + '/comments?ctoken='
-                + make_comment_ctoken(video_id, sort=1 - sort, lc=lc)
-            )
-            other_sort_text = 'Sort by ' + ('newest' if sort == 0 else 'top')
+        other_sort_url = (
+            util.URL_ORIGIN + '/comments?ctoken='
+            + make_comment_ctoken(video_id, sort=1 - sort, lc=lc)
+        )
+        other_sort_text = 'Sort by ' + ('newest' if sort == 0 else 'top')

-            this_sort_url = (util.URL_ORIGIN
-                             + '/comments?ctoken='
-                             + make_comment_ctoken(video_id, sort=sort, lc=lc))
+        this_sort_url = (util.URL_ORIGIN
+                         + '/comments?ctoken='
+                         + make_comment_ctoken(video_id, sort=sort, lc=lc))

-            comments_info['comment_links'] = [
-                (other_sort_text, other_sort_url),
-                ('Direct link', this_sort_url)
-            ]
+        comments_info['comment_links'] = [
+            (other_sort_text, other_sort_url),
+            ('Direct link', this_sort_url)
+        ]

-            ctoken = make_comment_ctoken(video_id, sort, offset, lc)
-            comments_info.update(yt_data_extract.extract_comments_info(
-                request_comments(ctoken), ctoken=ctoken
-            ))
-            post_process_comments_info(comments_info)
+        ctoken = make_comment_ctoken(video_id, sort, offset, lc)
+        comments_info.update(yt_data_extract.extract_comments_info(
+            request_comments(ctoken), ctoken=ctoken
+        ))
+        post_process_comments_info(comments_info)

-            return comments_info
-        else:
-            return {}
+        return comments_info
     except util.FetchError as e:
         if e.code == '429' and settings.route_tor:
             comments_info['error'] = 'Error: YouTube blocked the request because the Tor exit node is overutilized.'
```
```diff
@@ -1 +1,3 @@
-from .get_app_version import *
+from .get_app_version import app_version
+
+__all__ = ['app_version']
```
@@ -1,47 +1,56 @@
 from __future__ import unicode_literals
-from subprocess import (
-    call,
-    STDOUT
-)
-from ..version import __version__
 import os
+import shutil
 import subprocess
+
+from ..version import __version__
 
 
 def app_version():
     def minimal_env_cmd(cmd):
         # make minimal environment
         env = {k: os.environ[k] for k in ['SYSTEMROOT', 'PATH'] if k in os.environ}
         env.update({'LANGUAGE': 'C', 'LANG': 'C', 'LC_ALL': 'C'})
 
         out = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0]
         return out
 
     subst_list = {
-        "version": __version__,
-        "branch": None,
-        "commit": None
+        'version': __version__,
+        'branch': None,
+        'commit': None,
     }
 
-    if os.system("command -v git > /dev/null 2>&1") != 0:
+    # Use shutil.which instead of `command -v`/os.system so we don't spawn a
+    # shell (CWE-78 hardening) and so it works cross-platform.
+    if shutil.which('git') is None:
         return subst_list
 
-    if call(["git", "branch"], stderr=STDOUT, stdout=open(os.devnull, 'w')) != 0:
+    try:
+        # Check we are inside a git work tree. Using DEVNULL avoids the
+        # file-handle leak from `open(os.devnull, 'w')`.
+        rc = subprocess.call(
+            ['git', 'branch'],
+            stderr=subprocess.DEVNULL,
+            stdout=subprocess.DEVNULL,
+        )
+    except OSError:
+        return subst_list
+    if rc != 0:
         return subst_list
 
-    describe = minimal_env_cmd(["git", "describe", "--tags", "--always"])
+    describe = minimal_env_cmd(['git', 'describe', '--tags', '--always'])
     git_revision = describe.strip().decode('ascii')
 
-    branch = minimal_env_cmd(["git", "branch"])
+    branch = minimal_env_cmd(['git', 'branch'])
     git_branch = branch.strip().decode('ascii').replace('* ', '')
 
     subst_list.update({
-        "branch": git_branch,
-        "commit": git_revision
+        'branch': git_branch,
+        'commit': git_revision,
     })
 
     return subst_list
 
 
-if __name__ == "__main__":
+if __name__ == '__main__':
     app_version()
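The switch from `os.system("command -v git ...")` to `shutil.which('git')` above removes the intermediate shell entirely, so nothing in the command string can be interpreted as shell syntax. A minimal standalone sketch of the same pattern; the `have_command` and `git_describe_or_none` helpers here are illustrative names, not part of the diff:

```python
import shutil
import subprocess


def have_command(name):
    """Return True if `name` resolves to an executable on PATH.

    shutil.which does the PATH lookup in-process, so no shell is
    involved and shell metacharacters in `name` are inert.
    """
    return shutil.which(name) is not None


def git_describe_or_none():
    """Run `git describe` only when git exists; never raises on a missing binary."""
    if not have_command('git'):
        return None
    try:
        # check=False: a non-zero exit (e.g. not a git repo) is not an error here
        out = subprocess.run(
            ['git', 'describe', '--tags', '--always'],
            stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, check=False,
        )
    except OSError:
        return None
    return out.stdout.decode('ascii', 'replace').strip() or None
```

`subprocess.DEVNULL` also avoids the leaked file handle that `stdout=open(os.devnull, 'w')` created in the old code.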
@@ -1,28 +1,42 @@
-from youtube import util, yt_data_extract
+from youtube import util
 from youtube import yt_app
 import settings
 
 import os
 import json
 import html
 import gevent
 import urllib
 import math
 import glob
+import re
 
 import flask
 from flask import request
 
-playlists_directory = os.path.join(settings.data_dir, "playlists")
-thumbnails_directory = os.path.join(settings.data_dir, "playlist_thumbnails")
+playlists_directory = os.path.join(settings.data_dir, 'playlists')
+thumbnails_directory = os.path.join(settings.data_dir, 'playlist_thumbnails')
+
+# Whitelist accepted playlist names so user input cannot escape
+# `playlists_directory` / `thumbnails_directory` (CWE-22, OWASP A01:2021).
+# Allow letters, digits, spaces, dot, dash and underscore.
+_PLAYLIST_NAME_RE = re.compile(r'^[\w .\-]{1,128}$')
+
+
+def _validate_playlist_name(name):
+    '''Return the stripped name if safe, otherwise abort with 400.'''
+    if name is None:
+        flask.abort(400)
+    name = name.strip()
+    if not _PLAYLIST_NAME_RE.match(name):
+        flask.abort(400)
+    return name
 
 
 def _find_playlist_path(name):
-    """Find playlist file robustly, handling trailing spaces in filenames"""
-    name = name.strip()
-    pattern = os.path.join(playlists_directory, name + "*.txt")
+    '''Find playlist file robustly, handling trailing spaces in filenames'''
+    name = _validate_playlist_name(name)
+    pattern = os.path.join(playlists_directory, name + '*.txt')
     files = glob.glob(pattern)
-    return files[0] if files else os.path.join(playlists_directory, name + ".txt")
+    return files[0] if files else os.path.join(playlists_directory, name + '.txt')
 
 
 def _parse_playlist_lines(data):
@@ -179,8 +193,9 @@ def path_edit_playlist(playlist_name):
         redirect_page_number = min(int(request.values.get('page', 1)), math.ceil(number_of_videos_remaining/50))
         return flask.redirect(util.URL_ORIGIN + request.path + '?page=' + str(redirect_page_number))
     elif request.values['action'] == 'remove_playlist':
+        safe_name = _validate_playlist_name(playlist_name)
         try:
-            os.remove(os.path.join(playlists_directory, playlist_name + ".txt"))
+            os.remove(os.path.join(playlists_directory, safe_name + '.txt'))
         except OSError:
             pass
         return flask.redirect(util.URL_ORIGIN + '/playlists')
@@ -220,8 +235,17 @@ def edit_playlist():
         flask.abort(400)
 
 
+_THUMBNAIL_RE = re.compile(r'^[A-Za-z0-9_-]{11}\.jpg$')
+
+
 @yt_app.route('/data/playlist_thumbnails/<playlist_name>/<thumbnail>')
 def serve_thumbnail(playlist_name, thumbnail):
-    # .. is necessary because flask always uses the application directory at ./youtube, not the working directory
+    # Validate both path components so a crafted URL cannot escape
+    # `thumbnails_directory` via `..` or NUL tricks (CWE-22).
+    safe_name = _validate_playlist_name(playlist_name)
+    if not _THUMBNAIL_RE.match(thumbnail):
+        flask.abort(400)
+    # .. is necessary because flask always uses the application directory at
+    # ./youtube, not the working directory.
     return flask.send_from_directory(
-        os.path.join('..', thumbnails_directory, playlist_name), thumbnail)
+        os.path.join('..', thumbnails_directory, safe_name), thumbnail)
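The whitelist used for `_validate_playlist_name` is worth exercising in isolation: rejecting everything outside a known-safe alphabet is more robust than trying to strip `../` sequences out of hostile input. A standalone sketch of the same check; `is_safe_playlist_name` is an illustrative helper name, not part of the diff:

```python
import re

# Same shape as the diff's whitelist: letters, digits, space, dot, dash,
# underscore, 1-128 chars. \w already covers letters, digits and underscore.
_NAME_RE = re.compile(r'^[\w .\-]{1,128}$')


def is_safe_playlist_name(name):
    """True only for names that cannot smuggle a path separator into
    os.path.join(playlists_directory, name + '.txt')."""
    return bool(name) and _NAME_RE.match(name) is not None
```

Because `/` and `\` are not in the character class, classic traversal payloads fail the match before any filesystem call happens.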
@@ -3,9 +3,7 @@ from youtube import yt_app
import settings

import base64
import urllib
import json
import string
import gevent
import math
from flask import request, abort

@@ -5,7 +5,6 @@ import settings
import json
import urllib
import base64
import mimetypes
from flask import request
import flask
import os
@@ -9,6 +9,8 @@
     --thumb-background: #222222;
     --link: #00B0FF;
     --link-visited: #40C4FF;
+    --border-color: #333333;
+    --thead-background: #0a0a0b;
     --border-bg: #222222;
     --border-bg-settings: #000000;
     --border-bg-license: #000000;

@@ -9,6 +9,8 @@
     --thumb-background: #35404D;
     --link: #22AAFF;
     --link-visited: #7755FF;
+    --border-color: #4A5568;
+    --thead-background: #1a2530;
     --border-bg: #FFFFFF;
     --border-bg-settings: #FFFFFF;
     --border-bg-license: #FFFFFF;

@@ -9,6 +9,8 @@
     --thumb-background: #F5F5F5;
     --link: #212121;
     --link-visited: #808080;
+    --border-color: #CCCCCC;
+    --thead-background: #d0d0d0;
     --border-bg: #212121;
     --border-bg-settings: #91918C;
     --border-bg-license: #91918C;
@@ -307,18 +307,122 @@ figure.sc-video {
    padding-top: 0.5rem;
    padding-bottom: 0.5rem;
}
.v-download { grid-area: v-download; }
.v-download > ul.download-dropdown-content {
    background: var(--secondary-background);
    padding-left: 0px;
.v-download {
    grid-area: v-download;
    margin-bottom: 0.5rem;
}
.v-download > ul.download-dropdown-content > li.download-format {
    list-style: none;
.v-download details {
    display: block;
    width: 100%;
}
.v-download > summary {
    cursor: pointer;
    padding: 0.4rem 0;
    padding-left: 1rem;
}
.v-download > ul.download-dropdown-content > li.download-format a.download-link {
.v-download > summary.download-dropdown-label {
    cursor: pointer;
    -webkit-touch-callout: none;
    -webkit-user-select: none;
    -khtml-user-select: none;
    -moz-user-select: none;
    -ms-user-select: none;
    user-select: none;
    padding-bottom: 6px;
    padding-left: .75em;
    padding-right: .75em;
    padding-top: 6px;
    text-align: center;
    white-space: nowrap;
    background-color: var(--buttom);
    border: 1px solid var(--button-border);
    color: var(--buttom-text);
    border-radius: 5px;
    margin-bottom: 0.5rem;
}
.v-download > summary.download-dropdown-label:hover {
    background-color: var(--buttom-hover);
}
.v-download > .download-table-container {
    background: var(--secondary-background);
    max-height: 65vh;
    overflow-y: auto;
    border: 1px solid var(--button-border);
    border-radius: 8px;
    box-shadow: 0 4px 12px rgba(0,0,0,0.15);
}
.download-table {
    width: 100%;
    border-collapse: separate;
    border-spacing: 0;
    font-size: 0.875rem;
}
.download-table thead {
    background: var(--thead-background);
    position: sticky;
    top: 0;
    z-index: 1;
}
.download-table th,
.download-table td {
    padding: 0.7rem 0.9rem;
    text-align: left;
    border-bottom: 1px solid var(--button-border);
}
.download-table th {
    font-weight: 600;
    font-size: 0.7rem;
    text-transform: uppercase;
    letter-spacing: 0.8px;
}
.download-table tbody tr {
    transition: all 0.2s ease;
}
.download-table tbody tr:hover {
    background: var(--primary-background);
}
.download-table a.download-link {
    display: inline-block;
    padding: 0.4rem 0.85rem;
    background: rgba(0,0,0,0.12);
    color: var(--buttom-text);
    text-decoration: none;
    border-radius: 5px;
    font-weight: 500;
    font-size: 0.85rem;
    transition: background 0.2s ease;
    white-space: nowrap;
}
.download-table a.download-link:hover {
    background: rgba(0,0,0,0.28);
    color: var(--buttom-text);
}
.download-table tbody tr:last-child td {
    border-bottom: none;
}
.download-table td[data-label="Ext"] {
    font-family: monospace;
    font-size: 0.8rem;
    font-weight: 600;
}
.download-table td[data-label="Link"] {
    white-space: nowrap;
    vertical-align: middle;
}
.download-table td[data-label="Codecs"] {
    max-width: 180px;
    text-overflow: ellipsis;
    overflow: hidden;
    font-family: monospace;
    font-size: 0.75rem;
}
.download-table td[data-label="Size"] {
    font-family: monospace;
    font-size: 0.85rem;
}
.download-table td[colspan="3"] {
    font-style: italic;
    opacity: 0.7;
}

.v-description {
@@ -292,7 +292,10 @@ def youtube_timestamp_to_posix(dumb_timestamp):
 def posix_to_dumbed_down(posix_time):
     '''Inverse of youtube_timestamp_to_posix.'''
     delta = int(time.time() - posix_time)
-    assert delta >= 0
+    # Guard against future timestamps (clock drift) without relying on
+    # `assert` (which is stripped under `python -O`).
+    if delta < 0:
+        delta = 0
 
     if delta == 0:
         return '0 seconds ago'
@@ -531,7 +534,8 @@ def _get_upstream_videos(channel_id):
             return None
 
         root = defusedxml.ElementTree.fromstring(feed)
-        assert remove_bullshit(root.tag) == 'feed'
+        if remove_bullshit(root.tag) != 'feed':
+            raise ValueError('Root element is not <feed>')
         for entry in root:
             if (remove_bullshit(entry.tag) != 'entry'):
                 continue
@@ -539,13 +543,13 @@ def _get_upstream_videos(channel_id):
             # it's yt:videoId in the xml but the yt: is turned into a namespace which is removed by remove_bullshit
             video_id_element = find_element(entry, 'videoId')
             time_published_element = find_element(entry, 'published')
-            assert video_id_element is not None
-            assert time_published_element is not None
+            if video_id_element is None or time_published_element is None:
+                raise ValueError('Missing videoId or published element')
 
             time_published = int(calendar.timegm(time.strptime(time_published_element.text, '%Y-%m-%dT%H:%M:%S+00:00')))
             times_published[video_id_element.text] = time_published
 
-    except AssertionError:
+    except ValueError:
         print('Failed to read atoma feed for ' + channel_status_name)
         traceback.print_exc()
     except defusedxml.ElementTree.ParseError:
@@ -593,7 +597,10 @@ def _get_upstream_videos(channel_id):
     # Special case: none of the videos have a time published.
     # In this case, make something up
     if videos and videos[0]['time_published'] is None:
-        assert all(v['time_published'] is None for v in videos)
+        # Invariant: if the first video has no timestamp, earlier passes
+        # ensure all of them are unset. Don't rely on `assert`.
+        if not all(v['time_published'] is None for v in videos):
+            raise RuntimeError('Inconsistent time_published state')
         now = time.time()
         for i in range(len(videos)):
             # 1 month between videos
@@ -808,7 +815,8 @@ def import_subscriptions():
         file = file.read().decode('utf-8')
         try:
             root = defusedxml.ElementTree.fromstring(file)
-            assert root.tag == 'opml'
+            if root.tag != 'opml':
+                raise ValueError('Root element is not <opml>')
             channels = []
             for outline_element in root[0][0]:
                 if (outline_element.tag != 'outline') or ('xmlUrl' not in outline_element.attrib):
@@ -819,7 +827,7 @@ def import_subscriptions():
                 channel_id = channel_rss_url[channel_rss_url.find('channel_id=')+11:].strip()
                 channels.append((channel_id, channel_name))
 
-        except (AssertionError, IndexError, defusedxml.ElementTree.ParseError) as e:
+        except (ValueError, IndexError, defusedxml.ElementTree.ParseError):
             return '400 Bad Request: Unable to read opml xml file, or the file is not the expected format', 400
     elif mime_type in ('text/csv', 'application/vnd.ms-excel'):
         content = file.read().decode('utf-8')
@@ -1071,11 +1079,20 @@ def post_subscriptions_page():
     return '', 204
 
 
+# YouTube video IDs are exactly 11 chars from [A-Za-z0-9_-]. Enforce this
+# before using the value in filesystem paths to prevent path traversal
+# (CWE-22, OWASP A01:2021).
+_VIDEO_ID_RE = re.compile(r'^[A-Za-z0-9_-]{11}$')
+
+
 @yt_app.route('/data/subscription_thumbnails/<thumbnail>')
 def serve_subscription_thumbnail(thumbnail):
     '''Serves thumbnail from disk if it's been saved already. If not, downloads the thumbnail, saves to disk, and serves it.'''
-    assert thumbnail[-4:] == '.jpg'
+    if not thumbnail.endswith('.jpg'):
+        flask.abort(400)
     video_id = thumbnail[0:-4]
+    if not _VIDEO_ID_RE.match(video_id):
+        flask.abort(400)
     thumbnail_path = os.path.join(thumbnails_directory, thumbnail)
 
     if video_id in existing_thumbnails:
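The new `_VIDEO_ID_RE` check is easy to verify on its own: YouTube video IDs are always exactly 11 characters from the URL-safe base64 alphabet, so anything else (including traversal payloads hidden in a `<thumbnail>` URL component) is rejected before any filesystem path is built. A standalone sketch; `is_valid_video_id` is an illustrative helper name:

```python
import re

# URL-safe base64 alphabet, fixed length 11, anchored at both ends.
_VIDEO_ID_RE = re.compile(r'^[A-Za-z0-9_-]{11}$')


def is_valid_video_id(video_id):
    """True for exactly-11-char IDs drawn from [A-Za-z0-9_-]."""
    return _VIDEO_ID_RE.match(video_id) is not None
```

Note that `.` and `/` are deliberately absent from the class, so `..`-style components can never match.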
@@ -105,5 +105,10 @@
     {% if use_dash %}
     <script src="/youtube.com/static/js/av-merge.js"></script>
     {% endif %}
+
+    <!-- Storyboard Preview Thumbnails (native players only; Plyr handles this internally) -->
+    {% if settings.use_video_player != 2 and settings.native_player_storyboard %}
+    <script src="/youtube.com/static/js/storyboard-preview.js"></script>
+    {% endif %}
 </body>
 </html>
@@ -102,22 +102,40 @@
 {% if settings.use_video_download != 0 %}
     <details class="v-download">
         <summary class="download-dropdown-label">{{ _('Download') }}</summary>
-        <ul class="download-dropdown-content">
-            {% for format in download_formats %}
-                <li class="download-format">
-                    <a class="download-link" href="{{ format['url'] }}" download="{{ title }}.{{ format['ext'] }}">
-                        {{ format['ext'] }} {{ format['video_quality'] }} {{ format['audio_quality'] }} {{ format['file_size'] }} {{ format['codecs'] }}
-                    </a>
-                </li>
-            {% endfor %}
-            {% for download in other_downloads %}
-                <li class="download-format">
-                    <a href="{{ download['url'] }}" download>
-                        {{ download['ext'] }} {{ download['label'] }}
-                    </a>
-                </li>
-            {% endfor %}
-        </ul>
+        <div class="download-table-container">
+            <table class="download-table" aria-label="Download formats">
+                <thead>
+                    <tr>
+                        <th scope="col">{{ _('Ext') }}</th>
+                        <th scope="col">{{ _('Video') }}</th>
+                        <th scope="col">{{ _('Audio') }}</th>
+                        <th scope="col">{{ _('Size') }}</th>
+                        <th scope="col">{{ _('Codecs') }}</th>
+                        <th scope="col">{{ _('Link') }}</th>
+                    </tr>
+                </thead>
+                <tbody>
+                    {% for format in download_formats %}
+                    <tr>
+                        <td data-label="{{ _('Ext') }}">{{ format['ext'] }}</td>
+                        <td data-label="{{ _('Video') }}">{{ format['video_quality'] }}</td>
+                        <td data-label="{{ _('Audio') }}">{{ format['audio_quality'] }}</td>
+                        <td data-label="{{ _('Size') }}">{{ format['file_size'] }}</td>
+                        <td data-label="{{ _('Codecs') }}">{{ format['codecs'] }}</td>
+                        <td data-label="{{ _('Link') }}"><a class="download-link" href="{{ format['url'] }}" download="{{ title }}.{{ format['ext'] }}" aria-label="{{ _('Download') }} {{ format['ext'] }} {{ format['video_quality'] }} {{ format['audio_quality'] }}">{{ _('Download') }}</a></td>
+                    </tr>
+                    {% endfor %}
+                    {% for download in other_downloads %}
+                    <tr>
+                        <td data-label="{{ _('Ext') }}">{{ download['ext'] }}</td>
+                        <td data-label="{{ _('Video') }}" colspan="3">{{ download['label'] }}</td>
+                        <td data-label="{{ _('Codecs') }}">{{ download.get('codecs', 'N/A') }}</td>
+                        <td data-label="{{ _('Link') }}"><a class="download-link" href="{{ download['url'] }}" download aria-label="{{ _('Download') }} {{ download['label'] }}">{{ _('Download') }}</a></td>
+                    </tr>
+                    {% endfor %}
+                </tbody>
+            </table>
+        </div>
     </details>
 {% else %}
     <span class="v-download"></span>
@@ -304,8 +322,8 @@
     <!-- /plyr -->
     {% endif %}
 
-    <!-- Storyboard Preview Thumbnails -->
-    {% if settings.use_video_player != 2 %}
+    <!-- Storyboard Preview Thumbnails (native players only; Plyr handles this internally) -->
+    {% if settings.use_video_player != 2 and settings.native_player_storyboard %}
     <script src="/youtube.com/static/js/storyboard-preview.js"></script>
     {% endif %}
 
@@ -1,5 +1,6 @@
 from datetime import datetime
 import logging
+import random
 import settings
 import socks
 import sockshandler
@@ -19,11 +20,11 @@ import gevent.queue
 import gevent.lock
 import collections
 import stem
-
-logger = logging.getLogger(__name__)
 import stem.control
 import traceback
 
+logger = logging.getLogger(__name__)
+
 # The trouble with the requests library: It ships its own certificate bundle via certifi
 # instead of using the system certificate store, meaning self-signed certificates
 # configured by the user will not work. Some draconian networks block TLS unless a corporate
@@ -54,8 +55,8 @@ import traceback
 # https://github.com/kennethreitz/requests/issues/2966
 
 # Until then, I will use a mix of urllib3 and urllib.
-import urllib3
-import urllib3.contrib.socks
+import urllib3  # noqa: E402 (imported here intentionally after the long note above)
+import urllib3.contrib.socks  # noqa: E402
 
 URL_ORIGIN = "/https://www.youtube.com"
 
@@ -177,7 +178,6 @@ def get_pool(use_tor):
 class HTTPAsymmetricCookieProcessor(urllib.request.BaseHandler):
     '''Separate cookiejars for receiving and sending'''
     def __init__(self, cookiejar_send=None, cookiejar_receive=None):
-        import http.cookiejar
         self.cookiejar_send = cookiejar_send
         self.cookiejar_receive = cookiejar_receive
 
@@ -208,6 +208,16 @@ class FetchError(Exception):
         self.error_message = error_message
 
 
+def _noop_cleanup(response):
+    '''No-op cleanup used when the urllib opener owns the response.'''
+    return None
+
+
+def _release_conn_cleanup(response):
+    '''Release the urllib3 pooled connection back to the pool.'''
+    response.release_conn()
+
+
 def decode_content(content, encoding_header):
     encodings = encoding_header.replace(' ', '').split(',')
     for encoding in reversed(encodings):
@@ -263,7 +273,7 @@ def fetch_url_response(url, headers=(), timeout=15, data=None,
         opener = urllib.request.build_opener(cookie_processor)
 
         response = opener.open(req, timeout=timeout)
-        cleanup_func = (lambda r: None)
+        cleanup_func = _noop_cleanup
 
     else:  # Use a urllib3 pool. Cookies can't be used since urllib3 doesn't have easy support for them.
         # default: Retry.DEFAULT = Retry(3)
@@ -297,7 +307,7 @@ def fetch_url_response(url, headers=(), timeout=15, data=None,
                                  error_message=msg)
             else:
                 raise
-        cleanup_func = (lambda r: r.release_conn())
+        cleanup_func = _release_conn_cleanup
 
     return response, cleanup_func
 
@@ -315,8 +325,6 @@ def fetch_url(url, headers=(), timeout=15, report_text=None, data=None,
 
     Max retries: 5 attempts with exponential backoff
     """
-    import random
-
     max_retries = 5
     base_delay = 1.0  # Base delay in seconds
 
@@ -401,7 +409,7 @@ def fetch_url(url, headers=(), timeout=15, report_text=None, data=None,
                     logger.error(f'Server error {response.status} after {max_retries} retries')
                     raise FetchError(str(response.status), reason=response.reason, ip=None)
 
-                # Exponential backoff for server errors
+                # Exponential backoff for server errors. Non-crypto jitter.
                 delay = (base_delay * (2 ** attempt)) + random.uniform(0, 1)
                 logger.warning(f'Server error ({response.status}). Waiting {delay:.1f}s before retry {attempt + 1}/{max_retries}...')
                 time.sleep(delay)
@@ -432,7 +440,7 @@ def fetch_url(url, headers=(), timeout=15, report_text=None, data=None,
             else:
                 raise
 
-            # Wait and retry
+            # Wait and retry. Non-crypto jitter.
             delay = (base_delay * (2 ** attempt)) + random.uniform(0, 1)
             logger.warning(f'Connection error. Waiting {delay:.1f}s before retry {attempt + 1}/{max_retries}...')
             time.sleep(delay)
@@ -532,30 +540,30 @@ class RateLimitedQueue(gevent.queue.Queue):
 
 
 def download_thumbnail(save_directory, video_id):
-    save_location = os.path.join(save_directory, video_id + ".jpg")
+    save_location = os.path.join(save_directory, video_id + '.jpg')
     for quality in ('hq720.jpg', 'sddefault.jpg', 'hqdefault.jpg'):
-        url = f"https://i.ytimg.com/vi/{video_id}/{quality}"
+        url = f'https://i.ytimg.com/vi/{video_id}/{quality}'
         try:
-            thumbnail = fetch_url(url, report_text="Saved thumbnail: " + video_id)
+            thumbnail = fetch_url(url, report_text='Saved thumbnail: ' + video_id)
         except FetchError as e:
             if '404' in str(e):
                 continue
-            print("Failed to download thumbnail for " + video_id + ": " + str(e))
+            print('Failed to download thumbnail for ' + video_id + ': ' + str(e))
             return False
         except urllib.error.HTTPError as e:
            if e.code == 404:
                 continue
-            print("Failed to download thumbnail for " + video_id + ": " + str(e))
+            print('Failed to download thumbnail for ' + video_id + ': ' + str(e))
             return False
         try:
-            f = open(save_location, 'wb')
+            with open(save_location, 'wb') as f:
+                f.write(thumbnail)
         except FileNotFoundError:
             os.makedirs(save_directory, exist_ok=True)
-            f = open(save_location, 'wb')
-        f.write(thumbnail)
-        f.close()
+            with open(save_location, 'wb') as f:
+                f.write(thumbnail)
         return True
-    print("No thumbnail available for " + video_id)
+    print('No thumbnail available for ' + video_id)
     return False
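The retry loops in `fetch_url` compute their sleep as capped exponential backoff plus up to one second of non-cryptographic jitter, which keeps many clients from retrying in lockstep. The schedule can be sketched standalone; `backoff_delay` is an illustrative name for the inline expression the diff uses:

```python
import random


def backoff_delay(attempt, base_delay=1.0):
    """Delay in seconds before retry number `attempt` (0-based).

    base_delay * 2**attempt gives the exponential component;
    random.uniform(0, 1) adds jitter to de-synchronise clients.
    """
    return (base_delay * (2 ** attempt)) + random.uniform(0, 1)
```

With the defaults the first five retries wait roughly 1-2s, 2-3s, 4-5s, 8-9s and 16-17s.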
@@ -1,3 +1,3 @@
 from __future__ import unicode_literals
 
-__version__ = 'v0.4.5'
+__version__ = 'v0.5.0'
@@ -1,27 +1,26 @@
+import json
+import logging
+import math
+import os
+import re
+import traceback
+import urllib
+from math import ceil
+from types import SimpleNamespace
+from urllib.parse import parse_qs, urlencode
+
+import flask
+import gevent
+import urllib3.exceptions
+from flask import request
+
 import youtube
 from youtube import yt_app
 from youtube import util, comments, local_playlist, yt_data_extract
 from youtube.util import time_utc_isoformat
 import settings
 
-from flask import request
-import flask
-import logging
-
 logger = logging.getLogger(__name__)
 
-import json
-import gevent
-import os
-import math
-import traceback
-import urllib
-import re
-import urllib3.exceptions
-from urllib.parse import parse_qs, urlencode
-from types import SimpleNamespace
-from math import ceil
-
 
 try:
     with open(os.path.join(settings.data_dir, 'decrypt_function_cache.json'), 'r') as f:
@@ -62,7 +61,9 @@ def get_video_sources(info, target_resolution):
             continue
         if not (fmt['init_range'] and fmt['index_range']):
             # Allow HLS-backed audio tracks (served locally, no init/index needed)
-            if not fmt.get('url', '').startswith('http://127.') and not '/ytl-api/' in fmt.get('url', ''):
+            url_value = fmt.get('url', '')
+            if (not url_value.startswith('http://127.')
+                    and '/ytl-api/' not in url_value):
                 continue
             # Mark as HLS for frontend
             fmt['is_hls'] = True
@@ -222,7 +223,7 @@ def lang_in(lang, sequence):
     if lang is None:
         return False
     lang = lang[0:2]
-    return lang in (l[0:2] for l in sequence)
+    return lang in (item[0:2] for item in sequence)
 
 
 def lang_eq(lang1, lang2):
@@ -238,9 +239,9 @@ def equiv_lang_in(lang, sequence):
     e.g. if lang is en, extracts en-GB from sequence.
     Necessary because if only a specific variant like en-GB is available, can't ask YouTube for simply en. Need to get the available variant.'''
     lang = lang[0:2]
-    for l in sequence:
-        if l[0:2] == lang:
-            return l
+    for item in sequence:
+        if item[0:2] == lang:
+            return item
     return None
 
 
@@ -310,7 +311,15 @@ def get_subtitle_sources(info):
             sources[-1]['on'] = True
 
     if len(sources) == 0:
-        assert len(info['automatic_caption_languages']) == 0 and len(info['manual_caption_languages']) == 0
+        # Invariant: with no caption sources there should be no languages
+        # either. Don't rely on `assert` which is stripped under `python -O`.
+        if (len(info['automatic_caption_languages']) != 0
+                or len(info['manual_caption_languages']) != 0):
+            logger.warning(
+                'Unexpected state: no subtitle sources but %d auto / %d manual languages',
+                len(info['automatic_caption_languages']),
+                len(info['manual_caption_languages']),
+            )
 
     return sources
 
@@ -669,7 +678,6 @@ def format_bytes(bytes):
 @yt_app.route('/ytl-api/audio-track-proxy')
 def audio_track_proxy():
     """Proxy for DASH audio tracks to avoid throttling."""
-    cache_key = request.args.get('id', '')
     audio_url = request.args.get('url', '')
 
     if not audio_url:
@@ -692,7 +700,7 @@ def audio_track_proxy():
 @yt_app.route('/ytl-api/audio-track')
 def get_audio_track():
     """Proxy HLS audio/video: playlist or individual segment."""
-    from youtube.hls_cache import get_hls_url, _tracks
+    from youtube.hls_cache import get_hls_url
 
     cache_key = request.args.get('id', '')
     seg_url = request.args.get('seg', '')
@@ -916,7 +924,7 @@ def get_hls_manifest():
         flask.abort(404, 'HLS manifest not found')
 
     try:
-        print(f'[hls-manifest] Fetching HLS manifest...')
+        print('[hls-manifest] Fetching HLS manifest...')
         manifest = util.fetch_url(hls_url,
                                   headers=(('User-Agent', 'Mozilla/5.0'),),
                                   debug_name='hls_manifest').decode('utf-8')
@@ -1018,7 +1026,8 @@ def get_storyboard_vtt():
     for i, board in enumerate(boards):
         *t, _, sigh = board.split("#")
         width, height, count, width_cnt, height_cnt, interval = map(int, t)
-        if height != wanted_height: continue
+        if height != wanted_height:
+            continue
         q['sigh'] = [sigh]
         url = f"{base_url}?{urlencode(q, doseq=True)}"
         storyboard = SimpleNamespace(
@@ -1182,7 +1191,6 @@ def get_watch_page(video_id=None):
     uni_sources = video_sources['uni_sources']
     pair_sources = video_sources['pair_sources']
-    pair_idx = video_sources['pair_idx']
     audio_track_sources = video_sources['audio_track_sources']
 
     # Build audio tracks list from HLS
     audio_tracks = []
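The relaxed check in `get_video_sources` admits formats whose URL points back at the app itself, either the loopback HLS proxy or the app's own `/ytl-api/` routes, even when init/index ranges are missing. That predicate, isolated for clarity (the `is_locally_proxied` helper name is illustrative, not from the diff):

```python
def is_locally_proxied(url):
    """True when the media URL is served by this app (loopback address or
    an /ytl-api/ route) rather than directly by googlevideo.com."""
    url = url or ''  # tolerate formats with no URL at all
    return url.startswith('http://127.') or '/ytl-api/' in url
```

Formats failing this test and lacking `init_range`/`index_range` are skipped, as before; passing ones are tagged `is_hls` for the frontend.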