192 Commits

Author SHA1 Message Date
Jesus
ed4b05d9b6 Bump version to v0.3.2 2025-03-08 16:41:58 -05:00
Jesus
6f88b1cec6 Refactor extract_info in watch.py to improve client flexibility
Introduce primary_client, fallback_client, and last_resort_client variables for better configurability.
Replace hardcoded 'android_vr' with primary_client in fetch_player_response call.
2025-03-08 16:40:51 -05:00
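The refactor above amounts to a client fallback chain. A minimal sketch (only `'android_vr'` and the variable names come from the commit; the fallback values and error-handling shape here are assumptions):

```python
# Sketch of the client fallback chain described in the commit.
# 'android_vr' is from the commit message; 'ios' and 'web' below
# are illustrative placeholder values, not the project's actual config.
primary_client = 'android_vr'
fallback_client = 'ios'        # assumed
last_resort_client = 'web'     # assumed

def extract_info_with_fallback(video_id, fetch_player_response):
    """Try each client in order until one returns usable player data."""
    for client in (primary_client, fallback_client, last_resort_client):
        info = fetch_player_response(client, video_id)
        if info and not info.get('error'):
            return info
    return {'error': 'all clients failed'}
```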
Jesus
03451fb8ae fix: prevent error when closing avMerge if not a function 2025-03-08 16:39:37 -05:00
Jesus
e45c3fd48b Add error styles in player 2025-03-08 16:38:31 -05:00
Jesus
1153ac8f24 Fix NoneType inside comments.py
Bug:

Traceback (most recent call last):
  File "/home/rusian/yt-local/youtube/comments.py", line 180, in video_comments
    post_process_comments_info(comments_info)
  File "/home/rusian/yt-local/youtube/comments.py", line 81, in post_process_comments_info
    comment['author'] = strip_non_ascii(comment['author'])
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/rusian/yt-local/youtube/util.py", line 843, in strip_non_ascii
    stripped = (c for c in string if 0 < ord(c) < 127)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object is not iterable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "src/gevent/greenlet.py", line 900, in gevent._gevent_cgreenlet.Greenlet.run
  File "/home/rusian/yt-local/youtube/comments.py", line 195, in video_comments
    comments_info['error'] = 'YouTube blocked the request. IP address: %s' % e.ip
                                                                             ^^^^
AttributeError: 'TypeError' object has no attribute 'ip'
2025-03-08T01:25:47Z <Greenlet at 0x7f251e5279c0: video_comments('hcm55lU9knw', 0, lc='')> failed with AttributeError
2025-03-08 16:37:33 -05:00
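The first traceback shows `strip_non_ascii` iterating over `None` when a comment has no author. One possible guard (a sketch; the actual fix in util.py or comments.py may differ):

```python
def strip_non_ascii(string):
    # The traceback shows this raising TypeError when the input is None;
    # returning an empty string for None is an assumed guard.
    if string is None:
        return ''
    stripped = (c for c in string if 0 < ord(c) < 127)
    return ''.join(stripped)
```

The second traceback also suggests the surrounding `except` clause in video_comments catches more than the blocked-request error whose `.ip` attribute it formats.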
Jesus
c256a045f9 Bump version to v0.3.1 2025-03-08 16:34:29 -05:00
Jesus
98603439cb Improve buffer management for different platforms
- Introduced `BUFFER_CONFIG` to define buffer sizes for various systems (webOS, Samsung Tizen, Android TV, desktop).
- Added `detectSystem()` function to determine the platform based on `navigator.userAgent`.
- Updated `Stream` constructor to use platform-specific buffer sizes dynamically.
- Added console log for debugging detected system and applied buffer size.
2025-03-08 16:32:26 -05:00
Jesus
a6ca011202 version v0.3.0 2025-03-08 16:28:39 -05:00
Jesus
114c2572a4 Renew plyr UI and simplify elements 2025-03-08 16:28:27 -05:00
f64b362603 update logic plyr-start.js 2025-03-03 08:20:41 +08:00
2fd7910194 version 0.2.21 2025-03-02 06:24:03 +08:00
c2e53072f7 update dependencies 2025-03-01 04:58:31 +08:00
c2986f3b14 Refactoring get_app_version 2025-03-01 04:06:11 +08:00
57854169f4 minor fix deprecation warning
tests/test_util.py: 14 warnings
  /home/runner/work/yt-local/youtube-local/youtube/util.py:321: DeprecationWarning: HTTPResponse.getheader() is deprecated and will be removed in urllib3 v2.1.0. Instead use HTTPResponse.headers.get(name, default).
    response.getheader('Content-Encoding', default='identity'))
2025-03-01 01:12:09 +08:00
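The warning above names its own replacement. The change looks like this, demonstrated with a stand-in object rather than a real urllib3 response:

```python
# Deprecated (slated for removal in urllib3 v2.1.0):
#     response.getheader('Content-Encoding', default='identity')
# Replacement, as the warning suggests:
#     response.headers.get('Content-Encoding', 'identity')

class FakeResponse:
    """Minimal stand-in exposing a dict-like .headers attribute."""
    def __init__(self, headers):
        self.headers = headers

def content_encoding(response):
    return response.headers.get('Content-Encoding', 'identity')
```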
3217305f9f version 0.2.20 2025-02-28 11:04:06 +08:00
639aadd2c1 Remove gather_googlevideo_domains setting
This was an old experiment to collect googlevideo domains to see
if there was a pattern that could correlate to IP address to
look for workarounds for 403 errors

Can bug out if enabled and it fails to get any video URLs,
so remove it since it is obsolete and some people are enabling it

See #218
2025-02-28 10:58:29 +08:00
7157df13cd Remove params to fetch_player_response 2025-02-28 10:58:15 +08:00
630e0137e0 Increase playlist count to 1000 by default if video count cannot be obtained
This way, buttons will still appear even if there is a failure
to read playlist metadata

Fixes #220
2025-02-28 10:51:51 +08:00
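The fallback described above can be sketched as follows (the function name and metadata shape are assumptions; only the default of 1000 comes from the commit):

```python
def get_playlist_video_count(metadata):
    # Default to 1000 when the video count can't be read, so that
    # pagination buttons still render even if playlist metadata
    # extraction fails (per the commit; names here are assumed).
    try:
        return int(metadata['video_count'])
    except (KeyError, TypeError, ValueError):
        return 1000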
a0c51731af channel.py: Catch FetchError
Should catch this error to fail gracefully

See #227
2025-02-28 10:51:29 +08:00
d361996fc0 util: use visitorData for api request
watch: use android_vr client to get player data
2025-02-28 10:43:14 +08:00
Jesus
4ef7dda14a version 0.2.19 2024-10-11 11:25:12 +08:00
Jesus
ee31cedae0 Revert "Refactoring code and reuse INNERTUBE_CLIENTS"
This reverts commit 8af98968dd.
2024-10-11 11:22:36 +08:00
d3b0cb5e13 workflows: update git sync actions 2024-08-05 09:23:38 +08:00
0a79974d11 Add sync to c.fridu.us and sourcehut 2024-08-05 05:27:58 +08:00
4e327944a0 Add CI 2024-07-15 10:39:00 +08:00
09a437f7fb v0.2.18 2024-07-09 13:10:10 +08:00
3cbe18aac0 Fix cves
CVE-2024-34064
CVE-2024-34069
CVE-2024-37891
2024-07-09 13:03:36 +08:00
Jesus
62418f8e95 Switch to android test suite client by default
Invidious' solution to the destruction of the android client:
https://github.com/iv-org/invidious/pull/4650

Fixes #207
2024-06-11 10:46:25 +08:00
bfd3760969 Release v0.2.17 2024-04-29 01:00:13 +08:00
efd89b2e64 set ios client 2024-04-27 09:54:42 +08:00
0dc1747178 update version 0.2.16 2024-04-21 13:16:18 +08:00
8577164785 update client params 2024-04-21 13:14:08 +08:00
8af98968dd Refactoring code and reuse INNERTUBE_CLIENTS 2024-04-21 13:13:19 +08:00
8f00cbcdd6 update
update android_music client
2024-04-21 11:21:35 +08:00
af75551bc2 update
update android client
2024-04-21 11:18:42 +08:00
3a6cc1e44f version 0.2.15 2024-04-08 07:25:50 +08:00
7664b5f0ff normalize css 2024-04-08 07:12:03 +08:00
ec5d236cad fix color dark theme 2024-04-08 07:10:03 +08:00
d6b7a255d0 v0.2.14 2024-04-07 11:52:53 +08:00
22bc7324db css normalize 2024-04-07 11:50:53 +08:00
48e8f271e7 update styles to modern 2024-04-07 11:44:19 +08:00
9a0ad6070b version 0.2.13 2024-04-06 22:12:21 +08:00
6039589f24 Update android params
Discovered by LuanRT - https://github.com/LuanRT/YouTube.js/pull/624
2024-04-06 22:04:14 +08:00
d4cba7eb6c version 0.2.12 2024-03-31 04:44:03 +08:00
70cb453280 Set 'ios' client to bypass
absidue notes that blockage of the android client is collateral
damage due to YouTube's war with ReVanced. Switching to iOS should
keep us out of the line of fire for now:
https://github.com/yt-dlp/yt-dlp/issues/9554#issuecomment-2026828421
2024-03-31 04:43:11 +08:00
7a106331e7 README.md: update 2024-03-31 02:06:20 +08:00
8775e131af Temporary fix: all requests with the ANDROID client get redirected to the aQvGIIdgFDM video, hence the different "content not available"
Set the YTMUSIC_ANDROID client instead, but it's just a matter of time before YouTube updates that one too :(
2024-03-31 01:48:43 +08:00
1f16f7cb62 version 0.2.11 2024-03-30 10:14:08 +08:00
80b7f3cd00 Update user-agents and update android client parameters to fix blockage 2024-03-30 10:10:35 +08:00
8b79e067bc README.md: update 2024-03-11 10:30:09 +08:00
cda0627d5a version 0.2.10 2024-03-11 09:55:09 +08:00
ad40dd6d6b update requirements 2024-03-11 09:53:55 +08:00
b91d53dc6f Use response.headers instead of response.getheaders()
response.getheaders() will be deprecated by urllib3.
2024-03-11 09:47:35 +08:00
cda4fd1f26 version 0.2.9 2024-03-10 02:13:29 +08:00
ff2a2edaa5 generate_release: Fix wrong (32-bit) MSVCR included for 64-bit
Insert the 64-bit Microsoft Visual C runtime for 64-bit releases
2024-03-10 02:11:09 +08:00
38d8d5d4c5 av-merge: Retry more than once for timeouts 2024-03-10 02:08:23 +08:00
f010452abf Update android client version to fix 400 Bad Request 2024-03-10 02:02:42 +08:00
ab93f8242b bump v0.2.8 2024-01-29 06:10:14 +08:00
1505414a1a Update Plyr custom styles for menu container
Specifically, set a maximum height and add vertical scrolling
to address an issue with Plyr's menu height.

Improve the overall usability and visual appearance of the menu in the video player.
2024-01-29 06:06:18 +08:00
c04d7c9a24 Adjust Plyr custom styles for video preview thumbnail
In custom_plyr.css, made adjustments to styles for video preview thumbnail in Plyr

Specific changes:
- Modified the size and positioning of the thumbnail container to improve the visual presentation.
- Enhanced the user experience when interacting with video previews.
2024-01-29 05:08:18 +08:00
3ee2df7faa Refactor styles on video playback page
Made changes to the styles on the video playback page to enhance visibility and address issues with the video player.
Added a new custom style file for Plyr, and removed redundant and unused styles in watch.css.

Specific changes:
- Added custom_plyr.css for Plyr styles.
- Removed redundant styles related to playback issues in watch.css
2024-01-29 05:06:38 +08:00
d2c883c211 fix thumbnail into channel 2024-01-28 13:21:54 +08:00
59c988f819 Revert update plyr 2024-01-28 00:31:30 +08:00
629c811e84 av-merge: Retry failed requests
Should reduce playback stalling
2024-01-26 01:12:54 +08:00
284024433b av-merge: Use fetchRange promise properly 2024-01-26 01:09:12 +08:00
55a8e50d6a Fix plyr hash version into embed 2024-01-24 11:53:32 +08:00
810dff999e Set flexible responsive video 2024-01-24 11:50:13 +08:00
4da91fb972 update plyr 2024-01-22 12:10:13 +08:00
874ac0a0ac Add autoplay to plyr 2024-01-22 12:09:52 +08:00
89ae1e265b Refactor captions logic in Plyr video player initialization
Simplify the captions logic in the Plyr video player initialization by using a conditional statement.
Cleaner and more concise code.
2024-01-22 07:48:00 +08:00
00bd9fee6f Add autoplay functionality in Plyr video player
Introduce autoplay feature in the Plyr video player based on the configuration settings.
2024-01-22 07:44:24 +08:00
b215e2a3b2 Add setting to autoplay videos 2024-01-22 06:38:52 +08:00
97972d6fa3 Fix like count extraction 2024-01-22 06:35:46 +08:00
6ae20bb1f5 Add option to always use integrated sources
Make the prefer_integrated_sources setting an int with 0,1,2
instead of a bool, where 2 makes it always use integrated sources
unless none are available.
2024-01-22 06:33:34 +08:00
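The 0/1/2 semantics above can be sketched as a small decision helper (only the meaning of 2 comes from the commit; the quality condition for 1 and all names here are assumptions):

```python
def use_integrated(prefer_integrated, integrated_available,
                   separate_available, integrated_quality_ok=True):
    # 0: use integrated sources only when no separate sources exist
    # 1: prefer integrated when available and of acceptable quality
    #    (the quality condition is an assumption for this sketch)
    # 2: always use integrated unless none are available (per the commit)
    if not integrated_available:
        return False
    if prefer_integrated == 2:
        return True
    if prefer_integrated == 1:
        return integrated_quality_ok
    return not separate_available
```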
5f3b90ad45 Fix channel about tab 2024-01-22 06:29:42 +08:00
2463af7685 subscriptions: Update live/upcoming/premiere durations upon check
The durations were previously set to "LIVE", "UPCOMING", etc. and
would not be updated once the livestream was over or the upcoming
video was published.
2024-01-22 06:14:32 +08:00
86bb312d6d Subscriptions: Fix exceptions when videos are missing upload dates
E.g. line 548, AttributeError: 'NoneType' object has no attribute 'lower'

When upload dates are unavailable, make up dates that preserve the
correct video order
2024-01-22 06:03:16 +08:00
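Making up order-preserving dates might look like this (a sketch: the field name `time_published` and the newest-first assumption are illustrative, not taken from the code):

```python
from datetime import datetime, timedelta

def fill_missing_upload_dates(videos):
    # Videos are assumed newest-first; when a date is missing, invent one
    # slightly older than the previous video's so that sorting by date
    # keeps the correct order (field names and shapes are assumptions).
    last = datetime.now()
    for video in videos:
        if video.get('time_published') is None:
            last -= timedelta(seconds=1)
            video['time_published'] = last
        else:
            last = video['time_published']
    return videos
```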
964b99ea40 Fix comment replies not working
YouTube set a limit of 200 replies, otherwise it rejects the
request. So decrease the requested number of replies to 200
2024-01-22 06:00:49 +08:00
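The fix amounts to clamping the requested reply count before building the request (a sketch; the constant and function names are assumptions, and the real continuation-token construction is omitted):

```python
MAX_COMMENT_REPLIES = 200  # limit YouTube now enforces, per the commit

def clamp_reply_count(requested):
    # Requesting more than 200 replies makes YouTube reject the request,
    # so clamp before building the continuation token.
    return min(requested, MAX_COMMENT_REPLIES)
```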
51a1693789 Fix comment count extraction due to 'K/M' postfixes
YouTube now displays 2K comments instead of 2359, for instance
2024-01-22 05:59:11 +08:00
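Handling the 'K'/'M' postfixes means parsing approximate counts like "2K" alongside exact ones like "2359". A hedged sketch (the real extraction in yt-local may differ):

```python
import re

def parse_approx_count(text):
    # Parse counts such as '2K', '1.3M', or '2359'.
    # Returns None when the text doesn't look like a count.
    match = re.match(r'([\d.,]+)\s*([KM]?)', text.strip())
    if not match:
        return None
    number = float(match.group(1).replace(',', ''))
    multiplier = {'': 1, 'K': 1000, 'M': 1000000}[match.group(2)]
    return int(number * multiplier)
```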
ca4a735692 Add settings for filtering out shorts in subscriptions and channels 2024-01-22 05:55:59 +08:00
2140f48919 Subscriptions: Use playlist method to get channel videos
Use the UU (user uploads) playlist since it includes streams
2024-01-22 05:52:44 +08:00
4be01d3964 Put back sort by oldest logic since YouTube added it back
Previous commit replaced it with shorts-filtering, use sort code
number 4 for that instead. Sort by oldest is still broken
pending reverse engineering of new ctoken format, however.
2024-01-22 05:47:09 +08:00
b45e3476c8 channels: Use the UU playlist to get videos by default
This will be much less likely to break moving forward since
YouTube rarely changes the playlist api

Videos page now includes shorts and streams in the video list

Also include an option to filter out shorts on the videos page
2024-01-22 05:39:11 +08:00
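The UU (user uploads) playlist id is derived from the channel id by swapping the `UC` prefix for `UU`; a small sketch of that mapping (the helper name is an assumption):

```python
def uploads_playlist_id(channel_id):
    # A channel id like 'UCxxxx' maps to its uploads playlist 'UUxxxx',
    # which also includes streams, per the commit.
    if not channel_id.startswith('UC'):
        raise ValueError('expected a UC... channel id')
    return 'UU' + channel_id[2:]
```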
d591956baa playlist: show 100 videos per page instead of 20
Also add an option to the internal playlist ctoken function
for filtering out shorts, to be used in future anti-shorts features
2024-01-22 05:21:12 +08:00
Jesus
6011a08cdf v0.2.6 2023-09-11 04:20:49 +08:00
Jesus
83af4ab0d7 Fix comment count not extracted sometimes
YouTube created a new key 'commentCount' in addition to 'headerText'
2023-09-11 04:15:25 +08:00
Jesus
5594d017e2 Fix related vids, like_count, playlist sometimes missing
Cause is that some pages have the onResponseReceivedEndpoints key
at the top level with useless stuff in it, and the extract_items
function was searching in that instead of the 'contents' key.

Change to use if blocks instead of elif blocks in the
extract_items function.
2023-09-11 04:13:56 +08:00
Jesus
8f9c5eeb48 Fix 403s 1 minute into videos
https://github.com/iv-org/invidious/issues/4027
https://github.com/TeamNewPipe/NewPipeExtractor/pull/1084/files
2023-09-11 04:08:23 +08:00
Jesus
89e21302e3 generate_release.py: fix syntax error 2023-09-11 04:07:15 +08:00
Jesus
cb4ceefada Filter out translated audio tracks
See comment in code
2023-09-11 04:06:11 +08:00
Jesus E
c4cc5cecbf README.md: update 2023-06-19 21:38:05 -04:00
Jesus E
cc8f30eba2 Relax error and send error_code to template 2023-06-19 21:23:25 -04:00
Jesus E
6740afd6a0 version 0.2.5 2023-06-18 20:30:39 -04:00
Jesus E
63c0f4aa8f Fix typo 2023-06-18 20:12:48 -04:00
Jesus E
8908dc138f Set related videos thumbnail to HQ 2023-06-18 19:47:15 -04:00
Jesus E
cd7624f2cb Set hqdefault thumnail images 2023-06-18 19:45:34 -04:00
Jesus E
5d53225874 Fix pagination 2023-06-18 13:55:07 -04:00
Jesus E
6af17450c6 README.md: update 2023-06-17 20:01:01 -04:00
Jesus E
d85c27a728 version 0.2.4 2023-06-17 19:41:40 -04:00
Jesus E
344341b87f README.md: update 2023-06-17 17:08:30 -04:00
Jesus E
21224c8dae watch_extraction.py: fix conditional 2023-06-17 16:25:34 -04:00
Jesus E
93b58efa0e Fix offset format 2023-06-17 16:17:18 -04:00
Jesus E
db08283368 Update token offset field
Change offset field to a uint with field number 1
2023-06-17 16:16:40 -04:00
Jesus E
0f4bf45cde Fix minor formatting issues 2023-06-17 16:14:59 -04:00
Jesus E
d7f934b7b2 Merge short and video parsing even further
Use multi_get and multi_deep_get for tag differences
Replace the duration check with conservative_update
2023-06-17 16:14:02 -04:00
Jesus E
a4299dc917 Merge short and video parsing 2023-06-17 16:10:59 -04:00
Jesus E
e6fd9b40f4 Fix parsing shorts
Add check for extracting duration for shorts
Make short duration extraction stricter
Fix handling shorts with no views
2023-06-17 16:08:52 -04:00
Jesus E
f322035d4a Add functional but preliminary channel tab support
Add channel tabs to the channel template and script
Update continuation token to request different tabs

Add support for 'reelItemRenderer' format required to extract shorts
2023-06-17 16:05:40 -04:00
Jesus E
74907a8183 Music list extraction: read from SONG field
This one is used when there is no corresponding YouTube video
for the track
2023-05-28 21:45:20 -04:00
Jesus E
ec8f652bc8 Update generate_release.py
Need to use 64-bit by default now, because gevent is no longer
built for 32-bit Python
2023-05-28 21:44:13 -04:00
Jesus E
aa57ace742 Fix music list extraction
Closes #160
2023-05-28 21:42:13 -04:00
Jesus E
512798366c Revert to android URLs and fix 403s by including params
Including 'params': '8AEB' fixes the issue with the URLs
returning 403 after a couple minutes into the video.

Credit to @ImportTaste for pointing this out

Closes #168
2023-05-28 21:36:15 -04:00
Jesus E
9859c5485e Only use android URLs if encrypted; they randomly go 403
Android URLs now begin returning 403s mid playback at random.
2023-05-28 21:32:37 -04:00
Jesus E
e54596f3e9 Partially fix age restricted videos
Does not work for videos that require decryption because
decryption is not working (giving 403) for some reason.

Related invidious issue for decryption not working:
https://github.com/iv-org/invidious/issues/3245

Partial fix for #146
2023-05-28 21:30:51 -04:00
Jesus E
c6e1b366b5 Fix "This video is unavailable" due to outdated android client

Send the latest android client version as well as a new key
with the sdk version.

See https://github.com/iv-org/invidious/pull/3255 for more details

Fixes #165
2023-05-28 21:21:11 -04:00
Jesus E
43e7f7ce93 Cache channel metadata for pages that don't provide it
Ensures channel profile picture & description are displayed

Also ensures that videos added to a local playlist from such pages
will have the channel name included

Fixes #151
2023-05-28 21:19:55 -04:00
Jesus E
97032b31ee Update channel ctoken format due to youtube changes
Hopefully they don't immediately revert it.

Related to #151
2023-05-28 21:17:03 -04:00
Jesus E
ba3714c860 server.py: route any subdomain of googleusercontent.com & ggpht.com

Avatars no longer loaded after YouTube changed the subdomain.

Fixes #163
2023-05-28 21:12:13 -04:00
Jesus E
14c8cf3f5b Fix error with non-channel-id urls
Only update channel id based on the url if we have it
2023-05-28 21:10:39 -04:00
Jesus E
3025158d14 Use ctoken_v3 format for channel playlist & search pages
For #151
2023-05-28 21:08:05 -04:00
Jesus E
fb13fd21ef channels: Fix sorting & page prefixing not working
Further completes #151
2023-05-28 21:06:53 -04:00
Jesus E
68752000f0 Update channel to new ctoken format
Huge thanks to @michaelweiser

Different sortings still don't work for videos and playlists
2023-05-28 21:04:36 -04:00
Jesus E
7b60751e99 Fix failure to detect vp9.2 and mp4v.20.3 codecs 2023-05-28 20:47:47 -04:00
Jesus E
9890617098 Fix fmt extraction mime_type regex failure as well as exceptions 2023-05-28 20:44:30 -04:00
Jesus E
beca545951 Go to video with URL 2023-05-28 20:42:47 -04:00
Jesus E
a9a68e7df3 Go to video with URL 2023-05-28 20:42:00 -04:00
Jesus E
0f78f07875 Remove leftover print statement 2023-05-28 20:40:25 -04:00
Jesus E
08545a29df Fix likes count 2023-05-28 20:39:11 -04:00
Jesús
9564ee30fe update gevent and update greenlet 2022-11-19 04:24:20 +08:00
Jesús
6806146450 Fix CVE-2022-29361 2022-10-25 03:52:52 +08:00
Jesús
5764586646 version 0.2.3 2022-10-06 04:21:47 +08:00
Jesús
aae1aec6ad README.md: update 2022-10-06 03:59:24 +08:00
Jesús
91bdaa716c Remove M4A downloads feature from planned features due to RIAA issue 2022-10-06 03:52:34 +08:00
Jesús
9a3a3c9c59 README.md: fix typo 2022-10-05 21:56:55 +08:00
Jesús
a736412fbd README.md: update 2022-10-05 21:54:56 +08:00
Jesús
85860087b6 Fix missing id in input tag 2022-10-05 10:30:17 +08:00
Jesús
a19da4050c Fix self-closing tag W3C issues 2022-10-05 10:29:23 +08:00
Jesús
c524eb16e5 Disable downloads due to RIAA issues
Ref: https://torrentfreak.com/riaa-thwarts-youts-attempt-to-declare-youtube-ripping-legal-221002/
Archive: https://archive.ph/OZQbN
2022-10-05 10:14:06 +08:00
Jesus
6ba3959e40 version 0.2.2 2022-08-07 05:57:41 +08:00
zrose584
7d767ff9ce copyTextToClipboard: support fullscreen 2022-08-07 02:52:39 +08:00
zrose584
65e7d85549 onKeyDown: ignore plyr CustomEvents 2022-08-07 02:50:05 +08:00
Jesus
599a09d7fc Set exact versions of packages from pip 2022-08-07 01:44:35 +08:00
Jesús
6c29802eb7 fix figure tag of sc-video 2022-05-31 04:12:16 +08:00
Jesús
6225dd085e set badge 2022-05-31 02:07:37 +08:00
Jesús
0cbdc78c3c Merge branch 'master' of ssh://c.hgit.ga/software/yt-local into master 2022-05-30 23:44:16 +08:00
Jesús
a1dd283832 Revert update plyr
because Iceweasel does not support engine v8+
More info: https://repo.palemoon.org/MoonchildProductions/UXP/issues/1675
2022-05-30 23:43:32 +08:00
Jesús
ed6c3ae036 Revert update plyr
because Iceweasel does not support engine v8+
More info: https://repo.palemoon.org/MoonchildProductions/UXP/issues/1675
2022-05-30 23:38:11 +08:00
Jesús
1fbc0cdd46 Fix preview_thumbnails
use 'deep_get' for storyboard
2022-05-30 22:45:08 +08:00
James Taylor
263469cd30 Filter out noisy video routing requests in console
Signed-off-by: Jesús <heckyel@hyperbola.info>
2022-03-30 00:45:45 +08:00
James Taylor
79fd2966cd Extract captions base_url using different method when missing
The base url will be randomly missing.

Take one of the listed captions urls which already
has the &lang and automatic specifiers. Then remove these
specifiers.

Signed-off-by: Jesús <heckyel@hyperbola.info>
2022-03-30 00:41:30 +08:00
James Taylor
dcd4b0f0ae Fix exception when _captions_base_url is not present
Signed-off-by: Jesús <heckyel@hyperbola.info>
2022-03-30 00:37:43 +08:00
Jesús
e8cbc5074a [embed]: Fix undefined storyboard_url and add license 2022-02-21 15:52:27 -05:00
James Taylor
4768835766 Fix failing exit node retry test
The urllib3 retries.history wasn't working anyways

Signed-off-by: Jesús <heckyel@hyperbola.info>
2022-02-16 11:46:15 -05:00
James Taylor
3f4db4199c Fix error during exit blockage detection when Set-Cookie missing
Signed-off-by: Jesús <heckyel@hyperbola.info>
2022-02-15 21:32:00 -05:00
James Taylor
5260716d14 Fix MaxRetryErrors due to Tor exit node blockage
Sometimes YouTube redirects to a google.com/sorry page, seemingly
setting up redirect loops. Other times the url redirects
to itself.

Signed-off-by: Jesús <heckyel@hyperbola.info>
2022-02-15 21:30:47 -05:00
Jesús
32d30bde9c update plyr config 2022-02-11 12:03:01 -05:00
Jesús
cd876f65e3 Update plyr module 2022-02-11 12:01:22 -05:00
Jesús
a2723d76cd README.md: update public instance 2022-02-06 15:44:17 -05:00
Jesús
fef9c778ed check variable author_description 2022-01-31 22:36:54 -05:00
Jesús
6188ba81a0 Fix author in playlist 2022-01-31 22:12:55 -05:00
Jesús
a465805cb9 Fix name settings in hotkeys 2022-01-29 16:52:18 -05:00
Jesús
12c0daa58a hotkeys.js: fix 'f' 2022-01-29 11:06:47 -05:00
zrose584
0f58f1d114 also autofocus search for /results or on error
Signed-off-by: Jesús <heckyel@hyperbola.info>
2022-01-29 10:57:31 -05:00
Jesús
f46035c6b6 README.md: minor fix 2022-01-20 12:29:56 -05:00
Jesús
3b57335e4c [Design]: fix author_description 2022-01-17 23:37:45 -05:00
zrose584
a5ef801c07 handle missing storyboard
Signed-off-by: Jesús <heckyel@hyperbola.info>
2022-01-17 09:01:09 -05:00
zrose584
63c92e0c4e add preview thumbnails
Signed-off-by: Jesús <heckyel@hyperbola.info>
2022-01-09 16:39:50 -05:00
Andreas
693b4ac98b Add application/vnd.ms-excel as CSV mime type
Windows sends application/vnd.ms-excel as MIME Type instead of text/csv

Signed-off-by: Jesús <heckyel@hyperbola.info>
2021-12-31 18:21:37 -05:00
Jesús
90b080b7bb [FrontEnd]: fix placeholder in play-box 2021-12-31 18:19:07 -05:00
Jesús
90338c25c6 [FrontEnd]: fix dropdown design 2021-12-31 18:15:59 -05:00
Jesús
f572bb62aa [FrontEnd]: remove unused styles 2021-12-31 17:39:59 -05:00
Jesús
f2fc1cf564 [FrontEnd]: fix missing unsubscribe style 2021-12-31 17:35:48 -05:00
Jesús
7b7e69a8b1 [FrontEnd]: light_theme, change link-visited color 2021-12-27 16:20:58 -05:00
Jesús
217541bd9c [FrontEnd]: fix dropdown design 2021-12-27 16:13:35 -05:00
Jesús
b21b2a6009 fix: failed to load resource: net::ERR_FILE_NOT_FOUND 2021-12-27 13:23:18 -05:00
Jesús
a1d3cc5045 update formats 2021-12-27 13:05:54 -05:00
Jesús
92067638b1 Disable dislikes
Ref: https://blog.youtube/news-and-events/update-to-youtube/
2021-12-26 13:29:55 -05:00
Jesús
99b70497f2 [.gitignore]: update 2021-12-26 13:14:01 -05:00
Jesús
4405742b72 Delete unused file 2021-12-26 12:47:36 -05:00
Jesús
f3d3c4c0a4 Disable 'Prefer integrated sources' by default 2021-12-26 12:44:46 -05:00
Jesús
5006149b59 Change default format, prioritize FLOSS formats 2021-12-26 12:42:43 -05:00
Jesús
bcbd83fa30 [FrontEnd]: improved settings design 2021-12-26 12:27:24 -05:00
Jesús
0820909b7e [frontend]: fix design in playlist 2021-12-18 23:12:08 -05:00
Jesús
519b7e64e7 [frontend]: fix reporInfo in prototype 2021-12-16 18:19:15 -05:00
Jesús
5d753351c5 [frontend]: relax find segment 2021-12-16 18:10:00 -05:00
Jesús
df7e41b61a [frontend]: fix global scope, change var to let 2021-12-16 17:46:16 -05:00
Jesús
dd498e63d9 [Design]: shorten Clear text 2021-12-03 19:36:08 -05:00
Jesús
8e5b6dc831 [Design]: add 0.5rem for grid-gap (col and row) 2021-12-03 19:35:03 -05:00
Jesús
66b2b20007 update public instance 2021-11-29 14:52:50 -05:00
James Taylor
2e5a1133e3 Work around video throttling using android user-agent
Temporary fix for #95

Signed-off-by: Jesús <heckyel@hyperbola.info>
2021-10-18 18:56:53 -05:00
Jesús
ec5e995262 README.md: about public instances 2021-09-28 23:34:45 -05:00
Jesús
2fe0b5e539 Improve input styles 2021-09-22 12:56:59 -05:00
67 changed files with 6274 additions and 4164 deletions

.gitea/workflows/ci.yaml
@@ -0,0 +1,23 @@
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: 3.11
      - name: Install dependencies
        run: |
          pip install --upgrade pip
          pip install -r requirements-dev.txt
      - name: Run tests
        run: pytest


@@ -0,0 +1,40 @@
name: git-sync-with-mirror
on:
  push:
    branches: [ master ]
  workflow_dispatch:
jobs:
  git-sync:
    runs-on: ubuntu-latest
    steps:
      - name: git-sync
        env:
          git_sync_source_repo: git@git.fridu.us:heckyel/yt-local.git
          git_sync_destination_repo: ssh://git@c.fridu.us/software/yt-local.git
        if: env.git_sync_source_repo && env.git_sync_destination_repo
        uses: astounds/git-sync@v1
        with:
          source_repo: git@git.fridu.us:heckyel/yt-local.git
          source_branch: "master"
          destination_repo: ssh://git@c.fridu.us/software/yt-local.git
          destination_branch: "master"
          source_ssh_private_key: ${{ secrets.GIT_SYNC_SOURCE_SSH_PRIVATE_KEY }}
          destination_ssh_private_key: ${{ secrets.GIT_SYNC_DESTINATION_SSH_PRIVATE_KEY }}
      - name: git-sync-sourcehut
        env:
          git_sync_source_repo: git@git.fridu.us:heckyel/yt-local.git
          git_sync_destination_repo: git@git.sr.ht:~heckyel/yt-local
        if: env.git_sync_source_repo && env.git_sync_destination_repo
        uses: astounds/git-sync@v1
        with:
          source_repo: git@git.fridu.us:heckyel/yt-local.git
          source_branch: "master"
          destination_repo: git@git.sr.ht:~heckyel/yt-local
          destination_branch: "master"
          source_ssh_private_key: ${{ secrets.GIT_SYNC_SOURCE_SSH_PRIVATE_KEY }}
          destination_ssh_private_key: ${{ secrets.GIT_SYNC_DESTINATION_SSH_PRIVATE_KEY }}
        continue-on-error: true

.gitignore

@@ -12,3 +12,4 @@ latest-dist.zip
*.7z
*.zip
*venv*
+flycheck_*


@@ -1,5 +1,3 @@
-[![builds.sr.ht status](https://builds.sr.ht/~heckyel/yt-local/commits/.build.yml.svg)](https://builds.sr.ht/~heckyel/yt-local/commits/.build.yml?)
# yt-local
Fork of [youtube-local](https://github.com/user234683/youtube-local)
@@ -24,10 +22,10 @@ The YouTube API is not used, so no keys or anything are needed. It uses the same
* Local playlists: These solve the two problems with creating playlists on YouTube: (1) they're datamined and (2) videos frequently get deleted by YouTube and lost from the playlist, making it very difficult to find a reupload as the title of the deleted video is not displayed.
* Themes: Light, Gray, and Dark
* Subtitles
-* Easily download videos or their audio
+* Easily download videos or their audio. (Disabled by default)
* No ads
* View comments
-* Javascript not required
+* JavaScript not required
* Theater and non-theater mode
* Subscriptions that are independent from YouTube
* Can import subscriptions from YouTube
@@ -56,7 +54,6 @@ The YouTube API is not used, so no keys or anything are needed. It uses the same
- [ ] Import youtube playlist into a local playlist
- [ ] Rearrange items of local playlist
- [x] Video qualities other than 360p and 720p by muxing video and audio
-- [ ] Corrected .m4a downloads
- [x] Indicate if comments are disabled
- [x] Indicate how many comments a video has
- [ ] Featured channels page
@@ -94,11 +91,11 @@ Firstly, if you wish to run this in portable mode, create the empty file "settin
To run the program on windows, open `run.bat`. On GNU+Linux/MacOS, run `python3 server.py`.
-Access youtube URLs by prefixing them with `http://localhost:9010/`, For instance, `http://localhost:9010/https://www.youtube.com/watch?v=vBgulDeV2RU` If you want embeds on web to also redirect to yt-local, make sure "Iframes" is checked under advanced options in your redirector rule. Check test `http://localhost:9010/youtube.com/embed/vBgulDeV2RU`
-You can use an addon such as Redirector ([Firefox](https://addons.mozilla.org/en-US/firefox/addon/redirector/)|[Chrome](https://chrome.google.com/webstore/detail/redirector/ocgpenflpmgnfapjedencafcfakcekcd)) to automatically redirect YouTube URLs to yt-local. I use the include pattern `^(https?://(?:[a-zA-Z0-9_-]*\.)?(?:youtube\.com|youtu\.be|youtube-nocookie\.com)/.*)` and redirect pattern `http://localhost:9010/$1` (Make sure you're using regular expression mode).
+Access youtube URLs by prefixing them with `http://localhost:9010/`.
+For instance, `http://localhost:9010/https://www.youtube.com/watch?v=vBgulDeV2RU`
+You can use an addon such as Redirector ([Firefox](https://addons.mozilla.org/en-US/firefox/addon/redirector/)|[Chrome](https://chrome.google.com/webstore/detail/redirector/ocgpenflpmgnfapjedencafcfakcekcd)) to automatically redirect YouTube URLs to yt-local. I use the include pattern `^(https?://(?:[a-zA-Z0-9_-]*\.)?(?:youtube\.com|youtu\.be|youtube-nocookie\.com)/.*)` and the redirect pattern `http://localhost:9010/$1` (Make sure you're using regular expression mode).
+If you want embeds on the web to also redirect to yt-local, make sure "Iframes" is checked under advanced options in your redirector rule. Check test `http://localhost:9010/youtube.com/embed/vBgulDeV2RU`
yt-local can be added as a search engine in firefox to make searching more convenient. See [here](https://support.mozilla.org/en-US/kb/add-or-remove-search-engine-firefox) for information on firefox search plugins.
@@ -114,7 +111,7 @@ If you don't want to waste system resources leaving the Tor Browser open in addi
For Windows, to make standalone Tor run at startup, press Windows Key + R and type `shell:startup` to open the Startup folder. Create a new shortcut there. For the command of the shortcut, enter `"C:\[path-to-Tor-Browser-directory]\Tor\tor.exe" SOCKSPort 9150 ControlPort 9151`. You can then launch this shortcut to start it. Alternatively, if something isn't working, to see what's wrong, open `cmd.exe` and go to the directory `C:\[path-to-Tor-Browser-directory]\Tor`. Then run `tor SOCKSPort 9150 ControlPort 9151 | more`. The `more` part at the end is just to make sure any errors are displayed, to fix a bug in Windows cmd where tor doesn't display any output. You can stop tor in the task manager.
For Debian/Ubuntu, you can `sudo apt install tor` to install the command line version of Tor, and then run `sudo systemctl start tor` to run it as a background service that will get started during boot as well. However, Tor on the command line uses the port `9050` by default (rather than the `9150` used by the Tor Browser). So you will need to change `Tor port` to `9050` and `Tor control port` to `9051` in the yt-local settings page. Additionally, you will need to enable the Tor control port by uncommenting the line `ControlPort 9051`, and setting `CookieAuthentication` to 0 in `/etc/tor/torrc`. If no Tor package is available for your distro, you can configure the `tor` binary located at `./Browser/TorBrowser/Tor/tor` inside the Tor Browser installation location to run at start time, or create a service to do it.
### Tor video routing
@@ -144,6 +141,18 @@ Pull requests and issues are welcome
For coding guidelines and an overview of the software architecture, see the [HACKING.md](docs/HACKING.md) file.
## GPG public key
```bash
72CFB264DFC43F63E098F926E607CE7149F4D71C
```
## Public instances
yt-local is not designed to be run as a public instance. However, there is one public instance of yt-local, with fewer features:
- <https://m.fridu.us/https://youtube.com>
## License

This project is licensed under the GNU Affero General Public License v3 (GNU AGPLv3) or any later version.

View File

@@ -1,7 +1,8 @@
# Generate a windows release and a generated embedded distribution of python
-# Latest python version is the argument of the script
+# Latest python version is the argument of the script (or oldwin for
+# vista, 7 and 32-bit versions)
# Requirements: 7z, git
-# wine 32-bit is required in order to build on Linux
+# wine is required in order to build on Linux
import sys
import urllib
@@ -12,22 +13,28 @@ import os
import hashlib

latest_version = sys.argv[1]
+if len(sys.argv) > 2:
+    bitness = sys.argv[2]
+else:
+    bitness = '64'
+
+if latest_version == 'oldwin':
+    bitness = '32'
+    latest_version = '3.7.9'
+    suffix = 'windows-vista-7-only'
+else:
+    suffix = 'windows'

def check(code):
    if code != 0:
        raise Exception('Got nonzero exit code from command')

def check_subp(x):
    if x.returncode != 0:
        raise Exception('Got nonzero exit code from command')

def log(line):
    print('[generate_release.py] ' + line)

# https://stackoverflow.com/questions/7833715/python-deleting-certain-file-extensions
def remove_files_with_extensions(path, extensions):
    for root, dirs, files in os.walk(path):
@@ -35,7 +42,6 @@ def remove_files_with_extensions(path, extensions):
        if os.path.splitext(file)[1] in extensions:
            os.remove(os.path.join(root, file))

def download_if_not_exists(file_name, url, sha256=None):
    if not os.path.exists('./' + file_name):
        log('Downloading ' + file_name + '..')
@@ -51,7 +57,6 @@ def download_if_not_exists(file_name, url, sha256=None):
    else:
        log('Using existing ' + file_name)

def wine_run_shell(command):
    if os.name == 'posix':
        check(os.system('wine ' + command.replace('\\', '/')))
@@ -60,14 +65,12 @@ def wine_run_shell(command):
    else:
        raise Exception('Unsupported OS')

def wine_run(command_parts):
    if os.name == 'posix':
-        command_parts = ['wine', ] + command_parts
+        command_parts = ['wine',] + command_parts
    if subprocess.run(command_parts).returncode != 0:
        raise Exception('Got nonzero exit code from command')

# ---------- Get current release version, for later ----------
log('Getting current release version')
describe_result = subprocess.run(['git', 'describe', '--tags'], stdout=subprocess.PIPE)
@@ -98,19 +101,33 @@ if len(os.listdir('./yt-local')) == 0:
# ----------- Generate embedded python distribution -----------
os.environ['PYTHONDONTWRITEBYTECODE'] = '1'  # *.pyc files double the size of the distribution
get_pip_url = 'https://bootstrap.pypa.io/get-pip.py'
-latest_dist_url = 'https://www.python.org/ftp/python/' + latest_version + '/python-' + latest_version + '-embed-win32.zip'
+latest_dist_url = 'https://www.python.org/ftp/python/' + latest_version + '/python-' + latest_version
+if bitness == '32':
+    latest_dist_url += '-embed-win32.zip'
+else:
+    latest_dist_url += '-embed-amd64.zip'

# I've verified that all the dlls in the following are signed by Microsoft.
# Using this because Microsoft only provides installers whose files can't be
# extracted without a special tool.
-visual_c_runtime_url = 'https://github.com/yuempek/vc-archive/raw/master/archives/vc15_(14.10.25017.0)_2017_x86.7z'
-visual_c_runtime_sha256 = '2549eb4d2ce4cf3a87425ea01940f74368bf1cda378ef8a8a1f1a12ed59f1547'
+if bitness == '32':
+    visual_c_runtime_url = 'https://github.com/yuempek/vc-archive/raw/master/archives/vc15_(14.10.25017.0)_2017_x86.7z'
+    visual_c_runtime_sha256 = '2549eb4d2ce4cf3a87425ea01940f74368bf1cda378ef8a8a1f1a12ed59f1547'
+    visual_c_name = 'vc15_(14.10.25017.0)_2017_x86.7z'
+    visual_c_path_to_dlls = 'runtime_minimum/System'
+else:
+    visual_c_runtime_url = 'https://github.com/yuempek/vc-archive/raw/master/archives/vc15_(14.10.25017.0)_2017_x64.7z'
+    visual_c_runtime_sha256 = '4f00b824c37e1017a93fccbd5775e6ee54f824b6786f5730d257a87a3d9ce921'
+    visual_c_name = 'vc15_(14.10.25017.0)_2017_x64.7z'
+    visual_c_path_to_dlls = 'runtime_minimum/System64'

download_if_not_exists('get-pip.py', get_pip_url)
-download_if_not_exists('python-dist-' + latest_version + '.zip', latest_dist_url)
-download_if_not_exists('vc15_(14.10.25017.0)_2017_x86.7z',
-                       visual_c_runtime_url,
-                       sha256=visual_c_runtime_sha256)
+python_dist_name = 'python-dist-' + latest_version + '-' + bitness + '.zip'
+download_if_not_exists(python_dist_name, latest_dist_url)
+download_if_not_exists(visual_c_name,
+                       visual_c_runtime_url, sha256=visual_c_runtime_sha256)

if os.path.exists('./python'):
    log('Removing old python distribution')
@@ -119,7 +136,7 @@ if os.path.exists('./python'):

log('Extracting python distribution')
-check(os.system(r'7z -y x -opython python-dist-' + latest_version + '.zip'))
+check(os.system(r'7z -y x -opython ' + python_dist_name))

log('Executing get-pip.py')
wine_run(['./python/python.exe', '-I', 'get-pip.py'])
@@ -183,7 +200,7 @@ with open('./python/python3' + major_release + '._pth', 'a', encoding='utf-8') a
    f.write('..\n')'''

log('Inserting Microsoft C Runtime')
-check_subp(subprocess.run([r'7z', '-y', 'e', '-opython', 'vc15_(14.10.25017.0)_2017_x86.7z', 'runtime_minimum/System']))
+check_subp(subprocess.run([r'7z', '-y', 'e', '-opython', visual_c_name, visual_c_path_to_dlls]))

log('Installing dependencies')
wine_run(['./python/python.exe', '-I', '-m', 'pip', 'install', '--no-compile', '-r', './requirements.txt'])
@@ -219,7 +236,7 @@ log('Copying python distribution into release folder')
shutil.copytree(r'./python', r'./yt-local/python')

# ----------- Create release zip -----------
-output_filename = 'yt-local-' + release_tag + '-windows.zip'
+output_filename = 'yt-local-' + release_tag + '-' + suffix + '.zip'
if os.path.exists('./' + output_filename):
    log('Removing previous zipped release')
    os.remove('./' + output_filename)
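The `download_if_not_exists(..., sha256=...)` calls above guard the Visual C++ runtime download with a checksum. A minimal stdlib-only sketch of that kind of verification (the helper name and chunk size here are assumptions, not the script's exact code):

```python
import hashlib
import os

def verify_sha256(file_name, expected_sha256):
    """Return True if file_name's SHA-256 digest matches expected_sha256."""
    h = hashlib.sha256()
    with open(file_name, 'rb') as f:
        # Read in chunks so large archives don't need to fit in memory
        for chunk in iter(lambda: f.read(8192), b''):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Demo with a small local file instead of a real download
with open('example.bin', 'wb') as f:
    f.write(b'hello')
digest = hashlib.sha256(b'hello').hexdigest()
print(verify_sha256('example.bin', digest))  # True
os.remove('example.bin')
```

A real downloader would delete the file and raise when the digest mismatches, so a corrupted or tampered archive is never extracted.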

View File

@@ -1,28 +1,5 @@
-attrs>=20.3.0
-Brotli>=1.0.9
-cachetools>=4.2.2
-click>=8.0.1
-dataclasses>=0.6
-defusedxml>=0.7.1
-Flask>=2.0.1
-gevent>=21.8.0
-greenlet>=1.1.1
-importlib-metadata>=4.6.4
-iniconfig>=1.1.1
-itsdangerous>=2.0.1
-Jinja2>=3.0.1
-MarkupSafe>=2.0.1
-packaging>=20.9
-pluggy>=0.13.1
-py>=1.10.0
-pyparsing>=2.4.7
-PySocks>=1.7.1
-pytest>=6.2.2
-stem>=1.8.0
-toml>=0.10.2
-typing-extensions>=3.10.0.0
-urllib3>=1.26.6
-Werkzeug>=2.0.1
-zipp>=3.5.0
-zope.event>=4.5.0
-zope.interface>=5.4.0
+# Include all production requirements
+-r requirements.txt
+
+# Development requirements
+pytest>=6.2.1

View File

@@ -1,20 +1,8 @@
-Brotli>=1.0.9
-cachetools>=4.2.2
-click>=8.0.1
-dataclasses>=0.6
-defusedxml>=0.7.1
-Flask>=2.0.1
-gevent>=21.8.0
-greenlet>=1.1.1
-importlib-metadata>=4.6.4
-itsdangerous>=2.0.1
-Jinja2>=3.0.1
-MarkupSafe>=2.0.1
-PySocks>=1.7.1
+Flask>=1.0.3
+gevent>=1.2.2
+Brotli>=1.0.7
+PySocks>=1.6.8
+urllib3>=1.24.1
+defusedxml>=0.5.0
+cachetools>=4.0.0
stem>=1.8.0
-typing-extensions>=3.10.0.0
-urllib3>=1.26.6
-Werkzeug>=2.0.1
-zipp>=3.5.0
-zope.event>=4.5.0
-zope.interface>=5.4.0

View File

@@ -84,7 +84,7 @@ def proxy_site(env, start_response, video=False):
    else:
        response, cleanup_func = util.fetch_url_response(url, send_headers)

-    response_headers = response.getheaders()
+    response_headers = response.headers
    if isinstance(response_headers, urllib3._collections.HTTPHeaderDict):
        response_headers = response_headers.items()

    if video:
@@ -169,8 +169,8 @@ site_handlers = {
    'youtube-nocookie.com': yt_app,
    'youtu.be': youtu_be,
    'ytimg.com': proxy_site,
-    'yt3.ggpht.com': proxy_site,
-    'lh3.googleusercontent.com': proxy_site,
+    'ggpht.com': proxy_site,
+    'googleusercontent.com': proxy_site,
    'sponsor.ajay.app': proxy_site,
    'googlevideo.com': proxy_video,
}
@@ -250,12 +250,14 @@ def site_dispatch(env, start_response):
class FilteredRequestLog:
    '''Don't log noisy thumbnail and avatar requests'''
-    filter_re = re.compile(r"""(?x)^
-        "GET /https://(i[.]ytimg[.]com/|
+    filter_re = re.compile(r'''(?x)
+        "GET\ /https://(
+            i[.]ytimg[.]com/|
            www[.]youtube[.]com/data/subscription_thumbnails/|
            yt3[.]ggpht[.]com/|
-            www[.]youtube[.]com/api/timedtext).*" 200
-        """)
+            www[.]youtube[.]com/api/timedtext|
+            [-\w]+[.]googlevideo[.]com/).*"\ (200|206)
+        ''')
    def __init__(self):
        pass
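The updated `filter_re` is a verbose-mode (`(?x)`) pattern, which ignores unescaped whitespace, hence the `\ ` escapes before the literal spaces; it now also swallows `googlevideo.com` requests and `206` partial-content responses. A small check of the same pattern against sample log lines (the log lines themselves are made up for illustration):

```python
import re

# Same shape as the updated filter in server.py; sample lines are hypothetical
filter_re = re.compile(r'''(?x)
    "GET\ /https://(
        i[.]ytimg[.]com/|
        www[.]youtube[.]com/data/subscription_thumbnails/|
        yt3[.]ggpht[.]com/|
        www[.]youtube[.]com/api/timedtext|
        [-\w]+[.]googlevideo[.]com/).*"\ (200|206)
    ''')

noisy = '127.0.0.1 - - "GET /https://i.ytimg.com/vi/abc/hqdefault.jpg HTTP/1.1" 200 5000'
useful = '127.0.0.1 - - "GET /youtube.com/watch?v=abc HTTP/1.1" 200 5000'
print(bool(filter_re.search(noisy)))   # True  -> suppressed from the log
print(bool(filter_re.search(useful)))  # False -> still logged
```

Note the original anchored the pattern with `^` and used `re.match`-style semantics; the new version relies on `search` finding the quoted request anywhere in the line.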

View File

@@ -151,6 +151,13 @@ For security reasons, enabling this is not recommended.''',
        'category': 'interface',
    }),

+    ('autoplay_videos', {
+        'type': bool,
+        'default': False,
+        'comment': '',
+        'category': 'playback',
+    }),

    ('default_resolution', {
        'type': int,
        'default': 720,
@@ -168,17 +175,13 @@ For security reasons, enabling this is not recommended.''',
        'category': 'playback',
    }),

-    ('codec_rank_h264', {
+    ('codec_rank_av1', {
        'type': int,
        'default': 1,
-        'label': 'H.264 Codec Ranking',
+        'label': 'AV1 Codec Ranking',
        'comment': '',
        'options': [(1, '#1'), (2, '#2'), (3, '#3')],
        'category': 'playback',
-        'description': (
-            'Which video codecs to prefer. Codecs given the same '
-            'ranking will use smaller file size as a tiebreaker.'
-        )
    }),

    ('codec_rank_vp', {
@@ -190,22 +193,31 @@ For security reasons, enabling this is not recommended.''',
        'category': 'playback',
    }),

-    ('codec_rank_av1', {
+    ('codec_rank_h264', {
        'type': int,
        'default': 3,
-        'label': 'AV1 Codec Ranking',
+        'label': 'H.264 Codec Ranking',
        'comment': '',
        'options': [(1, '#1'), (2, '#2'), (3, '#3')],
        'category': 'playback',
+        'description': (
+            'Which video codecs to prefer. Codecs given the same '
+            'ranking will use smaller file size as a tiebreaker.'
+        )
    }),

    ('prefer_uni_sources', {
-        'label': 'Prefer integrated sources',
-        'type': bool,
-        'default': True,
+        'label': 'Use integrated sources',
+        'type': int,
+        'default': 1,
        'comment': '',
+        'options': [
+            (0, 'Prefer not'),
+            (1, 'Prefer'),
+            (2, 'Always'),
+        ],
        'category': 'playback',
-        'description': 'If enabled and the default resolution is set to 360p or 720p, uses the unified (integrated) video files which contain audio and video, with buffering managed by the browser. If disabled, always uses the separate audio and video files through custom buffer management in av-merge via MediaSource.',
+        'description': 'If set to Prefer or Always and the default resolution is set to 360p or 720p, uses the unified (integrated) video files which contain audio and video, with buffering managed by the browser. If set to prefer not, uses the separate audio and video files through custom buffer management in av-merge via MediaSource unless they are unavailable.',
    }),

    ('use_video_player', {
@@ -220,6 +232,20 @@ For security reasons, enabling this is not recommended.''',
        'category': 'interface',
    }),

+    ('use_video_download', {
+        'type': int,
+        'default': 0,
+        'comment': '',
+        'options': [
+            (0, 'Disabled'),
+            (1, 'Enabled'),
+        ],
+        'category': 'interface',
+        'comment': '''If enabled, you may incur legal issues with RIAA. Disabled by default.
+More info: https://torrentfreak.com/riaa-thwarts-youts-attempt-to-declare-youtube-ripping-legal-221002/
+Archive: https://archive.ph/OZQbN''',
+    }),

    ('proxy_images', {
        'label': 'Route images',
        'type': bool,
@@ -284,11 +310,16 @@ For security reasons, enabling this is not recommended.''',
        'comment': '',
    }),

-    ('gather_googlevideo_domains', {
+    ('include_shorts_in_subscriptions', {
        'type': bool,
-        'default': False,
-        'comment': '''Developer use to debug 403s''',
-        'hidden': True,
+        'default': 0,
+        'comment': '',
+    }),
+    ('include_shorts_in_channel', {
+        'type': bool,
+        'default': 1,
+        'comment': '',
    }),

    ('debugging_save_responses', {
@@ -300,7 +331,7 @@ For security reasons, enabling this is not recommended.''',
    ('settings_version', {
        'type': int,
-        'default': 4,
+        'default': 6,
        'comment': '''Do not change, remove, or comment out this value, or else your settings may be lost or corrupted''',
        'hidden': True,
    }),
@@ -373,10 +404,28 @@ def upgrade_to_4(settings_dict):
    return new_settings

+def upgrade_to_5(settings_dict):
+    new_settings = settings_dict.copy()
+    if 'prefer_uni_sources' in settings_dict:
+        new_settings['prefer_uni_sources'] = int(settings_dict['prefer_uni_sources'])
+    new_settings['settings_version'] = 5
+    return new_settings

+def upgrade_to_6(settings_dict):
+    new_settings = settings_dict.copy()
+    if 'gather_googlevideo_domains' in new_settings:
+        del new_settings['gather_googlevideo_domains']
+    new_settings['settings_version'] = 6
+    return new_settings

upgrade_functions = {
    1: upgrade_to_2,
    2: upgrade_to_3,
    3: upgrade_to_4,
+    4: upgrade_to_5,
+    5: upgrade_to_6,
}
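The `upgrade_functions` dict maps each settings version to the function that lifts it one version higher, so an old settings file can be migrated step by step. A sketch of how such a chain presumably runs (the `upgrade` driver loop here is an assumption for illustration; only the two upgrade functions come from the diff):

```python
def upgrade_to_5(settings_dict):
    # v5 turned the prefer_uni_sources bool into an int-valued option
    new_settings = settings_dict.copy()
    if 'prefer_uni_sources' in settings_dict:
        new_settings['prefer_uni_sources'] = int(settings_dict['prefer_uni_sources'])
    new_settings['settings_version'] = 5
    return new_settings

def upgrade_to_6(settings_dict):
    # v6 dropped the gather_googlevideo_domains debug setting
    new_settings = settings_dict.copy()
    new_settings.pop('gather_googlevideo_domains', None)
    new_settings['settings_version'] = 6
    return new_settings

upgrade_functions = {4: upgrade_to_5, 5: upgrade_to_6}

def upgrade(settings_dict, target_version):
    # Hypothetical driver: apply upgrades one version at a time
    while settings_dict.get('settings_version', 1) < target_version:
        step = upgrade_functions[settings_dict.get('settings_version', 1)]
        settings_dict = step(settings_dict)
    return settings_dict

old = {'settings_version': 4, 'prefer_uni_sources': True,
       'gather_googlevideo_domains': False}
new = upgrade(old, 6)
print(new)  # {'settings_version': 6, 'prefer_uni_sources': 1}
```

Chaining single-step functions means each release only has to define one migration, and any historical settings file reaches the current version by composition.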

View File

@@ -54,7 +54,10 @@ def commatize(num):
    if num is None:
        return ''
    if isinstance(num, str):
-        num = int(num)
+        try:
+            num = int(num)
+        except ValueError:
+            return num
    return '{:,}'.format(num)
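The patched `commatize` now passes non-numeric strings through instead of crashing on `int()`. A self-contained copy of the fixed helper showing its behavior (the `'1.2M'` sample input is made up):

```python
def commatize(num):
    # Mirrors the patched helper: non-numeric strings pass through unchanged
    if num is None:
        return ''
    if isinstance(num, str):
        try:
            num = int(num)
        except ValueError:
            return num  # e.g. an already-abbreviated count like '1.2M'
    return '{:,}'.format(num)

print(commatize(1234567))  # 1,234,567
print(commatize('8900'))   # 8,900
print(commatize('1.2M'))   # 1.2M
print(commatize(None))     # (empty string)
```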
@@ -115,7 +118,18 @@ def error_page(e):
            error_message=exc_info()[1].error_message,
            slim=slim
        ), 502)
-    return flask.render_template('error.html', traceback=traceback.format_exc(), slim=slim), 500
+    elif (exc_info()[0] == util.FetchError
+          and exc_info()[1].code == '404'
+    ):
+        error_message = ('Error: The page you are looking for isn\'t here.')
+        return flask.render_template('error.html',
+                                     error_code=exc_info()[1].code,
+                                     error_message=error_message,
+                                     slim=slim), 404
+    return flask.render_template('error.html', traceback=traceback.format_exc(),
+                                 error_code=exc_info()[1].code,
+                                 slim=slim), 500
+    # return flask.render_template('error.html', traceback=traceback.format_exc(), slim=slim), 500

font_choices = {

View File

@@ -1,6 +1,8 @@
import base64
-from youtube import util, yt_data_extract, local_playlist, subscriptions
+from youtube import (util, yt_data_extract, local_playlist, subscriptions,
+                     playlist)
from youtube import yt_app
+import settings
import urllib
import json
@@ -31,6 +33,132 @@ headers_mobile = (
real_cookie = (('Cookie', 'VISITOR_INFO1_LIVE=8XihrAcN1l4'),)
generic_cookie = (('Cookie', 'VISITOR_INFO1_LIVE=ST1Ti53r4fU'),)
+# added an extra nesting under the 2nd base64 compared to v4
+# added tab support
+# changed offset field to uint id 1
+def channel_ctoken_v5(channel_id, page, sort, tab, view=1):
+    new_sort = (2 if int(sort) == 1 else 1)
+    offset = 30*(int(page) - 1)
+    if tab == 'videos':
+        tab = 15
+    elif tab == 'shorts':
+        tab = 10
+    elif tab == 'streams':
+        tab = 14
+    pointless_nest = proto.string(80226972,
+        proto.string(2, channel_id)
+        + proto.string(3,
+            proto.percent_b64encode(
+                proto.string(110,
+                    proto.string(3,
+                        proto.string(tab,
+                            proto.string(1,
+                                proto.string(1,
+                                    proto.unpadded_b64encode(
+                                        proto.string(1,
+                                            proto.string(1,
+                                                proto.unpadded_b64encode(
+                                                    proto.string(2,
+                                                        b"ST:"
+                                                        + proto.unpadded_b64encode(
+                                                            proto.uint(1, offset)
+                                                        )
+                                                    )
+                                                )
+                                            )
+                                        )
+                                    )
+                                )
+                                # targetId, just needs to be present but
+                                # doesn't need to be correct
+                                + proto.string(2, "63faaff0-0000-23fe-80f0-582429d11c38")
+                            )
+                            # 1 - newest, 2 - popular
+                            + proto.uint(3, new_sort)
+                        )
+                    )
+                )
+            )
+        )
+    )
+    return base64.urlsafe_b64encode(pointless_nest).decode('ascii')
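The `proto.string` / `proto.uint` helpers used above build raw protobuf wire-format bytes for the continuation token. A self-contained sketch of what those primitives presumably do (standard protobuf varint and length-delimited encoding; the helper names mirror the repo's `proto` module but this is an independent reimplementation):

```python
import base64

def varint(n):
    # Protobuf base-128 varint: 7 payload bits per byte, MSB = continuation
    out = bytearray()
    while True:
        byte = n & 0x7f
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def uint(field_number, value):
    # Wire type 0: varint field
    return varint(field_number << 3 | 0) + varint(value)

def string(field_number, data):
    # Wire type 2: length-delimited field (bytes or UTF-8 string)
    if isinstance(data, str):
        data = data.encode('utf-8')
    return varint(field_number << 3 | 2) + varint(len(data)) + data

# Innermost step of the ctoken: a page offset of 30 wrapped in an
# "ST:"-prefixed field, then urlsafe-base64'd
token = base64.urlsafe_b64encode(string(2, b'ST:' + uint(1, 30)))
print(token)
```

The real token nests several more base64 layers on top of this, as the function above shows, but every layer bottoms out in these two field encoders.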
+def channel_about_ctoken(channel_id):
+    return proto.make_protobuf(
+        ('base64p',
+         [
+            [2, 80226972,
+             [
+                [2, 2, channel_id],
+                [2, 3,
+                 ('base64p',
+                  [
+                     [2, 110,
+                      [
+                         [2, 3,
+                          [
+                             [2, 19,
+                              [
+                                 [2, 1, b'66b0e9e9-0000-2820-9589-582429a83980'],
+                              ]
+                             ],
+                          ]
+                         ],
+                      ]
+                     ],
+                  ]
+                 )
+                ],
+             ]
+            ],
+         ]
+        )
+    )
+# https://github.com/user234683/youtube-local/issues/151
+def channel_ctoken_v4(channel_id, page, sort, tab, view=1):
+    new_sort = (2 if int(sort) == 1 else 1)
+    offset = str(30*(int(page) - 1))
+    pointless_nest = proto.string(80226972,
+        proto.string(2, channel_id)
+        + proto.string(3,
+            proto.percent_b64encode(
+                proto.string(110,
+                    proto.string(3,
+                        proto.string(15,
+                            proto.string(1,
+                                proto.string(1,
+                                    proto.unpadded_b64encode(
+                                        proto.string(1,
+                                            proto.unpadded_b64encode(
+                                                proto.string(2,
+                                                    b"ST:"
+                                                    + proto.unpadded_b64encode(
+                                                        proto.string(2, offset)
+                                                    )
+                                                )
+                                            )
+                                        )
+                                    )
+                                )
+                                # targetId, just needs to be present but
+                                # doesn't need to be correct
+                                + proto.string(2, "63faaff0-0000-23fe-80f0-582429d11c38")
+                            )
+                            # 1 - newest, 2 - popular
+                            + proto.uint(3, new_sort)
+                        )
+                    )
+                )
+            )
+        )
+    )
+    return base64.urlsafe_b64encode(pointless_nest).decode('ascii')
# SORT:
#   videos:
#     Popular - 1
@@ -75,15 +203,15 @@ def channel_ctoken_v2(channel_id, page, sort, tab, view=1):
        2: 17254859483345278706,
        1: 16570086088270825023,
    }[int(sort)]
-    page_token = proto.string(61, proto.unpadded_b64encode(
-        proto.string(1, proto.uint(1, schema_number) + proto.string(
-            2,
-            proto.string(1, proto.unpadded_b64encode(proto.uint(1, offset)))
-        ))))
+    page_token = proto.string(61, proto.unpadded_b64encode(proto.string(1,
+        proto.uint(1, schema_number) + proto.string(2,
+            proto.string(1, proto.unpadded_b64encode(proto.uint(1,offset)))
+        )
+    )))

    tab = proto.string(2, tab)
    sort = proto.uint(3, int(sort))
-    # page = proto.string(15, str(page) )
+    #page = proto.string(15, str(page))

    shelf_view = proto.uint(4, 0)
    view = proto.uint(6, int(view))
@@ -118,8 +246,12 @@ def get_channel_tab(channel_id, page="1", sort=3, tab='videos', view=1,
    message = 'Got channel tab' if print_status else None

    if not ctoken:
-        ctoken = channel_ctoken_v3(channel_id, page, sort, tab, view)
+        if tab in ('videos', 'shorts', 'streams'):
+            ctoken = channel_ctoken_v5(channel_id, page, sort, tab, view)
+        else:
+            ctoken = channel_ctoken_v3(channel_id, page, sort, tab, view)
        ctoken = ctoken.replace('=', '%3D')

    # Not sure what the purpose of the key is or whether it will change
    # For now it seems to be constant for the API endpoint, not dependent
    # on the browsing session or channel
@@ -132,7 +264,7 @@ def get_channel_tab(channel_id, page="1", sort=3, tab='videos', view=1,
            'hl': 'en',
            'gl': 'US',
            'clientName': 'WEB',
-            'clientVersion': '2.20180830',
+            'clientVersion': '2.20240327.00.00',
        },
    },
    'continuation': ctoken,
@@ -147,7 +279,8 @@ def get_channel_tab(channel_id, page="1", sort=3, tab='videos', view=1,
# cache entries expire after 30 minutes
-@cachetools.func.ttl_cache(maxsize=128, ttl=30*60)
+number_of_videos_cache = cachetools.TTLCache(128, 30*60)
+@cachetools.cached(number_of_videos_cache)
def get_number_of_videos_channel(channel_id):
    if channel_id is None:
        return 1000
@@ -159,7 +292,7 @@ def get_number_of_videos_channel(channel_id):
    try:
        response = util.fetch_url(url, headers_mobile,
            debug_name='number_of_videos', report_text='Got number of videos')
-    except urllib.error.HTTPError as e:
+    except (urllib.error.HTTPError, util.FetchError) as e:
        traceback.print_exc()
        print("Couldn't retrieve number of videos")
        return 1000
@@ -172,18 +305,20 @@ def get_number_of_videos_channel(channel_id):
        return int(match.group(1).replace(',',''))
    else:
        return 0

+def set_cached_number_of_videos(channel_id, num_videos):
+    @cachetools.cached(number_of_videos_cache)
+    def dummy_func_using_same_cache(channel_id):
+        return num_videos
+    dummy_func_using_same_cache(channel_id)
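The `set_cached_number_of_videos` addition seeds the shared TTL cache from the outside: it decorates a throwaway function with the same cache object, so calling it writes the value under the same key the real function would use. A stdlib-only sketch of that idea (using a plain dict and a hand-rolled `cached` stand-in rather than cachetools, so the mechanism is visible):

```python
def cached(cache):
    # Tiny stand-in for cachetools.cached: memoize results into a shared dict
    def decorator(func):
        def wrapper(*args):
            if args not in cache:
                cache[args] = func(*args)
            return cache[args]
        return wrapper
    return decorator

number_of_videos_cache = {}

@cached(number_of_videos_cache)
def get_number_of_videos_channel(channel_id):
    return 1000  # stand-in for the real network fetch / fallback value

def set_cached_number_of_videos(channel_id, num_videos):
    # Decorating a throwaway function with the same cache object stores
    # num_videos under the key (channel_id,) that the real function uses
    @cached(number_of_videos_cache)
    def dummy_func_using_same_cache(channel_id):
        return num_videos
    dummy_func_using_same_cache(channel_id)

set_cached_number_of_videos('UC123', 42)
print(get_number_of_videos_channel('UC123'))  # 42, served from the seeded cache
```

This works because the cache key depends only on the arguments, not on which decorated function populated the entry; the same trick backs `set_cached_metadata` further down the diff.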
channel_id_re = re.compile(r'videos\.xml\?channel_id=([a-zA-Z0-9_-]{24})"')

@cachetools.func.lru_cache(maxsize=128)
def get_channel_id(base_url):
    # method that gives the smallest possible response at ~4 kb
    # needs to be as fast as possible
    base_url = base_url.replace('https://www', 'https://m')  # avoid redirect
-    response = util.fetch_url(
-        base_url + '/about?pbj=1', headers_mobile,
+    response = util.fetch_url(base_url + '/about?pbj=1', headers_mobile,
        debug_name='get_channel_id', report_text='Got channel id').decode('utf-8')
    match = channel_id_re.search(response)
    if match:
@@ -191,6 +326,31 @@ def get_channel_id(base_url):
    return None

+metadata_cache = cachetools.LRUCache(128)
+@cachetools.cached(metadata_cache)
+def get_metadata(channel_id):
+    base_url = 'https://www.youtube.com/channel/' + channel_id
+    polymer_json = util.fetch_url(base_url + '/about?pbj=1',
+                                  headers_desktop,
+                                  debug_name='gen_channel_about',
+                                  report_text='Retrieved channel metadata')
+    info = yt_data_extract.extract_channel_info(json.loads(polymer_json),
+                                                'about',
+                                                continuation=False)
+    return extract_metadata_for_caching(info)

+def set_cached_metadata(channel_id, metadata):
+    @cachetools.cached(metadata_cache)
+    def dummy_func_using_same_cache(channel_id):
+        return metadata
+    dummy_func_using_same_cache(channel_id)

+def extract_metadata_for_caching(channel_info):
+    metadata = {}
+    for key in ('approx_subscriber_count', 'short_description', 'channel_name',
+                'avatar'):
+        metadata[key] = channel_info[key]
+    return metadata

def get_number_of_videos_general(base_url):
    return get_number_of_videos_channel(get_channel_id(base_url))
@@ -211,7 +371,7 @@ def get_channel_search_json(channel_id, query, page):
            'hl': 'en',
            'gl': 'US',
            'clientName': 'WEB',
-            'clientVersion': '2.20180830',
+            'clientVersion': '2.20240327.00.00',
        },
    },
    'continuation': ctoken,
@@ -229,19 +389,20 @@ def post_process_channel_info(info):
    info['avatar'] = util.prefix_url(info['avatar'])
    info['channel_url'] = util.prefix_url(info['channel_url'])
    for item in info['items']:
-        item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['id'])
        util.prefix_urls(item)
        util.add_extra_html_info(item)
    if info['current_tab'] == 'about':
        for i, (text, url) in enumerate(info['links']):
-            if util.YOUTUBE_URL_RE.fullmatch(url):
+            if isinstance(url, str) and util.YOUTUBE_URL_RE.fullmatch(url):
                info['links'][i] = (text, util.prefix_url(url))

-def get_channel_first_page(base_url=None, channel_id=None):
+def get_channel_first_page(base_url=None, tab='videos', channel_id=None):
    if channel_id:
        base_url = 'https://www.youtube.com/channel/' + channel_id
-    return util.fetch_url(base_url + '/videos?pbj=1&view=0', headers_desktop,
-        debug_name='gen_channel_videos')
+    return util.fetch_url(base_url + '/' + tab + '?pbj=1&view=0',
+        headers_desktop, debug_name='gen_channel_' + tab)

playlist_sort_codes = {'2': "da", '3': "dd", '4': "lad"}
@@ -250,63 +411,159 @@ playlist_sort_codes = {'2': "da", '3': "dd", '4': "lad"}
# youtube.com/user/[username]/[tab] # youtube.com/user/[username]/[tab]
# youtube.com/c/[custom]/[tab] # youtube.com/c/[custom]/[tab]
# youtube.com/[custom]/[tab] # youtube.com/[custom]/[tab]
def get_channel_page_general_url(base_url, tab, request, channel_id=None): def get_channel_page_general_url(base_url, tab, request, channel_id=None):
page_number = int(request.args.get('page', 1)) page_number = int(request.args.get('page', 1))
sort = request.args.get('sort', '3') # sort 1: views
# sort 2: oldest
# sort 3: newest
# sort 4: newest - no shorts (Just a kludge on our end, not internal to yt)
default_sort = '3' if settings.include_shorts_in_channel else '4'
sort = request.args.get('sort', default_sort)
view = request.args.get('view', '1')
query = request.args.get('query', '')
ctoken = request.args.get('ctoken', '')
include_shorts = (sort != '4')
default_params = (page_number == 1 and sort in ('3', '4') and view == '1')
continuation = bool(ctoken)  # whether or not we're using a continuation
page_size = 30
try_channel_api = True
polymer_json = None

# Use the special UU playlist which contains all the channel's uploads
if tab == 'videos' and sort in ('3', '4'):
    if not channel_id:
        channel_id = get_channel_id(base_url)
    if page_number == 1 and include_shorts:
        tasks = (
            gevent.spawn(playlist.playlist_first_page,
                         'UU' + channel_id[2:],
                         report_text='Retrieved channel videos'),
            gevent.spawn(get_metadata, channel_id),
        )
        gevent.joinall(tasks)
        util.check_gevent_exceptions(*tasks)

        # Ignore the metadata for now, it is cached and will be
        # recalled later
        pl_json = tasks[0].value
        pl_info = yt_data_extract.extract_playlist_info(pl_json)
        number_of_videos = pl_info['metadata']['video_count']
        if number_of_videos is None:
            number_of_videos = 1000
        else:
            set_cached_number_of_videos(channel_id, number_of_videos)
    else:
        tasks = (
            gevent.spawn(playlist.get_videos, 'UU' + channel_id[2:],
                         page_number, include_shorts=include_shorts),
            gevent.spawn(get_metadata, channel_id),
            gevent.spawn(get_number_of_videos_channel, channel_id),
        )
        gevent.joinall(tasks)
        util.check_gevent_exceptions(*tasks)

        pl_json = tasks[0].value
        pl_info = yt_data_extract.extract_playlist_info(pl_json)
        number_of_videos = tasks[2].value

    info = pl_info
    info['channel_id'] = channel_id
    info['current_tab'] = 'videos'
    if info['items']:  # Success
        page_size = 100
        try_channel_api = False
    else:  # Try the first-page method next
        try_channel_api = True

# Use the regular channel API
if tab in ('shorts', 'streams') or (tab == 'videos' and try_channel_api):
    if channel_id:
        num_videos_call = (get_number_of_videos_channel, channel_id)
    else:
        num_videos_call = (get_number_of_videos_general, base_url)

    # Use ctoken method, which YouTube changes all the time
    if channel_id and not default_params:
        _sort = '3' if sort == '4' else sort
        page_call = (get_channel_tab, channel_id, page_number, _sort,
                     tab, view, ctoken)
    # Use the first-page method, which won't break
    else:
        page_call = (get_channel_first_page, base_url, tab)

    tasks = (
        gevent.spawn(*num_videos_call),
        gevent.spawn(*page_call),
    )
    gevent.joinall(tasks)
    util.check_gevent_exceptions(*tasks)
    number_of_videos, polymer_json = tasks[0].value, tasks[1].value
elif tab == 'about':
    # polymer_json = util.fetch_url(base_url + '/about?pbj=1', headers_desktop, debug_name='gen_channel_about')
    channel_id = get_channel_id(base_url)
    ctoken = channel_about_ctoken(channel_id)
    polymer_json = util.call_youtube_api('web', 'browse', {
        'continuation': ctoken,
    })
    continuation = True
elif tab == 'playlists' and page_number == 1:
    polymer_json = util.fetch_url(
        base_url + '/playlists?pbj=1&view=1&sort=' + playlist_sort_codes[sort],
        headers_desktop, debug_name='gen_channel_playlists')
elif tab == 'playlists':
    polymer_json = get_channel_tab(channel_id, page_number, sort,
                                   'playlists', view)
    continuation = True
elif tab == 'search' and channel_id:
    polymer_json = get_channel_search_json(channel_id, query, page_number)
elif tab == 'search':
    url = base_url + '/search?pbj=1&query=' + urllib.parse.quote(query, safe='')
    polymer_json = util.fetch_url(url, headers_desktop,
                                  debug_name='gen_channel_search')
elif tab == 'videos':
    pass
else:
    flask.abort(404, 'Unknown channel tab: ' + tab)

if polymer_json is not None:
    info = yt_data_extract.extract_channel_info(
        json.loads(polymer_json), tab, continuation=continuation
    )

if info['error'] is not None:
    return flask.render_template('error.html', error_message=info['error'])

if channel_id:
    info['channel_url'] = 'https://www.youtube.com/channel/' + channel_id
    info['channel_id'] = channel_id
else:
    channel_id = info['channel_id']

# Will have microformat present; cache metadata while we have it
if channel_id and default_params and tab not in ('videos', 'about'):
    metadata = extract_metadata_for_caching(info)
    set_cached_metadata(channel_id, metadata)
# Otherwise, populate with our (hopefully cached) metadata
elif channel_id and info.get('channel_name') is None:
    metadata = get_metadata(channel_id)
    for key, value in metadata.items():
        yt_data_extract.conservative_update(info, key, value)
    # Need to add this metadata to the videos/playlists
    additional_info = {
        'author': info['channel_name'],
        'author_id': info['channel_id'],
        'author_url': info['channel_url'],
    }
    for item in info['items']:
        item.update(additional_info)

if tab in ('videos', 'shorts', 'streams'):
    info['number_of_videos'] = number_of_videos
    info['number_of_pages'] = math.ceil(number_of_videos/page_size)
info['header_playlist_names'] = local_playlist.get_playlist_names()
if tab in ('videos', 'shorts', 'streams', 'playlists'):
    info['current_sort'] = sort
elif tab == 'search':
    info['search_box_value'] = query
@@ -315,9 +572,10 @@ def get_channel_page_general_url(base_url, tab, request, channel_id=None):
    info['page_number'] = page_number
info['subscribed'] = subscriptions.is_subscribed(info['channel_id'])
post_process_channel_info(info)

return flask.render_template('channel.html',
    parameters_dictionary=request.args,
    **info
)
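The videos-tab logic above leans on two small invariants: a channel id `UC<x>` has an all-uploads playlist `UU<x>` (hence the `'UU' + channel_id[2:]` requests), and the page count depends on which fetch method succeeded (100 items per page via the playlist method, 30 via the channel API). A minimal sketch, with helper names that are illustrative rather than the project's:

```python
import math

def uploads_playlist_id(channel_id):
    # 'UC<x>' -> 'UU<x>', the channel's all-uploads playlist,
    # requested above via 'UU' + channel_id[2:]
    return 'UU' + channel_id[2:]

def number_of_pages(number_of_videos, page_size):
    # page_size is 100 when the playlist method succeeded, 30 otherwise
    return math.ceil(number_of_videos / page_size)
```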

View File

@@ -78,7 +78,7 @@ def single_comment_ctoken(video_id, comment_id):
def post_process_comments_info(comments_info):
    for comment in comments_info['comments']:
        comment['author'] = strip_non_ascii(comment['author']) if comment.get('author') else ""
        comment['author_url'] = concat_or_none(
            '/', comment['author_url'])
        comment['author_avatar'] = concat_or_none(
@@ -97,7 +97,7 @@ def post_process_comments_info(comments_info):
        ctoken = comment['reply_ctoken']
        ctoken, err = proto.set_protobuf_value(
            ctoken,
            'base64p', 6, 3, 9, value=200)
        if err:
            print('Error setting ctoken value:')
            print(err)
@@ -127,7 +127,7 @@ def post_process_comments_info(comments_info):
        # change max_replies field to 200 in ctoken
        new_ctoken, err = proto.set_protobuf_value(
            ctoken,
            'base64p', 6, 3, 9, value=200)
        if err:
            print('Error setting ctoken value:')
            print(err)
@@ -150,7 +150,7 @@ def post_process_comments_info(comments_info):
        util.URL_ORIGIN, '/watch?v=', comments_info['video_id'])
    comments_info['video_thumbnail'] = concat_or_none(
        settings.img_prefix, 'https://i.ytimg.com/vi/',
        comments_info['video_id'], '/hqdefault.jpg'
    )
@@ -189,10 +189,10 @@ def video_comments(video_id, sort=0, offset=0, lc='', secret_key=''):
            comments_info['error'] += '\n\n' + e.error_message
            comments_info['error'] += '\n\nExit node IP address: %s' % e.ip
        else:
            comments_info['error'] = 'YouTube blocked the request. Error: %s' % str(e)
    except Exception as e:
        comments_info['error'] = 'YouTube blocked the request. Error: %s' % str(e)

    if comments_info.get('error'):
        print('Error retrieving comments for ' + str(video_id) + ':\n' +
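The `NoneType` traceback this change fixes came from feeding a missing author into `strip_non_ascii`. A standalone sketch of the guard, with `strip_non_ascii` reproduced from `util.py` as quoted in the commit message (the final join is an assumption about the rest of that function):

```python
def strip_non_ascii(string):
    # from util.py: drop every character outside printable ASCII
    stripped = (c for c in string if 0 < ord(c) < 127)
    return ''.join(stripped)

def safe_author(comment):
    # the guard added above: only strip when an author is present
    return strip_non_ascii(comment['author']) if comment.get('author') else ""
```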

View File

@@ -11,17 +11,10 @@ import subprocess
def app_version():
    def minimal_env_cmd(cmd):
        # make minimal environment
        env = {k: os.environ[k] for k in ['SYSTEMROOT', 'PATH'] if k in os.environ}
        env.update({'LANGUAGE': 'C', 'LANG': 'C', 'LC_ALL': 'C'})

        out = subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0]
        return out

    subst_list = {
@@ -31,24 +24,21 @@
    }

    if os.system("command -v git > /dev/null 2>&1") != 0:
        return subst_list

    if call(["git", "branch"], stderr=STDOUT, stdout=open(os.devnull, 'w')) != 0:
        return subst_list

    describe = minimal_env_cmd(["git", "describe", "--tags", "--always"])
    git_revision = describe.strip().decode('ascii')
    branch = minimal_env_cmd(["git", "branch"])
    git_branch = branch.strip().decode('ascii').replace('* ', '')

    subst_list.update({
        "branch": git_branch,
        "commit": git_revision
    })

    return subst_list
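The refactored `minimal_env_cmd` runs standalone; here a POSIX `echo` stands in for the git invocation (assumes a Unix-like environment with `echo` on the PATH):

```python
import os
import subprocess

def minimal_env_cmd(cmd):
    # make a minimal, C-locale environment so command output is not localized
    env = {k: os.environ[k] for k in ['SYSTEMROOT', 'PATH'] if k in os.environ}
    env.update({'LANGUAGE': 'C', 'LANG': 'C', 'LC_ALL': 'C'})
    return subprocess.Popen(cmd, stdout=subprocess.PIPE, env=env).communicate()[0]

output = minimal_env_cmd(['echo', 'hi'])
```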

View File

@@ -12,12 +12,13 @@ from flask import request
import flask

def playlist_ctoken(playlist_id, offset, include_shorts=True):
    offset = proto.uint(1, offset)
    # this is just obfuscation as far as I can tell. It doesn't even follow protobuf
    offset = b'PT:' + proto.unpadded_b64encode(offset)
    offset = proto.string(15, offset)
    if not include_shorts:
        offset += proto.string(104, proto.uint(2, 1))

    continuation_info = proto.string(3, proto.percent_b64encode(offset))
@@ -26,47 +27,46 @@ def playlist_ctoken(playlist_id, offset):
    return base64.urlsafe_b64encode(pointless_nest).decode('ascii')

def playlist_first_page(playlist_id, report_text="Retrieved playlist",
                        use_mobile=False):
    if use_mobile:
        url = 'https://m.youtube.com/playlist?list=' + playlist_id + '&pbj=1'
        content = util.fetch_url(
            url, util.mobile_xhr_headers,
            report_text=report_text, debug_name='playlist_first_page'
        )
        content = json.loads(content.decode('utf-8'))
    else:
        url = 'https://www.youtube.com/playlist?list=' + playlist_id + '&pbj=1'
        content = util.fetch_url(
            url, util.desktop_xhr_headers,
            report_text=report_text, debug_name='playlist_first_page'
        )
        content = json.loads(content.decode('utf-8'))

    return content

def get_videos(playlist_id, page, include_shorts=True, use_mobile=False,
               report_text='Retrieved playlist'):
    # mobile requests return 20 videos per page
    if use_mobile:
        page_size = 20
        headers = util.mobile_xhr_headers
    # desktop requests return 100 videos per page
    else:
        page_size = 100
        headers = util.desktop_xhr_headers

    url = "https://m.youtube.com/playlist?ctoken="
    url += playlist_ctoken(playlist_id, (int(page)-1)*page_size,
                           include_shorts=include_shorts)
    url += "&pbj=1"
    content = util.fetch_url(
        url, headers, report_text=report_text,
        debug_name='playlist_videos'
    )

    info = json.loads(content.decode('utf-8'))
    return info
@@ -85,7 +85,10 @@ def get_playlist_page():
        this_page_json = first_page_json
    else:
        tasks = (
            gevent.spawn(
                playlist_first_page, playlist_id,
                report_text="Retrieved playlist info", use_mobile=True
            ),
            gevent.spawn(get_videos, playlist_id, page)
        )
        gevent.joinall(tasks)
@@ -104,7 +107,7 @@ def get_playlist_page():
        util.prefix_urls(item)
        util.add_extra_html_info(item)
        if 'id' in item:
            item['thumbnail'] = f"{settings.img_prefix}https://i.ytimg.com/vi/{item['id']}/hqdefault.jpg"

        item['url'] += '&list=' + playlist_id
        if item['index']:
@@ -112,13 +115,13 @@ def get_playlist_page():
    video_count = yt_data_extract.deep_get(info, 'metadata', 'video_count')
    if video_count is None:
        video_count = 1000

    return flask.render_template(
        'playlist.html',
        header_playlist_names=local_playlist.get_playlist_names(),
        video_list=info.get('items', []),
        num_pages=math.ceil(video_count/100),
        parameters_dictionary=request.args,
        **info['metadata']

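The page arithmetic changed with the move to desktop requests: offsets fed to `playlist_ctoken` now step by 100 (20 on mobile), and unknown video counts fall back to 1000. A sketch of just that arithmetic:

```python
import math

def page_offset(page, use_mobile=False):
    # offset passed to playlist_ctoken: mobile pages hold 20 videos,
    # desktop pages hold 100 (as in get_videos above)
    page_size = 20 if use_mobile else 100
    return (int(page) - 1) * page_size

def num_pages(video_count, default_count=1000, page_size=100):
    # mirrors get_playlist_page: unknown counts fall back to 1000
    if video_count is None:
        video_count = default_count
    return math.ceil(video_count / page_size)
```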
View File

@@ -141,6 +141,17 @@ base64_enc_funcs = {
def _make_protobuf(data):
    '''
    Input: Recursive list of protobuf objects or base-64 encodings
    Output: Protobuf bytestring

    Each protobuf object takes the form [wire_type, field_number, field_data].
    If a string protobuf has a list/tuple of length 2, this has the form
    (base64 type, data)

    The base64 types are
    - base64 means a base64 encode with equals sign padding
    - base64s means a base64 encode without padding
    - base64p means a url base64 encode with equals signs replaced with %3D
    '''
    # must be dict mapping field_number to [wire_type, value]
    if isinstance(data, dict):
        new_data = []

View File

@@ -64,6 +64,8 @@ def get_search_page():
    query = request.args.get('search_query') or request.args.get('query')
    if query is None:
        return flask.render_template('home.html', title='Search')
    elif query.startswith('https://www.youtube.com') or query.startswith('https://www.youtu.be'):
        return flask.redirect(f'/{query}')

    page = request.args.get("page", "1")
    autocorrect = int(request.args.get("autocorrect", "1"))
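The new branch short-circuits the search flow when a full YouTube URL is pasted into the search box; the decision can be sketched in isolation as:

```python
def search_redirect(query):
    # mirror of the prefix test above: route pasted YouTube URLs to the
    # local proxy path instead of running a search
    if query.startswith('https://www.youtube.com') or \
            query.startswith('https://www.youtu.be'):
        return f'/{query}'
    return None
```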

View File

@@ -33,6 +33,8 @@ input[type="search"] {
    padding: 0.4rem 0.4rem;
    font-size: 15px;
    color: var(--search-text);
    outline: none;
    box-shadow: none;
}

input[type='search'] {
@@ -200,8 +202,10 @@ label[for=options-toggle-cbox] {
}

#options-toggle-cbox:checked ~ .dropdown-content {
    display: block;
    white-space: nowrap;
    background: var(--secondary-background);
    padding: 0.5rem 1rem;
}

/*- ----------- End Menu Mobile sin JS ------------- */
@@ -252,7 +256,8 @@ hr {
    padding-top: 6px;
    text-align: center;
    white-space: nowrap;
    border: 1px solid;
    border-color: var(--button-border);
    border-radius: 0.2rem;
}
@@ -504,15 +509,19 @@ hr {
.dropdown {
    display: grid;
    grid-gap: 1px;
    grid-template-columns: 100px auto;
    grid-template-areas:
        "dropdown-label"
        "dropdown-content";
    grid-area: dropdown;
    background: var(--background);
    padding-right: 4rem;
    position: absolute;
    z-index: 1;
}

#options-toggle-cbox:checked ~ .dropdown-content {
    width: calc(100% + 100px);
    max-height: 80vh;
    overflow-y: scroll;
}

.author-container {
@@ -528,7 +537,7 @@ hr {
    grid-area: playlist;
}

.play-clean {
    grid-template-columns: 100px auto;
}

.play-clean > button {
    padding-left: 0px;

View File

@@ -39,6 +39,8 @@ input[type="search"] {
    padding: 0.4rem 0.4rem;
    font-size: 15px;
    color: var(--search-text);
    outline: none;
    box-shadow: none;
}

input[type='search'] {
@@ -105,9 +107,7 @@ header {
        "dropdown-label"
        "dropdown-content";
    grid-area: dropdown;
    padding-right: 4rem;
}

.dropdown-label {
    grid-area: dropdown-label;
@@ -272,7 +272,7 @@ label[for=options-toggle-cbox] {
.dropdown {
    display: grid;
    grid-gap: 1px;
    grid-template-columns: minmax(50px, 120px);
    grid-template-areas:
        "dropdown-label"
        "dropdown-content";
@@ -280,6 +280,12 @@ label[for=options-toggle-cbox] {
    z-index: 1;
    position: absolute;
}

#options-toggle-cbox:checked ~ .dropdown-content {
    padding: 0rem 3rem 1rem 1rem;
    width: 100%;
    max-height: 45vh;
    overflow-y: scroll;
}

.footer {
    display: grid;

View File

@@ -1,19 +1,22 @@
:root {
    --background: #121113;
    --text: #FFFFFF;
    --secondary-hover: #222222;
    --secondary-focus: #121113;
    --secondary-inverse: #FFFFFF;
    --primary-background: #242424;
    --secondary-background: #222222;
    --thumb-background: #222222;
    --link: #00B0FF;
    --link-visited: #40C4FF;
    --border-bg: #222222;
    --border-bg-settings: #000000;
    --border-bg-license: #000000;
    --buttom: #121113;
    --buttom-text: #FFFFFF;
    --button-border: #222222;
    --buttom-hover: #222222;
    --search-text: #FFFFFF;
    --time-background: #121113;
    --time-text: #FFFFFF;
}

View File

@@ -1,18 +1,21 @@
:root {
    --background: #2D3743;
    --text: #FFFFFF;
    --secondary-hover: #73828C;
    --secondary-focus: rgba(115, 130, 140, 0.125);
    --secondary-inverse: #FFFFFF;
    --primary-background: #2D3743;
    --secondary-background: #102027;
    --thumb-background: #35404D;
    --link: #22AAFF;
    --link-visited: #7755FF;
    --border-bg: #FFFFFF;
    --border-bg-settings: #FFFFFF;
    --border-bg-license: #FFFFFF;
    --buttom: #2D3743;
    --buttom-text: #FFFFFF;
    --button-border: #102027;
    --buttom-hover: #102027;
    --search-text: #FFFFFF;
    --time-background: #212121;
    --time-text: #FFFFFF;

View File

@@ -29,6 +29,8 @@ input[type="search"] {
    padding: 0.4rem 0.4rem;
    font-size: 15px;
    color: var(--search-text);
    outline: none;
    box-shadow: none;
}

input[type='search'] {
@@ -95,7 +97,6 @@ header {
        "dropdown-label"
        "dropdown-content";
    grid-area: dropdown;
}

.dropdown-label {
    grid-area: dropdown-label;
@@ -135,8 +136,10 @@ label[for=options-toggle-cbox] {
}

#options-toggle-cbox:checked ~ .dropdown-content {
    display: block;
    white-space: nowrap;
    background: var(--secondary-background);
    padding: 0.5rem 1rem;
}

/*- ----------- End Menu Mobile sin JS ------------- */
@@ -186,15 +189,20 @@ label[for=options-toggle-cbox] {
.dropdown {
    display: grid;
    grid-gap: 1px;
    grid-template-columns: 100px auto;
    grid-template-areas:
        "dropdown-label"
        "dropdown-content";
    grid-area: dropdown;
    position: absolute;
    background: var(--background);
    padding-right: 4rem;
    z-index: 1;
}

#options-toggle-cbox:checked ~ .dropdown-content {
    width: calc(100% + 100px);
    max-height: 80vh;
    overflow-y: scroll;
}

.footer {
.footer { .footer {

View File

@@ -20,6 +20,29 @@
// TODO: Call abort to cancel in-progress appends?

// Buffer sizes for different systems
const BUFFER_CONFIG = {
    default: 50 * 10**6,       // 50 megabytes
    webOS: 20 * 10**6,         // 20 megabytes WebOS (LG)
    samsungTizen: 20 * 10**6,  // 20 megabytes Samsung Tizen OS
    androidTV: 30 * 10**6,     // 30 megabytes Android TV
    desktop: 50 * 10**6,       // 50 megabytes PC/Mac
};

function detectSystem() {
    const userAgent = navigator.userAgent.toLowerCase();
    if (/webos|lg browser/i.test(userAgent)) {
        return "webOS";
    } else if (/tizen/i.test(userAgent)) {
        return "samsungTizen";
    } else if (/android tv|smart-tv/i.test(userAgent)) {
        return "androidTV";
    } else if (/firefox|chrome|safari|edge/i.test(userAgent)) {
        return "desktop";
    } else {
        return "default";
    }
}
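The same classification can be mirrored in Python to sanity-check the mapping; the regexes below are copied from `detectSystem` above, so treat them as that snippet's assumptions about user-agent strings:

```python
import re

# buffer targets mirroring BUFFER_CONFIG (bytes)
BUFFER_CONFIG = {
    'default': 50 * 10**6,
    'webOS': 20 * 10**6,
    'samsungTizen': 20 * 10**6,
    'androidTV': 30 * 10**6,
    'desktop': 50 * 10**6,
}

def detect_system(user_agent):
    # classify a user-agent string the same way detectSystem() does,
    # checking the more specific TV platforms before generic browsers
    ua = user_agent.lower()
    if re.search(r'webos|lg browser', ua):
        return 'webOS'
    if re.search(r'tizen', ua):
        return 'samsungTizen'
    if re.search(r'android tv|smart-tv', ua):
        return 'androidTV'
    if re.search(r'firefox|chrome|safari|edge', ua):
        return 'desktop'
    return 'default'

def buffer_target(user_agent, av_ratio=1.0, audio=False):
    # audio streams get a fraction (avRatio) of the platform's target
    base = BUFFER_CONFIG.get(detect_system(user_agent), BUFFER_CONFIG['default'])
    return av_ratio * base if audio else base
```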
function AVMerge(video, srcInfo, startTime){ function AVMerge(video, srcInfo, startTime){
this.audioSource = null; this.audioSource = null;
@@ -41,7 +64,7 @@ function AVMerge(video, srcInfo, startTime){
} }
// Find supported video and audio sources // Find supported video and audio sources
for (var src of srcInfo['videos']) { for (let src of srcInfo['videos']) {
if (MediaSource.isTypeSupported(src['mime_codec'])) { if (MediaSource.isTypeSupported(src['mime_codec'])) {
reportDebug('Using video source', src['mime_codec'], reportDebug('Using video source', src['mime_codec'],
src['quality_string'], 'itag', src['itag']); src['quality_string'], 'itag', src['itag']);
@@ -49,7 +72,7 @@ function AVMerge(video, srcInfo, startTime){
break; break;
} }
} }
for (var src of srcInfo['audios']) { for (let src of srcInfo['audios']) {
if (MediaSource.isTypeSupported(src['mime_codec'])) { if (MediaSource.isTypeSupported(src['mime_codec'])) {
reportDebug('Using audio source', src['mime_codec'], reportDebug('Using audio source', src['mime_codec'],
src['quality_string'], 'itag', src['itag']); src['quality_string'], 'itag', src['itag']);
@@ -164,6 +187,8 @@ AVMerge.prototype.printDebuggingInfo = function() {
} }
function Stream(avMerge, source, startTime, avRatio) { function Stream(avMerge, source, startTime, avRatio) {
const selectedSystem = detectSystem();
let baseBufferTarget = BUFFER_CONFIG[selectedSystem] || BUFFER_CONFIG.default;
this.avMerge = avMerge; this.avMerge = avMerge;
this.video = avMerge.video; this.video = avMerge.video;
this.url = source['url']; this.url = source['url'];
@@ -173,10 +198,11 @@ function Stream(avMerge, source, startTime, avRatio) {
this.mimeCodec = source['mime_codec'] this.mimeCodec = source['mime_codec']
this.streamType = source['acodec'] ? 'audio' : 'video'; this.streamType = source['acodec'] ? 'audio' : 'video';
if (this.streamType == 'audio') { if (this.streamType == 'audio') {
this.bufferTarget = avRatio*50*10**6; this.bufferTarget = avRatio * baseBufferTarget;
} else { } else {
this.bufferTarget = 50*10**6; // 50 megabytes this.bufferTarget = baseBufferTarget;
} }
console.info(`Detected system: ${selectedSystem}. Applying bufferTarget of ${this.bufferTarget} bytes to ${this.streamType}.`);
this.initRange = source['init_range']; this.initRange = source['init_range'];
this.indexRange = source['index_range']; this.indexRange = source['index_range'];
@@ -204,29 +230,32 @@ Stream.prototype.setup = async function(){
this.url, this.url,
this.initRange.start, this.initRange.start,
this.indexRange.end, this.indexRange.end,
'Initialization+index segments',
).then(
(buffer) => { (buffer) => {
var init_end = this.initRange.end - this.initRange.start + 1; let init_end = this.initRange.end - this.initRange.start + 1;
var index_start = this.indexRange.start - this.initRange.start; let index_start = this.indexRange.start - this.initRange.start;
var index_end = this.indexRange.end - this.initRange.start + 1; let index_end = this.indexRange.end - this.initRange.start + 1;
this.setupInitSegment(buffer.slice(0, init_end)); this.setupInitSegment(buffer.slice(0, init_end));
this.setupSegmentIndex(buffer.slice(index_start, index_end)); this.setupSegmentIndex(buffer.slice(index_start, index_end));
} }
) );
} else { } else {
// initialization data // initialization data
await fetchRange( await fetchRange(
this.url, this.url,
this.initRange.start, this.initRange.start,
this.initRange.end, this.initRange.end,
this.setupInitSegment.bind(this), 'Initialization segment',
); ).then(this.setupInitSegment.bind(this));
// sidx (segment index) table // sidx (segment index) table
fetchRange( fetchRange(
this.url, this.url,
this.indexRange.start, this.indexRange.start,
this.indexRange.end, this.indexRange.end,
this.setupSegmentIndex.bind(this) 'Index segment',
); ).then(this.setupSegmentIndex.bind(this));
} }
} }
Stream.prototype.setupInitSegment = function(initSegment) { Stream.prototype.setupInitSegment = function(initSegment) {
@@ -247,7 +276,7 @@ Stream.prototype.setupSegmentIndex = async function(indexSegment){
entry.referencedSize = entry.end - entry.start + 1; entry.referencedSize = entry.end - entry.start + 1;
} }
} else { } else {
var box = unbox(indexSegment); let box = unbox(indexSegment);
this.sidx = sidx_parse(box.data, this.indexRange.end+1); this.sidx = sidx_parse(box.data, this.indexRange.end+1);
} }
this.fetchSegmentIfNeeded(this.getSegmentIdx(this.startTime)); this.fetchSegmentIfNeeded(this.getSegmentIdx(this.startTime));
@@ -289,8 +318,8 @@ Stream.prototype.appendSegment = function(segmentIdx, chunk) {
// Count how many bytes are in buffer to update buffering target, // Count how many bytes are in buffer to update buffering target,
// updating .have as well for when we need to delete segments // updating .have as well for when we need to delete segments
var bytesInBuffer = 0; let bytesInBuffer = 0;
for (var i = 0; i < this.sidx.entries.length; i++) { for (let i = 0; i < this.sidx.entries.length; i++) {
if (this.segmentInBuffer(i)) if (this.segmentInBuffer(i))
bytesInBuffer += this.sidx.entries[i].referencedSize; bytesInBuffer += this.sidx.entries[i].referencedSize;
else if (this.sidx.entries[i].have) { else if (this.sidx.entries[i].have) {
@@ -306,11 +335,11 @@ Stream.prototype.appendSegment = function(segmentIdx, chunk) {
// Delete 10 segments (arbitrary) from buffer, making sure // Delete 10 segments (arbitrary) from buffer, making sure
// not to delete current one // not to delete current one
var currentSegment = this.getSegmentIdx(this.video.currentTime); let currentSegment = this.getSegmentIdx(this.video.currentTime);
var numDeleted = 0; let numDeleted = 0;
var i = 0; let i = 0;
const DELETION_TARGET = 10; const DELETION_TARGET = 10;
var toDelete = []; // See below for why we have to schedule it let toDelete = []; // See below for why we have to schedule it
this.reportDebug('Deleting segments from beginning of buffer.'); this.reportDebug('Deleting segments from beginning of buffer.');
while (numDeleted < DELETION_TARGET && i < currentSegment) { while (numDeleted < DELETION_TARGET && i < currentSegment) {
if (this.sidx.entries[i].have) { if (this.sidx.entries[i].have) {
@@ -334,9 +363,9 @@ Stream.prototype.appendSegment = function(segmentIdx, chunk) {
// When calling .remove, the sourceBuffer will go into updating=true // When calling .remove, the sourceBuffer will go into updating=true
// state, and remove cannot be called until it is done. So we have // state, and remove cannot be called until it is done. So we have
// to delete on the updateend event for subsequent ones. // to delete on the updateend event for subsequent ones.
var removeFinishedEvent; let removeFinishedEvent;
var deletedStuff = (toDelete.length !== 0) let deletedStuff = (toDelete.length !== 0)
var deleteSegment = () => { let deleteSegment = () => {
if (toDelete.length === 0) { if (toDelete.length === 0) {
removeFinishedEvent.remove(); removeFinishedEvent.remove();
// If QuotaExceeded happened for current segment, retry the // If QuotaExceeded happened for current segment, retry the
@@ -370,19 +399,19 @@ Stream.prototype.appendSegment = function(segmentIdx, chunk) {
 }
 Stream.prototype.getSegmentIdx = function(videoTime) {
     // get an estimate
-    var currentTick = videoTime * this.sidx.timeScale;
-    var firstSegmentDuration = this.sidx.entries[0].subSegmentDuration;
-    var index = 1 + Math.floor(currentTick / firstSegmentDuration);
-    var index = clamp(index, 0, this.sidx.entries.length - 1);
-    var increment = 1;
+    let currentTick = videoTime * this.sidx.timeScale;
+    let firstSegmentDuration = this.sidx.entries[0].subSegmentDuration;
+    let index = 1 + Math.floor(currentTick / firstSegmentDuration);
+    index = clamp(index, 0, this.sidx.entries.length - 1);
+    let increment = 1;
     if (currentTick < this.sidx.entries[index].tickStart){
         increment = -1;
     }
     // go up or down to find correct index
     while (index >= 0 && index < this.sidx.entries.length) {
-        var entry = this.sidx.entries[index];
+        let entry = this.sidx.entries[index];
         if (entry.tickStart <= currentTick && (entry.tickEnd+1) > currentTick){
             return index;
         }
@@ -396,11 +425,11 @@ Stream.prototype.checkBuffer = async function() {
         return;
     }
     // Find the first unbuffered segment, i
-    var currentSegmentIdx = this.getSegmentIdx(this.video.currentTime);
-    var bufferedBytesAhead = 0;
-    var i;
+    let currentSegmentIdx = this.getSegmentIdx(this.video.currentTime);
+    let bufferedBytesAhead = 0;
+    let i;
     for (i = currentSegmentIdx; i < this.sidx.entries.length; i++) {
-        var entry = this.sidx.entries[i];
+        let entry = this.sidx.entries[i];
         // check if we had it before, but it was deleted by the browser
         if (entry.have && !this.segmentInBuffer(i)) {
             this.reportDebug('segment', i, 'deleted by browser');
@@ -428,9 +457,9 @@ Stream.prototype.checkBuffer = async function() {
     }
 }
 Stream.prototype.segmentInBuffer = function(segmentIdx) {
-    var entry = this.sidx.entries[segmentIdx];
+    let entry = this.sidx.entries[segmentIdx];
     // allow for 0.01 second error
-    var timeStart = entry.tickStart/this.sidx.timeScale + 0.01;
+    let timeStart = entry.tickStart/this.sidx.timeScale + 0.01;
     /* Some of YouTube's mp4 fragments are malformed, with half-frame
     playback gaps. In this video at 240p (timeScale = 90000 ticks/second)
@@ -457,14 +486,15 @@ Stream.prototype.segmentInBuffer = function(segmentIdx) {
     quality switching, YouTube likely encodes their formats to line up nicely.
     Either there is a bug in their encoder, or this is intentional. Allow for
     up to 1 frame-time of error to work around this issue. */
+    let endError;
     if (this.streamType == 'video')
-        var endError = 1/(this.avMerge.videoSource.fps || 30);
+        endError = 1/(this.avMerge.videoSource.fps || 30);
     else
-        var endError = 0.01
-    var timeEnd = (entry.tickEnd+1)/this.sidx.timeScale - endError;
-    var timeRanges = this.sourceBuffer.buffered;
-    for (var i=0; i < timeRanges.length; i++) {
+        endError = 0.01
+    let timeEnd = (entry.tickEnd+1)/this.sidx.timeScale - endError;
+    let timeRanges = this.sourceBuffer.buffered;
+    for (let i=0; i < timeRanges.length; i++) {
         if (timeRanges.start(i) <= timeStart && timeEnd <= timeRanges.end(i)) {
             return true;
         }
@@ -484,8 +514,8 @@ Stream.prototype.fetchSegment = function(segmentIdx) {
         this.url,
         entry.start,
         entry.end,
-        this.appendSegment.bind(this, segmentIdx),
-    );
+        String(this.streamType) + ' segment ' + String(segmentIdx),
+    ).then(this.appendSegment.bind(this, segmentIdx));
 }
 Stream.prototype.fetchSegmentIfNeeded = function(segmentIdx) {
     if (segmentIdx < 0 || segmentIdx >= this.sidx.entries.length){
@@ -505,7 +535,7 @@ Stream.prototype.fetchSegmentIfNeeded = function(segmentIdx) {
     this.fetchSegment(segmentIdx);
 }
 Stream.prototype.handleSeek = function() {
-    var segmentIdx = this.getSegmentIdx(this.video.currentTime);
+    let segmentIdx = this.getSegmentIdx(this.video.currentTime);
     this.fetchSegmentIfNeeded(segmentIdx);
 }
 Stream.prototype.reportDebug = function(...args) {
@@ -521,30 +551,67 @@ Stream.prototype.reportError = function(...args) {
 // Utility functions
-function fetchRange(url, start, end, cb) {
+// https://gomakethings.com/promise-based-xhr/
+// https://stackoverflow.com/a/30008115
+// http://lofi.limo/blog/retry-xmlhttprequest-carefully
+function fetchRange(url, start, end, debugInfo) {
     return new Promise((resolve, reject) => {
-        var xhr = new XMLHttpRequest();
+        let retryCount = 0;
+        let xhr = new XMLHttpRequest();
+        function onFailure(err, message, maxRetries=5){
+            message = debugInfo + ': ' + message + ' - Err: ' + String(err);
+            retryCount++;
+            if (retryCount > maxRetries || xhr.status == 403){
+                reportError('fetchRange error while fetching ' + message);
+                reject(message);
+                return;
+            } else {
+                reportWarning('Failed to fetch ' + message
+                    + '. Attempting retry '
+                    + String(retryCount) +'/' + String(maxRetries));
+            }
+            // Retry in 1 second, doubled for each next retry
+            setTimeout(function(){
+                xhr.open('get',url);
+                xhr.send();
+            }, 1000*Math.pow(2,(retryCount-1)));
+        }
         xhr.open('get', url);
+        xhr.timeout = 15000;
         xhr.responseType = 'arraybuffer';
         xhr.setRequestHeader('Range', 'bytes=' + start + '-' + end);
-        xhr.onload = function() {
-            //bytesFetched += end - start + 1;
-            resolve(cb(xhr.response));
+        xhr.onload = function (e) {
+            if (xhr.status >= 200 && xhr.status < 300) {
+                resolve(xhr.response);
+            } else {
+                onFailure(e,
+                    'Status '
+                    + String(xhr.status) + ' ' + String(xhr.statusText)
+                );
+            }
+        };
+        xhr.onerror = function (event) {
+            onFailure(event, 'Network error');
+        };
+        xhr.ontimeout = function (event){
+            xhr.timeout += 5000;
+            onFailure(null, 'Timeout (15s)', 5);
         };
         xhr.send();
     });
 }
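The retry path above reschedules the request with a doubling delay, `1000 * Math.pow(2, retryCount - 1)` milliseconds. A standalone sketch of that delay schedule (illustrative only, not part of the diff; the helper name `retryDelays` is hypothetical):

```javascript
// Hypothetical helper mirroring the diff's retry delay expression:
// attempt n waits 1000 * 2^(n - 1) milliseconds.
function retryDelays(maxRetries) {
    const delays = [];
    for (let retryCount = 1; retryCount <= maxRetries; retryCount++) {
        delays.push(1000 * Math.pow(2, retryCount - 1));
    }
    return delays;
}

console.log(retryDelays(5)); // [ 1000, 2000, 4000, 8000, 16000 ]
```

With the default `maxRetries = 5`, a request that keeps failing gives up after roughly 31 seconds of cumulative waiting.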
 function debounce(func, wait, immediate) {
-    var timeout;
+    let timeout;
     return function() {
-        var context = this;
-        var args = arguments;
-        var later = function() {
+        let context = this;
+        let args = arguments;
+        let later = function() {
             timeout = null;
             if (!immediate) func.apply(context, args);
         };
-        var callNow = immediate && !timeout;
+        let callNow = immediate && !timeout;
         clearTimeout(timeout);
         timeout = setTimeout(later, wait);
         if (callNow) func.apply(context, args);
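For reference, the debounce helper shown in this hunk can be exercised as below (a self-contained copy for illustration; with `immediate = true`, only the first call of a synchronous burst fires):

```javascript
function debounce(func, wait, immediate) {
    let timeout;
    return function() {
        let context = this;
        let args = arguments;
        let later = function() {
            timeout = null;
            if (!immediate) func.apply(context, args);
        };
        let callNow = immediate && !timeout;
        clearTimeout(timeout);
        timeout = setTimeout(later, wait);
        if (callNow) func.apply(context, args);
    };
}

let calls = 0;
const onBurst = debounce(() => { calls++; }, 50, true);
onBurst(); onBurst(); onBurst(); // burst of three synchronous calls
console.log(calls); // 1
```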
@@ -580,7 +647,7 @@ function reportDebug(...args){
 }
 function byteArrayToIntegerLittleEndian(unsignedByteArray){
-    var result = 0;
+    let result = 0;
     for (byte of unsignedByteArray){
         result = result*256;
         result += byte
@@ -588,7 +655,7 @@ function byteArrayToIntegerLittleEndian(unsignedByteArray){
     return result;
 }
 function byteArrayToFloat(byteArray) {
-    var view = new DataView(byteArray.buffer);
+    let view = new DataView(byteArray.buffer);
     if (byteArray.length == 4)
         return view.getFloat32(byteArray.byteOffset);
     else
@@ -599,14 +666,14 @@ function ByteParser(data){
     this.data = new Uint8Array(data);
 }
 ByteParser.prototype.readInteger = function(nBytes){
-    var result = byteArrayToIntegerLittleEndian(
+    let result = byteArrayToIntegerLittleEndian(
         this.data.slice(this.curIndex, this.curIndex + nBytes)
     );
     this.curIndex += nBytes;
     return result;
 }
 ByteParser.prototype.readBufferBytes = function(nBytes){
-    var result = this.data.slice(this.curIndex, this.curIndex + nBytes);
+    let result = this.data.slice(this.curIndex, this.curIndex + nBytes);
     this.curIndex += nBytes;
     return result;
 }
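A self-contained sketch of the cursor-based reading pattern `ByteParser` implements (the `curIndex` initialization is assumed here, as the hunk only shows the constructor's `data` assignment; the integer fold is the same `result = result*256 + byte` used above):

```javascript
// Sketch of the ByteParser pattern: each read advances an internal cursor.
function ByteParser(data) {
    this.curIndex = 0; // assumed initialization, not shown in the hunk
    this.data = new Uint8Array(data);
}
ByteParser.prototype.readInteger = function (nBytes) {
    let result = 0;
    // Same fold as byteArrayToIntegerLittleEndian above
    for (const byte of this.data.slice(this.curIndex, this.curIndex + nBytes)) {
        result = result * 256 + byte;
    }
    this.curIndex += nBytes;
    return result;
};

const bp = new ByteParser([0x00, 0x01, 0x02, 0x03]);
console.log(bp.readInteger(2)); // 1
console.log(bp.readInteger(2)); // 515 (0x0203)
```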
@@ -635,7 +702,7 @@ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 SOFTWARE.*/
 function sidx_parse (data, offset) {
-    var bp = new ByteParser(data),
+    let bp = new ByteParser(data),
         version = bp.readInteger(1),
         flags = bp.readInteger(3),
         referenceId = bp.readInteger(4),
@@ -646,9 +713,9 @@ function sidx_parse (data, offset) {
         entryCount = bp.readInteger(2),
         entries = [];
-    var totalBytesOffset = firstOffset + offset;
-    var totalTicks = 0;
-    for (var i = entryCount; i > 0; i=i-1 ) {
+    let totalBytesOffset = firstOffset + offset;
+    let totalTicks = 0;
+    for (let i = entryCount; i > 0; i=i-1 ) {
         let referencedSize = bp.readInteger(4),
             subSegmentDuration = bp.readInteger(4),
             unused = bp.readBufferBytes(4)
@@ -681,7 +748,7 @@ function sidx_parse (data, offset) {
 // BEGIN iso-bmff-parser-stream/lib/unbox.js (same license), modified
 function unbox(buf) {
-    var bp = new ByteParser(buf),
+    let bp = new ByteParser(buf),
         bufferLength = buf.length,
         length,
         typeData,
@@ -712,7 +779,7 @@ function unbox(buf) {
 function extractWebmInitializationInfo(initializationSegment) {
-    var result = {
+    let result = {
         timeScale: null,
         cuesOffset: null,
         duration: null,
@@ -740,9 +807,9 @@ function extractWebmInitializationInfo(initializationSegment) {
     return result;
 }
 function parseWebmCues(indexSegment, initInfo) {
-    var entries = [];
-    var currentEntry = {};
-    var cuesOffset = initInfo.cuesOffset;
+    let entries = [];
+    let currentEntry = {};
+    let cuesOffset = initInfo.cuesOffset;
     (new EbmlDecoder()).readTags(indexSegment, (tagType, tag) => {
         if (tag.name == 'CueTime') {
             const tickStart = byteArrayToIntegerLittleEndian(tag.data);
@@ -818,7 +885,7 @@ EbmlDecoder.prototype.readTags = function(chunk, onParsedTag) {
 }
 EbmlDecoder.prototype.getSchemaInfo = function(tag) {
     if (Number.isInteger(tag) && schema.has(tag)) {
-        var name, type;
+        let name, type;
         [name, type] = schema.get(tag);
         return {name, type};
     }

View File

@@ -1,9 +1,9 @@
 function onClickReplies(e) {
-    var details = e.target.parentElement;
+    let details = e.target.parentElement;
     // e.preventDefault();
     console.log("loading replies ..");
     doXhr(details.getAttribute("data-src") + "&slim=1", (html) => {
-        var div = details.querySelector(".comment_page");
+        let div = details.querySelector(".comment_page");
         div.innerHTML = html;
     });
     details.removeEventListener('click', onClickReplies);

View File

@@ -1,16 +1,19 @@
 const Q = document.querySelector.bind(document);
 const QA = document.querySelectorAll.bind(document);
 const QId = document.getElementById.bind(document);
+let seconds,
+    minutes,
+    hours;
 function text(msg) { return document.createTextNode(msg); }
 function clearNode(node) { while (node.firstChild) node.removeChild(node.firstChild); }
 function toTimestamp(seconds) {
-    var seconds = Math.floor(seconds);
-    var minutes = Math.floor(seconds/60);
-    var seconds = seconds % 60;
-    var hours = Math.floor(minutes/60);
-    var minutes = minutes % 60;
+    seconds = Math.floor(seconds);
+    minutes = Math.floor(seconds/60);
+    seconds = seconds % 60;
+    hours = Math.floor(minutes/60);
+    minutes = minutes % 60;
     if (hours) {
         return `0${hours}:`.slice(-3) + `0${minutes}:`.slice(-3) + `0${seconds}`.slice(-2);
@@ -18,8 +21,7 @@ function toTimestamp(seconds) {
     return `0${minutes}:`.slice(-3) + `0${seconds}`.slice(-2);
 }
-var cur_track_idx = 0;
+let cur_track_idx = 0;
 function getActiveTranscriptTrackIdx() {
     let textTracks = QId("js-video-player").textTracks;
     if (!textTracks.length) return;
@@ -39,7 +41,7 @@ function getDefaultTranscriptTrackIdx() {
 }
 function doXhr(url, callback=null) {
-    var xhr = new XMLHttpRequest();
+    let xhr = new XMLHttpRequest();
     xhr.open("GET", url);
     xhr.onload = (e) => {
         callback(e.currentTarget.response);
@@ -50,7 +52,7 @@ function doXhr(url, callback=null) {
 // https://stackoverflow.com/a/30810322
 function copyTextToClipboard(text) {
-    var textArea = document.createElement("textarea");
+    let textArea = document.createElement("textarea");
     //
     // *** This styling is an extra step which is likely not required. ***
@@ -92,19 +94,20 @@ function copyTextToClipboard(text) {
     textArea.value = text;
-    document.body.appendChild(textArea);
+    let parent_el = video.parentElement;
+    parent_el.appendChild(textArea);
     textArea.focus();
     textArea.select();
     try {
-        var successful = document.execCommand('copy');
-        var msg = successful ? 'successful' : 'unsuccessful';
+        let successful = document.execCommand('copy');
+        let msg = successful ? 'successful' : 'unsuccessful';
         console.log('Copying text command was ' + msg);
     } catch (err) {
         console.log('Oops, unable to copy');
     }
-    document.body.removeChild(textArea);
+    parent_el.removeChild(textArea);
 }
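The `toTimestamp` change earlier in this file drops the shadowed `var` redeclarations. A self-contained variant using locals only (illustrative sketch, not the diff's exact code), showing the zero-padding trick where `` `0${n}:`.slice(-3) `` keeps the last two digits plus the colon:

```javascript
// Standalone toTimestamp using local variables; behavior matches the
// padded hh:mm:ss / mm:ss formatting in the diff above.
function toTimestamp(seconds) {
    seconds = Math.floor(seconds);
    let minutes = Math.floor(seconds / 60);
    seconds = seconds % 60;
    const hours = Math.floor(minutes / 60);
    minutes = minutes % 60;
    if (hours) {
        return `0${hours}:`.slice(-3) + `0${minutes}:`.slice(-3) + `0${seconds}`.slice(-2);
    }
    return `0${minutes}:`.slice(-3) + `0${seconds}`.slice(-2);
}

console.log(toTimestamp(3909)); // "01:05:09"
console.log(toTimestamp(65));   // "01:05"
```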

View File

@@ -3,6 +3,7 @@ function onKeyDown(e) {
     // console.log(e);
     let v = QId("js-video-player");
+    if (!e.isTrusted) return; // plyr CustomEvent
     let c = e.key.toLowerCase();
     if (e.ctrlKey) return;
     else if (c == "k") {
@@ -26,8 +27,17 @@ function onKeyDown(e) {
     }
     else if (c == "f") {
         e.preventDefault();
-        if (document.fullscreenElement && document.fullscreenElement.nodeName == 'VIDEO') {document.exitFullscreen();}
-        else {v.requestFullscreen()};
+        if (data.settings.use_video_player == 2) {
+            player.fullscreen.toggle()
+        }
+        else {
+            if (document.fullscreen) {
+                document.exitFullscreen()
+            }
+            else {
+                v.requestFullscreen()
+            }
+        }
     }
     else if (c == "m") {
         if (v.muted == false) {v.muted = true;}

View File

@@ -37,7 +37,7 @@
 }
 // https://developer.mozilla.org/en-US/docs/Learn/HTML/Forms/Sending_forms_through_JavaScript
 function sendData(event){
-    var clicked_button = document.activeElement;
+    let clicked_button = document.activeElement;
     if(clicked_button === null || clicked_button.getAttribute('type') !== 'submit' || clicked_button.parentElement != event.target){
         console.log('ERROR: clicked_button not valid');
         return;
@@ -46,8 +46,8 @@
         return; // video(s) are being removed from playlist, just let it refresh the page
     }
     event.preventDefault();
-    var XHR = new XMLHttpRequest();
-    var FD = new FormData(playlistAddForm);
+    let XHR = new XMLHttpRequest();
+    let FD = new FormData(playlistAddForm);
     if(FD.getAll('video_info_list').length === 0){
         displayMessage('Error: No videos selected', true);

View File

@@ -1,77 +1,66 @@
 (function main() {
     'use strict';
-    let captionsActive;
-    switch(true) {
-        case data.settings.subtitles_mode == 2:
-            captionsActive = true;
-            break;
-        case data.settings.subtitles_mode == 1 && data.has_manual_captions:
-            captionsActive = true;
-            break;
-        default:
-            captionsActive = false;
-    }
+    // Captions
+    let captionsActive = false;
+    if (data.settings.subtitles_mode === 2 || (data.settings.subtitles_mode === 1 && data.has_manual_captions)) {
+        captionsActive = true;
+    }
+    // AutoPlay
+    let autoplayActive = data.settings.autoplay_videos || false;
     let qualityOptions = [];
     let qualityDefault;
-    for (var src of data['uni_sources']) {
-        qualityOptions.push(src.quality_string)
-    }
-    for (var src of data['pair_sources']) {
-        qualityOptions.push(src.quality_string)
-    }
-    if (data['using_pair_sources'])
-        qualityDefault = data['pair_sources'][data['pair_idx']].quality_string;
-    else if (data['uni_sources'].length != 0)
-        qualityDefault = data['uni_sources'][data['uni_idx']].quality_string;
-    else
-        qualityDefault = 'None';
+    for (let src of data.uni_sources) {
+        qualityOptions.push(src.quality_string);
+    }
+    for (let src of data.pair_sources) {
+        qualityOptions.push(src.quality_string);
+    }
+    if (data.using_pair_sources) {
+        qualityDefault = data.pair_sources[data.pair_idx].quality_string;
+    } else if (data.uni_sources.length !== 0) {
+        qualityDefault = data.uni_sources[data.uni_idx].quality_string;
+    } else {
+        qualityDefault = 'None';
+    }
     // Fix plyr refusing to work with qualities that are strings
     Object.defineProperty(Plyr.prototype, 'quality', {
-        set: function(input) {
+        set: function (input) {
             const config = this.config.quality;
             const options = this.options.quality;
-            let quality;
+            let quality = input;
+            let updateStorage = true;
             if (!options.length) {
                 return;
             }
-            // removing this line:
-            //let quality = [!is.empty(input) && Number(input), this.storage.get('quality'), config.selected, config.default].find(is.number);
-            // replacing with:
-            quality = input;
-            let updateStorage = true;
             if (!options.includes(quality)) {
-                // Plyr sets quality to null at startup, resulting in the erroneous
-                // calling of this setter function with input = null, and the
-                // commented out code below would set the quality to something
-                // unrelated at startup. Comment out and just return.
                 return;
-                /*const value = closest(options, quality);
-                this.debug.warn(`Unsupported quality option: ${quality}, using ${value} instead`);
-                quality = value;
-                // Don't update storage if quality is not supported
-                updateStorage = false;*/
             }
             // Update config
             config.selected = quality;
             // Set quality
             this.media.quality = quality;
             // Save to storage
             if (updateStorage) {
-                this.storage.set({
-                    quality
-                });
+                this.storage.set({ quality });
             }
-        }
+        },
     });
-    const player = new Plyr(document.getElementById('js-video-player'), {
+    const playerOptions = {
+        // Learning about autoplay permission https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Permissions-Policy/autoplay#syntax
+        autoplay: autoplayActive,
         disableContextMenu: false,
         captions: {
             active: captionsActive,
@@ -87,35 +76,61 @@
             'volume',
             'captions',
             'settings',
-            'fullscreen'
+            'pip',
+            'airplay',
+            'fullscreen',
         ],
-        iconUrl: "/youtube.com/static/modules/plyr/plyr.svg",
-        blankVideo: "/youtube.com/static/modules/plyr/blank.webm",
+        iconUrl: '/youtube.com/static/modules/plyr/plyr.svg',
+        blankVideo: '/youtube.com/static/modules/plyr/blank.webm',
         debug: false,
-        storage: {enabled: false},
+        storage: { enabled: false },
         quality: {
             default: qualityDefault,
             options: qualityOptions,
             forced: true,
-            onChange: function(quality) {
-                if (quality == 'None') {return;}
+            onChange: function (quality) {
+                if (quality == 'None') {
+                    return;
+                }
                 if (quality.includes('(integrated)')) {
-                    for (var i=0; i < data['uni_sources'].length; i++) {
-                        if (data['uni_sources'][i].quality_string == quality) {
-                            changeQuality({'type': 'uni', 'index': i});
+                    for (let i = 0; i < data.uni_sources.length; i++) {
+                        if (data.uni_sources[i].quality_string == quality) {
+                            changeQuality({ type: 'uni', index: i });
                             return;
                         }
                     }
                 } else {
-                    for (var i=0; i < data['pair_sources'].length; i++) {
-                        if (data['pair_sources'][i].quality_string == quality) {
-                            changeQuality({'type': 'pair', 'index': i});
+                    for (let i = 0; i < data.pair_sources.length; i++) {
+                        if (data.pair_sources[i].quality_string == quality) {
+                            changeQuality({ type: 'pair', index: i });
                             return;
                         }
                     }
                 }
             },
         },
+        previewThumbnails: {
+            enabled: storyboard_url !== null,
+            src: [storyboard_url],
+        },
         settings: ['captions', 'quality', 'speed', 'loop'],
+        tooltips: {
+            controls: true,
+        },
+    }
+    const player = new Plyr(document.getElementById('js-video-player'), playerOptions);
+    // disable double click to fullscreen
+    // https://github.com/sampotts/plyr/issues/1370#issuecomment-528966795
+    player.eventListeners.forEach(function(eventListener) {
+        if(eventListener.type === 'dblclick') {
+            eventListener.element.removeEventListener(eventListener.type, eventListener.callback, eventListener.options);
+        }
     });
-}());
+    // Add .started property, true after the playback has been started
+    // Needed so controls won't be hidden before playback has started
+    player.started = false;
+    player.once('playing', function(){this.started = true});
+})();

View File

@@ -2,7 +2,7 @@
 // from: https://git.gir.st/subscriptionfeed.git/blob/59a590d:/app/youtube/templates/watch.html.j2#l28
-var sha256=function a(b){function c(a,b){return a>>>b|a<<32-b}for(var d,e,f=Math.pow,g=f(2,32),h="length",i="",j=[],k=8*b[h],l=a.h=a.h||[],m=a.k=a.k||[],n=m[h],o={},p=2;64>n;p++)if(!o[p]){for(d=0;313>d;d+=p)o[d]=p;l[n]=f(p,.5)*g|0,m[n++]=f(p,1/3)*g|0}for(b+="\x80";b[h]%64-56;)b+="\x00";for(d=0;d<b[h];d++){if(e=b.charCodeAt(d),e>>8)return;j[d>>2]|=e<<(3-d)%4*8}for(j[j[h]]=k/g|0,j[j[h]]=k,e=0;e<j[h];){var q=j.slice(e,e+=16),r=l;for(l=l.slice(0,8),d=0;64>d;d++){var s=q[d-15],t=q[d-2],u=l[0],v=l[4],w=l[7]+(c(v,6)^c(v,11)^c(v,25))+(v&l[5]^~v&l[6])+m[d]+(q[d]=16>d?q[d]:q[d-16]+(c(s,7)^c(s,18)^s>>>3)+q[d-7]+(c(t,17)^c(t,19)^t>>>10)|0),x=(c(u,2)^c(u,13)^c(u,22))+(u&l[1]^u&l[2]^l[1]&l[2]);l=[w+x|0].concat(l),l[4]=l[4]+w|0}for(d=0;8>d;d++)l[d]=l[d]+r[d]|0}for(d=0;8>d;d++)for(e=3;e+1;e--){var y=l[d]>>8*e&255;i+=(16>y?0:"")+y.toString(16)}return i}; /*https://geraintluff.github.io/sha256/sha256.min.js (public domain)*/
+let sha256=function a(b){function c(a,b){return a>>>b|a<<32-b}for(var d,e,f=Math.pow,g=f(2,32),h="length",i="",j=[],k=8*b[h],l=a.h=a.h||[],m=a.k=a.k||[],n=m[h],o={},p=2;64>n;p++)if(!o[p]){for(d=0;313>d;d+=p)o[d]=p;l[n]=f(p,.5)*g|0,m[n++]=f(p,1/3)*g|0}for(b+="\x80";b[h]%64-56;)b+="\x00";for(d=0;d<b[h];d++){if(e=b.charCodeAt(d),e>>8)return;j[d>>2]|=e<<(3-d)%4*8}for(j[j[h]]=k/g|0,j[j[h]]=k,e=0;e<j[h];){var q=j.slice(e,e+=16),r=l;for(l=l.slice(0,8),d=0;64>d;d++){var s=q[d-15],t=q[d-2],u=l[0],v=l[4],w=l[7]+(c(v,6)^c(v,11)^c(v,25))+(v&l[5]^~v&l[6])+m[d]+(q[d]=16>d?q[d]:q[d-16]+(c(s,7)^c(s,18)^s>>>3)+q[d-7]+(c(t,17)^c(t,19)^t>>>10)|0),x=(c(u,2)^c(u,13)^c(u,22))+(u&l[1]^u&l[2]^l[1]&l[2]);l=[w+x|0].concat(l),l[4]=l[4]+w|0}for(d=0;8>d;d++)l[d]=l[d]+r[d]|0}for(d=0;8>d;d++)for(e=3;e+1;e--){var y=l[d]>>8*e&255;i+=(16>y?0:"")+y.toString(16)}return i}; /*https://geraintluff.github.io/sha256/sha256.min.js (public domain)*/
 window.addEventListener("load", load_sponsorblock);
 document.addEventListener('DOMContentLoaded', ()=>{

View File

@@ -1,12 +1,13 @@
 const video = document.getElementById('js-video-player');
 function changeQuality(selection) {
-    var currentVideoTime = video.currentTime;
-    var videoPaused = video.paused;
-    var videoSpeed = video.playbackRate;
-    var srcInfo;
-    if (avMerge)
+    let currentVideoTime = video.currentTime;
+    let videoPaused = video.paused;
+    let videoSpeed = video.playbackRate;
+    let srcInfo;
+    if (avMerge && typeof avMerge.close === 'function') {
         avMerge.close();
+    }
     if (selection.type == 'uni'){
         srcInfo = data['uni_sources'][selection.index];
         video.src = srcInfo.url;
@@ -22,9 +23,9 @@ function changeQuality(selection) {
 }
 // Initialize av-merge
-var avMerge;
+let avMerge;
 if (data.using_pair_sources) {
-    var srcPair = data['pair_sources'][data['pair_idx']];
+    let srcPair = data['pair_sources'][data['pair_idx']];
     // Do it dynamically rather than as the default in jinja
     // in case javascript is disabled
     avMerge = new AVMerge(video, srcPair, 0);
@@ -42,10 +43,10 @@ if (qs) {
 if (data.time_start != 0 && video) {video.currentTime = data.time_start};
 // External video speed control
-var speedInput = document.getElementById('speed-control');
+let speedInput = document.getElementById('speed-control');
 speedInput.addEventListener('keyup', (event) => {
     if (event.key === 'Enter') {
-        var speed = parseFloat(speedInput.value);
+        let speed = parseFloat(speedInput.value);
         if(!isNaN(speed)){
             video.playbackRate = speed;
         }
@@ -61,7 +62,7 @@ if (data.playlist && data.playlist['id'] !== null) {
 // IntersectionObserver isn't supported in pre-quantum
 // firefox versions, but the alternative of making it
 // manually is a performance drain, so oh well
-var observer = new IntersectionObserver(lazyLoad, {
+let observer = new IntersectionObserver(lazyLoad, {
     // where in relation to the edge of the viewport, we are observing
     rootMargin: "100px",
@@ -86,7 +87,7 @@ if (data.playlist && data.playlist['id'] !== null) {
 };
 // Tell our observer to observe all img elements with a "lazy" class
-var lazyImages = document.querySelectorAll('img.lazy');
+let lazyImages = document.querySelectorAll('img.lazy');
 lazyImages.forEach(img => {
     observer.observe(img);
 });

View File

@@ -29,6 +29,8 @@ input[type="search"] {
     padding: 0.4rem 0.4rem;
     font-size: 15px;
     color: var(--search-text);
+    outline: none;
+    box-shadow: none;
 }
 input[type='search'] {
@@ -95,7 +97,6 @@ header {
         "dropdown-label"
         "dropdown-content";
     grid-area: dropdown;
-    z-index: 1;
 }
 .dropdown-label {
     grid-area: dropdown-label;
@@ -135,8 +136,10 @@ label[for=options-toggle-cbox] {
 }
 #options-toggle-cbox:checked ~ .dropdown-content {
-    display: inline-grid;
+    display: block;
     white-space: nowrap;
+    background: var(--secondary-background);
+    padding: 0.5rem 1rem;
 }
 /*- ----------- End Menu Mobile sin JS ------------- */
@@ -178,7 +181,7 @@ label[for=options-toggle-cbox] {
 .table td,.table th {
     padding: 10px 10px;
-    border: 1px solid var(--secondary-background);
+    border: 1px solid var(--border-bg-license);
     text-align: center;
 }
@@ -263,7 +266,7 @@ label[for=options-toggle-cbox] {
 .dropdown {
     display: grid;
     grid-gap: 1px;
-    grid-template-columns: minmax(50px, 100px);
+    grid-template-columns: minmax(50px, 120px);
     grid-template-areas:
         "dropdown-label"
         "dropdown-content";
@@ -271,7 +274,12 @@ label[for=options-toggle-cbox] {
     z-index: 1;
     position: absolute;
 }
+#options-toggle-cbox:checked ~ .dropdown-content {
+    padding: 0rem 3rem 1rem 1rem;
+    width: 100%;
+    max-height: 45vh;
+    overflow-y: scroll;
+}
 .footer {
     display: grid;
     grid-template-columns: repeat(3, 1fr);

View File

@@ -8,10 +8,13 @@
     --secondary-background: #EEEEEE;
     --thumb-background: #F5F5F5;
     --link: #212121;
-    --link-visited: #606060;
-    --buttom: #DCDCDC;
+    --link-visited: #808080;
+    --border-bg: #212121;
+    --border-bg-settings: #91918C;
+    --border-bg-license: #91918C;
+    --buttom: #FFFFFF;
     --buttom-text: #212121;
-    --button-border: #91918c;
+    --button-border: #91918C;
     --buttom-hover: #BBBBBB;
     --search-text: #212121;
     --time-background: #212121;


@@ -33,6 +33,8 @@ input[type="search"] {
 padding: 0.4rem 0.4rem;
 font-size: 15px;
 color: var(--search-text);
+outline: none;
+box-shadow: none;
 }
 input[type='search'] {
@@ -100,7 +102,6 @@ header {
 "dropdown-label"
 "dropdown-content";
 grid-area: dropdown;
-z-index: 1;
 }
 .dropdown-label {
 grid-area: dropdown-label;
@@ -200,8 +201,10 @@ label[for=options-toggle-cbox] {
 }
 #options-toggle-cbox:checked ~ .dropdown-content {
-display: inline-grid;
+display: block;
 white-space: nowrap;
+background: var(--secondary-background);
+padding: 0.5rem 1rem;
 }
 /*- ----------- End Menu Mobile sin JS ------------- */
@@ -474,17 +477,19 @@ hr {
 .dropdown {
 display: grid;
 grid-gap: 1px;
-grid-template-columns: minmax(50px, 100px);
+grid-template-columns: 100px auto;
 grid-template-areas:
 "dropdown-label"
 "dropdown-content";
 grid-area: dropdown;
-background: var(--background);
-padding-right: 4rem;
-z-index: 1;
 position: absolute;
+z-index: 1;
 }
+#options-toggle-cbox:checked ~ .dropdown-content {
+width: calc(100% + 100px);
+max-height: 80vh;
+overflow-y: scroll;
+}
 .playlist-metadata {
 max-width: 50vw;
 }
@@ -498,7 +503,7 @@ hr {
 grid-area: playlist;
 }
 .play-clean {
-grid-template-columns: minmax(50px, 100px);
+grid-template-columns: 100px auto;
 }
 .play-clean > button {
 padding-left: 0px;


@@ -0,0 +1,77 @@
/* Prevent this div from blocking right-click menu for video
e.g. Firefox playback speed options */
.plyr__poster {
display: none;
}
/* plyr fix */
.plyr:-moz-full-screen video {
max-height: initial;
}
.plyr:-webkit-full-screen video {
max-height: initial;
}
.plyr:-ms-fullscreen video {
max-height: initial;
}
.plyr:fullscreen video {
max-height: initial;
}
.plyr__preview-thumb__image-container {
width: 158px;
height: 90px;
}
.plyr__preview-thumb {
bottom: 100%;
}
.plyr__menu__container [role="menu"],
.plyr__menu__container [role="menucaptions"] {
/* Set vertical scroll */
/* issue https://github.com/sampotts/plyr/issues/1420 */
max-height: 320px;
overflow-y: auto;
}
/*
* Custom styles similar to youtube
*/
.plyr__controls {
display: flex;
justify-content: center;
}
.plyr__progress__container {
position: absolute;
bottom: 0;
width: 100%;
margin-bottom: -10px;
}
.plyr__controls .plyr__controls__item:first-child {
margin-left: 0;
margin-right: 0;
z-index: 5;
}
.plyr__controls .plyr__controls__item.plyr__volume {
margin-left: auto;
}
.plyr__controls .plyr__controls__item.plyr__progress__container {
padding-left: 10px;
padding-right: 10px;
}
.plyr__progress input[type="range"] {
margin-bottom: 50px;
}
/*
* End custom styles
*/

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large Load Diff

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -33,6 +33,8 @@ input[type="search"] {
 padding: 0.4rem 0.4rem;
 font-size: 15px;
 color: var(--search-text);
+outline: none;
+box-shadow: none;
 }
 input[type='search'] {
@@ -100,7 +102,6 @@ header {
 "dropdown-label"
 "dropdown-content";
 grid-area: dropdown;
-z-index: 1;
 }
 .dropdown-label {
 grid-area: dropdown-label;
@@ -200,8 +201,10 @@ label[for=options-toggle-cbox] {
 }
 #options-toggle-cbox:checked ~ .dropdown-content {
-display: inline-grid;
+display: block;
 white-space: nowrap;
+background: var(--secondary-background);
+padding: 0.5rem 1rem;
 }
 /*- ----------- End Menu Mobile sin JS ------------- */
@@ -484,17 +487,19 @@ hr {
 .dropdown {
 display: grid;
 grid-gap: 1px;
-grid-template-columns: minmax(50px, 100px);
+grid-template-columns: 100px auto;
 grid-template-areas:
 "dropdown-label"
 "dropdown-content";
 grid-area: dropdown;
-background: var(--background);
-padding-right: 4rem;
-z-index: 1;
 position: absolute;
+z-index: 1;
 }
+#options-toggle-cbox:checked ~ .dropdown-content {
+width: calc(100% + 100px);
+max-height: 80vh;
+overflow-y: scroll;
+}
 .playlist-metadata {
 max-width: 50vw;
 }
@@ -508,7 +513,7 @@ hr {
 grid-area: playlist;
 }
 .play-clean {
-grid-template-columns: minmax(50px, 100px);
+grid-template-columns: 100px auto;
 }
 .play-clean > button {
 padding-left: 0px;


@@ -33,6 +33,8 @@ input[type="search"] {
 padding: 0.4rem 0.4rem;
 font-size: 15px;
 color: var(--search-text);
+outline: none;
+box-shadow: none;
 }
 input[type='search'] {
@@ -100,7 +102,6 @@ header {
 "dropdown-label"
 "dropdown-content";
 grid-area: dropdown;
-z-index: 1;
 }
 .dropdown-label {
 grid-area: dropdown-label;
@@ -200,8 +201,10 @@ label[for=options-toggle-cbox] {
 }
 #options-toggle-cbox:checked ~ .dropdown-content {
-display: inline-grid;
+display: block;
 white-space: nowrap;
+background: var(--secondary-background);
+padding: 0.5rem 1rem;
 }
 /*- ----------- End Menu Mobile sin JS ------------- */
@@ -252,7 +255,7 @@ hr {
 /* Video list item */
 .video-container {
 display: grid;
-grid-row-gap: 0.5rem;
+grid-gap: 0.5rem;
 }
 .length {
@@ -295,6 +298,12 @@ hr {
 cursor: pointer;
 }
+.item-video address {
+white-space: nowrap;
+overflow: hidden;
+text-overflow: ellipsis;
+}
 .item-video.channel-item .thumbnail.channel {
 display: flex;
 justify-content: center;
@@ -430,7 +439,7 @@ hr {
 @media (min-width: 600px) {
 .video-container {
 display: grid;
-grid-row-gap: 0.5rem;
+grid-gap: 0.5rem;
 grid-template-columns: 1fr 1fr;
 }
 }
@@ -456,17 +465,19 @@ hr {
 .dropdown {
 display: grid;
 grid-gap: 1px;
-grid-template-columns: minmax(50px, 100px);
+grid-template-columns: 100px auto;
 grid-template-areas:
 "dropdown-label"
 "dropdown-content";
 grid-area: dropdown;
-background: var(--background);
-padding-right: 4rem;
-z-index: 1;
 position: absolute;
+z-index: 1;
 }
+#options-toggle-cbox:checked ~ .dropdown-content {
+width: calc(100% + 100px);
+max-height: 80vh;
+overflow-y: scroll;
+}
 /* playlist */
 .playlist {
 display: grid;
@@ -476,7 +487,7 @@ hr {
 grid-area: playlist;
 }
 .play-clean {
-grid-template-columns: minmax(50px, 100px);
+grid-template-columns: 100px auto;
 }
 .play-clean > button {
 padding-left: 0px;
@@ -494,7 +505,7 @@ hr {
 .video-container {
 display: grid;
 grid-template-columns: repeat(4, 1fr);
-grid-row-gap: 1rem;
+grid-gap: 1rem;
 grid-column-gap: 1rem;
 }


@@ -29,6 +29,8 @@ input[type="search"] {
 padding: 0.4rem 0.4rem;
 font-size: 15px;
 color: var(--search-text);
+outline: none;
+box-shadow: none;
 }
 input[type='search'] {
@@ -95,7 +97,6 @@ header {
 "dropdown-label"
 "dropdown-content";
 grid-area: dropdown;
-z-index: 1;
 }
 .dropdown-label {
 grid-area: dropdown-label;
@@ -135,8 +136,10 @@ label[for=options-toggle-cbox] {
 }
 #options-toggle-cbox:checked ~ .dropdown-content {
-display: inline-grid;
+display: block;
 white-space: nowrap;
+background: var(--secondary-background);
+padding: 0.5rem 1rem;
 }
 /*- ----------- End Menu Mobile sin JS ------------- */
@@ -151,6 +154,11 @@ label[for=options-toggle-cbox] {
 padding: 1rem;
 }
+.settings-form > h2 {
+border-bottom: 2px solid var(--border-bg-settings);
+padding-bottom: 0.5rem;
+}
 .settings-list {
 display: grid;
 grid-row-gap: 1rem;
@@ -161,7 +169,6 @@ label[for=options-toggle-cbox] {
 .setting-item {
 display: grid;
 grid-template-columns: auto auto;
-background-color: var(--secondary-focus);
 align-items: center;
 padding: 5px;
 }
@@ -215,15 +222,18 @@ label[for=options-toggle-cbox] {
 .dropdown {
 display: grid;
 grid-gap: 1px;
-grid-template-columns: minmax(50px, 100px);
+grid-template-columns: 100px auto;
 grid-template-areas:
 "dropdown-label"
 "dropdown-content";
 grid-area: dropdown;
-z-index: 1;
 position: absolute;
-background: var(--background);
-padding-right: 4rem;
+z-index: 1;
 }
+#options-toggle-cbox:checked ~ .dropdown-content {
+width: calc(100% + 100px);
+max-height: 80vh;
+overflow-y: scroll;
+}
 .main {
 display: grid;


@@ -33,6 +33,8 @@ input[type="search"] {
 padding: 0.4rem 0.4rem;
 font-size: 15px;
 color: var(--search-text);
+outline: none;
+box-shadow: none;
 }
 input[type='search'] {
@@ -100,7 +102,6 @@ header {
 "dropdown-label"
 "dropdown-content";
 grid-area: dropdown;
-z-index: 1;
 }
 .dropdown-label {
 grid-area: dropdown-label;
@@ -200,8 +201,10 @@ label[for=options-toggle-cbox] {
 }
 #options-toggle-cbox:checked ~ .dropdown-content {
-display: inline-grid;
+display: block;
 white-space: nowrap;
+background: var(--secondary-background);
+padding: 0.5rem 1rem;
 }
 /*- ----------- End Menu Mobile sin JS ------------- */
@@ -477,17 +480,20 @@ hr {
 .dropdown {
 display: grid;
 grid-gap: 1px;
-grid-template-columns: minmax(50px, 100px);
+grid-template-columns: 100px auto;
 grid-template-areas:
 "dropdown-label"
 "dropdown-content";
 grid-area: dropdown;
-background: var(--background);
 padding-right: 4rem;
 z-index: 1;
 position: absolute;
 }
+#options-toggle-cbox:checked ~ .dropdown-content {
+width: calc(100% + 100px);
+max-height: 80vh;
+overflow-y: scroll;
+}
 .sidebar-links {
 max-width: 50vw;
 }
@@ -501,7 +507,7 @@ hr {
 grid-area: playlist;
 }
 .play-clean {
-grid-template-columns: minmax(50px, 100px);
+grid-template-columns: 100px auto;
 }
 .play-clean > button {
 padding-left: 0px;


@@ -33,6 +33,8 @@ input[type="search"] {
 padding: 0.4rem 0.4rem;
 font-size: 15px;
 color: var(--search-text);
+outline: none;
+box-shadow: none;
 }
 input[type='search'] {
@@ -100,7 +102,6 @@ header {
 "dropdown-label"
 "dropdown-content";
 grid-area: dropdown;
-z-index: 1;
 }
 .dropdown-label {
 grid-area: dropdown-label;
@@ -120,66 +121,6 @@ header {
 background-color: var(--buttom-hover);
 }
-/* playlist */
-.playlist {
-display: grid;
-grid-gap: 4px;
-grid-template-areas:
-"play-box"
-"play-hidden"
-"play-add"
-"play-clean";
-grid-area: playlist;
-}
-.play-box {
-grid-area: play-box;
-}
-.play-hidden {
-grid-area: play-hidden;
-}
-.play-add {
-grid-area: play-add;
-cursor: pointer;
-padding-bottom: 6px;
-padding-left: .75em;
-padding-right: .75em;
-padding-top: 6px;
-text-align: center;
-white-space: nowrap;
-background-color: var(--buttom);
-border: 1px solid var(--button-border);
-color: var(--buttom-text);
-border-radius: 5px;
-}
-.play-add:hover {
-background-color: var(--buttom-hover);
-}
-.play-clean {
-display: grid;
-grid-area: play-clean;
-}
-.play-clean > button {
-padding-bottom: 6px;
-padding-left: .75em;
-padding-right: .75em;
-padding-top: 6px;
-text-align: center;
-white-space: nowrap;
-background-color: var(--buttom);
-border: 1px solid var(--button-border);
-color: var(--buttom-text);
-border-radius: 5px;
-}
-.play-clean > button:hover {
-background-color: var(--buttom-hover);
-}
-/* /playlist */
 /* ------------- Menu Mobile sin JS ---------------- */
 /* input hidden */
 .opt-box {
@@ -200,8 +141,10 @@ label[for=options-toggle-cbox] {
 }
 #options-toggle-cbox:checked ~ .dropdown-content {
-display: inline-grid;
+display: block;
 white-space: nowrap;
+background: var(--secondary-background);
+padding: 0.5rem 1rem;
 }
 /*- ----------- End Menu Mobile sin JS ------------- */
@@ -385,17 +328,19 @@ hr {
 .dropdown {
 display: grid;
 grid-gap: 1px;
-grid-template-columns: minmax(50px, 100px);
+grid-template-columns: 100px auto;
 grid-template-areas:
 "dropdown-label"
 "dropdown-content";
 grid-area: dropdown;
-background: var(--background);
-padding-right: 4rem;
-z-index: 1;
 position: absolute;
+z-index: 1;
 }
+#options-toggle-cbox:checked ~ .dropdown-content {
+width: calc(100% + 100px);
+max-height: 80vh;
+overflow-y: scroll;
+}
 .import-export {
 max-width: 50vw;
 }
@@ -408,37 +353,6 @@ hr {
 align-items: center;
 }
-/* playlist */
-.playlist {
-display: grid;
-grid-gap: 1px;
-grid-template-columns: 1fr 1.4fr 0.3fr 1.3fr;
-grid-template-areas: ". play-box play-add play-clean";
-grid-area: playlist;
-}
-.play-clean {
-grid-template-columns: minmax(50px, 100px);
-}
-.play-clean > button {
-padding-left: 0px;
-padding-right: 0px;
-padding-bottom: 6px;
-padding-top: 6px;
-text-align: center;
-white-space: nowrap;
-background-color: var(--buttom);
-color: var(--buttom-text);
-border-radius: 5px;
-cursor: pointer;
-}
-.video-container {
-display: grid;
-grid-template-columns: repeat(4, 1fr);
-grid-row-gap: 1rem;
-grid-column-gap: 1rem;
-}
 .footer {
 display: grid;
 grid-template-columns: repeat(3, 1fr);


@@ -0,0 +1,269 @@
body {
display: grid;
grid-gap: 20px;
grid-template-areas:
"header"
"main"
"footer";
/* Fix height */
height: 100vh;
grid-template-rows: auto 1fr auto;
/* fix top and bottom */
margin-left: 1rem;
margin-right: 1rem;
}
img {
width: 100%;
height: auto;
}
a:link {
color: var(--link);
}
a:visited {
color: var(--link-visited);
}
input[type="text"],
input[type="search"] {
background: var(--background);
border: 1px solid var(--button-border);
padding: 0.4rem 0.4rem;
font-size: 15px;
color: var(--search-text);
outline: none;
box-shadow: none;
}
input[type='search'] {
border-bottom: 1px solid var(--button-border);
border-top: 0px;
border-left: 0px;
border-right: 0px;
border-radius: 0px;
}
header {
display: grid;
grid-gap: 1px;
grid-template-areas:
"home"
"form"
"playlist";
grid-area: header;
}
.home {
grid-area: home;
margin-left: auto;
margin-right: auto;
margin-bottom: 1rem;
margin-top: 1rem;
}
.form {
display: grid;
grid-gap: 4px;
grid-template-areas:
"search-box"
"search-button"
"dropdown";
grid-area: form;
}
.search-box {
grid-area: search-box;
}
.search-button {
grid-area: search-button;
cursor: pointer;
padding-bottom: 6px;
padding-left: .75em;
padding-right: .75em;
padding-top: 6px;
text-align: center;
white-space: nowrap;
background-color: var(--buttom);
border: 1px solid var(--button-border);
color: var(--buttom-text);
border-radius: 5px;
}
.search-button:hover {
background-color: var(--buttom-hover);
}
.dropdown {
display: grid;
grid-gap: 1px;
grid-template-areas:
"dropdown-label"
"dropdown-content";
grid-area: dropdown;
}
.dropdown-label {
grid-area: dropdown-label;
padding-bottom: 6px;
padding-left: .75em;
padding-right: .75em;
padding-top: 6px;
text-align: center;
white-space: nowrap;
background-color: var(--buttom);
border: 1px solid var(--button-border);
color: var(--buttom-text);
border-radius: 5px;
}
.dropdown-label:hover {
background-color: var(--buttom-hover);
}
/* ------------- Menu Mobile sin JS ---------------- */
/* input hidden */
.opt-box {
display: none;
}
.dropdown-content {
display: none;
grid-area: dropdown-content;
}
label[for=options-toggle-cbox] {
cursor: pointer;
-webkit-touch-callout: none;
-webkit-user-select: none;
-khtml-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
#options-toggle-cbox:checked ~ .dropdown-content {
display: block;
white-space: nowrap;
background: var(--secondary-background);
padding: 0.5rem 1rem;
}
/*- ----------- End Menu Mobile sin JS ------------- */
.main {
grid-area: main;
display: flex;
flex-direction: column;
align-items: center;
}
/* fix hr when is children of grid */
hr {
width: 100%;
}
.list-channel {
padding: 0;
}
.list-channel > li {
list-style: none;
}
/* pagination */
.main .pagination-container {
display: grid;
justify-content: center;
}
.main .pagination-container .pagination-list {
display: flex;
flex-direction: row;
flex-wrap: wrap;
justify-content: center;
}
.main .pagination-container .pagination-list .page-link {
border-style: none;
font-weight: bold;
text-align: center;
background: var(--secondary-focus);
text-decoration: none;
align-self: center;
padding: .5rem;
width: 1rem;
}
.main .pagination-container .pagination-list .page-link.is-current {
background: var(--secondary-background);
}
.footer {
grid-area: footer;
display: grid;
grid-template-columns: auto;
align-items: center;
justify-content: center;
margin: auto;
text-align: center;
}
.footer > p {
text-align: center;
}
@media (min-width: 480px) {
.item-video {
font-size: 0.85rem;
}
.info-box {
grid-gap: 2px;
}
.title {
font-size: 1rem;
}
}
@media (min-width: 992px) {
body {
display: grid;
grid-template-columns: 0.3fr 2fr 1fr 0.3fr;
grid-template-rows: auto 1fr auto;
grid-template-areas:
"header header header header"
"main main main main"
"footer footer footer footer";
}
.form {
display: grid;
grid-gap: 1px;
grid-template-columns: 1fr 1.4fr 0.3fr 1.3fr;
grid-template-areas: ". search-box search-button dropdown";
grid-area: form;
position: relative;
}
.dropdown {
display: grid;
grid-gap: 1px;
grid-template-columns: 100px auto;
grid-template-areas:
"dropdown-label"
"dropdown-content";
grid-area: dropdown;
z-index: 1;
position: absolute;
}
#options-toggle-cbox:checked ~ .dropdown-content {
width: calc(100% + 100px);
max-height: 80vh;
overflow-y: scroll;
}
.footer {
display: grid;
grid-template-columns: repeat(3, 1fr);
grid-column-gap: 2rem;
align-items: center;
justify-content: center;
text-align: center;
margin-top: 1rem;
margin-bottom: 1rem;
}
}


@@ -21,21 +21,7 @@ img {
 video {
 width: 100%;
 height: auto;
-max-height: 480px;
-}
-/* plyr fix */
-.plyr:-moz-full-screen video {
-max-height: initial;
-}
-.plyr:-webkit-full-screen video {
-max-height: initial;
-}
-.plyr:-ms-fullscreen video {
-max-height: initial;
-}
-.plyr:fullscreen video {
-max-height: initial;
+max-height: calc(100vh/1.5);
 }
 a:link {
@@ -54,6 +40,8 @@ input[type="search"] {
 padding: 0.4rem 0.4rem;
 font-size: 15px;
 color: var(--search-text);
+outline: none;
+box-shadow: none;
 }
 input[type='search'] {
@@ -121,7 +109,6 @@ header {
 "dropdown-label"
 "dropdown-content";
 grid-area: dropdown;
-z-index: 1;
 }
 .dropdown-label {
 grid-area: dropdown-label;
@@ -141,6 +128,29 @@ header {
 background-color: var(--buttom-hover);
 }
+.live-url-choices {
+background-color: var(--thumb-background);
+margin: 1rem 0;
+padding: 1rem;
+}
+.playability-error {
+position: relative;
+box-sizing: border-box;
+height: 30vh;
+margin: 1rem 0;
+}
+.playability-error > span {
+display: flex;
+background-color: var(--thumb-background);
+height: 100%;
+object-fit: cover;
+justify-content: center;
+align-items: center;
+text-align: center;
+}
 .playlist {
 display: grid;
 grid-gap: 4px;
@@ -219,9 +229,10 @@ label[for=options-toggle-cbox] {
 }
 #options-toggle-cbox:checked ~ .dropdown-content {
-display: inline-grid;
+display: block;
 white-space: nowrap;
-padding-left: 1rem;
+background: var(--secondary-background);
+padding: 0.5rem 1rem;
 }
 /*- ----------- End Menu Mobile sin JS ------------- */
@@ -236,6 +247,9 @@ label[for=options-toggle-cbox] {
 "sc-video"
 "sc-info";
 }
+figure.sc-video {
+margin: 1rem 0px;
+}
 .sc-video { grid-area: sc-video; }
 .sc-info {
 display: grid;
@@ -618,17 +632,21 @@ label[for=options-toggle-cbox] {
 .dropdown {
 display: grid;
 grid-gap: 1px;
-grid-template-columns: minmax(50px, 120px);
+grid-template-columns: 100px auto;
 grid-template-areas:
 "dropdown-label"
 "dropdown-content";
 grid-area: dropdown;
-background: var(--background);
-padding-right: 4rem;
-z-index: 1;
 position: absolute;
+z-index: 1;
 }
+#options-toggle-cbox:checked ~ .dropdown-content {
+width: calc(100% + 100px);
+max-height: 80vh;
+overflow-y: scroll;
+}
+.playability-error {
+height: 60vh;
+}
 .playlist {
 display: grid;
@@ -638,7 +656,7 @@ label[for=options-toggle-cbox] {
 grid-area: playlist;
 }
 .play-clean {
-grid-template-columns: minmax(50px, 120px);
+grid-template-columns: 100px auto;
 }
 .play-clean > button {
 padding-bottom: 6px;


@@ -1,4 +1,4 @@
-from youtube import util, yt_data_extract, channel, local_playlist
+from youtube import util, yt_data_extract, channel, local_playlist, playlist
 from youtube import yt_app
 import settings
@@ -108,8 +108,7 @@ def _subscribe(channels):
     with connection as cursor:
         channel_ids_to_check = [channel[0] for channel in channels if not _is_subscribed(cursor, channel[0])]
-        rows = ((channel_id, channel_name, 0, 0) for channel_id,
-                channel_name in channels)
+        rows = ((channel_id, channel_name, 0, 0) for channel_id, channel_name in channels)
         cursor.executemany('''INSERT OR IGNORE INTO subscribed_channels (yt_channel_id, channel_name, time_last_checked, next_check_time)
                               VALUES (?, ?, ?, ?)''', rows)
@@ -236,8 +235,7 @@ def _get_channel_names(cursor, channel_ids):
     return result
-def _channels_with_tag(cursor, tag, order=False, exclude_muted=False,
-                       include_muted_status=False):
+def _channels_with_tag(cursor, tag, order=False, exclude_muted=False, include_muted_status=False):
     ''' returns list of (channel_id, channel_name) '''
     statement = '''SELECT yt_channel_id, channel_name'''
@@ -434,8 +432,10 @@ def autocheck_setting_changed(old_value, new_value):
         stop_autocheck_system()
-settings.add_setting_changed_hook('autocheck_subscriptions',
-                                  autocheck_setting_changed)
+settings.add_setting_changed_hook(
+    'autocheck_subscriptions',
+    autocheck_setting_changed
+)
 if settings.autocheck_subscriptions:
     start_autocheck_system()
 # ----------------------------
@@ -463,7 +463,24 @@ def _get_atoma_feed(channel_id):
 def _get_channel_videos_first_page(channel_id, channel_status_name):
     try:
-        return channel.get_channel_first_page(channel_id=channel_id)
+        # First try the playlist method
+        pl_json = playlist.get_videos(
+            'UU' + channel_id[2:],
+            1,
+            include_shorts=settings.include_shorts_in_subscriptions,
+            report_text=None
+        )
+        pl_info = yt_data_extract.extract_playlist_info(pl_json)
+        if pl_info.get('items'):
+            pl_info['items'] = pl_info['items'][0:30]
+            return pl_info
+        # Try the channel api method
+        channel_json = channel.get_channel_first_page(channel_id=channel_id)
+        channel_info = yt_data_extract.extract_channel_info(
+            json.loads(channel_json), 'videos'
+        )
+        return channel_info
     except util.FetchError as e:
         if e.code == '429' and settings.route_tor:
             error_message = ('Error checking channel ' + channel_status_name
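The hunk above replaces a single channel-page fetch with a two-step strategy: try the channel's uploads playlist first, and only fall back to the channel API when the playlist comes back empty. A minimal sketch of that fallback pattern, where `fetch_playlist` and `fetch_channel` are hypothetical callables standing in for the real network calls (`playlist.get_videos` and `channel.get_channel_first_page`):

```python
def first_page(channel_id, fetch_playlist, fetch_channel):
    """Try the uploads playlist first; fall back to the channel API."""
    # A channel id 'UCxxxx' maps to an uploads playlist id 'UUxxxx'
    pl_info = fetch_playlist('UU' + channel_id[2:])
    if pl_info.get('items'):
        # Cap at 30 items, matching the page size used in the diff
        pl_info['items'] = pl_info['items'][0:30]
        return pl_info
    # Playlist came back empty: fall back to the channel videos tab
    return fetch_channel(channel_id)

# Usage with stub fetchers
empty = lambda pl_id: {'items': []}
full = lambda pl_id: {'items': [{'id': n} for n in range(40)]}
fallback = lambda ch_id: {'items': [{'id': 'from-channel'}]}

print(len(first_page('UCabc', full, fallback)['items']))        # 30
print(first_page('UCabc', empty, fallback)['items'][0]['id'])   # from-channel
```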
@@ -497,7 +514,7 @@ def _get_upstream_videos(channel_id):
     )
     gevent.joinall(tasks)
-    channel_tab, feed = tasks[0].value, tasks[1].value
+    channel_info, feed = tasks[0].value, tasks[1].value
     # extract published times from atoma feed
     times_published = {}
@@ -535,9 +552,8 @@ def _get_upstream_videos(channel_id):
     except defusedxml.ElementTree.ParseError:
         print('Failed to read atoma feed for ' + channel_status_name)
-    if channel_tab is None: # there was an error
+    if channel_info is None: # there was an error
         return
-    channel_info = yt_data_extract.extract_channel_info(json.loads(channel_tab), 'videos')
     if channel_info['error']:
         print('Error checking channel ' + channel_status_name + ': ' + channel_info['error'])
         return
@@ -552,14 +568,38 @@ def _get_upstream_videos(channel_id):
         if video_item['id'] in times_published:
             video_item['time_published'] = times_published[video_item['id']]
             video_item['is_time_published_exact'] = True
-        else:
+        elif video_item.get('time_published'):
             video_item['is_time_published_exact'] = False
             try:
                 video_item['time_published'] = youtube_timestamp_to_posix(video_item['time_published']) - i # subtract a few seconds off the videos so they will be in the right order
-            except KeyError:
+            except Exception:
                 print(video_item)
+        else:
+            video_item['is_time_published_exact'] = False
+            video_item['time_published'] = None
         video_item['channel_id'] = channel_id
+    if len(videos) > 1:
+        # Go back and fill in any videos that don't have a time published
+        # using the time published of the surrounding ones
+        for i in range(len(videos)-1):
+            if (videos[i+1]['time_published'] is None
+                    and videos[i]['time_published'] is not None
+            ):
+                videos[i+1]['time_published'] = videos[i]['time_published'] - 1
+        for i in reversed(range(1, len(videos))):
+            if (videos[i-1]['time_published'] is None
+                    and videos[i]['time_published'] is not None
+            ):
+                videos[i-1]['time_published'] = videos[i]['time_published'] + 1
+    # Special case: none of the videos have a time published.
+    # In this case, make something up
+    if videos and videos[0]['time_published'] is None:
+        assert all(v['time_published'] is None for v in videos)
+        now = time.time()
+        for i in range(len(videos)):
+            # 1 month between videos
+            videos[i]['time_published'] = now - i*3600*24*30
     if len(videos) == 0:
         average_upload_period = 4*7*24*3600 # assume 1 month for channel with no videos
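The backfill added in this hunk propagates known timestamps to missing neighbors in both directions, then falls back to an invented one-month-apart schedule when nothing is known at all. A standalone sketch of that logic (list of dicts with a `time_published` key, mirroring the diff):

```python
import time

def backfill_times(videos, now=None):
    """Fill in missing 'time_published' values from neighboring videos.

    Mirrors the diff: forward pass, backward pass, then a made-up schedule
    (one month apart) if no video has a known time at all.
    """
    # Forward pass: a missing time inherits (predecessor - 1 second)
    for i in range(len(videos) - 1):
        if videos[i+1]['time_published'] is None and videos[i]['time_published'] is not None:
            videos[i+1]['time_published'] = videos[i]['time_published'] - 1
    # Backward pass: a missing time inherits (successor + 1 second)
    for i in reversed(range(1, len(videos))):
        if videos[i-1]['time_published'] is None and videos[i]['time_published'] is not None:
            videos[i-1]['time_published'] = videos[i]['time_published'] + 1
    # Nothing known: space the videos one month apart, newest first
    if videos and videos[0]['time_published'] is None:
        now = time.time() if now is None else now
        for i in range(len(videos)):
            videos[i]['time_published'] = now - i*3600*24*30
    return videos

vids = [{'time_published': None}, {'time_published': 1000}, {'time_published': None}]
print(backfill_times(vids))
# [{'time_published': 1001}, {'time_published': 1000}, {'time_published': 999}]
```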
@@ -578,26 +618,31 @@ def _get_upstream_videos(channel_id):
     with open_database() as connection:
         with connection as cursor:
-            # calculate how many new videos there are
-            existing_vids = set(row[0] for row in cursor.execute(
-                '''SELECT video_id
+            # Get video ids and duration of existing vids so we
+            # can see how many new ones there are and update
+            # livestreams/premiers
+            existing_vids = list(cursor.execute(
+                '''SELECT video_id, duration
                    FROM videos
                    INNER JOIN subscribed_channels
                        ON videos.sql_channel_id = subscribed_channels.id
                    WHERE yt_channel_id=?
                    ORDER BY time_published DESC
                    LIMIT 30''', [channel_id]).fetchall())
+            existing_vid_ids = set(row[0] for row in existing_vids)
+            existing_durs = dict(existing_vids)
             # new videos the channel has uploaded since last time we checked
             number_of_new_videos = 0
             for video in videos:
-                if video['id'] in existing_vids:
+                if video['id'] in existing_vid_ids:
                     break
                 number_of_new_videos += 1
             is_first_check = cursor.execute('''SELECT time_last_checked FROM subscribed_channels WHERE yt_channel_id=?''', [channel_id]).fetchone()[0] in (None, 0)
             time_videos_retrieved = int(time.time())
             rows = []
+            update_rows = []
             for i, video_item in enumerate(videos):
                 if (is_first_check
                         or number_of_new_videos > 6
@@ -613,16 +658,34 @@ def _get_upstream_videos(channel_id):
                     time_noticed = video_item['time_published']
                 else:
                     time_noticed = time_videos_retrieved
-                rows.append((
-                    video_item['channel_id'],
-                    video_item['id'],
-                    video_item['title'],
-                    video_item['duration'],
-                    video_item['time_published'],
-                    video_item['is_time_published_exact'],
-                    time_noticed,
-                    video_item['description'],
-                ))
+                # videos which need durations updated
+                non_durations = ('upcoming', 'none', 'live', '')
+                v_id = video_item['id']
+                if (existing_durs.get(v_id) is not None
+                        and existing_durs[v_id].lower() in non_durations
+                        and video_item['duration'] not in non_durations
+                ):
+                    update_rows.append((
+                        video_item['title'],
+                        video_item['duration'],
+                        video_item['time_published'],
+                        video_item['is_time_published_exact'],
+                        video_item['description'],
+                        video_item['id'],
+                    ))
+                # all other videos
+                else:
+                    rows.append((
+                        video_item['channel_id'],
+                        video_item['id'],
+                        video_item['title'],
+                        video_item['duration'],
+                        video_item['time_published'],
+                        video_item['is_time_published_exact'],
+                        time_noticed,
+                        video_item['description'],
+                    ))
             cursor.executemany('''INSERT OR IGNORE INTO videos (
                 sql_channel_id,
@@ -635,6 +698,13 @@ def _get_upstream_videos(channel_id):
                 description
             )
             VALUES ((SELECT id FROM subscribed_channels WHERE yt_channel_id=?), ?, ?, ?, ?, ?, ?, ?)''', rows)
+            cursor.executemany('''UPDATE videos SET
+                title=?,
+                duration=?,
+                time_published=?,
+                is_time_published_exact=?,
+                description=?
+                WHERE video_id=?''', update_rows)
             cursor.execute('''UPDATE subscribed_channels
                 SET time_last_checked = ?, next_check_time = ?
                 WHERE yt_channel_id=?''', [int(time.time()), next_check_time, channel_id])
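The insert/update split above can be sketched against a toy schema (a single made-up table with only `video_id` and `duration`; the real code joins `subscribed_channels` and carries more fields):

```python
import sqlite3

# Minimal sketch of the insert-or-update split, using an in-memory database.
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute('CREATE TABLE videos (video_id TEXT PRIMARY KEY, duration TEXT)')
cur.execute("INSERT INTO videos VALUES ('abc', 'Upcoming')")  # stored premiere

fetched = [
    {'id': 'abc', 'duration': '10:31'},  # premiere that now has a real duration
    {'id': 'xyz', 'duration': '2:05'},   # brand-new upload
]

# Placeholder "durations" that should be overwritten once a real one appears
non_durations = ('upcoming', 'none', 'live', '')
existing_durs = dict(cur.execute('SELECT video_id, duration FROM videos'))

rows, update_rows = [], []
for video in fetched:
    old_dur = existing_durs.get(video['id'])
    if (old_dur is not None
            and old_dur.lower() in non_durations
            and video['duration'] not in non_durations):
        # previously a livestream/premiere placeholder: update in place
        update_rows.append((video['duration'], video['id']))
    else:
        # everything else: insert, ignoring ids we already have
        rows.append((video['id'], video['duration']))

cur.executemany('INSERT OR IGNORE INTO videos VALUES (?, ?)', rows)
cur.executemany('UPDATE videos SET duration=? WHERE video_id=?', update_rows)
print(dict(cur.execute('SELECT video_id, duration FROM videos')))
# {'abc': '10:31', 'xyz': '2:05'}
```

The point of the split is that `INSERT OR IGNORE` alone would silently keep the stale 'Upcoming' duration once a premiere goes live; the separate `UPDATE` batch fixes exactly those rows.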
@@ -752,7 +822,7 @@ def import_subscriptions():
         except (AssertionError, IndexError, defusedxml.ElementTree.ParseError) as e:
             return '400 Bad Request: Unable to read opml xml file, or the file is not the expected format', 400
-    elif mime_type == 'text/csv':
+    elif mime_type in ('text/csv', 'application/vnd.ms-excel'):
         content = file.read().decode('utf-8')
         reader = csv.reader(content.splitlines())
         channels = []
@@ -767,7 +837,7 @@ def import_subscriptions():
         error = 'Unsupported file format: ' + mime_type
         error += (' . Only subscription.json, subscriptions.csv files'
                   ' (from Google Takeouts)'
-                  ' and XML OPML files exported from Youtube\'s'
+                  ' and XML OPML files exported from YouTube\'s'
                   ' subscription manager page are supported')
         return (flask.render_template('error.html', error_message=error),
                 400)
@@ -962,7 +1032,8 @@ def get_subscriptions_page():
             'muted': muted,
         })
-    return flask.render_template('subscriptions.html',
+    return flask.render_template(
+        'subscriptions.html',
         header_playlist_names=local_playlist.get_playlist_names(),
         videos=videos,
         num_pages=math.ceil(number_of_videos_in_db/60),
@@ -1018,7 +1089,7 @@ def serve_subscription_thumbnail(thumbnail):
         f.close()
         return flask.Response(image, mimetype='image/jpeg')
-    url = "https://i.ytimg.com/vi/" + video_id + "/mqdefault.jpg"
+    url = f"https://i.ytimg.com/vi/{video_id}/hqdefault.jpg"
     try:
         image = util.fetch_url(url, report_text="Saved thumbnail: " + video_id)
     except urllib.error.HTTPError as e:
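The widened MIME check accepts `application/vnd.ms-excel`, which some browsers report for Takeout's `subscriptions.csv`. A rough sketch of parsing that file with the stdlib `csv` module (the column layout — header row, channel id in the first column, title in the third — is an assumption about the Takeout export format):

```python
import csv

# Inline stand-in for an uploaded subscriptions.csv (hypothetical content)
content = (
    'Channel Id,Channel Url,Channel Title\n'
    'UC123,http://www.youtube.com/channel/UC123,Some Channel\n'
    'UC456,http://www.youtube.com/channel/UC456,"Name, with comma"\n'
)

channels = []
for i, row in enumerate(csv.reader(content.splitlines())):
    if i == 0 or not row:  # skip the header row and blank lines
        continue
    channels.append((row[0], row[2]))

print(channels)
# [('UC123', 'Some Channel'), ('UC456', 'Name, with comma')]
```

Using `csv.reader` rather than `str.split(',')` matters because channel titles may contain quoted commas, as in the second row.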


@@ -1,14 +1,19 @@
+{% if settings.app_public %}
+{% set app_url = settings.app_url|string %}
+{% else %}
+{% set app_url = settings.app_url|string + ':' + settings.port_number|string %}
+{% endif %}
 <!DOCTYPE html>
 <html lang="en">
 <head>
-    <meta charset="UTF-8"/>
-    <meta name="viewport" content="width=device-width, initial-scale=1"/>
-    <meta http-equiv="Content-Security-Policy" content="default-src 'self' 'unsafe-inline' 'unsafe-eval'; media-src 'self' blob: data: https://*.googlevideo.com; {{ "img-src 'self' https://*.googleusercontent.com https://*.ggpht.com https://*.ytimg.com;" if not settings.proxy_images else "" }}">
+    <meta charset="UTF-8">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <meta http-equiv="Content-Security-Policy" content="default-src 'self' 'unsafe-inline' 'unsafe-eval'; media-src 'self' blob: {{ app_url }}/* data: https://*.googlevideo.com; {{ "img-src 'self' https://*.googleusercontent.com https://*.ggpht.com https://*.ytimg.com;" if not settings.proxy_images else "" }}">
     <title>{{ page_title }}</title>
-    <link title="YT Local" href="/youtube.com/opensearch.xml" rel="search" type="application/opensearchdescription+xml"/>
-    <link href="/youtube.com/static/favicon.ico" type="image/x-icon" rel="icon"/>
-    <link href="/youtube.com/static/normalize.css" rel="stylesheet"/>
-    <link href="{{ theme_path }}" rel="stylesheet"/>
+    <link title="YT Local" href="/youtube.com/opensearch.xml" rel="search" type="application/opensearchdescription+xml">
+    <link href="/youtube.com/static/favicon.ico" type="image/x-icon" rel="icon">
+    <link href="/youtube.com/static/normalize.css" rel="stylesheet">
+    <link href="{{ theme_path }}" rel="stylesheet">
     <link href="/youtube.com/shared.css" rel="stylesheet">
     {% block style %}
     {{ style }}
@@ -30,7 +35,7 @@
     </nav>
     <form class="form" id="site-search" action="/youtube.com/results">
         <input type="search" name="search_query" class="search-box" value="{{ search_box_value }}"
-               {{ "autofocus" if request.path == "/" else "" }} required placeholder="Type to search...">
+               {{ "autofocus" if (request.path in ("/", "/results") or error_message) else "" }} required placeholder="Type to search...">
         <button type="submit" value="Search" class="search-button">Search</button>
         <!-- options -->
         <div class="dropdown">
@@ -128,7 +133,7 @@
 {% if header_playlist_names is defined %}
 <form class="playlist" id="playlist-edit" action="/youtube.com/edit_playlist" method="post" target="_self">
-    <input class="play-box" name="playlist_name" id="playlist-name-selection" list="playlist-options" type="search" placeholder="I added your playlist...">
+    <input class="play-box" name="playlist_name" id="playlist-name-selection" list="playlist-options" type="search" placeholder="Add name of your playlist...">
     <datalist class="play-hidden" id="playlist-options">
         {% for playlist_name in header_playlist_names %}
         <option value="{{ playlist_name }}">{{ playlist_name }}</option>
@@ -136,7 +141,7 @@
     </datalist>
     <button class="play-add" type="submit" id="playlist-add-button" name="action" value="add">+List</button>
     <div class="play-clean">
-        <button type="reset" id="item-selection-reset">Clear selection</button>
+        <button type="reset" id="item-selection-reset">Clear</button>
     </div>
 </form>
 <script src="/youtube.com/static/js/playlistadd.js"></script>


@@ -1,21 +1,21 @@
 {% if current_tab == 'search' %}
     {% set page_title = search_box_value + ' - Page ' + page_number|string %}
 {% else %}
-    {% set page_title = channel_name + ' - Channel' %}
+    {% set page_title = channel_name|string + ' - Channel' %}
 {% endif %}
 {% extends "base.html" %}
 {% import "common_elements.html" as common_elements %}
 {% block style %}
-    <link href="/youtube.com/static/message_box.css" rel="stylesheet"/>
-    <link href="/youtube.com/static/channel.css" rel="stylesheet"/>
+    <link href="/youtube.com/static/message_box.css" rel="stylesheet">
+    <link href="/youtube.com/static/channel.css" rel="stylesheet">
 {% endblock style %}
 {% block main %}
     <div class="author-container">
         <div class="author">
-            <img alt="{{ channel_name }}" src="{{ avatar }}"/>
+            <img alt="{{ channel_name }}" src="{{ avatar }}">
             <h2>{{ channel_name }}</h2>
         </div>
         <div class="summary">
@@ -33,7 +33,7 @@
     <hr/>
     <nav class="channel-tabs">
-        {% for tab_name in ('Videos', 'Playlists', 'About') %}
+        {% for tab_name in ('Videos', 'Shorts', 'Streams', 'Playlists', 'About') %}
             {% if tab_name.lower() == current_tab %}
                 <a class="tab page-button">{{ tab_name }}</a>
             {% else %}
@@ -51,8 +51,11 @@
     <ul>
         {% for (before_text, stat, after_text) in [
             ('Joined ', date_joined, ''),
-            ('', view_count|commatize, ' views'),
+            ('', approx_view_count, ' views'),
             ('', approx_subscriber_count, ' subscribers'),
+            ('', approx_video_count, ' videos'),
+            ('Country: ', country, ''),
+            ('Canonical Url: ', canonical_url, ''),
         ] %}
             {% if stat %}
                 <li>{{ before_text + stat|string + after_text }}</li>
@@ -65,7 +68,11 @@
     <hr>
     <ul>
         {% for text, url in links %}
-            <li><a href="{{ url }}">{{ text }}</a></li>
+            {% if url %}
+                <li><a href="{{ url }}">{{ text }}</a></li>
+            {% else %}
+                <li>{{ text }}</li>
+            {% endif %}
         {% endfor %}
     </ul>
 </div>
@@ -73,8 +80,8 @@
 <!-- new-->
 <div id="links-metadata">
-    {% if current_tab == 'videos' %}
-        {% set sorts = [('1', 'views'), ('2', 'oldest'), ('3', 'newest')] %}
+    {% if current_tab in ('videos', 'shorts', 'streams') %}
+        {% set sorts = [('1', 'views'), ('2', 'oldest'), ('3', 'newest'), ('4', 'newest - no shorts'),] %}
         <div id="number-of-results">{{ number_of_videos }} videos</div>
     {% elif current_tab == 'playlists' %}
         {% set sorts = [('2', 'oldest'), ('3', 'newest'), ('4', 'last video added')] %}
@@ -110,13 +117,9 @@
     <hr/>
     <footer class="pagination-container">
-        {% if current_tab == 'videos' and current_sort.__str__() == '2' %}
-            <nav class="next-previous-button-row">
-                {{ common_elements.next_previous_ctoken_buttons(None, ctoken, channel_url + '/' + current_tab, parameters_dictionary) }}
-            </nav>
-        {% elif current_tab == 'videos' %}
+        {% if current_tab in ('videos', 'shorts', 'streams') %}
            <nav class="pagination-list">
-                {{ common_elements.page_buttons(number_of_pages, channel_url + '/' + current_tab, parameters_dictionary, include_ends=(current_sort.__str__() == '3')) }}
+                {{ common_elements.page_buttons(number_of_pages, channel_url + '/' + current_tab, parameters_dictionary, include_ends=(current_sort.__str__() in '34')) }}
            </nav>
        {% elif current_tab == 'playlists' or current_tab == 'search' %}
            <nav class="next-previous-button-row">


@@ -38,10 +38,21 @@
     <h4 class="title"><a href="{{ info['url'] }}" title="{{ info['title'] }}">{{ info['title'] }}</a></h4>
     {% if include_author %}
+        {% set author_description = info['author'] %}
+        {% set AUTHOR_DESC_LENGTH = 35 %}
+        {% if author_description != None %}
+            {% if author_description|length >= AUTHOR_DESC_LENGTH %}
+                {% set author_description = author_description[:AUTHOR_DESC_LENGTH].split(' ')[:-1]|join(' ') %}
+                {% if not author_description[-1] in ['.', '?', ':', '!'] %}
+                    {% set author_more = author_description + '…' %}
+                    {% set author_description = author_more|replace('"','') %}
+                {% endif %}
+            {% endif %}
+        {% endif %}
         {% if info.get('author_url') %}
-            <address title="{{ info['author'] }}"><b><a href="{{ info['author_url'] }}">{{ info['author'] }}</a></b></address>
+            <address title="{{ info['author'] }}"><b><a href="{{ info['author_url'] }}">{{ author_description }}</a></b></address>
         {% else %}
-            <address title="{{ info['author'] }}"><b>{{ info['author'] }}</b></address>
+            <address title="{{ info['author'] }}"><b>{{ author_description }}</b></address>
         {% endif %}
     {% endif %}
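Restated as a plain-Python sketch of the new Jinja logic: author names at or over 35 characters are cut at the limit, trimmed back to the last complete word, and given an ellipsis unless they already end in punctuation (`shorten_author` is a hypothetical name; the template does this inline):

```python
AUTHOR_DESC_LENGTH = 35

def shorten_author(name):
    # None and short names pass through unchanged
    if name is None or len(name) < AUTHOR_DESC_LENGTH:
        return name
    # cut at the limit, then drop the trailing partial word
    shortened = ' '.join(name[:AUTHOR_DESC_LENGTH].split(' ')[:-1])
    # append an ellipsis unless it already ends in sentence punctuation,
    # and strip double quotes (mirrors the template's |replace('"',''))
    if shortened and shortened[-1] not in ('.', '?', ':', '!'):
        shortened = (shortened + '…').replace('"', '')
    return shortened

print(shorten_author('Short Name'))  # unchanged
print(shorten_author('A Very Long Channel Name That Keeps Going On'))
```

Trimming back to the last full word avoids ending the display name mid-word, at the cost of cutting slightly under the 35-character budget.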


@@ -1,14 +1,14 @@
 <!DOCTYPE html>
-<html lang="es">
+<html lang="en">
 <head>
-    <meta charset="UTF-8"/>
-    <meta name="viewport" content="width=device-width, initial-scale=1"/>
-    <meta http-equiv="Content-Security-Policy" content="default-src 'self' 'unsafe-inline'; media-src 'self' https://*.googlevideo.com; {{ "img-src 'self' https://*.googleusercontent.com https://*.ggpht.com https://*.ytimg.com;" if not settings.proxy_images else "" }}"/>
+    <meta charset="UTF-8">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <meta http-equiv="Content-Security-Policy" content="default-src 'self' 'unsafe-inline'; media-src 'self' https://*.googlevideo.com; {{ "img-src 'self' https://*.googleusercontent.com https://*.ggpht.com https://*.ytimg.com;" if not settings.proxy_images else "" }}">
     <title>{{ title }}</title>
-    <link href="/youtube.com/static/favicon.ico" type="image/x-icon" rel="icon"/>
+    <link href="/youtube.com/static/favicon.ico" type="image/x-icon" rel="icon">
     {% if settings.use_video_player == 2 %}
     <!-- plyr -->
-    <link href="/youtube.com/static/modules/plyr/plyr.css" rel="stylesheet"/>
+    <link href="/youtube.com/static/modules/plyr/plyr.css" rel="stylesheet">
     <!--/ plyr -->
     {% endif %}
     <style>
@@ -55,10 +55,15 @@
     // @license-end
     </script>
     {% endif %}
+    <script>
+    // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-v3-or-Later
+    let storyboard_url = {{ storyboard_url | tojson }};
+    // @license-end
+    </script>
     {% if settings.use_video_player == 2 %}
     <!-- plyr -->
     <script src="/youtube.com/static/modules/plyr/plyr.min.js"
-            integrity="sha512-0JWbXvmMLCb9fsWBlcStfEdREgVEpfT0lSgJ5JemQXZJUE5W33gnLmUqxyww7xT8ESgA+YtAtBbn8O3tgYnSQg=="
+            integrity="sha512-l6ZzdXpfMHRfifqaR79wbYCEWjLDMI9DnROvb+oLkKq6d7MGroGpMbI7HFpicvmAH/2aQO+vJhewq8rhysrImw=="
             crossorigin="anonymous"></script>
     <script src="/youtube.com/static/js/plyr-start.js"></script>
     <!-- /plyr -->


@@ -1,4 +1,8 @@
-{% set page_title = 'Error' %}
+{% if error_code %}
+    {% set page_title = 'Error: ' ~ error_code %}
+{% else %}
+    {% set page_title = 'Error' %}
+{% endif %}
 {% if not slim %}
 {% extends "base.html" %}


@@ -1,7 +1,7 @@
 {% set page_title = title %}
 {% extends "base.html" %}
 {% block style %}
-    <link href="/youtube.com/static/home.css" rel="stylesheet"/>
+    <link href="/youtube.com/static/home.css" rel="stylesheet">
 {% endblock style %}
 {% block main %}
     <ul>


@@ -1,7 +1,7 @@
 {% set page_title = title %}
 {% extends "base.html" %}
 {% block style %}
-    <link href="/youtube.com/static/license.css" rel="stylesheet"/>
+    <link href="/youtube.com/static/license.css" rel="stylesheet">
 {% endblock style %}
 {% block main %}
     <table id="jslicense-labels1" class="table">


@@ -2,8 +2,8 @@
 {% extends "base.html" %}
 {% import "common_elements.html" as common_elements %}
 {% block style %}
-    <link href="/youtube.com/static/message_box.css" rel="stylesheet"/>
-    <link href="/youtube.com/static/local_playlist.css" rel="stylesheet"/>
+    <link href="/youtube.com/static/message_box.css" rel="stylesheet">
+    <link href="/youtube.com/static/local_playlist.css" rel="stylesheet">
 {% endblock style %}
 {% block main %}


@@ -2,15 +2,15 @@
 {% extends "base.html" %}
 {% import "common_elements.html" as common_elements %}
 {% block style %}
-    <link href="/youtube.com/static/message_box.css" rel="stylesheet"/>
-    <link href="/youtube.com/static/playlist.css" rel="stylesheet"/>
+    <link href="/youtube.com/static/message_box.css" rel="stylesheet">
+    <link href="/youtube.com/static/playlist.css" rel="stylesheet">
 {% endblock style %}
 {% block main %}
     <div class="playlist-metadata">
         <div class="author">
-            <img alt="{{ title }}" src="{{ thumbnail }}"/>
+            <img alt="{{ title }}" src="{{ thumbnail }}">
             <h2>{{ title }}</h2>
         </div>
         <div class="summary">


@@ -3,8 +3,8 @@
 {% extends "base.html" %}
 {% import "common_elements.html" as common_elements %}
 {% block style %}
-    <link href="/youtube.com/static/message_box.css" rel="stylesheet"/>
-    <link href="/youtube.com/static/search.css" rel="stylesheet"/>
+    <link href="/youtube.com/static/message_box.css" rel="stylesheet">
+    <link href="/youtube.com/static/search.css" rel="stylesheet">
 {% endblock style %}
 {% block main %}


@@ -1,7 +1,7 @@
 {% set page_title = 'Settings' %}
 {% extends "base.html" %}
 {% block style %}
-    <link href="/youtube.com/static/settings.css" rel="stylesheet"/>
+    <link href="/youtube.com/static/settings.css" rel="stylesheet">
 {% endblock style %}
 {% block main %}
@@ -13,9 +13,9 @@
     {% if not setting_info.get('hidden', false) %}
         <li class="setting-item">
             {% if 'label' is in(setting_info) %}
-                <label for="{{ 'setting_' + setting_name }}">{{ setting_info['label'] }}</label>
+                <label for="{{ 'setting_' + setting_name }}" {% if 'comment' is in(setting_info) %}title="{{ setting_info['comment'] }}" {% endif %}>{{ setting_info['label'] }}</label>
             {% else %}
-                <label for="{{ 'setting_' + setting_name }}">{{ setting_name.replace('_', ' ')|capitalize }}</label>
+                <label for="{{ 'setting_' + setting_name }}" {% if 'comment' is in(setting_info) %}title="{{ setting_info['comment'] }}" {% endif %}>{{ setting_name.replace('_', ' ')|capitalize }}</label>
             {% endif %}
             {% if setting_info['type'].__name__ == 'bool' %}


@@ -1,7 +1,7 @@
 {% set page_title = 'Subscription Manager' %}
 {% extends "base.html" %}
 {% block style %}
-    <link href="/youtube.com/static/subscription_manager.css" rel="stylesheet"/>
+    <link href="/youtube.com/static/subscription_manager.css" rel="stylesheet">
 {% endblock style %}


@@ -7,8 +7,8 @@
 {% import "common_elements.html" as common_elements %}
 {% block style %}
-    <link href="/youtube.com/static/message_box.css" rel="stylesheet"/>
-    <link href="/youtube.com/static/subscription.css" rel="stylesheet"/>
+    <link href="/youtube.com/static/message_box.css" rel="stylesheet">
+    <link href="/youtube.com/static/subscription.css" rel="stylesheet">
 {% endblock style %}
 {% block main %}


@@ -1,17 +1,19 @@
 {% set page_title = 'Unsubscribe?' %}
 {% extends "base.html" %}
+{% block style %}
+    <link href="/youtube.com/static/unsubscribe.css" rel="stylesheet"/>
+{% endblock style %}
 {% block main %}
-    <span>Are you sure you want to unsubscribe from these channels?</span>
+    <p>Are you sure you want to unsubscribe from these channels?</p>
     <form class="subscriptions-import-form" action="/youtube.com/subscription_manager" method="POST">
         {% for channel_id, channel_name in unsubscribe_list %}
             <input type="hidden" name="channel_ids" value="{{ channel_id }}">
         {% endfor %}
         <input type="hidden" name="action" value="unsubscribe">
         <input type="submit" value="Yes, unsubscribe">
     </form>
-    <ul>
+    <ul class="list-channel">
         {% for channel_id, channel_name in unsubscribe_list %}
             <li><a href="{{ '/https://www.youtube.com/channel/' + channel_id }}" title="{{ channel_name }}">{{ channel_name }}</a></li>
         {% endfor %}


@@ -3,19 +3,13 @@
 {% import "common_elements.html" as common_elements %}
 {% import "comments.html" as comments with context %}
 {% block style %}
-    <link href="/youtube.com/static/message_box.css" rel="stylesheet"/>
-    <link href="/youtube.com/static/watch.css" rel="stylesheet"/>
+    <link href="/youtube.com/static/message_box.css" rel="stylesheet">
+    <link href="/youtube.com/static/watch.css" rel="stylesheet">
     {% if settings.use_video_player == 2 %}
     <!-- plyr -->
-    <link href="/youtube.com/static/modules/plyr/plyr.css" rel="stylesheet"/>
+    <link href="/youtube.com/static/modules/plyr/plyr.css" rel="stylesheet">
+    <link href="/youtube.com/static/modules/plyr/custom_plyr.css" rel="stylesheet">
     <!--/ plyr -->
-    <style>
-        /* Prevent this div from blocking right-click menu for video
-           e.g. Firefox playback speed options */
-        .plyr__poster {
-            display: none !important;
-        }
-    </style>
     {% endif %}
 {% endblock style %}
@@ -40,7 +34,7 @@
     </div>
     {% else %}
     <figure class="sc-video">
-        <video id="js-video-player" playsinline controls>
+        <video id="js-video-player" playsinline controls {{ 'autoplay' if settings.autoplay_videos }}>
             {% if uni_sources %}
             <source src="{{ uni_sources[uni_idx]['url'] }}" type="{{ uni_sources[uni_idx]['type'] }}" data-res="{{ uni_sources[uni_idx]['quality'] }}">
             {% endif %}
@@ -78,7 +72,7 @@
     <address class="v-uploaded">Uploaded by <a href="{{ uploader_channel_url }}">{{ uploader }}</a></address>
     <span class="v-views">{{ view_count }} views</span>
     <time class="v-published" datetime="{{ time_published_utc }}">Published on {{ time_published }}</time>
-    <span class="v-likes-dislikes">{{ like_count }} likes {{ dislike_count }} dislikes</span>
+    <span class="v-likes-dislikes">{{ like_count }} likes</span>
     <div class="external-player-controls">
         <input class="speed" id="speed-control" type="text" title="Video speed">
@@ -97,6 +91,7 @@
     <span class="v-direct-link"><a href="https://youtu.be/{{ video_id }}" rel="noopener noreferrer" target="_blank">Direct Link</a></span>
+    {% if settings.use_video_download != 0 %}
     <details class="v-download">
         <summary class="download-dropdown-label">Download</summary>
         <ul class="download-dropdown-content">
@@ -116,6 +111,9 @@
         {% endfor %}
         </ul>
     </details>
+    {% else %}
+    <span class="v-download"></span>
+    {% endif %}
     <span class="v-description">{{ common_elements.text_runs(description)|escape|urlize|timestamps|safe }}</span>
     <div class="v-music-list">
@@ -131,7 +129,11 @@
         {% for track in music_list %}
             <tr>
                 {% for attribute in music_attributes %}
-                    <td>{{ track.get(attribute.lower(), '') }}</td>
+                    {% if attribute.lower() == 'title' and track['url'] is not none %}
+                        <td><a href="{{ track['url'] }}">{{ track.get(attribute.lower(), '') }}</a></td>
+                    {% else %}
+                        <td>{{ track.get(attribute.lower(), '') }}</td>
+                    {% endif %}
                 {% endfor %}
             </tr>
         {% endfor %}
@@ -163,7 +165,7 @@
     <div class="playlist-header">
         <a href="{{ playlist['url'] }}" title="{{ playlist['title'] }}"><h3>{{ playlist['title'] }}</h3></a>
         <ul class="playlist-metadata">
-            <li><label for="playlist-autoplay-toggle">Autoplay: </label><input type="checkbox" class="autoplay-toggle"></li>
+            <li><label for="playlist-autoplay-toggle">Autoplay: </label><input id="playlist-autoplay-toggle" type="checkbox" class="autoplay-toggle"></li>
             {% if playlist['current_index'] is none %}
                 <li>[Error!]/{{ playlist['video_count'] }}</li>
             {% else %}
@@ -186,7 +188,7 @@
     </nav>
 </div>
 {% elif settings.related_videos_mode != 0 %}
-    <div class="related-autoplay"><label for="related-autoplay-toggle">Autoplay: </label><input type="checkbox" class="autoplay-toggle"></div>
+    <div class="related-autoplay"><label for="related-autoplay-toggle">Autoplay: </label><input id="related-autoplay-toggle" type="checkbox" class="autoplay-toggle"></div>
 {% endif %}
 {% if subtitle_sources %}
@@ -225,7 +227,7 @@
     <div class="comments-area-outer comments-disabled">Comments disabled</div>
 {% else %}
     <details class="comments-area-outer" {{'open' if settings.comments_mode == 1 else ''}}>
-        <summary>{{ comment_count|commatize }} comment{{'s' if comment_count != 1 else ''}}</summary>
+        <summary>{{ comment_count|commatize }} comment{{'s' if comment_count != '1' else ''}}</summary>
         <div class="comments-area-inner comments-area">
             {% if comments_info %}
                 {{ comments.video_comments(comments_info) }}
@@ -239,12 +241,17 @@
 <script src="/youtube.com/static/js/av-merge.js"></script>
 <script src="/youtube.com/static/js/watch.js"></script>
+<script>
+    // @license magnet:?xt=urn:btih:0b31508aeb0634b347b8270c7bee4d411b5d4109&dn=agpl-3.0.txt AGPL-v3-or-Later
+    let storyboard_url = {{ storyboard_url | tojson }};
+    // @license-end
+</script>
 <script src="/youtube.com/static/js/common.js"></script>
 <script src="/youtube.com/static/js/transcript-table.js"></script>
 {% if settings.use_video_player == 2 %}
 <!-- plyr -->
 <script src="/youtube.com/static/modules/plyr/plyr.min.js"
-        integrity="sha512-0JWbXvmMLCb9fsWBlcStfEdREgVEpfT0lSgJ5JemQXZJUE5W33gnLmUqxyww7xT8ESgA+YtAtBbn8O3tgYnSQg=="
+        integrity="sha512-l6ZzdXpfMHRfifqaR79wbYCEWjLDMI9DnROvb+oLkKq6d7MGroGpMbI7HFpicvmAH/2aQO+vJhewq8rhysrImw=="
         crossorigin="anonymous"></script>
 <script src="/youtube.com/static/js/plyr-start.js"></script>
 <!-- /plyr -->


@@ -268,14 +268,15 @@ def fetch_url_response(url, headers=(), timeout=15, data=None,
# According to the documentation for urlopen, a redirect counts as a # According to the documentation for urlopen, a redirect counts as a
# retry. So there are 3 redirects max by default. # retry. So there are 3 redirects max by default.
if max_redirects: if max_redirects:
retries = urllib3.Retry(3+max_redirects, redirect=max_redirects) retries = urllib3.Retry(3+max_redirects, redirect=max_redirects, raise_on_redirect=False)
else: else:
retries = urllib3.Retry(3) retries = urllib3.Retry(3, raise_on_redirect=False)
pool = get_pool(use_tor and settings.route_tor) pool = get_pool(use_tor and settings.route_tor)
try: try:
response = pool.request(method, url, headers=headers, body=data, response = pool.request(method, url, headers=headers, body=data,
timeout=timeout, preload_content=False, timeout=timeout, preload_content=False,
decode_content=False, retries=retries) decode_content=False, retries=retries)
response.retries = retries
except urllib3.exceptions.MaxRetryError as e: except urllib3.exceptions.MaxRetryError as e:
exception_cause = e.__context__.__context__ exception_cause = e.__context__.__context__
if (isinstance(exception_cause, socks.ProxyConnectionError) if (isinstance(exception_cause, socks.ProxyConnectionError)
@@ -317,10 +318,11 @@ def fetch_url(url, headers=(), timeout=15, report_text=None, data=None,
             cleanup_func(response)  # release_connection for urllib3
         content = decode_content(
             content,
-            response.getheader('Content-Encoding', default='identity'))
+            response.headers.get('Content-Encoding', default='identity'))

     if (settings.debugging_save_responses
-            and debug_name is not None and content):
+            and debug_name is not None
+            and content):
         save_dir = os.path.join(settings.data_dir, 'debug')
         if not os.path.exists(save_dir):
             os.makedirs(save_dir)
@@ -328,11 +330,22 @@ def fetch_url(url, headers=(), timeout=15, report_text=None, data=None,
         with open(os.path.join(save_dir, debug_name), 'wb') as f:
             f.write(content)

-    if response.status == 429:
+    if response.status == 429 or (
+        response.status == 302 and (response.getheader('Location') == url
+            or response.getheader('Location').startswith(
+                'https://www.google.com/sorry/index'
+            )
+        )
+    ):
+        print(response.status, response.reason, response.headers)
         ip = re.search(
             br'IP address: ((?:[\da-f]*:)+[\da-f]+|(?:\d+\.)+\d+)',
             content)
         ip = ip.group(1).decode('ascii') if ip else None
+        if not ip:
+            ip = re.search(r'IP=((?:\d+\.)+\d+)',
+                           response.getheader('Set-Cookie') or '')
+            ip = ip.group(1) if ip else None

         # don't get new identity if we're not using Tor
         if not use_tor:
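The widened condition above treats a 302 that redirects back to the same URL, or to Google's /sorry/ page, as a block in addition to a plain 429. A minimal standalone sketch of that check (function name is hypothetical, logic mirrors the diff):

```python
def is_blocked(status, location, url):
    # Mirrors the diff's condition: 429, or a 302 that redirects back
    # to itself or to Google's /sorry/ page, means YouTube blocked us.
    return status == 429 or (
        status == 302 and (location == url
            or location.startswith('https://www.google.com/sorry/index')))

assert is_blocked(429, None, 'https://example.com') is True
assert is_blocked(302, 'https://www.google.com/sorry/index?c=1', 'u') is True
assert is_blocked(302, 'https://example.com/next', 'u') is False
```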
@@ -382,7 +395,6 @@ def head(url, use_tor=False, report_text=None, max_redirects=10):
               round(time.monotonic() - start_time, 3))
     return response

mobile_user_agent = 'Mozilla/5.0 (Linux; Android 7.0; Redmi Note 4 Build/NRD90M) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Mobile Safari/537.36'
mobile_ua = (('User-Agent', mobile_user_agent),)
desktop_user_agent = 'Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0'
@@ -392,13 +404,13 @@ desktop_xhr_headers = (
     ('Accept', '*/*'),
     ('Accept-Language', 'en-US,en;q=0.5'),
     ('X-YouTube-Client-Name', '1'),
-    ('X-YouTube-Client-Version', '2.20180830'),
+    ('X-YouTube-Client-Version', '2.20240304.00.00'),
) + desktop_ua
mobile_xhr_headers = (
     ('Accept', '*/*'),
     ('Accept-Language', 'en-US,en;q=0.5'),
     ('X-YouTube-Client-Name', '2'),
-    ('X-YouTube-Client-Version', '2.20180830'),
+    ('X-YouTube-Client-Version', '2.20240304.08.00'),
) + mobile_ua
@@ -450,7 +462,7 @@ class RateLimitedQueue(gevent.queue.Queue):

def download_thumbnail(save_directory, video_id):
-    url = "https://i.ytimg.com/vi/" + video_id + "/mqdefault.jpg"
+    url = f"https://i.ytimg.com/vi/{video_id}/hqdefault.jpg"
    save_location = os.path.join(save_directory, video_id + ".jpg")
    try:
        thumbnail = fetch_url(url, report_text="Saved thumbnail: " + video_id)
@@ -492,7 +504,7 @@ def video_id(url):
# default, sddefault, mqdefault, hqdefault, hq720
def get_thumbnail_url(video_id):
-    return settings.img_prefix + "https://i.ytimg.com/vi/" + video_id + "/mqdefault.jpg"
+    return f"{settings.img_prefix}https://i.ytimg.com/vi/{video_id}/hqdefault.jpg"

def seconds_to_timestamp(seconds):
@@ -653,8 +665,183 @@ def to_valid_filename(name):
    return name
# https://github.com/yt-dlp/yt-dlp/blob/master/yt_dlp/extractor/youtube.py#L72
INNERTUBE_CLIENTS = {
'android': {
'INNERTUBE_API_KEY': 'AIzaSyA8eiZmM1FaDVjRy-df2KTyQ_vz_yYM39w',
'INNERTUBE_CONTEXT': {
'client': {
'hl': 'en',
'gl': 'US',
'clientName': 'ANDROID',
'clientVersion': '19.09.36',
'osName': 'Android',
'osVersion': '12',
'androidSdkVersion': 31,
'platform': 'MOBILE',
'userAgent': 'com.google.android.youtube/19.09.36 (Linux; U; Android 12; US) gzip'
},
# https://github.com/yt-dlp/yt-dlp/pull/575#issuecomment-887739287
#'thirdParty': {
# 'embedUrl': 'https://google.com', # Can be any valid URL
#}
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 3,
'REQUIRE_JS_PLAYER': False,
},
'android-test-suite': {
'INNERTUBE_API_KEY': 'AIzaSyA8eiZmM1FaDVjRy-df2KTyQ_vz_yYM39w',
'INNERTUBE_CONTEXT': {
'client': {
'hl': 'en',
'gl': 'US',
'clientName': 'ANDROID_TESTSUITE',
'clientVersion': '1.9',
'osName': 'Android',
'osVersion': '12',
'androidSdkVersion': 31,
'platform': 'MOBILE',
'userAgent': 'com.google.android.youtube/1.9 (Linux; U; Android 12; US) gzip'
},
# https://github.com/yt-dlp/yt-dlp/pull/575#issuecomment-887739287
#'thirdParty': {
# 'embedUrl': 'https://google.com', # Can be any valid URL
#}
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 3,
'REQUIRE_JS_PLAYER': False,
},
'ios': {
'INNERTUBE_API_KEY': 'AIzaSyB-63vPrdThhKuerbB2N_l7Kwwcxj6yUAc',
'INNERTUBE_CONTEXT': {
'client': {
'hl': 'en',
'gl': 'US',
'clientName': 'IOS',
'clientVersion': '19.09.3',
'deviceModel': 'iPhone14,3',
'userAgent': 'com.google.ios.youtube/19.09.3 (iPhone14,3; U; CPU iOS 15_6 like Mac OS X)'
}
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 5,
'REQUIRE_JS_PLAYER': False
},
# This client can access age restricted videos (unless the uploader has disabled the 'allow embedding' option)
# See: https://github.com/zerodytrash/YouTube-Internal-Clients
'tv_embedded': {
'INNERTUBE_API_KEY': 'AIzaSyAO_FJ2SlqU8Q4STEHLGCilw_Y9_11qcW8',
'INNERTUBE_CONTEXT': {
'client': {
'hl': 'en',
'gl': 'US',
'clientName': 'TVHTML5_SIMPLY_EMBEDDED_PLAYER',
'clientVersion': '2.0',
'clientScreen': 'EMBED',
},
# https://github.com/yt-dlp/yt-dlp/pull/575#issuecomment-887739287
'thirdParty': {
'embedUrl': 'https://google.com', # Can be any valid URL
}
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 85,
'REQUIRE_JS_PLAYER': True,
},
'web': {
'INNERTUBE_API_KEY': 'AIzaSyAO_FJ2SlqU8Q4STEHLGCilw_Y9_11qcW8',
'INNERTUBE_CONTEXT': {
'client': {
'clientName': 'WEB',
'clientVersion': '2.20220801.00.00',
'userAgent': desktop_user_agent,
}
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 1
},
'android_vr': {
'INNERTUBE_API_KEY': 'AIzaSyA8eiZmM1FaDVjRy-df2KTyQ_vz_yYM39w',
'INNERTUBE_CONTEXT': {
'client': {
'clientName': 'ANDROID_VR',
'clientVersion': '1.60.19',
'deviceMake': 'Oculus',
'deviceModel': 'Quest 3',
'androidSdkVersion': 32,
'userAgent': 'com.google.android.apps.youtube.vr.oculus/1.60.19 (Linux; U; Android 12L; eureka-user Build/SQ3A.220605.009.A1) gzip',
'osName': 'Android',
'osVersion': '12L',
},
},
'INNERTUBE_CONTEXT_CLIENT_NAME': 28,
'REQUIRE_JS_PLAYER': False,
},
}
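Each `INNERTUBE_CLIENTS` entry pairs an API key with the `context` object that goes into the request body. A minimal sketch of how such an entry is consumed (trimmed dict copied from above; the video id and helper name are purely illustrative):

```python
import json

# A trimmed copy of one entry from the INNERTUBE_CLIENTS table above.
INNERTUBE_CLIENTS = {
    'android_vr': {
        'INNERTUBE_API_KEY': 'AIzaSyA8eiZmM1FaDVjRy-df2KTyQ_vz_yYM39w',
        'INNERTUBE_CONTEXT': {
            'client': {'clientName': 'ANDROID_VR', 'clientVersion': '1.60.19'},
        },
        'INNERTUBE_CONTEXT_CLIENT_NAME': 28,
    },
}

def build_player_request(client, video_id):
    # Assemble the /youtubei/v1/player URL and JSON body for a client.
    params = INNERTUBE_CLIENTS[client]
    body = {'videoId': video_id, 'context': params['INNERTUBE_CONTEXT']}
    url = ('https://www.youtube.com/youtubei/v1/player?key='
           + params['INNERTUBE_API_KEY'])
    return url, json.dumps(body)

url, body = build_player_request('android_vr', 'dQw4w9WgXcQ')
```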
def get_visitor_data():
visitor_data = None
visitor_data_cache = os.path.join(settings.data_dir, 'visitorData.txt')
if not os.path.exists(settings.data_dir):
os.makedirs(settings.data_dir)
if os.path.isfile(visitor_data_cache):
with open(visitor_data_cache, 'r') as file:
print('Getting visitor_data from cache')
visitor_data = file.read()
max_age = 12*3600
file_age = time.time() - os.path.getmtime(visitor_data_cache)
if file_age > max_age:
print('visitor_data cache is too old. Removing file...')
os.remove(visitor_data_cache)
return visitor_data
print('Fetching youtube homepage to get visitor_data')
yt_homepage = 'https://www.youtube.com'
yt_resp = fetch_url(yt_homepage, headers={'User-Agent': mobile_user_agent}, report_text='Getting youtube homepage')
visitor_data_re = r'''"visitorData":\s*?"(.+?)"'''
visitor_data_match = re.search(visitor_data_re, yt_resp.decode())
if visitor_data_match:
visitor_data = visitor_data_match.group(1)
print(f'Got visitor_data: {len(visitor_data)}')
with open(visitor_data_cache, 'w') as file:
print('Saving visitor_data cache...')
file.write(visitor_data)
return visitor_data
else:
print('Unable to get visitor_data value')
return visitor_data
def call_youtube_api(client, api, data):
client_params = INNERTUBE_CLIENTS[client]
context = client_params['INNERTUBE_CONTEXT']
key = client_params['INNERTUBE_API_KEY']
host = client_params.get('INNERTUBE_HOST') or 'www.youtube.com'
user_agent = context['client'].get('userAgent') or mobile_user_agent
visitor_data = get_visitor_data()
url = 'https://' + host + '/youtubei/v1/' + api + '?key=' + key
if visitor_data:
context['client'].update({'visitorData': visitor_data})
data['context'] = context
data = json.dumps(data)
headers = (('Content-Type', 'application/json'),('User-Agent', user_agent))
if visitor_data:
headers = ( *headers, ('X-Goog-Visitor-Id', visitor_data ))
response = fetch_url(
url, data=data, headers=headers,
debug_name='youtubei_' + api + '_' + client,
report_text='Fetched ' + client + ' youtubei ' + api
).decode('utf-8')
return response
def strip_non_ascii(string):
    ''' Returns the string without non ASCII characters'''
+    if string is None:
+        return ""
    stripped = (c for c in string if 0 < ord(c) < 127)
    return ''.join(stripped)
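The None guard added to `strip_non_ascii` fixes the traceback quoted in the commit message; the behavior can be exercised standalone:

```python
def strip_non_ascii(string):
    '''Returns the string without non-ASCII characters.'''
    if string is None:  # guard added by the fix: comment authors can be None
        return ""
    return ''.join(c for c in string if 0 < ord(c) < 127)

assert strip_non_ascii(None) == ""
assert strip_non_ascii('héllo') == 'hllo'
assert strip_non_ascii('abc') == 'abc'
```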


@@ -1,3 +1,3 @@
from __future__ import unicode_literals
-__version__ = '0.2.1'
+__version__ = 'v0.3.2'


@@ -15,6 +15,10 @@ import traceback
import urllib
import re
import urllib3.exceptions
+from urllib.parse import parse_qs, urlencode
+from types import SimpleNamespace
+from math import ceil

try:
    with open(os.path.join(settings.data_dir, 'decrypt_function_cache.json'), 'r') as f:
@@ -46,6 +50,8 @@ def get_video_sources(info, target_resolution):
    video_only_sources = {}
    uni_sources = []
    pair_sources = []

    for fmt in info['formats']:
        if not all(fmt[attr] for attr in ('ext', 'url', 'itag')):
            continue
@@ -71,7 +77,6 @@ def get_video_sources(info, target_resolution):
            fmt['audio_bitrate'] = int(fmt['bitrate']/1000)
            source = {
                'type': 'audio/' + fmt['ext'],
-               'bitrate': fmt['audio_bitrate'],
                'quality_string': audio_quality_string(fmt),
            }
            source.update(fmt)
@@ -173,7 +178,7 @@ def make_caption_src(info, lang, auto=False, trans_lang=None):
    if trans_lang:
        label += ' -> ' + trans_lang
    return {
-        'url': '/' + yt_data_extract.get_caption_url(info, lang, 'vtt', auto, trans_lang),
+        'url': util.prefix_url(yt_data_extract.get_caption_url(info, lang, 'vtt', auto, trans_lang)),
        'label': label,
        'srclang': trans_lang[0:2] if trans_lang else lang[0:2],
        'on': False,
@@ -217,6 +222,8 @@ def get_subtitle_sources(info):
        pref_lang (Automatic)
        pref_lang (Manual)'''
    sources = []
+    if not yt_data_extract.captions_available(info):
+        return []
    pref_lang = settings.subtitles_language
    native_video_lang = None
    if info['automatic_caption_languages']:
@@ -303,14 +310,6 @@ def save_decrypt_cache():
    f.close()

-watch_headers = (
-    ('Accept', '*/*'),
-    ('Accept-Language', 'en-US,en;q=0.5'),
-    ('X-YouTube-Client-Name', '2'),
-    ('X-YouTube-Client-Version', '2.20180830'),
-) + util.mobile_ua

def decrypt_signatures(info, video_id):
    '''return error string, or False if no errors'''
    if not yt_data_extract.requires_decryption(info):
@@ -341,7 +340,13 @@ def _add_to_error(info, key, additional_message):
        info[key] = additional_message

-def extract_info(video_id, use_invidious, playlist_id=None, index=None):
+def fetch_player_response(client, video_id):
+    return util.call_youtube_api(client, 'player', {
+        'videoId': video_id,
+    })
+
+
+def fetch_watch_page_info(video_id, playlist_id, index):
    # bpctr=9999999999 will bypass are-you-sure dialogs for controversial
    # videos
    url = 'https://m.youtube.com/embed/' + video_id + '?bpctr=9999999999'
@@ -349,56 +354,55 @@
    if playlist_id:
        url += '&list=' + playlist_id
    if index:
        url += '&index=' + index
-    watch_page = util.fetch_url(url, headers=watch_headers,
+
+    headers = (
+        ('Accept', '*/*'),
+        ('Accept-Language', 'en-US,en;q=0.5'),
+        ('X-YouTube-Client-Name', '2'),
+        ('X-YouTube-Client-Version', '2.20180830'),
+    ) + util.mobile_ua
+    watch_page = util.fetch_url(url, headers=headers,
                                debug_name='watch')
    watch_page = watch_page.decode('utf-8')
-    info = yt_data_extract.extract_watch_info_from_html(watch_page)
+    return yt_data_extract.extract_watch_info_from_html(watch_page)

-    # request player urls if it's missing
-    # see https://github.com/user234683/youtube-local/issues/22#issuecomment-706395160
-    if info['age_restricted'] or info['player_urls_missing']:
-        if info['age_restricted']:
-            print('Age restricted video. Fetching /youtubei/v1/player page')
-        else:
-            print('Missing player. Fetching /youtubei/v1/player page')
-        # https://github.com/yt-dlp/yt-dlp/issues/574#issuecomment-887171136
-        # ANDROID is used instead because its urls don't require decryption
-        # The URLs returned with WEB for videos requiring decryption
-        # couldn't be decrypted with the base.js from the web page for some
-        # reason
-        url = 'https://youtubei.googleapis.com/youtubei/v1/player'
-        url += '?key=AIzaSyAO_FJ2SlqU8Q4STEHLGCilw_Y9_11qcW8'
-        data = {
-            'videoId': video_id,
-            'context': {
-                'client': {
-                    'clientName': 'ANDROID',
-                    'clientVersion': '16.20',
-                    'clientScreen': 'EMBED',
-                    'gl': 'US',
-                    'hl': 'en',
-                },
-                # https://github.com/yt-dlp/yt-dlp/pull/575#issuecomment-887739287
-                'thirdParty': {
-                    'embedUrl': 'https://google.com',  # Can be any valid URL
-                }
-            }
-        }
-        data = json.dumps(data)
-        content_header = (('Content-Type', 'application/json'),)
-        player_response = util.fetch_url(
-            url, data=data, headers=util.mobile_ua + content_header,
-            debug_name='youtubei_player',
-            report_text='Fetched youtubei player page').decode('utf-8')
-        yt_data_extract.update_with_age_restricted_info(info,
-                                                        player_response)
+
+def extract_info(video_id, use_invidious, playlist_id=None, index=None):
+    primary_client = 'android_vr'
+    fallback_client = 'ios'
+    last_resort_client = 'tv_embedded'
+
+    tasks = (
+        # Get video metadata from here
+        gevent.spawn(fetch_watch_page_info, video_id, playlist_id, index),
+        gevent.spawn(fetch_player_response, primary_client, video_id)
+    )
+    gevent.joinall(tasks)
+    util.check_gevent_exceptions(*tasks)
+
+    info = tasks[0].value or {}
+    player_response = tasks[1].value or {}
+
+    yt_data_extract.update_with_new_urls(info, player_response)
+
+    # Fallback to 'ios' if no valid URLs are found
+    if not info.get('formats') or info.get('player_urls_missing'):
+        print(f"No URLs found in '{primary_client}', attempting with '{fallback_client}'.")
+        player_response = fetch_player_response(fallback_client, video_id) or {}
+        yt_data_extract.update_with_new_urls(info, player_response)
+
+    # Final attempt with 'tv_embedded' if there are still no URLs
+    if not info.get('formats') or info.get('player_urls_missing'):
+        print(f"No URLs found in '{fallback_client}', attempting with '{last_resort_client}'")
+        player_response = fetch_player_response(last_resort_client, video_id) or {}
+        yt_data_extract.update_with_new_urls(info, player_response)

    # signature decryption
-    decryption_error = decrypt_signatures(info, video_id)
-    if decryption_error:
-        decryption_error = 'Error decrypting url signatures: ' + decryption_error
-        info['playability_error'] = decryption_error
+    if info.get('formats'):
+        decryption_error = decrypt_signatures(info, video_id)
+        if decryption_error:
+            info['playability_error'] = 'Error decrypting url signatures: ' + decryption_error

    # check if urls ready (non-live format) in former livestream
    # urls not ready if all of them have no filesize
@@ -412,26 +416,26 @@ def extract_info(video_id, use_invidious, playlist_id=None, index=None):
    # livestream urls
    # sometimes only the livestream urls work soon after the livestream is over
-    if (info['hls_manifest_url']
-            and (info['live'] or not info['formats'] or not info['urls_ready'])
-            ):
-        manifest = util.fetch_url(
-            info['hls_manifest_url'],
-            debug_name='hls_manifest.m3u8',
-            report_text='Fetched hls manifest'
-        ).decode('utf-8')
-
-        info['hls_formats'], err = yt_data_extract.extract_hls_formats(manifest)
-        if not err:
-            info['playability_error'] = None
-        for fmt in info['hls_formats']:
-            fmt['video_quality'] = video_quality_string(fmt)
-    else:
-        info['hls_formats'] = []
+    info['hls_formats'] = []
+    if info.get('hls_manifest_url') and (info.get('live') or not info.get('formats') or not info['urls_ready']):
+        try:
+            manifest = util.fetch_url(info['hls_manifest_url'],
+                                      debug_name='hls_manifest.m3u8',
+                                      report_text='Fetched hls manifest'
+                                      ).decode('utf-8')
+            info['hls_formats'], err = yt_data_extract.extract_hls_formats(manifest)
+            if not err:
+                info['playability_error'] = None
+                for fmt in info['hls_formats']:
+                    fmt['video_quality'] = video_quality_string(fmt)
+        except Exception as e:
+            print(f"Error fetching HLS manifest: {e}")
+            info['hls_formats'] = []

    # check for 403. Unnecessary for tor video routing b/c ip address is same
    info['invidious_used'] = False
    info['invidious_reload_button'] = False
+    info['tor_bypass_used'] = False
    if (settings.route_tor == 1
            and info['formats'] and info['formats'][0]['url']):
        try:
@@ -445,6 +449,7 @@ def extract_info(video_id, use_invidious, playlist_id=None, index=None):
            if response.status == 403:
                print('Access denied (403) for video urls.')
                print('Routing video through Tor')
+                info['tor_bypass_used'] = True
                for fmt in info['formats']:
                    fmt['url'] += '&use_tor=1'
            elif 300 <= response.status < 400:
@@ -506,9 +511,66 @@ def format_bytes(bytes):
    return '%.2f%s' % (converted, suffix)
@yt_app.route('/ytl-api/storyboard.vtt')
def get_storyboard_vtt():
"""
See:
https://github.com/iv-org/invidious/blob/9a8b81fcbe49ff8d88f197b7f731d6bf79fc8087/src/invidious.cr#L3603
https://github.com/iv-org/invidious/blob/3bb7fbb2f119790ee6675076b31cd990f75f64bb/src/invidious/videos.cr#L623
"""
spec_url = request.args.get('spec_url')
url, *boards = spec_url.split('|')
base_url, q = url.split('?')
q = parse_qs(q) # for url query
storyboard = None
wanted_height = 90
for i, board in enumerate(boards):
*t, _, sigh = board.split("#")
width, height, count, width_cnt, height_cnt, interval = map(int, t)
if height != wanted_height: continue
q['sigh'] = [sigh]
url = f"{base_url}?{urlencode(q, doseq=True)}"
storyboard = SimpleNamespace(
url = url.replace("$L", str(i)).replace("$N", "M$M"),
width = width,
height = height,
interval = interval,
width_cnt = width_cnt,
height_cnt = height_cnt,
storyboard_count = ceil(count / (width_cnt * height_cnt))
)
if not storyboard:
flask.abort(404)
def to_ts(ms):
s, ms = divmod(ms, 1000)
h, s = divmod(s, 3600)
m, s = divmod(s, 60)
return f"{h:02}:{m:02}:{s:02}.{ms:03}"
r = "WEBVTT" # result
ts = 0 # current timestamp
for i in range(storyboard.storyboard_count):
url = '/' + storyboard.url.replace("$M", str(i))
interval = storyboard.interval
w, h = storyboard.width, storyboard.height
w_cnt, h_cnt = storyboard.width_cnt, storyboard.height_cnt
for j in range(h_cnt):
for k in range(w_cnt):
r += f"{to_ts(ts)} --> {to_ts(ts+interval)}\n"
r += f"{url}#xywh={w * k},{h * j},{w},{h}\n\n"
ts += interval
return flask.Response(r, mimetype='text/vtt')
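The storyboard endpoint's timestamp helper and image-count arithmetic can be checked in isolation; a small sketch with the same `to_ts` body and the tiling formula from the code above (sample numbers are illustrative):

```python
from math import ceil

def to_ts(ms):
    # Format milliseconds as a WEBVTT cue timestamp HH:MM:SS.mmm.
    s, ms = divmod(ms, 1000)
    h, s = divmod(s, 3600)
    m, s = divmod(s, 60)
    return f"{h:02}:{m:02}:{s:02}.{ms:03}"

# Each storyboard image is a width_cnt x height_cnt grid of thumbnails,
# so `count` frames need ceil(count / (width_cnt * height_cnt)) images.
count, width_cnt, height_cnt = 100, 5, 5
storyboard_count = ceil(count / (width_cnt * height_cnt))
```

For example, `to_ts(61500)` yields `"00:01:01.500"`, and 100 frames in 5x5 grids need 4 images.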
time_table = {'h': 3600, 'm': 60, 's': 1}

@yt_app.route('/watch')
@yt_app.route('/embed')
@yt_app.route('/embed/<video_id>')
@@ -563,8 +625,11 @@ def get_watch_page(video_id=None):
    # prefix urls, and other post-processing not handled by yt_data_extract
    for item in info['related_videos']:
+        item['thumbnail'] = "https://i.ytimg.com/vi/{}/hqdefault.jpg".format(item['id'])  # set HQ thumbnails for related videos
        util.prefix_urls(item)
        util.add_extra_html_info(item)
+    for song in info['music_list']:
+        song['url'] = util.prefix_url(song['url'])
    if info['playlist']:
        playlist_id = info['playlist']['id']
        for item in info['playlist']['items']:
@@ -594,12 +659,6 @@ def get_watch_page(video_id=None):
        '/videoplayback',
        '/videoplayback/name/' + filename)

-    if settings.gather_googlevideo_domains:
-        with open(os.path.join(settings.data_dir, 'googlevideo-domains.txt'), 'a+', encoding='utf-8') as f:
-            url = info['formats'][0]['url']
-            subdomain = url[0:url.find(".googlevideo.com")]
-            f.write(subdomain + "\n")

    download_formats = []
    for format in (info['formats'] + info['hls_formats']):
@@ -616,20 +675,19 @@ def get_watch_page(video_id=None):
            'codecs': codecs_string,
        })

-    target_resolution = settings.default_resolution
+    if (settings.route_tor == 2) or info['tor_bypass_used']:
+        target_resolution = 240
+    else:
+        target_resolution = settings.default_resolution

    source_info = get_video_sources(info, target_resolution)
    uni_sources = source_info['uni_sources']
    pair_sources = source_info['pair_sources']
    uni_idx, pair_idx = source_info['uni_idx'], source_info['pair_idx']
-    video_height = yt_data_extract.deep_get(source_info, 'uni_sources',
-                                            uni_idx, 'height',
-                                            default=360)
-    video_width = yt_data_extract.deep_get(source_info, 'uni_sources',
-                                           uni_idx, 'width',
-                                           default=640)

    pair_quality = yt_data_extract.deep_get(pair_sources, pair_idx, 'quality')
    uni_quality = yt_data_extract.deep_get(uni_sources, uni_idx, 'quality')
    pair_error = abs((pair_quality or 360) - target_resolution)
    uni_error = abs((uni_quality or 360) - target_resolution)
    if uni_error == pair_error:
@@ -639,9 +697,18 @@ def get_watch_page(video_id=None):
        closer_to_target = 'uni'
    else:
        closer_to_target = 'pair'
-    using_pair_sources = (
-        bool(pair_sources) and (not uni_sources or closer_to_target == 'pair')
-    )
+
+    if settings.prefer_uni_sources == 2:
+        # Use uni sources unless there's no choice.
+        using_pair_sources = (
+            bool(pair_sources) and (not uni_sources)
+        )
+    else:
+        # Use the pair sources if they're closer to the desired resolution
+        using_pair_sources = (
+            bool(pair_sources)
+            and (not uni_sources or closer_to_target == 'pair')
+        )
    if using_pair_sources:
        video_height = pair_sources[pair_idx]['height']
        video_width = pair_sources[pair_idx]['width']
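The source-selection branch above reduces to a small pure function; a sketch (function name is hypothetical, the `prefer_uni_sources == 2` semantics are taken from the comments in the change):

```python
def choose_pair_sources(pair_sources, uni_sources, closer_to_target,
                        prefer_uni_sources):
    # With prefer_uni_sources == 2, pair (separate video+audio) sources
    # are only used when there is no unified source at all; otherwise
    # whichever kind is closer to the target resolution wins.
    if prefer_uni_sources == 2:
        return bool(pair_sources) and not uni_sources
    return bool(pair_sources) and (
        not uni_sources or closer_to_target == 'pair')

assert choose_pair_sources(['p'], ['u'], 'pair', 2) is False
assert choose_pair_sources(['p'], [], 'uni', 2) is True
assert choose_pair_sources(['p'], ['u'], 'pair', 0) is True
```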
@@ -653,6 +720,8 @@ def get_watch_page(video_id=None):
        uni_sources, uni_idx, 'width', default=640
    )

    # 1 second per pixel, or the actual video width
    theater_video_target_width = max(640, info['duration'] or 0, video_width)
@@ -685,12 +754,10 @@ def get_watch_page(video_id=None):
        template_name = 'embed.html'
    else:
        template_name = 'watch.html'
-    return flask.render_template(
-        template_name,
+    return flask.render_template(template_name,
        header_playlist_names = local_playlist.get_playlist_names(),
        uploader_channel_url = ('/' + info['author_url']) if info['author_url'] else '',
        time_published = info['time_published'],
-        time_published_utc=time_utc_isoformat(info['time_published']),
        view_count = (lambda x: '{:,}'.format(x) if x is not None else "")(info.get("view_count", None)),
        like_count = (lambda x: '{:,}'.format(x) if x is not None else "")(info.get("like_count", None)),
        dislike_count = (lambda x: '{:,}'.format(x) if x is not None else "")(info.get("dislike_count", None)),
@@ -726,6 +793,9 @@ def get_watch_page(video_id=None):
        invidious_reload_button = info['invidious_reload_button'],
        video_url = util.URL_ORIGIN + '/watch?v=' + video_id,
        video_id = video_id,
+        storyboard_url = (util.URL_ORIGIN + '/ytl-api/storyboard.vtt?' +
+            urlencode([('spec_url', info['storyboard_spec_url'])])
+            if info['storyboard_spec_url'] else None),
        js_data = {
            'video_id': info['id'],
@@ -739,7 +809,7 @@ def get_watch_page(video_id=None):
            'related': info['related_videos'],
            'playability_error': info['playability_error'],
        },
-        font_family=youtube.font_choices[settings.font],
+        font_family = youtube.font_choices[settings.font],  # for embed page
        **source_info,
        using_pair_sources = using_pair_sources,
    )


@@ -7,7 +7,7 @@ from .everything_else import (extract_channel_info, extract_search_info,
    extract_playlist_metadata, extract_playlist_info, extract_comments_info)

from .watch_extraction import (extract_watch_info, get_caption_url,
-    update_with_age_restricted_info, requires_decryption,
+    update_with_new_urls, requires_decryption,
    extract_decryption_function, decrypt_signatures, _formats,
    update_format_with_type_info, extract_hls_formats,
-    extract_watch_info_from_html)
+    extract_watch_info_from_html, captions_available)


@@ -109,7 +109,7 @@ def concat_or_none(*strings):

def remove_redirect(url):
    if url is None:
        return None
-    if re.fullmatch(r'(((https?:)?//)?(www.)?youtube.com)?/redirect\?.*', url) is not None:  # youtube puts these on external links to do tracking
+    if re.fullmatch(r'(((https?:)?//)?(www.)?youtube.com)?/redirect\?.*', url) is not None:  # YouTube puts these on external links to do tracking
        query_string = url[url.find('?')+1: ]
        return urllib.parse.parse_qs(query_string)['q'][0]
    return url
@@ -133,11 +133,11 @@ def _recover_urls(runs):
    for run in runs:
        url = deep_get(run, 'navigationEndpoint', 'urlEndpoint', 'url')
        text = run.get('text', '')
-        # second condition is necessary because youtube makes other things into urls, such as hashtags, which we want to keep as text
+        # second condition is necessary because YouTube makes other things into urls, such as hashtags, which we want to keep as text
        if url is not None and (text.startswith('http://') or text.startswith('https://')):
            url = remove_redirect(url)
            run['url'] = url
-            run['text'] = url  # youtube truncates the url text, use actual url instead
+            run['text'] = url  # YouTube truncates the url text, use actual url instead

def extract_str(node, default=None, recover_urls=False):
    '''default is the value returned if the extraction fails. If recover_urls is true, will attempt to fix YouTube's truncation of url text (most prominently seen in descriptions)'''
@@ -185,7 +185,7 @@ def extract_int(string, default=None, whole_word=True):
    return default

def extract_approx_int(string):
-    '''e.g. "15.1M" from "15.1M subscribers"'''
+    '''e.g. "15.1M" from "15.1M subscribers" or "4,353" from "4353"'''
    if not isinstance(string, str):
        string = extract_str(string)
    if not string:
@@ -193,7 +193,10 @@ def extract_approx_int(string):
    match = re.search(r'\b(\d+(?:\.\d+)?[KMBTkmbt]?)\b', string.replace(',', ''))
    if match is None:
        return None
-    return match.group(1)
+    result = match.group(1)
+    if re.fullmatch(r'\d+', result):
+        result = '{:,}'.format(int(result))
+    return result
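The patched helper now re-inserts thousands separators when the matched value is a plain integer. A trimmed standalone sketch of the same behavior (the `extract_str` fallback and None handling are omitted here):

```python
import re

def extract_approx_int(string):
    '''e.g. "15.1M" from "15.1M subscribers" or "4,353" from "4353"'''
    match = re.search(r'\b(\d+(?:\.\d+)?[KMBTkmbt]?)\b',
                      string.replace(',', ''))
    if match is None:
        return None
    result = match.group(1)
    if re.fullmatch(r'\d+', result):  # plain integer: re-add separators
        result = '{:,}'.format(int(result))
    return result

assert extract_approx_int('15.1M subscribers') == '15.1M'
assert extract_approx_int('4353 views') == '4,353'
```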
MONTH_ABBREVIATIONS = {'jan':'1', 'feb':'2', 'mar':'3', 'apr':'4', 'may':'5', 'jun':'6', 'jul':'7', 'aug':'8', 'sep':'9', 'oct':'10', 'nov':'11', 'dec':'12'}

def extract_date(date_text):
@@ -249,6 +252,9 @@ def extract_item_info(item, additional_info={}):
primary_type = type_parts[-2]
if primary_type == 'video':
info['type'] = 'video'
elif type_parts[0] == 'reel': # shorts
info['type'] = 'video'
primary_type = 'video'
elif primary_type in ('playlist', 'radio', 'show'):
info['type'] = 'playlist'
info['playlist_type'] = primary_type
@@ -295,7 +301,11 @@ def extract_item_info(item, additional_info={}):
info['time_published'] = timestamp.group(1)
if primary_type == 'video':
info['id'] = multi_deep_get(item,
['videoId'],
['navigationEndpoint', 'watchEndpoint', 'videoId'],
['navigationEndpoint', 'reelWatchEndpoint', 'videoId'] # shorts
)
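`multi_deep_get` here tries several key paths in order and returns the first that resolves, which is what lets one extractor cover both regular videos and shorts. A rough standalone equivalent of its assumed semantics (inferred from how it is called, not copied from the project's util module):

```python
def deep_get(obj, *keys, default=None):
    # walk nested dicts/lists, returning default on any miss
    for key in keys:
        try:
            obj = obj[key]
        except (KeyError, IndexError, TypeError):
            return default
    return obj

def multi_deep_get(obj, *key_paths, default=None):
    # return the value of the first path that resolves to non-None
    sentinel = object()
    for path in key_paths:
        value = deep_get(obj, *path, default=sentinel)
        if value is not sentinel and value is not None:
            return value
    return default

item = {'navigationEndpoint': {'reelWatchEndpoint': {'videoId': 'abc123'}}}
print(multi_deep_get(item,
                     ['videoId'],
                     ['navigationEndpoint', 'watchEndpoint', 'videoId'],
                     ['navigationEndpoint', 'reelWatchEndpoint', 'videoId']))
# abc123
```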
info['view_count'] = extract_int(item.get('viewCountText'))
# dig into accessibility data to get view_count for videos marked as recommended, and to get time_published
@@ -313,17 +323,35 @@ def extract_item_info(item, additional_info={}):
if info['view_count']:
info['approx_view_count'] = '{:,}'.format(info['view_count'])
else:
info['approx_view_count'] = extract_approx_int(multi_get(item,
'shortViewCountText',
'viewCountText' # shorts
))
# handle case where it is "No views"
if not info['approx_view_count']:
if ('No views' in item.get('shortViewCountText', '')
or 'no views' in accessibility_label.lower()
or 'No views' in extract_str(item.get('viewCountText', '')) # shorts
):
info['view_count'] = 0
info['approx_view_count'] = '0'
info['duration'] = extract_str(item.get('lengthText'))
# dig into accessibility data to get duration for shorts
accessibility_label = deep_get(item,
'accessibility', 'accessibilityData', 'label',
default='')
duration = re.search(r'(\d+) (second|seconds|minute) - play video$',
accessibility_label)
if duration:
if duration.group(2) == 'minute':
conservative_update(info, 'duration', '1:00')
else:
conservative_update(info,
'duration', '0:' + duration.group(1).zfill(2))
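Shorts only expose their length through the accessibility label, which ends in text like "59 seconds - play video", so the added block above recovers the duration from it. The same parsing in isolation (the label string is made up for illustration):

```python
import re

def duration_from_label(label):
    # shorts expose duration only via the accessibility label
    m = re.search(r'(\d+) (second|seconds|minute) - play video$', label)
    if not m:
        return None
    if m.group(2) == 'minute':
        return '1:00'           # "1 minute" has no seconds part
    return '0:' + m.group(1).zfill(2)

print(duration_from_label('Funny clip - 59 seconds - play video'))  # 0:59
print(duration_from_label('Clip - 1 minute - play video'))          # 1:00
```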
# if it's an item in a playlist, get its index
if 'index' in item: # url has wrong index on playlist page
info['index'] = extract_int(item.get('index'))
@@ -395,6 +423,8 @@ _item_types = {
'gridVideoRenderer',
'playlistVideoRenderer',
'reelItemRenderer',
'playlistRenderer',
'compactPlaylistRenderer',
'gridPlaylistRenderer',
@@ -542,9 +572,13 @@ def extract_items(response, item_types=_item_types,
item_types=item_types)
if items:
break
if ('onResponseReceivedEndpoints' in response
or 'onResponseReceivedActions' in response):
for endpoint in multi_get(response,
'onResponseReceivedEndpoints',
'onResponseReceivedActions',
[]):
new_items, new_ctoken = extract_items_from_renderer_list(
multi_deep_get(
endpoint,
['reloadContinuationItemsCommand', 'continuationItems'],
@@ -553,13 +587,17 @@ def extract_items(response, item_types=_item_types,
),
item_types=item_types,
)
items += new_items
if (not ctoken) or (new_ctoken and new_items):
ctoken = new_ctoken
if 'contents' in response:
renderer = get(response, 'contents', {})
new_items, new_ctoken = extract_items_from_renderer(
renderer,
item_types=item_types)
items += new_items
if (not ctoken) or (new_ctoken and new_items):
ctoken = new_ctoken
if search_engagement_panels and 'engagementPanels' in response:
new_items, new_ctoken = extract_items_from_renderer_list(


@@ -9,7 +9,7 @@ import re
import urllib
from math import ceil
def extract_channel_info(polymer_json, tab, continuation=False):
response, err = extract_response(polymer_json)
if err:
return {'error': err}
@@ -23,7 +23,8 @@ def extract_channel_info(polymer_json, tab):
# channel doesn't exist or was terminated
# example terminated channel: https://www.youtube.com/channel/UCnKJeK_r90jDdIuzHXC0Org
# metadata and microformat are not present for continuation requests
if not metadata and not continuation:
if response.get('alerts'):
error_string = ' '.join(
extract_str(deep_get(alert, 'alertRenderer', 'text'), default='')
@@ -44,7 +45,7 @@ def extract_channel_info(polymer_json, tab):
info['approx_subscriber_count'] = extract_approx_int(deep_get(response,
'header', 'c4TabbedHeaderRenderer', 'subscriberCountText'))
# stuff from microformat (info given by youtube for first page on channel)
info['short_description'] = metadata.get('description')
if info['short_description'] and len(info['short_description']) > 730:
info['short_description'] = info['short_description'][0:730] + '...'
@@ -69,10 +70,10 @@ def extract_channel_info(polymer_json, tab):
info['ctoken'] = None
# empty channel
#if 'contents' not in response and 'continuationContents' not in response:
# return info
if tab in ('videos', 'shorts', 'streams', 'playlists', 'search'):
items, ctoken = extract_items(response)
additional_info = {
'author': info['channel_name'],
@@ -84,23 +85,84 @@ def extract_channel_info(polymer_json, tab):
if tab in ('search', 'playlists'):
info['is_last_page'] = (ctoken is None)
elif tab == 'about':
# Latest type
items, _ = extract_items(response, item_types={'aboutChannelRenderer'})
if items:
a_metadata = deep_get(items, 0, 'aboutChannelRenderer',
'metadata', 'aboutChannelViewModel')
if not a_metadata:
info['error'] = 'Could not find aboutChannelViewModel'
return info
info['links'] = []
for link_outer in a_metadata.get('links', ()):
link = link_outer.get('channelExternalLinkViewModel') or {}
link_content = extract_str(deep_get(link, 'link', 'content'))
for run in deep_get(link, 'link', 'commandRuns') or ():
url = remove_redirect(deep_get(run, 'onTap',
'innertubeCommand', 'urlEndpoint', 'url'))
if url and not (url.startswith('http://')
or url.startswith('https://')):
url = 'https://' + url
if link_content is None or (link_content in url):
break
else: # didn't break
url = link_content
if url and not (url.startswith('http://')
or url.startswith('https://')):
url = 'https://' + url
text = extract_str(deep_get(link, 'title', 'content'))
info['links'].append( (text, url) )
info['date_joined'] = extract_date(
a_metadata.get('joinedDateText')
)
info['view_count'] = extract_int(a_metadata.get('viewCountText'))
info['approx_view_count'] = extract_approx_int(
a_metadata.get('viewCountText')
)
info['description'] = extract_str(
a_metadata.get('description'), default=''
)
info['approx_video_count'] = extract_approx_int(
a_metadata.get('videoCountText')
)
info['approx_subscriber_count'] = extract_approx_int(
a_metadata.get('subscriberCountText')
)
info['country'] = extract_str(a_metadata.get('country'))
info['canonical_url'] = extract_str(
a_metadata.get('canonicalChannelUrl')
)
# Old type
else:
items, _ = extract_items(response,
item_types={'channelAboutFullMetadataRenderer'})
if not items:
info['error'] = 'Could not find aboutChannelRenderer or channelAboutFullMetadataRenderer'
return info
a_metadata = items[0]['channelAboutFullMetadataRenderer']
info['links'] = []
for link_json in a_metadata.get('primaryLinks', ()):
url = remove_redirect(deep_get(link_json, 'navigationEndpoint',
'urlEndpoint', 'url'))
if url and not (url.startswith('http://')
or url.startswith('https://')):
url = 'https://' + url
text = extract_str(link_json.get('title'))
info['links'].append( (text, url) )
info['date_joined'] = extract_date(a_metadata.get('joinedDateText'))
info['view_count'] = extract_int(a_metadata.get('viewCountText'))
info['description'] = extract_str(a_metadata.get(
'description'), default='')
info['approx_video_count'] = None
info['approx_subscriber_count'] = None
info['country'] = None
info['canonical_url'] = None
else:
raise NotImplementedError('Unknown or unsupported channel tab: ' + tab)
@@ -167,7 +229,7 @@ def extract_playlist_metadata(polymer_json):
if metadata['first_video_id'] is None:
metadata['thumbnail'] = None
else:
metadata['thumbnail'] = f"https://i.ytimg.com/vi/{metadata['first_video_id']}/hqdefault.jpg"
metadata['video_count'] = extract_int(header.get('numVideosText'))
metadata['description'] = extract_str(header.get('descriptionText'), default='')
@@ -190,6 +252,19 @@ def extract_playlist_metadata(polymer_json):
elif 'updated' in text:
metadata['time_published'] = extract_date(text)
microformat = deep_get(response, 'microformat', 'microformatDataRenderer',
default={})
conservative_update(
metadata, 'title', extract_str(microformat.get('title'))
)
conservative_update(
metadata, 'description', extract_str(microformat.get('description'))
)
conservative_update(
metadata, 'thumbnail', deep_get(microformat, 'thumbnail',
'thumbnails', -1, 'url')
)
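The microformat values added above act as fallbacks rather than overrides because `conservative_update` only fills a field when nothing was already extracted. A sketch of that helper's assumed behavior (not copied from the project's util module):

```python
def conservative_update(obj, key, value):
    # only fill the slot if it is currently missing or None
    if obj.get(key) is None:
        obj[key] = value

metadata = {'title': 'From the header', 'description': None}
conservative_update(metadata, 'title', 'From microformat')        # kept as-is
conservative_update(metadata, 'description', 'From microformat')  # filled in
print(metadata)
```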
return metadata
def extract_playlist_info(polymer_json):
@@ -197,13 +272,11 @@ def extract_playlist_info(polymer_json):
if err:
return {'error': err}
info = {'error': None}
video_list, _ = extract_items(response)
info['items'] = [extract_item_info(renderer) for renderer in video_list]
info['metadata'] = extract_playlist_metadata(polymer_json)
return info


@@ -133,32 +133,59 @@ def _extract_from_video_information_renderer(renderer_content):
return info
def _extract_likes_dislikes(renderer_content):
def extract_button_count(toggle_button_renderer):
# all the digits can be found in the accessibility data
count = extract_int(multi_deep_get(
toggle_button_renderer,
['defaultText', 'accessibility', 'accessibilityData', 'label'],
['accessibility', 'label'],
['accessibilityData', 'accessibilityData', 'label'],
['accessibilityText'],
))
# this count doesn't have all the digits, it's like 53K for instance
dumb_count = extract_int(extract_str(multi_get(
toggle_button_renderer, ['defaultText', 'title'])))
# The accessibility text will be "No likes" or "No dislikes" or
# something like that, but dumb count will be 0
if dumb_count == 0:
count = 0
return count
info = {
'like_count': None,
'dislike_count': None,
}
for button in renderer_content.get('buttons', ()):
if 'slimMetadataToggleButtonRenderer' in button:
button_renderer = button['slimMetadataToggleButtonRenderer']
count = extract_button_count(deep_get(button_renderer,
'button',
'toggleButtonRenderer'))
if 'isLike' in button_renderer:
info['like_count'] = count
elif 'isDislike' in button_renderer:
info['dislike_count'] = count
elif 'slimMetadataButtonRenderer' in button:
button_renderer = button['slimMetadataButtonRenderer']
liberal_update(info, 'like_count', extract_button_count(
multi_deep_get(button_renderer,
['button', 'segmentedLikeDislikeButtonRenderer',
'likeButton', 'toggleButtonRenderer'],
['button', 'segmentedLikeDislikeButtonViewModel',
'likeButtonViewModel', 'likeButtonViewModel',
'toggleButtonViewModel', 'toggleButtonViewModel',
'defaultButtonViewModel', 'buttonViewModel']
)
))
'''liberal_update(info, 'dislike_count', extract_button_count(
deep_get(
button_renderer, 'button',
'segmentedLikeDislikeButtonRenderer',
'dislikeButton', 'toggleButtonRenderer'
)
))'''
return info
def _extract_from_owner_renderer(renderer_content):
@@ -212,6 +239,36 @@ def _extract_metadata_row_info(renderer_content):
return info
def _extract_from_music_renderer(renderer_content):
# latest format for the music list
info = {
'music_list': [],
}
for carousel in renderer_content.get('carouselLockups', []):
song = {}
carousel = carousel.get('carouselLockupRenderer', {})
video_renderer = carousel.get('videoLockup', {})
video_renderer_info = extract_item_info(video_renderer)
video_id = video_renderer_info.get('id')
song['url'] = concat_or_none('https://www.youtube.com/watch?v=',
video_id)
song['title'] = video_renderer_info.get('title')
for row in carousel.get('infoRows', []):
row = row.get('infoRowRenderer', {})
title = extract_str(row.get('title'))
data = extract_str(row.get('defaultMetadata'))
if title == 'SONG':
song['title'] = data
elif title == 'ARTIST':
song['artist'] = data
elif title == 'ALBUM':
song['album'] = data
elif title == 'WRITERS':
song['writers'] = data
info['music_list'].append(song)
return info
def _extract_from_video_metadata(renderer_content):
info = _extract_from_video_information_renderer(renderer_content)
liberal_dict_update(info, _extract_likes_dislikes(renderer_content))
@@ -235,6 +292,7 @@ visible_extraction_dispatch = {
'slimVideoActionBarRenderer': _extract_likes_dislikes,
'slimOwnerRenderer': _extract_from_owner_renderer,
'videoDescriptionHeaderRenderer': _extract_from_video_header_renderer,
'videoDescriptionMusicSectionRenderer': _extract_from_music_renderer,
'expandableVideoDescriptionRenderer': _extract_from_description_renderer,
'metadataRowContainerRenderer': _extract_metadata_row_info,
# OR just this one, which contains SOME of the above inside it
@@ -307,17 +365,18 @@ def _extract_watch_info_mobile(top_level):
# https://www.androidpolice.com/2019/10/31/google-youtube-app-comment-section-below-videos/
# https://www.youtube.com/watch?v=bR5Q-wD-6qo
if header_type == 'commentsEntryPointHeaderRenderer':
comment_count_text = extract_str(multi_get(
comment_info, 'commentCount', 'headerText'))
else:
comment_count_text = extract_str(deep_get(comment_info,
'header', 'commentSectionHeaderRenderer', 'countText'))
if comment_count_text == 'Comments': # just this with no number, means 0 comments
info['comment_count'] = '0'
else:
info['comment_count'] = extract_approx_int(comment_count_text)
info['comments_disabled'] = False
else: # no comment section present means comments are disabled
info['comment_count'] = '0'
info['comments_disabled'] = True
# check for limited state
@@ -369,26 +428,28 @@ def _extract_watch_info_desktop(top_level):
return info
def update_format_with_codec_info(fmt, codec):
if any(codec.startswith(c) for c in ('av', 'vp', 'h263', 'h264', 'mp4v')):
if codec == 'vp8.0':
codec = 'vp8'
conservative_update(fmt, 'vcodec', codec)
elif (codec.startswith('mp4a')
or codec in ('opus', 'mp3', 'aac', 'dtse', 'ec-3', 'vorbis',
'ac-3')):
conservative_update(fmt, 'acodec', codec)
else:
print('Warning: unrecognized codec: ' + codec)
fmt_type_re = re.compile(
r'(text|audio|video)/([\w0-9]+); codecs="([^"]+)"')
def update_format_with_type_info(fmt, yt_fmt):
# 'type' for invidious api format
mime_type = multi_get(yt_fmt, 'mimeType', 'type')
if mime_type is None:
return
match = re.fullmatch(fmt_type_re, mime_type)
if match is None:
print('Warning: Could not read mimetype', mime_type)
return
type, fmt['ext'], codecs = match.groups()
codecs = codecs.split(', ')
for codec in codecs:
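Loosening the codecs group to `([^"]+)` lets the pattern accept codec strings the old character class rejected, and an unmatched mime type now produces a warning instead of an exception. A standalone check of the new pattern against typical mime types:

```python
import re

fmt_type_re = re.compile(
    r'(text|audio|video)/([\w0-9]+); codecs="([^"]+)"')

# single-codec audio format
m = fmt_type_re.fullmatch('audio/mp4; codecs="mp4a.40.2"')
print(m.groups())  # ('audio', 'mp4', 'mp4a.40.2')

# multi-codec strings still split the same way afterwards
m = fmt_type_re.fullmatch('video/webm; codecs="vp9, opus"')
print(m.group(3).split(', '))  # ['vp9', 'opus']
```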
@@ -411,6 +472,13 @@ def _extract_formats(info, player_response):
for yt_fmt in yt_formats:
itag = yt_fmt.get('itag')
# Translated audio track
# Example: https://www.youtube.com/watch?v=gF9kkB0UWYQ
# Only get the original language for now so a foreign
# translation will not be picked just because it comes first
if deep_get(yt_fmt, 'audioTrack', 'audioIsDefault') is False:
continue
fmt = {}
fmt['itag'] = itag
fmt['ext'] = None
@@ -562,6 +630,25 @@ def extract_watch_info(polymer_json):
info['translation_languages'] = []
captions_info = player_response.get('captions', {})
info['_captions_base_url'] = normalize_url(deep_get(captions_info, 'playerCaptionsRenderer', 'baseUrl'))
# Sometimes the above playerCaptionsRender is randomly missing
# Extract base_url from one of the captions by removing lang specifiers
if not info['_captions_base_url']:
base_url = normalize_url(deep_get(
captions_info,
'playerCaptionsTracklistRenderer',
'captionTracks',
0,
'baseUrl'
))
if base_url:
url_parts = urllib.parse.urlparse(base_url)
qs = urllib.parse.parse_qs(url_parts.query)
for key in ('tlang', 'lang', 'name', 'kind', 'fmt'):
if key in qs:
del qs[key]
base_url = urllib.parse.urlunparse(url_parts._replace(
query=urllib.parse.urlencode(qs, doseq=True)))
info['_captions_base_url'] = base_url
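The fallback above rebuilds a language-neutral caption base URL by dropping the track-specific query parameters from any one caption track's URL. The same urllib steps in isolation (the URL is made up for illustration):

```python
import urllib.parse

def strip_caption_params(base_url):
    # remove per-track selectors so &lang=/&fmt= can be re-appended later
    parts = urllib.parse.urlparse(base_url)
    qs = urllib.parse.parse_qs(parts.query)
    for key in ('tlang', 'lang', 'name', 'kind', 'fmt'):
        qs.pop(key, None)
    return urllib.parse.urlunparse(
        parts._replace(query=urllib.parse.urlencode(qs, doseq=True)))

url = 'https://example.com/api/timedtext?v=abc&lang=en&kind=asr&fmt=srv3'
print(strip_caption_params(url))
# https://example.com/api/timedtext?v=abc
```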
for caption_track in deep_get(captions_info, 'playerCaptionsTracklistRenderer', 'captionTracks', default=()):
lang_code = caption_track.get('languageCode')
if not lang_code:
@@ -651,6 +738,8 @@ def extract_watch_info(polymer_json):
# other stuff
info['author_url'] = 'https://www.youtube.com/channel/' + info['author_id'] if info['author_id'] else None
info['storyboard_spec_url'] = deep_get(player_response, 'storyboards', 'playerStoryboardSpecRenderer', 'spec')
return info
single_char_codes = {
@@ -730,10 +819,15 @@ def extract_watch_info_from_html(watch_html):
return extract_watch_info(fake_polymer_json)
def captions_available(info):
return bool(info['_captions_base_url'])
def get_caption_url(info, language, format, automatic=False, translation_language=None):
'''Gets the url for captions with the given language and format. If automatic is True, get the automatic captions for that language. If translation_language is given, translate the captions from `language` to `translation_language`. If automatic is true and translation_language is given, the automatic captions will be translated.'''
url = info['_captions_base_url']
if not url:
return None
url += '&lang=' + language
url += '&fmt=' + format
if automatic:
@@ -745,7 +839,7 @@ def get_caption_url(info, language, format, automatic=False, translation_languag
url += '&tlang=' + translation_language
return url
def update_with_new_urls(info, player_response):
'''Inserts urls from player_response json'''
ERROR_PREFIX = 'Error getting missing player or bypassing age-restriction: '