• XLE@piefed.social · ↑3 ↓4 · 15 days ago

          This offends a lot of Mozilla stans, but Firefox isn’t much better.

          They have similar links to shady people, often the same shady people… That includes two friends of Jeffrey Epstein.

          And Mozilla still engages in discrimination today.

          From the linked document, describing an unneeded round of layoffs:

          People from groups underrepresented in technology, like female leaders and persons of color, were disproportionately impacted by the [Mozilla’s] layoff.

        • XLE@piefed.social · ↑5 ↓12 · 15 days ago

          Mozilla Firefox isn’t much better. They have similar links to shady people, often the same shady people… That includes two friends of Jeffrey Epstein.

          And Mozilla still engages in discrimination today.

          From the linked document, describing an unneeded round of layoffs:

          People from groups underrepresented in technology, like female leaders and persons of color, were disproportionately impacted by the [Mozilla’s] layoff.

          • helpImTrappedOnline@lemmy.world · ↑3 ↓1 · 15 days ago

            Vivaldi seems to block ads well on iOS too.

            Really the best I’ve found is 1Blocker for Safari, but I got that when the lifetime version was only a few dollars. No clue how limiting the free version is. (Imagine subscribing to an ad blocker! But then again, how many people donate to uBlock?)

            • XLE@piefed.social · ↑2 · 15 days ago

              Vivaldi’s ad blocker is really subpar compared to Brave’s or uBO’s. I tried using it for a while; you have to tamper with filter lists (including disabling pre-approved advertisers), and it still fails in areas where Brave doesn’t.

      • Rusty@lemmy.ca · ↑2 · 15 days ago

        There is no Chromium on iOS; all browsers are actually Safari in a trench coat.

      • Barbuzie@piefed.social · ↑16 · 16 days ago

        So which browser would you recommend? It looks like Firefox is the only one not based on Chromium.

        • XLE@piefed.social · ↑21 ↓1 · 16 days ago

          Firefox has some very good forks, including Waterfox (pretty normal) and LibreWolf (pretty privacy-hardened out of the box, and it may require a little Settings-menu tweaking to make it feel normal).

          It’s unfortunate, but at the end of the day you kind of have to bite the bullet and accept that you will be using something downstream of something bad, e.g. Google (Chrome forks) or their money (Firefox is funded not by donations but by them).

          • grue@lemmy.world · ↑19 · 16 days ago

            Chrome forks aren’t just tainted by Google’s money; they’re tainted by Google’s power. Prefer a Firefox-derived browser if you care about web standards.

  • Nihilistic_Mystics@lemmy.world · ↑96 ↓1 · 16 days ago

    Absolutely do not use Brave. Just use Firefox mobile as well; it has uBlock Origin, SponsorBlock, and background play.

    • Imaginary_Stand4909@lemmy.blahaj.zone · ↑3 · edited · 15 days ago

      When did iOS get uBlock Origin and SponsorBlock as extensions for FF?

      Edit: To all the replies, I’m just pointing out that the guy above me is wrong. Yes, Brave is bad. But FF is not a good replacement for Brave if you want ad blocking on iOS, because iOS doesn’t support any browser extensions outside of Safari.

    • XLE@piefed.social · ↑1 ↓4 · edited · 15 days ago

      I have yet to see a reason for not using Brave that wouldn’t also apply to Firefox developer Mozilla. That includes appeals to morality, control from Big Tech, etc.

      If Brave works (and on iOS it’s basically the only option with a reliable ad blocker) then I don’t see a reason to avoid it.

      Would love to see somebody lodge a complaint that doesn’t also apply to Firefox. Any takers?

      • NekuSoul@lemmy.nekusoul.de · ↑2 · edited · 15 days ago

        That includes appeals to morality

        I mean, you say that, and to some degree you’re right, but you do know that the Brave CEO is the same person that brought JavaScript upon us, right?

        /j

  • grue@lemmy.world · ↑35 · 16 days ago

    yt-dlp is great for downloading media you’ve already found (or at least, playlists or creator channels you’ve already found), but you can’t use it for discovering new media. You still need a browser or GUI app like FreeTube or Newpipe for that, and it works better when you’re actually signed in with your Google account so that the recommendation algorithm works and it can keep track of what you watched for you.

    Don’t get me wrong; I would love to limit my interaction with Google to anonymously fetching video URLs. But none of the alternatives sync my watch history between devices or recommend new videos (beyond just new uploads from subscribed channels) to me.
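
    For the channels-you-have-already-found case, downloading really is close to a one-liner. A minimal sketch, assuming yt-dlp is installed; the channel URL and paths are placeholders, and it only prints the command so you can sanity-check it first:

```shell
# Sketch: build a yt-dlp call that fetches the five newest uploads from a
# channel into per-channel folders. The channel URL is a placeholder.
channel="https://www.youtube.com/@SomeChannel/videos"

cmd=(yt-dlp
     --playlist-end 5                              # newest five uploads only
     --download-archive archive.txt                # remember finished IDs
     -o "downloads/%(channel)s/%(title)s.%(ext)s"  # per-channel folders
     "$channel")

echo "Would run: ${cmd[*]}"
# "${cmd[@]}"   # uncomment to actually download
```

    The `--download-archive` flag is what makes repeat runs cheap: anything already fetched is skipped.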

        • grue@lemmy.world · ↑3 · 16 days ago

          Thanks for the suggestion, but that’s not quite it. It basically does the same thing that wrapping yt-dlp in a shell script and a cron job would: it takes a YouTube channel or playlist as input and then automatically downloads it.

        You can tell by this screenshot:

          I’m looking for something that handles the step before that, helping me discover which channels and playlists I want.

        It also doesn’t have anything to do with “syncing” in the way that I’m talking about, which is syncing account metadata between my devices, not syncing video data between Youtube and a local folder.

          What I want is to be able to watch a video in Newpipe on my phone and have it be automatically marked as watched in FreeTube on my desktop. And in my Google account, to the extent that I continue to use it while transitioning away. In fact, if I stop watching a video partway through on one device, I want it to know the timestamp I stopped at so I can pick back up at the same point on another device.

        Basically, I want the same experience I get if both devices are using the official Youtube website or app, but replacing the “report my habits to Google” part with a self-hosted solution.

          • DaMummy@hilariouschaos.com · ↑1 · 15 days ago

            Have you tried GrayJay with sync set up? It might suit your needs once set up properly. Yes, there’s a Linux desktop version and an Android mobile version. No idea about iOS.

  • ZILtoid1991@lemmy.world · ↑20 · 15 days ago

    Any plugin claiming to bring back YouTube dislikes usually does it with some off-site database that can be easily manipulated, since the official API for dislikes has been deprecated completely.

  • HMWYSPlease@lemmy.org · ↑13 · 15 days ago

    I don’t see it here, but the top comment on Reddit for this post was this:

    If you have a VPN with a server in Albania, switch to that, because serving ads during streaming is illegal there. I have yet to test it, but it sounds legit and no one was naysaying it.

  • HeyJoe@lemmy.world · ↑10 · 16 days ago

    I just use uBlock, but even if I didn’t and used all the ones above, that would be, what, 5 minutes of my time, one time? Now you want me to directly download all the videos I may watch, and somehow that’s easier? Yeah, I’m good. No ads for years now, and that’s all I want.

    • Final Remix@lemmy.world · ↑4 · 16 days ago

      yt-dlp doesn’t even work right ever since the bullshit “flagged as grownup material” algorithm started account-restricting and silently hiding videos.

        • Imaginary_Stand4909@lemmy.blahaj.zone · ↑2 · 15 days ago

        I’ve been trying to pirate music for my Navidrome, and the age verification is quite literally making it impossible to download some songs.

          Thankfully, some kind soul (Samiaouuuu@jlai.lu) told me a few days ago about monochrome.tf, which provides files in a better format anyway. So as long as the song is by an artist or band (and not an unpopular game OST 😭), it will probably be on there. I guess it’s built on Tidal.

          • Final Remix@lemmy.world · ↑1 · 15 days ago

          I’d just like to be able to watch flagged videos from RedLetterMedia on my roku without logging into fucking YouTube. Lol

            • FG_3479@lemmy.world · ↑1 · 15 days ago

              Ditch the Roku and get a Google TV box. You can sideload SmartTube on it, which is an open-source, ad-free client.

              • Final Remix@lemmy.world · ↑1 · 15 days ago

              I’ve already got a different frontend installed, but the other instances don’t have the flagged videos, hence the need to somehow download and self-host some of those episodes.

  • w3dd1e@lemmy.zip · ↑9 · 15 days ago

    On iOS you can use Orion instead of Brave, which is shady as hell and owned by a bigot.

  • Maroon@lemmy.world · ↑8 · 14 days ago

    STOP USING YOUTUBE. USE PEERTUBE.

    If you are a content creator, especially a new creator, make PeerTube your default.

    • AHemlocksLie@lemmy.zip · ↑10 ↓2 · 14 days ago

      I’d love to see PeerTube get more use, but the one issue for creators is monetization. I don’t really see a great way for creators to make a decent income through PeerTube. We all hate the ads, but… That’s where a lot of their money comes from. Without a solution to that, creators are never going to embrace it, unfortunately.

        • Renat@szmer.info · ↑4 ↓1 · 14 days ago

          There are “buy me a coffee”-style sites for giving donations to creators. Many YouTube creators don’t get much money from YouTube ads anyway, because they get demonetized frequently.

          • AHemlocksLie@lemmy.zip · ↑3 ↓1 · 14 days ago

          But do things like that actually translate into respectable revenue? I understand that there are technically ways to get paid, but they only matter to creators if they actually fill their pockets.

            • imjustmsk@lemmy.world · ↑1 · 14 days ago

              The problem is that nobody is crazy enough to host this many videos other than Google. Google wants to stay a monopoly in long-form video sharing, and I don’t think Google actually makes much money in return compared to the cost of the petabytes of video files getting uploaded all the time.

              Even after keeping a huge chunk of the money they get from advertisers, I still don’t think it’s that profitable. But so many people use YouTube that they also get to stalk our online activity and do god knows what with all that data.

              • AHemlocksLie@lemmy.zip · ↑2 · 14 days ago

              That’s a big part of what PeerTube tries to address. Yes, the videos still must be hosted somewhere, but PeerTube streams the video as a torrent where the host is the tracker and guaranteed seed while every client streaming the video is a torrent client that shares what it already has with every other active stream to reduce demand on the host. It’s not a perfect solution since the host must act as a guaranteed seeder, but for popular videos actively being streamed by many people at once, it has the potential to massively reduce traffic for those streams.

              For less popular videos that may not have more than one viewer in any given moment, though, there’s likely no real impact. If it got some more development interest, I could see it getting archival clients that behave sort of like an *Arr server for media management, allowing users to save their favorite videos in exchange for acting as an extra seed over some longer term. That’d help, but it’s definitely not a full solution.
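
                As a toy back-of-envelope for the claim above (all numbers invented for illustration, not PeerTube measurements): if each concurrent viewer can fetch, say, half of its pieces from other viewers, the host’s egress halves.

```shell
# Hypothetical load: 100 concurrent viewers of a 500 MB video, with 50%
# of the pieces fetched peer-to-peer instead of from the host.
viewers=100
mb_per_stream=500
peer_share_pct=50   # share of pieces fetched from other viewers

naive_egress=$(( viewers * mb_per_stream ))                              # host serves everyone alone
p2p_egress=$(( viewers * mb_per_stream * (100 - peer_share_pct) / 100 )) # peers cover the rest

echo "Host egress without P2P: ${naive_egress} MB"  # 50000 MB
echo "Host egress with P2P:    ${p2p_egress} MB"    # 25000 MB
```

                The saving only materializes while viewers overlap in time, which is exactly the “popular videos actively being streamed by many people at once” case described above.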

  • Chloé 🥕@lemmy.blahaj.zone · ↑8 · 16 days ago

    on ios you can also use firefox focus, it doesn’t have ads on youtube, but iirc you can’t stay logged in because it doesn’t save cookies (tho that could be a positive depending on how you look at it)

    vivaldi ios also didn’t have ads on youtube, but it’s been a while since i used it so it may have changed and it’s a pretty heavy browser in my experience

    orion also supports firefox/chrome extensions but in my experience its adblocking (even with ublock) isn’t perfect. but again, it’s been a while so maybe it’s better now

    • webghost0101@sopuli.xyz · ↑20 ↓1 · 16 days ago

      It’s an open source tool to download YouTube videos.

      About every mainstream YouTube download program you or your parents have ever used is actually just a wrapper for this.

      Bonus: if you want to learn more about coding, it’s not that hard to make a script that automatically downloads the last video from a list of channels and runs on a schedule. Even AI can do it.
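
      A rough sketch of such a script (the channel URLs are placeholders; it only prints the yt-dlp commands it would run, so you can check the list before swapping the printf for real calls and putting it in cron):

```shell
#!/bin/bash
# Sketch: read a channel list (one URL per line; blanks and '#' comments
# ignored) and build one yt-dlp command per channel that fetches only the
# newest upload. Prints the commands instead of running them.
cat > channels.txt <<'EOF'
# one channel URL per line
https://www.youtube.com/@SomeChannel

https://www.youtube.com/@AnotherChannel
EOF

cmds=()
while IFS= read -r line; do
    line="${line%%#*}"              # strip comments
    line="${line//[[:space:]]/}"    # strip stray whitespace
    [ -z "$line" ] && continue
    # --playlist-items 1 = newest upload only; --download-archive records
    # finished IDs so a scheduled rerun skips them.
    cmds+=("yt-dlp --playlist-items 1 --download-archive archive.txt -o 'downloads/%(channel)s - %(title)s.%(ext)s' $line")
done < channels.txt

printf '%s\n' "${cmds[@]}"
# cron entry (hourly):  0 * * * * /path/to/this_script.sh
```

      The longer scripts further down this thread are essentially this idea with RSS pre-checks, size limits, and skip lists layered on top.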

        • moody@lemmings.world · ↑4 · 16 days ago

          It’s a command line tool. You type in “yt-dlp” followed by the url of a video, and it does the rest.

          It has many other options, but the defaults are good enough for most cases.

            • FG_3479@lemmy.world · ↑1 · 15 days ago

              Use winget install yt-dlp-nightly to install it.

              Then run yt-dlp -f "bestvideo[height<=1080][ext=mp4]+bestaudio[ext=m4a]" "https://youtube.com/watch?v=EXAMPLE" to download a video.

              The file will be in C:\Users\YourUsername unless you use cd to enter a certain folder.

              If yt-dlp stops working, then yt-dlp --update-to nightly should fix it.

          • webghost0101@sopuli.xyz · ↑2 ↓2 · edited · 16 days ago

            There is no single one-stop tutorial for stuff like this, because you could use any scripting language, and which ones you have available may depend on your OS.

            But honestly, any half-decent LLM can generate something that works for your specific case.

            If you really want to avoid using those, here is a simple example for Windows PowerShell.

            
            # yt-dlp Channel Downloader
            # --------------------------
            # Downloads the latest video from each channel in channels.txt
            #
            # Setup:
            #   1. Install yt-dlp:  winget install yt-dlp
            #   2. Install ffmpeg:  winget install ffmpeg
            #   3. Create channels.txt next to this script, one URL per line:
            #        https://www.youtube.com/@SomeChannel
            #        https://www.youtube.com/@AnotherChannel
            #   4. Right-click this file → Run with PowerShell
            
            # Read each line, skip blanks and comments (#)
            foreach ($url in Get-Content ".\channels.txt") {
                $url = $url.Trim()
                if ($url -eq "" -or $url.StartsWith("#")) { continue }
            
                Write-Host "`nDownloading latest from: $url"
            
                yt-dlp --playlist-items 1 --merge-output-format mp4 --no-overwrites `
                    -o "downloads\%(channel)s\%(title)s.%(ext)s" $url
            }
            
            Write-Host "`nDone."
            

            And here is my own Bash script (Linux), which has only gotten bigger with more customization over the years.

            (part 1, part 2 in the next reply)

            #!/bin/bash
            # ============================================================================
            #  yt-dlp Channel Downloader (Bash)
            # ============================================================================
            #
            #  Automatically downloads new videos from a list of YouTube channels.
            #
            #  Features:
            #    - Checks RSS feeds first to avoid unnecessary yt-dlp calls
            #    - Skips livestreams, premieres, shorts, and members-only content
            #    - Two-pass download: tries best quality first, falls back to 720p
            #      if the file exceeds the size limit
            #    - Maintains per-channel archive and skip files so nothing is
            #      re-downloaded or re-checked
            #    - Embeds thumbnails and metadata into the final .mp4
            #    - Logs errors with timestamps
            #
            #  Requirements:
            #    - yt-dlp       (https://github.com/yt-dlp/yt-dlp)
            #    - ffmpeg        (for merging video+audio and thumbnail embedding)
            #    - curl          (for RSS feed fetching)
            #    - A SOCKS5 proxy on 127.0.0.1:40000 (remove --proxy flags if not needed)
            #
            #  Channel list format (Channels.txt):
            #    The file uses a simple key=value block per channel, separated by blank
            #    lines. Each block has four fields:
            #
            #      Cat=Gaming
            #      Name=SomeChannel
            #      VidLimit=5
            #      URL=https://www.youtube.com/channel/UCxxxxxxxxxxxxxxxxxx
            #
            #    Cat       Category label (currently unused in paths, available for sorting)
            #    Name      Short name used for filenames and archive tracking
            #    VidLimit  How many recent videos to consider per run ("ALL" for no limit)
            #    URL       Full YouTube channel URL (must contain the UC... channel ID)
            #
            # ============================================================================
            
            export PATH=$PATH:/usr/local/bin
            
            # --- Configuration ----------------------------------------------------------
            # Change these to match your environment.
            
            SCRIPT_DIR="/path/to/script"           # Folder containing this script and Channels.txt
            ERROR_LOG="$SCRIPT_DIR/download_errors.log"
            DOWNLOAD_DIR="/path/to/downloads"      # Where videos are saved
            MAX_FILESIZE="5G"                      # Max file size before falling back to lower quality
            PROXY="socks5://127.0.0.1:40000"       # SOCKS5 proxy (remove --proxy flags if unused)
            
            # --- End of configuration ---------------------------------------------------
            
            cd "$SCRIPT_DIR"
            
            # ============================================================================
            #  log_error - Append or update an error entry in the error log
            # ============================================================================
            #  If an entry with the same message (ignoring timestamp) already exists,
            #  it replaces it so the log doesn't fill up with duplicates.
            #
            #  Usage: log_error "[2025-01-01 12:00:00] ChannelName - URL: ERROR message"
            
            log_error() {
                local entry="$1"
            
                # Strip the timestamp prefix to get a stable key for deduplication
                local key=$(echo "$entry" | sed 's/^\[[0-9-]* [0-9:]*\] //')
            
                local tmp_log=$(mktemp)
                if [[ -f "$ERROR_LOG" ]]; then
                    grep -vF "$key" "$ERROR_LOG" > "$tmp_log"
                fi
                echo "$entry" >> "$tmp_log"
                mv "$tmp_log" "$ERROR_LOG"
            }
            
            # ============================================================================
            #  Parse Channels.txt
            # ============================================================================
            #  awk reads the key=value blocks and outputs one line per channel:
            #    Category  Name  VidLimit  URL
            #  The while loop then processes each channel.
            
            awk -F'=' '
              /^Cat/ {Cat=$2}
              /^Name/ {Name=$2}
              /^VidLimit/ {VidLimit=$2}
              /^URL/ {URL=$2; print Cat, Name, VidLimit, URL}
            ' "$SCRIPT_DIR/Channels.txt" | while read -r Cat Name VidLimit URL; do
            
                archive_file="$SCRIPT_DIR/DLarchive$Name.txt"   # Tracks successfully downloaded video IDs
                skip_file="$SCRIPT_DIR/DLskip$Name.txt"          # Tracks IDs to permanently ignore
                mkdir -p "$DOWNLOAD_DIR"
            
                # ========================================================================
                #  Step 1: Check the RSS feed for new videos
                # ========================================================================
                #  YouTube provides an RSS feed per channel at a predictable URL.
                #  Checking this is much faster than calling yt-dlp, so we use it
                #  as a quick "anything new?" test.
            
                # Extract the channel ID (starts with UC) from the URL
                channel_id=$(echo "$URL" | grep -oP 'UC[a-zA-Z0-9_-]+')
                rss_url="https://www.youtube.com/feeds/videos.xml?channel_id=${channel_id}"
            
                # Fetch the feed and pull out all video IDs
                new_videos=$(curl -s --proxy "$PROXY" "$rss_url" | \
                    grep -oP '(?<=<yt:videoId>)[^<]+')
            
                if [[ -z "$new_videos" ]]; then
                    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] RSS fetch failed or empty, skipping"
                    continue
                fi
            
                # Compare RSS video IDs against archive and skip files.
                # If every ID is already known, there's nothing to do.
                has_new=false
                while IFS= read -r vid_id; do
                    in_archive=false
                    in_skip=false
            
                    [[ -f "$archive_file" ]] && grep -q "youtube $vid_id" "$archive_file" && in_archive=true
                    [[ -f "$skip_file" ]]    && grep -q "youtube $vid_id" "$skip_file"    && in_skip=true
            
                    if [[ "$in_archive" == false && "$in_skip" == false ]]; then
                        has_new=true
                        break
                    fi
                done <<< "$new_videos"
            
                if [[ "$has_new" == false ]]; then
                    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] No new videos, skipping"
                    continue
                fi
            
                echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] New videos found, processing"
            
                # ========================================================================
                #  Step 2: Build shared option arrays
                # ========================================================================
            
                # Playlist limit: restrict how many recent videos yt-dlp considers
                playlist_limit=()
                if [[ $VidLimit != "ALL" ]]; then
                    playlist_limit=(--playlist-end "$VidLimit")
                fi
            
                # Options used during --simulate (dry-run) passes
                sim_base=(
                    --proxy "$PROXY"
                    --extractor-args "youtube:player_client=default,-tv_simply"
                    --simulate
                    "${playlist_limit[@]}"
                )
            
                # Options used during actual downloads
                common_opts=(
                    --proxy "$PROXY"
                    --download-archive "$archive_file"
                    --extractor-args "youtube:player_client=default,-tv_simply"
                    --write-thumbnail
                    --convert-thumbnails jpg
                    --add-metadata
                    --embed-thumbnail
                    --merge-output-format mp4
                    --output "$DOWNLOAD_DIR/${Name} - %(title)s.%(ext)s"
                    "${playlist_limit[@]}"
                )
            
                # ========================================================================
                #  Step 3: Pre-pass — identify and skip filtered content
                # ========================================================================
                #  Runs yt-dlp in simulate mode twice:
                #    1. Get ALL video IDs in the playlist window
                #    2. Get only IDs that pass the match-filter (no live, no shorts)
                #  Any ID in (1) but not in (2) gets added to the skip file so future
                #  runs don't waste time on them.
            
                echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pre-pass: identifying filtered videos (live/shorts)"
            
                all_ids=$(yt-dlp "${sim_base[@]}" --print "%(id)s" "$URL" 2>/dev/null)
                passing_ids=$(yt-dlp "${sim_base[@]}" \
                    --match-filter "!is_live & !was_live & original_url!*=/shorts/" \
                    --print "%(id)s" "$URL" 2>/dev/null)
            
                while IFS= read -r vid_id; do
                    [[ -z "$vid_id" ]] && continue
                    grep -q "youtube $vid_id" "$archive_file" 2>/dev/null && continue
                    grep -q "youtube $vid_id" "$skip_file"    2>/dev/null && continue
                    if ! echo "$passing_ids" | grep -q "^${vid_id}$"; then
                        echo "youtube $vid_id" >> "$skip_file"
                        echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Added $vid_id to skip file (live/short/filtered)"
                    fi
                done <<< "$all_ids"
            
            
              • webghost0101@sopuli.xyz · ↑1 · 15 days ago

                Absolutely fair; they are quite a major source in the accelerated enshittification of modern life. That’s why I provided examples, so people can still learn without one.

                But it would also be ignorant of me not to recognise how much I managed to learn about Linux and open source from these same tools in the last few years. The traditional ways of learning things were never compatible with my personal neurology.

                Without LLMs, I’d probably still be stuck on Windows.

            • webghost0101@sopuli.xyz · ↑1 · 16 days ago

              part 2

                  # ========================================================================
                  #  Step 4 (Pass 1): Download at best quality, with a size cap
                  # ========================================================================
                  #  Tries: best AVC1 video + best M4A audio → merged into .mp4
                  #  If a video exceeds MAX_FILESIZE, its ID is saved for the fallback pass.
                  #  Members-only and premiere errors cause the video to be permanently skipped.
               
                  echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pass 1: best quality under $MAX_FILESIZE"
               
                  yt-dlp \
                      "${common_opts[@]}" \
                      --match-filter "!is_live & !was_live & original_url!*=/shorts/" \
                      --max-filesize "$MAX_FILESIZE" \
                      --format "bestvideo[vcodec^=avc1]+bestaudio[ext=m4a]/best[ext=mp4]/best" \
                      "$URL" 2>&1 | while IFS= read -r line; do
                          echo "$line"
                          if echo "$line" | grep -q "^ERROR:"; then
               
                              # Too large → save ID for pass 2
                              if echo "$line" | grep -qi "larger than max-filesize"; then
                                  vid_id=$(echo "$line" | grep -oP '(?<=\[youtube\] )[a-zA-Z0-9_-]{11}')
                                  [[ -n "$vid_id" ]] && echo "$vid_id" >> "$SCRIPT_DIR/.size_failed_$Name"
               
                              # Permanently unavailable → skip forever
                              elif echo "$line" | grep -qE "members only|Join this channel|This live event|premiere"; then
                                  vid_id=$(echo "$line" | grep -oP '(?<=\[youtube\] )[a-zA-Z0-9_-]{11}')
                                  if [[ -n "$vid_id" ]]; then
                                      if ! grep -q "youtube $vid_id" "$skip_file" 2>/dev/null; then
                                          echo "youtube $vid_id" >> "$skip_file"
                                          echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Added $vid_id to skip file (permanent failure)"
                                      fi
                                  fi
                              fi
               
                              log_error "[$(date '+%Y-%m-%d %H:%M:%S')] ${Name} - ${URL}: $line"
                          fi
                      done
               
                  # ========================================================================
                  #  Step 5 (Pass 2): Retry oversized videos at lower quality
                  # ========================================================================
                  #  For any video that exceeded MAX_FILESIZE in pass 1, retry at 720p max.
                  #  If it's STILL too large, log the actual size and skip permanently.
               
                  if [[ -f "$SCRIPT_DIR/.size_failed_$Name" ]]; then
                      echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pass 2: lower quality fallback for oversized videos"
               
                      while IFS= read -r vid_id; do
                          [[ -z "$vid_id" ]] && continue
                          echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Retrying $vid_id at 720p max"
               
                          yt-dlp \
                              --proxy "$PROXY" \
                              --download-archive "$archive_file" \
                               --extractor-args "youtube:player_client=default,-tv_simply" \
                              --write-thumbnail \
                              --convert-thumbnails jpg \
                              --add-metadata \
                              --embed-thumbnail \
                              --merge-output-format mp4 \
                              --max-filesize "$MAX_FILESIZE" \
                              --format "bestvideo[vcodec^=avc1][height<=720]+bestaudio[ext=m4a]/bestvideo[height<=720]+bestaudio[ext=m4a]/best[height<=720]/worst" \
                              --output "$DOWNLOAD_DIR/${Name} - %(title)s.%(ext)s" \
                              "https://www.youtube.com/watch?v=%24vid_id" 2>&1 | while IFS= read -r line; do
                                  echo "$line"
                                  if echo "$line" | grep -q "^ERROR:"; then
               
                                      # Still too large even at 720p — give up and log the size
                                      if echo "$line" | grep -qi "larger than max-filesize"; then
                                          filesize_info=$(yt-dlp \
                                              --proxy "$PROXY" \
                                               --extractor-args "youtube:player_client=default,-tv_simply" \
                                              --simulate \
                                              --print "%(filesize,filesize_approx)s" \
                                              "https://www.youtube.com/watch?v=%24vid_id" 2>/dev/null)
                                          if [[ "$filesize_info" =~ ^[0-9]+$ ]]; then
                                              filesize_gb=$(echo "scale=1; $filesize_info / 1073741824" | bc)
                                              size_str="${filesize_gb}GB"
                                          else
                                              size_str="unknown size"
                                          fi
                                          if ! grep -q "youtube $vid_id" "$skip_file" 2>/dev/null; then
                                              echo "youtube $vid_id" >> "$skip_file"
                                              log_error "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Skipped $vid_id - still over $MAX_FILESIZE at 720p ($size_str)"
                                          fi
                                      fi
               
                                      log_error "[$(date '+%Y-%m-%d %H:%M:%S')] ${Name} - ${URL}: $line"
                                  fi
                              done
                      done < "$SCRIPT_DIR/.size_failed_$Name"
               
                      rm -f "$SCRIPT_DIR/.size_failed_$Name"
                  else
                      echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pass 2: no oversized videos to retry"
                  fi
               
                  # Clean up any stray .description files yt-dlp may have left behind
                  find "$DOWNLOAD_DIR" -name "${Name} - *.description" -type f -delete
               
              done
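
               The byte-to-GB conversion used when logging a permanently oversized video can be pulled out as a standalone helper. This is a hypothetical refactor (the function name `bytes_to_gb` is not in the script); it mirrors the pass-2 logic and requires `bc`:

               ```shell
               #!/usr/bin/env bash
               # Hypothetical helper mirroring the pass-2 size logging: convert a raw
               # byte count (what yt-dlp prints for %(filesize,filesize_approx)s)
               # into the "X.YGB" string used in the skip-file log line.
               bytes_to_gb() {
                   local bytes="$1"
                   if [[ "$bytes" =~ ^[0-9]+$ ]]; then
                       # 1073741824 = 1 GiB; scale=1 keeps one decimal place
                       echo "$(echo "scale=1; $bytes / 1073741824" | bc)GB"
                   else
                       # yt-dlp prints "NA" when no size is available
                       echo "unknown size"
                   fi
               }

               bytes_to_gb 5368709120   # 5 GiB -> 5.0GB
               bytes_to_gb NA           # non-numeric -> unknown size
               ```

               Keeping the fallback for non-numeric input matters because `%(filesize,filesize_approx)s` prints `NA` when neither field is known.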
              
      • Mothra@mander.xyz
        link
        fedilink
        English
        arrow-up
        1
        ·
        16 days ago

        I see. I am not a programmer, not by a long shot. More on the grandma side of things instead. So please forgive me if I’m saying something very stupid - I’m just ignorant.

        I’ve been happy with NewPipe so far; 95% of my video watching happens on my phone. The only thing NewPipe can’t do is access age-restricted videos. If this tool can do that on my phone, then I’m definitely interested.

        • webghost0101@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          1
          ·
          15 days ago

          Yes and no.

          Yes because I am doing it; no because it’s just one part of the process.

          NewPipe is cool, but it doesn’t run on my phone, so I needed something else.

          You may have heard of Plex (“run your own Netflix”); I much prefer its competitor Jellyfin, but that doesn’t matter here.

          Point is, I download my YouTube videos on a schedule/script straight into Jellyfin’s library folder, which I can then log into from any type of device.
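
          The “schedule” part can be as simple as a cron entry. A sketch with placeholder paths (assuming the download script above is saved as `yt-archive.sh` and its `DOWNLOAD_DIR` points inside the Jellyfin library):

          ```shell
          # Hypothetical crontab entry (edit with `crontab -e`):
          # run the downloader nightly at 03:00, appending all output to a log.
          # Both paths are placeholders, not from the script above.
          0 3 * * * /home/user/scripts/yt-archive.sh >> /home/user/scripts/yt-archive.log 2>&1
          ```

          Jellyfin then picks the new files up on its next library scan, or immediately if real-time monitoring is enabled for that library.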