I already host multiple services via Caddy as my reverse proxy. With Jellyfin, I am worried about authentication. How do you secure it?

  • Dr. Moose

    Tailscale is awesome. Alternatively, if you’re more technically inclined, you can roll your own “Tailscale” with plain WireGuard, and all you need is to get a static IP for your home network. WireGuard will always be safer than exposing each individual service.
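
    A minimal sketch of what the roll-your-own WireGuard route looks like (keys, addresses and the DNS name below are placeholders; generate real keys with wg genkey):

        # /etc/wireguard/wg0.conf on the home server
        [Interface]
        Address    = 10.8.0.1/24
        ListenPort = 51820
        PrivateKey = <server-private-key>

        [Peer]
        # your phone/laptop
        PublicKey  = <client-public-key>
        AllowedIPs = 10.8.0.2/32

        # /etc/wireguard/home.conf on the client: tunnel the VPN range plus the home LAN
        [Interface]
        Address    = 10.8.0.2/24
        PrivateKey = <client-private-key>

        [Peer]
        PublicKey           = <server-public-key>
        Endpoint            = home.example.com:51820
        AllowedIPs          = 10.8.0.0/24, 192.168.1.0/24
        PersistentKeepalive = 25

    Forward UDP 51820 on your router to the server and you never have to expose Jellyfin itself.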

    • DefederateLemmyMl

      all you need is to get a static IP for your home network

      Don’t even need a static IP. Dyndns is enough.
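
      For example, a cron entry that hits your DNS provider’s update endpoint every few minutes (the URL and token here are made up; every provider has its own API, or you can use something like ddclient):

          # /etc/cron.d/ddns -- hypothetical update URL, substitute your provider's
          */5 * * * * root curl -fsS "https://dyndns.example.net/update?hostname=home.example.com&token=SECRET" >/dev/null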

    • irmadlad

      Love tailscale. The only issue I had with it is making it play nice with my local, daily driver VPN. Got it worked out tho. So, now everything is jippity jippity.

  • DefederateLemmyMl

    What I used to do was: I put Jellyfin behind an nginx reverse proxy, on a separate vhost (so on a unique domain). Then I added basic authentication (a htpasswd file) with an unguessable password on the whole domain. Then I added geoip firewall rules so that port 443 was only reachable from the country I was in. I live in a small country, so this significantly limits exposure.
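
    Roughly what that looked like (a sketch assuming nginx with the auth_basic module; cert paths are placeholders, and the geoip part lived in the firewall rather than in nginx):

        # create the credentials file (htpasswd comes from apache2-utils)
        htpasswd -c /etc/nginx/.htpasswd someuser

        # vhost sketch: basic auth on the whole domain, then proxy to Jellyfin
        server {
            listen 443 ssl;
            server_name jellyfin.example.com;

            ssl_certificate     /etc/letsencrypt/live/jellyfin.example.com/fullchain.pem;
            ssl_certificate_key /etc/letsencrypt/live/jellyfin.example.com/privkey.pem;

            auth_basic           "restricted";
            auth_basic_user_file /etc/nginx/.htpasswd;

            location / {
                proxy_pass http://127.0.0.1:8096;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }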

    Downside of this approach: basic auth is annoying. The jellyfin client doesn’t like it … so I had to use a browser to stream.

    Nowadays, I put all my services behind a wireguard VPN and I expose nothing else. Only issue I’ve had is when I was on vacation in a bnb and they used the same IP range as my home network :-|

    • exu

      I think that breaks most clients

      • @Svinhufvud@sopuli.xyz

        Yes, it breaks native login, but you can authenticate with Authentik on your phone, for example, and use Quick Connect to authorize non-browser sessions with it.

        • λλλOP

          Clients are built to speak directly to the Jellyfin API. If you put an auth service in front, the client won’t even prompt you to authenticate with it.

  • @Kusimulkku@lemm.ee

    I’ve put it behind WireGuard since only my wife and I use it. Otherwise I’d just use Caddy or another such reverse proxy that does HTTPS, and keep Jellyfin and Caddy up to date.
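
    With Caddy the reverse proxy part really is just a couple of lines (a sketch, assuming Jellyfin on its default port 8096 on the same host; Caddy fetches and renews the certificate itself):

        # Caddyfile
        jellyfin.example.com {
            reverse_proxy 127.0.0.1:8096
        }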

  • @jagged_circle@feddit.nl

    I have another site on a different port that sits behind basic auth and adds the IP to a short ipset whitelist.

    So first I have to auth into that site with basic auth, then I load jellyfin on the other port.
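
    The firewall half of that can be as simple as this (a sketch with made-up names and a placeholder IP; the basic-auth site’s backend runs the ipset add for whichever client just logged in):

        # one-time setup: entries expire after an hour, Jellyfin's port is closed by default
        ipset create jf_allow hash:ip timeout 3600
        iptables -A INPUT -p tcp --dport 8096 -m set --match-set jf_allow src -j ACCEPT
        iptables -A INPUT -p tcp --dport 8096 -j DROP

        # run by the auth site after a successful basic-auth login
        ipset add jf_allow 203.0.113.7 timeout 3600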

  • @Batman@lemmy.world

    I am using Tailscale, but I went a little further to let my family log in with their Gmail (they will not make a new account for 1 million dollars). The split:

    Exposed via Tailscale Funnel: Jellyfin, Keycloak (adminless)

    Private tailnet only: Keycloak admin, Postgres DB

    I hook Jellyfin up to the adminless Keycloak using the SSO plugin, and hook Keycloak up (through the private admin instance) to use Google as an identity provider with a private app.
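
    A rough sketch of the Keycloak + Postgres piece as a compose file (all names, tags and passwords are placeholders; the Jellyfin SSO plugin then gets pointed at this Keycloak as its OIDC provider):

        # docker-compose.yml (sketch)
        services:
          keycloak:
            image: quay.io/keycloak/keycloak:24.0
            # start-dev for brevity; a real deployment uses "start" with proper hostname/TLS settings
            command: start-dev
            environment:
              KC_DB: postgres
              KC_DB_URL: jdbc:postgresql://db:5432/keycloak
              KC_DB_USERNAME: keycloak
              KC_DB_PASSWORD: change-me
              KEYCLOAK_ADMIN: admin
              KEYCLOAK_ADMIN_PASSWORD: change-me
            depends_on:
              - db
          db:
            image: postgres:16
            environment:
              POSTGRES_DB: keycloak
              POSTGRES_USER: keycloak
              POSTGRES_PASSWORD: change-me
            volumes:
              - keycloak-db:/var/lib/postgresql/data
        volumes:
          keycloak-db: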

    • λλλOP

      The SSO plugin is good to know about. Does that address any of the security issues someone was talking about earlier?

      • @Batman@lemmy.world

        I’d say it’s nearly as secure as basic authentication. If you restrict deletion to admin users, and use role (or group) based auth to restrict that Jellyfin admin ability to people with strong passwords in Keycloak, I think you are good. The remaining risk is that people could delete your media if an admin user’s Gmail is hacked.

        I will say it’s not as secure as restricting access to a VPN; you could be brute-forced. Frankly, it would be preferable to set up rate limiting, but that was a bridge too far for me.
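
        For what it’s worth, rate limiting just the login endpoint isn’t much config if the proxy is nginx (a sketch; /Users/AuthenticateByName is, as far as I know, the endpoint Jellyfin clients POST credentials to):

            # in the http{} block: ~5 login attempts per minute per IP
            limit_req_zone $binary_remote_addr zone=jf_login:10m rate=5r/m;

            # in the Jellyfin server{} block
            location ~* ^/users/authenticatebyname {
                limit_req zone=jf_login burst=5 nodelay;
                proxy_pass http://127.0.0.1:8096;
            }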

        • @Appoxo@lemmy.dbzer0.com

          I set mine up with Authelia 2FA and restricted media deletion to one user: the administrator.
          All others aren’t allowed to delete. Not even me.

  • @borax7385@lemmy.world

    I use fail2ban to ban IPs that fail to log in, and also IPs that perform common scans against the reverse proxy.
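
    The Jellyfin jail looks roughly like this (a sketch: the failregex is my assumption of the usual “denied” log line, so check it against your own logs, and adjust logpath to wherever your install writes them):

        # /etc/fail2ban/filter.d/jellyfin.conf
        [Definition]
        failregex = ^.*Authentication request for .* has been denied \(IP: "<HOST>"\)\.

        # /etc/fail2ban/jail.d/jellyfin.local
        [jellyfin]
        enabled  = true
        port     = 80,443
        filter   = jellyfin
        logpath  = /var/log/jellyfin/*.log
        maxretry = 5
        bantime  = 86400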

    • NullPointer

      Also have Jellyfin disable the account after a number of failed logins.

    • @Evil_Shrubbery@lemm.ee

      Or WireGuard; depending on where and how they want to implement it, it might be simpler, or better/worse on the hardware.

  • @jagged_circle@feddit.nl

    Kinda hard, because there’s an ongoing bug where putting it behind a reverse proxy with basic auth (the typical easy button for securing any web software on the internet) breaks Jellyfin.

    Best thing is to not expose it at all. Put it on your local net and connect in with a VPN.

  • Gagootron

    I use good ol’ obscurity. My reverse proxy requires that the correct subdomain is used to access any service that I host, and my domain has a wildcard DNS entry. So if you access asdf.example.com you get an error, the same as for directly accessing my IP, but going to jellyfin.example.com works. And since I don’t post my valid URLs anywhere, no web scraper can find them. This filters out 99% of bots, and the rest are handled using Authelia and CrowdSec.
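
    In nginx terms the idea is essentially this (a sketch; a wildcard certificate keeps the catch-all from leaking names, and the paths are placeholders):

        # anything that doesn't match a known server_name gets the error page
        server {
            listen 443 ssl default_server;
            server_name _;
            ssl_certificate     /etc/ssl/wildcard.example.com.crt;
            ssl_certificate_key /etc/ssl/wildcard.example.com.key;
            return 404;
        }

        # only the "secret" names are actually proxied
        server {
            listen 443 ssl;
            server_name jellyfin.example.com;
            ssl_certificate     /etc/ssl/wildcard.example.com.crt;
            ssl_certificate_key /etc/ssl/wildcard.example.com.key;
            location / {
                proxy_pass http://127.0.0.1:8096;
            }
        }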

    • @Nibodhika@lemmy.world

      If you’re using jellyfin as the subdomain, that’s an easily guessable name; if you use random words not related to what’s being hosted, chances are lower, e.g. salmon.example.com. Also, ideally your server should reply with a 200 to all subdomains so scrapers can’t tell valid from invalid ones. And ideally it also sends some random data on each of those so they don’t all look exactly the same. But that’s approaching paranoid levels of security.

    • @andreluis034@bookwormstory.social

      Are you using HTTPS? It’s highly likely that your domains/certificates are being logged for certificate transparency. Unless you’re using wildcard domains, it’s very easy to enumerate your sub-domains.
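
      Easy to check for yourself: the CT logs are public, e.g. via crt.sh (with a wildcard cert, only *.example.com shows up, which is the point):

          # list every hostname that has had a certificate logged for example.com
          curl -s "https://crt.sh/?q=%25.example.com&output=json" | jq -r '.[].name_value' | sort -u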

      • Gagootron

        It seems to me that it works. I don’t get any web scrapers hitting anything but my main domain, and I can’t find any of my subdomains on Google.

        Please tell me how you believe that enumeration works. Maybe I overlooked something…

        • @ocean@lemmy.selfhostcat.com

          My understanding is that scrapers check every domain and subdomain. You’re making it harder, but not impossible. Everything gets scraped.

          It would be better if you also did IP whitelisting, rate limiting to prevent bots, bot detection via Cloudflare or something similar, etc.

    • @sludge@lemmy.ml

      And since I don’t post my valid URLs anywhere, no web scraper can find them

      You would, ah… be surprised. My URLs aren’t published anywhere and I currently have 4 active decisions and over 300 alerts from CrowdSec.

      It’s true none of those threat actors know my valid subdomains, but that doesn’t mean they don’t know I’m there.
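
      If anyone wants to see the same on their own box, crowdsec’s cscli shows it directly:

          sudo cscli decisions list   # IPs currently being blocked
          sudo cscli alerts list      # everything that has tripped a scenario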

      • Gagootron

        Of course I get a bunch of scanners hitting ports 80 and 443. But if they don’t use the correct domain they all end up on an nginx server hosting a static error page. Not much they can do there.

        • DefederateLemmyMl

          This is how I found out Google harvests the URLs I visit through Chrome.

          Got google bots trying to crawl deep links into a domain that I hadn’t published anywhere.

          • @zod000@lemmy.ml

            This is true, and is why I annoyingly have to keep robots.txt on my unpublished domains. Google does honor them for the most part, for now.
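
            For reference, the whole file is just:

                # robots.txt -- ask crawlers to stay out entirely
                User-agent: *
                Disallow: /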

    • @darkknight@discuss.online

      I was thinking of setting this up recently after seeing it on Jim’s Garage. Do you use it for all your external services or just Jellyfin? How does it compare to a fairly robust WAF like BunkerWeb?

      • @sludge@lemmy.ml

        I use it for all of my external services. It’s just WireGuard and Traefik under the hood. I have no familiarity with BunkerWeb, but Pangolin integrates with CrowdSec. Specifically, it comes out of the box with the Traefik bouncer, but it is relatively straightforward to add the CrowdSec firewall bouncer on the host machine, which I have found adequate for my needs.
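
        Roughly what adding the host-level bouncer looks like (a sketch; exact package names and steps vary by distro, and installing from CrowdSec’s own repo usually registers the bouncer for you):

            # on the host running crowdsec (Debian-style, CrowdSec package repo)
            sudo apt install crowdsec-firewall-bouncer-iptables

            # if registering manually: generate an API key and put it in
            # /etc/crowdsec/bouncers/crowdsec-firewall-bouncer.yaml
            sudo cscli bouncers add firewall-bouncer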

  • @geography082@lemm.ee

    My setup is: Proxmox, a restricted LXC running Docker, which runs Jellyfin, with Tailscale Funnel as the reverse proxy and certificate provider. So I don’t care much about Jellyfin security; if it gets hacked or broken, it’s a dead end, and I will just delete the LXC and bring it up again from backups. Also, I don’t think someone will risk or spend much time hacking a Jellyfin server. My strategy is: web services that don’t hold critical personal data are isolated in their own instances, and I don’t rely on anything besides the firewall for their security. I try not to host services with sensitive personal data at all, and if I do, they stay on my local LAN with the needed protections. If I need access from outside my LAN: VPN.
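
    For anyone curious, exposing it that way is basically a one-liner inside the container (the funnel syntax has shifted between Tailscale versions, so check tailscale funnel --help):

        # publish local port 8096 to the internet via Tailscale Funnel, TLS included
        tailscale funnel --bg 8096
        tailscale funnel status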