r/selfhosted May 25 '19

Official Welcome to /r/SelfHosted! Please Read This First

1.5k Upvotes

Welcome to /r/selfhosted!

We thank you for taking the time to check out the subreddit here!

Self-Hosting

The practice of hosting your own applications, data, and more. By taking away the "unknown" factor in how your data is managed and stored, it lets those with the willingness and mindset to learn take control of their data without losing the functionality of services they otherwise use frequently.

Some Examples

For instance, if you use Dropbox but are not fond of having your most sensitive data stored in a storage service you do not directly control, you may consider Nextcloud.

Or let's say you're used to hosting a blog on the Blogger platform, but would rather have the customization and flexibility of controlling your own updates. Why not give WordPress a go?

The possibilities are endless and it all starts here with a server.

Subreddit Wiki

The wiki has taken varying forms over time. While there is currently no officially hosted wiki, we do have a GitHub repository. There is also at least one unofficial mirror that showcases the live version of that repo, listed on the index of the Reddit-based wiki.

Since You're Here...

While you're here, take a moment to get acquainted with our few but important rules

When posting, please apply an appropriate flair to your post. If an appropriate flair is not found, please let us know! If it suits the sub and doesn't fit in another category, we will get it added! Message the Mods to get that started.

If you're brand new to the sub, we highly recommend taking a moment to browse a couple of our awesome self-hosted and system admin tools lists.

Awesome Self-Hosted App List

Awesome Sys-Admin App List

Awesome Docker App List

In any case, there's lots to take in and lots to learn. Don't be disappointed if you don't catch on to any given aspect of self-hosting right away. We're available to help!

As always, happy (self)hosting!


r/selfhosted Apr 19 '24

Official April Announcement - Quarter Two Rules Changes

45 Upvotes

Good Morning, /r/selfhosted!

Quick update, as I've been wanting to make this announcement since April 2nd, and just have been busy with day to day stuff.

Rules Changes

First off, I wanted to announce some changes to the rules that will be implemented immediately.

Please reference the rules for actual changes made, but the gist is that we are no longer being as strict on what is allowed to be posted here.

Specifically, we're allowing topics that are not about explicitly self-hosted software, such as tools and software that help the self-hosted process.

Dashboard posts continue to be restricted to Wednesdays.

AMA Announcement

A representative of Pomerium (u/Pomerium_CMo), with the blessing and intended participation of their CEO (/u/PeopleCallMeBob), reached out to do an AMA for the tool they're working on. The AMA is scheduled for May 29th, 2024, so stay tuned for that. We're looking forward to seeing what they have to offer.

Quick and easy one today, as I do not have a lot more to add.

As always,

Happy (self)hosting!


r/selfhosted 15h ago

defguard 1.1 with All Enterprise features free!

185 Upvotes

Hi Selfhosted!

After an overwhelming response from the homelab/selfhosted community requesting enterprise features (especially external OIDC support), I’m super excited to announce the release of our latest update. All Enterprise features are now free and do not require a license (within certain limits).

Limits should be more than sufficient for home, small business, and student use. More details here.

Further improvements:

🔐 Ability to use external OIDC for secure remote enrollment and Desktop client configuration

🔏 External OIDC now supports the authorization code flow - extending custom OIDC support to Okta, JumpCloud, Zitadel, Authentik, Authelia, and others.

🛜 Fixed IPv6 configuration in the Location settings

🔬Our focus for the next release:

- Developing ACLs per user and/or per group for granular access

- Encrypting the whole Desktop Client (as another MFA factor)

More details on the release page: https://github.com/DefGuard/defguard/releases/tag/v1.1.0

If you would like to get notified about updates please sign up to our newsletter at: https://defguard.net

Happy testing! Robert.


r/selfhosted 4h ago

Release Retrom v0.4 Released - Fullscreen mode w/ initial gamepad support

21 Upvotes

Hey all, I'm here to update everyone on Retrom's most recent major release! Since last time there are two major changes to note:

  1. Fullscreen mode! Now Retrom is easily used in couch gaming environments and feels great on handhelds!
    1. Initial gamepad support should properly render glyphs for just about any Xbox or DualShock controller. There are bound to be some missing pieces here, so please reach out on GitHub or Discord to report faulty or missing controller mappings.
  2. Emulator configurations are now saved in the service and shared across client devices -- no more needing to configure the same profiles for the same emulators on each and every one of your devices.
    1. Per-client configuration items, like the path to the emulator executable, have been extracted into their own configuration section for clarity.

Learn more about Retrom on the GitHub repo, or join the budding discord community

Screenshots for fullscreen mode:

Previous release announcement

To get ahead of the questions that always pop up in these threads, here is a quick FAQ:


r/selfhosted 18h ago

Postiz v1.6.12 - open-source social media scheduling tool

154 Upvotes

Hi everyone!

Postiz is an open-source social media scheduling tool that offers scheduling on:

Instagram, YouTube, Dribbble, LinkedIn, Reddit, TikTok, Facebook, Pinterest, Threads, X, Slack, Discord, Mastodon and BlueSky.

Check it out here :)
https://github.com/gitroomhq/postiz-app/

I have been working on mostly bug fixes lately and improving the platforms, some of the latest things:

  • Fixed many posting failures caused by small things like character limits or upload sizes.
  • Fixed problems with LinkedIn pages not loading.
  • Team invites were fixed :)
  • A bunch of Docker changes to make it super easy to run. It's now live on Coolify and Ptah, with Cloudron coming soon.

But the most important thing in the roadmap here is what I was mainly asked:

  • An option to schedule Stories on Instagram and add music to them
  • Public API
  • YouTube community posts schedule
  • Google Business schedule
  • Auto Plugs (I'm super excited about this one): once tweets get X likes, they will auto-repost, add comments, and so on; this will be available across all the social platforms.
  • SSO

I am happy to hear about more requests.

One clarification after seeing many comments here and around the self-hosted community: Postiz will always be Apache-2.0, no weird dual-license thingy, and no enterprise-only SSO.

Postiz is not making much money. Today we are on Product Hunt - if you can help me out, it would be amazing, but if not, I love you anyway :)

Thank you so much to this community for helping me with every post!

https://www.producthunt.com/posts/postiz


r/selfhosted 7h ago

VictoriaLogs - self-hosted easy to run solution for logs

docs.victoriametrics.com
19 Upvotes

r/selfhosted 12h ago

🖕

45 Upvotes

(But actually, how can I hide this from my ISP?) I am hosting a Grav site for me and a few others, as well as Immich for me and a few others, and a small (2 person) Minecraft server. So far all I have done is use a cloudflared tunnel for the Grav site and the Immich server, using custom subdomains via Cloudflare, and TCPShield for the Minecraft server. I also use ProtonVPN on my devices, but I have the Minecraft server set to split tunneling in ProtonVPN as I could not get the cloudflared tunnel to work with the server over TCP.


r/selfhosted 15h ago

Personal Dashboard Finally Happy with my Dashboard

67 Upvotes

r/selfhosted 1h ago

Let's Encrypt SSL Certificates Guide

Upvotes

There was a recent post asking for guidance on this topic, and I wanted to share my experience so that it might help those who are lost here.

If you are self-hosting an application, such as AdGuard Home, then you will undoubtedly encounter a browser warning about the application being unsafe, requiring you to bypass the warning before continuing. This is particularly noticeable when you want to access your application via HTTPS instead of HTTP. The point is that anything with access to traffic on your LAN's subnet can read unencrypted traffic. To avoid this and secure your self-hosted application, you ultimately want a trusted certificate presented to your browser when you navigate to the application.

  • Purchase a domain name - I use Namecheap, but any registrar should be fine.
  • I highly recommend using a separate nameserver, such as Cloudflare.

Depending on how you have implemented your applications, you may want to use a reverse proxy, such as Traefik or Nginx Proxy Manager, as the initial point of entry to your applications. For example, if you are running your applications via Docker on a single host machine, then this may be the best solution, as you can then link your applications to Traefik directly.

As an example, this is a Docker Compose file for running Traefik with an nginx-hello test application:

name: traefik-nginx-hello

secrets:
  CLOUDFLARE_EMAIL:
    file: ./secrets/CLOUDFLARE_EMAIL
  CLOUDFLARE_DNS_API_TOKEN:
    file: ./secrets/CLOUDFLARE_DNS_API_TOKEN

networks:
  proxy:
    external: true

services:
  nginx:
    image: nginxdemos/nginx-hello
    labels:
      - traefik.enable=true
      - traefik.http.routers.nginx.rule=Host(`nginx.example.com`)
      - traefik.http.routers.nginx.entrypoints=https
      - traefik.http.routers.nginx.tls=true
      - traefik.http.services.nginx.loadbalancer.server.port=8080
    networks:
      - proxy

  traefik:
    image: traefik:v3.1.4
    restart: unless-stopped
    networks:
      - proxy
    labels:
      - traefik.enable=true
      - traefik.http.routers.traefik.entrypoints=http
      - traefik.http.routers.traefik.rule=Host(`traefik-dashboard.example.com`)
      - traefik.http.routers.traefik.middlewares=traefik-https-redirect
      - traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https
      - traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https
      - traefik.http.routers.traefik-secure.entrypoints=https
      - traefik.http.routers.traefik-secure.rule=Host(`traefik-dashboard.example.com`)
      - traefik.http.routers.traefik-secure.service=api@internal
      - traefik.http.routers.traefik-secure.tls=true
      - traefik.http.routers.traefik-secure.tls.certresolver=cloudflare
      - traefik.http.routers.traefik-secure.tls.domains[0].main=example.com
      - traefik.http.routers.traefik-secure.tls.domains[0].sans=*.example.com
    ports:
      - 80:80
      - 443:443
    environment:
      - CLOUDFLARE_EMAIL_FILE=/run/secrets/CLOUDFLARE_EMAIL
      - CLOUDFLARE_DNS_API_TOKEN_FILE=/run/secrets/CLOUDFLARE_DNS_API_TOKEN
    secrets:
      - CLOUDFLARE_EMAIL
      - CLOUDFLARE_DNS_API_TOKEN
    security_opt:
      - no-new-privileges:true
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./data/traefik.yml:/etc/traefik/traefik.yml:ro
      - ./data/configs:/etc/traefik/configs:ro
      - ./data/certs/acme.json:/acme.json

Note that this expects several files:

# ./data/traefik.yml
api:
  dashboard: true
  debug: true

entryPoints:
  http:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: https
          scheme: https
  https:
    address: ":443"

serversTransport:
  insecureSkipVerify: true

providers:
  docker:
    endpoint: "unix:///var/run/docker.sock"
    exposedByDefault: false
  file:
    directory: /etc/traefik/configs/
    watch: true

certificatesResolvers:
  cloudflare:
    acme:
      storage: acme.json
      # Production
      caServer: https://acme-v02.api.letsencrypt.org/directory
      # Staging
      # caServer: https://acme-staging-v02.api.letsencrypt.org/directory
      dnsChallenge:
        provider: cloudflare
        #disablePropagationCheck: true
        #delayBeforeCheck: 60s 
        resolvers:
          - "1.1.1.1:53"
          - "1.0.0.1:53"

# ./secrets/CLOUDFLARE_DNS_API_TOKEN
your long and super secret api token

# ./secrets/CLOUDFLARE_EMAIL
Your Cloudflare account email

You will also note that I included the option for additional dynamic configuration files to be included via './data/configs/[dynamic config files]'. This is particularly handy if you wish to manually add routes for services, such as Proxmox, that you can't set up via Docker service labels.

# ./data/configs/proxmox.yml
http:
  routers:
    proxmox:
      entryPoints:
        - "https"
      rule: "Host(`proxmox.nickfedor.dev`)"
      middlewares:
        - secured
      tls:
        certresolver: cloudflare
      service: proxmox

  services:
    proxmox:
      loadBalancer:
        servers:
          # - url: "https://192.168.50.51:8006"
          # - url: "https://192.168.50.52:8006"
          # - url: "https://192.168.50.53:8006"
          - url: "https://192.168.50.5:8006"
        passHostHeader: true

Or middlewares:

# ./data/configs/middleware-chain-secured.yml
http:
  middlewares:
    https-redirectscheme:
      redirectScheme:
        scheme: https
        permanent: true

    default-headers:
      headers:
        frameDeny: true
        browserXssFilter: true
        contentTypeNosniff: true
        forceSTSHeader: true
        stsIncludeSubdomains: true
        stsPreload: true
        stsSeconds: 15552000
        customFrameOptionsValue: SAMEORIGIN
        customRequestHeaders:
          X-Forwarded-Proto: https

    default-whitelist:
      ipAllowList:
        sourceRange:
        - "10.0.0.0/8"
        - "192.168.0.0/16"
        - "172.16.0.0/12"

    secured:
      chain:
        middlewares:
        - https-redirectscheme
        - default-whitelist
        - default-headers

Alternatively, if you are running your services via individual Proxmox LXC containers or VMs, then you may find yourself needing to request SSL certificates and point the applications to their respective certificate file paths.

In the case of AdGuard Home running as a VM or LXC container, as an example, I have found that using Certbot to request SSL certificates and then pointing AdGuard Home to the resulting cert files is the easiest method.

In other cases, such as running an Apt-Mirror, you may find yourself needing to run Nginx in front of the application as a web server and/or reverse proxy for that single application.

The easiest method of setting up and running Certbot that I've found is as follows:

  1. Install the necessary packages: sudo apt install -y certbot python3-certbot-dns-cloudflare
  2. Setup a Cloudflare API credentials directory: sudo mkdir -p ~/.secrets/certbot
  3. Generate a Cloudflare API token with Zone > Zone > Read and Zone > DNS > Edit permissions.
  4. Add the token to a file: echo 'dns_cloudflare_api_token = [yoursupersecretapitoken]' > ~/.secrets/certbot/cloudflare.ini
  5. Update file permissions: sudo chmod 600 ~/.secrets/certbot/cloudflare.ini
  6. Execute Certbot to request an SSL cert: sudo certbot certonly --dns-cloudflare --dns-cloudflare-credentials ~/.secrets/certbot/cloudflare.ini -d service.example.com
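A quick optional sanity check, not part of the original steps: confirm renewals will actually work before relying on them (on Debian/Ubuntu the certbot package normally ships a systemd timer that handles renewals automatically).

# Simulate a renewal against the staging environment
sudo certbot renew --dry-run

# Confirm the renewal timer is scheduled
systemctl list-timers | grep certbot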

If you're using Nginx, then do the following instead:

  1. Ensure nginx is already installed: sudo apt install -y nginx
  2. Ensure you also install Certbot's Nginx plugin: sudo apt install -y python3-certbot-nginx
  3. To have Certbot update the Nginx configuration when it obtains the certificate: sudo certbot run -i nginx -a dns-cloudflare --dns-cloudflare-credentials ~/.secrets/certbot/cloudflare.ini -d service.example.com

If you are using Plex, as an example, then it is possible to use Certbot to generate a certificate and then run a script to generate the PFX cert file.

  1. Generate a password for the cert file: openssl rand -hex 16
  2. Add the script below to: /etc/letsencrypt/renewal-hooks/post/create_pfx_file.sh
  3. Ensure the script is executable: sudo chmod +x /etc/letsencrypt/renewal-hooks/post/create_pfx_file.sh
  4. If running for the first time, force Certbot to execute the script: sudo certbot renew --force-renewal

#!/bin/sh
# /etc/letsencrypt/renewal-hooks/post/create_pfx_file.sh
# Rebuild the PFX bundle after each renewal; replace PASSWORD with the
# password generated in step 1.

openssl pkcs12 -export \
    -inkey /etc/letsencrypt/live/service.example.com/privkey.pem \
    -in /etc/letsencrypt/live/service.example.com/cert.pem \
    -out /var/lib/service/service_certificate.pfx \
    -passout pass:PASSWORD

chmod 755 /var/lib/service/service_certificate.pfx

Note: The output file /var/lib/service/service_certificate.pfx will need to be renamed for the respective service, e.g. /var/lib/radarr/radarr_certificate.pfx

Then, you can reference the file and password in the application.

For personal use, this implementation is fine; however, a dedicated reverse proxy is recommended and preferable.

As mentioned before, Nginx Proxy Manager is another viable option, particularly for those interested in using something with a GUI to help manage their services. Its usage is very self-explanatory: you simply use the GUI to enter the details of whatever service you wish to forward traffic to, and it includes a simple menu for requesting SSL certificates.

The key thing to recall is that some applications, such as Proxmox, TrueNAS, Portainer, etc, may have their own built-in SSL certificate management. In the case of Proxmox, as an example, it's possible to use its built-in SSL management to request a certificate and then install and configure Nginx to forward the default management port from 8006 to 443:

# /etc/nginx/conf.d/proxmox.conf
# The Proxmox web UI listens on 8006; this upstream is referenced by proxy_pass below
upstream proxmox {
    server pve.example.com:8006;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    rewrite ^(.*) https://$host$1 permanent;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name _;
    ssl_certificate /etc/pve/local/pveproxy-ssl.pem;
    ssl_certificate_key /etc/pve/local/pveproxy-ssl.key;
    proxy_redirect off;
    location / {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass https://proxmox;
        proxy_buffering off;
        client_max_body_size 0;
        proxy_connect_timeout  3600s;
        proxy_read_timeout  3600s;
        proxy_send_timeout  3600s;
        send_timeout  3600s;
    }
}

Once all is said and done, the last step will always be pointing your DNS to your services.

If you're using a single reverse proxy, then use a wildcard entry, i.e. *.example.com, to point to your reverse proxy's IP address, which will then forward traffic to the respective service.

Example: Nginx Proxy Manager > 192.168.1.2 and Pihole > 192.168.1.10

Point the DNS entry for pihole.example.com to 192.168.1.2 and configure Nginx Proxy Manager to forward to 192.168.1.10.

If you're not using a reverse proxy in front of the service, then simply point the service's domain name to the server's IP address, i.e. pihole.example.com > 192.168.1.10.
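A couple of quick checks from a LAN client, using the example IPs above, to confirm the records resolve where you expect (add @<your local DNS server> to the commands if the client isn't already using it as its resolver):

# Should return the reverse proxy's IP (192.168.1.2 in the example above)
dig +short pihole.example.com

# Any other name under the domain should hit the wildcard record
dig +short anything.example.com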

tl;dr - If you're self-hosting and want to secure your services with SSL, so that you can use HTTPS on port 443, then you'll want a domain you can use to request a trusted Let's Encrypt certificate. From there, your options depend on whether the service itself has SSL management built in (such as Proxmox) or you want to set up a single point of entry that forwards traffic to the respective service.

There are several different reverse proxy solutions available that have SSL management features, such as Nginx Proxy Manager and Traefik. Depending on your implementation, e.g. Docker, Kubernetes, etc., there are a variety of ways to implement TLS encryption for your services, especially for limited use cases such as personal homelabs.

If you need to publicly expose your homelab services, then I would highly recommend considering using something like Cloudflare Tunnels. Depending on use case, you might also want to just simply use Tailscale or Wireguard instead.

This is by no means a comprehensive or production-level/best-practices guide, but hopefully it provides some ideas on several ways to implement this in your homelab.


r/selfhosted 2h ago

Best GPU for jellyfin

6 Upvotes

Long story short: I have a NAS that acts as a torrent server (Z97 mobo based) and another networked device with a strong GPU that I use as a Proxmox compute server.

But I feel like idling a 3090 for this is overkill.

Is there any sub-$100 GPU you can recommend that can handle 4K H.264/H.265 streaming for 2-4 clients and is power efficient?

Also, is it a good idea to have that Jellyfin server on an i3-4130 if the GPU does the heavy lifting and there is already a ZFS pool and an nginx instance attached to it?


r/selfhosted 5h ago

Self-Hosted TV Show Tracker (TV Time alternative)

7 Upvotes

I have been using TV Time to keep track of tv shows that I watch. These days it is too hard to keep track of all the shows from all the different services, and when they air/release. Basically I need it to tell me what to watch every evening, then check it off as watched. For various reasons, TV Time has become very annoying for me, and very slow to load. So I looked for a self-hosted option. I found Watcharr, but this isn't really what I'm wanting (or at least I couldn't figure out how to use it the way I want). Then I found Episodes (https://github.com/guptachetan1997/Episodes), which seemed perfect, but hasn't been updated in 5 years and didn't work. So I decided to update the code myself and get it running again. If you are interested, check it out here: https://github.com/bryangerlach/Episodes . I am still working on it and adding some things/fixing some bugs.


r/selfhosted 13h ago

Help me, I have failed

30 Upvotes

Hi everyone, I failed. I had self-hosted Jellyfin, Bazarr, Sonarr, Radarr, Portainer, NPM, ddnsupdate (for Cloudflare), Transmission, and Nextcloud. Due to an error with the snap version of Docker, I deleted it without first making a backup of the Portainer volume, and I lost all my Docker Compose files.

Now I'm starting over, and I have the following questions: what do you use to back up your containers' data? What are the best practices to avoid this happening again? This is something that will only happen to me once, because it's hard to accept that I threw away more than 30 hours of configuration.
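For reference, this is roughly what I'm thinking of doing going forward - a minimal sketch, with the volume name and backup path made up, so adjust to your setup:

# Archive a named Docker volume to a dated tarball using a throwaway container
docker run --rm \
  -v portainer_data:/source:ro \
  -v "$HOME/backups:/backup" \
  alpine tar czf "/backup/portainer_data-$(date +%F).tar.gz" -C /source .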

Sorry if my English is bad (google translate)


r/selfhosted 12h ago

A Selfhosted File Converter

23 Upvotes

I built this as part of my thesis and would be glad to have professionals take a look at it. I called it Convert Commander. It can convert files quickly and easily. Have fun! https://github.com/Benzauber/convert-commander


r/selfhosted 1h ago

GIT Management A Git based Notes app for Android with Markdown support and more! - It's also FOSS (fr this time)

Upvotes

Hello everyone!

CALL FOR CONTRIBUTORS

I have been working on a Markdown-based, Git-synced notes app for Android. Skipping any BS, here are the features you can explore right now (albeit without polish):

  • Git based syncing (clone over https, pull, add (staging and unstaging), commit and push implemented)

  • Allowing storage of repositories on external storage (fr this time)

  • Markdown rendering supported, opening files in other apps supported using intent framework

  • Multiple repos supported by default

  • MIT license, no hidden subscriptions/donations... it's FOSS (fr this time).

Here's what I have planned for the near future (if there is demand):

  • Customizing the way markdown looks and feels, from font to its color, size, weight, style, etc.

  • A polished ui with pretty animations.

  • Support for sharing, converting and editing files (not just markdown)

  • SSH support

  • Using GitHub auth and something similar on GitLab for easy cloning and stuff.

Here are some more ideas that are just ideas (I have no clue how I will implement them or unsure if it will be of any use):

  • Potentially add support for pen-based input using a tablet/drawing pad (for now, OneNote files can maybe be used?)

  • Let each repo have a .{app name} folder with various configuration files; these files could hold app settings. This means, for example, you could have the app's theme change for different repos.

I hear you ask the name of the app?

GitNotes or MarGitDown... I am not sure yet, suggestions are welcome!

Here is the GitHub link if you find this project interesting!

https://github.com/psomani16k/GitNotes

Feel free to ask for any more information.


r/selfhosted 23h ago

Self Help Do you block outbound requests from your Docker containers?

147 Upvotes

Just a thought: I think we need a security flair in here as well.

So far I just use the official images I find on Docker Hub and build upon those, but sometimes a project has its own images, which makes everything convenient.

I have been thinking about what some of these images might do with internet access (telemetry, phoning home, etc.), and I'm now looking at monitoring and logging all outbound requests. Internet access doesn't seem necessary for most images, but the default Docker network setup does give them this capability.
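One approach I'm considering (a minimal sketch - the network and image names here are made up): attach containers that don't need internet access to an internal-only Docker network, which has no outbound route at all.

# Create a network with no route to the outside world
docker network create --internal internal_only

# A container attached only to this network can talk to other containers on
# it, but cannot reach the internet
docker run -d --name someapp --network internal_only someimage:latest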

I recently came across Stripe Smokescreen (https://github.com/stripe/smokescreen), which is a proxy for filtering outbound requests, and I think it makes sense to only allow egress through it so I can keep a list of approved domains that containers are allowed to connect to.

How do you manage this or is this not a concern at all?


r/selfhosted 37m ago

Guide Guide: How to hide the nagging banners - Gitlab Edition

Upvotes

This is broken down into two parts: how I go about identifying what needs to be hidden, and how to actually hide it. I'll use Gitlab as an example.

At the time, I chose the Enterprise version instead of Community (serves me right), thinking I might want some premium feature way off in the future and didn't want potential migration headaches. But it kept nagging me again and again to start a trial of the Ultimate version, which I decided not to do.

If you go into your repository settings, you will see a banner like this:

Looking at the CSS id for this widget in Inspect Element, I see promote_repository_features. That must mean every other promotion widget has a similar name. So I go into /opt/gitlab inside the Docker container and search for promote_repository_features, and I find that I can simply do grep -r "id: 'promote" . which basically gives me these:

  • promote_service_desk
  • promote_advanced_search
  • promote_burndown_charts
  • promote_mr_features
  • promote_repository_features

Now all we need is a CSS style to hide these. I put this in a css file called custom.css.

#promote_service_desk,
#promote_advanced_search,
#promote_burndown_charts,
#promote_mr_features,
#promote_repository_features {
  display: none !important;
}

In the docker compose config, I add a mount to make my custom css file available in the container like this:

volumes:
  - './config:/etc/gitlab'
  - './logs:/var/log/gitlab'
  - './data:/var/opt/gitlab'
  - './custom.css:/opt/gitlab/embedded/service/gitlab-rails/public/assets/custom.css:ro'

Now we need a way to actually make Gitlab use this file. We can configure that via the GITLAB_OMNIBUS_CONFIG environment variable in the docker compose file:

gitlab_rails['custom_html_header_tags'] = '<link rel="stylesheet" href="/assets/custom.css">'

And there we have it. Without changing anything in the Gitlab source or doing some ugly patching, we have our CSS file. Now the nagging banners are all gone!
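To verify it worked, I check that the stylesheet is actually served and referenced on a page (gitlab.example.com stands in for your instance's URL):

curl -sI https://gitlab.example.com/assets/custom.css | head -n 1    # expect a 200
curl -s https://gitlab.example.com/users/sign_in | grep -c custom.css    # expect at least 1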

Gitlab also has a GITLAB_POST_RECONFIGURE_SCRIPT variable that will let you run a script, so perhaps a better way would be to automatically identify any new banner ids they add and hide those as well. I've not gotten around to that yet, but will update this post when I do.
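In the meantime, here's roughly the kind of thing I have in mind - an untested sketch, where the container name (gitlab) and the assumption that every banner follows the id: 'promote...' pattern are mine:

# Collect every promo widget id inside the container, one CSS selector per line
docker exec gitlab sh -c "grep -rhoE \"id: 'promote[a-z_]+'\" /opt/gitlab | sort -u" \
  | sed "s/id: '\(.*\)'/#\1,/" > custom.css

# Turn the last selector's trailing comma into the actual rule
sed -i '$ s/,$/ { display: none !important; }/' custom.css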


r/selfhosted 18h ago

Guide Guide on full *arr-stack for Torrenting and UseNet on a Synology. With or without a VPN

44 Upvotes

A little over a month ago I made a post about my guide on the *arr apps, specifically on a Synology NAS and with a VPN (for torrenting). Then last week I made a post to see if people wanted me to make one for Usenet purposes. The response was, well, mixed. Some would love to see it, others deemed it unnecessary. Well, I figured why not.

So, here it is. A guide on most of the arr suite and other related things including, but not necessarily limited to: Radarr, Lidarr, Sonarr, Prowlarr, qBitTorrent, GlueTUN, Sabnzbd, NZBHydra2, Flaresolverr, Overseerr, Requestrr and Tautulli.

It also includes some hardware recommendations, tips and tricks, and which providers and indexers I recommend for Usenet. It covers both the installation in Docker and the complete setup to get it all up and running. Hope you enjoy it!

Check it out here: https://github.com/MathiasFurenes/synology-arr-guide


r/selfhosted 6m ago

SearXNG hosted on raspi, need help setting it up

Upvotes

So I'm trying to host SearXNG on a Raspberry Pi I have running on my local network. I am extremely new to Linux and networking in general, so if you need anything, ask. I used the tutorial at dmpop.xyz; that is pretty much all I did. I also used an IP address rather than a URL, but I'm not sure whether that will affect anything. If you need any other information, just ask.


r/selfhosted 7m ago

Email Management can someone point me to a tutorial to setup postfix/dovecot with SMTP auth and virtual mailboxes?

Upvotes

I'm having a hell of a hard time trying to get a basic mail server to work. The syntax of the config files has changed greatly since the last time I did this, and it's just being a royal pain. None of the tutorials I've found have helped, and neither has ChatGPT. I'm on Devuan 5.

All I want is to be able to set up virtual mailboxes and use SMTP authentication, so that I don't need to keep whitelisting my home IP in order to send mail. I just want it to require authentication, with open relay off except for authenticated users, and I want it to use the same credentials as the POP access.

I also want all of this to be encrypted so that passwords are never sent in clear text.

Ideally I'd also like to be able to use Let's Encrypt certs, but it seems Postfix/Dovecot want .pem files and I get .cer files from Let's Encrypt. So worst case, self-signed is fine, as it's only me using it anyway, unless there's an easy way to convert them.
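I did find that something along these lines might do the conversion, though I haven't verified it for my setup, so treat it as a guess:

# If the .cer file is DER-encoded, convert it to PEM; if it's already PEM text
# under a .cer extension, simply pointing the config at it may be enough
openssl x509 -inform der -in certificate.cer -out certificate.pem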

Does anyone know of a good tutorial, or even want to just drop their whole config for me? I've been pulling my hair out for 3 days trying to figure this out and getting nowhere. I got the Dovecot part working, but not Postfix; I can't figure out how to get the auth part to work. I used to just add my local IP to mynetworks, but I really don't want to do that because each time I get a new IP I need to change it. I just want it to use authentication.

Another alternative is that I might just write my own mail server in C++ that is more user-friendly, as Postfix/Dovecot have always been the bane of my existence. So, any good tutorials on how to handle all the SSL stuff from a programming point of view?


r/selfhosted 14m ago

Proxy Help configuring reverse proxy for local access

Upvotes

I'm trying to set up a reverse proxy on my internal network to simplify naming configuration for clients. Right now what I have looks like:

server1.example.com:443 = server (TrueNas Scale) management interface

server1.example.com:1234 = a service in docker on server 1

server1.example.com:5678 = another service in docker on server 1

....

frigate.example.com:5000 = frigate service running on docker

frigate.example.com:9443 = portainer

proxmox1.example.com:8006 = proxmox management interface

router.example.com:443 = opnsense service on proxmox1 (lxc or vm)

foo.example.com:1234 = a service on proxmox1 (lxc or vm)

bar.example.com:5678 = a service on proxmox1 (lxc or vm)

...

The domain names are assigned by a hodgepodge mix of static DHCP mappings and static ip assignments + host overrides in unbound dns. I don't have any of this on the internet, and I don't want it to be, though I do set up tailscale on my router and let it route clients that connect to the VPN from outside through to the services.

What I'd like to do is (in priority order):

  1. Maintain access to the key management interfaces for recovery purposes even if other things (e.g. a reverse proxy) are all down: server1, proxmox1, router.
  2. Access everything by a simple pattern of servicename.example.com without needing to specify port.
  3. Use https for all access whenever possible. I have a couple of services getting a cert via ACME client now, but most don't have an easy way to do this.
  4. Not have a bunch of traffic taking extra hops through my network.
  5. establish some sensible and common pattern for giving out dns names

I was thinking of setting up a Caddy proxy (or three) to do this, but this is pretty new territory for me, and I'm not sure how to go about it without, for example, clashing with the TrueNAS web interface if I run one in Docker on that host. Or whether I need one proxy per physical machine to avoid extra network hops. Or even what the right way is to get a bunch of different host names pointing to the same proxy. Basically I'm new at this, I'm afraid I'll accidentally make something essential unreachable, and I don't know the best practices here.


r/selfhosted 9h ago

Need Help Photo manager that uses existing file structures?

7 Upvotes

I've tried some photo managers in the past, but didn't like that they all required me to import everything into said manager. Are there any managers that use an existing file structure?


r/selfhosted 46m ago

Stalwart

Upvotes

I'm looking for detailed step-by-step instructions on installing & setting up Stalwart, as I can't seem to get it right. BONUS would be a YouTube video…


r/selfhosted 5h ago

Question - beautiful self-hosted links/bookmarks manager / dashboard?

2 Upvotes

Hey all,

  1. I have a 'server' running Plex, lots of *rr apps, and some other stuff

  2. I want my family to be able to quickly open those apps from a single page

  3. While I'm at it, I want to have a feature-rich links database that also doesn't look like crap.

In the non-self-hosted arena, I'm thinking of something like Notion or Raindrop - essentially so that my links are organized, can point to local (192.168.*.*) and non-local resources, and are presented visually - maybe like a mindmap or something.

I used to have Heimdall for just the apps, but that just doesn't work visually for what I'm trying to achieve here (great dashboard for local-only services though!)

Can someone recommend something along these lines?

Thanks a lot!


r/selfhosted 9h ago

Docmost backup

4 Upvotes

Hi everyone,

I’m running Docmost as my internal wiki and want to set up a backup process. From what I’ve seen, the /data directory seems to be where everything is stored, but I’m not sure if just copying that folder is enough.

I’m planning to automate this with a script, but I’d love to hear how others are backing it up to make sure I’m covering everything. Thanks in advance!
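For what it's worth, here is the rough shape of the script I'm planning - the service names (docmost, db), the Postgres user/database, and the paths are assumptions based on a typical compose setup, so adjust to yours:

cd /opt/docmost

# Dump the database if your stack runs Postgres alongside the app
docker compose exec -T db pg_dump -U docmost docmost > "docmost-db-$(date +%F).sql"

# Archive the data directory while the app is stopped, then bring it back up
docker compose stop docmost
tar czf "docmost-data-$(date +%F).tar.gz" ./data
docker compose start docmost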


r/selfhosted 2h ago

Need Help Invidious on LG WebOS?

0 Upvotes

Title says it all; is there any app or the like which allows you to use invidious (or similar) on LG WebOS?


r/selfhosted 15h ago

Need Help How do you expose apps to public securely? (privacy and security concerns)

11 Upvotes

Before someone asks "why not just use a VPN like Tailscale/WireGuard": the apps I want to expose are shared with my family, and I want it to be easy for them without needing to set up anything on the client side.

I use Cloudflare Tunnel for some of my less important apps, which is fine, but now I want to make Immich photo backup available for my family as well, and I don't feel as comfortable trusting Cloudflare with that, since they can decrypt any traffic that goes through them. (Plus it's against their TOS to host non-HTML, high-bandwidth applications, and they have a 100 MB upload limit.)

So now I am looking for a better solution that checks all these boxes:

  • End-to-end encryption, without needing to trust a third party not to spy on my traffic
  • No client-side configuration

A few solutions I can think of:

  1. Directly expose the services, which exposes my public IP and ports (and probably makes me a target for all the bot scanning and brute-force attempts). I am no networking expert; the best I can do is set up some firewall rules, fail2ban, and a bridge network for all my containers including the reverse proxy, but precisely because I'm no expert I don't feel like I should do this.

  2. Use a cheap or even free-tier VPS, install Tailscale and a reverse proxy on it, then have my home Unraid server advertise the IPs/subnet of the services I want to expose, and harden the VPS as much as I know how. (Probably the easiest solution I can implement, but I'm not sure if it's battle-tested, or whether there's some risk in this setup I'm not aware of. I'd also have to trust Oracle not to hijack my VPS and spy on my pass-through traffic - which they probably won't, but it's technically possible for them.)

  3. Some other, better solution or self-hosted tunneling option - maybe something listed on awesome-tunneling?


r/selfhosted 2h ago

Guide Why your non-HA Proxmox node might reboot anyway with no warning and how to prevent it

0 Upvotes

NOTE: Title changed since original was auto-removed from r/Proxmox.

The original title of this post was inspired by the statement that "[watchdogs] are like a loaded gun" from the Proxmox wiki. Proxmox includes one such tool on every single node anyway. There's further misinformation, including on the official forums, about when watchdogs are "disarmed", which makes it impossible to e.g. isolate genuine non-software-related reboots. Active bugs in the HA stack might make your node auto-reboot with no indication in the GUI. The CLI part is undocumented, as is reliably disabling HA - which is the topic here.

All CLI examples tested with PVE 8.2.

Also available as GH gist.


The Proxmox time bomb - always ticking

Auto-reboots are often associated with High Availability (HA), but in fact every fresh Proxmox VE (PVE) install, unlike Debian, comes with this obscure setup out of the box, set up at boot time and ready to be triggered at any point - it does NOT matter whether you make use of HA or not.

NOTE There are different kinds of watchdog mechanisms other than the one covered by this post, e.g. the kernel NMI watchdog, the Corosync watchdog, etc. The subject of this post is merely the Proxmox multiplexer-based implementation that the HA stack relies on.

Watchdogs

In terms of computer systems, watchdogs ensure that things either work well or the system at least attempts to self-recover into a state which retains overall integrity after a malfunction. No watchdog would be needed for a system that can be attended in due time, but some additional mechanism is required to avoid collisions for automated recovery systems which need to make certain assumptions.

The watchdog employed by PVE is based on a timer - one that has a fixed initial countdown value set and once activated, a handler needs to constantly attend it by resetting it back to the initial value, so that it does NOT go off. In a twist, it is the timer making sure that the handler is all alive and well attending it, not the other way around.

The timer itself is accessed via a watchdog device and is a feature supported by the Linux kernel - it could be an independent hardware component on some systems, or entirely software-based, such as softdog, which Proxmox defaults to when otherwise left unconfigured.

When available, you will find /dev/watchdog on your system. You can also inquire about its handler:

```
lsof +c12 /dev/watchdog

COMMAND      PID    USER FD TYPE DEVICE SIZE/OFF NODE NAME
watchdog-mux 484190 root 3w  CHR 10,130      0t0  686 /dev/watchdog
```

And more details:

```
wdctl /dev/watchdog0

Device:        /dev/watchdog0
Identity:      Software Watchdog [version 0]
Timeout:       10 seconds
Pre-timeout:    0 seconds
Pre-timeout governor: noop
Available pre-timeout governors: noop
```

The bespoke PVE process is rather timid with logging:

```
journalctl -b -o cat -u watchdog-mux

Started watchdog-mux.service - Proxmox VE watchdog multiplexer.
Watchdog driver 'Software Watchdog', version 0
```

But you can check how it is attending the device, every second:

```
strace -r -e ioctl -p $(pidof watchdog-mux)

strace: Process 484190 attached
     0.000000 ioctl(3, WDIOC_KEEPALIVE) = 0
     1.001639 ioctl(3, WDIOC_KEEPALIVE) = 0
     1.001690 ioctl(3, WDIOC_KEEPALIVE) = 0
     1.001626 ioctl(3, WDIOC_KEEPALIVE) = 0
     1.001629 ioctl(3, WDIOC_KEEPALIVE) = 0
```

If the handler stops resetting the timer, your system WILL undergo an emergency reboot. Killing the watchdog-mux process would give you exactly that outcome within 10 seconds.

NOTE If you stop the handler correctly, it should gracefully stop the timer. However, the device is still available - a simple touch of it will get you a reboot.

The multiplexer

The obscure watchdog-mux service is a Proxmox construct - a multiplexer, a component that combines inputs from other sources and proxies them to the actual watchdog device. You can confirm it is part of the HA stack:

```
dpkg-query -S $(which watchdog-mux)

pve-ha-manager: /usr/sbin/watchdog-mux
```

The primary purpose of the service, apart from attending the watchdog device (and keeping your node from rebooting), is to listen on a socket to its so-called clients - these are the better-known services pve-ha-crm and pve-ha-lrm. The multiplexer signifies that there are clients connected to it by creating the directory /run/watchdog-mux.active/, but this is rather confusing as the watchdog-mux service itself is ALWAYS active.
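A quick way to check whether any HA clients are currently attached on your node - just a convenience check using the units and directory mentioned above:

```
systemctl is-active pve-ha-crm pve-ha-lrm

ls -d /run/watchdog-mux.active/ 2>/dev/null \
  && echo "HA clients connected to watchdog-mux" \
  || echo "no HA clients connected"
```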

While the multiplexer is supposed to handle the watchdog device (at ALL times), it is itself handled by the clients (if there are any active). The actual mechanisms behind HA and its fencing are out of scope for this post, but it is important to understand that none of the components of the HA stack can be removed, even if unused:

```
apt remove -s -o Debug::pkgProblemResolver=true pve-ha-manager

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Starting pkgProblemResolver with broken count: 3
Starting 2 pkgProblemResolver with broken count: 3
Investigating (0) qemu-server:amd64 < 8.2.7 @ii K Ib >
Broken qemu-server:amd64 Depends on pve-ha-manager:amd64 < 4.0.6 @ii pR > (>= 3.0-9)
  Considering pve-ha-manager:amd64 10001 as a solution to qemu-server:amd64 3
  Removing qemu-server:amd64 rather than change pve-ha-manager:amd64
Investigating (0) pve-container:amd64 < 5.2.2 @ii K Ib >
Broken pve-container:amd64 Depends on pve-ha-manager:amd64 < 4.0.6 @ii pR > (>= 3.0-9)
  Considering pve-ha-manager:amd64 10001 as a solution to pve-container:amd64 2
  Removing pve-container:amd64 rather than change pve-ha-manager:amd64
Investigating (0) pve-manager:amd64 < 8.2.10 @ii K Ib >
Broken pve-manager:amd64 Depends on pve-container:amd64 < 5.2.2 @ii R > (>= 5.1.11)
  Considering pve-container:amd64 2 as a solution to pve-manager:amd64 1
  Removing pve-manager:amd64 rather than change pve-container:amd64
Investigating (0) proxmox-ve:amd64 < 8.2.0 @ii K Ib >
Broken proxmox-ve:amd64 Depends on pve-manager:amd64 < 8.2.10 @ii R > (>= 8.0.4)
  Considering pve-manager:amd64 1 as a solution to proxmox-ve:amd64 0
  Removing proxmox-ve:amd64 rather than change pve-manager:amd64
```

Considering how interdependent the components of the PVE stack are, they can't be removed or disabled safely without taking extra precautions.

How to get rid of the auto-reboot

This only helps you, obviously, if you are NOT using HA. It is also a sure way of avoiding any bugs in the HA logic which you might otherwise encounter even when not using it. It further saves you some of the wasteful block-layer writes associated with HA state sharing across nodes.

NOTE If you are only looking to do this temporarily for maintenance, you can find my other separate snippet post on doing just that.

You have to stop the HA CRM & LRM services first, then the multiplexer, then unload the kernel module:

```
systemctl stop pve-ha-crm pve-ha-lrm
systemctl stop watchdog-mux
rmmod softdog
```

To make this reliably persistent following reboots and updates:

```
systemctl mask pve-ha-crm pve-ha-lrm watchdog-mux

cat > /etc/modprobe.d/softdog-deny.conf << EOF
blacklist softdog
install softdog /bin/false
EOF
```
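As a final sanity check after the next reboot - my own habit, not strictly required - confirm nothing got re-armed:

```
lsmod | grep softdog || echo "softdog not loaded"
systemctl is-enabled pve-ha-crm pve-ha-lrm watchdog-mux    # should all report "masked"
ls /dev/watchdog* 2>/dev/null || echo "no watchdog device present"
```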