While true, they still collect data on the results; hosting your own instance can prevent you from hitting rate limits as often.
- SearXNG (a privacy front end for Google)
SearXNG is more than just a front end for Google Search; it’s an aggregator that, if configured properly, can collect results from Bing, Startpage, Wikipedia, DuckDuckGo, and Brave.
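To illustrate, engines are toggled in SearXNG’s settings.yml; a rough sketch is below (the engine names come from the SearXNG docs, everything else about your instance is assumed):

```yaml
# settings.yml sketch — enable extra engines for aggregation; not a full config
engines:
  - name: bing
    disabled: false
  - name: startpage
    disabled: false
  - name: duckduckgo
    disabled: false
  - name: brave
    disabled: false
```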
Yes, back up your stuff regularly; don’t be like me and break your partition table with a 4-month gap between backups. Redoing 4 months of work in 5 hours is not fun.
While true, at least I ain’t getting the updates that bloat applications with AI… yet.
So why would you not write out the full path?
The other day my Raspberry Pi decided it didn’t want to boot up; I guess it didn’t like being hosted on an SD card anymore. So I backed up my compose folder and reinstalled Raspberry Pi OS under a different username than my last install.
If I specified the full path on every container, it would be annoying to redo them all whenever I decided to move to another directory/drive or change my username.
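To illustrate the difference, here is a hypothetical absolute-path mount next to its relative equivalent (the username and paths are placeholders):

```yaml
volumes:
  # Absolute path: breaks if the username or drive ever changes
  # - /home/olduser/compose/config/searx:/etc/searxng:rw
  # Relative path: resolved against the compose file's directory, survives a move
  - ./config/searx:/etc/searxng:rw
```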
People praising Arch, people hating on Ubuntu; meanwhile, me on Debian, satisfied with the minimalism.
As others stated, it’s not a bad way of managing volumes. In my scenario I store all volumes in a config folder next to the compose file.
For example, on my SearXNG instance I have a volume like so:

services:
  searxng:
    …
    volumes:
      - ./config/searx:/etc/searxng:rw
This keeps the files for SearXNG two folders away. I also store these in the /home/YourUser directory so Docker avoids needing sudoers access.
Setting static IPs is generally a good practice if you want to keep track of any device.
Grandma probably doesn’t do the actual torrenting herself; chances are OP has an Overseerr or Jellyseerr type of setup, where grandma makes the request and things just flow.
Done did their final sudo docker compose down
Been using Jellyfin to host my music and Finamp to play it. Lyrics are pulled from https://lrclib.net/ using a Jellyfin plugin; certain lyrics are timestamped, allowing for synchronization, while others are just static.
ProtonVPN works fine for Debian-based systems; wish I could say the same for ProtonDrive.
“Technically” my Jellyfin is exposed to the internet; however, I have Fail2Ban set up, blocking every public IP and only whitelisting IPs that I’ve verified.
I use GeoBlock for the services I want exposed to the internet; however, I should also set up Authelia or something along those lines for further verification.
Reverse proxy is Traefik.
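For the whitelisting piece, here is a sketch of a Traefik file-provider middleware using the built-in ipAllowList (called ipWhiteList in Traefik v2); the middleware name, file name, and IP range are placeholders:

```yaml
# dynamic_conf.yml, loaded via Traefik's file provider; attach it to a router with
# traefik.http.routers.<service>.middlewares=allowlist@file
http:
  middlewares:
    allowlist:
      ipAllowList:
        sourceRange:
          - "203.0.113.0/24"  # placeholder: only these source IPs get through
```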
If you aren’t already familiar with the Docker Engine, you can use Play With Docker to fiddle around: spin up a container or two using the docker run command. Once you get comfortable with the command structure, you can move into Docker Compose, which makes handling multiple containers easy using .yml files.
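As a sketch of that progression, a single docker run command maps onto a compose service almost field for field (nginx here is just a stand-in image):

```yaml
# docker run -d --name web -p 8080:80 nginx:alpine
# ...becomes, in docker-compose.yml:
services:
  web:
    image: nginx:alpine
    container_name: web
    ports:
      - "8080:80"   # host:container, same as -p
    restart: unless-stopped
```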
Once you’re comfortable with Compose, I suggest working into reverse proxying with something like SWAG or Traefik, which let you put a domain behind the IP, add SSL certificates, and offer plugins that give you more control over how requests are handled.
There really is no “guide for dummies” here, you’ve got to rely on the documentation provided by these services.
You aren’t trapped though! Come to Canada!
And this is how we achieved a population greater than what our housing can support. Unless you’re ready to fork over $1,200 to $2,000 a month for a 1-bedroom, 1-bathroom, I would advise against this.
If you don’t mind DMing me or dropping it in a comment here, it would be greatly appreciated! The Docker Engine isn’t something entirely new to me, so I’m a bit skeptical that I missed something, but I’m always happy to compare with others. Actually, Docker is what pushed me to switch fully to Linux on my personal computers.
Snippet from my docker-compose.yml:
pihole:
  container_name: pihole
  hostname: pihole
  image: pihole/pihole:latest
  networks:
    main:
      ipv4_address: 172.18.0.25
  # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
  ports:
    - "53:53/tcp"
    - "53:53/udp"
    - "127.0.0.1:67:67/udp" # Only required if you are using Pi-hole as your DHCP server
    - "127.0.0.1:85:80/tcp"
    - "127.0.0.1:7643:443"
  environment:
    TZ: 'America/Vancouver'
    FTLCONF_webserver_api_password: 'insert-password-here'
    FTLCONF_dns_listeningMode: 'all'
  # Volumes store your data between container upgrades
  volumes:
    - './config/pihole/etc-pihole:/etc/pihole'
    - './config/pihole/etc-dnsmasq.d:/etc/dnsmasq.d'
    - '/etc/hosts:/etc/hosts:ro'
  # https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
  cap_add:
    - NET_ADMIN # Required if you are using Pi-hole as your DHCP server, else not needed
    - CAP_SYS_TIME
    - CAP_SYS_NICE
    - CAP_CHOWN
    - CAP_NET_BIND_SERVICE
    - CAP_NET_RAW
    - CAP_NET_ADMIN
  restart: unless-stopped
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.pihole.rule=Host(`pihole.my.domain`)"
    - "traefik.http.routers.pihole.entrypoints=https"
    - "traefik.http.routers.pihole.tls=true"
    - "traefik.http.services.pihole.loadbalancer.server.port=80"
    - "traefik.http.routers.pihole.middlewares=fail2ban@file"

unbound:
  image: alpinelinux/unbound
  container_name: unbound
  hostname: unbound
  networks:
    main:
      ipv4_address: 172.18.0.26
  ports:
    - "127.0.0.1:5334:5335"
  volumes:
    - ./config/unbound/:/var/lib/unbound/
    - ./config/unbound/unbound.conf:/etc/unbound/unbound.conf
    - ./config/unbound/unbound.conf.d/:/etc/unbound/unbound.conf.d/
    - ./config/unbound/log/unbound.log:/var/log/unbound/unbound.log
  restart: unless-stopped
Edit: After re-reading the Unbound GitHub and their documentation, it seems I may have missed some volume mounts that are key to the function of Unbound; I’ll definitely have to dive deeper into it.
I got two Pi-holes running on my network via Docker Compose. I tried setting up Unbound in Docker Compose and that fell flat; from my understanding, DNSSEC was preventing DNS resolution outright.
Also tried OPNsense + Unbound, which led to the same thing.
Eventually I got tired of my network cutting in and out over minor changes, so I just stuck with Quad9 for my upstream needs.
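For anyone retrying this, DNSSEC failures in Unbound often trace back to the trust anchor. A minimal unbound.conf sketch for a recursive resolver on port 5335 (the option names come from the Unbound docs; the file paths are assumptions):

```
server:
    interface: 0.0.0.0
    port: 5335
    do-ip6: no
    # DNSSEC trust anchor; if this file is missing or stale,
    # validation fails and lookups break outright
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    root-hints: "/var/lib/unbound/root.hints"
```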
The Docker Engine makes hosting applications over your network easy; if you have spare hardware, I highly recommend setting up your own server.
I may not know much about software development and programming itself, but I feel like I did my part here.
+1 for Linux folks.
How is the art a positive?