SeeJayEmm

joined 1 year ago

We don't? We lost. I'm going to go back to huddling in the corner.

I run Docker exclusively in VMs and on VPSes, and it works fine.

[–] SeeJayEmm@lemmy.procrastinati.org 4 points 4 weeks ago (2 children)

So Grocy doesn't directly support OIDC/SAML, but it does support auth being passed along by the reverse proxy. That's how my Grocy is configured. No double logins required.
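Roughly what the proxy side looks like; a minimal nginx sketch assuming an auth_request-capable setup like Authelia (the /authelia endpoint and the Remote-User header name are placeholders, not Grocy-specific values):

# nginx: authenticate at the proxy, then pass the username to Grocy in a trusted header
location / {
    auth_request /authelia;                            # hypothetical auth subrequest endpoint
    auth_request_set $user $upstream_http_remote_user; # grab the user the auth server returns
    proxy_set_header Remote-User $user;                # header the app is configured to trust
    proxy_pass http://grocy:80;
}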

[–] SeeJayEmm@lemmy.procrastinati.org 2 points 4 weeks ago (2 children)

I'm going to add Hoarder to the pile of suggestions.

[–] SeeJayEmm@lemmy.procrastinati.org 18 points 4 weeks ago (9 children)

A VPS is already a VM, and nesting VMs, even if you get it to work, is generally a Bad Idea™️.

What you're asking for is squarely in "bare metal" territory. Does that reduce your flexibility? Sure. But it doesn't entirely eliminate it. Down the road, if you decide you need more RAM or disk, those are things you can have added (at a cost). CPU would likely necessitate a migration to a different system, so I'd keep that in mind during initial sizing. Also, if you're using Proxmox, migration will be as simple as backing up a container/VM and restoring it at the destination.
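Roughly like this (a sketch; the VM ID, storage name, and dump filename are placeholders):

vzdump 100 --mode snapshot --compress zstd --storage local   # back up VM 100 on the old host
# copy the dump file over, then on the destination:
qmrestore /var/lib/vz/dump/vzdump-qemu-100-<timestamp>.vma.zst 100
# for an LXC container, the equivalent restore command is pct restore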

Your other alternative is multiple VPSes or possibly augmenting the bare metal server with one or more VPSes.

As far as unified billing goes, just have all the services with the same provider. Most providers I've encountered offer both VPS and dedicated servers.

I can't speak to providers in or around Sydney, but I'd recommend checking out lowendbox.com to start your search.

Only by exposing the docker socket. And it doesn't support managing networks or volumes.

The constant argument in this space that you must know the arcane workings of everything you use is exhausting.

[–] SeeJayEmm@lemmy.procrastinati.org 1 points 1 month ago (3 children)

Just because something doesn't fit your use case doesn't make it a terrible product. Portainer isn't meant to complement managing Docker via the CLI; it's meant to be the management interface.

If you want to manage your environment via the CLI, I agree: don't use Portainer. If you're content with (or prefer) a GUI, Portainer is a solid option, especially if you have multiple hosts or want to manage more than just the compose stack. Last time I checked, Dockge didn't do either.

Personal preference? I prefer Portainer's presentation over the CLI. I especially find it easier to manage networks and volumes.

But my main reason is that I have multiple docker hosts and it gives me a "single pane of glass" to manage everything from.
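Each additional host just runs the Portainer agent; roughly this, based on Portainer's documented agent deployment (image tag and port left at defaults):

docker run -d --name portainer_agent --restart=always \
  -p 9001:9001 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent
# then add the host in the Portainer UI as an Agent environment on port 9001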

My Note 20 still gets updates.

Dex is pretty cool. I just lack a use case.

26
Proxmox Disk Performance Problems (lemmy.procrastinati.org)
submitted 6 months ago* (last edited 6 months ago) by SeeJayEmm@lemmy.procrastinati.org to c/selfhosted@lemmy.world
 

I've started encountering a problem that I could use some assistance troubleshooting. I've got a Proxmox system that hosts, primarily, my Opnsense router. I've had this specific setup for about a year.

Recently, I've been experiencing sluggishness and noticed that the IO wait is through the roof. Rebooting the Opnsense VM, which normally takes only a few minutes, is now taking upwards of 15-20. The entire time, my IO wait sits between 50-80%.

The system has 1 disk in it, formatted ZFS. I've checked dmesg and the syslog for indications of disk errors (this feels like a failing disk) and found none. I also checked the SMART statistics, and they all show "PASSED".
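Concretely, the checks so far look like this (the device name is a placeholder for mine):

dmesg | grep -i -e error -e ata    # no disk errors logged
smartctl -a /dev/sda               # overall health self-assessment: PASSED
zpool status -v                    # pool ONLINE, no read/write/cksum errors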

Any pointers would be appreciated.

Example of my most recent host reboot.

Edit: I believe I've found the root cause of the change in performance, and it was a bit of shooting myself in the foot. I've been experimenting with different tools for log collection, and the most recent one is a SIEM tool called Wazuh. I didn't realize that upon reboot it runs an integrity check that generates a ton of disk I/O. So when I rebooted this Proxmox server, that integrity check was running on Proxmox, my Pi-hole, and (I think) Opnsense concurrently. All against a single consumer-grade HDD.

Thanks to everyone who responded. I really appreciate all the performance tuning guidance. I've also made the following changes:

  1. Added a 2nd drive (I have several of these lying around, don't ask), converting the ZFS pool into a mirror; see the sketch below. This gives me both redundancy and should improve read performance.
  2. Configured a 2nd storage target on the same zpool with compression enabled and a 64k block size in Proxmox. I then migrated the 2 VMs to that storage.
  3. Since I'm collecting logs in Wazuh, I set Opnsense to use RAM disks for /tmp and /var/log.

Rebooted Opnsense and it was back up in 1:42.
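The mirror conversion and compression bits were roughly this (a sketch; the pool, dataset, and device names are placeholders for my actual ones):

zpool attach rpool /dev/sda /dev/sdb     # attach a 2nd disk to the single-disk vdev -> mirror
zpool status rpool                       # watch the resilver complete
zfs set compression=lz4 rpool/data64k    # compression on the dataset behind the new storage target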

 

I'd like to start doing a better job of tracking the changes I make to my homelab environment. Hardware, software, network, etc. I'm just not sure what path I want to take and was hoping to get some recommendations. So far the thoughts I have are:

  • A change history sub-section of my wiki. (I'm not a fan of this idea.)
  • A ticketing system of some sort. (I tried this one and it was too heavy. I'd need to find a simple solution.)
  • A nextcloud task list.
  • Self-host a GitLab instance, make a project for changes, and track them with issues. Move what stuff I have in GitHub to this instance and kill my GitHub projects. (It's all private stuff.)

I know that several of you are going to say "config as code" and I get it. But I'm not there yet and I want to track the changes I'm making today.
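If nothing else, I may start with something dead simple while I decide; a sketch of a bare git-repo changelog (the repo location and entry format are just an example):

git init ~/homelab-changelog
cd ~/homelab-changelog
echo "2024-06-01: moved Pi-hole to the servers VLAN, opened udp/53 on opnsense" >> CHANGELOG.md
git add CHANGELOG.md && git commit -m "pihole VLAN move"   # git supplies the timeline for free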

Thanks

 

I can't seem to find anything, so I was hoping someone here has run into this.

Does anyone know if there's a way to get reporting on a per-application-key or per-bucket basis? I periodically get threshold alerts (usually the download cap), but they don't give me any idea of what utilization is triggering the alert. The reporting I can find is pretty rudimentary and account-wide.

 

I'm experimenting with running Nextcloud (AIO) on a VPS with a B2 bucket as the primary storage. I want to compare performance against running it on my home server (esp. when I'm remote) and get an idea of the kinds of costs I'd rack up doing it.

As part of the setup I have configured the built-in Borg backup, but it has this caveat:

Be aware that this solution does not back up files and folders that are mounted into Nextcloud using the external storage app - but you can add further Docker volumes and host paths that you want to back up after the initial backup is done.

The primary storage is external, but I'm not using the "external storage" app. So I have 2 questions.

  1. Does it back up object storage if it's the primary (my gut says no)?
  2. If no, what's a good way to backup the B2 bucket?

I've done some research on this topic and I'm kinda coming up empty. I would normally use restic, but restic doesn't work in that direction (B2 -> local backup).

It looks like rclone can be used to mount a B2 bucket. One idea I had was to mount it read-only and let AIO/borg back up that path with the container backups.
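Something like this is what I have in mind (a sketch; the remote name and paths are placeholders):

rclone mount b2:my-nextcloud-bucket /mnt/b2 --read-only --daemon   # expose the bucket as a local path
# point the AIO borg backup at /mnt/b2 as an additional host path
# alternative without the mount: pull a plain copy down on a schedule
rclone sync b2:my-nextcloud-bucket /srv/backup/nextcloud-b2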

Has anyone done this before? Any thoughts?

17
submitted 10 months ago* (last edited 10 months ago) by SeeJayEmm@lemmy.procrastinati.org to c/selfhosted@lemmy.world
 

So, I'm experimenting with running a Mailu instance on my home server but proxying all of the relevant traffic through a WireGuard tunnel to my VPS. I'm currently using NGINX Proxy Manager streams to redirect the traffic, and it all seems to be working.

The only problem is that all connections appear to come from the VPS. It's really screwing with the spam filter. I'm trying to figure out if there's a way to retain the source IP while still tunneling the traffic.

The only idea I have, and I don't know if it's a bad one, is to use iptables to NAT the ports inbound on the VPS and, on my home router (opnsense), route all outbound traffic from that IP back through the VPS instead of the default gateway. This way I shouldn't need to rewrite the destination port on the VPS side.

It sounds a bit hacky though, and I'm open to better suggestions.

Thanks

Edit: I think I need to clarify my post as there's some confusion in the comments. I would like the VPS to masquerade/NAT for my mailu system, accessible over a WG tunnel, so that inbound traffic to the SMTP server reports its actual public IP instead of the IP of the VPS that's currently proxying.

After giving that some thought I think the only way this could work would be if I treated the VPS as the upstream gateway for all traffic. My current setup is below:

[VPS] <-- wg --> [opnsense] <-- eth --> [mailu]

I can source route all traffic from mailu to the VPS via wg, but I don't know how to properly configure iptables to do the masquerading, as I'd only want to masquerade that one IP. I'm not concerned about mailu not having internet access when wg is down; frankly, I think I'd prefer it didn't.

Edit 2: I got the basic masquerading working. I can ping public IPs, and traceroute verifies it's taking the correct path.

# allow traffic arriving over the tunnel from the mail host to be forwarded
iptables -A FORWARD -i wg0 -s <mailu-ip> -j ACCEPT
# masquerade only that one source IP as it leaves the VPS's public interface
iptables -t nat -A POSTROUTING -o eth0 -s <mailu-ip> -j MASQUERADE

I think I got the port forwarding working.

# DNAT inbound SMTP on the public interface to the mail host over the tunnel
iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 25 -j DNAT --to-destination <mailu-ip>
# and permit the forwarded SMTP connections (plus their return traffic)
iptables -A FORWARD -p tcp -d <mailu-ip> --dport 25 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
  • tcpdump on the VPS eth0 shows traffic in.
  • tcpdump on the VPS wg0 shows the natted traffic.
  • tcpdump on mailu shows both inbound and outbound traffic.
  • tcpdump on opnsense shows 2 way traffic on the vlan interface mailu is on.
  • tcpdump on opnsense only shows inbound, but not outbound traffic on the wg interface.

I think the problem is now in opnsense, but I'm trying to suss out why. If I initiate traffic on mailu (e.g., a ping or a web request) I see it traversing the opnsense wg interface, but I do not see any of the return SMTP traffic.

Edit 3:

I found the missing packets. They're going out the WAN interface on the router, and I do not know why. Traffic I initiate from the mailu box gets routed through the WG tunnel as expected, but replies to traffic sourced from the internet and routed over the WG tunnel are going out the WAN.

The opnsense rule is pretty basic. Source: , Dest: any, gateway: wg.

Edit 4:

I ran out of patience trying to figure out what was going on in opnsense and configured a direct tunnel between the mailu VM and the VPS. That immediately solved my problems, although it's not the solution I was striving for.

It was pointed out to me in the comments that my source routing rule likely wasn't configured properly. I'll need to revisit that later; if I was misconfiguring it, I'd like to know that.
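For the record, on a plain Linux router the source routing I was attempting would look roughly like this (a sketch, not the opnsense GUI equivalent; the table number is arbitrary):

ip rule add from <mailu-ip> lookup 100    # traffic sourced from the mail host uses table 100
ip route add default dev wg0 table 100    # whose only default route is the WG tunnel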

 

I've hit a wall with a weird WireGuard issue. I'm trying to connect my phone (over cell) to my home router using WireGuard, and it will not connect.

  • The keys are all correct.
  • The IPs are all correct.
  • The ports are open on the firewall.
  • My router has a public IP, no CGNAT.

The router is Opnsense. I have a tcpdump session going, and when I attempt a connection from the phone I see 0 packets on that port. I am able to ping the router and reach the web server sitting behind it from the phone.
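For reference, the capture I'm running is basically this (the interface name is a placeholder):

tcpdump -ni <wan-if> udp port 51821    # should show handshake initiations from the phone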

I have a VPS that I configured WG on and the phone connects fine to that. I also tested configuring the VPS to connect to my home router and that also works fine.

I'm really at a loss as to where to go next.

Edit 2: I completely blew out the config on both sides and rebuilt it from scratch, using a different UDP port, and it all appears to be working now. Thanks for everyone's help in tracking this down.

Edit: It was requested I provide my configs.

opnsense:

####################################################
# Interface settings, not used by `wg`             #
# Only used for reference and detection of changes #
# in the configuration                             #
####################################################
# Address =  172.31.254.1/24
# DNS =
# MTU =
# disableroutes = 0
# gateway =

[Interface]
PrivateKey = 
ListenPort = 51821

[Peer]
# friendly_name = note20
PublicKey = 
AllowedIPs = 172.31.254.100/32

Android:

[Interface]
Address = 172.31.254.100/32
PrivateKey = 

[Peer]
AllowedIPs = 0.0.0.0/32
Endpoint = :51821
PublicKey = 
 

Since switching to Proxmox I've noticed an issue with intermittent network connectivity on my VMs. I've narrowed it down to the Realtek-based PCIe NIC (Rosewill RNG-407-Dualv2) I currently have installed. Basically, when I see a ton of these in my syslog:

Dec 14 13:55:37 server kernel: r8169 0000:09:00.0 enp9s0: rtl_rxtx_empty_cond == 0 (loop: 42, delay: 100).

It means it's time to reboot. I did some digging, and it appears to be a kernel driver issue. Unless someone in this community has encountered this and knows of a good fix (other than rebooting), I'd rather just ditch Realtek and replace the NIC.
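(For completeness, the workaround I've seen suggested most often is Realtek's out-of-tree driver; a sketch for Debian-based hosts, assuming the non-free repo is enabled. I'd still rather just swap the hardware.)

apt install r8168-dkms                                  # vendor driver, built via DKMS
echo "blacklist r8169" > /etc/modprobe.d/r8169-blacklist.conf
update-initramfs -u && reboot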

Can anyone recommend a 2 port PCIe (x1) card that has good driver support under Linux and (hopefully) won't cost me a small fortune? Bonus points if it's 2.5GbE capable.

 

I'm going to start off by saying I know that self-hosting email can be a bad idea. That being said, I'm trying to de-googlify my life and would like to experiment.

I have a VPS and a domain that doesn't get used for much at the moment. I'd like to try configuring a full mail suite on that domain and see if I can make it work. I've been looking into the various options on this list and was hoping for some feedback on options that people have used. If this works out, it would be fairly low volume.

Ideally I'd like a full solution that includes web administration if at all possible. I think I'm leaning towards mailcow, but it might be overkill.
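For a sense of scale, mailcow's documented bootstrap is pretty short; a sketch (prompts and paths will vary):

git clone https://github.com/mailcow/mailcow-dockerized
cd mailcow-dockerized
./generate_config.sh                           # prompts for the mail FQDN, writes mailcow.conf
docker compose pull && docker compose up -d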

I'd appreciate any input on what has or hasn't worked for people. Thanks.

 

I'm not sure where to start troubleshooting this. I segregated my network into a few different VLANs (servers, workstations, wifi, etc.). I have VMs and LXC containers running in Proxmox, routing is handled by Opnsense, and I have a couple of TP-Link managed switches. All of this is working fine except for 1 problem.

I have a couple of systems (VM and LXC) that have interfaces on multiple VLANs. If I SSH to one of these systems on the IP that's on the same VLAN as the client, it works fine. If I SSH to one of the other IPs, it'll initially connect and work, but within a minute or so the connection hangs and times out.

I tried running ssh in verbose mode and got this, which seems fairly generic:

debug3: recv - from CB ERROR:10060, io:00000210BBFC6810
debug3: send packet: type 1
debug3: send - WSASend() ERROR:10054, io:00000210BBFC6810
client_loop: send disconnect: Connection reset
debug3: Successfully set console output code page from 65001 to 65001
debug3: Successfully set console input code page from 65001 to 65001 

I realize the simple solution is to just use the IP on the same subnet, but my current DNS setup doesn't allow me to provide responses based on client subnet. I'd also like to better understand (and potentially solve) this problem.

Thanks

 

I'm in the process of re-configuring my home lab and would like to get some help figuring out log collection. My setup was a hodgepodge of systems/OSes using rsyslog to send syslogs to a syslog listener on my QNAP, but that's not going to work anymore (partly because the QNAP is gone).

My end goal is to be as homogeneous as I can manage: mostly Debian 12 systems (physical and VM) and Docker containers. Does anyone know of a FOSS solution that can ingest journald and syslog? And is it even possible to send Docker logs to a log collector?
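On the Docker side, the logging drivers seem to cover that last part; a sketch using the syslog driver (the collector address is a placeholder):

docker run -d --log-driver syslog \
  --log-opt syslog-address=udp://logs.example.lan:514 \
  --log-opt tag="{{.Name}}" nginx
# the same options can be set daemon-wide in /etc/docker/daemon.json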

Thanks

 

Hi. I currently run Plex in a KVM VM, and have for years without any real trouble. I'm in the process of refreshing my homelab, and replacing the Plex VM is next on my list.

I'm curious if there are any pros or cons to running Plex in a Docker container vs. its own dedicated VM. Is there anyone here who's done both and saw a difference?
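For comparison, the container route is roughly a one-liner; a sketch using the official image (paths, TZ, and the /dev/dri passthrough for hardware transcoding are assumptions for illustration):

docker run -d --name plex --network host \
  -e TZ=America/New_York \
  -v /srv/plex/config:/config \
  -v /srv/media:/data \
  --device /dev/dri:/dev/dri \
  plexinc/pms-docker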
