rehydrate5503

joined 1 year ago
[–] rehydrate5503@lemmy.world 1 points 4 days ago

I actually tried this as my second troubleshooting step, the first being trying different ports.

In the non-Omada management software, it defaults to 10G, and if the device is powered on before the switch, it negotiates 10G correctly and works at full speed (tested with iperf3). As soon as any of the 10G-connected devices is rebooted, I’m back to 1G. To fix it, I have to set the port to 1G with flow control on, apply the changes, save the config, refresh the page, change it back to 10G with flow control off, apply, save the config, and it goes back to 10G again. Alternatively, I can reboot the switch and it’s fine again.

In Omada it’s the same: fewer steps to get there, but I sometimes have to do it 2-3 times before it works.

Same issue with both 10G TP-Link switches, so I’m thinking it might be the SFPs. I’m using Intel SFP+ modules with FS optical cables. I’m using a DAC for the uplink from the 10G switch to my unmanaged 2.5G switch, and that one never drops; it always runs at max speed.
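For anyone else debugging this, something like the following should show what the link actually negotiated after a reboot, without trusting the switch UI. A minimal sketch assuming a Linux host; enp1s0f0 is a stand-in interface name, substitute your own:

```python
#!/usr/bin/env python3
"""Print the negotiated link speed of an interface straight from sysfs."""
from pathlib import Path

IFACE = "enp1s0f0"  # stand-in name; replace with your actual interface

# The kernel reports the negotiated speed in Mb/s (may be -1 or unreadable
# while the link is down).
speed = int(Path(f"/sys/class/net/{IFACE}/speed").read_text().strip())

if speed >= 10000:
    print(f"{IFACE}: {speed} Mb/s, 10G negotiated correctly")
else:
    print(f"{IFACE}: {speed} Mb/s, link fell back below 10G")
```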

[–] rehydrate5503@lemmy.world 1 points 5 days ago (2 children)

Fair enough. Is there anything one can do to mitigate it? I know that for the recent issue in the news, a mitigation strategy for consumers is basically to reboot their router often. I keep my router and all hardware up to date and try to follow the news here. I’m not sure there’s really anything else I could do.

[–] rehydrate5503@lemmy.world 1 points 5 days ago

Oh wow, hard to believe a huge bug like that would make it to production. What do you recommend instead? Stick with TP-Link?

[–] rehydrate5503@lemmy.world 2 points 5 days ago (4 children)

From what I’ve seen it seems to affect consumer routers, but it raises red flags all the same, and makes me reconsider my options.

 

cross-posted from: https://lemmy.world/post/21641378

So I just added a TP-Link switch (TL-SG3428X) and access point (EAP670) to my network, using OPNsense for routing, and was previously using a TP-Link SX-3008F switch as an aggregation switch (which I no longer need). I’m still within the return window for the new switch and access point, and I have to admit the sale prices were my main reason for going with these items. I understand there have been recent articles about TP-Link and security risks, so I’m wondering if I should return these and up my budget to go for Ubiquiti. The AP would only be about $30 more for an equivalent, so that’s negligible, but a switch that meets my needs is about 1.6x more, and it still only has 2 SFP+ ports, while I need 3 at an absolute minimum.

I’m generally happy with the performance, but there is a really annoying bug: if I reboot a device, the switch drops down to 1G instead of 10G, and I have to tinker with the settings or reboot the switch to get 10G working again. This is true for the OPNsense uplink, my NAS, and my workstation. The same thing happened with the 3008F, and support threads on the forums have not been helpful.
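I’ve been verifying the fallback with iperf3; roughly like the sketch below, assuming iperf3 is installed and running in server mode (iperf3 -s) on the far end, with nas.local as a stand-in hostname:

```python
#!/usr/bin/env python3
"""Run an iperf3 test and flag a likely 1G fallback."""
import json
import subprocess

SERVER = "nas.local"  # stand-in; replace with the iperf3 server's hostname/IP

# -J makes iperf3 emit machine-readable JSON; -t 5 runs a five-second test.
result = subprocess.run(
    ["iperf3", "-c", SERVER, "-t", "5", "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
gbps = report["end"]["sum_received"]["bits_per_second"] / 1e9

print(f"{gbps:.2f} Gbit/s received")
if gbps < 2.0:
    print("Throughput is capped around 1G; the port likely renegotiated down.")
```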

In any case, would switching to Ubiquiti be worth it? Any opinions appreciated.

 


[–] rehydrate5503@lemmy.world 2 points 5 days ago (1 children)

So I just added a TP-Link switch (TL-SG3428X) and access point (EAP670) to my network, using OPNsense for routing. I’m still within the return window for both items. I understand the article is about routers, but should I consider returning these and upping my budget to go for Ubiquiti? The AP would only be about $30 more for an equivalent, so that’s negligible, but a switch that meets my needs is about 1.6x more, and still only has 2 SFP+ ports, while I need 3 at minimum.

[–] rehydrate5503@lemmy.world 7 points 1 week ago (2 children)

Sure could. Unfortunately, those trains don’t run between my home and work, or the grocery store, or the doctor and hospital, or the movies, or the mountains for skiing/camping, or any other amenities where I live. I wish I had great public transit options like in Europe, but I don’t.

[–] rehydrate5503@lemmy.world 7 points 1 week ago (4 children)

Now that’s a great-looking car. I could never afford it, but it’s great nonetheless.

[–] rehydrate5503@lemmy.world 1 points 2 weeks ago

I’ll give that a shot, thanks!

[–] rehydrate5503@lemmy.world 1 points 2 weeks ago

Thanks for the detailed reply.

So the command gives me an error that nfs-client cannot be found.

The fstab just has the basic default config. No timeout set.

I considered network issues, though the network seems quite stable for other services. I’m not ruling it out just yet. I have a new switch coming next week, so I’ll test whether the issue persists once that’s in.

I will also give autofs a shot.
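From the docs, my understanding is the autofs setup would look something like this; the paths and server name are placeholders for my setup, untested so far:

```
# /etc/auto.master: hand /mnt/nfs over to autofs, unmount after 60s idle
/mnt/nfs  /etc/auto.nfs  --timeout=60

# /etc/auto.nfs: mount the share on first access
podcasts  -fstype=nfs4,soft,timeo=150  unraid.local:/mnt/user/podcasts
```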

Thanks!

[–] rehydrate5503@lemmy.world 1 points 2 weeks ago

Haha, don’t cut it up just yet! I’ll try some of the other options suggested here, as I’d like to learn what the issue is. Worst case, I’ll try SMB.

[–] rehydrate5503@lemmy.world 2 points 2 weeks ago

Thank you, will try this when I have time later this week.

[–] rehydrate5503@lemmy.world 1 points 2 weeks ago* (last edited 2 weeks ago)

They are mounted via the GUI, but it just puts the mount into fstab. I checked the config there, and it’s just the standard default options for an NFS mount.

Edit: and no, I don’t lose it on reboot; rebooting re-mounts the share correctly.

 

Hi all,

I’m having an issue with an NFS mount that I use for serving podcasts through Audiobookshelf. The issue has been ongoing for months, and I’m not sure where the problem is or how to start debugging.

My setup:

  • Unraid with an NFS share “podcasts” set up
  • Proxmox on another machine, with a VM running Fedora Server 40
  • Storage set up in Fedora to mount the “podcasts” share on boot; works fine
  • A Docker container on the same Fedora VM has Audiobookshelf configured, with the “podcasts” mount passed through in the docker-compose file (excerpt below)
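The relevant bit of the compose file looks roughly like this; the host path is a placeholder, and the image is, as far as I know, the official Audiobookshelf one:

```
services:
  audiobookshelf:
    image: ghcr.io/advplyr/audiobookshelf:latest
    volumes:
      # host NFS mount on the VM : path inside the container
      - /mnt/podcasts:/podcasts
```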

The issue:

The NFS mount randomly drops. When it does, I need to manually mount it again, then restart the Audiobookshelf container (or reboot the VM, but I have other services running on it).

There doesn’t seem to be any rhyme or reason to the unmount. It doesn’t coincide with any scheduled updates or spikes in activity, and there’s no issue on the Unraid side that I can see. Sometimes it drops overnight, sometimes midday. Sometimes it’s fine for a week, other times I’m remounting twice a day. What finally forced me to seek help is that the other day I was listening to a podcast, paused for 10-15 minutes, and couldn’t restart the episode until I went through the manual mount procedure. I checked, and it was not due to the disk spinning down.
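The mount is currently on pure defaults. From what I’ve read, an fstab entry along these lines is supposed to be more tolerant of blips; the server name and paths are placeholders, and I haven’t applied this yet:

```
# /etc/fstab: NFS mounted on demand via systemd, soft-failing instead of hanging
unraid.local:/mnt/user/podcasts  /mnt/podcasts  nfs4  _netdev,soft,timeo=150,retrans=2,x-systemd.automount  0  0
```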

I’ve tried updating everything I could, but the issue persists. I only just updated to Fedora 40. It was on 38 previously and initially worked for many months without issue, then randomly started dropping the NFS mounts (I tried setting up other share mounts and had the same problem). I updated to 39, then 40, and the issue persists.

I’m not great with logs, but I’m trying to learn. Nothing sticks out so far.
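This is about the extent of my log digging so far; a small sketch that pulls anything NFS-ish out of the last day of kernel messages, assuming systemd/journalctl on the Fedora VM:

```python
#!/usr/bin/env python3
"""Scan the last day of kernel messages for NFS-related lines."""
import subprocess

# -k limits output to kernel messages, where the NFS client logs errors
# like "nfs: server ... not responding".
log = subprocess.run(
    ["journalctl", "-k", "--since", "-1d", "--no-pager"],
    capture_output=True, text=True, check=True,
).stdout

for line in log.splitlines():
    if any(tok in line.lower() for tok in ("nfs", "rpc", "not responding")):
        print(line)
```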

Does anyone have any ideas how I can debug and hopefully fix this?

 

Hi folks,

I want to refinish and paint my kitchen cabinets, but before touching the doors I want to ask for opinions on how to repair the peeling on the edges of 3 cabinets. It looks like steam from the range and kettle did this.

I was thinking of trimming off the excess bit that has peeled and expanded, then sanding down and filling with wood/general filler before painting with Bulls Eye 1-2-3. Is there a better approach?

 

Hello, I’m planning a rather large trip later this year and have been searching for something to help me plan and organize it. I’ve come across a few apps that are not exactly privacy-friendly, like TripIt and Wanderlog.

Does anyone know of any self-hosted or otherwise open source alternatives to these apps?

 

Hi everyone,

I’m at my wits’ end here trying to get port forwarding working on my setup with Nginx Proxy Manager (NPM) and OPNsense.

I recently upgraded my networking gear, and everything is working great; I’m loving OPNsense and 10G networking. I’ve had the same port forwarding setup for years and never had issues; the main change was the addition of OPNsense and a switch.

Previous setup (I realize this wasn’t the best):

ISP modem -> DHCPv4 with ports 80/443 forwarded to the ASUS wireless router WAN -> DHCPv4 with ports 80/443 forwarded to the VM on Proxmox running NPM -> NPM set up with hosts to proxy services on other VMs/servers.

This (or a variation thereof) had all been working great for years, along with DDNS set up since I have a dynamic IP.

New setup:

ISP modem -> DHCP off, with ports 80/443 forwarded to the OPNsense WAN via MAC address -> OPNsense NAT port forwarding set up to the NPM host/port; the rest is the same as before.

The port forward settings are the standard ones I’ve found in guides: WAN address, any source/port, redirect to the NPM host and ports. Tried the domain I usually use, no luck. A port checker shows the ports are closed.
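Instead of the web port checkers, a quick check like this from a connection outside the LAN (e.g. a phone hotspot, so it isn’t just testing NAT reflection) should tell the same story; example.duckdns.org is a stand-in for my DDNS hostname:

```python
#!/usr/bin/env python3
"""Check whether the forwarded ports accept TCP connections from outside."""
import socket

HOST = "example.duckdns.org"  # stand-in for my DDNS hostname

for port in (80, 443):
    try:
        # A completed handshake means the whole forward chain passes traffic.
        with socket.create_connection((HOST, port), timeout=3):
            print(f"{HOST}:{port} open")
    except OSError as exc:
        print(f"{HOST}:{port} closed ({exc})")
```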

Tried the following:

  1. DMZ on the ISP modem, keeping the WAN IP default/automatic and adding OPNsense to the DMZ; no change.
  2. Advanced DMZ on the ISP modem, with WAN set to the external IP; no change.
  3. Same as 2, but changed the OPNsense WAN settings from DHCPv4 to PPPoE and added the ISP login info. Received a new IP, updated DDNS, still no change.
  4. Checked over the port forwarding settings, enabled NAT reflection, still nothing.

In between all these steps, I rebooted OPNsense, Proxmox, the switches, etc.
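One idea I haven’t tried yet: taking NPM out of the equation entirely by pointing the forward at a bare TCP listener on the same VM, something like this rough sketch (needs root for ports under 1024):

```python
#!/usr/bin/env python3
"""Bare listener to verify forwarded traffic reaches this host at all."""
import socket

PORT = 443  # temporarily aim the OPNsense port forward at this host/port

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    print(f"listening on {PORT}; hit the WAN address from outside...")
    conn, addr = srv.accept()
    with conn:
        print(f"connection from {addr}: the forward works, so look at NPM")
```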

Any ideas on what I could try next? All of the local networking and external connections work awesome; it’s just the port forwarding that’s the last piece. Thanks!

Edit 2023-01-03:

I finally solved this; it turned out the OPNsense and NPM configuration was all correct.

The problem was a glitch in docker compose/Portainer. I had the ports in docker compose set to 80:80/443:443, but when the container was deployed, it assigned 1880:80/18443:443 because of…reasons, and I didn’t notice until going through it all line by line 🤦.

Redeploying the stack/container didn’t solve it, so I changed the time zone to another city, redeployed, and voilà, everything works perfectly, as it should!
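For anyone who lands here with the same symptom, the mapping that finally stuck is just the plain quoted host:container form in the compose file (service name abbreviated here):

```
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"     # host:container; verify with `docker ps` after deploying
      - "443:443"
```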

20
submitted 1 year ago* (last edited 1 year ago) by rehydrate5503@lemmy.world to c/linux@lemmy.ml
 

Hi,

I’ve been running Linux for some time, currently on Nobara and happy with it. It’s running on a 1TB NVMe drive, with a second 1TB NVMe drive for extra storage for games, etc., both Gen 3.

I find myself running out of room and just picked up 2TB and 1TB NVMe drives, both Gen 4, and am thinking about what the best partition layout would be. The 2x 1TB Gen 3 drives will be moved to my NAS as a cache pool.

The PC is used for gaming, photo/video editing and web development.

I guess the options would be:

  1. OS on the 2TB, and the 1TB for extra storage; call it a day.
  2. OS on the 1TB, and the 2TB for extra storage.
  3. Divvy up the 1TB to have a partition for /, another for /home, another for /var, and maybe another for games; then on the 2TB have one big partition for games and a scratch disk for videos.
  4. Same as option 3, but swap the drives around.

What would YOU do in this situation? I’m leaning towards option 3 or a variation thereof, as it gives me the versatility to hop to a new distro relatively easily if I want, plus one big partition for game storage/video scratch.
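To make option 3 concrete, the end state I’m picturing is something like this; device names, sizes, and filesystems are just for illustration, and I’ve left out the EFI/boot partitions:

```
# /etc/fstab sketch for option 3: 1TB split up, 2TB kept whole
/dev/nvme0n1p2  /             ext4  defaults  0  1   # ~100GB root
/dev/nvme0n1p3  /home         ext4  defaults  0  2   # most of the 1TB
/dev/nvme0n1p4  /var          ext4  defaults  0  2   # logs, containers
/dev/nvme1n1p1  /mnt/scratch  ext4  defaults  0  2   # 2TB games + video scratch
```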

My mobo only supports 2 NVMe drives unfortunately (I regret not spending an extra $60-70 on a better one), but I have a USB-C NVMe enclosure that I might use with a spare 1TB drive that will be removed from the NAS.

Any thoughts?

Edit: sorry, forgot to reply. Thank you all for the input; this was great information, and I took a deep dive researching some solutions. I ended up just keeping it simple and went with option 2: the 1TB as the OS drive and the 2TB as additional storage, no extra partitions.

74
submitted 1 year ago* (last edited 1 year ago) by rehydrate5503@lemmy.world to c/selfhosted@lemmy.world
 

Hi everyone,

I’m not sure if this is the right community, but the home networking magazines seem to be pretty dead. I’m a bit green with regard to networking, and am looking for help to see if the plan I’ve come up with will work.

The main image in the post is my current network setup. Basically, the ISP modem/router is just a passthrough, and its 10Gb port is connected to my Asus router, which has the DHCP server activated. All of my devices, home lab, and smart home devices are connected to the Asus router via either WiFi or Ethernet. This works well, but I have many neighbours close by, and with my 30+ WiFi devices, I think things aren’t working as well as they could be. I guess you could say one of my main motivations for messing with this is to clean it up and move all possible devices to Ethernet.

The planned new setup is as follows, but I’m not sure if it can even function this way.

https://i.postimg.cc/7YftSFt6/IMG-9281.jpg

ISP modem/router > 2.5Gb unmanaged switch > 2.5Gb-capable devices (NAS, hypervisor, PCs) connect directly here, along with a 1Gb managed switch to handle the DHCP > the Asus router would connect to the managed switch to provide WiFi, and the remaining wired devices would all connect to the managed switch as well.

Any assistance would be appreciated! Thanks!

Edit: fixed second image url

 

Hello!

I’ve been running unRAID for about two years now, and recently had the thought of using some spare parts to split my server into two based on use. The server was used for personal photos, videos, documents, general storage, projects, AI art, media, a multitude of Docker containers, etc. But I was thinking it’s a bit wasteful to run, 24/7, parts that I use once or twice a week or less; there’s just no need for the power use and the wear and tear on the components. So I was thinking of separating this into one server for storage of photos, videos, and documents, powered on when needed, and a second server for the media, which can be accessed 24/7.

Server 1 (photos, videos, documents, AI experiments): 1x 16TB parity, 2x 14TB array. i7-6700K, 16GB RAM.

Server 2 (media, Docker): 1x 10TB parity, 1x 10TB and 2x 6TB array. Cheap 2-core Skylake CPU from spare parts, 8GB RAM.

With some testing, server 2 only pulls about 10W while streaming media locally, which is a huge drop from the 90+ watts at idle it was running when I had everything combined.

I was hoping to use an old laptop I have lying around for the second server instead; it has an 8-core CPU, 16GB RAM, and runs at 5W idle. I have a little NVMe-to-SATA adapter that works well, but the trouble is powering the drives reliably.

Anyway, the pros of separating it out: lower power usage, and less wear and tear on the HDDs, so I’ll have to replace them less frequently.

The cons: running and managing two servers.

Ideally, I’d like to run server 1 on the cheap 2-core Skylake CPU (it’s only serving some files, after all), server 2 on the laptop with 8 cores (though I still have the issue of powering the drives), and then take the i7-6700K for a spare gaming PC for the family.

The alternative would be to just combine everything back into one server and manage the shares better, have drives online only when needed, etc. But I had issues with this, and would sometimes log into the web UI to find all drives spun up even though nothing was being accessed.

Anyways, I hope all of that makes sense. Any insight or thoughts would be appreciated!

 

