corroded

joined 1 year ago
[–] corroded@lemmy.world 2 points 3 days ago

I'm very far away from "young," but I was once. Even in my early 20s, this would have seemed terrible. Sure, you have access to an interesting city, but you have no space for anything you can call your own. Even a rented studio anywhere else gives you somewhere to have a home you can call yours.

[–] corroded@lemmy.world 225 points 3 days ago (17 children)

I am American, and I have always loved my country. Until now, I've never been ashamed to call myself patriotic. My thought has always been that there will always be uninformed, uneducated assholes who vote against their own self-interest and the interests of their own country.

This election is different, though. We knew exactly what we were getting if we re-elected Trump. We responded by not only electing him in a landslide, but by handing the House and the Senate to the Republicans, too. It was a clear message. America is not a nation of mostly good people with a few vocal "bad apples." We are a nation of hateful, scared bigots, and we proved it in a big way.

This was a turning point in American history, and the majority of us sent a clear message to our fellow citizens and to the world. America is not a nation of mostly good people being overshadowed by a media that covers the loudest assholes in the room. America is a nation whose majority supports exactly what the "crazy" Republicans are saying. I would feel better if Trump had lost the popular vote but won the electoral vote, but that's not what happened.

This isn't an election where I've only lost faith in the democratic process or in my fellow citizens, although both are true. This is an election where I've lost faith in my country as a whole. I have never been proudly Republican or proudly Democrat, but I've always been proudly American. Now I'm just... sad. I don't expect I'll see a day any time soon where I can honestly say I'm proud of my country. The best I can do is retreat into my own personal bubble, live my life, and watch the world burn around me until the flames consume everything I care about.

[–] corroded@lemmy.world 4 points 5 days ago

Wouldn't find any. It's all in his diaper.

[–] corroded@lemmy.world 69 points 5 days ago (8 children)

If you're living in a place where renting a pod with less space than a prison cell costs nearly as much as a studio somewhere else... move.

[–] corroded@lemmy.world 2 points 5 days ago (1 children)

I think at least for me, you really nailed it when you said that politicians are like celebrities to a lot of people. I personally have just never had any interest in celebrities. Music is a big thing for me, but if I had the opportunity to go meet one of my favorite artists, I wouldn't. What am I going to do, say "hey, I really like your music," and that's the end of it? There's no point. I enjoy the art that they make, but meeting them briefly in-person isn't going to change anything for them or for me. It'd be a better use of my time to stay home and do just about anything else, maybe even stay home and listen to one of their albums.

Politicians are the same. I'm not buying their album, I'm voting for them. They don't produce an entertainment product, but they produce a change in my country (be it good or bad) that directly affects me. It still doesn't make the slightest bit of difference to me or to them if I meet them in person or not. I can respect what they do professionally without having a desire to shake hands.

 

At least in this post, I'm not advocating for any particular political position; I mean for this to be a more generalized discussion.

I have never understood what prompts people to attend political rallies. None of the current US political candidates 100% align with my views, but I am very confident that I made the right choice in who I voted for. That is to say, I'd consider myself a strong supporter of [name here].

To me, it feels like attending a political rally is like attending a college lecture. You have a person giving you information, but you don't gain anything by hearing it in-person as opposed to reading it or watching a recording. If I want to learn something, it's much more comfortable for me to read an article or watch a video in the comfort of my own home. If I want to understand what a political candidate stands for, I'd much rather watch a recording of a town-hall meeting or read something they wrote rather than taking the time to drive to a rally, get packed in with a bunch of other people, and simply stand and listen.

I understand concerts. Hearing music live is a vastly different experience from listening to a recording. Same with movies; most of us don't have an IMAX theater at home. When you're trying to gather information, though, what's the draw in standing outside in a crowd and listening to it in person?

[–] corroded@lemmy.world 12 points 6 days ago

I haven't had cable/satellite TV in well over a decade, probably more. When I say I'm "watching TV," rather than "watching videos" or "watching YouTube," it means I'm watching something episodic, created by a major studio.

[–] corroded@lemmy.world 15 points 1 week ago

Manner of death: "Natural." That's bullshit. Manner of death: "Religious" or "Political."

[–] corroded@lemmy.world 62 points 1 week ago* (last edited 1 week ago) (8 children)

The biggest difference I've noticed is that while Reddit may have a lot of large active communities, I would rarely get a quality response if I posted a question or a discussion topic.

Here, I can post to a community that hasn't had a new post in a few days, and within an hour I have several people offering help or discussion.

Reddit is far more active, but Lemmy users are far more helpful.

[–] corroded@lemmy.world 21 points 1 week ago (1 children)

I'm curious what actually happened here. If he stopped payments for his truck, it would be a repossessor taking the truck, not police. If it's unregistered, it's still legal to use on private property (like a farm truck).

I'm guessing his truck was involved in a crime of some sort, or more likely, there's a lot of important information missing.

[–] corroded@lemmy.world 19 points 1 week ago (11 children)

Why is kernel-level anti-cheat even a thing?

If I were trying to prevent cheating, I'd hash the relevant game files, encrypt the values, and hard-code them into the executable. Then, when the game is launched, calculate the hashes of the existing files and compare them to the saved values.

What is gained by running anti-cheat in kernel mode? I only play single-player games, so I assume I'm missing something.

[–] corroded@lemmy.world 22 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

I use Jellyfin heavily, and it's a fantastic project, but I really wish they would address the issues with transcoding, specifically the ability to force it on.

My library contains a decent amount of HDR (lots of DV) content. On my TVs (using Nvidia Shield), it will direct-play the DV content, resulting in a green picture. If I turn on burned-in subtitles or drop the bitrate and FORCE it to transcode, it looks perfect. I've resorted to just setting a low bitrate on clients so it always transcodes.

I'm really hoping a future version gives us the ability to set more fine-grained transcoding settings per-client. Even the ability to disable direct-play completely would be fantastic.

[–] corroded@lemmy.world 55 points 2 weeks ago (8 children)

Something isn't adding up here. The first article I read about this said that there were employees nearby who saw her but were unable to open the door. If I see someone being literally cooked, I'm going to grab the closest metal object and smash the fuck out of the door. I would imagine most people would have the same reaction. Even if it's a metal door, 4 or 5 people could almost certainly pry it open.

 

This is more "home networking" than "homelab," but I imagine the people here might be familiar with what I'm talking about.

I'm trying to understand the logic behind ISPs offering asymmetrical connections. From a usage standpoint, the vast majority of traffic goes to the end-user instead of from the end-user. From a technical standpoint, though, it seems like it would be more difficult and more expensive to offer an asymmetrical connection.

While consumers may be connected via fiber, cable, DSL, etc, I assume that the ISP has a number of fiber links to "the internet." Those links are almost surely some symmetrical standard (maybe 40 or 100Gb). So if they assume that they can support 1000 users at a certain download speed, what is the advantage of limiting the upload? If their incoming trunks can support 1000 users at 100Mb download, shouldn't it also support 1000 users at 100Mb upload since the trunks themselves are symmetrical?

Limiting the upload speed to a different rate than download seems like it would just add a layer of complexity. I don't see a financial benefit either; if their links are already saturated for download, reducing upload speed doesn't help them add additional users. Upload bandwidth doesn't magically turn into download bandwidth.

Obviously there's some reason for this, but I can't think of one.

 

I generally try to stay informed on current events. With the exception of what gets posted here, I normally get my news from CNN. I tend to lean left politically, but not always.

The problem I always run into is that every news site I read, regardless of where they stand on the political spectrum, is always filled with pointless bullshit. Specifically, sports, celebrity news, and product placement. "Some shitty pop singer is dating some shitty actor" or "These are our recommendations for the best mass-produced garbage-quality fast fashion from Temu" or "Some overpaid dickhead threw a ball faster than some other overpaid dickhead."

What I'd love to find is a news source that's just news that matters. No celebrity gossip, sports, opinion pieces, etc. Just real events that have an impact on some part of the world. Legislation, natural events, economic changes, wars, political changes, that kind of thing.

Does this exist, or is all journalism just entertainment?

 

A few months ago, I upgraded all my network switches. I have a 16-port SFP+ switch and a 1Gb switch (LAGGed to the SFP+ switch with two DACs). These work perfectly, and I'm really happy with the setup so far.

My main switch ties into a remote switch in another building over a 10Gb fiber line, and this switch ties into another switch of the same model (on a different floor) over a Cat6e cable. These switches are absolute garbage: https://www.amazon.com/gp/product/B084MH9P8Q

I should have known better than to buy a cheap off-brand switch, but I had hoped that Zyxel was a decent enough brand that I'd be okay. Well, you get what you pay for, and that's $360 down the toilet. I constantly have dropped connections, generally resulting in any attached devices completely losing network connectivity, or if I'm lucky, dropping down to dial-up speeds (I'm not exaggerating). The only way to fix it is to pull the power cable to the switch. Even under virtually no load, the switch gets so hot that it's painful to touch. The fact that my connection is far more stable when the switch is sitting directly in front of an air conditioner tells me just about all I need to know.

I'm trying to find a pair of replacement switches, but I'm really striking out. I have two ancient Dell PowerConnect switches that are rock solid, but they're massive, they sound like jet engines, and they use a huge amount of power. Since these are remote from my homelab and live in occupied areas, they just won't work. All I need is a switch that has:

  • At least 2 SFP+ ports (or 1 SFP+ port for fiber and a 10Gb copper port)
  • At least 4 1Gb ports (or SFP ports; I have a pile of old 1Gb SFP adapters)
  • Management/VLAN capability

Everything I find online is either Chinese white-label junk or much larger than I need. A 16-port SFP+ switch would work, but I'd never use most of the ports, and I'd be wasting a lot of money on overkill hardware. As an example, one of these switches is in my home office; it exists solely so I have a connection between my server rack, two PCs, and a single WAP. I am never going to need another LAN connection in my home office; any new hardware is going to go in the server rack, but I do need 10Gb connectivity on at least one of those PCs.

Does anyone have a suggestion for a small reliable switch that has a few SFP+ ports, is made by a reputable brand, and isn't a fire hazard?

 

I have been using the BlueIris NVR integration (from HACS) for quite some time, and it works great for triggering BI from HA. I'm trying to do the opposite now: fire off automations in HA whenever BI detects motion on one of my cameras.

I've never used MQTT before, so I'm learning as I go, but I think I have most of my setup configured properly. I've installed Mosquitto and the MQTT integration in HA. I've configured BI to connect to HA, and running "Test" in the "Edit MQTT Server" menu in BI shows a good connection and no errors. I've set my cameras to post an MQTT event when the alert is triggered (and I've verified that the alerts are in fact being triggered).

Nothing happens in HA, though. The "Motion" sensor for my camera in HA stays at "Clear." In fact, the history shows no change at all, ever.

I have the events in BI set up as follows:

On Alert: MQTT Topic: BlueIris/&CAM/Status, Payload: { "type": "&TYPE", "trigger": "ON" }

On Reset: Exactly the same, but with ON changed to OFF.
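For reference, a manually defined MQTT binary sensor in HA's configuration.yaml that matches that topic/payload shape would look roughly like this (the "FrontDoor" camera short name is a placeholder; this bypasses autodiscovery entirely):

```yaml
mqtt:
  binary_sensor:
    - name: "FrontDoor Motion"
      state_topic: "BlueIris/FrontDoor/Status"
      # Pull the "trigger" field out of the JSON payload BI sends.
      value_template: "{{ value_json.trigger }}"
      payload_on: "ON"
      payload_off: "OFF"
      device_class: motion
```

A manual sensor like this is also a useful debugging tool: if it updates while the integration's sensor doesn't, the problem is in the integration rather than the MQTT plumbing.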

I've tried changing the MQTT autodiscovery prefix in HA from "homeassistant" to "BlueIris," and it made no difference. The Mosquitto logs show a login from HA, so I feel like I'm close, but I'm not sure where else to look.

Edit: I installed MQTT explorer, and I've verified that the messages are making it to Mosquitto, and they appear to be correctly formatted.

UPDATE: I set the MQTT integration to listen to the MQTT messages coming from BI, and sure enough, they were coming through just fine. For some reason, the BI integration just wasn't seeing them. Digging through the system logs, I saw some errors "creating a binary sensor" coming from the BI integration. The only thing I can think is that because I didn't have MQTT set up when I first installed the BI integration, something went wrong with the config (although I had already rebooted the system several times). I re-downloaded the BI integration and re-installed it, and now everything works perfectly.

 

This isn't strictly "homelab" related, but I'm not sure if there's a better community to post it.

I'm curious what kind of real-world speeds everyone is getting over their wireless network. I was testing tonight, and I'm getting a max of 250Mbit down/up on my laptop. I have 4 Unifi APs, each set to 802.11ac/80MHz, and my laptop supports 2x2 MIMO. Testing on my phone (Galaxy S23) gives basically the exact same result.

The radio spectrum around me is ideal for WiFi; on 5GHz, there is no AP close enough for me to detect. With an 80MHz channel width, I can space all 4 of my APs so that there's no interference (using a non-DFS channel for testing, btw).

Am I wasting my time trying to chase higher speeds with my current setup? What kind of speeds are you getting on your WiFi network?

23
submitted 5 months ago* (last edited 5 months ago) by corroded@lemmy.world to c/cpp@programming.dev
 

I have been programming in C++ for a very long time, and like a lot of us, I have an established workflow that hasn't really changed much over time. With the exception of bare-metal programming for embedded systems, though, I have been developing for Windows that entire time. With the recent "enshittification" of Windows 11, I'm starting to realize that it's going to be time to make the switch to Linux in the very near future. I've become very accustomed to (spoiled by?) Visual Studio, though, and I'm wondering about the Linux equivalent of features I probably take for granted.

  • Debugging: In VS, I can set breakpoints, step through my code line-by-line, pause and inspect the contents of variables on the fly, switch between threads, etc. My understanding of Linux programming is that it's mostly done in a code editor, then compiled on the command line. How exactly do you debug code when your build process is separate from your code editor? Having to compile my code, run it until I find a bug, then open it up in a debugger and start all over sounds extremely inefficient.
  • Build System: I'm aware that cmake exists, and I've used it a bit, but I don't like it. VS lets me just drop a .h and .cpp file into the solution explorer and I'm good to go. Is there really no graphical alternative for Linux?

It seems like Linux development is very modular; each piece of the development process exists in its own application, many of which are command-line only. Part of what I like about VS is that it ties this all together into a nice package and allows interoperability between the functions. I can create a new header or source file, add some code, build it, run it, and debug it, all within the same IDE.

This might come across as a rant against Linux programming, but I don't intend it to. I guess what I'm really looking for is suggestions on how to make the transition from a Visual Studio user to a Linux programmer. How can I transition to Linux and still maintain an efficient workflow?

As a note, I am not new to Linux; I have used it extensively. However, the only programming I've done on Linux is bash scripting.

 

I've noticed recently that my network speed isn't what I would expect from a 10Gb network. For reference, I have a Proxmox server and a TrueNAS server, both connected to my primary switch with DAC. I've tested the speed by transferring files from the NAS with SMB and by using OpenSpeedTest running on a VM in Proxmox.

So far, this is what my testing has shown:

  • Using a Windows PC connected directly to my primary switch with CAT6: OpenSpeedTest shows around 2.5-3Gb to Proxmox, which is much slower than I'd expect. Transferring a file from my NAS hits a max of around 700-800MB/s (bytes, not bits), which is about what I'd expect given hard drive speed and overhead.
  • Using a Windows VM on Proxmox: OpenSpeedTest shows around 1.5-2Gb, which is much slower than I would expect. I'm using VirtIO network drivers, so I should realistically only be limited by CPU; it's all running internally in Proxmox. Transferring a file from my NAS hits a max of around 200-300MB/s, which is still unacceptably slow, even given the HDD bottleneck and SMB overhead.

The summary I get from this is:

  • The slowest transfer rate is between two VMs on my Proxmox server. This should be the fastest transfer rate.
  • Transferring from a VM to a bare-metal PC is significantly slower than expected, but better than between VMs.
  • Transferring from my NAS to a VM is faster than between two VMs, but still slower than it should be.
  • Transferring from my NAS to a bare-metal PC gives me the speeds I would expect.

Ultimately, this shows that the bottleneck is Proxmox. The more VMs involved in the transfer, the slower it gets. I'm not really sure where to look next, though. Is there a setting in Proxmox I should be looking at? My server is old (two Xeon 2650v2); is it just too slow to pass the data across the Linux network bridge at an acceptable rate? CPU usage on the VMs themselves doesn't get past 60% or so, but maybe Proxmox itself is CPU-bound?

The bulk of my network traffic is coming in-and-out of the VMs on Proxmox, so it's important that I figure this out. Any suggestions for testing or for a fix are very much appreciated.
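One way to narrow this down is to take disks and SMB out of the picture entirely with a memory-to-memory iperf3 test between each pair of endpoints; a sketch, assuming iperf3 is installed on both ends (the server address is a placeholder):

```shell
# On one endpoint (a VM, the Proxmox host, or a bare-metal PC), run a server:
#   iperf3 -s
# From the other endpoint, run a multi-stream test against it:
#   iperf3 -c 192.168.1.10 -P 4 -t 10
#
# Testing each pair (VM<->VM, VM<->host, VM<->bare metal) isolates the slow
# hop. If VM<->VM is slow even with iperf3, the bridge/virtio path (CPU-bound
# vhost work) is the suspect, not storage or SMB overhead.
IPERF3_BIN="$(command -v iperf3 || echo "iperf3 not installed")"
echo "$IPERF3_BIN"
```

Comparing single-stream (-P 1) against multi-stream results is also informative: if one stream is slow but four streams saturate the link, the limit is per-connection CPU work rather than the link itself.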

 

In C++17, std::any was added to the standard library. Boost had their own version of "any" for quite some time before that.

I've been trying to think of a case where std::any is the best solution, and I honestly can't think of one. std::any can hold a variable of any type at runtime, which seems incredibly useful until you consider that at some point, you will need to actually use the data in std::any. This is accomplished by calling std::any_cast with a template argument that corresponds to the correct type held in the std::any object.

That means that although std::any can hold an object of any type, the set of valid types must be known at the point where the value is any_cast out of the std::any object. While the set of types that can be assigned to the object is unlimited, the set of types that can be extracted from it is still finite.

That being said, why not just use a std::variant that can hold all the possible types that could be any_cast out of the object? Set a type alias for the std::variant, and there is no more boilerplate code than you would have otherwise. As an added benefit, you ensure type safety.

 

I'm looking for a portable air conditioner (the kind with 1 or 2 hoses that go to outside air). The problem I'm running into is that every single one I find has some kind of "smart" controller built in. The ones with no WiFi connectivity still have buttons to start/stop the AC, meaning that a simple Zigbee outlet switch won't work. I could switch the AC off, but it would require a button-press to switch it back on. The ones with WiFi connectivity all require "cloud" access; my IoT devices all connect to a VLAN with no internet access, and I plan to keep it that way.

I suppose I could hack a relay in place of the "start" button, but I'd really rather just have something I can plug in and use.

I can't use a window AC; the room has no windows. I'll need to route intake/exhaust through the wall. So far, I can't find any "portable" AC that will work for me.

What I'm looking for is a portable AC that either:

  • Connects to WiFi and integrates with HA locally.
  • Has no connectivity but uses "dumb" controls so I can switch it with a Zigbee outlet switch.

Any ideas?

 

Yesterday, Brian Dorsey was executed for a crime he committed in 2006. By all accounts, during his time in prison, he became remorseful for his actions and was a "model prisoner," to the point that multiple corrections officers backed his petition for clemency.

https://www.cnn.com/2024/04/09/us/brian-dorsey-missouri-execution-tuesday/index.html

In general, the media is painting him as the victim of a justice system that fails to recognize rehabilitation. I find this idea disgusting. Brian Dorsey, in a drug-induced stupor, murdered the people who gave him shelter. He brutally ended the life of a woman and her husband, and (allegedly) sexually assaulted her corpse. There is an argument that he had ineffective legal representation, but that doesn't negate the fact that he is guilty.

While I do believe he could have been released or had his sentence commuted to life in prison, and he might well have been a model citizen, that would have been a perversion of justice. Actions someone takes after committing a barbaric act do not undo the damage that was done. Those two individuals are still dead, and he needed to face the ramifications of his actions.

Rehabilitation should not be an option for someone who committed crimes as depraved as he did. Quite frankly, a lethal injection was far less than what he deserved, given the horror he inflicted on others. If the punishment should fit the crime, then he was given far more leniency than was warranted.

 

I just set up a local instance of Invidious. I created an account, exported my YouTube subscriptions, and imported them into Invidious. The first time I tried, it imported 5 subscriptions out of 50 or so. The second time I tried, it imported 9.

Thinking there might be a problem with the import function, I decided to manually add each subscription. Every time I click "Subscribe," the button will switch to "Unsubscribe," then immediately switch back to "Subscribe." If I look at my subscriptions, the channel was never added.

My first thought was a problem with the PostgreSQL database, but that wouldn't explain why some subscriptions work when I import them.

I tried rebooting the container, and it made no difference. I'm running Invidious in an Ubuntu 22.04 LXC container in Proxmox. I installed it manually (not with Docker). It has 100GB of HDD space, 4 CPU cores, and 8GB of memory.

What the hell is going on?
