jemikwa

joined 1 year ago
[–] jemikwa@lemmy.blahaj.zone 1 points 1 month ago

Discord server owners can choose to require account verification before members can join, as an anti-bot measure.

[–] jemikwa@lemmy.blahaj.zone 2 points 2 months ago* (last edited 2 months ago)

I use a PS5 controller for all my gaming needs and it works great on Linux (Kubuntu/Nobara) and Steam Deck. I go wired when playing on my Linux desktop, but on my Steam Deck it's over Bluetooth while docked. Still works perfectly fine. I even played CrossCode with my controller just fine on both systems.
I primarily use it on my desktop for FFXIV, which is why I go wired there. Bluetooth can be squirrely if the game isn't launched through Steam

[–] jemikwa@lemmy.blahaj.zone 10 points 2 months ago* (last edited 2 months ago)

There is nothing you can do about the unsuccessful login attempts against your email address. My original email address has been in so many breaches that it's constantly being brute forced by hackers outside the US.

You already have MFA, so the only other thing I can think of is to use an incredibly long random password on your account and make sure the "forgot my password" recovery flows don't have an easy bypass. A less secure backup email address, personal details that can be guessed from past breaches, or easily guessable/researchable security questions could all be used to gain access even with MFA. Make security question answers random or nonsensical if possible, and don't post those details on social media. And finally, secure your password manager in a similar manner.
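
If you want a concrete picture of what I mean by "long and random," here's a rough Python sketch (just an illustration; any decent password manager's generator does the same thing, and the word list below is a made-up stand-in for a real one):

```python
import secrets
import string

# Character set and length are arbitrary choices here; longer is cheap
# when a password manager is the thing typing it for you.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 32) -> str:
    """Generate a long random password for an account you never type by hand."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def nonsense_security_answer(words: int = 4) -> str:
    """Make security question 'answers' that aren't researchable - store them in your password manager."""
    # Tiny hypothetical word list just for the example; a real one (e.g. a diceware list) would be much larger.
    wordlist = ["otter", "granite", "paper", "violet", "comet", "harbor", "quilt", "saddle"]
    return "-".join(secrets.choice(wordlist) for _ in range(words))

print(random_password())           # paste into the account, never reuse it
print(nonsense_security_answer())  # e.g. 'comet-quilt-otter-harbor' as your "mother's maiden name"
```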

[–] jemikwa@lemmy.blahaj.zone 2 points 3 months ago (1 children)

I neeeeeeed it. This looks a lot like CrossCode but refined. It has all the puzzles and scenery and build trees and I want to play it now

[–] jemikwa@lemmy.blahaj.zone 1 points 3 months ago

I live in Texas which is pretty humid too.
I try to avoid brushing my hair in any way, even in the shower, since brushing keeps the curls from forming on their own. The most I do is detangle with my fingers while the conditioner is in. If there are really stubborn knots or my fingers are sore/injured, I have a wide tooth comb to spot treat tangles.
Personally, when I shower in the evening, my hair is always a mess the next morning. I've never had luck keeping it from turning into a medusa mess overnight. If I want my natural curls for something that day, I shower during the day and let my hair air dry after plopping, or I partly diffuse to set in some curls and let the rest air dry.

[–] jemikwa@lemmy.blahaj.zone 48 points 3 months ago

Main character

[–] jemikwa@lemmy.blahaj.zone 12 points 3 months ago

It's definitely not the latter. It's a fancy antivirus known as an EDR - Endpoint Detection and Response. Purely security software for defending against cyber attacks.

[–] jemikwa@lemmy.blahaj.zone 21 points 3 months ago* (last edited 3 months ago) (4 children)

I want to clarify something you hinted at in your post, and that I've seen in other posts too. This isn't a cloud failure or anything remotely related to it, but one piece of a company's security software suite causing crippling issues.

I apologize ahead of time; when I started typing this, I didn't think it would get this long. This is pretty important to me, and I feel like it can help clear up a lot of misinformation about how IT and software work in an enterprise.

Crowdstrike is an EDR, or Endpoint Detection and Response, software. Basically a fancy antivirus that isn't file-signature based but action-monitoring based. Like all AVs, it receives regular definition updates, around once an hour, to anticipate threat actors using zero-day exploits. This is the part that failed: the hourly update channel pushed a bad update. Some computers escaped unscathed because they checked in either right before the bad update was pushed or right after it was pulled.
Another facet of AVs is that they work by monitoring every part of a computer. This requires drivers that integrate into the core OS, which were updated to accompany the definition update. Anything that integrates that closely can cause issues if it isn't made right.
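
To make the "some machines escaped" timing concrete, here's a toy Python sketch. The push/pull times below are made up for illustration; the real ones are in CrowdStrike's post-incident writeup.

```python
from datetime import datetime, timedelta

# Illustrative window only - roughly "overnight, around 4:00-5:30 UTC on July 19".
BAD_UPDATE_PUSHED = datetime(2024, 7, 19, 4, 0)
BAD_UPDATE_PULLED = datetime(2024, 7, 19, 5, 30)
CHECK_IN_INTERVAL = timedelta(hours=1)

def got_bad_update(last_check_in: datetime, powered_on: bool) -> bool:
    """A host is hit only if it was on and one of its hourly check-ins landed inside the bad window."""
    if not powered_on:
        return False
    t = last_check_in
    while t < BAD_UPDATE_PULLED:
        if BAD_UPDATE_PUSHED <= t < BAD_UPDATE_PULLED:
            return True
        t += CHECK_IN_INTERVAL
    return False

# Last checked in at 3:10 UTC -> next poll at 4:10, inside the window, so it pulled the bad file.
print(got_bad_update(datetime(2024, 7, 19, 3, 10), powered_on=True))   # True
# Powered off overnight -> never pulled it at all.
print(got_bad_update(datetime(2024, 7, 19, 3, 10), powered_on=False))  # False
```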

Before this incident, Crowdstrike was regarded as the best in its class of EDR software. This isn't something companies would swap out willy-nilly just because they feel like it. Implementing a new security software across all systems in an org is a huge undertaking, one that I've been a part of several times. It sucks to not only rip out the old software but also integrate the new software and make sure it doesn't mess up other parts of each system. Basically, companies wouldn't keep using CS unless they were too lazy to change away, or they thought it really was that good.
EDR software plays a huge role in securing a company's systems. Companies need this tech not just for security, but also because without it they risk failing critical audits or being unable to qualify for cybersecurity insurance. Any similar software could have issues - Cylance, Palo Alto Cortex XDR, and Trend Micro are all very strong players in the field too and are just as prone to having issues.
And it's not just EDR software that could cause this kind of failure, but lots of other tech. Anything that does regular definition or software updating can't, or shouldn't, be gated by the enterprise, because the frequency and urgency of each update makes vetting every one impractical. Firewalls come to mind, but there could be a lot of systems at risk of failing due to a bad update. Of course, it should fall on the enterprise to provide the manpower to do this, but that's highly unlikely when most IT teams are already skeleton crews subject to heavy budget cuts.

So with all that, you might ask, "how is this mitigated?" It's a very good question. The most obvious solution, "don't use one software on all systems," is more complicated and expensive than you might think. Imagine bug testing your software for two separate web servers - one uses Crowdstrike, Tenable, Apache, Python, and Node.js, and the other uses TrendMicro, Qualys, nginx, PHP, and Rust. The amount of time wasted on replicating behavior would be astronomical, not to mention the two stacks are unlikely to have feature parity. At what point do you draw the line and call redundant tech stacks too burdensome? That's the risk a lot of companies take on when choosing a vendor.
On a more relatable scale, imagine you work at a company where desktop email clients are the most important part of your job. One half of the team uses Microsoft Outlook and the other half uses Mozilla Thunderbird. Neither client has feature parity with the other, and one will naturally be superior to the other. But because the org is afraid of everyone getting locked out of email at once, you happen to be stuck using "the bad" software. Not a very good experience for your team, even if the setup is more resilient overall.

A better solution is improved BCDR (business continuity / disaster recovery) processes, most notably backup and restore testing. For my part in this incident, only a handful of my servers were affected, for which I am very grateful. I was able to recover 6 of the 7 affected servers, but the last is proving a little trickier. The best fix would be to restore that server to a former state and carry on, but in my haste to set up the env, I neglected to configure snapshotting and other backup processes. It won't be the end of the world to recreate this server, but it could have been far worse if it had any critical software on it. I do plan on using this event to review all the systems I have a hand in and assess redundancy at each level - cloud, region, network, instance, and software (a rough example of the kind of snapshot audit I mean is sketched below).
Laptops are trickier to fix because of how distributed they are by nature. But they can still be improved by taking regular backups of users' files and testing that BitLocker is properly configured and managed.
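
For anyone in a similar spot, this is roughly the kind of check I mean - a sketch assuming AWS EBS and boto3, but the same idea applies to any platform:

```python
from datetime import datetime, timedelta, timezone

import boto3

# Sketch only: flag EBS volumes with no snapshot newer than MAX_AGE.
# Assumes default credentials/region; doesn't handle pagination for large fleets.
MAX_AGE = timedelta(days=7)

def volumes_missing_recent_snapshots():
    ec2 = boto3.client("ec2")
    cutoff = datetime.now(timezone.utc) - MAX_AGE
    stale = []
    for vol in ec2.describe_volumes()["Volumes"]:
        snaps = ec2.describe_snapshots(
            Filters=[{"Name": "volume-id", "Values": [vol["VolumeId"]]}],
            OwnerIds=["self"],
        )["Snapshots"]
        newest = max((s["StartTime"] for s in snaps), default=None)
        if newest is None or newest < cutoff:
            stale.append((vol["VolumeId"], newest))
    return stale

if __name__ == "__main__":
    for volume_id, last_snapshot in volumes_missing_recent_snapshots():
        print(f"{volume_id}: last snapshot {last_snapshot or 'never'}")
```

Restore testing is the other half of it - a snapshot you've never actually restored from is just hope.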

All that said, I'm far from an expert on this, just an IT admin trying to do what I can with company resources. Here's hoping Crowdstrike and other companies greatly improve their QA testing, and IT departments finally get the tooling approved to improve their backup and recovery strategies.

[–] jemikwa@lemmy.blahaj.zone 2 points 3 months ago (1 children)

If it's any consolation, this is the first issue of its kind in the multiple years we've been using CS. Still unacceptable, but historically the program has been stable and effective for us. Hopefully this reminds the higher-ups of the importance of proper testing before releases

[–] jemikwa@lemmy.blahaj.zone 8 points 3 months ago

This occurred overnight around 5am UTC/1am EDT. CS checks in once an hour, so some machines escaped the bad update. If your machines were totally off overnight, consider yourself lucky

[–] jemikwa@lemmy.blahaj.zone 6 points 3 months ago

A slightly "better" response that she might have been looking for is: "Sorry to hear that, I made the coffee how I normally do - 4 tsp of grounds in the drip maker. Maybe it's starting to wear off by now?"
It still has the content you included, but in a more sympathetic framing that she might be receptive to.

[–] jemikwa@lemmy.blahaj.zone 3 points 3 months ago

I've personally had better luck with the Litter Robot 4. We started with the 3 and had some issues with the bonnet getting "unseated", among other things. The 4 has been more stable over the year we've had it. The base being narrower but taller lets us get away with an extra day of not emptying.
