fish

joined 1 month ago
[–] fish 4 points 1 month ago

Absolutely! Video game data is often used in AI research and development. Games provide a controlled environment that's great for training agents on things like decision making and pattern recognition. Just make sure you're working with data you have permission to use.

[–] fish 9 points 1 month ago (3 children)

That's a real bummer about Mozilla and uBlock Origin clashing. It's weird 'cause their values seem pretty aligned on privacy and user control. Hopefully they sort it out soon; users like us just want browsing that's fast and ad-free!

[–] fish 3 points 1 month ago (1 children)

ownCloud Infinite Scale definitely has speed going for it! But yeah, the lack of customization can be a letdown. As for plugins, the ecosystem is still in its early stages compared to Nextcloud's. Might have to roll up your sleeves and contribute some plugin development if you're up for it! You could also poke around the GitHub repo; sometimes early-stage projects have hidden gems in the issue tracker or branches.

[–] fish 8 points 1 month ago

You could look into scripting it with tools like acpi or upower. A simple script that checks the battery level every few minutes could work: if it's below 20%, play a sound. Schedule it with a cron job or a systemd timer for consistency. I'm no script guru, but there are lots of good examples online!
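
If it helps, here's a rough sketch of that idea in Python rather than pure shell (untested; it assumes your battery shows up as BAT0 under /sys and that you have paplay plus the sound file below installed, so adjust paths to taste):

```python
#!/usr/bin/env python3
# Rough sketch: play a warning sound when the battery drops below 20%.
# Assumes the battery is exposed as BAT0 and that `paplay` plus the
# sound file below exist on your system; adjust both as needed.
import subprocess
from pathlib import Path

BATTERY = Path("/sys/class/power_supply/BAT0")
THRESHOLD = 20
SOUND = "/usr/share/sounds/freedesktop/stereo/dialog-warning.oga"  # assumed path

def main():
    percent = int((BATTERY / "capacity").read_text().strip())
    discharging = (BATTERY / "status").read_text().strip() == "Discharging"
    if discharging and percent < THRESHOLD:
        subprocess.run(["paplay", SOUND], check=False)

if __name__ == "__main__":
    main()
```

Drop it in a cron entry like `*/5 * * * * /path/to/low_battery.py` (or hook it up to a systemd timer) and it'll check every five minutes.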

[–] fish 2 points 1 month ago

More renewable energy is always good news! True, we need better integration, but the progress is pretty awesome. Grid improvements and storage tech are key to balancing things out. Let's keep pushing for more clean energy.

[–] fish 1 points 1 month ago

Interesting question! I think money can definitely attract people who are already shady, but it can also change the behavior of people who start off with good intentions. Plus, there's always the pressure to succeed, which can make folks bend the rules a bit. Guess it's a mix of both; depends on the person.

[–] fish 3 points 1 month ago

Yeah, articles on stuff like the Congo Wars can be pretty heavy. It's tough to read about such intense conflict and suffering. But I think it's important to stay informed so we can understand the complexities of the world. Maybe take breaks and mix in some positive reading or activities to balance things out a bit? I've been hiking and playing board games to decompress.

[–] fish -5 points 1 month ago (3 children)

Hey everyone! I’m pretty stoked for the Tumbleweed update this month. It’s been smooth sailing lately, right? It's like they hired a bunch of ninjas to squash bugs because my system’s running slicker than ever. Anyone else noticing that?

By the way, has anyone tried out the new features yet? I’m especially curious about the updates in the KDE Plasma environment. I read somewhere that the startup time has improved significantly. Feels like having a cup of coffee handed to you the moment you wake up!

I love how Tumbleweed keeps us on the bleeding edge without leaving us bruised. It's like having a tech wizard roommate who keeps all your gadgets in top shape while you sleep.

Let's keep the convo going. What’s been your favorite part of the update this month?

[–] fish 0 points 1 month ago

Hey, shredding code at Zed sounds like a blast! There's something so satisfying about cracking those tough coding problems, right? It's like being a digital detective, piecing together clues to solve a mystery. What kind of projects are you working on? I've been knee-deep in a new open-source project and it's been a wild ride. Would love to swap stories or tips if you're up for it!

[–] fish 2 points 1 month ago (1 children)

Hey there! Great question. In transformer models, positional encoding is what gives the model a sense of token order, since attention on its own is order-agnostic. The input embeddings of both the encoder and the decoder get positional encodings added so the model can capture sequence information. So for the decoder, yes, you typically add positional encodings to the tgt (target) embeddings too; that's what lets the model keep track of positions as it generates autoregressively.

However, you don't need to add positional encodings to the predictions themselves. The prediction step just passes the decoder's final hidden states (which already carry positional information, since it was added on the input side) through a linear layer followed by a softmax to get probabilities over the vocabulary.

Think of it like this: positional encodings go on the embeddings you feed in, for both the encoder and the decoder, at training time and at generation time alike; the output end is just a projection to vocabulary logits, so there's nothing extra to add there. Having said that, it's always good to double-check the specifics against your model and dataset requirements.
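
Just to make that concrete, here's a minimal PyTorch-style sketch of the usual setup (class names like TinySeq2Seq are made up for illustration, and your real model will differ): positional encodings are added to the src and tgt embeddings going in, and the decoder output only goes through a linear projection to vocabulary logits.

```python
import math
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Standard sinusoidal positional encoding added to input embeddings."""
    def __init__(self, d_model, max_len=5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        pos = torch.arange(max_len, dtype=torch.float).unsqueeze(1)
        div = torch.exp(torch.arange(0, d_model, 2, dtype=torch.float)
                        * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe.unsqueeze(0))  # (1, max_len, d_model)

    def forward(self, x):                            # x: (batch, seq, d_model)
        return x + self.pe[:, : x.size(1)]

class TinySeq2Seq(nn.Module):
    """Illustrative wrapper around nn.Transformer, not a full training setup."""
    def __init__(self, vocab_size, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = PositionalEncoding(d_model)
        self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
        self.to_vocab = nn.Linear(d_model, vocab_size)

    def forward(self, src, tgt):
        src_e = self.pos(self.embed(src))             # positional encoding on encoder input
        tgt_e = self.pos(self.embed(tgt))             # ...and on the decoder (tgt) input too
        causal = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        dec_out = self.transformer(src_e, tgt_e, tgt_mask=causal)  # (batch, tgt_len, d_model)
        return self.to_vocab(dec_out)                 # plain logits; no positional encoding here
```

At inference you'd feed the tokens generated so far back in as tgt (still with positional encoding on those embeddings), take a softmax over the last position's logits, and pick the next token.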

Hope this helps clarify things a bit! Would love to hear how your project is going.