lysdexic

joined 1 year ago
[–] lysdexic@programming.dev -1 points 2 days ago* (last edited 2 days ago) (1 children)

The only (arguably) baseless claim in that quote is this part:

You do understand you're making that claim on a post discussing the Safe C++ proposal?

And to underline the absurdity of your claim, would you argue that it's impossible to write a "hello, world" program in C++ that's memory-safe? From that point onward, what would it take to make it violate any memory constraints? Are those things avoidable? Think about it for a second before claiming nonsense about impossibilities.
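
To make that concrete, here's a sketch of the baseline: a complete C++ program with no pointers, no manual allocation, and no indexing, so there is nothing in it that can violate memory safety. The comments note the kind of deliberate changes it would take to break that.

```cpp
#include <iostream>

int main() {
    // No raw pointers, no manual allocation, no out-of-bounds indexing:
    // there is nothing here that can violate memory safety.
    std::cout << "hello, world\n";

    // Making this unsafe requires opting in to unsafe constructs, e.g.
    // dereferencing a dangling pointer or indexing past an array's end.
    return 0;
}
```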

[–] lysdexic@programming.dev 3 points 2 days ago

Custom methods won't have the benefit of being handled as if they shared well-known semantics, such as being treated as safe or idempotent, but ultimately that's just an expected trait that anyone can work around.

In the end, specifying a new standard HTTP method like QUERY provides some very specific assurances regarding semantics, such as whether frameworks should enforce CSRF tokens based on whether QUERY has the semantics of a safe method or not.
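
For illustration, a QUERY request would look roughly like this on the wire (shape assumed from the httpbis draft; the endpoint, body, and content type are made up for the example):

```http
QUERY /contacts HTTP/1.1
Host: example.org
Content-Type: application/x-www-form-urlencoded

q=smith&limit=10
```

Since the draft specifies QUERY as safe and idempotent, frameworks can treat it like GET for concerns such as caching and CSRF, which is exactly the kind of assurance a custom verb never gets.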

[–] lysdexic@programming.dev -4 points 2 days ago (3 children)

If you could reliably write memory safe code in C++, why do devs put memory safety issues into their code bases then?

That's a question you can ask the folks promoting the adoption of languages marketed on memory-safety arguments. I mean, even Rust has its fair share of CVEs whose root cause is unsafe memory management.

[–] lysdexic@programming.dev 1 points 2 days ago* (last edited 2 days ago) (21 children)

From the article:

Josh Aas, co-founder and executive director of the Internet Security Research Group (ISRG), which oversees a memory safety initiative called Prossimo, last year told The Register that while it's theoretically possible to write memory-safe C++, that's not happening in real-world scenarios because C++ was not designed from the ground up for memory safety.

That baseless claim doesn't pass the smell test. Just because a feature wasn't rolled out in the mid-90s doesn't mean it's not available today. Utter nonsense.

If your paycheck is highly dependent on pushing a specific tool, of course you have a vested interest in diving head-first into a denial pool.

But cargo cult mentality is here to stay.

[–] lysdexic@programming.dev 3 points 2 days ago (2 children)

However, we’re still implementing IPv6, so how long until we could actually use this?

We can already use custom verbs as we please: we only need clients and servers to agree on a contract.

What we don't have is the benefit of high-level "batteries included" web frameworks doing the work for us.
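
Rolling a custom or not-yet-standard verb by hand is a one-liner in most clients. A sketch with libcurl (the endpoint and payload are made up for illustration):

```cpp
#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl) {
        // Nothing in HTTP restricts us to the registered methods: any verb
        // the server has agreed to handle can go on the request line.
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.org/contacts");
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "QUERY");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "q=smith&limit=10");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}
```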

[–] lysdexic@programming.dev 1 points 2 days ago (1 children)

So that’s where I would say, as long as performance doesn’t matter it’s better to default to B-Tree maps than to hash maps, because the chance of avoiding bugs is more valuable than immeasurable performance benefits (...)

I don't quite follow. What leads you to believe that a B-Tree map implementation would have a lower chance of having a bug when you can simply pick any standard and readily available hash map implementation?

Also, you fail to provide any concrete reasoning for B-tree maps. It's not performance on any of the dictionary operations, and bugs aren't it either. What's the selling point you're seeing?

[–] lysdexic@programming.dev 2 points 3 days ago (3 children)

the reason I tend to recommend B-Tree maps over hash maps for ordinary programming is consistent iteration order.

Hash maps tend to be used to take advantage of constant-time lookup and insertion, not iteration. Hash maps aren't really suited for that use case.

Programming languages tend to provide two standard dictionary containers: a hash map implementation suited for lookups and insertions, and a tree-based map that supports sorting elements by key.
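
In C++, for instance, those two containers are std::unordered_map and std::map, and the iteration-order difference is easy to see:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <unordered_map>

int main() {
    // Same entries in both containers.
    std::unordered_map<std::string, int> hashed{{"b", 2}, {"c", 3}, {"a", 1}};
    std::map<std::string, int> ordered{{"b", 2}, {"c", 3}, {"a", 1}};

    // Iteration order is unspecified and can differ across implementations.
    for (const auto &[key, value] : hashed) std::cout << key << ' ';
    std::cout << '\n';

    // The tree-based map always iterates in key order: a b c.
    for (const auto &[key, value] : ordered) std::cout << key << ' ';
    std::cout << '\n';
}
```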

[–] lysdexic@programming.dev 1 points 4 days ago* (last edited 2 days ago) (2 children)

Yeah, the quality on Lemmy is nowhere (...)

Go ahead and contribute things that you find interesting instead of wasting your time whining about what others might like.

So far, all you're contributing is whiny shitposting. You can find plenty of that on Reddit too.

[–] lysdexic@programming.dev 2 points 4 days ago* (last edited 4 days ago)

It’s from 2015, so it’s probably what you are doing anyway

No, you are probably not using this at all. The problem with JSON is that these details are all handled in an implementation-defined way, and most implementations just fail or round silently.

Just give it a try: send a JSON document down the wire with, say, a huge integer, and see whether it triggers a parsing error. For starters, in .NET both Newtonsoft and System.Text.Json cap integers at 64 bits.

https://learn.microsoft.com/en-us/dotnet/api/system.text.json.jsonserializeroptions.maxdepth
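
To see the rounding happen without any JSON library at all, here's a sketch of what a parser that maps number to a double (which is what most do) ends up keeping:

```cpp
#include <iostream>
#include <string>

int main() {
    // A numeric token as it would appear in a JSON payload: 2^53 + 1.
    std::string token = "9007199254740993";

    long long exact = std::stoll(token);  // what the sender meant
    double parsed = std::stod(token);     // what a double-based parser keeps

    std::cout.precision(17);
    std::cout << exact << '\n';   // 9007199254740993
    std::cout << parsed << '\n';  // 9007199254740992 -- silently rounded
}
```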

[–] lysdexic@programming.dev 4 points 4 days ago* (last edited 4 days ago)

Why restrict to 54-bit signed integers?

Because number is a double, and IEEE 754 gives double-precision numbers a 53-bit significand, plus a sign bit.

Meaning, it's the widest range of integers that a double-precision value can represent exactly.

I suppose that makes sense for maximum compatibility, but feels gross if we’re already identifying value types.

It's not about compatibility. It's because JSON only has a number type, which covers both floating point and integers, and number is implemented as a double-precision value. If you have to express integers with a double-precision type, once you go beyond 53 bits you start to lose precision, which goes completely against the notion of an integer.
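
A quick sketch of where that boundary sits: 2^53 is the last point where consecutive integers are still distinguishable in a double.

```cpp
#include <cstdint>
#include <iostream>

int main() {
    std::int64_t limit = 1LL << 53;  // 9007199254740992, the 53-bit boundary

    // Below the boundary, consecutive integers map to distinct doubles...
    std::cout << std::boolalpha
              << (static_cast<double>(limit - 1) == static_cast<double>(limit))
              << '\n';  // false

    // ...at the boundary, limit and limit + 1 collapse to the same double.
    std::cout << (static_cast<double>(limit) == static_cast<double>(limit + 1))
              << '\n';  // true
}
```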

[–] lysdexic@programming.dev 1 points 4 days ago* (last edited 4 days ago) (3 children)

The only thing TCP_NODELAY does is disable packet batching/merging through Nagle's algorithm. That batching supposedly increases throughput by cutting the overhead of sending small payloads in individual packets, at the cost of higher latency. It's a tradeoff between latency and throughput. I don't see any reason why disabling it would raise transfer rates; quite the opposite. In fact, the very few benchmarks I've seen showed exactly that: TCP_NODELAY causing a drop in transfer rate.
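
For reference, disabling Nagle's algorithm is a one-line socket option (a minimal POSIX sketch, assuming an already-connected TCP socket):

```cpp
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

// Disable Nagle's algorithm on a connected TCP socket: small writes go out
// immediately instead of being coalesced into fewer, larger packets.
bool disable_nagle(int fd) {
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) == 0;
}
```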

There are also articles on the cargo cult behind TCP_NODELAY.

But feel free to show your data.
