They state they take inspiration from Tor and IPFS, so there are added transport layers below the top "P2P" layer that obfuscate one's IP address. It's nothing new really, and I'm honestly not sure what the advantages are over something like I2P, which largely doesn't suffer from Tor's issues of node ownership, as there are no guard or exit nodes to own (unless expressly configured), while also being faster overall.
This will be my final reply on the matter as I do not believe you are operating in good faith. But in case you are:
Firstly, the idea that you "cannot be forced to do something within your job description" is unequivocally false, and a sign of a toxic work environment. She actively requested not to be put in charge of a platform that made her uncomfortable; the request was denied and she was forced, against her will, to do so. I have never in my life worked at a place where I could not request to be taken off a project or task due to being uncomfortable with it. This is not a point of discussion; this is a categorical fact.
Secondly, it does not matter that she was not public-facing on OnlyFans. She, alongside her coworkers, was an active public figure on multiple LMG-affiliated channels during her employment. OnlyFans is a platform known to be used almost exclusively for sexual gratification, so it is entirely unsurprising that the LMG OnlyFans account received a large amount of sexual advances, objectification, and harassment directed at LMG employees. And per my prior comment, I fully believe the large majority of what was received would have been targeting the women employed at LMG. Therefore, putting one of the main victims of that harassment and objectification in charge of managing it is wholly and entirely unacceptable behavior by the management at LMG.
These are not complex concepts, and they are not even particularly novel anymore. As such I do not feel there is any real discussion to be had on the matter; there are people more intelligent than I am who express these things in far more empirical detail. I suggest you seek them out if you need more detail than I have provided here.
The fact that they're still hiding their testing methodologies behind Floatplane makes me dubious about how effective this "housekeeping" week will be. Not that I plan on watching or interacting with anything LMG-related going forward until the allegations brought up by Madison are properly handled anyway, at which point my final decision will be made.
It was expressly against her will, as per her own words. And as for why "a woman": it's rather well known that women already deal with far more sexual harassment and maltreatment online than men do. Just look at the market for AI-generated porn of celebrities and online personalities as proof of this. So forcing a woman, who already has a public presence no less, to manage a platform such as OnlyFans, and to constantly see and have to manage sexual objectification and harassment towards her as well as her coworkers, is unacceptable in my opinion.
I agree with a lot that's been said in this thread, but I think a lot of it has to do with speed as well. The world's moving so fast for so many people that a break in their habit/routine is too large a deal to manage.
Admittedly, I also believe this acceleration of the world is intentional on the part of the 1%; if not to push anxiety, then simply for increased perceived productivity. But those who are unable to slow themselves down will smash into changes in their daily lives much harder than those who can, and I think a lot of people are losing that ability due to technology and modern socioeconomic factors.
Most of my hobby programming is in ANSI C and C99, so I'm unfortunately far too aware of the weird and counter-intuitive things the C and POSIX standards say. :P
`clock()` is fantastic for sub-second timings, such as delta times in games or peripheral synchronization, which matches the use case you mention very well. I recommended `time()` over it because OP's use case is calculating the number of hours a user has had their software open, and Unix timestamps are the perfect mechanism for that in my opinion.
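For what it's worth, a minimal sketch of what I have in mind for OP's case (names and structure are just illustrative, not from OP's code): grab a Unix timestamp at startup and diff it against `time(NULL)` whenever the total is needed.

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Wall-clock time when the program starts. */
    time_t start = time(NULL);

    /* ... the application runs here ... */

    /* At shutdown (or wherever), compute how long it has been open.
     * difftime() avoids assuming anything about the width of time_t. */
    double seconds_open = difftime(time(NULL), start);
    printf("Open for roughly %.2f hours\n", seconds_open / 3600.0);

    return 0;
}
```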
Using `clock()` solely for delta values is absolutely a valid approach, as stated. The issue is that `clock_t` may not be large enough on some systems to safely keep you from an overflow, especially with arbitrary starting values. Additionally, some systems will include the time that child processes were alive in subsequent `clock()` calls, adding further confusion. These are the reasons I would avoid `clock()` in favor of `time()`, even though your concerns are absolutely valid.
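To be concrete, this is roughly the delta pattern being discussed, with the caveats noted in comments (just a sketch, not anyone's actual code):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* clock() reports processor time, and its starting value is
     * implementation-defined, so only the difference between two
     * calls is meaningful. */
    clock_t before = clock();

    /* ... some short workload ... */
    volatile unsigned long sink = 0;
    for (unsigned long i = 0; i < 10000000UL; i++)
        sink += i;

    clock_t after = clock();

    /* clock_t may be a fairly narrow type on some systems, so this
     * subtraction can overflow over long runs; keep intervals short. */
    double elapsed = (double)(after - before) / CLOCKS_PER_SEC;
    printf("Workload took about %f seconds of CPU time\n", elapsed);

    return 0;
}
```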
At the end of the day you have to determine which style of unpredictability you want to work around. The `times()`, `clock()`, and `clock_gettime()` class of functions opens you up to managing what the kernel considers time passed, and what is accumulated versus what is not, while `time()` can shift according to upstream NTP servers, as well as daylight saving time.
I would also make the argument that if an NTP server is adjusting your time, it is most likely more accurate than what your internal clock (CMOS or otherwise) was counting, and is worth following.
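For completeness, a sketch of the `clock_gettime()` side of that trade-off, assuming a POSIX system that provides `CLOCK_MONOTONIC` (this is just one member of that class, not a recommendation over `time()`):

```c
/* Expose clock_gettime() under a strict -std=c99 build. */
#define _POSIX_C_SOURCE 199309L

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* CLOCK_MONOTONIC counts from an unspecified starting point and is
     * not stepped by NTP or manual clock changes, which is exactly the
     * "what the kernel considers time passed" style of bookkeeping. */
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);

    /* ... do something ... */

    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (double)(end.tv_sec - start.tv_sec)
                   + (double)(end.tv_nsec - start.tv_nsec) / 1e9;
    printf("Elapsed: %f seconds\n", elapsed);

    return 0;
}
```

(Older glibc versions may also need `-lrt` at link time.)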
Professional on a hiatus here. Fully self-taught, done a ton of hobby projects, most of my fleshed-out ones being in either C89 or C99. Most recently it's been a calculator application for myself in X11, to brush the rust off my X11 knowledge, as well as a Lemmy client library for C.
This is correct; however, it is important to note that the C standard allows `clock()` to report arbitrary values at the beginning of the program. The manpage does a better job explaining it.
Doing a bit of research, it looks like the POSIX `time_t time(time_t *dest)` function (man) is available on Windows (see here). So I would recommend it over `clock_t clock(void)`, as it will operate more consistently across platforms.
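As a sketch of what that looks like in practice, something along these lines should build unchanged under both MSVC and a POSIX toolchain (assuming a hosted C99 environment):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Both call forms return the same Unix timestamp; passing NULL
     * and using the return value is the most common style. */
    time_t now = time(NULL);

    time_t also_now;
    time(&also_now);

    /* Cast to long long since the underlying type of time_t varies. */
    printf("Seconds since the epoch: %lld\n", (long long)now);
    printf("Via the out-parameter:   %lld\n", (long long)also_now);

    return 0;
}
```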
This just serves as another reminder that I need to finish reading SICP. That said, I think this brings up some very interesting points. The example provided of DRY is focused on what is happening at a human/company level, while the abstraction barriers provided focus heavily on what is happening at a software level. This is a differentiation that I feel is extremely important when programming robust, maintainable software. You cannot let non-software-related terminology seep into what is fundamentally, well, software.
When you let non-software terminology work its way into software, the software has to start making assumptions. What is a C-level employee? What bonuses do they require? Are these things subject to change? The list goes on. But if you approach the problem with software first and foremost, it is clear that all that is happening is a variable bonus being added to an employee's compensation. It is not this layer's problem what that value is, nor is it this layer's problem who is being compensated; those are concerns for a DB layer (of some form) somewhere up the chain. All the financial layer cares about is applying the calculations.
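To make the layering concrete, a tiny hypothetical sketch (the names here, like `apply_bonus`, are mine, not from the article): the financial layer applies a number to a number and has no idea what a "C-level employee" is.

```c
#include <stdio.h>

/* Financial layer: applies a variable bonus to a base compensation.
 * It neither knows nor cares who the employee is or why the bonus has
 * this value; that is decided further up the chain (DB/policy layer). */
static double apply_bonus(double base_compensation, double bonus)
{
    return base_compensation + bonus;
}

int main(void)
{
    /* Pretend these came from a DB or policy lookup somewhere upstream. */
    double base = 90000.0;
    double bonus = 15000.0; /* could just as well be zero */

    printf("Total compensation: %.2f\n", apply_bonus(base, bonus));
    return 0;
}
```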
So I don't feel like this is really a case against DRY, as much as it's a case against using non-software terminology and applying non-software assumptions to what is, fundamentally, software. The argument for maintaining independent layers is also important, but if you're thinking fully in terms of software operation, I feel you can more comfortably determine when layers can be interlinked.
This is precisely it, and is a similar approach to the ones used by other anonymization networks as well. This allows your entry node to know your node/IP is using the network, but with a secure end-to-end tunnel, nobody along that tunnel knows the entire source -> destination path or data, so it is usually considered sufficiently anonymous and secure.