• 0 Posts
  • 59 Comments
Joined 1 year ago
Cake day: August 2nd, 2023




  • StarDreamer@lemmy.blahaj.zone to Linux@lemmy.ml · Help w/ crash

    Look at the line with asm_exc_invalid_op. That looks like a hardware fault triggered by an invalid instruction: either something is being misinterpreted as an opcode (unlikely), or the driver was compiled with ISA extensions that aren’t available on your CPU.

    OP, how old is your CPU? And how old is the NIC you are using?

    Edit: did you use a custom driver for the NIC? I was looking at the Linux source and rt_mutex_schedule didn’t seem to exist. Never mind, I was checking 4.18 instead of 6.7; found it now. The bug is most likely inside a macro called preempt_disable() (sketched below). Unfortunately, most of these functions are heavily inlined and architecture-dependent, so you won’t get much out of the disassembly. But any changes you made to preemption settings might well be what’s triggering the bug.
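
    For reference, this is roughly the shape of that macro (paraphrased from include/linux/preempt.h, assuming CONFIG_PREEMPT_COUNT; the counter increment itself expands to per-architecture inlined code, which is why the disassembly is hard to follow):

    ```c
    /* Approximate shape of the macro in include/linux/preempt.h */
    #define preempt_disable() \
    do { \
        preempt_count_inc(); /* bump this CPU's preempt counter */ \
        barrier();           /* compiler barrier: no reordering across it */ \
    } while (0)
    ```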


  • Some people play games to turn their brains off. Other people play them to solve a different type of problem than they do at work. I personally love optimizing, automating, and min-maxing numbers while doing the least amount of work possible. It’s relatively low-complexity (compared to the bs I put up with daily), low-stakes, and much easier to show someone else.

    Also, shout-out to CDDA and FFT for having some of the worst learning curves out there, right up there with DF. Paradox games get an honorable mention for their wiki.

  • The argument is that processing data physically “near” where it is stored, known as NDP (near-data processing), as opposed to traditional designs where data lives off-chip, is more power-efficient and lower-latency for a variety of reasons (interconnect complexity, pin density, lane charge rate, etc.). Someone came up with a design that can do complex computations much faster than before using NDP.

    Personally, I’d say traditional computer architecture is not going anywhere, for two reasons. First, these esoteric new architecture ideas, such as NDP, SIMD (probably not esoteric anymore; GPUs and vector instructions both do this), and in-network processing (where your network interface does compute), are notoriously hard to work with. It takes an MS-in-CS level of understanding of the architecture to write a program in the P4 language (which doesn’t allow loops, recursion, etc.). No matter how fast your fancy new architecture is, it’s worthless if most programmers on the job market can’t work with it. Second, there are too many foundational tools and applications that rely on traditional computer architecture. Nobody is going to port their 30-year-old stable MPI program to a new architecture every 3 years; it’s just way too costly. People want to buy new hardware, install it, compile existing code, and see big numbers go up (or down, depending on which numbers).

    I would say the future is a mostly von Neumann machine with some of these fancy new toys (GPUs, memory DIMMs with integrated co-processors, SmartNICs) attached as dedicated accelerators. Existing application code probably won’t be modified; instead, the underlying libraries will detect these accelerators (GPUs, DMA engines, etc.) and offload supported computations to them automatically to save CPU cycles and power. Think of your standard memcpy() running on a dedicated data mover on the memory DIMM, if your machine has one (see the sketch below). This way your standard 9-to-5 programmer can keep working the way they always have and leave the fancy performance-optimization work to a few experts.
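
    As a very rough sketch of what that library-level offload could look like (everything here is hypothetical: dma_engine_probe(), dma_engine_copy(), and the threshold are invented names, not a real API):

    ```c
    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define DMA_OFFLOAD_THRESHOLD (64 * 1024) /* small copies stay on the CPU */

    /* Stubs standing in for a real accelerator driver. */
    static bool dma_engine_probe(void) { return false; }
    static void *dma_engine_copy(void *dst, const void *src, size_t n) {
        return memcpy(dst, src, n); /* real code would program the data mover */
    }

    /* Drop-in memcpy wrapper: offload large copies if an engine exists. */
    void *accel_memcpy(void *dst, const void *src, size_t n) {
        static int have_dma = -1;
        if (have_dma < 0)
            have_dma = dma_engine_probe(); /* probe once, cache the result */
        if (have_dma && n >= DMA_OFFLOAD_THRESHOLD)
            return dma_engine_copy(dst, src, n); /* accelerator path */
        return memcpy(dst, src, n); /* regular CPU path */
    }
    ```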

  • This is solving a problem we DO have, albeit in a different way. Email is ancient, and the protocol lets you self-identify as whoever you want. Say I send an email from underworld (the server’s IP address) claiming to be Napoleon@france (user@domain): the only reason my email gets rejected is that the recipient knows Napoleon’s mail comes from the server france, not underworld. This validation is mostly done via tricky DNS hacks (SPF, DKIM, DMARC; hypothetical examples below), and a huge part of it is built on top of Google’s infrastructure. If for some reason Google decides I’m not trustworthy, it doesn’t matter whether I’m actually sending Napoleon’s mail from france; it will be flagged as spam by most servers regardless.
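
    For context, those DNS records look something like this (hypothetical zone entries; the SPF/DMARC syntax is real, but the names and address are made up):

    ```
    france.example.        IN TXT "v=spf1 ip4:203.0.113.7 -all"
    _dmarc.france.example. IN TXT "v=DMARC1; p=reject; rua=mailto:postmaster@france.example"
    ```

    The first record says “only 203.0.113.7 may send mail for france.example”; the second says “reject anything that fails the checks and mail me a report.”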

    A decentralized chain of trust could potentially replace Google plus all the DNS hacks we have in place, with no central authority deciding who is legitimate. Of all the bs use cases for blockchain, this one doesn’t seem that bad: it builds a decentralized chain of trust for an existing decentralized system (email), which is exactly what blockchains were originally designed for.


  • Is there a specific reason you’re looking at Shadowsocks? The original developer has been MIA for years, and people who used it in the past largely consider it insecure for its original stated purpose.

    trojan-gfw is a better, more modern replacement. It does require a TLS certificate to work, but you can easily get one via Let’s Encrypt (example below).
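
    For example (a minimal sketch; assumes certbot is installed and your server is reachable on port 80, with example.com standing in for your real domain):

    ```sh
    # Obtain a certificate from Let's Encrypt in standalone mode
    sudo certbot certonly --standalone -d example.com
    # Keys and certs end up under /etc/letsencrypt/live/example.com/
    ```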

    At this point, let Shadowsocks, obfs, and kcp die a graceful death, like GoAgent before them.