


  • For a lorry, no. For a private vehicle, yes. Standard driving licenses only allow up to 3.5t combined permissible weight (that is, vehicle and trailer plus maximum load), of which at most 750kg may be trailer and load. If you want to drive a combination where vehicle and trailer are each individually up to 3.5t (so 7t total) you need a trailer license; for anything above that you need a lorry license with all the bells and whistles, such as regular medical checkups. (There’s a rough sketch of the arithmetic at the end of this comment.)

    Or, differently put: A standard VW Golf can pull almost thrice as much as most drivers are allowed to pull.

    A small load for a private vehicle would be a small empty caravan, or a light trailer with some bikes. A Smart Fortwo can pull 550kg, which will definitely look silly but is otherwise perfectly reasonable; that’s enough for both applications.
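
    To make the arithmetic concrete, here’s a tiny sketch in plain C of the tiers as described above. The thresholds are just the figures from this comment and the category names are my own shorthand, so don’t treat it as a legal reference:

    ```c
    #include <stdio.h>

    /* Rough sketch of the license tiers described above.
     * Illustrative only, not legal advice; weights in kg. */
    const char *license_needed(double vehicle_kg, double trailer_kg) {
        if (vehicle_kg + trailer_kg <= 3500.0 && trailer_kg <= 750.0)
            return "standard license";
        if (vehicle_kg <= 3500.0 && trailer_kg <= 3500.0)
            return "trailer license";
        return "lorry license (medical checkups and all)";
    }

    int main(void) {
        printf("%s\n", license_needed(1200, 550));   /* Smart plus light trailer: standard  */
        printf("%s\n", license_needed(2500, 2000));  /* heavier caravan: trailer license    */
        printf("%s\n", license_needed(3500, 4000));  /* beyond 3.5t each: lorry license     */
        return 0;
    }
    ```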


  • My kitchen scales have a USB-C port. While I certainly would like it to have the capability to stream GB/s worth of measuring data over it, the fact of the matter is I paid like ten bucks for it; all it knows is how to charge the CR2032 cell inside. I also don’t expect it to support DisplayPort alt mode: it has a seven-segment display, so I don’t really think it’s suitable as a computer monitor.

    What’s true, though, is that it’d be nice to have proper labelling standards for cables. It should stand to reason that the cable that came with the scales doesn’t support high-performance modes; heck, it doesn’t even have data lines, so literally the only thing it’s capable of is low-power charging. Nothing wrong with that, but it’d be nice to be able to tell as much at a semi-quick glance when fishing for a cable in the spaghetti bin.


  • A and B are the originals, used for the host and device sides, respectively. C is the same on both ends of the cable because, well, there are device classes which can sensibly act as both, in particular phones. It’s also the most modern of the bunch, supporting higher data transfer and power delivery rates, because back in the days when A and B were designed people were thinking about connecting mice and keyboards, not 8k monitors or kWhs worth of lithium batteries.

    The whole mini/micro shenanigans are alternative B types and quite deeply flawed, mechanically speaking.




  • anyone can host an email service

    Eh, no. You could in the 2000s; nowadays spam protection is so tight, and necessarily that tight, that you need at least a full-time position actively managing the server or you’ll get blacklisted for some reason or other. Other servers will simply not accept emails sent by you if you don’t look legit and professional.

    Definitely possible for a company with an IT department; as a small company you want to outsource it (emails being on your domain doesn’t mean you’re managing the server); as a hobbyist, well, you might be really into it, but generally also no. Send protonmail or posteo or whoever a buck or so a month.


  • That’s the sort of thing that should just be an extension

    It most likely is on the technical level, just shipped by default and integrated into the standard settings instead of the add-on ones. And it’s going to be opt-in, so you won’t have to go into about:config to disable it. Speaking of: you’re looking for extensions.pocket.enabled, it should be false. And before you say “muh diskspace”: it’s probably like 5k of js and css or such.


  • They have to be hotter than the temperature of the Sun

    Well, they don’t strictly speaking have to, but to get fusion you need a combination of pressure and temperature, and increasing temperature is way easier than increasing pressure if you don’t happen to have the gravity of the sun to help you out. Compressing things with magnetic fields isn’t exactly easy.

    Efficiency in a fusion reactor would be how much of the fusion energy is captured, and then how much of that you need to feed back in to keep the fusion going, everything from plasma heating to cooling down the coils (a rough back-of-the-envelope sketch is at the end of this comment). Fuel costs are very small in comparison to everything else, so being a bit wasteful isn’t actually that bad if it doesn’t make the reactor otherwise more expensive.

    What’s much more important is to be economical: all the currently existing reactors are research reactors, they don’t care about operating costs. What the Max Planck people are currently figuring out is exactly that kind of stuff, “do we use a cheap material for the divertors and exchange them regularly, or do we use something fancy and service the reactor less often?” That’s an economic question, one that makes the reactor cheaper to operate so the overall price per kWh is lower. They’re planning on having the first commercial prototype up and running in the early 2030s. If they can achieve per-kWh fuel and operating costs lower than gas they’ve won, even though levelised costs (that is, including construction of the plant amortised over time) will definitely still need lowering. Can’t exactly buy superconducting coils off the shelf right now, least of all in the odd shapes that stellarators use.
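
    As promised above, a back-of-the-envelope sketch of that efficiency bookkeeping in C. Every figure is made up for illustration; these are not W7-X numbers or anyone else’s:

    ```c
    #include <stdio.h>

    int main(void) {
        /* All figures below are invented purely for illustration. */
        double fusion_power_mw    = 1000.0; /* thermal power released by fusion      */
        double capture_efficiency = 0.40;   /* fraction turned into electricity      */
        double recirculating_mw   = 150.0;  /* plasma heating, coil cooling, pumps.. */

        double gross_electric = fusion_power_mw * capture_efficiency; /* 400 MW */
        double net_electric   = gross_electric - recirculating_mw;    /* 250 MW */

        printf("gross electric: %.0f MW\n", gross_electric);
        printf("net electric:   %.0f MW\n", net_electric);
        printf("net of fusion power: %.0f %%\n",
               100.0 * net_electric / fusion_power_mw);               /* 25 %   */
        return 0;
    }
    ```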


  • The ISA does include SSE2 though, which is 128-bit, already wider than the pointer width. They also doubled the number of XMM registers compared to 32-bit SSE2. (There’s a small example at the end of this comment.)

    Back in the day, using those instructions often gained you nothing, as the CPUs didn’t come with enough APUs to actually do operations on the whole vector in parallel.
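
    For concreteness, a minimal illustration of the 128-bit part, using the standard SSE2 intrinsics from <emmintrin.h>:

    ```c
    #include <emmintrin.h>  /* SSE2 intrinsics */
    #include <stdio.h>

    int main(void) {
        /* One XMM register holds four 32-bit integers: 128 bits,
         * twice the width of a 64-bit pointer. */
        __m128i a = _mm_set_epi32(4, 3, 2, 1);
        __m128i b = _mm_set_epi32(40, 30, 20, 10);
        __m128i sum = _mm_add_epi32(a, b);  /* four additions in one instruction */

        int out[4];
        _mm_storeu_si128((__m128i *)out, sum);
        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]); /* 11 22 33 44 */
        return 0;
    }
    ```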


  • graphics, video, neural-net acceleration.

    All three are kinda at least half-covered by the vector instructions, which absolutely and utterly kill any BLAS workload dead. 3D workloads use fancy indexing schemes for texture mapping that aren’t included; for video I guess you’d want some special APU sauce for wavelets or whatever (don’t know the first thing about codecs); neural nets should run fine as they are, provided you have a GPU-like memory architecture, and the vector extension certainly has gather/scatter opcodes (a gather sketch is at the end of this comment). Oh, you’d want reduced precision, but that’s in the pipeline.

    Especially with stuff like NNs, though, the microarch is going to matter a lot. Even if, say, a convolution kernel tuned for one manufacturer’s chip uses only instructions a chip from another manufacturer understands, it’s probably not going to perform at an optimal level.

    VPUs, AFAIU, are usually architected like DSPs: a bunch of APUs stitched together with a VLIW instruction encoding, very much not intended to run code that is in any way general-purpose, because the only thing it’ll ever run is hand-written assembly anyway. Can’t find the numbers right now, but IIRC my RK3399 comes with a VPU that out-flops the six ARM cores and the Mali GPU combined; it’s also hopeless to use for anything that can’t be streamed linearly from and to memory.

    Graphics is by far the most interesting one in my view. That is, it’s a lot of general-purpose stuff (for GPGPU values of “general purpose”) with only a couple of domain-specific bits and pieces.
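
    Since gather/scatter came up: here’s a sketch of what an indexed (gather) load looks like with the vector extension. This assumes the ratified RVV 1.0 C intrinsics from <riscv_vector.h> (names like __riscv_vsetvl_e32m1); older toolchains spelled these differently, so treat the exact names as illustrative:

    ```c
    #include <riscv_vector.h>
    #include <stddef.h>
    #include <stdint.h>

    /* out[i] = table[idx[i]] for i in [0, n), strip-mined over the hardware
     * vector length. RVV indexed loads take byte offsets, hence the multiply
     * by sizeof(float). */
    void gather_f32(const float *table, const uint32_t *idx, float *out, size_t n) {
        for (size_t i = 0; i < n; ) {
            size_t vl = __riscv_vsetvl_e32m1(n - i);
            vuint32m1_t vidx = __riscv_vle32_v_u32m1(idx + i, vl);
            vuint32m1_t voff = __riscv_vmul_vx_u32m1(vidx, sizeof(float), vl);
            vfloat32m1_t v   = __riscv_vluxei32_v_f32m1(table, voff, vl);
            __riscv_vse32_v_f32m1(out + i, v, vl);
            i += vl;
        }
    }
    ```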






  • have variable width instructions,

    A compressed instruction set ≠ variable-width. x86 instructions are anything from one to a gazillion bytes, while RISC-V instructions are four bytes, or optionally (and very commonly supported) two bytes. Much easier to handle.

    vector instructions,

    RISC-V is (as far as I’m aware) the first ISA since Cray to use vector instructions, certainly the only one that actually made a splash. SIMD isn’t vector instructions; most crucially, with vector instructions the ISA doesn’t care about vector length at the opcode level. That would be like writing MMX code back in the day and, running the same binary on a modern CPU, having it automatically use registers as wide as SSE3’s.

    But you’re right, the old definitions are a bit wonky nowadays; I’d say the main differentiating factors these days are having a load/store architecture and disciplined instruction widths. Modern out-of-order CPUs with half a gazillion instructions of a single thread in flight at any time of course don’t really care about the load/store thing, but both things simplify instruction decoding to ludicrous degrees, saving die space and heat. For simpler cores it very much does matter, and “simpler core” here can also mean barely superscalar but with insane vector width, like one of 1024 GPU cores consisting mostly of APUs, no fancy branch prediction silicon, supporting enough hardware threads to hide latency and keep those APUs saturated. (Yes, the RISC-V vector extension has opcodes for gather/scatter, in case you’re wondering. There’s a small sketch of vector-length-agnostic code at the end of this comment.)


    Then, last but not least: RISC-V absolutely deserves the name it has, because the whole thing started out at Berkeley. RISC I and II were the originals, II is what all the other RISC architectures were inspired by, III was a Smalltalk machine, IV a Lisp machine. Then for a long time nothing, until lecturers noticed that teaching modern microarchitectures with old or ad-hoc instruction sets is not a good idea: x86 is out of the question because it’s full of hysterical raisins, ARM is actually quite clean but demands a lot, and I mean a lot, of money for the right to implement their ISA in custom silicon. So they started rolling their own in 2010, and calling it RISC V was a no-brainer.
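
    And the promised sketch of what “the ISA doesn’t care about vector length” means in practice: the same strip-mined loop runs unchanged on hardware with 128-bit or 1024-bit vector registers, because the code asks the CPU each iteration how many elements it will take. Again assuming the RVV 1.0 C intrinsics from <riscv_vector.h>, so the exact names are illustrative:

    ```c
    #include <riscv_vector.h>
    #include <stddef.h>

    /* y[i] += a * x[i] -- classic saxpy, written vector-length agnostic. */
    void saxpy(size_t n, float a, const float *x, float *y) {
        for (size_t i = 0; i < n; ) {
            /* Ask the hardware how many elements it will process this round;
             * a wider vector unit simply hands back a bigger vl. */
            size_t vl = __riscv_vsetvl_e32m1(n - i);
            vfloat32m1_t vx = __riscv_vle32_v_f32m1(x + i, vl);
            vfloat32m1_t vy = __riscv_vle32_v_f32m1(y + i, vl);
            vy = __riscv_vfmacc_vf_f32m1(vy, a, vx, vl);  /* vy += a * vx */
            __riscv_vse32_v_f32m1(y + i, vy, vl);
            i += vl;
        }
    }
    ```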