Here’s a quick idea of what you’d want in a PC build https://newegg.io/2d410e4
You can have a slightly bigger package in PC form that does 4x the work for half the price. That’s the gist.
~I just looked, and the Mac Mini maxes out at 24GB anyway. Not sure where you got the idea of 196GB.~ NVM, you said M2 Ultra
Look, you have two choices. Just pick one. Whichever is more cost-effective and works for you is the winner. Debating it to the Nth degree here isn’t going to help you with the actual barriers to entry you’ve put in place.
That’s as close as you get to it as well, and still not quite there…
😉
Lol. This will be about as popular as all the specialty keyboards that have claimed to be faster/better at typing.
Specialty input devices are an impossible market to crack, because whatever YOU design the input to be is, 90% of the time, not going to translate for other people. This thing looks to have 10 inputs, so there’s a massive number of key combos and changes one has to memorize. Not gonna happen.
I’ve not run such things on Apple hardware, so can’t speak to the functionality, but you’d definitely be able to do it cheaper with PC hardware.
The problem with this kind of setup is going to be heat. There are definitely cheaper mini PCs, but I wouldn’t think they have the space for this much memory AND a GPU, so you’d be looking for an AMD APU/NPU combo maybe. You could easily build something about the size of a game console that does this for maybe $1.5k.
It’s like the pro-democracy version of the Ashley Madison hack.
https://discourse.joplinapp.org/t/joplin-server-documentation/24026
4 minimum. Joplin doesn’t need a server though. Just configure the storage backends to be whatever you want.
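For example, with the Joplin terminal client (rough sketch; sync target 2 is the local filesystem, and the path here is just a placeholder):
```
# Point Joplin at a local folder instead of a server
joplin config sync.target 2
joplin config sync.2.path ~/joplin-sync
joplin sync
```
Same idea for Nextcloud, WebDAV, S3, etc., just a different target number and credentials.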
Christ. This guy just can’t help himself.
Sure seems like you’re either sourcing these images wrong, or they’re missing something. The docs themselves even reference this command, as it’s a good way to test that the container is linked to the host hardware properly.
Maybe try starting a shell and seeing if that executable exists in the image:
`docker exec -it jellyfin nvidia-smi`
You need to be running the NVIDIA Container Toolkit and to specify that the container be launched with that runtime if you want direct access to the enc/dec hardware.
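Rough sketch of what that looks like, assuming the toolkit is already installed on the host (paths and ports are placeholders):
```
# Launch with the NVIDIA runtime so the GPU's enc/dec hardware is exposed
docker run -d --name jellyfin \
  --runtime=nvidia \
  --gpus all \
  -p 8096:8096 \
  -v /path/to/config:/config \
  -v /path/to/media:/media \
  jellyfin/jellyfin

# Then re-run the test from inside the container
docker exec -it jellyfin nvidia-smi
```
If nvidia-smi prints the GPU table in there, transcoding should be able to see the hardware too.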
Yup
No offense, but this literally talked about nothing at all. The last half is just some ideas that don’t follow a cogent line from one thing to the next.
You should try writing a script about a topic and sticking to it, especially if you don’t feel comfortable speaking off the cuff about what you’re delivering. Winging it connotes a lack of understanding and knowledge of the subject.
Ask yourself when speaking about something: if a listener took something away from this speech, what would it be? Then write for that prompt.
SPICY CHICKEN SANDWICH REPRESENTATION
Probably easier to just unblock Google, send some messages, then look at your filter logs to see where they are going.
Guarantee you’ll run into issues when you hop towers or networks though.
Not like Framework at all. It’s a development device. There’s no mention of what the CPU or display-driving hardware is, and it has only 32GB of flash and 4GB of memory. Definitely meant to be a single-purpose device.
Don’t think it’s technically possible, but they haven’t elaborated. I’m pretty sure it’s new-gen only because of the additional coprocessors and new ray-tracing, but that doesn’t mean they won’t continue making improvements to previous gen FSR. They’ll probably backport some of the general improvements like they did with previous generations.
Nope. Definitely got a reason, and it’s stated. There have been countless reworks of keyboards, for example, that promise lots of benefits, but it’s a problem that doesn’t need solving for most people. What’s a 30% increase in typing speed with a 200% learning curve going to do for most people? Not much. I’ve seen hundreds come and go throughout the years in engineering teams, and people always go back to the thing they learned on.
That being said, as someone else pointed out in this thread, this is essentially just a remix of stenography. They’re trying to make it seem more useful than it is, which, whatever, it’s their product. The thing that is most problematic about this particular product is the cognitive dissonance of staring at someone like this guy making weird faces and not speaking while you’re actually listening to his phone.
Now, is this a solution for mute people? Quite possibly. Is it better than natural language conversational translation by a device in normal conversation? Not a chance.