The YouTube channel Looking Glass Universe (highly recommended!) also has a video on how AlphaFold works.
Server’s down :(
That is true for most current “self driving” systems, because they are all just glorified assist features. Tesla is massively misleading its customers with its advertising, but on paper it’s very clear that the car will only assist in safe conditions: the driver needs to be able to react immediately at all times and is therefore also liable.
However, Mercedes (I think it was them) has started to roll out a feature where they will actually take responsibility for any accidents that happen due to this system. For now it’s restricted to nice weather and a few select roads, but the progress is there!
Definitely JS if you want to also have a website. Use Electron to turn your website into a desktop executable. Python + Qt is fine for desktop apps, but does not work for a website.
Languages that compile to WASM would also be an option (e.g. https://egui.rs with Rust), but as far as I am aware none of the languages you’ve listed are in that set. (Of Python, Ruby, JS, and Go, only Go would even be a contender.)
Eh it’s not that great.
One million Blackwell GPUs would suck down an astonishing 1.875 gigawatts of power. For context, a typical nuclear power plant only produces 1 gigawatt of power.
Fossil fuel-burning plants, whether that’s natural gas, coal, or oil, produce even less. There’s no way to ramp up nuclear capacity in the time it will take to supply these millions of chips, so much, if not all, of that extra power demand is going to come from carbon-emitting sources.
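For what it’s worth, the quoted arithmetic checks out; a quick sanity check, assuming the ~1,875 W per GPU figure implied by the quote:

```python
gpus = 1_000_000
watts_per_gpu = 1_875       # per-GPU draw implied by the quote (assumption)
total_gw = gpus * watts_per_gpu / 1e9
print(total_gw)             # 1.875 (GW) - nearly two typical ~1 GW nuclear plants
```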
If you ignore the two fastest growing methods of power generation, which coincidentally are also carbon free, cheap and scalable, the future does indeed look bleak. But solar and wind do exist…
The rest is purely a policy rant. Yes, if productivity increases we need some way of distributing the gains from said productivity increase fairly across the population. But jumping to the conclusion that, since this is a challenge to be solved, the increase in productivity is bad, is just stupid.
Alternatively the y axis could be “blog posts not about …”
You can literally run large language models with a single exe download: https://github.com/Mozilla-Ocho/llamafile
It doesn’t get much simpler than that.
Addendum:
The docs say
For reproducible outputs, set temperature to 0 and seed to a number:
But what they should say is
For reproducible outputs, set temperature to 0 or seed to a number:
Easy mistake to make
I appreciate the constructive comment.
Unfortunately the API docs are incomplete (insert obi wan meme here). The seed value is both optional and irrelevant when setting the temperature to 0. I just tested it.
Yeah no, that’s not how this works.
Where in the process does that seed play a role, and what do you even mean by numerical noise?
Edit: I feel like I should add that I am very interested in learning more. If you can provide me with any sources to show that GPTs are inherently random I am happy to eat my own hat.
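Here’s a toy sketch of where the seed does and doesn’t enter the picture. The logits are made-up values, not any real model’s internals, but the control flow mirrors how samplers typically work:

```python
import math
import random

def sample(logits, temperature, seed):
    """Toy next-token sampler. The seed only feeds the RNG,
    and the RNG is only consulted when we actually sample."""
    if temperature == 0:
        # Greedy decoding: take the highest logit.
        # No randomness involved, so the seed is irrelevant here.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.5]  # hypothetical logits for three tokens
# At temperature 0, every seed produces the same token:
print({sample(logits, 0, seed) for seed in range(100)})  # {0}
```

At temperature 0 the code path never touches the RNG, which is why setting a seed on top of temperature 0 changes nothing.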
Ah, gotcha.
Is there like a list where you can enter your server so that other people use it as an ntp server? Or how did you advertise it to have 2800 requests flooding in?
Crypto is basically cash for online transactions. Pretty niche, but cool and definitely in demand for some situations.
Just like in the real world, you’re shit outta luck if you lose your wallet. Or if you give someone money and they laugh in your face, you can either cut your losses or try your luck in a fist fight. It’s the same with crypto.
With banks you have a separate authority that can handle all these cases, which is desirable in 99% of all transactions.
Unfortunately it’s volatile af, and the most popular cryptocurrency (Bitcoin) has untenable transaction costs and throughput limits (around 7 transactions per second, globally - what a stupid design decision)
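Back-of-the-envelope on that throughput limit, assuming ~1 MB base blocks every ~10 minutes and ~250 bytes per average transaction (rough, commonly cited figures):

```python
block_bytes = 1_000_000   # ~1 MB base block size (assumption)
block_interval_s = 600    # one block every ~10 minutes
avg_tx_bytes = 250        # rough average transaction size (assumption)
tps = block_bytes / avg_tx_bytes / block_interval_s
print(round(tps, 1))      # 6.7 transactions per second, globally
```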
I’ve used it to improve selected paragraphs of my writing, provide code snippets, and find an old comic based on a friend’s crude description of it.
I feel like these interactions were valuable to me and only one (code snippets) could have been easily replaced with existing tools.
I have similar specs and cost with IONOS
It says posted 4 days ago, updated yesterday.
For most stuff the pi4 is also enough. Jellyfin (no transcoding) works fine on mine. It takes a bit to generate the chapter images and the timeline peek images when ingesting a new movie, but I’ve never had any issues with playback.
Wait what? Do I understand that correctly? You have a raspberry pi with a direct network connection to an atomic clock? That’s so awesome!
Yes it is intentional.
Some interfaces even expose a way to set the “temperature” - higher values mean more randomized (subjectively more creative) output, lower values mean less randomness. A temperature of 0 makes the model deterministic.
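A sketch of what temperature does mathematically: the logits are divided by the temperature before the softmax, so lower temperatures sharpen the distribution. The logit values here are made up for illustration:

```python
import math

def softmax(logits, temperature):
    """Convert logits to probabilities at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token logits
for t in (2.0, 1.0, 0.1):
    print(t, [round(p, 3) for p in softmax(logits, t)])
# Lower temperature -> probability mass concentrates on the top token;
# as t approaches 0 the distribution approaches pure argmax (deterministic).
```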
It does not perform very well when asked to answer a Stack Overflow question. However, people ask questions differently in chat than on Stack Overflow. Continuing the conversation yields much better results than a zero-shot question.
Also I have found ChatGPT 4 to be much much better than ChatGPT 3.5. To the point that I basically never use 3.5 any more.
No, it’s simply contradicting the claim that it is possible.
We literally don’t know how to fix it. We can put on bandaids, like training on “better” data and fine-tune it to say “I don’t know” half the time. But the fundamental problem is simply not solved yet.
No, because you can’t mathematically guarantee that pi contains long strings of predetermined patterns.
The 1.101001000100001… example by the other user was just that - an example. That number’s expansion is infinite and non-repeating, yet it never contains a 2. Pi is also infinite, but does it contain the number e to 100 digits of precision? Maybe. Maybe not. The point is, we don’t know and we can’t prove it either way (short of stumbling on it by accident).
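That counterexample is easy to write down explicitly; a quick sketch that builds its decimal expansion (a 1, then one 0, a 1, then two 0s, and so on):

```python
def counterexample_digits(n_blocks):
    """Digits of 0.101001000100001...: each block is k zeros
    followed by a 1, for k = 0, 1, 2, ... The gaps keep growing,
    so the expansion never repeats - yet no digit 2 ever appears."""
    return "".join("0" * k + "1" for k in range(n_blocks))

digits = counterexample_digits(50)
print(digits[:15])       # 101001000100001
print("2" in digits)     # False: infinite non-repeating != contains everything
```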