



https://en.wikipedia.org/wiki/MathWorld
https://mathworld.wolfram.com/
I see people using Wolfram occasionally.


It is the beginning of cataclysm-simulation_67
Complex social hierarchy is a super important aspect to account for too. In the proprietary software realm, you infer confidence from the accumulated wealth hierarchy. In FOSS the hierarchy is not wealth but reputation, like in academia or the film industry. If some company in Oman makes some really great proprietary app, are you going to build your European startup on top of it? Likewise, if in FOSS someone with no reputation makes some killer app, the first question to ask is whether this is going to anchor or support a stellar reputation. Maybe they are just showing off skills to land a job. If that is the case, they are just like startups that are only looking to get bought up quickly by some bigger fish. We are all conditioned to think of hoarded wealth as the only form of hierarchy, but that is primitive. If all the wealth were gone, humans would still fundamentally be complex social animals and would always establish a complex hierarchy. This is one of the spaces where it is different.


Check the DNS logs. Discord is proprietary, undocumented garbage that connects to dozens of raw IP addresses with no documentation, rhyme, or reason. You have no clue what or who is connected in that mess of garbage, or why they are there.
It is like saying: I’m going to give you access to a phone, a special phone, and it just works.
It is a prison phone. You are in prison when you use it… technically. But you don’t really “see” the “place”. The other inmates are all around you. They see you, but you don’t see them. Never mind that, though; the phone just works. Lots of people love that phone. Nobody asks questions. Just use the phone and pay no attention to all the rest. It will be fine.
Business model? Viability? Never mind all of that. Don’t ask questions like that. The numbers do not add up in the slightest. That is the magic of prisons. Justice costs a lot, but it is worth it, right? The magic phone is easy. Ask no questions. Expect no answers. Totally normal; everyone is doing it.
The whole thing is a mass of clueless zombie morons that ask no questions and have no idea who or what they are connected to through all those raw IP addresses, or why. They all give trust blindly, without accountability or understanding.
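To see this for yourself, you can harvest the remote addresses an app holds open (e.g. from DNS logs or `ss -tnp` output) and try reverse lookups on them. A minimal sketch; the two sample addresses are stand-ins (203.0.113.7 is a documentation-range IP), not real Discord endpoints:

```python
import socket

def reverse_lookup(ip):
    """Attempt a PTR (reverse DNS) lookup; return a placeholder when
    the address resolves to no human-readable name."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:  # covers socket.herror / socket.gaierror
        return "(no PTR record)"

# Feed in remote addresses harvested from DNS logs or `ss -tnp` output.
for ip in ["127.0.0.1", "203.0.113.7"]:
    print(ip, "->", reverse_lookup(ip))
```

Addresses that come back with no PTR record are exactly the undocumented raw IPs described above.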


Is Fluxer’s network as f’ed up as Discord’s, without even the minimum democratic standard of human-readable domains, or is it the same slavery of dozens of undocumented raw IP addresses?
llama.cpp is at the core of almost all offline, open-weights models. The server it creates is OpenAI API compatible. Oobabooga’s Textgen WebUI is more GUI oriented but is based on llama.cpp. Oobabooga has the setup for loading models with a split workload between the CPU and GPU, which makes larger GGUF-quantized models possible to run; llama.cpp provides this feature, and Oobabooga implements it. The model loading settings and softmax sampling settings take some trial and error to dial in well. It helps if you have a way of monitoring GPU memory usage in real time. For example, I use a script that appends GPU memory usage to my terminal window title bar until inference time.
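A title-bar monitor like the one described can be sketched in a few lines. This assumes an NVIDIA card (so `nvidia-smi` is available) and an xterm-compatible terminal; the loop at the bottom is only illustrative:

```python
import subprocess

def gpu_mem_used_mib():
    """Read used VRAM in MiB via nvidia-smi (assumes an NVIDIA GPU)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return int(out.splitlines()[0])

def title_escape(text):
    """OSC 0 escape sequence that sets an xterm-compatible window title."""
    return f"\033]0;{text}\007"

# On a machine with nvidia-smi, something like this in a polling loop
# keeps the title bar current while dialing in the CPU/GPU layer split:
# print(title_escape(f"GPU: {gpu_mem_used_mib()} MiB"), end="", flush=True)
```

Watching that number while adjusting how many layers are offloaded to the GPU is the trial-and-error loop mentioned above.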
Ollama is another common project people use for offline open-weights models, and it also runs on top of llama.cpp. It is a lot easier to get started with in some instances, and several projects use Ollama as a baseline for “Hello World!” type stuff. It has pretty good model loading and softmax settings without any fuss, but it does this at the expense of only running on GPU or CPU, never both in a split workload. This may seem great at first, but if you never experience running the much larger quantized models in the 30B–140B range, you are unlikely to have success or a positive experience overall. The much smaller models in the 4B–14B range are all that are likely to run fast enough on your hardware AND completely load into your GPU memory if you only have 8GB–24GB. Most of the newer models are actually Mixture of Experts architectures. This means it is like loading ~7 models initially, but then only inferencing two of them at any one time. All you need is the system memory, or the DeepSpeed package (which uses the disk drive for the excess space required), to load these larger models. Larger quantized models are much, much smarter and more capable. You also need llama.cpp if you want to use function calling for agentic behaviors. Look into the agentic API and pull request history in this area of llama.cpp before selecting which models to test in depth.
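Because both servers speak the OpenAI-style chat API, the same request body works against either one. A minimal sketch of what such a request looks like; the port, model name, and sampling values here are placeholder assumptions, not defaults from either project:

```python
import json

# Hypothetical local endpoint; llama.cpp's server exposes an
# OpenAI-compatible /v1/chat/completions route.
URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt, model="local-gguf",
                       temperature=0.7, top_p=0.9):
    """Build the JSON body for an OpenAI-style chat completion call.
    temperature and top_p are the softmax sampling knobs mentioned above."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # softmax temperature
        "top_p": top_p,              # nucleus sampling cutoff
    }

body = build_chat_request("Explain GGUF quantization in one sentence.")
print(json.dumps(body, indent=2))
# POST this body to URL (with a running server) to get a completion back.
```

Being able to swap the backend without touching client code is a big part of why the OpenAI-compatible API matters.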
Hugging Face is the go-to website for sharing and sourcing models. It is heavily integrated with GitHub, so it is probably just as toxic long term, but I do not know of a real FOSS alternative for that one. Hosting models is massive I/O for a server.


K&R?


Graduated pacman emerges… and we all know emerge is Gentoo. This one doesn’t compile.




Most people’s routers are already up 24/7.
We should be able to do our own DNS. Who cares if it is on the wider clearweb. You are paying for an IP address with your internet connection. With servers running verified hardware and signed code, all we need is a half dozen nodes mirroring our own DNS. There would need to be a backup proxy for the few terrible providers that cause issues with IP addresses. The addresses are not static, but they do not change very often. At worst, you hit a manual button to reset, or wait 10 minutes before the DNS updates.
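The core of such a mirror node is just a decision: has my public IP changed, and am I allowed to push an update yet? A minimal sketch of that logic, with the 600-second backoff mirroring the "wait 10 minutes" idea above (all names here are hypothetical):

```python
def should_push(cached_ip, current_ip, last_push_s, now_s,
                min_interval_s=600):
    """Decide whether a mirror node should publish a new A record.
    Rate-limited so a flapping connection cannot spam the other nodes."""
    if current_ip == cached_ip:
        return False  # address unchanged: nothing to do
    return now_s - last_push_s >= min_interval_s  # enforce the backoff

# A node polls its public IP periodically, then:
print(should_push("203.0.113.7", "203.0.113.7", 0, 10_000))   # False
print(should_push("203.0.113.7", "198.51.100.4", 0, 10_000))  # True
```

Everything else (actually detecting the public IP and pushing the record to the peer nodes) sits on top of this check.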


PipePipe is better than NewPipe. I use F-Droid’s VLC front end for local music, because the built-in Android back end is VLC. For everything else, in-browser.


Rπ is proprietary. You really need a hard drive for storage. The point is TPM-based encryption with no user configuration or worry about securing the thing. It just works, with no excuses.
An OCR tool to auto-generate suggested alt text. The path of least resistance needs to be lowered.
Alternatively, inverting the paradigm is likely to cause fewer issues and less push back: give the automated tool to the end user in need of the text version. This obviously creates the issue of data quality and trust, but for the smaller group. What if there were a reply field silently posted to everyone’s notifications feed indicating anonymous instances of the tool being used to fill in the gaps for alt text? The message would need to be opt-out, or carefully presented. Perhaps it could be possible to modify the post itself via the tool? Better yet, make the alt text field a Wikipedia-style affair anyone with an account can edit, but with a lock available to the OP. That would create much healthier awareness of the need for alt text, as people posting content will see the places where gaps are filled by an automated tool. It gives them the chance to edit. This does little to initially improve the experience of the most active alt text users, but it creates a strong cultural shift in awareness that should improve the situation greatly in the long term, IMO.