Yeah… That shit is going on right here on Lemmy. Some of them have even been allowed to become so popular that we know them by name.
I think Lemmy is so tiny that those named below truly are just utter idiots doing that work for free.
I don’t rule it out. The prior era of “reddit alternatives” in the Voat era was quickly overrun too even though they were very small. The key to the internet has always been first mover advantage. If they have enough power to manipulate the top sites, it would take very little to hedge bets on budding platforms. They risk losing their advantage if a replacement platform establishes itself without them. That’s pretty much the whole history of modern tech. To actively seek and snuff out your competitors.
Someone always has to be king shit of whatever hill they’re on.
Ozma, yogthos, tokenboomer, alcoholicorn… am I forgetting anyone?
Honestly: tokenboomer likely is all those accounts as well.
You will notice that anytime you respond to boomer with disagreement, multiple accounts will begin flooding you, and they all weave in and out of the conversations as if they were the ones who said things said by others.
You wouldn’t expect a paid professional to be so sloppy, but on the other hand, if he was good at his job, he wouldn’t be working for Hamas.
UniversalMonk potentially
.ml .ee
Is .ee one of the ones with bad rep? I picked it randomly.
It’s not as bad as .ml, which actively filters content it doesn’t like.
Starts with “L,” ends with “bann.”
Knew I was leaving someone out. How non-inclusive of me lol
deleted by creator
It’s just basic economics. The amount of power and influence you can generate with a disinformation troll farm dramatically outweighs the cost. It’s a high impact, low cost form of geopolitical influence. And it works incredibly well.
It’s like saying that bullets and knives being more convenient than bricks for killing people is basic economics.
That doesn’t explain why those brownshirt types have guns and knives and kill people on the streets while you don’t, why the police don’t shoot them, and, more than that, why they’d arrest you if you did something to the brownshirts.
What I wanted to take from this bad analogy is that the systems are designed for troll farms to work, and not vice versa. Social media are an instrument to impose governments’ will upon population. There are things almost all governments converge on, so the upsides of such existing outweigh the downsides.
Our world is dangerous.
You make it sound like everyone should be doing it. We could also save a lot of money invested into courts and prisons if we just executed suspects the state deemed guilty.
Can we open the farm that convinces people to stop listening to farms?
These are called schools and universities and whatnot.
You mean the “radical liberal elite” that are “indoctrinating our children” and secretly control our country???
Potayto, potahto, all depends on which reality you choose to inhabit.
No, only the farm that convinces people to listen to other farm(s). And those already exist, anyway :(
The trace buster buster BUSTER!
That’s my word, playa!
You are awesome. Can’t believe someone picked up that reference.
That movie probably didn’t age well, but it’s a gem
Everyone is doing it. What are you even talking about? You can’t unfuck the chicken.
The imperial core is so fragile and indoctrinated that troll farms are an existential threat.
They NEED shit like that to exist. Bullshit needs more bullshit to survive. This is why despite murdering countless people and endless campaigns funded by billions of dollars there are still leftist movements that won’t die.
My new rule of social media: Unless I know and trust the person or the organization making a post, I assume it’s worthless unless I double check it against a person or organization I trust. Opinions are also included in this rule.
Hello fellow Western capitalist nation citizen, it is I, a relatable friend colleague of David! David will vouch for me, what a capital guy!
So, me and the boys (David’s idea, you know what he’s like) thought it might be pretty radical to scooby over to our local air force base and take some cool photos of us doing kick flips with their buildings and infrastructure in the background.
We think it would look totally tubular and really show those capitalist pig dogs (who we outwardly claim to love otherwise the secret police present in every Western nation will disappear us and our families, something we deny, but other nations with better intelligence services who don’t lie to the great people of their glorious nation educate their populace about) what we think of their money war machine.
Oh! And then we can up post the photographs immediately to TikTok for the klout points!
What do you say, are you in, Breve? David said you’d be in.
Okay so now that everyone is here, the above comment is precisely what to look for when checking for signs of stroke.
On to .ml for the rest of our addled-brains tour. Stay with your buddy and the group!
This is such a David thing to say, ho ho, you japester!
If it ain’t broke, don’t fix it.
In Soviet Russia, don’t fix broke you!
And continues to do so. You’ve grown since 2013
People complain about Russian troll factories, but hey, they are helping develop a reddit alternative so they can’t be all that bad.
I’d love to debate politics with you but first tell me how many r’s are in the word strawberry. (AI models are starting to get that answer correct now though)
I tried this with Gemini. Regardless of the number of r’s in a word (zero to three), it said two.
So ask it about a made up or misspelled word - “how many r’s in the word strauburrry” or ask it something with no answer like “what word did I just type?”. Anything other than, “you haven’t typed anything yet” is wrong.
But it’s a phrase you typed, the very one that contains the question, unless you ask by voice or in a picture
It’s 3, right? Am I real? Why can’t AI guess that one?
LLMs look for patterns in their training data. So, for example, if you asked “2+2=”, it would look at its training data and find a high likelihood that the text following “2+2=” is “4”. It’s not calculating; it’s finding the most likely completion of the pattern based on what data it has.
So it’s not deconstructing the word strawberry into letters and running a count… it tries to finish the pattern and fails at simple logic tasks that aren’t baked into the training data.
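For contrast, this is the kind of deterministic counting a normal program does (a trivial Python sketch, nothing LLM-specific):

```python
# A plain program counts characters exactly; an LLM only predicts the next
# token, so no counting step like this ever happens inside it.
word = "strawberry"
print(word.count("r"))  # 3 - an exact character count, not a pattern guess
```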
But a new model, chatgpt-o1, checks against itself in ways I don’t fully understand and now scores around 85% on an international mathematics standardized test, so they are making great improvements there. (Compared to a score of around 14% from the model that can’t count the r’s in strawberry.)
Oversimplification, but partly it has to do with how LLMs split language into tokens, and some of those tokens are multi-letter. To us, when we look for R’s, we split it like S - T - R - A - W - B - E - R - R - Y, where each character is a token, but LLMs split it something more like STR - AW - BERRY, which makes predicting the correct answer difficult without a lot of training on the specific problem. If you asked it to count how many times STR shows up in “strawberrystrawberrystrawberry”, it would have a better chance.
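If you want to poke at this yourself, something like the snippet below does the trick. It’s a rough sketch using OpenAI’s tiktoken library; the exact pieces depend on which encoding you pick, so the STR - AW - BERRY split above is only an illustration and may not match what you see.

```python
# Sketch: inspect how a real tokenizer chunks a word into multi-letter pieces.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
pieces = [enc.decode([t]) for t in tokens]
print(pieces)  # multi-letter chunks, not individual characters

# Counting whole chunks is closer to the patterns the model actually sees.
print("strawberrystrawberrystrawberry".count("strawberry"))  # 3
```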
Thanks, you explained it well enough this layman kinda gets it!