InfoSec Person | Alt-Account#2
Yes, this would essentially be a detection mechanism for local instances. However, a network trained on all available federated data could still yield favorable results. You may just end up not needing IP addresses and emails. Even just upvotes/downvotes across a set of existing comments would help.
The important point is figuring out all possible data you can extract and feeding it to an ML black box. The black box can deal with things by itself.
My bachelor’s thesis was about comment amplifying/deamplifying on reddit using Graph Neural Networks (PyTorch-Geometric).
Essentially: there used to be commenters who would constantly agree / disagree with a particular sentiment, and these would be used to amplify / deamplify opinions, respectively. Using a set of metrics [1], I fed the data into a Graph Neural Network (GNN), and it produced reasonably good results back in the day. Since PyTorch-Geometric came out, there have been numerous advancements in GNN research as a whole, and I suspect it would be significantly more developed now.
Since upvotes are known to the instance administrator (for brevity, not getting into the fediverse aspect of this), and since their email addresses are known too, I believe that these two pieces of information can be accounted for in order to detect patterns. This would lead to much better results.
In the beginning, such a solution needs to look for patterns, and these patterns need to be flagged as true (bots) or false (users) by the instance administrator - maybe 200 manual flaggings. Afterwards, the GNN could possibly act on its own, based on the confidence of previous pattern matches.
This may be an interesting bachelor’s / master’s thesis (or a side project in general) for anyone looking for one. Of course, there’s a lot of nuances I’ve missed. Plus, I haven’t kept up with GNNs in a very long time, so that should be accounted for too.
Edit: perhaps IP addresses could be used too? That’s one way reddit would detect vote manipulation.
[1] account age, comment time, comment time difference with parent comment, sentiment agreement/disagreement with parent commenters, number of child comments after an hour, post karma, comment karma, number of comments, number of subreddits participated in, number of posts, and more I can’t remember.
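For anyone curious what “feeding metrics into a black box” looks like in practice, here’s a hedged sketch of just the feature-extraction step in plain Python. The field names and schema are made up for illustration; the actual thesis pipeline built a graph of commenters and ran PyTorch-Geometric on top of features like these.

```python
from datetime import datetime, timedelta

def comment_features(comment, parent, account, now):
    """Turn one comment into a numeric feature vector (hypothetical schema,
    loosely following the metrics listed in [1])."""
    return [
        (now - account["created"]).days,                           # account age in days
        comment["created"].timestamp(),                            # comment time
        (comment["created"] - parent["created"]).total_seconds(),  # delay vs. parent
        1.0 if comment["sentiment"] == parent["sentiment"] else -1.0,  # agree / disagree
        comment["children_after_1h"],                              # replies after an hour
        account["post_karma"],
        account["comment_karma"],
        account["num_comments"],
        account["num_subreddits"],
    ]

now = datetime(2024, 1, 1)
acct = {"created": now - timedelta(days=400), "post_karma": 10,
        "comment_karma": 250, "num_comments": 80, "num_subreddits": 12}
parent = {"created": now - timedelta(hours=2), "sentiment": "pos"}
child = {"created": now - timedelta(hours=1), "sentiment": "pos",
         "children_after_1h": 3}

print(comment_features(child, parent, acct, now))
```

Vectors like these become the node features of the graph; the GNN then learns which neighborhoods of the comment graph look like coordinated amplification.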
This website shows the SearXNG public instances. It is updated every 24 hours, except the response times, which are updated every 3 hours. It requires JavaScript until issue #9 is fixed.
I think the difference lies in two things:
You can share an article from a user of a different instance. In this case, your instance will have to look up the rel="author" tag and check whether the URL is a fediverse instance. I’m not sure whether this scales as well as a tag that directly indicates the author is on the fediverse - imagine a scenario with 100; 1,000; 10,000; or 100,000 instances on different versions.
The tag is to promote that the author is on the fediverse. If the rel=“author” tag points to twitter for example, maybe Eugen Rochko + team didn’t want a post on the fediverse to link to twitter.
These are my thoughts and idk if they’re valid. But I think just reusing the rel=“author” isn’t the most elegant solution.
I know that Mastodon already uses rel="me" for link verification (I use it on my website + my Mastodon account), but that serves a different purpose - verification. There’s still no way of guaranteeing that the rel="author" tag points to a fediverse account. You’re putting the onus on the Mastodon instance.
It works in a pretty neat way:
We’ve decided to create a new kind of OpenGraph tag—the same kind of tags you have on your website to determine which thumbnail image will appear on the preview for the page when shared on Discord, iMessage, or Mastodon. It looks like this: `<meta name="fediverse:creator" content="@Gargron@mastodon.social" />`.
via: https://blog.joinmastodon.org/2024/07/highlighting-journalism-on-mastodon/
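Extracting that tag from a page needs nothing beyond an HTML parser. A minimal sketch with Python’s stdlib (the tag name is from the blog post above; everything else here is illustrative):

```python
from html.parser import HTMLParser

class FediverseCreatorParser(HTMLParser):
    """Collects the content of the first <meta name="fediverse:creator"> tag."""
    def __init__(self):
        super().__init__()
        self.creator = None

    def handle_starttag(self, tag, attrs):
        if tag == "meta" and self.creator is None:
            d = dict(attrs)
            if d.get("name") == "fediverse:creator":
                self.creator = d.get("content")

html = ('<html><head>'
        '<meta name="fediverse:creator" content="@Gargron@mastodon.social" />'
        '</head></html>')
p = FediverseCreatorParser()
p.feed(html)
print(p.creator)  # @Gargron@mastodon.social
```

Compared to chasing rel="author" links and probing whether each one is a fediverse instance, the dedicated tag resolves to an account handle in a single pass over the HTML.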
Isn’t Angstrom 10^-10 meters? And nanometers 10^-9 meters? So 20A (assuming A = Angstrom) is just 2nm?
Are they trying to say that by moving to this new era, they’ll go single digit Angstrom i.e., 0.x nm?
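As a quick sanity check on the unit math (plain Python, no claims about what the marketing actually means):

```python
ANGSTROM = 1e-10   # meters
NANOMETER = 1e-9   # meters

angstroms = 20
nm = angstroms * ANGSTROM / NANOMETER
print(f"{angstroms} A = {nm:g} nm")  # 20 A = 2 nm
```

So “20A” is just a rebranding of 2 nm; going “single-digit angstrom” would indeed mean sub-nanometer labels.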
Compiling a debug version doesn’t affect the generated code; it just stores extra information about symbols. The whole shtick about the debugger replacing instructions with INT3 still happens.
You can validate that the code isn’t affected yourself by running objdump on two binaries, one compiled with debug symbols and one without. Otherwise if you’re lazy (like me 😄):
https://stackoverflow.com/a/8676610
And for completeness: https://gcc.gnu.org/onlinedocs/gcc-14.1.0/gcc/Debugging-Options.html
Excellent question!
Before replacing the instruction with INT 3, the debugger keeps a note of what instruction was at that point in the code. When the CPU encounters INT 3, it hands control to the debugger.
When the debugging operations are done, the debugger restores the original instruction in place of the INT 3 and rewinds the instruction pointer by one byte (INT 3 is a one-byte opcode), thereby ensuring that the original instruction is executed.
https://en.wikipedia.org/wiki/INT_(x86_instruction) (scroll down to INT3)
https://stackoverflow.com/a/61946177
The TL;DR is that it’s used by debuggers to set a breakpoint in code.
For example, if you’re familiar with gdb, one of the simplest ways to make code stop executing at a particular point in the code is to add a breakpoint there.
GDB replaces the instruction at the breakpoint with 0xCC, which happens to be the opcode for INT 3 (generate interrupt 3). When the CPU encounters the instruction, it generates interrupt 3, following which the kernel’s interrupt handler sends a signal (SIGTRAP) to the debugger. Thus, the debugger knows it’s meant to start a debugging loop there.
… I am 100% certain that if they switched to being individually wrapped tomorrow, a complaint about excessive packaging would be one of the top posts here.
You’re undeniably right. The best situation would be to not have any wrapping at all… but with the crumb situation, that’d be another top post here :/
Surprised no one’s mentioned HTTP Cats yet:
Personally, HTTP 405 (Method not allowed) is my favorite:
That’s not a very valid argument.
First and foremost, most devs probably see it as a job and they do what they’re told. They don’t have the power to refute decisions coming from above.
Second, in this economy where jobs are scarcer than a needle in multiple haystacks, people are desperate to get a job.
Third, yes, there may be some Microsoft (M$) fan-people who end up being devs at M$. Sure, they may willingly implement the things upper management may request. However, I’m not sure whether that’s true for most of the people who work at M$.
Your comment suggests shifting the blame to the devs who implement the features that upper management requests. Don’t shoot the (MSN) messenger.
Looks cool and I’m glad something new has arrived after nitter.
A few things, however:
See Wendover Productions’ most recent video, “The Increasing Reality of War in Space” (from around 7:54); they talk about SpaceX launching unknown satellites and not reporting it either.
Will you (the community) be setting your username to your public username (a username you use everywhere) or something that’s different from your public username?
Idk why, but Signal feels more… personal(?) and I’d hate for random people to stumble across my Signal account just by guessing that my Signal username is my public username.
I’d be fine if they got my Discord account, mastodon account, Lemmy account (they’re all different usernames anyway) because they’re public-ish accounts. Signal feels less public and I’d want to go with a username that only I can send to people I know.
It looks like there will be a message requests area and it looks like usernames can also be changed (should a username ever be doxxed).
I’m still on the fence.
- Got a text-based launcher (Lunar Launcher)
By this, do you mean this launcher for Android? Searching DuckDuckGo predominantly leads me to a Minecraft launcher with the same name.
That title is… something
Yep, a few forks were identified within a few hours. I think the maintainers had forks too.