Photon is a strange beast. How do you install it?
It seems to come only as a Docker container. That’s weird. I don’t have Docker installed, and Docker should really be a choice… not the sole means of installation. I see no deb file or tarball. It seems the project has taken a direction that makes it unlikely to ever land in the official Debian repos.
Then it seems as well that their official site “phtn.app” is a Cloudflare site – which is a terrible sign. It shows that the devs are out of touch with digital rights, decentralisation, and privacy. That doesn’t in itself mean the app is bad but the tool is looking quite sketchy so far. Several red flags here.
(edit) I found a tarball on the releases page.
I just need to work out exactly what the effect of the user-configured node block is. In principle, if an LW user replies to either my thread or one of my comments in someone else’s thread, I would still want to see their comments and I would still want a notification. But I would want all LW-hosted threads to be hidden in timelines and search results.
On one occasion I commented in an LW-hosted thread without realising it. Then I later blocked the community that thread was in (forgetting about my past comment). Then at one point I discovered someone replied to me and I did not get the notification. That scenario should be quite rare but I wonder how it would pan out with the node-wide blocking option.
Ah, I see! Found it. Indeed that was not there last time I checked.
I’m on both Lemmy and mbin. I have several Lemmy accounts.
Now I need to understand the consequences of blocking lemmy.world. Is it just the same as blocking every lemmy.world community, or does it go further than that? E.g. if I post a thread and an LW user replies, I would not want their reply blocked from my notifications. I just don’t want LW threads coming up in searches or appearing on timelines.
I think he is talking about admins blocking instances in the settings for the whole node. AFAIK, users on Lemmy and k/mBin have no such setting.
I don’t get why you want users to be able to apply cloudflare filters, though.
Suppose an instance has these users:

- Norm, a pragmatic everyman who gives no thought to tech ethics
- Victor and Terry, who are in Cloudflare’s excluded group
- Cindy, who is sometimes in Cloudflare’s excluded group
- Esther, a digital rights advocate who can reach Cloudflare sites but objects to them on principle
And suppose the instance is a special-interest instance focused on travel. This diverse group has one thing in common: they all want to converge on the expat travel node, and the admin wants to accommodate all of them. Norm, and many like him, are happy to subscribe to countless exclusive and centralised forums; they are pragmatic people with no thought about tech ethics. These subscriptions flood an otherwise free world node with exclusive content. Norm subscribes to !travelpics@exclusivenode.com. Then Victor, Terry, and sometimes Cindy see broken pics in their view because they are excluded by Cloudflare Inc. Esther is annoyed from an ethical standpoint that this decentralised free world venue is being polluted by exclusive content from places like Facebook Threads™ and LemmyWorld. Even though she can interact with it from her clearnet position, she morally objects to feeding content to oppressive services.
The admin’s all-or-nothing choice to federate with LemmyWorld or not cannot satisfy everyone; it’s too blunt an instrument. Per-user, per-community blocks give precision, but keeping up with the flood of LW communities is a non-stop, tedious, manual workload. It would be useful for a user to block all of LemmyWorld in one action. I don’t want to see LW-hosted threads and I don’t want LW forums cluttering search results.
Cloudflare is an exclusive walled garden that shuts out several demographics of people, and I am in Cloudflare’s excluded group. CF nodes like LW break the fedi in arbitrary ways that undermine the fedi’s design and philosophy. So the use case is to get rid of the pollution: to get the broken pieces out of sight and unbury the content that is decentralised, inclusive, open, and free; to reach conversations with people who have the same values, who oppose digital exclusion, oppose centralised corporate control, and embrace privacy. It’s also necessary to de-pollute searches. If I search for “privacy”, the results are flooded with content from people and nodes that are antithetical to privacy. Blocking fixes that. If I take a couple of minutes to block oxymoron venues like lemmy.world/c/privacy and do the same for a dozen other cloudflared nodes, then search for “privacy” again, I get better results.
When crossposting from Lemmy, there is a pulldown list of target communities, which doubles as a search tool. It breaks when there are more communities than fit in the box. And it’s often jam-packed with Cloudflare venues, places that digital rights proponents will not feed. Blocking the junk CF-centralised communities makes it possible to select the target community I’m after.
So it works. The federated timeline is also more interesting now because it’s decluttered of exclusive places. The problem is that it’s more tedious than it needs to be. I am blocking hundreds of LW communities right now. It probably took 500 clicks to get the config I have, and I probably have hundreds more clicks to go. When in fact I should have simply been able to enter ~10 or so nodes.
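For what it’s worth, newer Lemmy versions appear to expose exactly that as a single API call per node (I believe POST /api/v3/site/block, added around 0.19). A rough, untested sketch; the home instance URL and credentials are placeholders, and the endpoint shapes are assumed from the v3 API docs:

```python
import requests

MY_INSTANCE = "https://example-instance.tld"   # placeholder: my home node

# Log in once to get a JWT (Lemmy v3 API).
jwt = requests.post(
    f"{MY_INSTANCE}/api/v3/user/login",
    json={"username_or_email": "me", "password": "secret"},
).json()["jwt"]

# Look up the numeric id of lemmy.world in the federated instance list.
fed = requests.get(f"{MY_INSTANCE}/api/v3/federated_instances").json()
lw = next(i for i in fed["federated_instances"]["linked"]
          if i["domain"] == "lemmy.world")

# One call instead of hundreds of per-community blocks.
requests.post(
    f"{MY_INSTANCE}/api/v3/site/block",
    json={"instance_id": lw["id"], "block": True},
    headers={"Authorization": f"Bearer {jwt}"},
)
```

Repeat the last call for each of the ~10 nodes and the clicking marathon is gone.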
tl;dr:
I’ve been using Lemmy for years, back when there were only 2 or 3 nodes and federation capability did not exist. It’s a shit show. Extremely buggy web clients and no proper, usable desktop clients. I must say it’s sensible that the version numbers are still 0.x. It’s also getting worse: 0.19.3 was more usable than 0.19.5, which introduced serious bugs that make it unusable in some Chromium-based browsers.
mBin has been plagued with serious bugs. But it’s also very young. It was not ready for prime time when it got rolled out, but I think it (or kbin) was pushed out early because many Redditors were jumping ship and those refugees needed a place to go. IMO mbin will outpace Lemmy and take the lead. Mbin is bad at searching, though: you can search for mags that are already federated, but if a community does not appear in a search, I’m not even sure if or how a user can create the federated relationship.
The running goat fuck with Lemmy in recent years is the shitty JavaScript web client. There’s only so much blame you can fairly put on those devs, though, because they need to focus on a working server. The shitty JavaScript web client should really be treated as a proof-of-concept experimental sandbox; JavaScript is unfit for this kind of purpose. It’s on the FOSS community to produce a decent, proper client. What has happened instead is a focus on a dozen or so different phone apps (wtf?) and no real effort on a desktop app.
Both Lemmy and Mbin lack the ability to filter out or block Cloudflare nodes. They both only give a way to block specific forums. So you get immersed/swamped in LemmyWorld’s walled garden, and getting LemmyWorld out of sight means a big manual effort of blocking hundreds of communities. It’s a never-ending game of whack-a-mole.
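On Lemmy, at least, the whack-a-mole can be scripted rather than clicked. A rough, untested sketch that pages through the communities my node already knows about and blocks everything hosted on an unwanted node; it assumes Lemmy’s v3 endpoints for login, community listing, and community blocking, and the instance names and credentials are placeholders:

```python
import requests

MY_INSTANCE = "https://example-instance.tld"   # placeholder: my home node
BLOCK_HOST = "lemmy.world"                     # node whose communities should disappear

# Log in once to get a JWT (Lemmy v3 API).
jwt = requests.post(
    f"{MY_INSTANCE}/api/v3/user/login",
    json={"username_or_email": "me", "password": "secret"},
).json()["jwt"]
headers = {"Authorization": f"Bearer {jwt}"}

page = 1
while True:
    # Page through every community the node has federated with.
    batch = requests.get(
        f"{MY_INSTANCE}/api/v3/community/list",
        params={"type_": "All", "limit": 50, "page": page},
        headers=headers,
    ).json()["communities"]
    if not batch:
        break
    for view in batch:
        community = view["community"]
        # actor_id looks like https://lemmy.world/c/privacy
        if f"//{BLOCK_HOST}/" in community["actor_id"]:
            requests.post(
                f"{MY_INSTANCE}/api/v3/community/block",
                json={"community_id": community["id"], "block": True},
                headers=headers,
            )
    page += 1
```

It only catches communities the node has already seen, so it would have to be re-run as new LW communities federate in, but it beats hundreds of clicks.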
Yes indeed… “threads” in the generic sense of the word pre-dates the web. And the threadiverse is a few years older than “FB Threads™”. That’s what’s so despicable about Facebook hi-jacking the name. It’s also why I will not refer to them as Meta (another hi-jacking of a generic term with useful meaning that their ego-centric marketers fucked up).
As far as we know, Google is not giving up any data. The crawler still must store a copy of the text for the index. The only certainty we have is that Google is no longer sharing it.
Here’s the heart of the not-so-obvious problem:
Websites treat the Google crawler like a first-class citizen. Paywalls give Google unpaid, junk-free access. Then Google search results direct people to websites that treat humans differently (worse). So Google users are led to sites they cannot access. The heart of the problem is access inequality: Google effectively serves to refer people to sites that are not publicly accessible.
I do not want to see search results I cannot access. Google cache was the equalizer that neutralized that problem. Now that problem is back in our face.
From the article:
“was meant for helping people access pages when way back, you often couldn’t depend on a page loading. These days, things have greatly improved. So, it was decided to retire it.” (emphasis added)
Bullshit! The web gets increasingly enshittified and content is less accessible every day.
For now, you can still build your own cache links even without the button, just by going to “https://webcache.googleusercontent.com/search?q=cache:” plus a website URL, or by typing “cache:” plus a URL into Google Search.
You can also use 12ft.io.
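Both workarounds are just URL rewrites, so they are easy to build by hand or in a script. A trivial sketch (example.com is a placeholder; the cache endpoint only works for as long as Google keeps it alive):

```python
# Build the two workaround URLs by hand.

def google_cache_url(url: str) -> str:
    # Google's old cache endpoint, reachable without the search-result button.
    return "https://webcache.googleusercontent.com/search?q=cache:" + url

def twelve_ft_url(url: str) -> str:
    # 12ft.io: prepend its domain to the page URL.
    return "https://12ft.io/" + url

print(google_cache_url("https://example.com/article"))
print(twelve_ft_url("https://example.com/article"))
```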
Cached links were great if the website was down or quickly changed, but they also gave some insight over the years about how the “Google Bot” web crawler views the web. … A lot of Google Bot details are shrouded in secrecy to hide from SEO spammers, but you could learn a lot by investigating what cached pages look like.
Okay, so there’s a more plausible theory about the real reason for this move. Google may be trying to increase the secrecy of how its crawler functions.
The pages aren’t necessarily rendered like how you would expect.
More importantly, they don’t render the way authors expect. And that’s a fucking good thing! It’s how caching helps give us some escape from enshittification. From the 12ft.io FAQ:
“Prepend 12ft.io/ to the URL of the webpage, and we’ll try our best to remove the popups, ads, and other visual distractions.”
It also circumvents #paywalls. No doubt there must be legal pressure on Google from angry website owners who want to force their content to come with garbage.
The death of cached sites will mean the Internet Archive has a larger burden of archiving and tracking changes on the world’s webpages.
The possibly good news is that Google’s role shrinks a bit. Any Google shrinkage is a good outcome overall. But there is a concerning relationship between archive.org and Cloudflare. I depend heavily on archive.org largely because Cloudflare has broken ~25% of the web. The day #InternetArchive becomes Cloudflared itself, we’re fucked.
We need several non-profits to archive the web in parallel redundancy with archive.org.
Bingo. When I read that part of the article, I felt insulted. People see the web getting increasingly enshittified and less accessible. The increased need for cached pages is what justified the existence of 12ft.io.
~40% of my web access is now dependent on archive.org and 12ft.io.
So yes, Google is obviously bullshitting. Clearly there is a real reason for nixing cached pages and Google is concealing that reason.
This is probably an attempt to save money on storage costs.
That’s in fact what the article claims as Google’s reason, but it seems irrational. Google still needs to index websites for the search engine, so the storage is still needed since the data collection is still needed. The only difference (AFAICT) is that Google is simply not sharing that data. Also, there are bigger pots of money in play than piddly storage costs.
Isn’t this different because there are specifically truth-in-advertising laws? Not even a natural person is immune to truth-in-advertising laws. So it seems like Tesla is making a desperate move.
In addition to its first amendment argument, Tesla also said that the California DMV is violating its rights to have a jury trial, under the US Constitution’s 7th Amendment and Article I, Section 16 of California’s Constitution, both of which cover rights to trial by a jury.
Yikes. What does a jury of Tesla’s peers look like? Representatives from 12 other giant corporations?
I’ve been saying for years that Invidious needs to support comments. Glad there’s finally a free world option.
I’m not keen on browser extensions though. Is there a manual way? Is it a matter of searching a particular Lemmy instance for the video ID?
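If it is just a search, something like this might do it manually. A guess, assuming Lemmy’s public v3 search endpoint; the instance and video ID are placeholders:

```python
import requests

INSTANCE = "https://lemmy.example"   # placeholder: whichever instance hosts the comments
VIDEO_ID = "dQw4w9WgXcQ"             # placeholder YouTube video ID

# Search posts for the video ID; posts linking that video should surface,
# and their comment threads would hold the discussion.
results = requests.get(
    f"{INSTANCE}/api/v3/search",
    params={"q": VIDEO_ID, "type_": "Posts", "limit": 20},
).json()
for view in results.get("posts", []):
    post = view["post"]
    print(post["name"], "->", post.get("url"), "| comments:", view["counts"]["comments"])
```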
Ungoogled Chromium indeed reproduces the issue. But so does the public library’s machine, which was likely Firefox on Windows. So I guess it might be hasty to conclude that it’s browser-specific, particularly when other videos on the same instance behave differently in the same browser.
It’s like saying “you’re a bad company… but damn do I like your product and will consume it anyway!” It doesn’t make much sense, logically or morally.
Sony is a dispensable broker/manager that no one likely credits for a work. I didn’t even know who Sony pimped – just had to look it up. The Karate Kid, Spider-Man, Pink Floyd… Do you really think that when someone experiences those works, they walk away saying “what a great job Sony did”?
I don’t praise Sony for the quality of the works they market any more than I would credit a movie theater for a great movie that I experience. Roger Waters will create his works whether Sony is involved or not.
You also seem to be implying they have good metrics on black market activity and useful feedback from that. This is likely insignificant compared to rating platforms like Netflix and the copious metrics Netflix collects.
Can you explain further why grabbing an unlicensed work helps Sony? Are you assuming the consumer would recommend the work to others who then go buy it legitimately?
If it becomes a trend to shoplift Sony headphones, the merchant takes a hit and has to decide whether to spend more money on security, or to simply quit selling Sony headphones due to reduced profitability. I don’t see how that helps Sony. I don’t shoplift myself but if I did I would target brands I most object to.
Thanks for the insights. I was looking for a client, not a server, so maybe this can’t help me. A server somewhat hints that it would be bandwidth-heavy. I’m looking to escape the stock JS web client, but at the same time I am on a very limited uplink. To give an idea, I browse the web with images disabled because they would suck my quota dry.