  • Giphy has a documented API that you could use. There have been bulk downloaders, but I didn’t see any that had recent activity. However you still might be able to use one to model your own script after, like https://github.com/jcpsimmons/giphy-stacks
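
    If it helps, here’s a rough sketch of what such a script could look like against Giphy’s documented search endpoint. The API key, query, and output directory are placeholders; treat this as a starting point rather than a tested tool:

    ```python
    import os

    import requests

    API_KEY = "YOUR_GIPHY_API_KEY"  # placeholder - register at developers.giphy.com
    QUERY = "cats"                  # placeholder search term
    OUT_DIR = "gifs"

    os.makedirs(OUT_DIR, exist_ok=True)
    offset = 0
    while True:
        # Giphy's documented search endpoint, paginated 50 results at a time
        resp = requests.get(
            "https://api.giphy.com/v1/gifs/search",
            params={"api_key": API_KEY, "q": QUERY, "limit": 50, "offset": offset},
            timeout=30,
        )
        resp.raise_for_status()
        results = resp.json()["data"]
        if not results:
            break
        for gif in results:
            url = gif["images"]["original"]["url"]
            with open(os.path.join(OUT_DIR, f"{gif['id']}.gif"), "wb") as f:
                f.write(requests.get(url, timeout=60).content)
        offset += len(results)
    ```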

    There were downloaders for Gfycat - gallery-dl supported it at one point - but Gfycat itself is shut down now. However, you might be able to find collections that other people downloaded and are now hosting. You could also use the Internet Archive - they have documented tools and APIs.

    There’s a Tenor mass downloader that uses the Tenor API and an API key that you provide.

    Imgur hosts GIFs and is supported by gallery-dl, so that’s an option.

    Also, read over https://github.com/simon987/awesome-datahoarding - there may be something useful for you there.

    In terms of hosting, it would depend on my user base and whether I wanted users to be able to upload GIFs, too. If it were just my close friends, then Immich would probably be fine, but if people I didn’t know directly were using it, I’d want a more refined solution.

    There’s Gifable, which is focused on exactly this use case, but it looks like it has a fairly small following. I haven’t used it myself to see how suitable it is. If you self-host it (or something else that uses S3), note that you can use MinIO or LocalStack for the S3 container rather than using AWS directly. I’m using MinIO as part of my stack now, though for a completely different app.
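
    As a quick sketch of that swap: with MinIO running locally, a standard S3 client just needs its endpoint overridden. The credentials below are MinIO’s defaults and the bucket/file names are made up:

    ```python
    import boto3  # pip install boto3

    # Point a standard S3 client at a local MinIO container instead of AWS.
    # "minioadmin" is MinIO's default access key/secret; change it in production.
    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",
        aws_access_key_id="minioadmin",
        aws_secret_access_key="minioadmin",
    )
    s3.create_bucket(Bucket="gifs")                       # hypothetical bucket name
    s3.upload_file("example.gif", "gifs", "example.gif")  # hypothetical file
    ```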

    MediaCMS is another option. Less focused on GIFs but more actively developed, and intended to be used for this sort of purpose.


  • Wouldn’t be a huge change at this point. Israel has been using AI to determine targets for drone-delivered airstrikes for over a year now.

    https://en.m.wikipedia.org/wiki/AI-assisted_targeting_in_the_Gaza_Strip gives a high-level overview of Gospel and Lavender, and there are news articles in the references if you want to learn more.

    This is at least being positioned better than the ways Lavender and Gospel were used, but I have no doubt that it will be used to commit atrocities as well.

    “For now, OpenAI’s models may help operators make sense of large amounts of incoming data to support faster human decision-making in high-pressure situations.”

    Yep, that was how they justified Gospel and Lavender, too - “a human presses the button” (even though they’re not doing anywhere near enough due diligence).

    “But it’s worth pointing out that the type of AI OpenAI is best known for comes from large language models (LLMs)—sometimes called large multimodal models—that are trained on massive datasets of text, images, and audio pulled from many different sources.”

    Yes, OpenAI is well known for this, but they’ve also created other types of AI models (e.g., Whisper). I suspect an LLM might be part of a solution they would build but that it would not be the full solution.


    Both devices have integrated memory, so that 16 GB will look more like an 11/5, 12/4, or maybe even 14/2 split. The Steam Deck is also $400 for the LCD model or $550 for the OLED, not $800. It’s reasonable to expect more performance when you pay more.

    Because the Steam Deck has a lower native resolution, less of the RAM needs to go to the integrated GPU. Downscaling from 1080p to 720p doesn’t look good, either. You could downscale to 540p if supported, but if you need to do that (vs. choosing to for an emulated game), the result probably won’t be pretty.

    This device is also running Windows, rather than a streamlined Linux-based launcher, meaning that more of that RAM will be taken up by OS processes by default.

    The article talks about how the 8840U benefits from more fast RAM. You won’t get near the 8840U’s full potential for gaming with 16 GB. 24 GB, on the other hand, would have been enough that games expecting 16 GB of system RAM could get it, even while devoting 6-7 GB to the GPU and 1-2 GB to the OS.


    Unless something has changed, it did. The linked page reads:

    “And, obviously, this POC is open source, the code is published here on our forge.”

    The link takes you to their repos. The server repo has instructions on self-hosting directly on your server or with Docker. The app repo has code for both the iOS and Android apps. That’s good, because the iOS app at least doesn’t have a built-in way to select a different backend server.

    Whisper is by OpenAI and as far as I know they have not shared the training code, much less the data sets, so the best you can do is fine-tune the models they’ve provided.
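
    (The released checkpoints are still easy to run locally, to be clear - a minimal sketch using the openai-whisper package, with a placeholder audio file:)

    ```python
    import whisper  # pip install openai-whisper

    # Load one of OpenAI's released checkpoints; the weights are public
    # even though the training code and datasets are not.
    model = whisper.load_model("base")
    result = model.transcribe("meeting.mp3")  # placeholder audio path
    print(result["text"])
    ```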

    If the use of Whisper is a problem but the project is otherwise interesting to you, you could ask them to consider using a different STT solution (or allowing the user to choose between different options). I’m not aware of any fully open STT applications that are considered to be as capable as Whisper, but if you know of one, that would be great info to share with them.



  • As it is, you only see new comments if you scroll past the post again (and your client has refreshed it) or if you open it directly. If your client hasn’t updated the comment count or if you refresh your feed and the post falls off, you’ll never see it anyway.

    A “Watch” feature would solve this better. If you watch a post, you get aggregated notifications for edits and comments on the post. If you watch a comment, you get aggregated notifications for replies to it or any of its children.

    By aggregated notifications, I mean that you’d get one notification that said “The post you watched has been edited; 5 new comments” rather than a notification for each new comment.
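
    As a rough illustration of that aggregation (all names here are made up, not anything Lemmy actually implements):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class WatchState:
        """Hypothetical per-user state for one watched post."""
        edited: bool = False
        new_comment_ids: set = field(default_factory=set)

        def record_edit(self) -> None:
            self.edited = True

        def record_comment(self, comment_id: int) -> None:
            self.new_comment_ids.add(comment_id)

        def render(self) -> str | None:
            # Collapse everything since the user's last visit into one line.
            parts = []
            if self.edited:
                parts.append("The post you watched has been edited")
            if self.new_comment_ids:
                parts.append(f"{len(self.new_comment_ids)} new comments")
            return "; ".join(parts) if parts else None
    ```

    Feeding one edit and five comments into that state and calling render() would produce “The post you watched has been edited; 5 new comments”, matching the example above.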

    Then, in addition to exposing a “Watch” action on posts and comments, clients could also let users automatically hide watched posts, either by marking them as hidden outright or by hiding watched posts only until they have updates.

    If the latter approach were taken, notifications might not even be necessary - the post could just get added back into the user’s feed when changes were made. It would result in a similar experience to forums, where new activity in a topic would bump it to the front, but it would only impact the people who were watching it.

    You can kinda get that behavior by sorting your feed by Active, but this could be used with other sorting methods.



  • 500 grams of what, though? Folgers?

    Even a couple of months ago, the average price per pound (454 grams) of ground coffee beans in the US was double that, so spending $3.00 per pound would mean getting cheaper-than-average beans - and therefore likely lower quality than average, or at least lower perceived quality than average.

    The sorts of beans that companies tend to stock (IME) that are perceived as higher quality aren’t the same brands that I tend to buy (generally from local roasters), but they’re comparably priced. For a 5-pound (2268 grams) bag of one of their blends (which are roughly half the price of their higher-end beans), it’s similar to what you’d pay for 5 pounds of Starbucks beans - about $50-$60.

    Often when a company says “free coffee,” they don’t mean “free batch-brewed drip coffee,” but rather, free espresso beverages, potentially in a machine (located in the break room) that automates the whole process. I assume that’s what Intel is doing.

    At $10 per pound (16 ounces) and roughly 1 ounce (28 grams) of beans per two-ounce pour of espresso, if each person drinks two per day on average, that works out to $1.25 for coffee per person per day.
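
    (Spelled out, that back-of-the-envelope math is:)

    ```python
    price_per_pound = 10.00        # dollars per 16 oz of beans
    beans_per_shot_oz = 1          # ~28 g per two-ounce pour
    shots_per_day = 2              # per person, on average

    cost_per_oz = price_per_pound / 16                  # $0.625 per ounce
    daily_cost = cost_per_oz * beans_per_shot_oz * shots_per_day
    print(f"${daily_cost:.2f} per person per day")      # $1.25
    ```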

    However, logistics costs (delivering coffee to all the company’s break rooms) and operational costs have to be added on top of that - the automatic machines and their repairs at minimum, or baristas, or adding the brewing responsibility to someone’s existing job (and thus needing more people or more hours). Then add in the cost of milk, milk alternatives, sweeteners, cups, lids, stir sticks, etc.

    Obviously if they just had free coffee grounds and let people handle the actual brewing of coffee in the break room, it would be much cheaper. But if the goal is to improve morale, having higher quality coffee that people don’t have to make themselves is going to do that better.


  • Thanks for clarifying! I’ve heard nothing but praise for Kagi from its users so that’s what I was assuming, but Searxng has also been great so I wouldn’t have been too surprised if you’d compared them and found its results to be on par or better.

    By the way, if you’re self hosting Searxng, you can add your own index. Searxng supports YaCy, which is an actively developed, open source search index and crawler that can be operated standalone or as part of a decentralized (P2P) network. Here are the Searxng docs for that engine. I can’t speak to its quality as I still haven’t set it up, though.
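
    If you want to poke at a YaCy peer before (or instead of) wiring it into Searxng, you can hit its HTTP search API directly. A quick sketch against a default local install - the port and the yacysearch.json response shape are my assumptions from the docs, so verify against your instance:

    ```python
    import requests

    # Query a local YaCy peer directly (8090 is its default web port).
    resp = requests.get(
        "http://localhost:8090/yacysearch.json",
        params={"query": "self-hosting", "maximumRecords": 10},
        timeout=30,
    )
    resp.raise_for_status()
    # The response roughly follows an OpenSearch-style JSON layout.
    for item in resp.json()["channels"][0]["items"]:
        print(item["title"], "-", item["link"])
    ```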



  • Understandably frustrating, especially if you’re new to investing. But it’s expected that the market will have both ups and downs.

    The best advice I can give is to choose a good investment allocation and then stick to it. Contribute as much as you can each pay period or month and avoid looking at your balance as much as possible. You should figure out a rebalancing strategy, and you’ll probably need to look at your account to do that. Also, see The Best Order of Operations For Saving For Retirement.

    Right now you have unrealized losses, but you won’t have actually lost any money (i.e., you’ll have no “realized losses”) unless you sell while you’re down. As it’s a retirement account that you just started, I assume you aren’t planning to retire in the next decade, much less the next three years.

    Is this your only retirement account? If so, why have you not been continuing to add money to it? If you wait to do that until the market recovers, you’ll lose out on all the gains between now and then.

    I know you haven’t said you’re considering selling, but I recommend you check out the “Maintain Discipline” section of the Bogleheads investment philosophy, just in case that’s on your mind. I also recommend that you read up on dollar cost averaging (if you’re investing in a retirement plan every pay period, you’re already doing this).
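
    (The idea, in toy-example form: a fixed contribution buys more shares when prices dip, so your average cost per share ends up below the average price you saw. The numbers below are made up:)

    ```python
    contribution = 500.00                # fixed amount invested each period
    share_prices = [50, 40, 25, 40, 50]  # hypothetical dip and recovery

    shares = sum(contribution / p for p in share_prices)  # 65.0 shares
    avg_cost = contribution * len(share_prices) / shares  # ~$38.46 per share
    print(f"{shares:.1f} shares at an average cost of ${avg_cost:.2f}")
    # The simple average price was $41, so the dip worked in your favor.
    ```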

    You pointed out that the entire market has been impacted. I haven’t personally been paying attention in enough detail to confirm that (and my accounts that I just checked have gone up about 10% over the past three years, not down), but if so, that means you could change your asset allocation without selling low and buying high. I’m not saying you should change it, but if you take the time to learn about different investment strategies and decide a different one works for you, it’s nice to not have to sell your current investments while they’re underperforming relative to your new investments. (On the other hand, you can always change the allocation for your future investments without worrying about that.)



    “the law has already made it clear you cannot copyright the output of an LLM.”

    That’s true in this context and often true generally, but it’s not completely true. The Copyright Office has made it clear that the use of AI tools has to be evaluated on a case-by-case basis, to determine if a work is the result of human creativity. Refer to https://www.copyright.gov/ai/ai_policy_guidance.pdf for more details.

    For example, they state that the selection and arrangement of AI outputs may be sufficient for a work to be copyrightable. And that’s without doing any post-processing of the AI’s outputs.

    They don’t talk about situations like this, but I suspect that, if given a prompt like “Rewrite this paragraph from third person to first person,” where the paragraph in question is copyrighted, the output would maintain the same copyright as the input (particularly if performed faithfully and without hallucinations). Such a revision could be made with non-LLM technology, after all.