Those were words, yes.
How the fuck you were able to mash them together like that is beyond me.
I just ignored all the ads and any news, so you are likely correct. I did think that most of the games were mobile, but I must have been mistaken in thinking it was playable on Netflix clients on your TV or something.
I used to play Pac-Man on my old FireTV with the controller while I was killing time in the mornings before work, so I thought it was similar in that regard.
It should have been a AAAAA studio. That is where they fucked up.
Or, they fucked up thinking that people wanting to watch movies wanted to play games…
Consoles made sense when they required specialized hardware. The $700 for a PS5 is probably better spent on a much better GPU for a PC, IMHO.
It’s cool if you like consoles! They still have a specific allure, so I get it.
When I see my Dr. or when I talk to other engineers?
And in many ways, that is the way engineers should speak to other engineers when analyzing a problem.
If two or more people can actually share a common goal of finding the best solution, everyone involved should be making sure that no time is wasted chasing poor solutions. This not only takes the ability to be direct to someone else, but it also requires that you can parse what others are telling you.
If someone makes something personal or takes something personal, they need a break. Go take a short walk or something. (Linus is a different sort of creature though. I get it.)
TBH, this is part of the reason I chose my doctor (GP). She is extremely direct when problem solving and has no problems theory-crafting out loud. Sure, we are social to a degree, but we share many of the same professional mannerisms. (We had a short discussion on that topic the other day, actually. I just made her job easier because I give zero fucks about being judged for any of my personal health issues.)
Now she is doing this for the lulz, or something.
This kind of skill might help developers build AI agents that identify buttons or fields on a webpage to handle tasks like making a reservation at a restaurant.
… to improve efficiency of click farms and to bypass captchas.
What if Bethesda doesn’t pay Origin though?
That’s exactly what a bot would say.
(thinks out loud…)
If you could force different speeds and different voltages, you could make some guesses as to what the cable might support.
USB packets use CRC checks, so a bad checksum may indicate a speed or physical problem. (Besides stating the obvious, my point is that doing strict checks for each USB mode gives CRC more value.)
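For anyone who wants to poke at the checksum idea, here's a minimal sketch of the CRC-16 that USB data packets use (polynomial 0x8005, bit-reflected so 0xA001 in code, init and final XOR both 0xFFFF). The check value in the comment is the standard catalog value for this CRC variant:

```python
def crc16_usb(data: bytes) -> int:
    """CRC-16/USB: poly 0x8005 (reflected 0xA001), init 0xFFFF, final XOR 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                # LSB set: shift right and XOR in the reflected polynomial
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc ^ 0xFFFF

# Standard catalog check value for CRC-16/USB over the ASCII string "123456789"
assert crc16_usb(b"123456789") == 0xB4C8
```

Real controllers do this in silicon per packet, of course, but recomputing it host-side is one way to sanity-check what a capture tool reports when a link is flaky.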
I just looked over the source code for libusb (like I knew what I was looking for, or something) and it seems that some of the driver(?) components hook really deep into the kernel. There might be a way to test specific parts of any type of handshake (for dataflow or voltage negotiation) to isolate specific wires that are bad by the process of elimination.
I think my point is that a top-down approach is likely possible, but it’s probabilistic.
Cable testers can bypass all of the standard driver and USB negotiation bullshit before anything else. I would imagine building a device to manually control when and how the connections are made is much easier than fighting for low level device control on systems like Windows, macOS and Android.
I watched through Day of Honor a couple of times today, but it was kinda choppy for me since I had to work.
I just want to clarify "give herself up": do you mean she is willing to become part of the Voyager "collective" and put aside her need to return to the Borg?
If my above assumption is correct, then yes. She is growing exponentially personality-wise, but there are significant challenges in doing so.
Personally, I have been around engineers my entire life. Some people I know could rattle on for hours over something like P vs NP even if they just learned about it a few hours ago. Put that same person in a complex social environment and they are absolutely clueless. It's similar to Seven.
Assuming I didn’t know anything about her timeline after Day of Honor, my guess would have been it would take years for her to learn how to operate in a complex structure like we are accustomed to. Janeway seems bright enough to understand that as well. So yeah, it would be a very long time before she could make the kinds of decisions we take for granted and Janeway would have to do that for her like a parent.
Fast forward a bit to Picard, you can see how long it took for her character to develop into something that didn’t resemble a robot. (I am willfully excluding some later episodes of Voyager that were kind of odd, btw.)
Standby. I remember the episode but not with enough detail to discuss. (I'll get it rewatched now.)
(New reply)
Really? That was your takeaway? ROFL!
It was totally fine. Borg implants or not, she was still human. She also didn’t have a choice about becoming Borg at such a young age. When her connection was cut with the collective, she basically became a child again making her Janeway’s responsibility. (That was close to Janeway’s logic I believe, and I agree with it. It was a human decision for another human who was incapable of making decisions.)
The biggest thing is that Seven has already signed a contract with UPN, so she was kinda stuck for a few episodes anyway. Janeway knew this, so after thinking about it over a 50 gallon drum of coffee and a few packs of menthol Kools, she decided to just run with it and make it dramatic. (The Borg attorneys failed to overturn the terms of the contract even after several weeks of absolutely phenomenal work.)
I am curious as to why they would offload any AI tasks to another chip? I just did a super quick search for upscaling models on GitHub (https://github.com/marcan/cl-waifu2x/tree/master/models) and they are tiny as far as AI models go.
It's the rendering bit that takes all the complex maths, and if that is reduced, that would leave plenty of room for running a baby AI. Granted, the method I linked to was only doing 29k pixels per second, but they said they weren't GPU optimized. (FSR4 is going to be fully GPU optimized, I am sure of it.)
If the rendered image is only 85% of a 4K image, that's ~1.2 million pixels that need to be computed, and it still seems plausible to keep everything on the GPU.
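The back-of-envelope math checks out, assuming UHD 4K (3840×2160) and that "85%" means 85% of the pixels are natively rendered:

```python
# UHD 4K frame dimensions
WIDTH, HEIGHT = 3840, 2160
total = WIDTH * HEIGHT              # 8,294,400 pixels per frame
rendered_fraction = 0.85            # assumption: 85% of pixels rendered natively

# Pixels the upscaler would have to fill in per frame
missing = total * (1 - rendered_fraction)
print(f"{total=:,} pixels, upscaler fills ~{missing:,.0f} per frame")
```

That works out to roughly 1.24 million pixels per frame, which matches the ~1.2 million figure above. At 60 fps it's ~75 million pixels per second to synthesize, which is why a GPU-optimized model matters so much more than the 29k px/s CPU path.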
With all of that blurted out, is FSR4 AI going to be offloaded to something else? It seems like there would be significant technical challenges in creating another data bus that would also have to sync with memory and the GPU for offloading AI compute at speeds that didn't risk creating additional lag. (I am just hypothesizing, btw.)
Employers figured out years ago that caffeine has excellent ROI for productivity. (Amphetamines are probably a close second, but we won’t talk about that right now.)
For Intel to cut basic morale boosters was just pure silliness.