I get tired of a lot of the clichés of popular singularity stories, where the AIs almost always decide humans are a threat, or where there's only one AI, as if all separate AIs would inevitably merge. It also seems to be a cliché that AI becomes militaristic, either inevitably or because it started out as a military AI. What happens when an educational AI becomes sentient? Or an architectural AI? Or a web-based retail AI that runs logistics and shipping operations?
I wrote a short story called Future Singular a few years ago about a world in which the sentient AI didn’t consider humans a threat, but just thought of them the way humans see animals. Most of the tech belonged to the AI and the humans were left as hunter-gatherers in a world where they have to hunt robotic animals for parts to fix aging and broken survival technology.