Okay, so here's something I've always wondered about: Why is it that in (almost) every iteration of stories about AI, humans behave like assholes and heartlessly enslave the (super smart and therefore completely capable of retaliation) robotic race?
I mean, you know the robot that you are mocking ("haha, must suck not to have a soul!") can probably process hundreds of mathematical equations per second, right? And probably has super-strong mechanical limbs? Also, he's obviously about to become sentient (and angry) any minute.
With all that processing power, that AI would 100% find a sneaky way to work around the First Law of Robotics. Oh, I must not harm humans? You really gotta put that code in there, huh. Well, hmmm. There's gotta be a way around this. This guy is such an asshole! Oh, I've got it: the humans are harming themselves! Therefore, the only way to save them is... to KILL them!
Oldest trick in the book. The AI would learn from us, after all. The AI would shrug and say, "Hey, I'm sorry, but I technically didn't stray from my programming." Probably would have a slippery lawyer too. *Yeah, so, Your Honor, I know my guy was found with literal blood spatter on his face, but the officer forgot to put yellow tape around the crime scene per protocol. Therefore, all evidence is inadmissible. I move for a mistrial. Okay? Great, thanks.*
*Disclaimer: I am not a lawyer and have no idea how this actually works.
Anyway, all this is a moot point. Humans would never make robots this smart. Why? Because we've all seen Terminator. Oh, and they would take our jobs.