My wife and I are rewatching The Next Generation and just finished Measure of a Man, the episode in season 2 in which Data’s personhood is legally debated and his life hangs in the balance.
I genuinely found this episode infuriating in its stupidity. It’s the first episode we skipped even a little bit. It was like nails on a chalkboard.
There are oodles of legal precedent that Data is a person. He was allowed to apply to Starfleet, graduated, became an officer, and rose to the rank of Lt. Commander with all the responsibilities and privileges thereof.
Comparing him to a computer, with the judge advocate general just shrugging and going to trial over it, is completely idiotic. There are literal years and years of precedent that he's an officer.
The problem is compounded because Picard can't make the obvious legal argument and is therefore stuck philosophizing in a courtroom. That's all well and good, but it kind of comes down to whether or not Data has a soul, and that's not a legal argument.
The whole thing is so unbelievably ludicrous it just made me angrier and angrier. It wasn't the high-minded, humanistic future I've come to know and love; it was a kangaroo court where reason and precedent took a backseat to feeling and belief.
I genuinely hated it.
To my surprise, in looking it up, I discovered it's considered one of the high-water marks of the entire show. It feels like I'm taking crazy pills.
I think the concept is still high-minded even if they ignored previous continuity for the sake of having the argument in the first place.
The antagonist is also a character with little experience actually working with Data, so his ignorance is to be expected. I'm also not really sure how allowing him to join Starfleet, hold rank, and whatnot says anything about his humanity.
I mean, even if they considered him to be nothing more than a machine, having him go through the Academy and gain real experience doing his job is a good test of the machine's ability to pretend to be human. It doesn't necessarily mean Starfleet wouldn't do the same for something like a non-android, AI-powered robot that nobody would consider human.
I don't think the point is even "are androids human?" The point was "what makes a human human?" Our real-world definition is based more on emotion than rationality; there is a certain je ne sais quoi about what makes us different from other animals, and opinions differ wildly about it.