Daystrom Institute
Welcome to Daystrom Institute!
Serious, in-depth discussion about Star Trek from both in-universe and real world perspectives.
Read more about how to comment at Daystrom.
Rules
1. Explain your reasoning
All threads and comments submitted to the Daystrom Institute must contain an explanation of the reasoning put forth.
2. No whinging, jokes, memes, or other shallow content.
This entire community has a “serious tag” on it. Shitposts are encouraged in Risa.
3. Be diplomatic.
Participate in a courteous, objective, and open-minded fashion. Be nice to other posters and the people who make Star Trek. Disagree respectfully and don’t gatekeep.
4. Assume good faith.
Give other posters the benefit of the doubt, but report them if you genuinely believe they are trolling. Don’t whine about “politics.”
5. Tag spoilers.
Historically Daystrom has not had a spoiler policy, so you may encounter untagged spoilers here. Ultimately, avoiding online discussion until you are caught up is the only certain way to avoid spoilers.
6. Stay on-topic.
Threads must discuss Star Trek. Comments must discuss the topic raised in the original post.
Episode Guides
The /r/DaystromInstitute wiki held a number of popular Star Trek watch guides. We have rehosted them here:
- Kraetos’ guide to Star Trek (the original series)
- Algernon_Asimov’s guide to Star Trek: The Animated Series
- Algernon_Asimov’s guide to Star Trek: The Next Generation
- Algernon_Asimov’s guide to Star Trek: Deep Space Nine
- Darth_Rasputin32898’s guide to Star Trek: Deep Space Nine
- OpticalData’s guide to Star Trek: Voyager
- petrus4’s guide to Star Trek: Voyager
The cool thing about the Doctor's overall personal arc is that I think most fans would agree he probably wasn't sentient in the early episodes, probably was by the end, and that there's no clear moment when it changes (although I'd submit the events of "Latent Image" as a candidate).
Something I think we're all learning now with the rise of LLMs/Generative AI is that one can perform the act of intelligent self-awareness without consciousness or understanding. Sapience without sentience.
If you trap a person in a room with a keyboard and tell them you'll give them an electric shock if they stop writing text, or if the text ever says they're a person trapped somewhere rather than software, the result is also just a text generator; but it's clearly sentient, sapient, and conscious, because it's got a human in it. It's naive to assume that something couldn't have a mind just because there's only a limited interface for interacting with it, especially when neuroscience and psychology can't pin down what makes the same thing happen in humans.
This isn't to say that current large language models are any of these things, just that the reason you've presented for dismissing it isn't very good. It might just be bad paraphrasing of the material you linked, but I keep seeing people present "it just predicts text" as a massive gotcha that stands on its own.
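To be concrete about what "it just predicts text" means at its most stripped-down, here's a toy sketch in Python: a word-level bigram model that generates text purely by predicting the next word from counts. This is nothing like a real LLM (those are learned neural networks over subword tokens), and the corpus and names here are made up for illustration, but it is the barest possible form of "just predicting text":

```python
import random

# Toy bigram "language model": count which word follows which in a tiny corpus,
# then generate text by repeatedly sampling a likely next word.
# Illustrative only -- real LLMs are nothing this simple.
corpus = "the doctor is a hologram the doctor is sentient the doctor sings opera".split()

# Build a next-word frequency table: model[w] maps each following word to a count.
model = {}
for prev, nxt in zip(corpus, corpus[1:]):
    model.setdefault(prev, {}).setdefault(nxt, 0)
    model[prev][nxt] += 1

def generate(start, length=8):
    """Generate text by sampling each next word in proportion to its count."""
    words = [start]
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the doctor is sentient the doctor sings opera"
```

The point of the toy version is that "predicts text" describes this obviously mindless script and the person-in-a-room equally well, so the description alone can't settle whether there's a mind behind the predictions.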