Discussion about this post

Dharma Debate:

Ultimately, AI ends up being no different from working with any human in that regard. Human output is full of errors too, and it still demands a decision at the end of the day.

The issue was always ideology and the users. Users too lazy to think wanted AI to think for them; we should wonder what is wrong with people so determined to avoid thinking.

Human reasoning falters for the same reason the AI struggles: factoring in useless information, like identity, leads it to the same prediction errors as anyone with a neoliberal childhood.

People never believe me when I say it, but the links are there. It's a side effect of fascism: a willingness to hand decision-making over to an authority because reasoning is too much work.
