I cannot silently take it anymore. One more LinkedIn post about the need to implement safe AI and I will burst. Let me logically explain to you why so-called Safe AI will not happen, no matter how hard anyone tries.
Exactness. Close to impossible for a human, but ridiculously easy for a machine. I would say it is a virtue, and I praise anyone who strives for it. Being exact, mathematically exact, makes communication, contrary to popular belief, a lot easier.
Why did (and do) humans ever bother to do anything? What proper reason might a person have to stay alive? Is it love? Is it having someone whom you do not want to sadden by killing yourself? Or someone you do not want to leave behind? Or maybe it is fear. Fear of…
Merriam-Webster defines intelligence as the ability to learn or understand or to deal with new or trying situations. That sounds like just the thing we would like superintelligence (SI) to do for us. But is it?
A compressed thought about the future of humanity in the hands of Superintelligence