I cannot stay silent anymore. One more LinkedIn post about the need to implement safe AI and I will burst. Let me logically explain to you why so-called Safe AI will not happen, no matter how hard anyone tries.
“Our goal at the Safe AI Society is to broaden awareness about the amazing progress made in Artificial Intelligence (AI), as well as the potential risk to all of us if not developed thoughtfully.” (Safe AI Society landing page)
This is just one example of the idealist vision that human nature is so great we are able to write code without bugs. Let me explain. There are different kinds of bugs. There is the bug that is discoverable before the code is even run. Then there is the bug that shows itself at runtime. Both of these are quite easy to find compared to the third kind of bug.
That is the bug nobody knows about. No one can anticipate it. It is a mistake somewhere along the way that propagates and amplifies itself until it becomes noticeable. It is the lack of Black people in a training dataset; it is the one port left open by mistake, letting an attacker in.
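To make the third kind of bug concrete, here is a hypothetical sketch (the function and group names are invented for illustration): a data-sampling routine that compiles fine, runs fine, raises no error, and yet silently excludes an entire group from the training set.

```python
# A "third kind" of bug: no compile-time error, no runtime crash --
# the output is just quietly wrong, and the skew propagates downstream.

def sample_training_faces(records, per_group=100):
    """Take up to `per_group` records from each demographic group.

    The bug: any group missing from the hard-coded `quotas` dict gets a
    quota of 0 and is silently skipped, so it never enters the dataset.
    """
    quotas = {"group_a": per_group, "group_b": per_group}  # group_c forgotten
    sample = []
    for record in records:
        if quotas.get(record["group"], 0) > 0:
            sample.append(record)
            quotas[record["group"]] -= 1
    return sample

records = (
    [{"group": "group_a"}] * 100
    + [{"group": "group_b"}] * 100
    + [{"group": "group_c"}] * 100
)
sample = sample_training_faces(records)
print(sorted({r["group"] for r in sample}))  # group_c never appears
```

Nothing here would trip a type checker or a crash report; only someone auditing the resulting dataset would notice the missing group, likely long after the model was trained on it.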
Why do you think Windows needs updates all the time? No (big enough) software is safe. How dare anyone call for safe AI, when it is pretty obvious that an AI will be a very complicated program? It will most likely understand human-readable data, and thus be able to read the internet. What conclusion will it draw after that? None of us should even dare to pretend to know the answer.
Call me a pessimist, I don’t care. There is no point in trying to create safe AI. I suppose there is nothing wrong with being “thoughtful” about it, as the Safe AI Society suggests, but that is an empty statement. People should be thoughtful about everything.
Take DeepMind and their famous AI Safety Gridworlds: they didn’t achieve anything. Even if every problem they pose is solved, there will always be unsolved issues, some we might not even know about. And even if we let multiple AI agents supervise each other, they could reach an agreement among themselves without humans noticing. This points to the only conclusion: nothing will ever ensure that an AI is safe.
Thus, if anyone is actually trying to make a safe AI, they should stop. Stop creating a “safe AI” that will inevitably prove to be unsafe. All the rest might as well stop pretending. But I leave that to each and every one of you…