Exactness. Close to impossible for a human, but ridiculously easy for a machine. I would say it is a virtue, and I praise anyone who strives for it. Being exact, mathematically exact, makes communication, contrary to popular belief, a lot easier. People can reach understanding in a single statement, without the need for additional explanation and without questions.
This is the main thing I love about mathematics: the purity of it. It is also why I am not quite a fan of physics, where being exact degrades into a “reasonable rounding” of the result. And from there it is only a short path to the real world, where nothing is exact and things can only be highly likely.
While that uncertainty is what makes life bearable (people can still hope that their dreams will come true), it makes communication harder. And while I could hardly expect everyone to speak in clear, predefined terms, in the scientific community this is a necessity.
And to get to my main point: the term AI is not well defined. It is, to some extent, not defined at all. Every company is now waving it around, and it has become a meaningless buzzword. And if you do find a definition, it is wordy and sometimes unclear.
What I would like is a proper, strict definition of what intelligence is, so that we could define AI using it. I believe the definition could be something like: an artificial entity capable of abstraction and of drawing logical conclusions. Together with memory, which is a separate entity, this definition is a superset of the more complex definitions I have found. Abstraction is not typically mentioned, but I believe it is a cornerstone of intelligence: the ability to see the bigger picture, to think in relations.
Earlier in this “series” I mentioned that artificial intelligence is not the thing we should seek if we prefer to survive, and that something called artificial emotion should come into play. That would be a third entity working together with memory and AI, creating the trinity of the SI.