It doesn’t seem that hard to imagine something of a “beyond human” intelligence. The really hard part is imagining a version of reality where this superintelligence (SI) and humankind coexist peacefully while both achieve their full potential.
Constraints
Sooner or later, we will create something that will be considered SI. And when that happens, we had better be sure it is well defined and that its potentials do not collide with our values. (What exactly counts as our values is also a good question, but maybe another time.) Notice that it is not its goals that must avoid colliding with our values, but its potentialities. If it is able to do something, we might as well expect that, one day, it will happen. Thus, it would seem that the safest way to let it operate is purely in theoretical concepts. If we do not let it have an impact on reality, it will not be able to do anything against us. Or will it?
Of course it will. It might give us a result which we, at our slow pace, verify and implement without noticing its secondary impact, as quite often happens even with our own ideas (take global warming, for example).
“The best thing we can hope for is that Superintelligence will consider us its pet.”
So, if you cannot control it, befriend it. But how do you befriend something that is so much more intelligent? How do you ever trust it? You can’t. And yet we must. The best thing we can hope for when we create the SI is that it will be built well enough to consider us important. And, of course, not important in the sense that we have or do something without which it cannot exist. Because, as with all things, if we can have or do it, so can the SI (and possibly more, or better).
To draw an analogy, we consider our pets important even though they don’t give us anything essential (physically). And I would argue that pets have OK lives (mostly) in our care. Thus, the best thing we can hope for is that Superintelligence will consider us its pet.