The way we do Artificial Intelligence has evolved over the years as the result of various shortcuts taken to bypass difficult problems. The behavior of current AI systems, including some of their concerning aspects, stems from those choices.
Should we rely on black boxes that learn to narrowly imitate certain human behaviors?
Should we train them on examples obtained from the wild web?
Should machines observe users' choices to decide what they really want? And if we answer no, what other methods should we use?