It seems like a two-party system is developing in the AI sphere: one camp argues for purposefully imbuing systems with common sense, while the other suggests that the cognitive abilities of human infants will eventually appear spontaneously in these models. I'm not quite sure how that last part works yet. I think the big difference between the two camps is timing: infants seem to have these abilities innately, while current systems can only learn something similar after lots and lots of training.
I'm particularly interested in the idea of adding a sense of cost-benefit analysis to AI systems. I imagine this is the function of the reward function in reinforcement learning, or at least the concept behind it. A properly tuned and aligned cost-benefit analysis would likely avoid the strawberry-fields-forever doomsday scenario.
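To make the intuition concrete, here's a toy sketch (not any real system's API; all the names and numbers are hypothetical) of how a cost term could be folded into a reinforcement-learning-style reward signal, so that a plan with side effects scores worse than a gentler one even if it harvests more:

```python
def naive_reward(strawberries_picked: float) -> float:
    """Reward only the objective: more strawberries is always better."""
    return strawberries_picked

def cost_benefit_reward(strawberries_picked: float,
                        side_effects: float,
                        cost_weight: float = 2.0) -> float:
    """Benefit minus a weighted cost term for collateral damage
    (trampled plants, resources consumed, and so on)."""
    return strawberries_picked - cost_weight * side_effects

# Two hypothetical plans an agent might compare:
# plan A: modest harvest, minimal disruption
# plan B: maximal harvest, paves over the field (strawberry fields forever)
plan_a = {"strawberries_picked": 50.0, "side_effects": 1.0}
plan_b = {"strawberries_picked": 100.0, "side_effects": 40.0}

# The naive maximizer prefers plan B (100 > 50)...
assert naive_reward(plan_b["strawberries_picked"]) > naive_reward(plan_a["strawberries_picked"])

# ...while the cost-benefit version prefers plan A (48 > 20).
assert cost_benefit_reward(**plan_a) > cost_benefit_reward(**plan_b)
```

Of course, the hard part isn't subtracting a cost term; it's choosing `cost_weight` and measuring `side_effects` so that they actually track what we care about, which is the alignment problem in miniature.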