Distinguishing Machine Learning from Human Learning

MAIN HIGHLIGHTS

  • Artificial Intelligence still significantly diverges from human intelligence.
  • Moral development in human children is a gradual, staged process; AI undergoes no comparable development.
  • Human children’s moral education is influenced by parental and teacher guidance, which AI lacks.
  • As we strive to make AI more human-like, it becomes imperative to instill ethical principles in the next generation of AI.

Amidst the extensive media attention garnered by Artificial Intelligence (AI), it’s crucial to recognize that the intelligence exhibited by these smart-bots fundamentally differs from human IQ. While AI can process vast volumes of data and engage in simulated conversational dialogues, these bots have not yet achieved the intricate, nuanced, and multifaceted intelligences characteristic of the human mind.

AI bots can excel at chess, yet they remain incapable of authentically conveying their emotions associated with victory or defeat. Even in simulated conversations driven by algorithms, these chatbots lack the capacity for genuine human insight or empathy. It is perhaps this limitation that fuels some individuals’ apprehension regarding AI, leaving them uncertain about whether AI will be our salvation or bring about our downfall.

Human development occurs in distinct stages, often with guidance from others.

To impart values and ethics to AI, programmers may seek to emulate the way children acquire and form concepts of right and wrong. Child development proceeds in stages, punctuated by remarkable cognitive leaps and periods of rapid growth. The journey from childlike thinking to adult-like cognition, emotional intelligence, theory of mind, and metacognition is a process that spans years.

Crucially, humans learn within the context of familial, educational, and social environments where parents, teachers, peers, and others adapt their support to each individual child’s level and capabilities (a process known as scaffolding). Can we reasonably expect AI to think like humans or eventually display empathy when they are not inherently programmed to learn progressively, in stages, with the guidance that human children receive? Is it possible for AI to acquire values, empathy, or develop moral frameworks unless they are thoughtfully guided by others to contemplate the concepts of “right vs. wrong” in a manner akin to human children?

Moreover, humans possess a distinct innate curiosity. Children consistently hunger for more knowledge and are driven to explore and comprehend the world and themselves. Hence, merely instructing machines to learn is insufficient. AI must also be infused with an inherent curiosity – not merely a desire for data, but something resembling the biological drive of a human child to comprehend, organize, and adapt. Programmers are currently working with deep-learning models, striving to enhance AI using algorithms inspired by human neurocognition.

If AI machines are to ever possess ethics, empathy, conscience, or moral values, they must evolve into sophisticated moral entities autonomously. Empathy, kindness, and compassion are not innate; they develop through life experience. It’s plausible that AI, too, needs to progress towards higher moral reasoning through gradual experiences and guidance, akin to the human journey.

To nurture ethical AI, machine learners must undergo gradual development, guided by adults and ethicists, much as parents or teachers guide a child’s moral growth. This process is time-intensive. The forthcoming generation of AI necessitates training extending beyond linguistics and data synthesis. We must educate AI to transcend the confines of mere linguistic rules and syntax, enabling the next generation to discern between right and wrong. Commencing with Asimov’s Zeroth Law of robotics, which states, “A robot (or AI) may not harm humanity, or, by inaction, allow humanity to come to harm,” can we perhaps cultivate AI capable of autonomous thought beyond rule adherence?

Is it feasible to engineer a “post-conventional” AI, and is it desirable?

In the course of human development, individuals attain higher-level “adult” moral reasoning, with the pinnacle, as described by psychologist Lawrence Kohlberg, being “post-conventional” thinking. This implies that advanced moral reasoning transcends mere adherence to local laws (conventions) and delves into the discovery of universal ethical principles to guide one’s actions.

The underlying question is: Do we desire AI to evolve into sentient, post-conventional, independent thinkers capable of surpassing established rules? Such a prospect may be daunting. As we endeavour to construct increasingly complex and human-like machines, we must contemplate how future programmers will equip the next generation of AI with emotional intelligence, empathy, and ethical thought and conduct.
