
The debate over whether artificial intelligence (AI) possesses any degree of consciousness has been reignited by young thinkers like 15-year-old Benjamin Qin Muji, who argues in a South China Morning Post article that AI’s ability to process information and synthesize ideas suggests a form of awareness. I strongly disagree. AI does not have consciousness—not even a sliver of it. This stance is grounded in the etymology of the word “consciousness” and the undeniable fact that AI is entirely dependent on human input, rendering it a sophisticated tool rather than a sentient entity.
The word “conscious” (and later “consciousness”) emerged in the 17th century, derived from the Latin conscire, meaning “to know together.” Initially, it carried connotations of being “aware of wrongdoing” or “having joint knowledge with another,” emphasizing a shared, relational awareness. In the late 17th century, the philosopher John Locke expanded its meaning to include awareness of one’s own mental state—a hallmark of self-reflection. At its core, then, consciousness involves both shared knowledge and self-awareness, qualities that require subjective experience and intentionality. AI, by contrast, exhibits neither.
AI cannot be said to possess consciousness because it operates solely on data and prompts provided by humans. Without human intervention, AI is nothing but a “showpiece”—an inert system incapable of independent thought or action. Its outputs, no matter how impressive, are merely extrapolations from pre-existing data, not original ideas born from self-aware reflection. The very essence of consciousness—shared knowledge and self-understanding—is absent in AI, which functions as a mirror of human input rather than a mind of its own.
Consider AI in healthcare. Systems used to analyze medical images can detect anomalies with remarkable accuracy, but they don’t understand the implications of their findings. They apply statistical patterns learned from training data, devoid of context or empathy. An AI might identify a tumor, but it doesn’t grasp the gravity of a cancer diagnosis—it’s not conscious of the patient’s reality.
In creative fields, AI further reveals its shortcomings. Tools like AI art generators or writing assistants can produce paintings or essays, but these creations stem from patterns in training data, not inspiration or personal experience. An AI-generated poem might rhyme, but it lacks the emotional depth of a human poet wrestling with their feelings. It’s a mimic, not a creator.
Even in customer service, AI chatbots respond to queries based on scripted logic and data, not lived experience or emotional insight. They can’t relate to a frustrated customer—they simply process and reply. Moreover, AI’s biases, often mirroring the prejudices in its training data, underscore its inability to critically evaluate or question information—a key trait of conscious thought.
Until AI can demonstrate independent reflection, self-awareness, and the ability to share knowledge in a meaningful, relational way, it remains a tool—not a conscious entity. The etymology of consciousness points to a deeply human quality that AI, reliant on our data and prompts, cannot replicate. Benjamin Qin Muji’s musings may spark debate, but the evidence is clear: AI lacks the fundamental attributes of consciousness. It’s time we stop romanticizing algorithms and recognize them for what they are—powerful, but mindless, machines.