It is predicted that one day soon computerized artificial therapists (ATs) will treat human mental healthcare patients. Such ATs are likely to function adequately within an if-then drug-allocation paradigm: if condition »A« exists, then prescribe medication »B«. But it seems unlikely that programmers will ever produce technology with the insightful wisdom and sensitive empathy of a caring human therapist. The many social, legal, logical, metaphysical, and ethical concerns raised by the idea of conscious machinery are almost completely ignored by researchers in artificial intelligence (AI). Before an accurately functioning mental healthcare computer programme can be written, coders must first clarify the currently vague and ambiguous diagnostic criteria for psychiatry’s many so-called »mental illnesses«. To improve the practice of psychotherapy, contemporary clinicians would do better to reduce their reliance on both »smart« technology and psychoactive medications, and instead increase their use of philosophy in mental healthcare.
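As a purely illustrative sketch of the if-then drug-allocation paradigm described above, the following Python fragment shows how such a rule table might be encoded. The condition and medication names are hypothetical placeholders, not clinical recommendations; the point is that the entire »intelligence« of the paradigm reduces to a mechanical lookup.

```python
# Illustrative sketch of an if-then drug-allocation rule table.
# Condition and medication names are hypothetical placeholders,
# not clinical recommendations.

ALLOCATION_RULES: dict[str, str] = {
    "condition_A": "medication_B",
    "condition_C": "medication_D",
}

def allocate(condition: str) -> str | None:
    """Return the medication mapped to a diagnosed condition, if any.

    This lookup captures the whole paradigm: a rule fires or it does
    not, with no insight, empathy, or clinical judgement involved.
    """
    return ALLOCATION_RULES.get(condition)

if __name__ == "__main__":
    print(allocate("condition_A"))  # -> medication_B
    print(allocate("unlisted"))     # -> None (no rule fires)
```

Note that such a table presupposes exactly what the abstract argues is missing: unambiguous diagnostic criteria by which »condition_A« could be reliably identified in the first place.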