-
I argue here that consciousness can be engineered. The claim that functional consciousness can be engineered has been persuasively put forth with regard to first-person functional consciousness; robots, for instance, can recognize colors, though there is still much debate about the details of this sort of consciousness. Such consciousness has now become one of the meanings of the term phenomenal consciousness (e.g., as used by Franklin and Baars). Yet I extend the argument beyond the tradition of behaviorist or functional reductive views on consciousness that still predominates within cognitive science. If Nagel-Chalmers-Block-style non-reductive naturalism about first-person consciousness (h-consciousness) holds true, then eventually we should be able to understand how such consciousness operates and how it gets produced (this is not the same as bridging the explanatory gap or solving Chalmers's hard problem of consciousness). If so, the consciousness it involves can in principle be engineered.
-
AI can think, although we need to clarify the definition of thinking. It is cognitive, though we need more clarity on cognition. Definitions of consciousness are so diversified that it is not clear whether present-level AI can be conscious; this is primarily for definitional reasons. Addressing this requires distinguishing four definitional clusters: functional consciousness, access consciousness, phenomenal consciousness, and hard consciousness. Interestingly, phenomenal consciousness may be understood as first-person functional consciousness, as well as non-reductive phenomenal consciousness the way Ned Block intended [1]. The latter assumes non-reducible experiences or qualia, which is how Dave Chalmers defines the subject matter of the so-called Hard Problem of Consciousness [2]. On the contrary, I posit that the Hard Problem should not be seen as the problem of phenomenal experiences, since those are just objects in the world (specifically, in our mind). What is special in non-reductive consciousness is not its (phenomenal) content, but its epistemic basis (the carrier-wave of phenomenal qualia), often called the locus of consciousness [3]. It should be understood through the notion of 'subject that is not an object' [4]. This requires a complementary ontology of subject and object [5, 6, 4]. Reductionism is justified in the context of objects, including the experiences (phenomena), but not in the realm of pure subjectivity; such subjectivity is relevant for the epistemic co-constitution of reality, as it is for Husserl and Fichte [7, 8]. This is less so for Kant, for whom the subject was active, hence a mechanism, and mechanisms are all objects [9]. Pure epistemicity is hard to grasp; it transpires in second-person relationships with other conscious beings [10] or monads [11, 12].
If Artificial General Intelligence (AGI) is to dwell in the world of meaningful existences, not just their shadows, as the case of the Church-Turing Lovers highlights [13], it requires full epistemic subjectivity, meeting the standards of the Engineering Thesis in Machine Consciousness [14, 15].
-
The claim adopted here, upon examination of its different versions, is that machines can be conscious. We distinguish three kinds of consciousness: functional, phenomenal, and hard consciousness (f-, p-, and h-consciousness). Robots are functionally conscious already. There is also a clear project in AI on how to make computers phenomenally conscious, though criteria differ. Discussion of p-consciousness is clouded by the lack of clarity on its two versions: (1) first-person functional elements (here, p-consciousness), and (2) non-functional elements (h-consciousness). I argue that: (1) There are sufficient reasons to adopt h-consciousness and move forward with discussion of its practical applications; this does not have anti-naturalistic implications. (2) A naturalistic account of h-consciousness should be expected, in principle; in neuroscience we are some way towards formulating such an account. (3) Detailed analysis of the notion of consciousness is needed to clearly distinguish p- and h-consciousness. This refers to the notion of a subject that is not an object and the complementarity of the subjective and objective perspectives. (4) If we can understand the exact mechanism that produces h-consciousness, we should be able to engineer it. (5) H-consciousness is probably not a computational process (it is more like a liver function). Machines can, in principle, be functionally, phenomenally, and h-conscious; all those processes are naturalistic. This is the engineering thesis on machine consciousness formulated within non-reductive naturalism.
-
Sloman criticizes all existing attempts to define machine consciousness for being overly one-sided. He argues that such a definition is not only unattainable but also unnecessary. The critique is well taken in part; yet, whatever his intended aims, by not acknowledging the non-reductive aspects of consciousness, Sloman in fact sides with the reductivist view.