Artificial consciousness[1] (AC), also known as machine consciousness (MC),[2][3] synthetic consciousness[4] or digital consciousness,[5] is the consciousness hypothesized to be possible in artificial intelligence.[6] It is also the corresponding field of study, which draws insights from philosophy of mind, philosophy of artificial intelligence, cognitive science and neuroscience. The same terminology can be used with the term "sentience" instead of "consciousness" when specifically designating phenomenal consciousness (the ability to feel qualia).[7]
Some scholars believe that consciousness is generated by the interoperation of various parts of the brain; these mechanisms are labeled the neural correlates of consciousness or NCC. Some further believe that constructing a system (e.g., a computer system) that can emulate this NCC interoperation would result in a system that is conscious.[8]
Philosophical views
As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of "raw feels", "what it is like" or qualia.[9]
Plausibility debate
Type-identity theorists and other skeptics hold the view that consciousness can only be realized in particular physical systems because consciousness has properties that necessarily depend on physical constitution.[10][11][12][13]
In his article "Artificial Consciousness: Utopia or Real Possibility," Giorgio Buttazzo says that a common objection to artificial consciousness is that "Working in a fully automated mode, they [the computers] cannot exhibit creativity, unreprogrammation (which means can no longer be reprogrammed, from rethinking), emotions, or free will. A computer, like a washing machine, is a slave operated by its components."[14]
For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness.[15]
Computational Foundation argument
One of the most explicit arguments for the plausibility of artificial sentience comes from David Chalmers. His proposal is roughly that the right kinds of computations are sufficient for the possession of a conscious mind. Chalmers proposes that a system implements a computation if "the causal structure of the system mirrors the formal structure of the computation", and that any system that implements certain computations is sentient.[16]
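Chalmers' implementation criterion can be illustrated with a toy sketch (the states, names, and mapping below are hypothetical, not from Chalmers): a physical system implements a formal computation when some mapping from physical states to computational states makes the two transition structures mirror each other.

```python
# Formal computation: a two-state toggle automaton.
formal_transitions = {"A": "B", "B": "A"}

# "Physical" system: four microstates whose dynamics coarse-grain to the toggle.
physical_transitions = {"p1": "p3", "p2": "p4", "p3": "p1", "p4": "p2"}

# Candidate mapping from physical states to computational states.
mapping = {"p1": "A", "p2": "A", "p3": "B", "p4": "B"}

def implements(physical, formal, mapping):
    """Chalmers-style mirroring check: mapping the successor of each
    physical state must equal the formal successor of the mapped state."""
    return all(mapping[physical[p]] == formal[mapping[p]] for p in physical)

print(implements(physical_transitions, formal_transitions, mapping))  # True
```

On this criterion many physical systems can implement the same abstract computation, which is exactly why the claim that certain computations suffice for sentience is substantive.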
The most controversial part of Chalmers' proposal is that mental properties are "organizationally invariant". Mental properties are of two kinds, psychological and phenomenological. Psychological properties, such as belief and perception, are those that are "characterized by their causal role". Aided by previous work,[17][18] he says that "[s]ystems with the same causal topology…will share their psychological properties".
Phenomenological properties, unlike psychological properties, are not definable in terms of their causal roles. Establishing that phenomenological properties are a consequence of a causal topology, therefore, requires argument. Chalmers provides his Dancing Qualia argument for this purpose.[19]
Chalmers begins by assuming that his principle of organizational invariance is false: that agents with identical causal organizations could have different experiences. He then asks us to conceive of changing one agent into the other by the replacement of parts (neural parts replaced by silicon, say) while preserving its causal organization. By hypothesis, the experience of the agent under transformation would change (as the parts were replaced), but there would be no change in causal topology and therefore no means whereby the agent could "notice" the shift in experience; Chalmers considers this state of affairs an implausible reductio ad absurdum, establishing that his principle of organizational invariance must almost certainly be true.
Critics of artificial sentience object that Chalmers begs the question in assuming that all mental properties and external connections are sufficiently captured by abstract causal organization.
Controversies
In 2022, Google engineer Blake Lemoine made a viral claim that Google's LaMDA chatbot was sentient. Lemoine supplied as evidence the chatbot's humanlike answers to many of his questions; however, the scientific community judged the chatbot's behavior to be a likely consequence of mimicry rather than machine sentience, and Lemoine's claim was widely ridiculed.[20] Philosopher Nick Bostrom said that he thinks LaMDA probably is not conscious, but asked "what grounds would a person have for being sure about it?" One would have to have access to unpublished information about LaMDA's architecture, understand how consciousness works, and then figure out how to map the philosophy onto the machine: "(In the absence of these steps), it seems like one should be maybe a little bit uncertain... there could well be other systems now, or in the relatively near future, that would start to satisfy the criteria."[21]
Testing
The most well-known method for testing machine intelligence is the Turing test. But when interpreted as purely observational, this test contradicts the philosophy-of-science principle of the theory-dependence of observation. It has also been suggested that Alan Turing's recommendation of imitating not an adult human consciousness, but a human child's consciousness, should be taken seriously.[22]
Qualia, or phenomenological consciousness, is an inherently first-person phenomenon. Although various systems may display behavioral signs correlated with functional consciousness, there is no conceivable way in which third-person tests can access first-person phenomenological features. Because of that, and because there is no empirical definition of sentience,[23] a test for the presence of sentience in AC may be impossible.
In 2014, Victor Argonov suggested a non-Turing test for machine sentience based on a machine's ability to produce philosophical judgments.[24] He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge of these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about those creatures' consciousness). However, this test can be used only to detect, not to refute, the existence of consciousness: a positive result proves that a machine is conscious, but a negative result proves nothing. For example, an absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.
Ethics
If it were suspected that a particular machine was conscious, its rights would become an ethical issue that would need to be assessed (e.g. what rights it would have under law). For example, a conscious computer that was owned and used as a tool, or as the central computer of a building or large machine, presents a particular ambiguity. Should laws be made for such a case? Consciousness would also require a legal definition in this particular case. Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though the topic has often been a theme in fiction (see below).
In 2021, German philosopher Thomas Metzinger argued for a global moratorium on synthetic phenomenology until 2050. Metzinger asserts that humans have a duty of care towards any sentient AIs they create, and that proceeding too fast risks creating an "explosion of artificial suffering".[25]
Aspects of consciousness considered necessary
Bernard Baars and others argue that there are various aspects of consciousness necessary for a machine to be artificially conscious.[26] The functions of consciousness suggested by Baars are Definition and Context Setting, Adaptation and Learning, Editing, Flagging and Debugging, Recruiting and Control, Prioritizing and Access-Control, Decision-making or Executive Function, Analogy-forming Function, Metacognitive and Self-monitoring Function, and Autoprogramming and Self-maintenance Function. Igor Aleksander suggested 12 principles for artificial consciousness:[27] The Brain is a State Machine, Inner Neuron Partitioning, Conscious and Unconscious States, Perceptual Learning and Memory, Prediction, The Awareness of Self, Representation of Meaning, Learning Utterances, Learning Language, Will, Instinct, and Emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive.
Awareness
Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of neuroscanning experiments on monkeys suggest that a process, not only a state or object, activates neurons. Awareness includes creating and testing alternative models of each process based on information received through the senses or imagined, and it is also useful for making predictions. Such modeling requires a lot of flexibility. Creating such a model includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.
There are at least three types of awareness:[28] agency awareness, goal awareness, and sensorimotor awareness, which may also be conscious or not. For example, in agency awareness, you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness, you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.
Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.[29]
Memory
Conscious events interact with memory systems in learning, rehearsal, and retrieval.[30] The IDA model[31] elucidates the role of consciousness in the updating of perceptual memory,[32] transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system.[33] In IDA, these two memories are implemented computationally using a modified version of Kanerva's sparse distributed memory architecture.[34]
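The sparse distributed memory mentioned above can be caricatured in a minimal sketch (scale and parameters are illustrative; IDA uses a modified version of the architecture): data are written to every hard location within a Hamming radius of the address, and read back by summing counters and thresholding, which allows exact retrieval from a noisy cue.

```python
import random

random.seed(0)
N, M, R = 64, 500, 27   # word length, hard locations, access radius (~0.42*N)

hard = [[random.randint(0, 1) for _ in range(N)] for _ in range(M)]
counters = [[0] * N for _ in range(M)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def active(addr):
    # hard locations whose address is within the access radius
    return [i for i in range(M) if hamming(hard[i], addr) <= R]

def write(addr, data):
    for i in active(addr):
        for j, bit in enumerate(data):
            counters[i][j] += 1 if bit else -1

def read(addr):
    sums = [0] * N
    for i in active(addr):
        for j in range(N):
            sums[j] += counters[i][j]
    return [1 if s > 0 else 0 for s in sums]

pattern = [random.randint(0, 1) for _ in range(N)]
write(pattern, pattern)            # autoassociative storage
noisy = pattern[:]
for j in random.sample(range(N), 5):
    noisy[j] ^= 1                  # corrupt 5 of 64 bits
recovered = read(noisy)            # with these parameters, typically exact
```

Because the pattern is smeared across many hard locations, a cue a few bits away still activates enough of the locations that were written to reconstruct the stored word by majority vote.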
Learning
Learning is also considered necessary for artificial consciousness. Per Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events.[26] Per Axel Cleeremans and Luis Jiménez, learning is defined as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".[35]
Anticipation
The ability to predict (or anticipate) foreseeable events is considered important for artificial intelligence by Igor Aleksander.[36] The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.
Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events.[36] An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur, or to take preemptive action to avert anticipated events. The implication is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and of predicted worlds, making it possible to demonstrate that it possesses artificial consciousness in the present and future, not only in the past. To do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chessboard, but also in novel environments that may change, to be executed only when appropriate to simulate and control the real world.
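The anticipation loop described above can be sketched as follows (the scenario, names, and hazard are invented for illustration): the agent runs candidate actions through an internal cause-effect model before acting, discarding any action whose predicted consequence is harmful.

```python
ACTIONS = {"left": -1, "stay": 0, "right": +1}
HAZARD = 3   # a position the agent should never occupy

def forward_model(position, action):
    """Internal simulation: predict the next position without acting."""
    return position + ACTIONS[action]

def choose_action(position, goal):
    # Preemptive avoidance: drop actions predicted to end in the hazard,
    # then rank the rest by predicted distance to the goal.
    safe = [a for a in ACTIONS if forward_model(position, a) != HAZARD]
    return min(safe, key=lambda a: abs(forward_model(position, a) - goal))

# At position 2, heading for 5: moving right is predicted to land on the
# hazard at 3, so the agent stays put rather than take the shortest path.
action = choose_action(2, 5)
```

The same structure generalizes: richer forward models (statistical, functional, cause-effect) replace the one-line `forward_model`, while the evaluate-then-select loop stays the same.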
Subjective experience
Subjective experiences or qualia are widely considered to be the hard problem of consciousness. Indeed, it is held to pose a challenge to physicalism, let alone computationalism.
Implementation proposals
Symbolic or hybrid
Intelligent Distribution Agent
Stan Franklin (1995, 2003) defines an autonomous agent as possessing functional consciousness when it is capable of several of the functions of consciousness as identified by Bernard Baars' Global Workspace Theory.[26][37] His brainchild IDA (Intelligent Distribution Agent) is a software implementation of GWT, which makes it functionally conscious by definition. IDA's task is to negotiate new assignments for sailors in the US Navy after they end a tour of duty, by matching each individual's skills and preferences with the Navy's needs. IDA interacts with Navy databases and communicates with the sailors via natural language e-mail dialog while obeying a large set of Navy policies. The IDA computational model was developed during 1996–2001 at Stan Franklin's "Conscious" Software Research Group at the University of Memphis. It "consists of approximately a quarter-million lines of Java code, and almost completely consumes the resources of a 2001 high-end workstation." It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." In IDA's top-down architecture, high-level cognitive functions are explicitly modeled.[38][39]
While IDA is functionally conscious by definition, Franklin does "not attribute phenomenal consciousness to his own 'conscious' software agent, IDA, in spite of her many human-like behaviours. This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that's how I do it' while watching IDA's internal and external actions as she performs her task." IDA has been extended to LIDA (Learning Intelligent Distribution Agent).
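The codelet-and-workspace design described above can be caricatured in a few lines (illustrative only; the actual IDA is roughly a quarter-million lines of Java, and the codelet names and activation values below are invented): codelets run as independent threads that post coalitions to a shared workspace, and the most active coalition wins the competition for a global "broadcast".

```python
import threading
import queue

workspace = queue.Queue()   # codelets post (activation, name, content) here

def codelet(name, activation, content):
    """A small, special-purpose process running in its own thread."""
    workspace.put((activation, name, content))

# Several competing codelets run concurrently and post to the workspace.
threads = [
    threading.Thread(target=codelet, args=("perceive", 0.4, "new e-mail")),
    threading.Thread(target=codelet, args=("recall", 0.9, "sailor prefers San Diego")),
    threading.Thread(target=codelet, args=("deliberate", 0.6, "billet available")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Drain the workspace and select the most active coalition for broadcast.
posts = []
while not workspace.empty():
    posts.append(workspace.get())
winner = max(posts)          # tuples compare by activation first
broadcast = winner[2]        # made globally available to all codelets
```

In Global Workspace Theory terms, the broadcast step is what makes the winning content "conscious" in the functional sense: it becomes available to every other process in the system.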
CLARION cognitive architecture
The CLARION cognitive architecture posits a two-level representation that explains the distinction between conscious and unconscious mental processes. CLARION has been successful in accounting for a variety of psychological data. A number of well-known skill learning tasks have been simulated using CLARION, spanning the spectrum from simple reactive skills to complex cognitive skills. The tasks include serial reaction time (SRT) tasks, artificial grammar learning (AGL) tasks, process control (PC) tasks, the categorical inference (CI) task, the alphabetical arithmetic (AA) task, and the Tower of Hanoi (TOH) task.[40] Among them, SRT, AGL, and PC are typical implicit learning tasks, highly relevant to the issue of consciousness, as they operationalize the notion of consciousness in the context of psychological experiments.
OpenCog
Ben Goertzel made an embodied AI through the open-source OpenCog project. The code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, done at the Hong Kong Polytechnic University.
Connectionist
Haikonen's cognitive architecture
Pentti Haikonen considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection."[41][42]
Haikonen is not alone in this process view of consciousness, or in the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of sufficient complexity; these views are shared by many.[43][44] A low-complexity implementation of the architecture proposed by Haikonen was reportedly not capable of AC, but did exhibit emotions as expected. Haikonen later updated and summarized his architecture.[45][46]
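Haikonen's architecture is built from artificial neurons rather than programs, but the core idea of distributed associative signal representation can be caricatured with a simple Hebbian associator (everything below, including the patterns and thresholds, is an invented illustration, not Haikonen's design): co-occurring signals strengthen connections, so a percept later evokes its associated internal signal, even from a partial cue.

```python
def train(pairs, n_in, n_out):
    """Hebbian learning: weights[i][j] grows whenever input bit i and
    output bit j are active at the same time."""
    w = [[0] * n_out for _ in range(n_in)]
    for x, y in pairs:
        for i in range(n_in):
            for j in range(n_out):
                w[i][j] += x[i] * y[j]
    return w

def recall(w, x, threshold):
    """Evoke the associated signal by summing weighted inputs and thresholding."""
    n_out = len(w[0])
    sums = [sum(w[i][j] * x[i] for i in range(len(x))) for j in range(n_out)]
    return [1 if s >= threshold else 0 for s in sums]

# Associate a "percept" pattern with an internal "label" signal.
percept = [1, 0, 1, 1]
label = [0, 1]
w = train([(percept, label)], 4, 2)
evoked = recall(w, percept, 3)      # full cue evokes the label
```

No stored program dictates the response; the behavior emerges from the learned connection weights, which is the bottom-up flavor Haikonen argues for.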
Shanahan's cognitive architecture
Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination").[47][2][3][48]
Takeno's self-awareness research
Self-awareness in robots is being investigated by Junichi Takeno at Meiji University in Japan.[49] Takeno asserts that he has developed a robot capable of discriminating between its self-image in a mirror and any other robot with an identical appearance.[50][51][52] Takeno asserts that he first contrived a computational module called a MoNAD, which has a self-aware function, and then constructed the artificial consciousness system by formulating the relationships between emotions, feelings and reason, connecting the modules in a hierarchy (Igarashi, Takeno 2007). Takeno completed a mirror-image cognition experiment using a robot equipped with the MoNAD system. Takeno proposed the Self-Body Theory, which states that "humans feel that their own mirror image is closer to themselves than an actual part of themselves." He holds that the most important point in developing artificial consciousness or clarifying human consciousness is the development of a function of self-awareness, and claims to have demonstrated physical and mathematical evidence for this in his thesis.[53] He also demonstrated that robots can study episodes in memory where the emotions were stimulated and use this experience to take predictive actions to prevent the recurrence of unpleasant emotions (Torigoe, Takeno 2009).
Impossible Minds: My Neurons, My Consciousness
Igor Aleksander, emeritus professor of Neural Systems Engineering at Imperial College, has extensively researched artificial neural networks and wrote in his 1996 book Impossible Minds: My Neurons, My Consciousness that the principles for creating a conscious machine already exist but that it would take forty years to train such a machine to understand language.[54] Whether this is true remains to be demonstrated and the basic principle stated in Impossible Minds—that the brain is a neural state machine—is open to doubt.[55]
Creativity Machine
Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI),[56][57][58] or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies.[59] He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain invent dubious significance to overall cortical activity.[60][61][62] Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to stream of consciousness.[61][63][64][65][66]
Attention schema theory
In 2011, Michael Graziano and Sabine Kastner published a paper titled "Human consciousness and its relationship to social neuroscience: A novel hypothesis", proposing a theory of consciousness as an attention schema.[67] Graziano went on to publish an expanded discussion of this theory in his book "Consciousness and the Social Brain".[8] This attention schema theory of consciousness, as he named it, proposes that the brain tracks attention to various sensory inputs by way of an attention schema, analogous to the well-studied body schema that tracks the spatial location of a person's body.[8] This relates to artificial consciousness by proposing a specific mechanism of information handling that produces what we allegedly experience and describe as consciousness, and which should be possible to duplicate in a machine using current technology. When the brain finds that person X is aware of thing Y, it is in effect modeling the state in which person X is applying an attentional enhancement to Y. In the attention schema theory, the same process can be applied to oneself: the brain tracks attention to various sensory inputs, and one's own awareness is a schematized model of one's attention. Graziano proposes specific locations in the brain for this process, and suggests that such awareness is a computed feature constructed by an expert system in the brain.
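The relationship between attention and its schema can be sketched in miniature (an invented illustration, not Graziano's model; the signals, gains, and schema fields are hypothetical): one sensory signal wins an attentional competition and is enhanced, while the schema records only a coarse, lossy description of that state.

```python
signals = {"vision": 0.7, "sound": 0.9, "touch": 0.2}

def attend(signals):
    # Bottom-up competition: the strongest signal captures attention.
    target = max(signals, key=signals.get)
    # Attentional enhancement: boost the winner, suppress the rest.
    enhanced = {k: v * (2.0 if k == target else 0.5)
                for k, v in signals.items()}
    # The attention schema is a simplified model of the attentional state,
    # not a copy of it: it records *what* is attended, not the mechanism.
    schema = {"attending_to": target, "confidence": "high"}
    return enhanced, schema

enhanced, schema = attend(signals)
```

On the theory, a system that consults such a schema would report "I am aware of the sound", because the schema, not the raw attentional machinery, is what its self-reports are built from.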
"Self-modeling"
Hod Lipson defines "self-modeling" as a necessary component of self-awareness or consciousness in robots. "Self-modeling" consists of a robot running an internal model or simulation of itself.[68][69]
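Lipson's published work (with Bongard and Zykov) evolves full physical self-simulations; the loop below is only a minimal caricature of the idea, with invented parameters: the robot keeps an internal model of its own body, compares the model's predictions against observed sensor data, and revises the model when they disagree, e.g. after damage changes the body.

```python
import math

def simulate(arm_length, angle_deg):
    """Internal self-simulation: predicted horizontal reach of the arm."""
    return arm_length * math.cos(math.radians(angle_deg))

def update_self_model(model, observed_reach, angle_deg, lr=0.5):
    """Compare the self-model's prediction with observation and revise it."""
    error = observed_reach - simulate(model["arm_length"], angle_deg)
    model["arm_length"] += lr * error / math.cos(math.radians(angle_deg))
    return abs(error)

model = {"arm_length": 1.0}   # the robot's belief about its own body
true_length = 0.8             # the actual body, e.g. after losing a segment

# Repeated prediction-error feedback pulls the self-model toward the true body.
for _ in range(10):
    update_self_model(model, simulate(true_length, 30), 30)
```

After ten iterations the modeled arm length has converged close to the true 0.8, so subsequent internal simulations again predict the robot's real behavior.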
In fiction
- In Arthur C. Clarke's The City and the Stars, Vanamonde is an artificial being based on quantum entanglement that was to become immensely powerful, but started knowing practically nothing, thus being similar to artificial consciousness.
See also
- General fields and theories
- Artificial intelligence
- Artificial general intelligence (AGI) – some consider AC a subfield of AGI research
- Intelligence explosion – what may happen when an AGI redesigns itself in iterative cycles
- Brain–computer interface – Direct communication pathway between an enhanced or wired brain and an external device
- Cognitive architecture – Blueprint for intelligent agents
- Computational philosophy – use of computational techniques in philosophy
- Computational theory of mind – Family of views in the philosophy of mind
- Consciousness in animals – Quality or state of self-awareness within an animal
- Simulated consciousness (science fiction) – Science fiction theme
- Hardware for artificial intelligence – Hardware specially designed and optimized for artificial intelligence
- Identity of indiscernibles – Impossibility for separate objects to have all their properties in common
- Mind uploading – Hypothetical process of digitally emulating a brain
- Neurotechnology – Technology that interfaces with the nervous system to monitor or modify neural function
- Philosophy of mind – Branch of philosophy
- Quantum cognition – application of quantum mechanics to cognitive phenomena
- Simulated reality – Hypothesis that reality could be a computer simulation
- Proposed concepts and implementations
- Attention schema theory
- ADS-AC (system)
- Brain waves and Turtle robot by William Grey Walter
- Conceptual space – conceptual prototype
- Copycat (cognitive architecture)
- Global workspace theory
- Greedy reductionism – form of reductionism that oversimplifies
- Hallucination (artificial intelligence) – Confident unjustified claim by AI
- Image schema – spatial patterns
- Kismet (robot)
- LIDA (cognitive architecture)
- Memory-prediction framework
- Omniscience
- Psi-theory
- Quantum mind
- Self-awareness
References
Citations
- ↑ Thaler, S. L. (1998). "The emerging intelligence and its critical look at us". Journal of Near-Death Studies. 17 (1): 21–29. doi:10.1023/A:1022990118714. S2CID 49573301.
- 1 2 Gamez 2008.
- 1 2 Reggia 2013.
- ↑ Smith, David Harris; Schillaci, Guido (2021). "Why Build a Robot With Artificial Consciousness? How to Begin? A Cross-Disciplinary Dialogue on the Design and Implementation of a Synthetic Model of Consciousness". Frontiers in Psychology. 12: 530560. doi:10.3389/fpsyg.2021.530560. ISSN 1664-1078. PMC 8096926. PMID 33967869.
- ↑ Elvidge, Jim (2018). Digital Consciousness: A Transformative Vision. John Hunt Publishing Limited. ISBN 978-1-78535-760-2.
- ↑ Chrisley, Ron (October 2008). "Philosophical foundations of artificial consciousness". Artificial Intelligence in Medicine. 44 (2): 119–137. doi:10.1016/j.artmed.2008.07.011. PMID 18818062.
- ↑ Institute, Sentience. "The Terminology of Artificial Sentience". Sentience Institute. Retrieved 2023-08-19.
- 1 2 3 Graziano 2013.
- ↑ Block, Ned (2010). "On a confusion about a function of consciousness". Behavioral and Brain Sciences. 18 (2): 227–247. doi:10.1017/S0140525X00038188. ISSN 1469-1825. S2CID 146168066.
- ↑ Block, Ned (1978). "Troubles for Functionalism". Minnesota Studies in the Philosophy of Science: 261–325.
- ↑ Bickle, John (2003). Philosophy and Neuroscience. Dordrecht: Springer Netherlands. doi:10.1007/978-94-010-0237-0. ISBN 978-1-4020-1302-7.
- ↑ Schlagel, R. H. (1999). "Why not artificial consciousness or thought?". Minds and Machines. 9 (1): 3–28. doi:10.1023/a:1008374714117. S2CID 28845966.
- ↑ Searle, J. R. (1980). "Minds, brains, and programs" (PDF). Behavioral and Brain Sciences. 3 (3): 417–457. doi:10.1017/s0140525x00005756. S2CID 55303721.
- ↑ Artificial consciousness: Utopia or real possibility? Buttazzo, Giorgio, July 2001, Computer, ISSN 0018-9162
- ↑ Putnam, Hilary (1967). The nature of mental states in Capitan and Merrill (eds.) Art, Mind and Religion. University of Pittsburgh Press.
- ↑ David J. Chalmers (2011). "A Computational Foundation for the Study of Cognition" (PDF). Journal of Cognitive Science. 12 (4): 325–359. doi:10.17791/JCS.2011.12.4.325. S2CID 248401010.
- ↑ Armstrong, D. M. (1968). Honderich, Ted (ed.). A Materialist Theory of the Mind. New York: Routledge.
- ↑ Lewis, David (1972). "Psychophysical and theoretical identifications". Australasian Journal of Philosophy. 50 (3): 249–258. doi:10.1080/00048407212341301. ISSN 0004-8402.
- ↑ Chalmers, David (1995). "Absent Qualia, Fading Qualia, Dancing Qualia". Retrieved 12 April 2016.
- ↑ "'I am, in fact, a person': can artificial intelligence ever be sentient?". the Guardian. 14 August 2022. Retrieved 5 January 2023.
- ↑ Leith, Sam (7 July 2022). "Nick Bostrom: How can we be certain a machine isn't conscious?". The Spectator.
- ↑ "Mapping the Landscape of Human-Level Artificial General Intelligence" (PDF). Archived from the original (PDF) on 2017-07-06. Retrieved 2012-07-05.
- ↑ "Consciousness". In Honderich T. The Oxford companion to philosophy. Oxford University Press. ISBN 978-0-19-926479-7
- ↑ Victor Argonov (2014). "Experimental Methods for Unraveling the Mind-body Problem: The Phenomenal Judgment Approach". Journal of Mind and Behavior. 35: 51–70.
- ↑ Metzinger, Thomas (2021). "Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology". Journal of Artificial Intelligence and Consciousness. 08: 43–66. doi:10.1142/S270507852150003X. S2CID 233176465.
- 1 2 3 Baars 1995.
- ↑ Aleksander, Igor (1995). "Artificial neuroconsciousness an update". In Mira, José; Sandoval, Francisco (eds.). From Natural to Artificial Neural Computation. Lecture Notes in Computer Science. Vol. 930. Berlin, Heidelberg: Springer. pp. 566–583. doi:10.1007/3-540-59497-3_224. ISBN 978-3-540-49288-7.
- ↑ Joëlle Proust in Neural Correlates of Consciousness, Thomas Metzinger, 2000, MIT, pages 307–324
- ↑ Christof Koch, The Quest for Consciousness, 2004, page 2 footnote 2
- ↑ Tulving, E. 1985. Memory and consciousness. Canadian Psychology 26:1–12
- ↑ Franklin, Stan, et al. "The role of consciousness in memory." Brains, Minds and Media 1.1 (2005): 38.
- ↑ Franklin, Stan. "Perceptual memory and learning: Recognizing, categorizing, and relating." Proc. Developmental Robotics AAAI Spring Symp. 2005.
- ↑ Shastri, L. 2002. Episodic memory and cortico-hippocampal interactions. Trends in Cognitive Sciences
- ↑ Kanerva, Pentti. Sparse distributed memory. MIT press, 1988.
- ↑ "Implicit Learning and Consciousness: An Empirical, Philosophical and Computational Consensus in the Making". Routledge & CRC Press. Retrieved 2023-06-22.
- 1 2 Aleksander 1995
- ↑ Baars, Bernard J. (2001). In the theater of consciousness: the workspace of the mind. New York Oxford: Oxford University Press. ISBN 978-0-19-510265-9.
- ↑ Franklin, Stan (1998). Artificial minds. A Bradford book (3rd print ed.). Cambridge, Mass.: MIT Press. ISBN 978-0-262-06178-0.
- ↑ Franklin, Stan (2003). "IDA: A Conscious Artefact". Machine Consciousness.
- ↑ (Sun 2002)
- ↑ Haikonen, Pentti O. (2003). The cognitive approach to conscious machines. Exeter: Imprint Academic. ISBN 978-0-907845-42-3.
- ↑ "Pentti Haikonen's architecture for conscious machines – Raúl Arrabales Moreno". 2019-09-08. Retrieved 2023-06-24.
- ↑ Freeman, Walter J. (2000). How brains make up their minds. Maps of the mind. New York Chichester, West Sussex: Columbia University Press. ISBN 978-0-231-12008-1.
- ↑ Cotterill, Rodney M J (2003). "CyberChild - A simulation test-bed for consciousness studies". Journal of Consciousness Studies. 10 (4–5): 31–45. ISSN 1355-8250.
- ↑ Haikonen, Pentti O.; Haikonen, Pentti Olavi Antero (2012). Consciousness and robot sentience. Series on machine consciousness. Singapore: World Scientific. ISBN 978-981-4407-15-1.
- ↑ Haikonen, Pentti O. (2019). Consciousness and robot sentience. Series on machine consciousness (2nd ed.). Singapore Hackensack, NJ London: World Scientific. ISBN 978-981-12-0504-0.
- ↑ Shanahan, Murray (2006). "A cognitive architecture that combines internal simulation with a global workspace". Consciousness and Cognition. 15 (2): 433–449. doi:10.1016/j.concog.2005.11.005. ISSN 1053-8100. PMID 16384715. S2CID 5437155.
- ↑ Haikonen, Pentti O.; Haikonen, Pentti Olavi Antero (2012). "chapter 20". Consciousness and robot sentience. Series on machine consciousness. Singapore: World Scientific. ISBN 978-981-4407-15-1.
- ↑ "Robot". Archived from the original on 3 July 2007. Retrieved 3 July 2007.
- ↑ "Takeno – Archive No..." Archived from the original on 7 November 2018. Retrieved 7 January 2010.
- ↑ The world first self-aware robot and The success of mirror image cognition, Takeno
- ↑ Takeno, Inaba & Suzuki 2005.
- ↑ A Robot Succeeds in 100% Mirror Image Cognition Archived 9 August 2017 at the Wayback Machine, Takeno, 2008
- ↑ Aleksander I (1996) Impossible Minds: My Neurons, My Consciousness, Imperial College Press ISBN 1-86094-036-6
- ↑ Wilson, RJ (1998). "review of Impossible Minds". Journal of Consciousness Studies. 5 (1): 115–6.
- ↑ Thaler, S.L., "Device for the autonomous generation of useful information"
- ↑ Marupaka, N.; Lyer, L.; Minai, A. (2012). "Connectivity and thought: The influence of semantic network structure in a neurodynamical model of thinking" (PDF). Neural Networks. 32: 147–158. doi:10.1016/j.neunet.2012.02.004. PMID 22397950. Archived from the original (PDF) on 2016-12-19. Retrieved 2015-05-22.
- ↑ Roque, R. and Barreira, A. (2011). "O Paradigma da "Máquina de Criatividade" e a Geração de Novidades em um Espaço Conceitual" [The "Creativity Machine" Paradigm and the Generation of Novelties in a Conceptual Space] (in Portuguese), 3º Seminário Interno de Cognição Artificial – SICA 2011 – FEEC – UNICAMP.
- ↑ Minati, Gianfranco; Vitiello, Giuseppe (2006). "Mistake Making Machines". Systemics of Emergence: Research and Development. pp. 67–78. doi:10.1007/0-387-28898-8_4. ISBN 978-0-387-28899-4.
- ↑ Thaler, S. L. (2013) The Creativity Machine Paradigm, Encyclopedia of Creativity, Invention, Innovation, and Entrepreneurship, (ed.) E.G. Carayannis, Springer Science+Business Media
- ↑ Thaler, S. L. (2011). "The Creativity Machine: Withstanding the Argument from Consciousness," APA Newsletter on Philosophy and Computers
- ↑ Thaler, S. L. (2014). "Synaptic Perturbation and Consciousness". Int. J. Mach. Conscious. 6 (2): 75–107. doi:10.1142/S1793843014400137.
- ↑ Thaler, S. L. (1995). ""Virtual Input Phenomena" Within the Death of a Simple Pattern Associator". Neural Networks. 8 (1): 55–65. doi:10.1016/0893-6080(94)00065-t.
- ↑ Thaler, S. L. (1995). Death of a gedanken creature, Journal of Near-Death Studies, 13(3), Spring 1995
- ↑ Thaler, S. L. (1996). Is Neuronal Chaos the Source of Stream of Consciousness? In Proceedings of the World Congress on Neural Networks, (WCNN’96), Lawrence Erlbaum, Mawah, NJ.
- ↑ Mayer, H. A. (2004). A modular neurocontroller for creative mobile autonomous robots learning by temporal difference, 2004 IEEE International Conference on Systems, Man and Cybernetics (Volume 6)
- ↑ Graziano, Michael (1 January 2011). "Human consciousness and its relationship to social neuroscience: A novel hypothesis". Cognitive Neuroscience. 2 (2): 98–113. doi:10.1080/17588928.2011.565121. PMC 3223025. PMID 22121395.
- ↑ Pavlus, John (11 July 2019). "Curious About Consciousness? Ask the Self-Aware Machines". Quanta Magazine. Retrieved 2021-01-06.
- ↑ Bongard, Josh, Victor Zykov, and Hod Lipson. "Resilient machines through continuous self-modeling." Science 314.5802 (2006): 1118–1121.
Bibliography
- Aleksander, Igor (1995), Artificial Neuroconsciousness: An Update, IWANN, archived from the original on 1997-03-02
- Armstrong, David (1968), A Materialist Theory of Mind, Routledge
- Arrabales, Raul (2009), "Establishing a Roadmap and Metrics for Conscious Machines Development" (PDF), Proceedings of the 8th IEEE International Conference on Cognitive Informatics, Hong Kong: 94–101, archived from the original (PDF) on 2011-07-21
- Baars, Bernard J. (1995), A cognitive theory of consciousness (Reprinted ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-30133-6
- Baars, Bernard J. (1997), In the Theater of Consciousness, New York, NY: Oxford University Press, ISBN 978-0-19-510265-9
- Bickle, John (2003), Philosophy and Neuroscience: A Ruthless Reductive Account, New York, NY: Springer-Verlag
- Block, Ned (1978), "Troubles for Functionalism", Minnesota Studies in the Philosophy of Science 9: 261–325
- Block, Ned (1997), On a confusion about a function of consciousness in Block, Flanagan and Guzeldere (eds.) The Nature of Consciousness: Philosophical Debates, MIT Press
- Boyles, Robert James M. (2012), Artificial Qualia, Intentional Systems and Machine Consciousness (PDF), Proceedings of the Research@DLSU Congress 2012: Science and Technology Conference, ISSN 2012-3477
- Chalmers, David (1996), The Conscious Mind, Oxford University Press, ISBN 978-0-19-510553-7
- Chalmers, David (2011), "A Computational Foundation for the Study of Cognition", Journal of Cognitive Science, Seoul, Republic of Korea: 323–357, archived from the original on 2015-12-23
- Cleeremans, Axel (2001), Implicit learning and consciousness (PDF), archived from the original (PDF) on 2012-09-07, retrieved 2004-11-30
- Cotterill, Rodney (2003), "Cyberchild: a Simulation Test-Bed for Consciousness Studies", in Holland, Owen (ed.), Machine Consciousness, vol. 10, Exeter, UK: Imprint Academic, pp. 31–45
- Doan, Trung (2009), Pentti Haikonen's architecture for conscious machines, archived from the original on 2009-12-15
- Ericsson-Zenith, Steven (2010), Explaining Experience In Nature, Sunnyvale, CA: Institute for Advanced Science & Engineering, archived from the original on 2019-04-01, retrieved 2019-10-04
- Franklin, Stan (1995), Artificial Minds, Boston, MA: MIT Press, ISBN 978-0-262-06178-0
- Franklin, Stan (2003), "IDA: A Conscious Artefact", in Holland, Owen (ed.), Machine Consciousness, Exeter, UK: Imprint Academic
- Freeman, Walter (1999), How Brains make up their Minds, London, UK: Phoenix, ISBN 978-0-231-12008-1
- Gamez, David (2008), "Progress in machine consciousness", Consciousness and Cognition, 17 (3): 887–910, doi:10.1016/j.concog.2007.04.005, PMID 17572107, S2CID 3569852
- Graziano, Michael (2013), Consciousness and the Social Brain, Oxford University Press, ISBN 978-0199928644
- Haikonen, Pentti (2003), The Cognitive Approach to Conscious Machines, Exeter, UK: Imprint Academic, ISBN 978-0-907845-42-3
- Haikonen, Pentti (2012), Consciousness and Robot Sentience, Singapore: World Scientific, ISBN 978-981-4407-15-1
- Haikonen, Pentti (2019), Consciousness and Robot Sentience (2nd ed.), Singapore: World Scientific, ISBN 978-981-12-0504-0
- Koch, Christof (2004), The Quest for Consciousness: A Neurobiological Approach, Pasadena, CA: Roberts & Company Publishers, ISBN 978-0-9747077-0-9
- Lewis, David (1972), "Psychophysical and theoretical identifications", Australasian Journal of Philosophy, 50 (3): 249–258, doi:10.1080/00048407212341301
- Putnam, Hilary (1967), The nature of mental states in Capitan and Merrill (eds.) Art, Mind and Religion, University of Pittsburgh Press
- Reggia, James (2013), "The rise of machine consciousness: Studying consciousness with computational models", Neural Networks, 44: 112–131, doi:10.1016/j.neunet.2013.03.011, PMID 23597599
- Rushby, John; Sanchez, Daniel (2017), Technology and Consciousness Workshops Report (PDF), Menlo Park, CA: SRI International
- Sanz, Ricardo; López, I; Rodríguez, M; Hernández, C (2007), "Principles for consciousness in integrated cognitive control" (PDF), Neural Networks, 20 (9): 938–946, doi:10.1016/j.neunet.2007.09.012, PMID 17936581
- Searle, John (2004), Mind: A Brief Introduction, Oxford University Press
- Shanahan, Murray (2006), "A cognitive architecture that combines internal simulation with a global workspace", Consciousness and Cognition, 15 (2): 433–449, doi:10.1016/j.concog.2005.11.005, PMID 16384715, S2CID 5437155
- Sun, Ron (December 1999), "Accounting for the computational basis of consciousness: A connectionist approach", Consciousness and Cognition, 8 (4): 529–565, CiteSeerX 10.1.1.42.2681, doi:10.1006/ccog.1999.0405, PMID 10600249, S2CID 15784914
- Sun, Ron (2001), "Computation, reduction, and teleology of consciousness", Cognitive Systems Research, 1 (4): 241–249, CiteSeerX 10.1.1.20.8764, doi:10.1016/S1389-0417(00)00013-9, S2CID 36892947
- Sun, Ron (2002). Duality of the Mind: A Bottom-up Approach Toward Cognition. Psychology Press. ISBN 978-1-135-64695-0.
- Takeno, Junichi; Inaba, K; Suzuki, T (June 27–30, 2005). "Experiments and examination of mirror image cognition using a small robot". 2005 International Symposium on Computational Intelligence in Robotics and Automation. Espoo, Finland: CIRA 2005. pp. 493–498. doi:10.1109/CIRA.2005.1554325. ISBN 978-0-7803-9355-4. S2CID 15400848.
Further reading
- Aleksander, Igor (2017). "Machine Consciousness". In Schneider, Susan; Velmans, Max (eds.). The Blackwell Companion to Consciousness (2nd ed.). Wiley-Blackwell. pp. 93–105. doi:10.1002/9781119132363.ch7. ISBN 978-0-470-67406-2.
- Baars, Bernard; Franklin, Stan (2003). "How conscious experience and working memory interact" (PDF). Trends in Cognitive Sciences. 7 (4): 166–172. doi:10.1016/s1364-6613(03)00056-1. PMID 12691765. S2CID 14185056.
- Casti, John L. "The Cambridge Quintet: A Work of Scientific Speculation", Perseus Books Group, 1998
- Franklin, S.; Baars, B. J.; Ramamurthy, U.; Ventura, Matthew (2005). "The role of consciousness in memory". Brains, Minds and Media. 1: 1–38.
- Haikonen, Pentti (2004), Conscious Machines and Machine Emotions, presented at Workshop on Models for Machine Consciousness, Antwerp, BE, June 2004.
- McCarthy, John (1971–1987), Generality in Artificial Intelligence, Stanford University.
- Penrose, Roger, The Emperor's New Mind, 1989.
- Sternberg, Eliezer J. (2007) Are You a Machine?: The Brain, the Mind, And What It Means to be Human. Amherst, NY: Prometheus Books.
- Suzuki, T.; Inaba, K.; Takeno, Junichi (2005), Conscious Robot That Distinguishes Between Self and Others and Implements Imitation Behavior (Best Paper of IEA/AIE2005), Innovations in Applied Artificial Intelligence, 18th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, pp. 101–110, IEA/AIE 2005, Bari, Italy, June 22–24, 2005.
- Takeno, Junichi (2006), The Self-Aware Robot -A Response to Reactions to Discovery News-, HRI Press, August 2006.
- Zagal, J.C., Lipson, H. (2009) "Self-Reflection in Evolutionary Robotics", Proceedings of the Genetic and Evolutionary Computation Conference, pp 2179–2188, GECCO 2009.
External links
- Artefactual consciousness depiction by Professor Igor Aleksander
- FOCS 2009: Manuel Blum – Can (Theoretical Computer) Science come to grips with Consciousness?
- www.Conscious-Robots.com, Machine Consciousness and Conscious Robots Portal.