Mind, Part 4: Expanding what is meant by Mind

A) Token for Admission

Previously, it was discussed how the mind is more than an isolated region or a series of chemicals. But contemporary science is still committed to a reductionist physicalism. To use Peirce’s nomenclature, these previous ways of explaining the mind belong to type physicalism, where abstract universals of mental events (eg. pain, fear) are taken to be identical to some neurophysiological event type.

But not every brain state is mental and not all brain activity manifests mentality, so by Leibniz’s law, aspects such as intentionality remain elusive for the theory. In short, we can discover all sorts of physiological processes correlated with processing the available information, but we never get at a description of the information processing itself. As we saw with vision last time:

“This kind of question-begging often takes the form of theories that in effect postulate homunculi with the selfsame intellectual capacities the theorist set out to explain. Such is the case when visual perception is explained by simply postulating psychological mechanisms that process visual information.” –  Jerry Fodor

So perhaps we should look instead to token physicalism, which deals with mental particulars (eg. Mary’s pain at this moment, Mario’s fear of snakes). This allows us to investigate the functioning of thinking rather than examining static structures or chemical flows, and to ask how exactly processing occurs.

B) Functionalizing Representations

If we take thoughts to be tokens, then we can give an account of mental states via representations. In a representational theory of mind, mental states can be tokens relating a person with a mental representation, and a fortiori all that is entailed in that representational content.

So how do we know if these mental states are occurring? A good proxy is function. In terms of a theory, viz. functionalism, this means that every mental event type can be fully characterized by means of its typical causal connections between inputs and outputs. The intervening steps, by Occam’s Razor, should be a form of work on the representational content. And these processes can be tokens.

However, to wit:

“The traditional view in the philosophy of mind has it that mental states are distinguished by their having what are called either qualitative content or intentional content.” – Jerry Fodor

So what about qualitative content, or qualia as discussed earlier? Well, if one considers qualia to be ineffable, then we cannot speak about them even if we tried; we can only monitor output states. The best we can do is control the input to different subjects and see what output (or self-reported representation) occurs. This obscuring has actually made the processing of some perceptual phenomena clearer to us, such as the phi phenomenon, eigengrau, stereopsis, motion parallax, etc. Research such as that of Zenon Pylyshyn has produced much to make us think the field is fecund.

The same functional explanations can also make progress towards explaining problems of intentionality. Representations have intentionality towards some form of knowledge, but having an idea (and its content) is not what motivates. It is the tokened thoughts and related processing operating on an idea that make decisions and produce behavioral output, and so the same consistent representation (eg. a belief or idea) can still differ in causal powers depending on the thought processes and other representations that coincide with it. So intentionality is accepted and woven into the theory. In other words:

“Functionalism is not a reductionist thesis. It does not foresee, even in principle, the elimination of mentalistic concepts from the explanatory apparatus of psychological theories.” – Jerry Fodor

Similar to qualia then, functionalism elides the problem of intentionality. It does so, however, by changing intentional content to semantic content, on the assumption that symbols, like thoughts, refer to things. Working on this basis, we can develop a science of thought as a series of potentially logically manipulable moves, as hoped for last time:

From “The logical primitives of thought: Empirical foundations for compositional cognitive models”

As we cash out intentionality for semantic content, we need to explore further the relations between syntax and semantics, and how we can model them.

C) Computizing Functions

If we’re swapping out representations for symbols, then we can likewise replace thinking with computations since computations are causal chains of inference on symbols. This manipulation of symbols proceeds in virtue of being sensitive to the accompanying syntax, the same way that representations are thought about in context with other representations. At this point, we can abstract all the processes to the point of talking merely about information processing.

As discussed before, chemicals require further refinement to become translated into knowledge, but when we abstract chemicals to the level of information, we also make information medium neutral, viz. we can use any medium we want to carry information if the interlocking of information and processor is what creates knowledge. This ability to realize the same functions in different substrates is called multiple realizability. This means that, unlike type physicalism, function in the brain need not be isomorphic to morphology.

So by computation we just mean any process whose successive states can be generated by an algorithm. So, what can we accomplish with algorithms and multiple realizability? Well, the biggest breakthroughs in this line of thinking came with the concept of a Turing machine that can read and carry out a list of instructions, Alonzo Church’s lambda calculus on how to read those instructions, and von Neumann architecture as a method of implementation. These worked together to make the process universal and formed the basis of modern-day computers.
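To make the notion of an instruction-following machine concrete, here is a minimal sketch of a Turing-machine simulator (the function name, the transition table, and the bit-flipping example are all illustrative assumptions, not drawn from any of the sources above):

```python
# A toy Turing machine: a transition table maps (state, symbol) to
# (symbol to write, head movement, next state). This example simply flips
# every bit on the tape and halts at the first blank cell.

def run_turing_machine(tape, table, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

flip_bits = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine("10110", flip_bits))  # prints 01001_
```

The point is only that a short list of mechanical rules, read and applied one step at a time, suffices to carry out a procedure; the rest of modern computing is elaboration of this scheme.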

A simple Turing machine

So how far can we take this? Assuming information is medium neutral, we can code the list of instructions in any binary form. It could be a 0 or a 1, a punched or unpunched hole in a card, a voltage reading either high or low along a circuit, etc. As a species, humans have been doing this for a long time by turning continuous sounds into discrete units, such as when notes are used to compose music. Binary digits, ie. bits, are ubiquitous in contemporary technology. So that part is also feasible.
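As a small illustration of that medium neutrality, here is a sketch (the message, the punch-card convention, and the voltage threshold are all invented for the example) in which the same two characters are carried as plain bits, as holes in a card, and as voltage levels, and decoded identically from each:

```python
# The same information realized in three "media": digits, punched holes, voltages.

def text_to_bits(text):
    return "".join(format(ord(c), "08b") for c in text)

def bits_to_text(bits):
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

bits = text_to_bits("Hi")                                     # '0100100001101001'

punch_card = "".join("*" if b == "1" else "." for b in bits)  # '*' = punched hole
voltages = [5.0 if b == "1" else 0.2 for b in bits]           # volts on a wire

from_card = "".join("1" if c == "*" else "0" for c in punch_card)
from_volts = "".join("1" if v > 2.5 else "0" for v in voltages)

assert bits_to_text(from_card) == bits_to_text(from_volts) == "Hi"
```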

Is this sufficient for conscious activity? Enter the Turing test. Our normal conversations proceed with a principle of charity, granting intelligence to others so long as they are functionally equivalent to all the other persons we encounter. That is how we normally form our judgments that others possess similar qualities to us. So if a computer passes for human, according to the Turing test, then we can grant it an equivalent level of awareness.

So, similar to qualia and intentionality, functionalism also dodges questions of conscious behavior insofar as considering a simulation successful means working as if it were real. Turing called his test an “imitation game” after all, and the field feels that this is sufficient.

“The only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise, according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult.” – Alan Turing

To stick to a mechanistic framework, one must use a simple mechanistic model, and a Turing machine and the Turing test provide exactly that. From them, it is reasonable to project a scientific endeavor around attempting to express the rules governing human behavior by a Turing machine table relating input states to output states.

This decision to treat physical structures as equivalent so long as they are running the appropriate software or program is perfectly compatible with multiple realizability. This line of thinking can be traced as far back as Hobbes, who said that “when a man reasons, he does nothing else but conceive a sum total from addition of parcels.” The thinking continues in the contemporary zeitgeist with philosophers such as Daniel Dennett, who asks: “you can replace or splice the auditory nerve with nonliving wire, suitably attached at both ends; why not the rest of the brain?” And if this is so, then we can realize any carbon-based matter (eg. brain) on silicon-based matter (eg. computer), provided that the algorithms being performed are equivalent in behavioral output. This stance is called Good Old Fashioned Artificial Intelligence, ie. GOFAI.

“The brain is just a computer made of meat” – Marvin Minsky

On this front, the field can take considerable credit. Depending on how you judge, the Turing test could be considered beaten in the form of Weizenbaum’s DOCTOR psychotherapist program and Colby’s PARRY program, which simulated a paranoid schizophrenic. The performance of computers is superior in many ways as well, beating humans at checkers, chess, and Jeopardy!. Some can even compose music at a Turing-test level.

None of this should be surprising when one considers the hardware of the brain compared to the hardware of a computer. But despite these advances, there is still an uncanny valley effect, whereby we are reluctant to attribute consciousness to a computer even as it seems more and more to exhibit attributes of intelligence. Is this simply biocentric “prejudice against comprehending robots”, as Dennett would say? Or is this tendency built on something else that needs to be articulated?

D) Doubtable Computations

Are there problems with AI that would justify our feeling of an uncanny valley? Well, in fact, there are several “problems”, even acknowledged within the field itself.

First, there is the binding problem. This consists of how a program should segregate input into parts, and how the parts combine into a whole. Ambiguous figures, like the Necker Cube, show how lines are bound to a perception of the whole:

“The machine could not interpret the figure three-dimensionally as a cube first in one, then in the other of these orientations. Such an interpretation would require the machine to focus on certain aspects of the figure while leaving others in the background, and the machine lacks precisely this figure-ground form of representation. For it, every point of the figure is equally explicit.” – Hubert Dreyfus

This problem has persisted since a famous German used to talk about the unity of transcendental apperception. And it shows no signs of being resolved in the near future.

Then there’s the frame problem. This consists of selecting which facts are relevant enough to serve as axioms when the world is as dynamic as it is. Some researchers claim that heuristics solve this problem, and they may in the functionalist sense (viz. by ignoring it). But as Fodor says:

“If someone thinks that he has solved the frame problem, he doesn’t understand it; and if someone thinks he does understand the frame problem, he doesn’t; and if someone thinks that he doesn’t understand the frame problem, he’s right.”

Finally, there’s the halting problem. This stems from the Entscheidungsproblem, as a famous German called it, which asked: is there some mechanical procedure for answering all mathematical problems belonging to some broad but well-defined class? In other words, is there a general algorithm by which it is possible to determine whether a given Turing machine will ever halt or continue endlessly? This problem actually was resolved. However, the resolution is that no such algorithm exists for all program-input pairs. Think about it as follows:

“Productivity is embarrassingly cheap; all it requires is computational procedures that can apply to their own outputs, so computational systems that can do recursion needn’t be able to do anything much else that’s interesting…Everybody knows about the computer scientist who was found dead in his bathtub holding a bottle of shampoo with the instruction: ‘Soap, Rinse, Repeat’.” – Jerry Fodor
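The core of the classical argument can be sketched in a few lines (the names here are hypothetical stand-ins, and the “decider” is deliberately naive): given any purported halting decider, one can build a program that does the opposite of whatever the decider predicts about it, so no decider can be right for every program-input pair.

```python
# Sketch of the diagonal argument against a universal halting decider.

def naive_halts(program, argument):
    # Stand-in decider: (wrongly) claims that everything halts.
    return True

def make_troublemaker(halts):
    # Build a program that contradicts whatever the given decider predicts.
    def trouble(x):
        if halts(trouble, trouble):
            while True:       # decider said "halts", so loop forever
                pass
        return "halted"       # decider said "loops", so halt immediately
    return trouble

trouble = make_troublemaker(naive_halts)
print(naive_halts(trouble, trouble))  # True -- yet trouble(trouble) would loop forever
```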

None of this is encouraging for GOFAI. But it needn’t be discouraging either. All fields have difficulties, and there’s no indication that the above predicaments are insoluble. So one way or the other, we need more definitive answers to what is, and is not, possible for GOFAI.

E) Limits to Formal Language

The problem of formalizing a language for computation, as seen in the halting problem, emerges in other ways. Saul Kripke, arguably following Wittgenstein, created the quus paradox, which defines an ambiguous operation:

x quus y = x + y, if x, y < 57

x quus y = 5, if otherwise.
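Written out as a sketch in code, purely to make the definition above concrete:

```python
def quus(x, y):
    """Kripke's quus: behaves exactly like addition below the threshold."""
    if x < 57 and y < 57:
        return x + y
    return 5

print(quus(2, 3))    # 5   -- indistinguishable from plus
print(quus(10, 40))  # 50  -- still indistinguishable from plus
print(quus(68, 57))  # 5   -- not 125; the divergence only shows up here
```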

If you’ve been working with sums whose terms have all been below the threshold of 57, then quus has appeared as plus this whole time. Maybe the operation called for all along was quus and it was mistaken for addition. This means that regardless of past behavior, there is the possibility that the computer’s next output will be inconsistent, and you’ll assume it has broken, when in reality it was just doing its job of running quus. Which is the correct interpretation of the operation is something we’ll never know because:

“Any interpretation still hangs in the air along with what it interprets, and cannot give it any support. Interpretations by themselves do not determine meaning” – Ludwig Wittgenstein

We’re left with only interpretation because there is nothing in the physical properties of the machine that shows us what programs are running; we depend on the output as evidence that things are going correctly. To wit,

“This was our paradox: no course of action could be determined by a rule, because any course of action can be made out to accord with the rule.” – Ludwig Wittgenstein

This statement is coherent, however paradoxical sounding.

Analytic philosophers such as Wittgenstein disagree

Some reduce this paradox to the perennial problem of induction. But that merely delays the problem rather than solving it, as induction is itself a Gordian knot that has yet to be untangled.

F) Limits to Algorithms

But what if the problems of induction are old hat and do not faze you? You simply say that we set the algorithms and GOFAI can do all the things humans can do, tout court.

But what about things that the algorithms can’t do? For instance, how is a computer going to have emotions? We build computers to be very rational and equanimous in disposition, but people are notoriously not like that. And if emotions have any value it is in highlighting salient qualities in our experiences. Without definitive evidence, it is reasonable to suggest that anger deeply anchors a memory and makes it harder to forget, boredom drives curiosity and innovation, confusion reveals contradiction, etc. Dennett goes so far as to say that emotions serve as heuristic tools to solve NP-complete problems. As such:

“A truly intelligent computational agent could not be engineered without humor and some other emotions.” – Daniel Dennett

It also follows that:

“A strict algorithmic approach will be inadequate to imbue an agent with a sense of humor because the structure of humor is dictated by the riskiness of heuristic processes that have evolved to permit real-time conclusion-leaping” – Daniel Dennett

Ethically, would you want to give robots emotions? Could we be justified in creating entities that suffer when we just as easily could create entities that don’t suffer?

“If we were to consider the project of creating a robot that could cry at the movies, we would have to do something apparently rather cruel. We would have to ensure that this robot knew all about suffering, for it is only against a background of pain that beautiful scenes in films become deeply moving rather than merely nice.” – Alain de Botton

I also don’t know how that would merge with Asimov’s three laws, for instance, as emotions might make AI malignant. But some would argue that this difference is exactly the sine qua non that grants us our peculiar form of awareness:

“The sad thing about artificial intelligence is that it lacks artifice and therefore intelligence.” – Jean Baudrillard

So we need non-algorithmic processing to do things such as emote. But emotions could in principle be simulated by heuristics, as discussed above by Dennett, and simulations are sufficient to pass the Turing test. Is there anything else that would lie outside of the computational, ie. mathematically provable, abilities of GOFAI?

We know that mathematical provability itself is outside mathematics, according to Gödel and his incompleteness theorems. This means:

“The following disjunctive conclusion is inevitable: Either mathematics is incompletable in this sense, that its evident axioms can never be comprised in a finite rule, that is to say, the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine, or else there exist absolutely unsolvable diophantine problems of the type specified.” – Kurt Gödel

How do we parse this? Well, let’s abstract back for a moment, beyond mathematics into metamathematics. Let’s assume we can’t prove either part of the disjunction. This frustrates a Turing machine, which needs provability to arrive at truth. But humans don’t. In the case of Gödel’s theorem:

“If the workings of the mathematician’s mind are entirely algorithmic, then the algorithm that he actually uses to form his judgments is not capable of dealing with the [Gödel theorem] constructed from his personal algorithm. Nevertheless, we can, in principle, see that [the Gödel theorem] is actually true! This would seem to provide him with a contradiction, since he ought to be able to see that also. Perhaps this indicates that the mathematician was not using an algorithm at all!” – Roger Penrose

Likewise, in the case of Goodstein’s theorem:

“Goodstein’s theorem is actually a Gödel theorem for that procedure that we learn at school called mathematical induction…[but] the unprovability of Goodstein’s theorem certainly does not stop us from seeing that it is in fact true. Our insights enable us to transcend the limited procedures of proof.” – Roger Penrose

And thus the first part of the disjunction, that humans have abilities beyond those of Turing machines, is true.

This ability for insight and rumination is beyond inferences and recursion. Ayn Rand insists that such insight provides us with the innovation and creativity that allow us to go beyond instinctual algorithms:

“For man, the basic means of survival is reason. Man cannot survive, as animals do, by the guidance of mere percepts… No percepts and no “instincts” will tell him how to light a fire, how to weave cloth, how to forge tools, how to make a wheel, how to make an airplane, how to perform an appendectomy, how to produce an electric light bulb or an electronic tube or a cyclotron or a box of matches. Yet his life depends on such knowledge – and only a volitional act of his consciousness, a process of thought, can provide it.”

Whether or not Rand’s whole theory is valid overall is irrelevant to the point of her argument here. One can say that humankind made many innovations without saying that laissez-faire capitalism is the best form of economics, the same as one can say non-algorithmic cognition led to innovations without embracing the perfectly rational Objectivist as the model person.

Atheist philosophers such as Ayn Rand disagree

So we’re clearly separated from GOFAI via non-algorithmic cognition. Is there anything else?

G) Engendering Embodiment

In a sort of ironic twist, in avoiding dualism, GOFAI creates its own kind of dualism. When you have multiple realizability as an option and make syntax sufficient for creating consciousness, then you’ve created a situation where mind and body, software and hardware, are totally independent of each other. To restore the link between tokening and physicalism, we need limits. At the most basic level this means we need to insist on physics to avoid dualism. So in however limited a form, we need to embody the algorithms in a physical system that can process inputs to produce outputs.

But how does the physics of a Turing machine correlate with the physics of a brain? Well, simply put, the former is digital and the latter is analog. These are considerably different physical facts that need to be accounted for. Physical facts cannot be neglected, even in the case of modern computers, since we know about phenomena such as quantum tunneling. The very idea of a computer and a brain being physically interchangeable was questioned as early as 1957:

 “The available evidence, though scanty and inadequate, rather tends to indicate that the human nervous system uses different principles and procedures [from a computer]” – John Von Neumann

However, as discussed above, information can be digitized. Does this mean a digital reproduction will yield the same results? Some argue that if we can fully digitize, then that information content can be translated from one form to another and recovered in an equivalent form. These are the people who will get into the Star Trek teleporters.

But in thinking about what distinguishes the configuration of you from me from a door, it’s not the individual constituents (assuming we’re all made of the same atomic pieces), it’s how the constituents are arranged. And digitization succeeds because it takes continuous phenomena and removes idiosyncrasies in a Procrustean manner to render them discrete. So instead of a teleporter, Donald Davidson offers the following:

“Suppose lightning strikes a dead tree in a swamp; I am standing nearby. My body is reduced to its elements, while entirely by coincidence the tree is turned into my physical replica. My replica, Swampman, moves exactly as I did; according to its nature it departs the swamp, encounters and seems to recognize my friends, and appears to return their greetings in English. It moves into my house and seems to write articles on radical interpretation. No one can tell the difference.

But there is a difference. My replica can’t recognize my friends; it can’t recognize anything since it never cognized anything in the first place. It can’t know my friends’ names (though of course it seems to); it can’t remember my house. It can’t mean what I do by the word ‘house’, for example, since the sound ‘house’ Swampman makes was not learned in a context that would give it the right meaning, or any meaning at all.”

However indistinguishable the two would be, Swampman and Davidson would not be the same. However much the input-output production correlates with what Davidson would have done prior, it doesn’t have the causal history to have the same thoughts and meanings behind those actions. To have the same thoughts and meanings, we need to be in the real world. Up until now we’ve been assuming that all physical processes can be described in mathematical formalism, translated into syntax, and then manipulated algorithmically. But what about the surrounding context not embedded in the syntax, ie. linguistic pragmatics?


The context in which content is received matters considerably. Hubert Dreyfus, as far back as 1965, argued that syntax will be unable to replicate pragmatics because it lacks this holistic context.

First, Dreyfus points out that while computers use heuristics to search through options that may explode exponentially in number, a person first limits the possibilities by keeping track of salient objects on the “fringes of consciousness.” Thus, chess players could beat early computers despite the latter’s clearly superior ability to calculate strategic options.

Secondly, a person can reduce ambiguity without having to reduce it entirely to an algorithmic set of rules. Suppose we are given the opaque sentence: “the box was in the pen”. It can be clarified if we’re given the context of the two prior sentences: “Little John was looking for his toy box. Finally he found it. The box was in the pen.” Similar to how a person glimpses insight into Goodstein’s theorem, you understand it now when given context, but an algorithm would be lost.

Continental philosophers such as Dreyfus disagree

This inability to incorporate context into Turing machines has not escaped the attention of those in the field:

 “The problem is to get the structure of an entire belief system to bear on individual occasions of belief fixation. We have, to put it bluntly, no computational formalisms that show us how to do this, and we have no idea how such formalisms might be developed…If someone, a Dreyfus for example, were to ask why we should even suppose that the digital computer is a plausible mechanism for the simulation of global cognitive processes, the answering silence would be deafening.” – Jerry Fodor

So however much we “neutralize” information, that doesn’t mean the underlying cognitive mechanisms are working like human consciousness.

H) The Insufficiency of Syntax

We are improving our ability to make machine output look like human behavioral output. But the output delivered is ascertainable directly from the syntax, and there are reasons to think that syntax is not a sufficient condition for consciousness. However effective this is at producing a simulation, novel stimuli outside the reach of the syntax will remain noncomputable. Descartes had an intuition about this when he said:

“It is indeed conceivable that a machine could be made so that it would utter words, and even words appropriate to the presence of physical acts or objects which cause some change in its organs; as, for example, if it was touched in some spot that it would ask what you wanted to say to it; if in another, that it would cry that it was hurt, and so on for similar things. But it could never modify its phrases to reply to the sense of whatever was said in its presence, as even the most stupid men can do.”

But we’re at a level where machines exist that can pass the Turing test, so the range of stimuli that remains novel or unaccounted for by syntax seems to be dwindling.

But is passing the Turing test sufficient? Let’s look to syntax itself, since that’s the primary factor (and the only consistent factor across multiple realizations). In the machine that passes the Turing test, how can we know that the syntax is producing conscious awareness of what is occurring? Consider the “Chinese Room” thought experiment of John Searle:

“Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.”

However much manipulation occurs over the input, the Chinese Room inhabitant will not get a semantic, or intentional, meaning out of the process. So as long as the functioning of the system comes from syntax, it will never be sufficient to produce consciousness. In other words, being competent at the Turing test is not sufficient to produce consciousness.

Intentionality can’t get into the Chinese Room

Proponents of GOFAI have tried to respond in kind. They claim that with enough complexity, there is indeed a form of consciousness. They argue that a system is aware of something if it has a model of that thing within itself, and self-aware when it has a model of itself within itself.

But there is no reason to think that being self-referential is equivalent to self-awareness. As Penrose puts it:

“A video camera has no awareness of the scenes it is recording; nor does a video-camera aimed at a mirror possess self-awareness”

Other proponents of GOFAI would argue that consciousness will emerge out of parallel processing (ie. using distinct pathways instead of a serial pathway as seen in a reflex) or out of sufficiently fast processing (eg. quantum computing). They would point to the Luminous Room thought experiment, where someone waves a magnet and argues that the absence of resulting visible light shows that Maxwell’s electromagnetic theory is false. This is supposed to be analogous to the absence of understanding in the Chinese Room showing that syntax accounting for intentional content must be false.

Processes may be performed faster in these computer variants, but that speed does not account for emergence. In the case of the magnet in the Luminous Room, we know that the smaller components of electromagnetism can be combined to yield the larger effects given larger and faster components. Mutatis mutandis for dipoles and liquids. And so on. But not so between the arrangement of mind and matter, where there is no semblance of the emergent property in the subordinate properties. This is an explanatory gap, and wildly screaming “emergence!” will not solve it. The emergence of a novel property like consciousness, one not traceable back to any prior components, is not in line with reductionism but rather with ad hoc speculation and metabasis eis allo genos.

And there is also no reason to suppose that these different forms of computing somehow avoid syntax, viz. they are still Turing machines. There is nothing about these variations in the form of the computing apparatus that gets past the simple fact that syntax, even if speedy, accumulating, and recursive (and so on, ad infinitum), will not generate semantics.

“Like the ancient astronomers, they try to save their theory by adding a few more epicycles” – Hubert Dreyfus

Still other proponents of GOFAI may leave aside arguments from speed or complexity, but may try to relocate the seat of consciousness. They would argue that the system as a whole can deliver the knowledge without any one piece having the full story.

But if we’re going to grant that, then maybe the correct amount of computation requires many more people, and so much more space. Perhaps a better analogy than a room would be a nation with a large population, like India. But it is absurd to think that India as a country understands a story that none of its individuals understand, which is what is being claimed. This sort of reply should strike you as simply wrong. As Searle says:

 “[Their] idea is that while a person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible.”

I) Revisiting Intentionality

Assuming that we stay within an algorithmic framework (viz. ignore Penrose), it has still been shown that syntax (a la Searle), symbols (a la Wittgenstein), and even pragmatics (a la Dreyfus) are problematic for a Turing machine. So what explains the functioning of a computer? How does it simulate intentionality?

Intentionality was a topic dodged earlier, but now it needs to be brought back. There are reasons to think that a machine’s syntactical operations are derived from the human consciousness that programs them, and are not autochthonous. Computation is not intrinsic to physics, but assigned by an observer in the form of an isomorphism between human logical reasoning and the program. As put by Edward Feser:

“Machines have the purposes they do only because human minds with representational powers designed them. Their intentionality is derived rather than intrinsic (as Searle would put it) so that they do differ from human brains in kind and not degree.”

And why is this simulation rather than use of the same logic? Well, because there is nothing in the objective physical facts that determines which, if any, of these things is the beginning or end of a causal chain. The short version is:

“There can be no causal or physiological theory of reason.” – Karl Popper

This is because identifying a causal chain presupposes interpretation, and a fortiori it presupposes representation and intentionality. That’s circular reasoning at its finest.

Theist philosophers such as Feser disagree

Consider Ned Block’s Blockhead machine. This machine is programmed with a large repertoire of sentences that are syntactically and grammatically correct, and can give appropriate responses to questions. This machine could carry on a conversation and pass the Turing test. However, it fails for the reasons given above:

“If one is speaking to an intelligent person over a two-way radio, the radio will normally emit sensible replies to whatever one says. But the radio does not do this in virtue of a capacity to make sensible replies that it possesses. The two-way radio is like my machine in being a conduit for intelligence, but the two devices differ in that my machine has a crucial capacity that the two-way radio lacks. In my machine, no causal signals from the interrogators reach those who think up the responses, but in the case of the two-way radio, the person who thinks up the responses has to hear the questions. In the case of my machine, the causal efficacy of the programmers is limited to what they have stored in the machine before the interrogator begins.”
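To make the structure of Block’s machine vivid, here is a toy sketch (the table entries and names are invented for illustration; Block’s actual construction stores a response for every possible conversation up to some length): the machine’s “conversational competence” is nothing but a lookup into answers its programmers stored in advance.

```python
# Toy "Blockhead": every reply is keyed on the exact conversation so far.
# All the intelligence was exercised by the programmers who filled the table.

canned_responses = {
    ("How are you?",): "Quite well, thank you. And yourself?",
    ("How are you?", "What do you think of Searle?"):
        "I find the Chinese Room a vivid image, though much debated.",
}

def blockhead_reply(history):
    return canned_responses.get(tuple(history), "I'm not sure what to say.")

history = []
for question in ["How are you?", "What do you think of Searle?"]:
    history.append(question)
    print("Q:", question)
    print("A:", blockhead_reply(history))
```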

Computers and their designs are artifacts, arranged by humans, artificially. The intentionality that seems to originate in the machine is derived from our intentionality.

J) Revisiting Qualia

Qualia are another topic bypassed at the beginning. But we can’t ignore the internal state. A conscious internal state must contain some form of qualia, even if ineffable. But how do we know there is any internal awareness of sensation?

Some would argue that, granted enough information, one can have knowledge of how a physical sensory impression would impinge on the nervous system. But this mistakes types for tokens. Processes will be tokens, capable of being realized across multiple physiological types. If we grant multiple realizability, then there are infinitely many possible hardware configurations that can instantiate the software, and identifying how input would activate processes, not to mention additionally create output, seems hopeless.

Also, given multiple realizability, the ability to see a quale such as red does not necessarily follow from the connection of a certain input to correlated processing. For what happens if we inverted the spectrum of qualia? We can have functionally equivalent states with different qualia, such as an inversion of the color spectrum.

“Neither would it carry any Imputation of Falsehood to our simple Ideas, if by the different Structure of our Organs, it were so ordered, that the same Object should produce in several Men’s Minds different Ideas at the same time; if the Idea, that a Violet produced in one Man’s Mind by his Eyes, were the same that a Marigold produces in another Man’s, and vice versa.” – John Locke

For some, this inner state is irrelevant. In dealing with the inverted spectrum, Quine says: “The fixed points are just the shared stimulus and the word. The ideas in between are as may be and may vary as they please, so long as the stimulus in question stays paired with the word in question for all those concerned.” If so, we can continue using functionalism without concern about internal states.

But producing the same output does not mean the inner states and qualia are identical. Programs producing the same output at the macro level need not share the same internal processing states at a finer level. That is the whole point of multiple realizability, after all. So if at the macro level two people call red things ‘red’, this does not mean that their internal states, and experience of the quale, are phenomenally the same. And we should not be indifferent towards inner phenomenal differences. In practical medical terms, this means:

 “The simple analogy to computer hardware and software illustrates the difficulty in psychiatry.  One can understand all of the hardware of a computer system, but this will not explain if or how the computer can run word processing software, video games, instant message or be susceptible to a virus, or make any predictions about the behavior of this software in the real world (for example, no computer technician could predict the writing of this paper, nor, by changing the hardware, alter the content of this paper.) Hardware is finite, but software is infinite, or as infinite as is thought. Without understanding the mechanism of thought, or at least how thoughts or states can affect mood, then a pharmacology of the brain will simply tread water with no progress towards either treatment or diagnosis.  One cannot alter mood without at least simultaneously altering thought.”

And there’s no reason to think that simple mechanisms concatenated will yield qualia, for the same reasons I gave for being against emergent properties above. Think of it this way: Imagine that a billion people in China are given two-way radios to enact signals displayed to them in skywriting. If this functions as a brain, if each person is a neuron or algorithm, then the whole system would never have experiences. You cannot concatenate their actions and form qualia.

So functionality does not guarantee qualia, and so cannot guarantee consciousness.

K) Plasticity beyond Programming

Considering all of the above, it should be no surprise that in 1986, Hubert Dreyfus said:

“Artificial Intelligence’s record of barefaced public deception is unparalleled in the annals of academic studies”

It also appears that the one thing different philosophical persuasions, from Analytic to Continental, from atheist to theist, have in common is their disapproval of GOFAI. Being different schools of thought, they have different reasons of course, but there is a shared disapproval nonetheless. Which is fairly damning for proponents of GOFAI.

Intentional states beyond the laws of language and syntax, ones that function with non-algorithmic insights, need to be modeled. In modern parlance this is called learning. When used in neuroscience it is called plasticity. And the incorporation of these ideas in computer science, as a way to compensate for the over-reliance on pre-designed programs seen in GOFAI, is called machine learning.

Moving beyond rules and learning to adapt is what will be discussed in the next section.

 
