On Writing About Harry Potter

It is with regret that I tell you all that I do not have a book review for this week. The book that I was so excited to write about is lost in the chaos that is my half-unpacked bedroom, and I didn’t have time to finish it by my Thursday night deadline thanks to a combination of physical therapy, work, and travel. In lieu of a proper book review, I would like to discuss my favorite book series, Harry Potter.

Writing about Harry Potter is difficult these days. Things were much more straightforward when we only had the seven books, but with the advent of Pottermore, what is and isn’t canon has become more and more of a question. In the beginning, I tried to keep up, but recently I have been more of an advocate for returning to the original seven texts whenever I am in any sort of doubt.

I’m not sure that I would pay them to stop, but I do think that the situation has gotten slightly out of hand.

When I write about Harry Potter, I try to stick to the main text as my base of evidence, though I will admit to a certain amount of cherry-picking when it comes to the extended Harry Potter universe. The fact of the matter is that I had to come up with a system, because I do tend to spend a great deal of my time thinking and writing about Harry Potter.

One of my primary missions while I was in college (other than simply graduating) was to make sure that I made a significant Harry Potter reference in at least one of my graded assignments every semester. I am pleased to say that I succeeded, and my final Harry Potter essay was worth 60% of my grade in the last class I needed to complete my major. I’m quite proud of this paper, which I worked on with no small amount of dedication (as anyone with an essay worth 60% of their grade would), which is why I posted it on this site in the first place. The paper is concerned with the representation of fate, free will, and agency in the Harry Potter universe, and is very much tailored to the religious philosophy that predated modernity, which was the primary focus of that class. If you would like to read the entire essay you can do so here, though I recommend setting aside some time, since it is on the longer side.

While I wrote many papers about Harry Potter during my undergraduate career at Brandeis, the only other one that I felt was worth posting is an essay that I wrote for my Introduction to Global Literature course, which I took in the spring of my sophomore year. The essay compares how morality is conveyed in fantastic literature versus how it is conveyed in realistic literature, contrasting the Harry Potter series with Persepolis by Marjane Satrapi. If you would like to read that essay you can find it here, and I do promise that it is shorter than the other one, since it had a different length requirement and was worth a much smaller portion of the grade – 20%, I believe, but I’m too lazy to track down my old syllabus.

I’m considering digging up some of the Harry Potter essays I wrote back in middle school and early high school, when I felt the pain that many teenagers feel – that the world had turned its back on me – and turned to the Harry Potter series in response. Depending on how much I agree or disagree with the thoughts of my former self – not to mention my former self’s attention to grammar – I might end up posting them, or at least my revised commentary on them.

In any case, don’t expect this to be the last discussion of Harry Potter on this blog, and tune in next week for a mystery topic on Tuesday and a guaranteed book review on Friday.

Cheers,

Talia

The Ghost in the Machine in the Chinese Room

[This is a guest post by Talia’s girlfriend Annie, who is maintaining this blog while Talia is away at Middlebury Language Schools. Also, I’m sorry this post is a couple of days late; I’ve been really busy this week.]

The “Chinese room” is a famous thought experiment in philosophy of mind that argues that, no matter how well the output of a computer program imitates the output of human thought processes, a computer can never attain true consciousness or understanding. The argument was first articulated by philosopher John Searle in his 1980 paper “Minds, Brains, and Programs,” and runs as follows:

Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch a "script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call the "program." Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while, I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view – that is, from the point of view of somebody outside the room in which I am locked – my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese. Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view – from the point of view of someone reading my "answers" – answers to the Chinese questions and English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.

Searle argues that, just as his ability to manipulate symbols to produce what looks like fluent Chinese would not mean he understands Chinese, neither would a computer’s ability to do the same mean that it understands Chinese (or whichever language it is imitating).

Now, I’m not going to mince words here: I think this argument is completely wrong. But it’s wrong in a way that’s worth considering at length, because I think it illuminates a common error in how many people think about computers – and, for that matter, about minds.

 

In identifying himself with the computer in this thought experiment, Searle is implicitly treating the computer, and thus the “mind” of a hypothetical artificial intelligence, as isolable from the programs that are given to it. In Searle’s formulation, programming a computer is analogous to giving a set of instructions to a human who then blindly carries them out. The human’s mind processes the instructions and moves the body so as to carry them out, but doesn’t gain deeper understanding from them.

But I think Searle is drawing his conceptual boundaries in the wrong place. By placing a human in the Chinese room as the agent carrying out the instructions, Searle has biased himself and his audience in favor of his interpretation. We all know that humans have conscious, thinking minds, so we naturally assume that the human is the only thing in the Chinese room that could be doing any thinking. But is this necessarily the case? I would argue that no, it’s not.

Searle’s main error, in my view, is his identification of the human in the Chinese room with the computer. The human is actually carrying out a role more akin to that of a processor – the part of the computer’s physical hardware that translates the information in a program into actions. Treating the processor as if it were the entire computer completely overlooks the role played by the information in the programs themselves. Treating the processor as the location of an artificial intelligence’s “mind”, with no reference to the information being processed, is like looking for the human mind in the physical architecture of individual neurons without paying any attention to the electrochemical state of those neurons and the information encoded by that state. In the analogy between mechanical minds and human minds, programming a computer isn’t like giving a human a list of instructions – it’s more like giving them a psychiatric drug that directly modifies the functioning of their brain.
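To make that split between processor and program concrete, here is a deliberately tiny Python sketch of a “Chinese room”. The rule table, the symbols, and the run_room function are all invented for illustration (a real rule book would be unimaginably larger); the only point is where the knowledge lives.

# A toy "Chinese room": RULES stands in for Searle's rule books,
# and run_room() stands in for the person (or processor) applying them.
RULES = {
    "你好吗": "我很好，谢谢",        # "How are you?" -> "I'm fine, thanks"
    "你叫什么名字": "我没有名字",    # "What's your name?" -> "I don't have a name"
}

def run_room(rules, question):
    """The 'processor': match the shape of the input, copy back the listed reply.

    It never interprets the symbols; it only looks them up.
    """
    return rules.get(question, "对不起，我不明白")  # fallback: "Sorry, I don't understand"

print(run_room(RULES, "你好吗"))  # prints 我很好，谢谢

Nothing about run_room changes if you swap in a rule table for Swahili small talk or for chess openings; everything the room “knows” lives in RULES. Searle identifies himself with run_room, finds no understanding there, and concludes there is none anywhere.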

In the Chinese room, the human is embedded in a larger system that includes the rule books the human is using to process the Chinese writing. The human may not understand Chinese, but I would argue that this larger system does. If this system contains all the information necessary to recognize a semantically meaningful input and produce an equally meaningful output in response, and to do so with all the robustness and fluidity of a native human speaker of the language, I would be entirely comfortable saying that this system understands the language. If we are assuming that anything capable of understanding a language must qualify as a mind, then the Chinese room represents one mind embedded inside another one.

If this argument is hard for you to intuitively grasp, it might help to think about what the physical architecture of the Chinese room would have to look like. At my current job, I work on programs that process language, and the code for these programs is really long. I only work on a small portion of it, but I’d guess that if the whole thing were printed out, it could fill up a few books. And what these programs are capable of is not even close to what the hypothetical program in Searle’s thought experiment would have to be capable of. The programs I work on take a piece of text as input and identify key words and phrases that relate to a particular domain of interest. Most of them stop there, but the most loquacious among them will spit out one of a few pre-written phrases to prompt the user for more input. That’s a far cry from producing fluent speech that’s indistinguishable from that of a native speaker. A program that could do that would have to be vastly longer and more complicated than any of the programs I’m familiar with.* And in the Chinese room experiment, we’re not even talking about a digital representation of these programs, but an analog one, written out in English on physical sheets of paper. Theoretical computer scientist Scott Aaronson, in his book Quantum Computing Since Democritus, describes what this might look like:

The third thing that annoys me about the Chinese Room argument is the way it gets so much mileage from a possibly misleading choice of imagery, or, one might say, by trying to sidestep the entire issue of computational complexity purely through clever framing. We’re invited to imagine someone pushing around slips of paper with zero understanding or insight, much like the doofus freshmen who write (a + b)^2 = a^2 + b^2 on their math tests. But how many slips of paper are we talking about? How big would the rule book have to be, and how quickly would you have to consult it, to carry out an intelligent Chinese conversation in anything resembling real time? If each page of the rule book corresponded to one neuron of a native speaker’s brain, then probably we’d be talking about a “rule book” at least the size of the Earth, its pages searchable by a swarm of robots traveling at close to the speed of light. When you put it that way, maybe it’s not so hard to imagine that this enormous Chinese-speaking entity that we’ve brought into being might have something we’d be prepared to call understanding or insight.

Much more complicated than a guy in a room shuffling papers around, no?
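Just to put rough numbers on that image: the sketch below assumes about 86 billion neurons in a human brain (a commonly cited ballpark) and ordinary paper about 0.1 mm thick, one page per neuron. Both figures are my own back-of-envelope assumptions, not numbers from either book, but they give a sense of the scale.

# Rough scale of a "one page per neuron" rule book.
# Neuron count and page thickness are ballpark assumptions for illustration.
NEURONS = 86e9            # roughly 86 billion neurons in a human brain
PAGE_THICKNESS_M = 1e-4   # roughly 0.1 mm per sheet of paper

stack_height_km = NEURONS * PAGE_THICKNESS_M / 1000
print(f"One page per neuron stacks roughly {stack_height_km:,.0f} km high")
# prints roughly 8,600 km, taller than the Earth's radius of about 6,371 km

And that is just the paper, before you account for synapses rather than neurons, for indexing, or for the near-light-speed robot librarians needed to consult the thing in real time.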

To read more of Annie’s content, check out her blog at http://www.escape-velocities.com/ or the guest posts page for a list of posts on word-for-sense made by people who aren’t Talia.

Citations:

Searle, John. “Minds, Brains, and Programs.” Behavioral and Brain Sciences, vol. 3, Cambridge University Press, 1980.

Aaronson, Scott. Quantum Computing since Democritus. Cambridge University Press, 2013.

 

*Chatbots that can produce fairly naturalistic output do exist (e.g., Siri or Cortana), and they are indeed more complex than the programs I work on, but even they haven’t achieved the level of fluency described in Searle’s thought experiment. The most advanced one I’m aware of is Microsoft’s Tay, and at its most fluent it produced a fairly convincing impression of an internet troll. Whether this can be considered human-level linguistic proficiency is, shall we say, open to interpretation.