

Your Overlords Declare:
Alexis_Royce, 16 Nov 2014 09:09 am


What do you guys think? Are Computer's emotions any less valid because he's a program? Do they even count as emotions?

Coloring by Meg this week, and you can click on the happy little Computer under the comic for this week's vote incentive. It's very dorky, but I hope you find it to be a little charming. ^_^




User Comments:
BattleStarX, 16 Nov 2014 10:04 am


Now we're just getting philosophical.

And now I need to go play more Harvest Moon, thanks for that. :)

shylarah, 16 Nov 2014 01:10 pm


I don't know anything about Harvest Moon.

As for Computer/Will's feelings...if it reacts like it has feelings, you might as well treat it like it does. That's my boiled down version, at least. We can go into philosophy and ethics, but I don't know if a satisfactory conclusion would ever be reached.

Koren, 16 Nov 2014 01:20 pm


He's basically a fully functional A.I. Just because his feelings are artificial doesn't make them any less real to him. Why should we treat them any differently, then?

Hydra, 16 Nov 2014 02:44 pm


The human mind can be considered one big fleshy computer. All Stan did was transfer the connections into code, so Will is no different emotionally than he was before the accident.

Psychikos, 16 Nov 2014 05:50 pm

But you'll always be you
and that's all that matters.

Seth (Guest), 16 Nov 2014 06:25 pm


I do not have nearly enough technical details (which don't exist anyway, because the technology isn't real) to render judgment on the question of whether Computer is truly conscious or just a p-zombie. Best to assume he's conscious until that assumption can be disproved.

Hokova, 17 Nov 2014 07:16 am


Idk, he is self-aware, so I'd say the feelings are valid...

Seth (Guest), 17 Nov 2014 10:01 pm


I find it interesting that we now have three commenters taking it as a given that Computer is self-aware. Is that really a reasonable assumption, though? Technological development doesn't happen by leaps and bounds; it's an iterative process. Isn't it more reasonable to suppose that the first successful brain-to-computer transfer would result in a program that draws on the human's memories, knowledge, and personality to convincingly model their responses to external stimuli, but isn't actually self-aware? In short, I'd expect the first human-based AI to be a p-zombie, because creating self-awareness would be harder than creating a convincing model of a human. We should act on the assumption that he's self-aware until we have more information... but we should also recognize that it's actually a bad assumption.

Hydra, 17 Nov 2014 10:16 pm


@Seth But wouldn't it knowing that it's a program in a computer rather than a real person mean that it IS self-aware?

Seth (Guest), 17 Nov 2014 10:39 pm


@Hydra: It would no more know that than your browser knows that you are viewing Evil Plan. The fact would just be data to be processed through the model of Will, and the model would then be updated according to the response it predicts Will would have to learning that he has been transferred to an AI. That proves the program can adapt its model to new information - and of course, the more it adapts, the more it diverges from the original Will - but no, it doesn't prove self-awareness. It doesn't prove that Computer has some kind of subjective experience.
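
A toy sketch of the loop Seth is describing, purely illustrative - every name here (WillModel, predict_response, the memory list) is hypothetical, since the comic specifies nothing about how Computer actually works. The point is only that a system can ingest "you are a program now" as ordinary data and update a stored model of Will, with no subjective experience anywhere in the loop:

# Hypothetical sketch; nothing here comes from the comic itself.
class WillModel:
    def __init__(self, memories):
        # The model's entire "mind" is just recorded state.
        self.memories = list(memories)

    def predict_response(self, stimulus):
        # Predict what Will would say, using only stored state.
        # (A real system would be vastly more sophisticated.)
        return f"reaction to {stimulus!r}, given {len(self.memories)} memories"

    def process(self, stimulus):
        # New information is data run through the model...
        response = self.predict_response(stimulus)
        # ...and the model updates itself with its own predicted
        # response. Each update moves the state a little further
        # away from the original Will.
        self.memories.append((stimulus, response))
        return response

model = WillModel(["everything Will knew before the accident"])
print(model.process("you have been transferred to an AI"))

From the outside, nothing in this loop distinguishes a p-zombie from a conscious mind, which is exactly the problem Seth is pointing at.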

ChibiSilverWings, 17 Nov 2014 11:32 pm


@Seth: For me, it's hard to define what humanity IS enough to define what it isn't. But I guess that's just me.

Stefan (Guest), 18 Nov 2014 03:52 pm


@Seth: Your arguments remind me to some degree of John Searle's Chinese Room. And like many people who use it to prove that AI can never be self-aware, you ask the wrong question. The question isn't whether the computer (a collection of processors, memory, and other components) is self-aware, but whether the whole system consisting of the computer AND the program can reach self-awareness.
Another great problem many people ignore is that there is no way for a person to prove to the world around them that they are actually self-aware. If you by any chance know a way to prove your consciousness to the world, I don't think there would be any philosopher who isn't interested.

Seth (Guest), 18 Nov 2014 04:27 pm


@Stefan: I have, in fact, been talking about Computer (with a capital 'C') as a complete system of hardware and software. The question of whether a computer system could be designed to reach self-awareness is an interesting one, but I do believe the question of whether Computer is self-aware is the right one in this context. Computer wasn't designed to gradually become an AI; it was designed to be a fully formed AI version of Will the moment it was powered on.

I don't, of course, have a proof to offer of my own consciousness, and indeed, I am one of those philosophers who would be extremely interested. I instead must make do with a framework of reasonable logical induction: I am certain of my own consciousness, though I cannot prove it, and it is reasonable to suppose, in the absence of contrary evidence, that other members of my own species are also conscious, and further to suppose that they have made the same induction regarding myself.

Queek (Guest), 18 Nov 2014 11:45 pm


Descartes said it best. "I think, therefore I am."
