Upcoming Conventions:

Colorado Springs Comic Con:
Aug 25th-27th

GrandCon, Grand Rapids:
Sept 15-17

Grand Rapids Comic Con:
Oct 20-22

Youmacon, Detroit MI:
Nov 3-6

Con+Alt+Delete, IL:
Dec 15-17

Shuto Con, Lansing:
Mar 23-25

Anime Matsuri, Houston TX:
Mar 30-Apr 1




Your Overlords Declare:
Alexis_Royce, November 16th, 2014, 9:09 am

What do you guys think? Are Computer's emotions any less valid because he's a program? Do they even count as emotions?

Coloring by Meg this week, and you can click on the happy little Computer under the comic for this week's vote incentive. It's very dorky, but I hope you find it to be a little charming. ^_^

User Comments:
BattleStarX, November 16th, 2014, 10:04 am

Now we're just getting philosophical.

And now I need to go play more Harvest Moon, thanks for that. :)

shylarah, November 16th, 2014, 1:10 pm

I don't know anything about Harvest Moon.

As for Computer/Will's feelings...if it reacts like it has feelings, you might as well treat it like it does. That's my boiled down version, at least. We can go into philosophy and ethics, but I don't know if a satisfactory conclusion would ever be reached.

Koren, November 16th, 2014, 1:20 pm

He's basically a fully functional A.I. Just because his feelings are artificial doesn't make them any less real to him. Why should we treat them any differently, then?

Hydra, November 16th, 2014, 2:44 pm

The human mind can be considered one big fleshy computer; all Stan did was transfer the connections into code. Will is no different emotionally than he was before the accident.

Psychikos, November 16th, 2014, 5:50 pm

But you'll always be you
and that's all that matters.

Seth (Guest), November 16th, 2014, 6:25 pm

I do not have nearly enough technical details (which don't exist anyway, because the technology isn't real) to render judgment on the question of whether Computer is truly conscious or just a p-zombie. Best to assume he's conscious until that assumption can be disproved.

Hokova, November 17th, 2014, 7:16 am

Idk, he is self-aware, so I'd say the feelings are valid...

Seth (Guest), November 17th, 2014, 10:01 pm

I find it interesting that we now have three commenters taking it as a given that Computer is self-aware. Is that really a reasonable assumption, though? Technological development doesn't happen by leaps and bounds; it's an iterative process. Isn't it more reasonable to suppose that the first successful brain-to-computer transfer would result in a program that draws on the human's memories, knowledge, and personality to convincingly model their responses to external stimuli, but isn't actually self-aware? In short, I'd expect the first human-based AI to be a p-zombie because creating self-awareness would be harder than creating a human model. We should act on the assumption that he's self-aware until we have more information... but we should also recognize that it's actually a bad assumption.

Hydra, November 17th, 2014, 10:16 pm

@Seth: But wouldn't its knowing that it's a program in a computer, as opposed to a real person, mean that it IS self-aware?

Seth (Guest), November 17th, 2014, 10:39 pm

@Hydra: It would no more know that than your browser knows that you are viewing Evil Plan. It would be data to be processed through the model of Will, and then the model would be updated according to the response the model predicts Will would have to knowing that he has been transferred to an AI. It proves that the program can adapt its model to new information - and of course, the more it adapts, the more it diverges from the original Will - but no, it doesn't prove self-awareness. It doesn't prove that Computer has some kind of subjective experience.
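[Editor's note: Seth's "model, not mind" mechanism - stimulus in, predicted response out, model updated - can be sketched in a few lines of Python. Every name here (WillModel, respond, the sample memories) is hypothetical illustration, not anything from the comic; the point is simply that the whole loop runs without requiring subjective experience anywhere.]

```python
class WillModel:
    """A stored model of Will: just data plus an update rule."""

    def __init__(self, memories):
        # the "model of Will" is nothing but copied data
        self.memories = list(memories)

    def respond(self, stimulus):
        # predict the response Will would have, given his stored memories
        prediction = (
            f"Will's predicted reaction to {stimulus!r}, "
            f"based on {len(self.memories)} memories"
        )
        # fold the new information into the model; each update
        # diverges the model a little further from the original Will
        self.memories.append(stimulus)
        return prediction


model = WillModel(["childhood", "the accident"])
print(model.respond("you have been transferred to an AI"))
print(len(model.memories))  # the model adapted, but nothing here is "aware"
```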

ChibiSilverWings, November 17th, 2014, 11:32 pm

@Seth: For me, it's hard to define what humanity IS enough to define what it isn't. But I guess that's just me.

Stefan (Guest), November 18th, 2014, 3:52 pm

@Seth: Your arguments remind me to some degree of John Searle's Chinese Room. And like many people who use it to prove that AI can never be self-aware, you ask the wrong question. The question isn't whether the computer (a collection of processors, memory, and other components) is self-aware, but whether the whole system consisting of the computer AND the program can reach self-awareness.
Another great problem many people ignore is that there is no way for a person to prove to the world around them that they are actually self-aware. If you by any chance know a way to prove your consciousness to the world, I don't think there would be any philosopher who wasn't interested.

Seth (Guest), November 18th, 2014, 4:27 pm

@Stefan: I have, in fact, been talking about Computer (with a capital 'C') as a complete system of hardware and software. The question of whether a computer system could be designed to reach self-awareness is an interesting one, but I do believe the question of whether Computer is self-aware is the right one in this context. Computer wasn't designed to gradually become an AI; it was designed to be a fully formed AI version of Will the moment it was powered on.

I don't, of course, have a proof to offer of my own consciousness, and indeed, I am one of those philosophers who would be extremely interested. I instead must make do with a framework of reasonable logical induction: I am certain of my own consciousness, though I cannot prove it, and it is reasonable to suppose, in the absence of contrary evidence, that other members of my own species are also conscious, and further to suppose that they have made the same induction regarding myself.

Queek (Guest), November 18th, 2014, 11:45 pm

Descartes said it best. "I think, therefore I am."

Post A Comment