vileru wrote:
In fact, the hypocrites are those who believe that animals can suffer but never hold them responsible for their actions. Consciousness entails suffering, and free choice is inherent to consciousness. No one would claim a robot is conscious unless it is capable of free choice. Therefore, if an animal can suffer, then it is responsible for its actions.
Actually you are confusing consciousness with self-awareness. Consciousness does not require free choice: you can imagine a situation where an animal is aware of the world around it in a way unique to that individual. It entails a decision making process which responds to external stimuli, but it does not require free choice about which path to take - depending on which philosopher you listen to, you can conceive of a conscious robot.
Self-awareness involves the ability to recognise oneself as an entity and, by extension, appreciate that actions performed by you have results which can be controlled by your decisions. A being which has self-awareness still does not necessarily have free choice - merely an appreciation of his existence as distinct from other entities.
It's very difficult to tell if another being has free choice about their actions. The reason we hold other humans responsible for their actions is that we feel as though we have free choice, other people are similar to us, so we assume they also have the free choice we assume ourselves to have.
In fact, we still have almost no scientific understanding of consciousness as a physiological and neurological phenomenon. For this reason, and until we do, it is unwise to base any form of reasoning on assumptions about consciousness. You'll just be stabbing in the dark.
I just had to dispel your specious reasoning, but that doesn't mean I think animal suffering is a problem, or even that animals really do suffer. I think anyone who tries to ascribe feelings, sensations or other qualia to an organism from a different species is just guessing, and to base your belief system on a guess is a ridiculous way to live life.
Here's my fairly simple logical deduction: Animals may or may not suffer in farming. Plants may or may not suffer in farming. In order to feed humans, and regardless of whether I decide humans should be fed, farming of some sort will take place. Farming animals for meat is about 1/20th as efficient in land, water, mineral, fossil-fuel and ecological resource use as farming plants, so I will choose not to eat animals. Note that I disregard the suffering consideration because there is not enough information available (and there is a practical limit on how much information can be available until we make huge scientific and philosophical breakthroughs, which may never happen) to make the decision on this basis.
kendo99, I've vowed to discuss only Japanese on this forum. I pass the torch to you.
Blahah wrote:
...to base your belief system on a guess is a ridiculous way to live life.
I thought it was the only way
Blahah wrote:
Actually you are confusing consciousness with self-awareness. Consciousness does not require free choice: you can imagine a situation where an animal is aware of the world around it in a way unique to that individual. It entails a decision making process which responds to external stimuli, but it does not require free choice about which path to take - depending on which philosopher you listen to, you can conceive of a conscious robot.
Self-awareness involves the ability to recognise oneself as an entity and, by extension, appreciate that actions performed by you have results which can be controlled by your decisions. A being which has self-awareness still does not necessarily have free choice - merely an appreciation of his existence as distinct from other entities.
I can't see the distinction you are trying to make here between self-awareness and consciousness. If an entity is aware of its world then, by extension, that entity, as part of that world, would necessarily have some degree of awareness of itself. I find it fairly difficult to imagine any form of consciousness that was purely passive. William James argued, rather brilliantly, that experience (the essential content of consciousness/awareness) was an event where the subject/object distinction ultimately broke down.
Further, the free will/determinism argument that you've brought into play here has nothing to do with either consciousness or self-awareness. You have conflated several arguments and obscured the issue.
I think that consciousness assumes some degree of freedom, in the sense that an entity acts under processes at work within that entity's self, not entirely determined by external forces. An entity is different from a pinball, despite existing in a pinball-mechanical world and being subject to the same physical laws as the pinball. I actually think the term self-direction is more appropriate anyway. Free will carries so much connotational baggage from pre-scientific times that ambiguity and other errors of language make it nearly impossible to come to a scientific understanding.
In fact, we still have almost no scientific understanding of consciousness as a physiological and neurological phenomenon. For this reason, and until we do, it is unwise to base any form of reasoning on assumptions about consciousness. You'll just be stabbing in the dark.
While we are far from a completed neuroscience, we know a lot more than you seem to think. Although we don't have a completed physics, we can still make accurate predictions about events on macro, mundane and quantum scales with the limited physics that we have. Similarly, we have a lot of data about the brain, some damn good theories, and the ability to make accurate predictions about all manner of psychological and neurological events. I agree that reasoning based on those assumptions about consciousness usually termed "folk psychology" is suspect; however, we have enough data to do plenty of reasoning based on scientifically verifiable evidence about consciousness, which is not so suspect.
I think anyone who tries to ascribe feelings, sensations or other qualia to an organism from a different species is just guessing
Daniel Dennett wrote a great book on this subject called Kinds of Minds. He argues from an evolutionary perspective about what sort of evolutionary needs/requirements would produce the various aspects of consciousness we call Mind. So, for instance, fairly simple organisms have what he terms a "Skinnerian" mind, acting and reacting based purely on behavioristic learning. He concludes that while our rich "mind" probably only came about as we developed language and the ability/need to predict/foresee the future, higher animals would certainly have a rudimentary consciousness and would be aware of pain and suffering. This would be necessary for them to engage in many of the behaviors they engage in.
You can also argue from neuro-anatomical similarity. Those animals with brains most similar to ours probably have minds that are fairly similar to ours in important ways. Essentially your argument is solipsistic. To take it to the absurd, all we have to do is follow your reasoning. A) We don't know anything about consciousness. B) We can't make any assumptions about consciousness without a completed neuroscience. C) Therefore, we can't assume that other humans are conscious, despite our anatomical and behavioral similarities, and we must accept a position of agnostic solipsism. So, if all the economic/environmental reasons are equal, is it cool if I start eating people? Want to come by for dinner?
I'm not suggesting that animals have as rich a mental life as we do. Clearly they don't, as we use language and other rich symbol sets to represent all sorts of things mentally, and those symbol-tools are something no other species has. But to deny that animals experience pain/suffering is fairly laughable, taking into consideration the high levels of anatomical similarity between us and higher animals like mammals, as well as the behavioral similarities when they are in pain.
But...if you're really set on this solipsistic agnosticism, I again extend dinner invitations...Please arrive early as I'll need time to prepare you, err, the dinner...
nest0r wrote:
kendo99, I've vowed to discuss only Japanese on this forum. I pass the torch to you.
Why not answer in Japanese? Though if you do, it might be better to start a separate thread.
@Blahah
First, let me reemphasize that the post of mine you quoted was a half-joking reply to someone who said that it's impossible to eat or benefit from animals without being hypocritical. Even though I was kidding around, fully elaborating the arguments I was playing with would take at least a journal article. However, it's worth neither my time nor my effort to discuss the topic further. Still, since you've taken an interest, I'll mention a few points.
I realize that my post ignores the distinction between consciousness and self-awareness. However, the argument can be made that consciousness is self-consciousness.
There is no way that we can determine that humans are conscious either (See: the problem of other minds). What matters is that we act as if they are. If we seriously consider the argument that consciousness is self-consciousness, then that means, by-and-large, animals aren't conscious since we don't treat them as if they are. Even a dog chases its own tail.
@kendo99
Arguing that animals are conscious due to neuro-anatomical similarity falls prey to speculation. The only solid conclusion we can draw from neuro-anatomical similarity is that there are physical similarities between the brains of humans and animals. It will be easy to argue against arguments from neuro-anatomical similarity until we can pinpoint what exact neural conditions give rise to consciousness. Even then, how do we verify that consciousness has been achieved or not? Of course, this problem of verification arises with humans as well, which is where the point about whether we act as if an entity is conscious or not comes in.
Anyway, I apologize to you both for the inadequate replies in this post. The issue is seriously complicated and I'd rather avoid the impending philosophical headbutting. At any rate, I'm glad it sparked some genuine reflection =^_^=
Last edited by vileru (2010 April 19, 1:42 am)
I would eat robots too if they tasted good.
Also, who says that robots can't suffer? I offer this pic as proof: http://tiny.cc/sufferingrobot
Last edited by Jarvik7 (2010 April 19, 1:47 am)
kendo99 wrote:
I can't see the distinction you are trying to make here between self-awareness and consciousness. If an entity is aware of its world then, by extension, that entity, as part of that world, would necessarily have some degree of awareness of itself. I find it fairly difficult to imagine any form of consciousness that was purely passive.
vileru wrote:
I realize that my post ignores the distinction between consciousness and self-awareness. However, the argument can be made that consciousness is self-consciousness.
We are playing with a concept which defies tidy definition, but in the last 30 or so years consciousness has been carefully distinguished from both self-awareness and self-determination. I'll try to explain precisely...
Consciousness requires awareness of the thoughts occurring at the present moment. This definition would allow an animal whose decisions were simply the unavoidable result of immediate stimuli to be conscious. Self-awareness is a subset of this conscious experience, and has the further requirement of awareness of oneself as a distinct entity outside of the thoughts occurring at the present moment. For example, I am self-aware because I know that I did some things earlier and will do some things later, and importantly I have the ability to direct my train of thought, whilst another conscious but not self-aware being might lack the ability to deviate from a chain of thoughts initiated by a stimulus.
kendo99 wrote:
William James argued, rather brilliantly, that experience (the essential content of consciousness/awareness) was an event where subject/object distinction ultimately broke down.
What William James is talking about is immediate experience, and if we take his position that experience is the point at which the subject/object distinction ultimately breaks down, then at any given instant an entity may be conscious but not self-aware. Over a longer time scale, however, the opportunity for self-awareness arises.
kendo99 wrote:
Further, the free will/determinism argument that you've brought into play here has nothing to do with either consciousness or self-awareness. You have conflated several arguments and obscured the issue.
I think that consciousness assumes some degree of freedom, in the sense that an entity, under processes at work within that entity's self, and not entirely determined by external forces, acts. An entity is different from a pinball, despite existing in a pinball-mechanical world, and being subject to the same physical laws as the pinball. I actually think the term self-direction is more appropriate anyways. Free-will carries so much connotational baggage from pre-scientific times that ambiguity and other errors of language make it nearly impossible to come to a scientific understanding.
I intermingled discussion of determinism with consciousness because the author to whom I responded did so, and I was asserting the distinction. It's not really worth getting into here, as I agree with you that the concepts are distinct, although if we make a separate thread about it I'm happy to take a stroll down that path.
kendo99 wrote:
While we are far from a completed neuroscience, we know a lot more than you seem to think. Although we don't have a completed physics, we can still make accurate predictions about events on macro, mundane and quantum scales with the limited physics that we have. Similarly, we have a lot of data about the brain, some damn good theories, and the ability to make accurate predictions about all manner of psychological and neurological events. I agree that reasoning based on those assumptions about consciousness usually termed "folk psychology" is suspect; however, we have enough data to do plenty of reasoning based on scientifically verifiable evidence about consciousness, which is not so suspect.
This is a good analogy, but you have interpreted it backwards. Actually, until we had quantum theory and relativity, we could make some accurate physical predictions while other results were nonsensical. This is analogous to our current understanding of the mind. We have a well-developed biology and can make predictable changes to the body, and we know a bit about psychology and can interpret and to some extent predict behaviours, but we lack the crucial link: what neurology causes consciousness? We do not know. Your last assertion cannot be true - we do not and cannot reason about consciousness as soundly as we can about other aspects of psychology and neurobiology.
kendo99 wrote:
He concludes that while our rich "mind" probably only came about as we developed language and the ability/need to predict/foresee the future, higher animals would certainly have a rudimentary consciousness and would be aware of pain and suffering. This would be necessary for them to engage in many of the behaviors they engage in.
Dennett's books on consciousness are fascinating, but are completely speculative. They aren't deductive, but he imagines some interesting possibilities.
kendo99 wrote:
You can also argue from neuro-anatomical similarity. Those animals with brains most similar to ours probably have minds that are fairly similar to ours in important ways. Essentially your argument is solipsistic. To take it to the absurd, all we have to do is follow your reasoning. A) We don't know anything about consciousness. B) We can't make any assumptions about consciousness without a completed neuroscience. C) Therefore, we can't assume that other humans are conscious, despite our anatomical and behavioral similarities, and we must accept a position of agnostic solipsism.
I wasn't very specific about what understanding of consciousness we lack (though actually we have very little anyway). Specifically, we have no idea what physiological conditions (and we assume there are some) give rise to consciousness. Neuroscience, physics and everything in between (or outside) have all fallen short of this - if you can point me to a paper I'd be immensely glad to read it. Without this understanding, we can have no certainty when we compare consciousnesses, and assuming shared traits of consciousness based on neurobiological similarities is just guessing. Daniel Dennett, John Searle and the others are only speculating from ultimately baseless assumptions. Dennett lectured in Bristol last year and accepted this point - it's a limitation of all philosophy of mind. A breakthrough here would be one of the most meaningful discoveries in human history.
My argument would be solipsistic if there were no other considerations available, but there are (and always will be). Because there are other criteria on which we can base decisions about our interactions with animals, and because more accurate information is available for making those decisions, we can set aside the question of animal suffering - which is ultimately unanswerable - and use the alternative information. If there were no other way to make the decision, that would be the appropriate time to start making neurobiological comparisons.
kendo99 wrote:
So, if all the economic/environmental reasons are equal, is it cool if I start eating people? Want to come by for dinner?
Yes it is, human meat is just meat once a person is dead. I would be perfectly happy to eat human meat if there were no social or legal consequences. I've got no particular desire to do so, but I certainly have no problem with it either.
kendo99 wrote:
I'm not suggesting that animals have as rich a mental life as we do. Clearly they don't, as we use language and other rich symbol sets to represent all sorts of things mentally, and those symbol-tools are something no other species has. But to deny that animals experience pain/suffering is fairly laughable, taking into consideration the high levels of anatomical similarity between us and higher animals like mammals, as well as the behavioral similarities when they are in pain.
I agree that, to our knowledge, no other species matches our linguistic complexity, and by extension they probably differ in their conscious (or not) experience. I'm not arguing that animals necessarily don't experience pain (though I think you're wrong about how certain we can be - the most extensive defence of that certainty I've found was Peter Singer's, and it was deeply flawed); rather, I think it doesn't matter whether they experience pain if they aren't self-aware. Without self-awareness, pain is just a physical sensation like any other - true pleasure and displeasure are rich and complex interactions of sensations, and I don't think we can ascribe them to other species without a unified understanding of consciousness.
When shall I arrive for the meal?
If anyone reading thought "Hmm, what's been happening in consciousness research the past 10 years?", here's a place to start: http://www.mediafire.com/?mwm2mmu2x3n (For table of contents see here -- But don't let that Google Books url fool you, the .pdf is a compilation of papers via Google Scholar, freely available to the public.)
See also: http://books.google.com/books?id=2XWXfmDYTRIC
Much has happened in the past 10 years, but I think these two represent where most of the current major research has developed from...
Last edited by nest0r (2010 April 19, 7:09 am)
IceCream wrote:
Show me why "self-awareness" or language function is even necessary for any distinction in how pain is perceived (apart from mental pain). Someone sticks a knife in you, you scream and try to get away from it. There is no "oh, I appear to have been stuck with a knife; in this kind of situation I ought to attempt to move away" etc etc. borrrring.
Perhaps you and I experience pain differently, but I can dull or amplify pain by the application of thought. I'm fairly certain this is a general feature of pain and not something specific to me. In particular, if I focus on some physical pain I'm feeling, it's much more painful because I'm staying aware of the sensation. But because I am self-aware I can choose to focus on something else, effectively lessening the pain. Self-awareness allows me to control the experience of pain, so a non-self-aware being would experience the pain differently. I can't even say how the other being would experience the pain, since it's impossible for me to stop being self-aware.
In Korea, apparently there is a belief that the meat tastes better if the animal has suffered. The preferable way to slaughter dogs is to beat them to death.
Of course, who knows how much of that is true and how much is animal rights advocate exaggeration.
re: robots..
Life forms are nothing but robots made of meat. We have chemical reactions instead of transistors. Discuss.
Last edited by Jarvik7 (2010 April 19, 7:58 am)
How is it difficult to imagine consciousness without awareness?
Before becoming aware of myself I have many vivid memories where it seems I'm almost autonomic in the things I do.
I agree with Jarvik though. The flesh feeds our ego; tells us that metal can't have a "soul."
Save a cow, eat baby.
Life forms are nothing but robots made of meat. We have chemical reactions instead of transistors. Discuss.
Sure, we're very complex machines, since we're operating in a physical/mechanical universe. I find it rather easy to imagine a robot/computer sufficiently sophisticated to demonstrate behavior indistinguishable from that of a human being (i.e. pass the Turing test), and probably to have consciousness and self-awareness (although I reiterate that, to a greater or lesser extent, awareness of self is implied in awareness, as the self exists in the world of which one is aware). Of course, we'd again be in a position where someone like Blahah could easily deny we have any evidence that the machine was conscious, and any arguments someone less skeptical such as myself would make would be unconvincing without a completed neuroscience demonstrating the exact mechanisms by which consciousness exists, and then another explanation of how the computer also has consciousness and the mechanisms by which it operates.
We will probably have conscious machines long before we can convince everyone that they are actually conscious.
IceCream wrote:
^ I doubt that very much. It seems that consciousness is likely to be some higher-level feature/function of chemical reactions.
Consciousness is input/retrieval, as far as can be observed. The only consciousness you can know for a fact exists is your own.
IceCream wrote:
^ I doubt that very much. It seems that consciousness is likely to be some higher-level feature/function of chemical reactions. If that's true, electricity can never replicate it, no matter how complex the program.
Consciousness may supervene on certain chemical reactions (although a lot of what's going on IS also electrical), but that doesn't exclude it from also supervening on electrical or even functional processes...
Jarvik7 wrote:
I would eat robots too if they tasted good.
Also, who says that robots can't suffer? I offer this pic as proof: http://tiny.cc/sufferingrobot
Just wanted to say that pic is epic. I agree that robots can suffer too!
A robot isn't a pile of electricity.
You might not be able to create water with electricity by itself (you can't with chemicals either unless you are processing something into water), but you can process something into water using machinery powered by electricity.
Machinery + electricity = robot
Future computers will almost certainly be organic/molecular, which significantly blurs the line between life and machine.
Also, a sufficiently advanced nanomachine could emulate water molecules in function without actually being water. If you implemented every molecule in a lifeform in this manner and assembled them in the same pattern, would this artificial thing be alive, or a machine? This is delving into the realm of thought experiment, although this level of technology may become available at some point in the distant future.
Last edited by Jarvik7 (2010 April 19, 12:54 pm)
Supervenience means two or more processes can produce the same phenomenon. It's used all the time in quantum mechanics. Just because consciousness first appeared in biological creatures with chemical components in their nervous systems (though I again reiterate that electricity plays an enormous role in nervous system function - the chemicals actually appear to tell the electricity when and how to fire), it doesn't mean that the chemical components are what "produce" consciousness. It seems more likely that it's a function of patterns and processes rather than of the biological hardware itself; of course the shape and substance of the anatomy is important, but probably only to the extent that it allows those processes to take place. A machine that used electricity to produce those same processes would in all likelihood be just as conscious... Look at the work of Paul Churchland, The Engine of Reason, to better understand why.
No, electricity by itself can't make water, but electricity could be used to power a machine which emulated the processes nature uses to make water, and electricity could power a machine which produced the same patterns and processes as the brain to make consciousness - or, even better, could power a machine which used different patterns and processes but was sufficiently intelligent and complex to also produce consciousness. Really, we are in the realm of science fiction at this point, but I see absolutely no scientific or philosophical reason to rule out conscious robots, even ones without organic/biological components.
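The functionalist point above - that what matters is the pattern/process, not the substrate running it - can be sketched in a few lines of toy code. The class names and threshold values here are purely hypothetical, invented for illustration: two "substrates" that implement the same trigger function are indistinguishable at the level of behavior.

```python
# Toy illustration of substrate-independence: two different "substrates"
# implementing the same accumulate-and-fire function behave identically.
# Names and thresholds are illustrative, not a model of real neurons.

class ChemicalSynapse:
    """Fires once accumulated 'neurotransmitter' crosses a threshold."""
    def __init__(self, threshold=1.0):
        self.level = 0.0
        self.threshold = threshold

    def receive(self, amount):
        self.level += amount
        return self.level >= self.threshold  # True == "fire"

class ElectricalSynapse:
    """Same trigger function, but accumulating 'charge' instead."""
    def __init__(self, threshold=1.0):
        self.charge = 0.0
        self.threshold = threshold

    def receive(self, amount):
        self.charge += amount
        return self.charge >= self.threshold

inputs = [0.3, 0.4, 0.5]
a, b = ChemicalSynapse(), ElectricalSynapse()
# Identical inputs yield an identical firing pattern from both substrates.
assert [a.receive(x) for x in inputs] == [b.receive(x) for x in inputs]
```

The sketch deliberately says nothing about whether either object is conscious - it only shows that the input/output function can be realized by more than one mechanism, which is the premise the paragraph rests on.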
sorry for being a dick, but...
No, I was using supervenience incorrectly/unclearly. What I should have said is that there is no reason consciousness couldn't supervene on a property that both electro-mechanical and electro-chemical-mechanical systems could share, i.e. function/process. As far as neurotransmitters go, yes, they pass from the axon terminal across the synapse, where they trigger an electrical change in the calcium and/or potassium channels, which in turn triggers the release of more neurotransmitters from the next axon. It's as much electrical as it is chemical. There is no reason some other system couldn't fulfill the role of triggering that electrical response.
EDIT: No reason we are currently aware of...
Last edited by kendo99 (2010 April 19, 3:19 pm)
http://www.labspaces.net/103158/Cat_bra … equivalent ;p
Dehaene talks a bit about serial and digital processing, re: global neuronal workspace, here: http://www.edge.org/3rd_culture/dehaene … index.html
Gerhard Roth's piece in the previously linked NCC book (#5) seemed pretty germane and interesting, albeit dated. A useful launchpad.
Last edited by nest0r (2010 April 19, 3:42 pm)
So far, Lu has connected two electronic circuits with one memristor. He has demonstrated that this system is capable of a memory and learning process called "spike timing dependent plasticity." This type of plasticity refers to the ability of connections between neurons to become stronger based on when they are stimulated in relation to each other. Spike timing dependent plasticity is thought to be the basis for memory and learning in mammalian brains.
"We show that we can use voltage timing to gradually increase or decrease the electrical conductance in this memristor-based system. In our brains, similar changes in synapse conductance essentially give rise to long term memory," Lu said.
[emphasis mine]
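The "spike timing dependent plasticity" described in the quoted article can be sketched as a toy update rule. The constants and function names below are illustrative inventions, not taken from Lu's memristor setup: a connection weight strengthens when the presynaptic spike precedes the postsynaptic one, weakens in the reverse order, and the effect decays exponentially with the gap between the two spike times.

```python
import math

# Toy pair-based STDP rule. All constants are illustrative.
A_PLUS, A_MINUS = 0.05, 0.055  # potentiation / depression amplitudes
TAU = 20.0                     # plasticity time window (ms)

def stdp_delta(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre fired before post -> strengthen (potentiation)
        return A_PLUS * math.exp(-dt / TAU)
    elif dt < 0:  # post fired before pre -> weaken (depression)
        return -A_MINUS * math.exp(dt / TAU)
    return 0.0

w = 0.5
w += stdp_delta(t_pre=10.0, t_post=15.0)  # pre leads post: weight increases
w += stdp_delta(t_pre=30.0, t_post=25.0)  # post leads pre: weight decreases
```

In the memristor version, as the article describes it, the relative timing of voltage pulses plays the role of the spike-time difference, and conductance plays the role of the weight w.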
nest0r's links ftw. Thanks, that's exactly the sort of thing I'm talking about...Oh, and thanks a lot for sucking me into this discussion so I could waste time here instead of doing something productive, lol.
Last edited by kendo99 (2010 April 19, 3:37 pm)
Yo, I think I busted ~3000 comments, between myself and ruiner, so I deserve a vacation. Till then, reprazent our neuro-peepz, bro. (This is how I talk whilst on vacation.)
Last edited by nest0r (2010 April 19, 3:43 pm)
@IceIceBaby
I didn't make a trinity of early-noughties recommendations since Damasio (cogsci of emotion, like Panksepp) was in the NCC book, but... you might find this interesting: http://www.medicalto.com/neurochemistry … s-in-mind/
Last edited by nest0r (2010 April 19, 3:59 pm)
That's why I linked to it, since the deeper link was dead. ;p
http://books.google.com/books?id=3o2fCD … mp;f=false
Only thing I found odd in the review was the mention of this quaint notion of 'qualia', though I see you mentioned as well, so I shan't venture further, lest I ruin my shallow comment style. ^_^ <vacation>
kazelee wrote:
How is it difficult to imagine consciousness without awareness?
Before becoming aware of myself I have many vivid memories where it seems I'm almost autonomic in the things I do.
Yet you have a memory of those autonomic experiences, which means that you were aware of them to some degree. Philosophers who argue that self-consciousness is consciousness would challenge those who claim there can be consciousness without awareness to imagine seeing without knowing that you're seeing, or hearing without knowing that you're hearing. A good example that illustrates this is sleepwalking. A sleepwalker, although physically functioning, is not conscious. The sleepwalker's eyes, motor control, and sense of balance work, which is evident in that she walks upright and avoids obstacles in her path. In other words, her senses are working to some degree, but she is not self-aware, and is therefore unconscious. The argument against animal consciousness is based on this idea, and concludes that animals are like sleepwalkers. This doesn't entirely clear up the debate over self-awareness, but that's a vague concept I won't delve into.
Regardless, the real strength of the argument comes from the fact that animals aren't considered ethical agents, that is, entities we hold responsible for their actions. This leads to issues about what ethics is and its purpose, which I will avoid.
At any rate, I'm going to drop out of the discussion. However, I kindly suggest moving the conversation away from neuro-anatomy, since it's very easy to spiral out into wild speculation.

