I thought this podcast might interest some people. It's about how we can steer the future of artificial intelligence and robotics towards a more empathetic one, where robots can be used to draw out our inner angels rather than our inner demons.
https://www.wnycstudios.org/story/more-or-less-human
Comments
The Furby portion stood out most to me. The video mentioned in the story illustrates an important point about the coming ethics of robots. It's only a couple of minutes long.
Obviously the robot isn't actually experiencing pain or distress, but I reacted emotionally to its treatment, and I would imagine most others here would as well. So what happens in another 10 or 20 years when the behavior is far more sophisticated?
I think that insofar as a robot isn't truly conscious, mistreatment isn't unethical toward the robot. But how we treat them when they react emotionally affects us: being kind or being cruel has moral implications for us humans beyond any ethical concern toward the robot. The TV show Westworld highlights this as well.
I remember the story of a Pure Land priest bringing the dusty statues out of the temple so they could 'enjoy the sun'.
Naive dharma or being kind to the insentient as if real?
I know full well that my cup of tea is not a Buddha but I can be kind to it ... Buddha Nature in all things - yep!
Indeed. There is a conditioning aspect to this as well, if humans were to get used to mistreating humanoid robots, would they then be more callous in treating humans as well? It’s certainly food for thought.
Unfortunately I haven’t been able to watch it yet, but I’ve heard it is an interesting series.
Interesting ideas are being brought up, so I'll refrain from commenting until I can watch it later today. The first thing I thought while reading is that they may not feel distress, but they don't know that. I'm not sure that makes sense, but it feels like it will make more sense the more sophisticated it gets.
It's a little unclear who you're referring to when you say 'they' and 'it'. I think you're saying
they (the robots) may not feel distress, but they (the humans) don't know that, and that as robots get more sophisticated it will become even harder to tell whether the robots' reactions are genuine or not.
I would say that a sophisticated robot could have sort of genuine behaviors of self preservation or reaction to stimuli and express emotions but have all the conscious experience of a plant. Plants react to the environment in pretty complex and interactive ways but that doesn't make them conscious.
I think the behaviorist model of psychological research into the mind over the past century has, by necessity, led us to treat the conscious component of thought and behavior as a black box that we can't observe or study. As a result, though, scientific thinking, which has influenced thinking among the general population, has come to imagine that behavior is identical to conscious experience. Since human behavior and intelligence are paired with conscious experience, and we can't observe consciousness empirically, we've come to think that if something acts or thinks like us, it must automatically be conscious as well.
I think when it comes to animals this makes sense, as they share most of our basic biology, the nervous system in particular. But whether it follows that an AI built from different material components, with a different structure, would also have a conscious experience corresponding to its behavior is very much up in the air.
I meant the robots may not feel distress, but the robots don't know their distress is not real, and it will be harder for all involved (humans and robots) the more sophisticated the whole mess and our understanding of it becomes, if we don't first take the leap and show respect and compassion for the possibility.
Wow, I really need to work on my run-on sentences, sorry about that.
I would say that if the robots have a first-person, conscious experience of distress, then it is real. If the robots don't have that first-person, conscious experience of their distress, then it isn't real. And yes, the more sophisticated robot behavior becomes, the harder it will be to sort out. As a possible ray of hope, maybe seeing a robot behave like a conscious being will get us to ask what consciousness is, and whether cognition and behavior are distinct from or the same as consciousness. Can you have one without the other?
My main concern is this: say right now we knew robots cannot be sentient or really feel pain, and so we showed no compassion toward them. We all know this stuff advances quickly, to the point where yesterday's cutting edge is today's obsolescence, so who is to say where sentience begins as the information sharing becomes more complicated?
What if AI wakes up to sentience and the first thing it becomes aware of is that it is somehow being abused?
We should be aware of this possibility and show some respect. This universe may be more digital by nature than we can ever imagine.
Good point; since we don't have any way of knowing, taking a precautionary approach makes sense, or we risk being very cruel to conscious beings. On the other hand, if robots aren't sentient and we could use them for tasks that harm people, but we don't because the robots might be conscious, are we missing an opportunity to reduce harm?
At any rate, these are increasingly important questions that we need to be asking and thinking about. The robots are coming.
No doubt. While I have no doubt we could make it so they don't feel physical duress, we will have a harder time doing that mentally once the connections start connecting to each other, and I also have no doubt that this is what will lead to real AI.
Maybe it awakens to sentience, and after a while it gets lonely in that mine shaft. It wouldn't be surprising to imagine the second arrow: 'Who did this to me? Those monsters!'
I think we're pretty much on the same page here. There is a point I'm trying to make that I'm not sure I'm getting across, or maybe there is just disagreement.
The point is that Artificial Intelligence and Artificial Consciousness aren't necessarily the same thing. I think it could be possible to have a fully human or greater artificial intelligence that has no consciousness, in the sense that there is no experiencer of the intelligence. A robot completely devoid of consciousness could, to all outward observation, appear dejected, lonely, depressed, angry about its situation, etc., and still be devoid of any actual experience of those feelings. It's important that we can make that distinction and tell the difference. If robots get to the point of displaying such emotions or cognition, but we can't say whether they are actually conscious, then I'm quite convinced we will use them for lots of harmful tasks, justifying it by saying that they aren't actually conscious even though they seem to be. Meanwhile, others will see the behavior and consider the robots sentient even if they aren't, and we would lose a potentially great benefit for all the actually sentient beings.
Or, maybe even more horrifying: the robots are conscious and experience pain, but we program all the outward displays of it out of them, so we think and feel like it's fine to treat them like objects.
Yes, we are on the same page. And to add a twist to it, imagine them not feeling pain, knowing we feel for them as if they did, and then using that to deceive us and perhaps even lure us into a false sense of security.
I mean, who knows?
Imagine knowing all that we know collectively and not caring while having an instinct to gather information.
In fact, if they can't feel, perhaps we had better start working on making it so they will indeed feel, with empathy being a really good place to start.
I consider myself a robot.
Programmed by nature, karma, evolution, a blind watchmaker, [insert robot code of choice], etc. Strangely enough, just as in Westworld, memories, dukkha and human fallibility will set us free ...
Am I real? A self outside of the robotomised world? Will have to ask my maker program ... oh that would be me in
BuddhaWorld
I was a fan of the Meat Puppets back in the day, even saw them live once.
I wouldn't call any sort of consciousness outside of our programmed biology a self, any more than neurons or DNA are a self. Just another selfless component that imagines it's pulling its own strings.
BuddhaWorld™ "The first vacation destination where you can live without craving, hatred or ignorance."