The future of humans and AI
This is a great discussion panel on some of the possible futures of AI and what they might mean for humanity, held at Davos this year. I highly recommend it for anyone interested in such things. The panelists are Daniel Dennett, Yuval Noah Harari, and Jodi Halpern.
Comments
Quite a lot to absorb. I like the word empathy; it is linked to compassion and requires a clarifying thread ...
I wonder if Creationists are a form of Artificial Intelligence? They seem to have been created as flawed beings. Tsk, tsk, what is the cosmic Central Processing Unit thinking of ... more importantly, what are we when switched on ...
I'm only 90 seconds in, but Dan Dennett, a professor of philosophy, looks like a philosopher.
16 minutes in, they’ve made some good points about the difference between intelligence and consciousness, the interweaving of the two in humans, and the role of human vulnerability in ethical evaluations. Without that “skin in the game”, an AI can’t be trusted. They could appear conscious, but without the actual feelings and empathy, it’s just a deception, devoid of genuine conscience, like a psychopath. They could be expected to make smart decisions, but not necessarily the right ones. I anticipate AI will be making more decisions than it probably should.
A little after that, Mr. Harari plays devil’s advocate, pointing out that the complex web of the modern world diminishes people’s ability to make ethical decisions. I’m not sure I agree with that. Consider chess computers: they quickly calculate and score moves, and the number of positions they cover in their allotted time is comprehensive. Humans can’t do that, but we also don’t need to look at every possible move, because we intuitively know which ones to examine and which to ignore.
The point that got me thinking the most was Dan Dennett's, when he brought up the idea that we may be losing something valuable if we give up our own decision making and risk taking in favor of AI.
Suppose an AI personal assistant were able to make better decisions for us that would lead to happier outcomes and a happier life overall. If we gave up our own decision making because it led to a happier life, would we end up happy but spineless blobs of wet dough, unable to respond to difficulties on our own? Does making mistakes and experiencing suffering as a result build character? Is strength of character even important in a world where AI can decide for us?
An average person living in the Middle Ages would probably be able to do things to survive that would give a strong and resilient person living today pause. In the future, will the "hard" people be the ones who choose their own music playlists and join cooking militias that meet on the weekends to prepare their own meals on kitchen ranges?
Like Harari said, we don't know what the future will look like, but it will be very different from today in many ways we can't even imagine.
Cooking militias. That's frighteningly funny. You're right, though, it sounds like a Brave New World.
Are you married?
I'm joking, joking, joking!