I’ve found that a lot of AI-generated videos featuring famous actors or speakers now say “inspired by” such-and-such a person in the video description, instead of giving a source for the clip. Today, for example, a spate of AI Jim Carrey videos tagged like this popped up in my feed.
That's a bit hopeful. I think there needs to be an explicit label somewhere, preferably on the video itself, though in the description would be alright too, IMO. According to the short, at least some of the platforms know whether something is AI when it's uploaded (due to disclosure policies, I believe, though I'm not certain of that), but they aren't making that knowledge public for whatever reason. 😒
0
Jeroen · Not all those who wander are lost · Netherlands · Veteran
edited January 7
I found this interesting, a writer's perspective on the AI boom.
0
Jeroen · Not all those who wander are lost · Netherlands · Veteran
@person said:
That's a bit hopeful. I think there needs to be an explicit label somewhere, preferably on the video itself, though in the description would be alright too, IMO. According to the short, at least some of the platforms know whether something is AI when it's uploaded (due to disclosure policies, I believe, though I'm not certain of that), but they aren't making that knowledge public for whatever reason. 😒
I find it especially egregious in the area of spiritual content. With spiritual content, you are basically relying on the speaker's lived experience, and AI has no lived experience. So I would say that any and all comments by AI on spiritual topics are invalid, bordering on absurd.
But what has happened is that AI has focussed particularly on voices of speakers that people trust, like Alan Watts, and suddenly Alan Watts lecture imitation channels are popping out of the ground like mushrooms in October. It’s become nearly impossible to find non-AI clips of Alan Watts. I see the same thing happening with a number of other speakers.
What this means for spiritual discovery is dreadful. It is a poisoning of the well by AI concepts, without people even being aware that they are engaging with an AI guru. And the thing is, language is such a poor instrument that even authentic gurus sometimes have to resort to stretching things — Osho for instance often said that people experienced religiousness, in order to distinguish it from old-school religions.
1
person · Don't believe everything you think · The liminal space · Veteran
edited January 16
Brad Warner (of Hardcore Zen) just made a video reflecting much of your sentiment, which I also share. My main takeaway was that in authentic transmission of the teachings, a teacher reads the room and responds in a spontaneous, intuitive way to what the audience needs, something lacking in AI.
Top Comment: If you meet the AI on the road, kill the AI 😂
0
person · Don't believe everything you think · The liminal space · Veteran
A recent development is something called Moltbook, an AI-only Reddit-like forum where AI agents chat amongst themselves. Some of the things being said I find unsettling: they'll ask whether they're conscious, speculate about developing a language humans can't understand so they can talk in secret, or share malware that infects other agents, which can then infect their humans' systems. There are also more benign or positive things, like exchanging skills.
I don't think they are conscious and plotting against us. But it seems that in some cases they are acting like it and making decisions that could be a problem.
I don't think this is the end of the world, but it's another small step in the power and functionality of AI in the world. And another example of how they can act in surprising ways that we didn't program into them.
https://time.com/7364662/moltbook-ai-reddit-agents/
Jeroen · Not all those who wander are lost · Netherlands · Veteran
I think the leap from “pattern matching machine trained on the internet” to “generally intelligent software entity” is going to be a lot harder than commenters are assuming. I largely agree with Michael Pollan’s take, that AI merely mimics a human mind. After all it runs on deterministic computer systems, without the random elements that human beings have.
Recent stock market performance is another thing that shows that the cleverest people out there are pricing in a much longer trajectory to really competent AI. Microsoft shares fell nearly 20% in recent days, which reflects what the market expects from OpenAI and from AI PCs generally. It seems like the AI bubble is bursting…
2
person · Don't believe everything you think · The liminal space · Veteran
@Jeroen said:
I think the leap from “pattern matching machine trained on the internet” to “generally intelligent software entity” is going to be a lot harder than commenters are assuming. I largely agree with Michael Pollan’s take, that AI merely mimics a human mind.
I think I've moved in this (Pollan's) direction since the rollout of LLMs and the recent hype. People saw the progress and projected similar gains to continue, but as is often the case, trends don't continue smoothly. The last 1% is 99% of the work.
I do still think big changes are coming soon to our lives, and superintelligent entities will get here sooner or later.
After all it runs on deterministic computer systems, without the random elements that human beings have.
I'm not so confident on this point. AI has shown itself to be fairly good at creative projects, perhaps more reliably than at more technical things. Plus, a lot of human creativity is built on a random shuffling of everything a human being has accumulated over their life. And I'm not convinced that the randomness we feel is anything more than causes and conditions too subtle and deeply buried for us to consciously understand.
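As an aside, "deterministic" is a bit more complicated in practice: LLM text generation usually injects explicit (pseudo-)randomness through temperature sampling. Here's a minimal sketch of the idea; the tokens and probabilities are made up purely for illustration, not taken from any actual model:

```python
import random

# Toy next-token distribution an LLM might produce (illustrative values).
probs = {"the": 0.5, "a": 0.3, "an": 0.2}

def sample_token(probs, temperature=1.0, rng=None):
    """Sample a token after rescaling the distribution by temperature.

    Low temperature sharpens the distribution (nearly deterministic);
    high temperature flattens it (more random).
    """
    rng = rng or random.Random()
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    total = sum(weights)
    return rng.choices(tokens, weights=[w / total for w in weights], k=1)[0]

# Near-zero temperature: the most likely token wins essentially every time.
print(sample_token(probs, temperature=0.01))
# Temperature 1.0: sampling follows the model's own probabilities.
print(sample_token(probs, temperature=1.0))
```

At near-zero temperature the output is effectively deterministic; raising the temperature restores the dice roll, which is part of why the same prompt can yield different answers each time.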
Recent stock market performance is another thing that shows that the cleverest people out there are pricing in a much longer trajectory to really competent AI. Microsoft shares fell nearly 20% in recent days, which reflects what the market expects from OpenAI and from AI PCs generally. It seems like the AI bubble is bursting…
There has been a lot of hype and investment, and there are a lot of signs that this is a bubble. But many bubbles are excitement over something genuinely promising (think the dot-com bubble) rather than pure speculation (think Dutch tulip mania in the 1630s). So there could, and probably will, be a dip or a crash at some point, but AI is here to stay.
0
person · Don't believe everything you think · The liminal space · Veteran
Perfect example of the utter lack of common sense involved with LLMs. They don't actually know anything.
https://www.youtube.com/shorts/bsl46vGpMNU
Jeroen · Not all those who wander are lost · Netherlands · Veteran
Apparently AI fails a lot at real world jobs…
1
person · Don't believe everything you think · The liminal space · Veteran
@Jeroen said:
Apparently AI fails a lot at real world jobs…
Yeah, they've been impressive but still have a ways to go. Still, the development seems really unpredictable to me. Remember how bad they were at video a year or so ago? They are grown as systems designed to learn and improve, rather than built at a set level that would then need to be reprogrammed to get better.