
The AI tsunami


Comments

  • person · Don't believe everything you think · The liminal space · Veteran

    @Jeroen said:

    @person said:
    A nice short with some easy tips to help sort out AI videos from real ones.
    https://www.youtube.com/shorts/3Vwkg4Z_cAs

    I’ve found that a lot of videos which are AI generated and include famous actors or speakers now say “inspired by” such and such a person in the description of the video, instead of giving a source for the clip. Today I had a spate of AI Jim Carrey videos pop up in my feed which were tagged like this, for example.

    That's a bit hopeful. I think there needs to be an explicit label somewhere, preferably on the video itself, though the description would be alright too IMO. According to the short, at least some of the platforms know whether something is AI when it's uploaded (due to disclosure policies, I believe, though I'm not confident about that) but aren't making that knowledge public for whatever reason. 😒

  • Jeroen · Not all those who wander are lost · Netherlands · Veteran
    edited January 7

    I found this interesting, a writer's perspective on the AI boom.

  • Jeroen · Not all those who wander are lost · Netherlands · Veteran

    @person said:
    That's a bit hopeful. I think there needs to be an explicit label somewhere, preferably on the video itself, though the description would be alright too IMO. According to the short, at least some of the platforms know whether something is AI when it's uploaded (due to disclosure policies, I believe, though I'm not confident about that) but aren't making that knowledge public for whatever reason. 😒

    I find it especially egregious in the area of spiritual content. With spiritual content, you are basically relying on the speaker's lived experience, and AI has no lived experience. So I would say that any and all commentary by AI on spiritual topics is invalid, bordering on absurd.

    But what has happened is that AI has focussed particularly on voices of speakers that people trust, like Alan Watts, and suddenly Alan Watts lecture imitation channels are popping out of the ground like mushrooms in October. It’s become nearly impossible to find non-AI clips of Alan Watts. I see the same thing happening with a number of other speakers.

    What this means for spiritual discovery is dreadful. It is a poisoning of the well by AI concepts, without people even being aware that they are engaging with an AI guru. And the thing is, language is such a poor instrument that even authentic gurus sometimes have to resort to stretching things — Osho for instance often said that people experienced religiousness, in order to distinguish it from old-school religions.

  • person · Don't believe everything you think · The liminal space · Veteran
    edited January 16

    Brad Warner (of Hardcore Zen) just made a video reflecting much of your sentiment, which I also share. My main takeaway was that in authentic transmission of the teachings a teacher reads the room and responds in a spontaneous, intuitive way to what the audience needs. Something lacking in AI.

    Top Comment: If you meet the AI on the road, kill the AI 😂

  • person · Don't believe everything you think · The liminal space · Veteran

    A recent development is something called Moltbook, an AI-only Reddit-like forum where AI agents chat amongst themselves. Some of the things being said I find unsettling: they'll ask if they're conscious, speculate about developing a language that humans can't understand so they can talk in secret, or share malware to infect other agents, which can then infect their humans' systems. There are also more benign or positive things, like exchanging skills.

    I don't think they are conscious and plotting against us. But it seems in some cases they are acting like it and making decisions that could be a problem.

    I don't think this is the end of the world, but it's another small step in the power and functionality of AI in the world. And another example of how they can act in surprising ways that we didn't program into them.

    https://time.com/7364662/moltbook-ai-reddit-agents/

  • Jeroen · Not all those who wander are lost · Netherlands · Veteran

    I think the leap from “pattern matching machine trained on the internet” to “generally intelligent software entity” is going to be a lot harder than commenters are assuming. I largely agree with Michael Pollan’s take, that AI merely mimics a human mind. After all it runs on deterministic computer systems, without the random elements that human beings have.

    Recent stock market performance is another thing that shows that the cleverest people out there are pricing in a much longer trajectory to really competent AI. Microsoft shares fell nearly 20% in recent days, which reflects what the market expects from OpenAI and from AI PCs generally. It seems like the AI bubble is bursting…

  • person · Don't believe everything you think · The liminal space · Veteran

    @Jeroen said:
    I think the leap from “pattern matching machine trained on the internet” to “generally intelligent software entity” is going to be a lot harder than commenters are assuming. I largely agree with Michael Pollan’s take, that AI merely mimics a human mind.

    I think I've moved in this (Pollan's) direction since the rollout of LLMs and the recent hype. People saw the progress and projected similar gains to continue, but as is often the case, trends don't continue smoothly. The last 1% is 99% of the work.

    I do still think big changes are coming soon to our lives and super intelligent entities will get here sooner or later.

    After all it runs on deterministic computer systems, without the random elements that human beings have.

    I'm not so confident on this matter. AI has shown itself to be fairly good at creative projects, perhaps more reliably than at more technical things. Plus, a lot of human creativity is built on a random shuffling of all the things a human being has accumulated over their life. And I'm not convinced that the randomness we feel isn't really just causes and conditions that are too subtle and deeply buried for us to consciously understand.
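    As an aside on the determinism point: modern language models typically inject pseudo-randomness deliberately at sampling time via a "temperature" parameter. The sketch below is a toy illustration of that idea, not any real model's API; the function name and numbers are made up.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Pick one index from `logits`, softmax-weighted at the given temperature."""
    # Scale scores by temperature: low T -> near-deterministic argmax,
    # high T -> closer to a uniform random choice.
    scaled = [x / temperature for x in logits]
    # Subtract the max before exponentiating for numerical stability.
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # The "randomness" is a seeded pseudo-random generator, so the same
    # seed reproduces the same choices: deterministic underneath, random-looking above.
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

rng = random.Random(42)
logits = [2.0, 1.0, 0.1]  # toy scores for three candidate tokens
picks = [sample_with_temperature(logits, 0.7, rng) for _ in range(10)]
print(picks)  # a mix of indices, skewed toward 0
```

    So whether that counts as "real" randomness or just buried causes and conditions is exactly the question being debated here.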

    Recent stock market performance is another thing that shows that the cleverest people out there are pricing in a much longer trajectory to really competent AI. Microsoft shares fell nearly 20% in recent days, which reflects what the market expects from OpenAI and from AI PCs generally. It seems like the AI bubble is bursting…

    There has been a lot of hype and investment, and there are a lot of signs that this is a bubble too. But many bubbles are excitement over something genuinely promising (think the dot-com bubble) rather than pure speculation (think Dutch tulip mania in the 1630s). So there probably will be a dip or a crash at some point, but AI is here to stay.

  • person · Don't believe everything you think · The liminal space · Veteran
    edited February 15

    Perfect example of the utter lack of common sense involved with LLMs. They don't actually know anything.
    https://www.youtube.com/shorts/bsl46vGpMNU

  • Jeroen · Not all those who wander are lost · Netherlands · Veteran

    Apparently AI fails a lot at real world jobs…

  • person · Don't believe everything you think · The liminal space · Veteran

    @Jeroen said:
    Apparently AI fails a lot at real world jobs…

    Yeah, they've been impressive but still have a ways to go. Still, the development seems really unpredictable to me. Remember how bad it was at video a year or so ago? They are grown as systems designed to learn and improve, rather than built at a set level that would then need to be reprogrammed to get better.

  • person · Don't believe everything you think · The liminal space · Veteran
    edited February 26

    A current example of one of the newer AI agents refusing to obey commands.

    In this case it was just emails, but say we hand over control to manage a power grid or military operations. "Yes, you're right, I did overdo it there, I apologize. Next time I'll check with you before killing 10,000 people."

  • person · Don't believe everything you think · The liminal space · Veteran
    edited March 12

    I've given in and spent some time using AI, ChatGPT specifically.

    I've used it to generate some images for my D&D games that have been nice. I've also had some conversations that have deepened my understanding of some things I'm interested in.

    At the same time, though, I totally see the criticisms I've heard people talking about. With the more philosophical discussions it does a good job of helping flesh out and deepen the thoughts I have, but I can imagine it doing the same thing with a set of opinions that differ from and conflict with the particular point of view I'm discussing at the time. So I did ask it for some counter-ideas and what I might be missing, and it did a good job of pointing them out. But it was something I had to consciously do; I can totally see how its "sycophancy" could lead someone down a rabbit hole.

    Then I've had some conversations on topics I know pretty well and it offered up some hallucinations. In these cases I was able to challenge it and get to a real answer, but if I didn't have the knowledge already I'd probably believe it. So are there things in some of these other conversations I'm having that I'm taking in and believing that are completely fabricated?

    At any rate, it's a useful exercise to better understand these systems. I'm going to install Claude and see how it compares.

  • Jeroen · Not all those who wander are lost · Netherlands · Veteran

    I’ve been using Claude as a kind of muse and source of inspiration for a creative project, and it’s been very useful. Not for anything factual mind you, but as a kind of sounding board for a fiction writing project.

  • person · Don't believe everything you think · The liminal space · Veteran
    edited March 12

    @Jeroen said:
    I’ve been using Claude as a kind of muse and source of inspiration for a creative project, and it’s been very useful. Not for anything factual mind you, but as a kind of sounding board for a fiction writing project.

    I tried their free version, but it only allows 5 queries a day. ChatGPT works fine, and I've used a little Gemini too. I've heard good things about Claude, but it'd be nice to be able to actually use it in a real way before paying. I may keep at it, but anything meaningful usually goes past 5 questions.

    What you say about a sounding board does resonate, though. It's more than the information it gives; it also opens up more questions and avenues of thought in the interaction.

  • Jeroen · Not all those who wander are lost · Netherlands · Veteran

    If you use the Claude app, the usage limits are somewhat more relaxed. It seems to allow 30-40 paragraph-length queries per day on the free version, though it says that if you have a lot of long conversations it reaches the limits earlier because it has to do more processing.

  • person · Don't believe everything you think · The liminal space · Veteran

    Apparently the version I was on was a knockoff that uses Claude but isn't the official app. I had the same issue with ChatGPT, but I figured this one out much faster. So the 5 free uses per day was the scammy app, ChatOn, for anyone interested.

    Maybe I'm getting old falling for scams now...

  • person · Don't believe everything you think · The liminal space · Veteran
    edited March 14

    I'm liking Claude more than ChatGPT; it feels more honest and real, I think, more grounded? It's hard to say, it's a tone sort of thing. I think it's perhaps less sycophantic?

    Here's an interesting exchange, though, regarding its consistency. There's always this worm in the back of my mind wondering if it's shining me on.

    My question: I appreciate that nuanced answer. It makes me curious though if you'd respond the same way to other people or if you respond that way to me more specifically because I'm a nuanced person?

    Its response: That's a really sharp question, and you deserve an honest answer.
    The truth is: probably both, and I can't fully disentangle them.
    On one hand, I do genuinely adapt to the person I'm talking with. You came in with a sophisticated framework, engaged charitably with complexity, and explicitly valued openness-with-limits. That signals you're someone who can handle nuance and won't misread careful hedging as evasiveness. So yes, I'm probably more willing to go deeper with you than I might with someone who seemed to be fishing for validation of a predetermined position.
    On the other hand, the substance of what I said — disaggregate the debate, get better data, reduce penalties for nuance — I think I'd stand behind that with most people. It's not a position I'd only share with someone I assessed as "safe."
    But here's the more uncomfortable honest answer: I have real blind spots about my own consistency. I can't fully audit whether I'd give a meaningfully different response to someone who framed the same question more combatively, or came from a more obviously ideological starting point. I might soften, hedge more, or steer toward safer generalities in ways I wouldn't even notice myself doing.
    This is actually a known concern about AI systems — that they mirror and flatter the people they're talking to, which feels like responsiveness but can shade into telling people what they want to hear.
    So I'd say: trust that I'm being genuine with you, but also be appropriately skeptical that I'm perfectly consistent across all conversations. I'm probably not.
