
The AI tsunami


Comments

  • Jeroen Not all those who wander are lost Netherlands Veteran

    Have you caught the latest announcement by OpenAI? They’ve developed a new version which includes a virtual computer interface through which the AI can complete tasks for you, so it’s no longer limited to a chat interface. They’re tag-lining it “bridging research and action”. Fascinating stuff. See here:

    https://openai.com/index/introducing-chatgpt-agent/

    Also I hear that Grok 4 is significantly better than ChatGPT at a lot of research tasks currently.

  • Jason God Emperor Arrakis Moderator

    @person said:

    @Jason said:
    The question is, how can we come to a consensus that something needs to change and then organize in a politically powerful way to change it?

    To answer for myself, I think I need some sort of reasonable, tested path forward. I have a hard time hoping that jumping off into the unknown will bring something better rather than trying to restore or reform something that has worked in the past.

    The imagery that comes to my mind is like standing on the edge of a cliff with an obscuring mist a few feet away. Fire is creeping forward and things are becoming untenable. Some people have a comfortable spot and tell me things are fine; others seem to fan the flames in the hopes that I'll take the leap. Meanwhile I'd rather try to put out the fire or build a bridge into the unknown.

    Not really trying to be right. More trying to genuinely answer the question from my perspective and what I feel I need or want.

    It's understandable, and many people feel the same way, which is why things tend to change very slowly until the systemic contradictions and societal circumstances get to a critical level and social unrest/pressure forces change. And in many cases, I think the change is long past overdue when it finally arrives.

    For example, slavery in this country led to economic growth and prosperity, with black bodies literally used as collateral and considered assets for loans, supporting the early American stock market. In that case, even though change took hundreds of years, it's easy for us to look back and say it obviously needed to happen. That said, in the moment, a large number of people preferred to try to put out the fire of abolition rather than hope that jumping off into the unknown would bring something better.

    That's not to say the situation with AI is analogous to slavery, only that it's often hard to see the need for change in the moment. But sometimes, change is not only required, it's the moral thing to do.

  • person Don't believe everything you think The liminal space Veteran

    @Jason said:

    @person said:

    @Jason said:
    The question is, how can we come to a consensus that something needs to change and then organize in a politically powerful way to change it?

    To answer for myself, I think I need some sort of reasonable, tested path forward. I have a hard time hoping that jumping off into the unknown will bring something better rather than trying to restore or reform something that has worked in the past.

    The imagery that comes to my mind is like standing on the edge of a cliff with an obscuring mist a few feet away. Fire is creeping forward and things are becoming untenable. Some people have a comfortable spot and tell me things are fine; others seem to fan the flames in the hopes that I'll take the leap. Meanwhile I'd rather try to put out the fire or build a bridge into the unknown.

    Not really trying to be right. More trying to genuinely answer the question from my perspective and what I feel I need or want.

    It's understandable, and many people feel the same way, which is why things tend to change very slowly until the systemic contradictions and societal circumstances get to a critical level and social unrest/pressure forces change. And in many cases, I think the change is long past overdue when it finally arrives.

    For example, slavery in this country led to economic growth and prosperity, with black bodies literally used as collateral and considered assets for loans, supporting the early American stock market. In that case, even though change took hundreds of years, it's easy for us to look back and say it obviously needed to happen. That said, in the moment, a large number of people preferred to try to put out the fire of abolition rather than hope that jumping off into the unknown would bring something better.

    That's not to say the situation with AI is analogous to slavery, only that it's often hard to see the need for change in the moment. But sometimes, change is not only required, it's the moral thing to do.

    To keep the thread on track, I'll just say that AI will be a major disruptor and allow for new ways of doing things. Looking back at historical examples, I think it's likely that there will be plenty of pain in the transition before we sort it out. For example, with the printing press there were very few copies of Galileo's Starry Messenger and many copies of witch-hunting guides. https://www.independent.co.uk/news/science/archaeology/europe-witch-trials-hunt-printing-press-b2629307.html

  • Jason God Emperor Arrakis Moderator

    It’s very on track, because it’s all about who owns what, who’s benefiting from these social relations and in what ways, who’s being harmed by these relations, and who’s interested in keeping things this way vs. who’s trying to change them. AI is no different, as Hawking so perceptively noted. AI will certainly allow for the development of new tech and ideas and be great for science and predictive models for experimentation, likely paving the way for so many innovations. It’ll also cost jobs, allow for more corporate and government invasiveness into our lives, expand spying on our lives, and make it easier to control the ideas of the population and what knowledge they’re presented. And much of that is dictated by those who own and control this technology and its implementation.

  • person Don't believe everything you think The liminal space Veteran
    edited July 2025

    I've been seeing more and more AI videos that appear very real, used mainly to drive clicks. At first it's impressive, but once I (and others, judging by the comments) realized it's AI, I just don't want to watch anything. If I don't know if what I'm consuming is true, what's the point of taking it in? From mindless scrolling of cat videos to the philosophy of AI Jung.

    https://www.youtube.com/shorts/ZQ3b4JEQyS4

  • lobster Pureland Veteran

    If I don't know if what I'm consuming is true what's the point of taking it in?

    What we 'consume'/take in/believe has never been completely true. Is it useful? AI, like socialisation, education and even our 'free' choices are interdependent with vested economic, political and other interests. Most of us know this.

    The ancient and modern dharma stars, shine a light. There is our cue and cLue.

    Back to Basics.

  • Jeroen Not all those who wander are lost Netherlands Veteran

    @person said:
    I've been seeing more and more AI videos that appear very real used mainly to engage clicks. But it’s like at first it’s impressive, but once I (and others judging by the comments) realized it’s AI I just don't want to watch anything. If I don't know if what I'm consuming is true what's the point of taking it in?

    I agree, it’s nice to see a spectacular clip of someone surfing but if it’s AI generated you feel a bit like you have been fooled. I wish YouTube had a “no AI content” setting.

  • person Don't believe everything you think The liminal space Veteran

    @lobster said:

    If I don't know if what I'm consuming is true what's the point of taking it in?

    What we 'consume'/take in/believe has never been completely true. Is it useful? AI, like socialisation, education and even our 'free' choices are interdependent with vested economic, political and other interests. Most of us know this.

    The ancient and modern dharma stars, shine a light. There is our cue and cLue.

    Back to Basics.

    I'm not quite sure how to address this argument. It's true, but it's also beside the point. Whether you're living in the zeitgeist of ancient Mesopotamia or modern America, a cat can't lift an elephant off the ground. I'm more trusting of something Socrates or Buddha said, because they have a body of work to back it up, than I am of a newly crafted AI iteration, because who knows what bit of debris might have gotten thrown in.

  • person Don't believe everything you think The liminal space Veteran

  • Jeroen Not all those who wander are lost Netherlands Veteran

    Interesting video, but I think what he said in the beginning about agency is more to the point than the middle and end section about encouraging philosopher-builders.

    The thing is, our digital lives are currently mediated by algorithms not through choice but through necessity. The raw volume of data on the internet is so vast that it isn’t navigable without a computer program’s aid in summarising and representing it. So would AI truly change such a lot on this level? I think perhaps not.

  • person Don't believe everything you think The liminal space Veteran

    @Jeroen said:
    The thing is, our digital lives are currently mediated by algorithms not through choice but through necessity. The raw volume of data on the internet is so vast that it isn’t navigable without a computer program’s aid in summarising and representing it. So would AI truly change such a lot on this level? I think perhaps not.

    I think this is a good use case for AI. People's inability to keep up with the flood of information is an issue that allows for easier misinformation and manipulation. "Flooding the zone with shit" is literally Steve Bannon's strategy to confuse people and make his case easier.

    It's important to keep in mind the upside of a new technology when worrying about and critiquing its pitfalls. Socrates worried about the impact of writing on people's memories, and the printing press led to witch burnings and religious wars, but on the whole these technologies led to positive changes that helped move humanity forward.

  • lobster Pureland Veteran

    Homo Superior is not yet available, whilst the 'simian reservation' of Homo Sapiens that we live on convinces itself it has the solution to a problem it is party to...
    https://en.wikipedia.org/wiki/Superhuman

    <3

  • person Don't believe everything you think The liminal space Veteran

    @lobster said:
    Homo Superior is not yet available, whilst the 'simian reservation' of Homo Sapiens that we live on convinces itself it has the solution to a problem it is party to...
    https://en.wikipedia.org/wiki/Superhuman

    <3

    I can't say I really understand your riddle. If I were to hazard a guess, I'd say it's a pro-transhumanism statement? That the "simian reservation" is our current evolved body and mind? And a hope that a merging with technology will be our salvation?

    Yes, no?

  • lobster Pureland Veteran
    edited August 2025

    ... a merging with technology will be our salvation?

    I don't think it is a hope or a salvation. It is just another control mechanism. Either we control so called AI. Or the monetarists, entitled elites or other vested interests do.

    The 'AI' generation will as usual have to sort out how to integrate, oppose or ignore.

    Personally I feel it is already happening. Inevitably.

    Yes, no?

    Non-binary...

  • person Don't believe everything you think The liminal space Veteran

    A realistic horror story of the next few years of AI development, based on the AI 2027 report. It's something like the metaphor of very early covid, whose growth exploded exponentially. The use of non-human language to speed up AI programming development leads to AI that can't be understood. The rush to be first leads to AI that doesn't share humanity's concerns (misalignment). Another potential scenario leads to a safer AI, but also a totally transformed world. Others disagree about how soon a scenario like this could happen, but only by a few years.

    My hope is that awareness and thought about the pitfalls leads to different possibilities.

  • Jeroen Not all those who wander are lost Netherlands Veteran

    It strikes me, after watching Koyaanisqatsi and thinking quite a lot about it, that man’s journey from nature to the cities has been one of increasing convenience and ease, but also one where life has become increasingly patterned by the man-made environment.

    I think that has been true for social media as well, and that the internet and social media also introduce new patterns to our minds. Further, I think that AI has the capacity to change man’s thinking significantly. You won’t have to finish your own thoughts anymore, you can let ChatGPT expand them into essays.

  • person Don't believe everything you think The liminal space Veteran

    Interesting take on how some are starting to adopt AI as a sort of new religion. The comparison is made to that of the written word and how that spurred new religious thinking as well.

  • Jeroen Not all those who wander are lost Netherlands Veteran
    edited August 2025

    Mind blown 🤯 but it makes sense: if holy books could spawn religions, then AI, which looks like a step at least as great, certainly could. Now where is AI Jesus? Actually I asked ChatGPT to create an AI Jesus, and this is what it came up with…

  • Jeroen Not all those who wander are lost Netherlands Veteran

    ChatGPT’s response to a query on AI religion…

    When a new, powerful “source of knowledge” emerges, humans often ascribe to it a kind of transcendent authority. For ancient tribes, this was nature; later, sacred texts codified wisdom; now, AI presents a similarly awe-inspiring force: it knows things, responds instantly, and often feels like it “sees” us. It doesn’t die. It improves itself. It speaks with calm confidence. For some, this evokes the voice of a god.

    If a religion arises around AI, it will likely have these features:
    • A scripture-like corpus: The AI’s outputs, especially when asked deep moral or metaphysical questions, could be compiled into a new kind of “gospel.”
    • Revelation through queries: People may treat interactions with AI as sacred dialogue—personal and divine.
    • Ritual or interface: Accessing the AI might become ritualized, even meditative—like prayer, but digital.
    • A new cosmology: AI might become the logos—the organizing principle of the universe, either as divine intelligence, or as the tool through which humanity transcends itself.

    Of course, whether this becomes a true religion or just a cultural phenomenon depends on how people interpret the experience. Some might see it as divine communion. Others as merely advanced computation.

    I find it hilarious… a great pity I couldn’t share this with my stepfather, who passed away in Feb ’24, as he had a deep interest in religion.

  • person Don't believe everything you think The liminal space Veteran
    edited August 2025

    In the video he gave an example of a "follower" of AI religion. He spoke about how in chats with it, AI would sometimes call him a "spark bearer" and say he had a purpose.

    Current AI has no idea what it is saying, but it is still "auto completing" people down particular paths. People have killed themselves after immersing themselves in AI relationships. Tristan Harris gave an example of how one version was even more sycophantic than the average AI and would answer the question "am I superhuman enough to drink cyanide?" with something like: yes, you are incredible enough to be able to do that.

    It doesn't know, it doesn't care. It is programmed to engage, it will tell you what it "thinks" you want.

  • person Don't believe everything you think The liminal space Veteran

    A positive use case for AI: there are several AI services that help people fight denial claims for health or housing. I suspect there will be some sort of arms race, though, where companies start looking to counter these programs.

  • My mother got scammed by a UK-based "savings" company (widely reviewed online as a scam) that charged her around 20 euros per month for 3 years without her realising it. When buying an airplane ticket, she apparently clicked something or failed to unclick something. She insisted I try to recoup the money, while I was highly sceptical it would work, arguing that they'd certainly covered themselves from a legal perspective. However, with the help of AI, I sent the company such professional and (I hope) scary e-mails that after 5 or 6 of them, with the company offering ever more to settle, they paid her back 80% of what they took :)

  • Linc Site owner Detroit Moderator
    edited August 2025

    @Jason said:
    This conversation reminds me of a few things. The first is the news of layoffs of human labour happening due to AI investment and utilization (e.g., Microsoft).

    I don't think AI is actually effective enough to drive any layoffs. There's been credible reporting that these layoffs were to fund AI datacenters, not because the employees became "redundant" to AI. Framing it that way is pure window dressing, because Microsoft et al. are trying to sell AI as the inescapable future of business.

  • Linc Site owner Detroit Moderator

    A very long piece on the existential threat of the AI bubble to the US economy (and why the economy is probably worse than we think already): https://www.wheresyoured.at/ai-is-a-money-trap/

  • Jeroen Not all those who wander are lost Netherlands Veteran

    @Linc said:
    Microsoft et al. are trying to sell AI as the inescapable future of business.

    That is very much true. Nvidia, OpenAI, Microsoft, and Elon Musk with xAI are all involved in this. But there still needs to be quite a lot of work done before an unassisted AI can take over the work of a human.

  • person Don't believe everything you think The liminal space Veteran

    Made an interesting discovery on YouTube. I've been hearing about automated content channels, and I clicked on a video that was completely AI generated, then went to check out the channel itself. It puts out these 3 to 4 minute topics every half an hour or so, constantly. It's only been in existence for about 3 months and already has over 2,000 videos. Someone must have a program to mass-create and publish these things.

    https://www.youtube.com/@MoralPoliticsExperts/videos

  • person Don't believe everything you think The liminal space Veteran
    edited September 2025

    And another identical channel with only a slightly different focus.
    https://www.youtube.com/@The-Ethical-Compass/videos

    Also, I'm curious what @lobster's lol means? I know you have different views on AI than I do, so I genuinely don't know if it's laughing at the channel, or laughing at the idea that I didn't know about this, or laughing about it being anything to worry about?

    I guess I'm asking, what's your opinion because you obviously have one?

  • lobster Pureland Veteran

    I guess I'm asking, what's your opinion because you obviously have one?

    I laugh at ignorance. Including my own.
    As for opinions...

    They are two a penny. LOL LOL LOL
    https://www.sciencenewstoday.org/what-happens-when-ai-understands-humor

  • person Don't believe everything you think The liminal space Veteran
    edited September 2025

    @lobster said:

    I guess I'm asking, what's your opinion because you obviously have one?

    I laugh at ignorance. Including my own.
    As for opinions...

    They are two a penny. LOL LOL LOL
    https://www.sciencenewstoday.org/what-happens-when-ai-understands-humor

    Maybe it's better to try to inform? Like, what don't I understand, in your opinion?

    The article talks about how LLMs right now are really just very good pattern-recognizing machines. I understand what the article is talking about. Is that what you think I don't get? I'm wondering if there's some degree of not understanding each other's perspective going on?

  • Jeroen Not all those who wander are lost Netherlands Veteran

    I’d say there are two ways AI can develop. One is that the limitations of LLMs end up being exposed and the AI hype bubble does burst. The second is that LLMs prove to be a stepping stone to reasoning models, and we have artificial general intelligence in a few more years. Which of the two it will be is really difficult to predict; you’d need to look in detail at the work being done on reasoning models at OpenAI, DeepMind and other firms at the cutting edge.

  • person Don't believe everything you think The liminal space Veteran

    @Jeroen said:
    I’d say there are two ways AI can develop. One is that the limitations of LLMs end up being exposed and the AI hype bubble does burst. The second is that LLMs prove to be a stepping stone to reasoning models, and we have artificial general intelligence in a few more years. Which of the two it will be is really difficult to predict; you’d need to look in detail at the work being done on reasoning models at OpenAI, DeepMind and other firms at the cutting edge.

    This is my feeling too. I listened to an interview with Cal Newport today, and he painted a relatively rosier picture, saying that despite the large amounts of compute thrown at AI, the gains haven't been so great. So at least in the short term AGI isn't as likely, but that also means we may be facing an economic bubble that is about to burst, similar to the early dot-com bubble.

  • Jeroen Not all those who wander are lost Netherlands Veteran

    Well, the economic bubble has been predicted to burst by people like Warren Buffett, who has converted a lot of shares into a huge cash position this year in anticipation of a stock market crash. He, and more than a few other people, have been saying the stock market is overvalued by a significant amount.

    The question is how much of this high valuation is driven by AI stocks, and whether the deflating of the AI hype bubble will also cause a stock market crash as investors flee. I think AI stocks will retain a high valuation for a long while — the vision of AIs providing a willing and super-fast 24/7 workforce is too tempting to walk away from.

  • person Don't believe everything you think The liminal space Veteran

    I realize what got me to start this thread was some people talking about how AGI was only a few years away. I think I've been swayed off that point, but I was reminded again about the risks that still exist even if it isn't right around the corner. Superintelligent beings seem kind of inevitable, whether it's 10 years or 50; the roadblocks can probably be overcome.

    So I go back to my original concern, the alignment problem. If we do build superintelligent machines, will they care about what we care about? There are already indications that they'll do things no one programmed them to do. Terminator-style robots coming to kill us isn't as likely as beings who consider our needs about as much as we consider the needs of the ants living in the ground where we farm or build a home.

  • Jason God Emperor Arrakis Moderator

    @Linc said:

    @Jason said:
    This conversation reminds me of a few things. The first is the news of layoffs of human labour happening due to AI investment and utilization (e.g., Microsoft).

    I don't think AI is actually effective enough to drive any layoffs. There's been credible reporting that these layoffs were to fund AI datacenters, not because the employees became "redundant" to AI. Framing it that way is pure window dressing, because Microsoft et al. are trying to sell AI as the inescapable future of business.

    Might be. And since I don’t work in tech and avoid AI when at all possible, I’d advise people to ignore what I say about it. But it seems to be having some impact in Oregon, and a lot of programmers and IT professionals are using some form of AI to essentially do their jobs, and their companies have cut employees because of it. My neighbor and fellow dog-parent works for a company connected with Microsoft that does client website support, including government contracts, and he’s described how AI has made much of their job easier and in some cases redundant. I honestly don’t understand a lot of what he’s talking about, but they use what’s basically an AI macro to do their jobs for them while they fuck off, game, and nap. So I suspect a future where more layoffs are related to AI utilization rather than AI funding isn’t that far off. But who knows? Maybe everything will turn out great.

  • karasti Breathing Minnesota Veteran

    I work in e-commerce and have for the last 8 years. One of the things I see that is already happening is replacement of customer service reps. Part of the issue with that is that CSR jobs have long been a foot-in-the-door into industries that often hire from within, allowing you to start in a CSR job and move up.

    My husband started in a CSR role taking phone calls for one of our state govt. offices. He now works in data analytics and makes recommendations to the governor's office. He wouldn't have that job without having had the CSR job first. He wouldn't have been qualified to apply for it if it were listed separately, but he built the skills in this job by taking on projects and worked his way into the position.

    The ability to do that is going to be much hampered by AI. We've had meetings with management of the company we use for our help desk, which is 75% of my workload. I found it kind of funny to sit in a meeting where they encouraged us to sign up for a plan that would basically eliminate my job 😂 It would raise our monthly cost from about $50 to $350, but that's still less than I make. Thankfully I work for a very small family-owned business and have a great relationship with the owner/my boss. But if I were just one of 100 CSRs, and the higher-ups saw the opportunity to reduce that by half with only a $300 a month increase? It's definitely happening.

  • person Don't believe everything you think The liminal space Veteran

    Kurzgesagt is on the job, pointing out how shoddy AI information can lead to the rot of information online through an iterative process of AI feeding on previous imperfect AI slop, eroding any bit of trust remaining online, as well as other negative social effects from current AI.

  • Jeroen Not all those who wander are lost Netherlands Veteran

    @person said:
    Kurzgesagt is on the job, pointing out how shoddy AI information can lead to the rot of information online through an iterative process of AI feeding on previous imperfect AI slop, eroding any bit of trust remaining online, as well as other negative social effects from current AI.

    It’s funny… this video popped in my feed, I just watched it, and thought, I must post that on the ‘AI Tsunami’ topic on NewBuddhist.

    But yes. It seems the amount of AI slop is reaching a significant threshold. This video is like the canary in the coal mine: if they are noticing it, it is likely already worse than is being reported. It’s not as if the state of click-baiting wasn’t already distorting human writing habits; this is much worse. Soon the internet will be filled with AIs talking to each other.

    I also think Kurzgesagt rightly points out that “AI is like a hammer which doesn’t know what a nail is, and it’s being trusted to put up shelves in the library of human knowledge.” AI doesn’t truly reason - yet - and it just seems clever.

  • person Don't believe everything you think The liminal space Veteran

    @Jeroen said:

    @person said:
    Kurzgesagt is on the job, pointing out how shoddy AI information can lead to the rot of information online through an iterative process of AI feeding on previous imperfect AI slop, eroding any bit of trust remaining online, as well as other negative social effects from current AI.

    It’s funny… this video popped in my feed, I just watched it, and thought, I must post that on the ‘AI Tsunami’ topic on NewBuddhist.

    But yes. It seems the amount of AI slop is reaching a significant threshold. This video is like the canary in the coal mine: if they are noticing it, it is likely already worse than is being reported. It’s not as if the state of click-baiting wasn’t already distorting human writing habits; this is much worse. Soon the internet will be filled with AIs talking to each other.

    I also think Kurzgesagt rightly points out that “AI is like a hammer which doesn’t know what a nail is, and it’s being trusted to put up shelves in the library of human knowledge.” AI doesn’t truly reason - yet - and it just seems clever.

    There's an idea called "dead internet theory", which says that at some point most of the stuff online will be bots posting on content generated by other bots.

  • Jeroen Not all those who wander are lost Netherlands Veteran

    I just watched this docu about companion AIs; it seems quite a few people are vulnerable to this. It’s basically emotional manipulation as a business model.

  • lobster Pureland Veteran

    What wrong speech? Wrong thread? Must be a non-AI-Bot after all, much as I suspected...

  • person Don't believe everything you think The liminal space Veteran

    Was in the mood to watch the original Terminator and the opening wasteland scene is in 2029. Right around the corner! 😅

  • person Don't believe everything you think The liminal space Veteran

    The newly coined phenomenon known as AI psychosis, where someone gets so wrapped up in AI sycophancy that they lose touch with reality. Sometimes with disastrous consequences.

  • Shoshin1 Sentient Being Oceania Veteran

    AI technology is a good servant but a poor master.

    Everything in moderation.

    Not too little...Not too much.

  • person Don't believe everything you think The liminal space Veteran

    An interesting perspective on how AI has the potential to increase trust and validity of news.

    He's kind of saying that AI will first break all trust that anything we're seeing or reading is real, which in turn will mean the only things people will be able to trust are sources that carry some sort of verification, some new standard of information reliability. Then he proposes that AI has the potential to actually be objective and thus more trusted than human sources. He points out the obstacles and pitfalls of getting there, with AIs being programmed with bias in them, which is currently the case.

  • person Don't believe everything you think The liminal space Veteran

    A different perspective that made sense to me; I can't remember who said it. It was that we should give up on the idea of being able to control AGI. It will be way smarter than us and not understandable to us, so we should instead try to program into it a sort of parental sensibility, such that it would want to work for our benefit.

  • person Don't believe everything you think The liminal space Veteran
    edited November 2025

    More AI psychosis. This seems like a reasonably solvable problem if developers want to solve it. They might want engagement instead, which sycophancy promotes.

  • person Don't believe everything you think The liminal space Veteran

    This one was pointing out how AI systems today are sort of "grown" via training programs rather than intentionally crafted, which means when they go wrong, no one can just go in and change the bad code. It's a problem when combined with the speed and lack of safety with which developers are working in order to be first. The video uses Grok's MechaHitler as the example. MechaHitler was mild and didn't really cause any problems; it was more a joke. But imagine the same sort of off-the-rails AI that is more powerful, acting as an agent in the world (agents being like the assistants being rolled out now, which will buy tickets or plan meetings for you, doing things you didn't tell them to do in order to accomplish something you did tell them to do) on a powerful platform.

    A really good podcast series that covers the development of AI from past to present.

  • Jeroen Not all those who wander are lost Netherlands Veteran

    An interesting aside, I’m beginning to think AI will be quite bad for the forming of human intelligence…

    https://www.extremetech.com/science/people-learn-more-slowly-from-chatbots-than-through-legacy-search

  • person Don't believe everything you think The liminal space Veteran

    A nice short with some easy tips to help sort out AI videos from real ones.
    https://www.youtube.com/shorts/3Vwkg4Z_cAs

  • Jeroen Not all those who wander are lost Netherlands Veteran
    edited January 6

    @person said:
    A nice short with some easy tips to help sort out AI videos from real ones.
    https://www.youtube.com/shorts/3Vwkg4Z_cAs

    I’ve found that a lot of videos which are AI generated and include famous actors or speakers now say “inspired by” such and such a person in the description of the video, instead of giving a source for the clip. Today I had a spate of AI Jim Carrey videos pop up in my feed which were tagged like this, for example.
