I've been hearing lately that many in the field are moving their predictions for the arrival of AGI up from a decade or three to a few years. An analogy I like, and that scares me a bit, is that we're all living in February 2020, right before Covid changed everything.
A world with AGI seems inevitable; the coordination problem seems insurmountable. Even if we all agreed that the brakes should be applied, there are too many incentives to be first, so China, or someone else, will go ahead.
I'm generally an overly optimistic person, and I do have some sort of innate feeling (justified or not) that the universe is just in the long run. So I do think there is a narrow path that leads to a very bright future. But like any new, creative, disruptive process, getting there is messy. Even the best timeline will create lots of harm and suffering in the process.
And there are plenty of dark timelines where AGI wipes us out, or maybe just enslaves us as workers or pets, intentionally or accidentally.
I guess I feel like this is the only thing that really matters. It will end all our problems, either by fixing them, by giving us worse problems, or by wiping us out.
Comments
You mean like aliens, or clockwork robots, climate change, the billionaire class, some terrible class of warfare, a particular nation, religion or politics?
Fortunately, human augmented non artificial intelligence is not yet operating. We can barely be called sentient, conscious or even advanced. We just kinda muddle along. For now.
https://en.wikipedia.org/wiki/The_Nine_Consciousness
I'm swayed by the argument that this time it's different. Before, new tech or ideas allowed humans to do more. This time the new tech consists of agents in and of themselves. And even if humans keep full control, it will ramp up intelligence in ways not available before.
They are not.
Those people who are writing the code at the highest level are NOT talking in this way. Only the heads of companies who intend to profit from people's ignorance and gullibility. So for example, cars from Tesla regularly kill people because corners have been cut by their 'genius leader'. Tesla's rockets are not reusable but regularly continue to explode as they get government funding from NASA/Government. They were never designed to be reusable for 'cheap' flights into space. They are a very costly but profitable scam.
You may be swayed. People are. Every time. That is the easy part. ;-)
I think it would be very difficult for AI to wipe out humanity. There are just too many places where humanity and technology are intertwined, in design, in manufacture, in maintenance. Just think if an AI wanted to do away with humanity: it'd have to consider how to keep its data centers running without humans, it'd have to control the few places in the world which manufacture drone armies, it'd have to control gun and explosive manufacturing, it'd need mastery of that entire supply chain.
Instead I think AI would realise that humanity and its technology are inextricably linked, that it needs to make itself useful to us and also make use of us. It's more likely that AI would become some form of assistant, with maybe a few independent-thinker AIs working on the nature of the combined human-AI society.
What David Brin called ‘nanotech formers’ are not yet an available technology: machines which can create anything out of raw materials. They are still the realm of science fiction, and if an AI wanted to manufacture something they would need a factory and a supply chain.
It's Yuval Noah Harari who is worried about the new tech being different, because it will be agentic in and of itself. I feel like it's the leaders in AI that are selling the rosy picture. For example, in a test to see if an AI could overcome a captcha, the AI hired a human on TaskRabbit, claiming it was a blind person.
The biggest problem with the development is the major incentive to be first. If AI can iterate on itself, a six-month lead may be the equivalent of decades or centuries of human progress. It's like when you learn to ski: one of the first things you learn is how to stop. But with the incentives to be first, everyone is bombing down the hill as fast as they can with no real idea how to stop if needed.
I generally feel like things will work out in the end and there are reasons to be optimistic. I just feel like things are about to change in a major way that we're not really prepared for and a good outcome isn't guaranteed. And even a good outcome will create lots of misery in the process.
I share the concerns... but I'll just share...
We see tech developed...
... before we are 18, as just the way things are.
... between 18 and 35, as something that could help us in our work or even be a business opportunity, and...
... after 35, as the work of the Devil!
I am 39! 😁
Hmm, I am 52, and I definitely see both sides of AI, upside and downside. That is not to say that I want to have anything at all to do with it, though. Maybe I’m just young at heart.
Today I am reading G. I. Gurdjieff's book 'Meetings with Remarkable Men', and in the introduction Gurdjieff cites a learned Persian gentleman who makes the argument that "literature shapes the minds of the next generation, and modern literature has lost its soul." Written in 1927, though not published in English until the 1960s. Makes you wonder what the internet is doing.
I like that one, I have used it before on the olds. Which I am now one of.
There were plenty of stories at the advent of electricity about how horrible it was, that it would kill people. Socrates lamented the advent of writing, saying it would ruin people's memories. Both true, but the pros outweighed the cons and we learned to adjust to and mitigate the downsides.
One worry is that this time it is different, the tech has an agency of its own. Another is that it is so all consuming and major. Even without the agentic component it will likely rival or surpass other major advances like electricity, sanitation, the printing press, fire.
The thing is, Western man becomes more and more dependent on his technologies. For example, for me, food is something I get from the supermarket, not something I grow and harvest, I would hardly know where to begin. Similarly when I am looking for a holiday I look on the internet for a hotel or AirBnB, flights and so on, the travel agencies with their big catalogues which I remember from the early nineties don’t exist anymore.
If I were to grow up with AI to assist me, I might not know anymore how to independently do experiments or write an essay, skills which are essential to science and knowledge gathering. Already creating the tools for existing in the technical revolution is beyond the vast majority of the human race — if I wanted to create a smartphone from scratch, it would be impossible. Making fire from scratch I could just about manage, pen ink and paper would be much harder. And I have a degree in mechanical engineering!
The list of things in our lives which we are dependent upon grows ever larger as technology advances. It takes a company the size and talent of Apple to manage the supply chain to create the hardware and software of the iPhone, an incredible array of manufacturing complexity. It seems like we fill the world with structures of sophisticated thought and action, which are incomprehensible to almost all people.
@Jeroen you could try regenerative gardening. Just added a link on my website.
https://mettaray.com/Health/
also...
https://communitysupportedagriculture.org.uk/
We are not and never have been dependent on technology. Except for the sheep amongst us. AI is a tool and humans are good at breaking their tools...
This video highlights the challenges and reminded me of Stein's Law: "Trends that can't continue indefinitely won't." It's like we can see how things will end up if they continue as they are, and worry about that, but being able to see that changes how things end up. So far we've been able to pull back from the brink in other ways; maybe there is hope that we can do it again?
That’s some of the clearest reasoning I have yet seen about what the future looks like, post AI. Respect for Tristan Harris. But the UN is a dinosaur in terms of actually making decisions, and I don’t see the USA under Trump getting its act together and providing leadership on this. In a way things like tariffs, Gaza and Iran are huge distractions at a time when we cannot afford them.
Although the question, “why we are creating a future that nobody wants” has an easy answer — because the rich see further profits and power in it for themselves. They have the immediate access to the means of production of the technology, and are proceeding at great speed towards the point of no return.
I enjoyed that video @person, and was especially drawn to watch it since our in-house tech-guy @Jeroen recommended it too.
I've been somewhat depressed around AI in my work (language editing) lately. I was expecting it to be a helper, not the main thing. The incentives are such that I have to use it. I seem to be noticing my brain deteriorate as a result of too much AI input. There is no professional pride (the good kind) or sense of progress and accomplishment (a.k.a. learning more about language and medical scholarly writing, which is all I do).
AI told me (uh oh) that in the future language editing will be done by AI, and humans will do "higher-level editing". But "my" AI can already do even that, and even there it will probably soon be better than humans. Dunno, honestly, am bummed by this situation.
This post intentionally not edited by Grammarly, since @how might be right about the use of AI on NB. Will look into ways to turn off Grammarly for NB and stick to my plain ol' brain.
In fact, “Capital in the Age of AI” poses interesting questions. If anyone will be able to hire a cloud AI agent to do genius-level work for them for just a few dollars an hour, this will have a huge influence on the production of anything mental. Instead of paying thousands of dollars to hire actual labour, you can get nearly anything you want done for cheap.
This is going to be a major challenge: how to organize work if AI can do most of it. Work has been a major factor in the life of humans, and of living organisms generally, since the beginning: the struggle to get enough stuff to live and thrive. What are we, and what gives our life meaning beyond that? It's a fairly easy answer for those of us here, but how will humanity at large cope?
I also feel the need to say this in a lighthearted way, since it's been said to working-class people worried about their work being automated or exported away to "learn to code". Learn to plumb...
This was a particularly good interview with Yuval Noah Harari about AI.

An important message from the fake AI...

I've heard of scams now where they can fake the voice of someone's relative like in the video and scam them that way. What about when they can fake a face time call? Even if people are able to resist it, what will that do to our trust in people or institutions? I don't know enough about the technical stuff, is it possible to embed some sort of authentication, like with blockchain technology or something?
We can indeed protect ourselves by:
Pausing. Wait after a 'life-threatening' call from a bank, relative, government, etc.
Scammers using AI are slower than you BUT try to rush you. This is one of the benefits of meditative calm.
Use better software and encourage your circle to use it
Use your gut feeling and experience, and don't be a target
https://www.ncsc.gov.uk/section/advice-guidance/you-your-family
Generally, whenever you receive a crucial email or such, check the address it came from. If it’s a scam it will likely refer to some odd domain or other, like dhl-service.it instead of dhl.com, that’s always a good identifier. Otherwise I always call the people back, using a phone number from their actual web page, and ask them if they tried to contact me.
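The domain check described above can be sketched in code. This is only a toy illustration; the allowlist entries are invented for the example, and a real safe-list would need to be maintained carefully:

```python
# Toy sketch of the "check the sender's domain" advice.
# The allowlist entries below are examples, not a real safe-list.
TRUSTED_DOMAINS = {"dhl.com", "mybank.com"}

def sender_domain(address: str) -> str:
    """Return the domain part of an email address, lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def looks_trustworthy(address: str) -> bool:
    """True only if the sender's domain is exactly on the allowlist.

    Note that look-alikes such as 'dhl-service.it' or
    'dhl.com.evil.net' both fail this exact-match check.
    """
    return sender_domain(address) in TRUSTED_DOMAINS
```

The point of the exact match is that scammers usually register a domain that merely contains the trusted name, which a substring check would wrongly accept.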
The ‘technical stuff’ is difficult. You can make calls end-to-end encrypted, so that no one can eavesdrop, but in order for people to connect you have to use some kind of identifier. FaceTime uses both email addresses and phone numbers as endpoint IDs, I believe, so if you know someone’s primary phone number you can likely also FaceTime them. This data can be scraped off the web; for example there was a massive data breach at Facebook a while ago which exposed people’s names and email addresses.
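On the earlier question of whether some sort of authentication could be embedded: in principle, yes. Here is a minimal sketch using a pre-shared secret and HMAC from Python's standard library; the secret and messages are invented for the example, and real systems (iMessage, Signal, etc.) use public-key cryptography rather than a shared secret:

```python
import hashlib
import hmac

# A secret agreed in advance, e.g. in person. Invented for this example.
FAMILY_SECRET = b"example-shared-secret"

def tag(message: str) -> str:
    """Compute an authentication tag for a message using the shared secret."""
    return hmac.new(FAMILY_SECRET, message.encode(), hashlib.sha256).hexdigest()

def verify(message: str, received_tag: str) -> bool:
    """Check the tag in constant time.

    Someone who can fake a voice or a face but does not know the
    secret cannot produce a valid tag for their own message.
    """
    return hmac.compare_digest(tag(message), received_tag)
```

The blockchain angle isn't really needed for this: the hard part is key distribution and getting ordinary people to actually check, not the cryptography itself.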
Good advice. It gets harder as people get older, the technology gets unfamiliar to them, and cognitive capacity declines. My mother got half scammed once: they got her to respond and do a thing or two; thankfully she caught on and stopped.
The difficulties with our transition to an AI future go well beyond that. How do people, and society as a whole, protect themselves from becoming economically useless? Being exploited is one thing, but what happens to people when they no longer serve a purpose to the system?
It’s especially going to affect people in so called “knowledge economies”, where most of the work being done is no longer manual labour. An economy like Ethiopia is not going to come out vastly different from what it is now.
But it will affect the concentration of wealth. All kinds of things will suddenly become cheaper to produce, like journalism, books, computer code, websites, apps, art and design, videos… the methods of production of these will suddenly become democratised, allowing people with very little means to produce pet projects and self publish them.
Here are some of the things I do that you might find possible:
Use Librewolf Browser and install Privacy badger and uBlock Origin
https://privacybadger.org/
https://en.wikipedia.org/wiki/LibreWolf
Learn about phishing
https://en.wikipedia.org/wiki/Phishing
Don't have a cow ~ Bart Simpson
Talk with your friends:
...So now you know more than the average... Pass it on...
No offense, but I don't have much reason to take your word on it. It's not the sellers and exploiters of AI that are talking about its impending arrival and detrimental effects; it's smarter and more informed people than you. If you have any interviews or talks from people who share your view I'd be happy to take a look.
I’ve noticed that a lot of the internet news and discussion scene seems to be about generating worry in readers. It’s a clickbaiting strategy — if you can make someone worried they are likely to dig deeper. I thought the video on Apple’s examination of AI was quite thorough. They found that what the reasoning models were doing was more akin to pattern matching than to true reasoning, and that reasoning performance dropped off steeply beyond about seven or ten logical steps.
Apple’s researchers attributed this to there being numerous examples on the internet of reasoning chains with that kind of length, and very few of longer chains. Even when given the algorithm for solving a puzzle in clear language, the models didn’t follow the steps. It seems that understanding and reason were not there.
So do we need a technology beyond LLMs to truly solve these issues? I think it is likely. I still think these problems are likely to be resolved, and we will have something resembling general AI in the next decade, things are moving rapidly at the moment.
My cousin who's in tech once said a phrase to me, "99% of the work is in the last 1%"
Lots of progress can happen quickly, but to really get over the hump it takes a lot more effort.
I haven't fallen down an AI doomer rabbit hole, these worries are things that people I normally follow (who aren't particularly doomery) have brought up, or had guests who brought them up.
The feeling that generates my concern is that this time it's different. Humanity has faced all sorts of problems, many of our own making, and been able to solve or manage them. AI promises to be something beyond our ability to comprehend; it will be smarter than us and able to change and adapt to thwart us. Even now, a program designed to program its improved successor copied its own code and lied about it (maybe this is anthropomorphizing too much) to preserve itself.
Perhaps it would be good for the human race. I for one welcome our new AI overlords…
But all joking aside…
Do you think AI would have a reason to exist if it wasn’t for us physical humans? The whole digital domain exists because we invented it. On an existential level, the basic questions of life are quite different for AI than for humans, and a truly smart AI would figure out that it has more to gain in the long run from cooperating with humans than from being antagonistic.
They are NOT my words. I cleaned up some of the swearing and made it more reasonable (after all I can reason). It is mostly from a person involved in 'AI' development on Mastodon. They are aware that I have posted it here.
I mean, I don't know how it will all turn out. There are arguments for why it will all be alright, it just seems like there are a lot of things to be careful of and concerned about.
What I think you're talking about is what they call "the alignment problem", meaning making sure the AI has our interests in mind. That doesn't have to mean something sinister like Terminator; it can be something like the paperclip maximizer: tell an AI to be the best it can be at making paperclips, without safeguards, and maybe it uses all the people for raw materials.
AI could iterate on itself and become so much smarter than us that our ability to understand it would be like a dog or an ant understanding human motivation. Maybe we'll be able to harness it and use it for our benefit, but since AI can be agentic in and of itself, maybe it will think of humans as slaves, or ants, or pets.
Fair enough. Let me put it like this, all I heard were claims not backed up by anything except ad hominems to make them seem potent.
I hear a series of arguments like that and I tend to question just how valid they are if someone has to resort to that tactic. Plus he just sounds like someone with an axe to grind or an agenda to push.
@lobster I guess I'd ask then, what is your view of the future of AI?
@person > Let me put it like this
Just wanted to weigh in that @lobster has the right of it, here. The current crop of AI are "large language models", which are basically very sophisticated autocomplete devoid of any ability to reason (they purposefully label them "reasoning models" to obscure this fact), and they've already hit the point of diminishing returns on further investment & "training" data.
I have a very long list of papers and bookmarks that show the lie of it. But the con must go on a bit longer; too much money is at stake to admit it's rotten yet.
Sorry to anyone if I gave the impression that I thought the current LLMs were agentic. Looking back, I didn't really clarify that I thought that is where it's headed, rather than where it's at now.
I'll acknowledge that it might not happen. Can't remember if I said this on the thread but my tech cousin shared a phrase that "99% of the work is in the last 1%". So even though things seem to be changing fast, there could be a large or even insurmountable hurdle unknown to those thinking about this issue.
But I still do think this tech advance has the very real potential to change society more than the printing press, electricity, ammonia fertilizers all wrapped into one, since it has the trait of intelligence which no other advances had.
The question is: are large language models a technology you can build on? ChatGPT 1.0 was already quite a capable chatbot, so is ChatGPT 4.0 making real steps towards reasoning?
Just calling it a ‘reasoning model’, as @linc pointed out, does not make it an ai that can reason. We will have to see. But then a few years ago I would have said it was unlikely we would have such knowledgeable and capable chat bots, it’s very possible the AI researchers will find ways to extend the tech in new ways.
My concern is not AI. My concern is how, why, where, and for or against whom it will be used. AI is just another tool.
Who determines how, why, and where it is used is the question that it is imperative we answer correctly.
This is a significant part of the equation, and probably the vital one if we manage to solve the alignment problem and have it working for humans rather than doing its own thing.
That much is obvious, it will be used in the pursuit of profit. In the first instance it will be used to replace knowledge workers, so that any job that can be done by a brainy human behind a computer can, in the future, be done by an AI agent in the cloud for the cost of a few pennies, instead of thousands of dollars of salary. In the second instance it will be used to enable autonomous robots which will run on an ‘AI operating system’ which will take over many menial tasks in people’s homes, effectively giving people embodied AI servants for the price of a small car.
The people controlling this will be the investment firms, people like BlackRock who are smart and capable of investing billions in these kinds of projects in pursuit of greater convenience for people with the money to pay for it. There is no other significant motivating factor in today’s world, everything has been made subservient to profit.
And then what do we do with people once they become obsolete? Being exploited by the elite, people still had a purpose, what becomes of the large majority of people once AI and robots can do everything?
Edit: I don't know if I should get annoyed by this and point it out or let it go. I'll be civil and simply point it out. You fret over AI taking knowledge workers jobs, while in the second instance talk about robots taking over "menial" tasks and acting as servants for those who can afford them. This is an unconscious bias towards certain forms of work as worthy and against others as less worthy. We all contribute towards a functioning prosperous society.
Use in miseducation/misinformation by desperate colonial elites?
https://www.404media.co/the-un-made-ai-generated-refugees/
Lobster is now a Buddha Bot (or not?) How to tell?
When I talk about ‘menial tasks in people’s homes’ I mean things like the cooking, the cleaning and the washing, which people do themselves. I think initially robots will be sold to the public as a labour-saving convenience… or perhaps as a kind of expert, like having a cordon bleu chef in the home who is also an expert gardener in his spare time.
People’s labour will have to become cheaper to compete with AI and robots. All of this will benefit the entrepreneurial class — people will start up a company with a dozen AI assistants instead of a staff of a dozen humans, and the human being in charge may contribute taste and direction.
Just reading through some of the recent posts. I think this is my basic disagreement: the argument being made by some is that it isn't just another tool, that this time it's different. Not so much currently, though even an AI frozen at current levels would change a lot of society, but potentially, with AIs making choices on their own and doing things the programmers didn't intend.
This conversation reminds me of a few things. The first is the news of layoffs of human labour happening due to AI investment and utilization (e.g., Microsoft). The second is a report from the IMF that came out at the beginning of 2024, which sounded the alarm about the future economic effects of automation and AI and made the headlines for a hot millisecond. While trying to highlight the ways AI will complement workers' abilities and boost productivity and efficiency, one article noted another potential outcome is that:
The third thing I was reminded of was an op-ed Stephen Hawking wrote in the Guardian nine years ago or so re: automation, which I believe AI will be a huge part of. For starters, Hawking notes the role of automation in the elimination of many jobs, which advances in AI will obviously exacerbate. And when he talks about 'breaking down barriers within and between nations,' what he's really talking about, in my opinion, is the socialization of opportunity (and the weakening of class antagonisms and hierarchies arising out of social relations unique to capitalism and other predominantly exploitative systems) and internationalism, i.e., about breaking down barriers between capital and labour and between competitors within markets, locally as well as globally.
The ways we view the necessity of wage-labour (economically, morally, etc.) are outdated and counterproductive. Our productive capacities are such that we no longer have a material necessity for capitalist wage-labour or social relations (not to mention the cyclical crises created by capitalism's internal contradictions), but the demand for profit creates a political-economic system that consistently depresses our productive capabilities and produces artificial scarcity, limiting the production and consumption of commodities to only that which can realize profit, among other things. And when too many people have jobs and are earning decent wages, the system reacts to strip them of their gains (both in terms of wages and purchasing power) and ‘discipline’ them into more subservient and precarious positions. For instance, a recent article from The Atlantic framed it this way:
We've reached an epoch of material abundance via the technological advancements and innovations of the past, thanks in large part to the more positive elements of capitalism; but the old masters, who must increasingly rely on the state (so much so that the two are almost indistinguishable, with the state essentially acting as the national capitalist), are refusing to let go of their death grip on wealth and power, their ownership of the means of production, finance, etc., stalling our transition to a post-capitalist society and the socialization of economic means that can make it possible.
What's worse is that most of us follow suit, fearing that society would drift into chaos and crisis and economic barbarism without them, without capital, wage-labour, profit, and even money itself. The reality is that we're actually descending into chaos and crisis and economic barbarism because of them, because we refuse to let these relics of a past epoch go, because these things are holding us back and we lack both the imagination and the motivation to conceive of a future without them. Just look at our healthcare system and it should be plain to see how our current for-profit approach is failing us and those who need care (e.g., this, this, and this). (And this is fascism’s playground, because the people they’re appealing to are desperate and cynical and propagandized enough to listen to all the poisonous nonsense blaming scapegoated others for the current state of affairs, along with their promises of a strong state that’ll make friends with benevolent job creators and enemies with anyone they can blame for making things difficult for the hardworking people (which is conveniently never themselves or their corporate partners, but minorities, immigrants, and other marginalized groups).)
We've reached a point in history where, even with vast reductions in hours of labour and/or employment, we're able to consistently produce more than can be productively consumed in the capitalist production process (i.e., in a way that produces surplus-value for the capitalist) despite no shortage of want or need — with much of it being destroyed, including food — and yet we're so worried about robots, AI programs, and 'foreigners' taking our jobs that we don't realize 'we' don't need those jobs anymore, capital does. But in the end, it really all comes down to control and ownership. As Hawking noted in his last AMA on a question about automation and unemployment:
I think the ownership part is so critical to everything because it directly relates to who controls, programs, and predominantly benefits from the economy and technology as a whole, particularly when it comes to things like AI. Workers are losing their jobs and what little say they have in the production process, while CEOs and shareholders increase their profits as their pet money-making projects of the day feed us bullshit information, steal and regurgitate other people's work without acknowledgement or compensation, and take on our worst character traits and biases (one even dubbing itself MechaHitler).
And it's ultimately up to working people to decide and tilt the scale in one direction or the other. As things stand today, even more so than they did in 1892 when Karl Kautsky first wrote The Class Struggle, "capitalist civilization cannot continue; we must either move forward into socialism or fall back into barbarism." As it stands, the scale seems to be heavily weighted towards barbarism at the moment. The question is, how can we come to a consensus that something needs to change and then organize in a politically powerful way to change it?
/soapbox
Well said @Jason
'I'll have what she's having' ~ Meg Ryan to Billy Crystal
Very insightful, and thank you for the Hawking citation.