> Skynet.

Case in point. Skynet isn't real. It was made up for the Terminator movies. Conservatives always believe fake things are real.

> Case in point. Skynet isn't real. It was made up for the Terminator movies. Conservatives always believe fake things are real.

That's what they used to say about fire.
Welp...
Gullible Trump Cronies Losing Their Minds Over Fake AI Slop on Twitter
Multiple pro-Trump influencers, including the self-described "investigative journalist" Laura Loomer, were fooled by an AI-generated image designed to depict post-Hurricane Helene aftermath.
https://futurism.com/the-byte/trump-allies-fall-for-ai-slop
It's only funny when it happens to your opponents, apparently?
> That's what they used to say about fire.

Primitive people might not have understood the physical processes underlying fire, but they definitely didn't think it was make-believe. Skynet is a fantasy boogieman made up by Hollywood.

> Primitive people might not have understood the physical processes underlying fire, but they definitely didn't think it was make-believe. Skynet is a fantasy boogieman made up by Hollywood.

You're not known for your sense of humor.

> That's what they used to say about fire.

Sadly, as AJ enters his sunset years, we seldom hear him repeat his mantra anymore.
The CEO of OpenAI, Sam Altman, graciously declined Elon Musk's offer this afternoon, and countered by offering to buy Elon Musk's Twitter for its residual value of 9 billion dollars!

Elon Musk leads an offer to buy ChatGPT's parent company [OpenAI] for nearly $100 billion
https://www.cnn.com/2025/02/10/tech/openai-elon-musk-purchase/index.html
Systems that start to point to AGI* are coming into view, and so we think it's important to understand the moment we are in. AGI is a weakly defined term, but generally speaking we mean it to be a system that can tackle increasingly complex problems, at human level, in many fields.
Let's imagine the case of a software engineering agent, which is an agent that we expect to be particularly important. Imagine that this agent will eventually be capable of doing most things a software engineer at a top company with a few years of experience could do, for tasks up to a couple of days long. It will not have the biggest new ideas, it will require lots of human supervision and direction, and it will be great at some things but surprisingly bad at others.
Still, imagine it as a real-but-relatively-junior virtual coworker. Now imagine 1,000 of them. Or 1 million of them. Now imagine such agents in every field of knowledge work.
The world will not change all at once; it never does. Life will go on mostly the same in the short run, and people in 2025 will mostly spend their time in the same way they did in 2024. We will still fall in love, create families, get in fights online, hike in nature, etc.
But the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today.
Agency, willfulness, and determination will likely be extremely valuable. Correctly deciding what to do and figuring out how to navigate an ever-changing world will have huge value; resilience and adaptability will be helpful skills to cultivate. AGI will be the biggest lever ever on human willfulness and enable individual people to have more impact than ever before, not less.
The price of many goods will eventually fall dramatically (right now, the cost of intelligence and the cost of energy constrain a lot of things), and the price of luxury goods and a few inherently limited resources like land may rise even more dramatically.
Altman Summary: We need more investment to excavate this massive pit of bullshit, but there really is a magical pony at the bottom, I promise.

Sam Altman on how life will change once AGI steals your job
Reading a blog post from OpenAI CEO Sam Altman on this particular Monday in February makes perfect sense, considering what's happening in the world right now. The AI Action Summit in Paris has world leaders and tech execs in attendance, discussing AI's future and the potential regulation needed to safeguard the space.
Sam Altman penned a blog post titled Three Observations, sharing a mission statement for the future of ChatGPT and other OpenAI technology, with a clear focus on AGI (Artificial General Intelligence). The CEO gives us his incredibly optimistic view of what AGI and AI agents will mean for the world in the near and more distant future and what life might be like once AGI and AI agents steal your jobs.
The mission statement came at the end of an incredibly important period for OpenAI, and it was all the more important considering the headwinds OpenAI had to face.
In the past few weeks, the company released its first AI agents (Operator and Deep Research) and made two ChatGPT o3 models available, all part of a massive money-raising campaign that proved successful. That's despite the ongoing tradition of ChatGPT safety researchers leaving the company and the unexpected competition from Chinese rival DeepSeek.
I wondered what OpenAI's main creation would think about the blog, so I went to ChatGPT (GPT-4o) to ask the AI how it felt about the blog post. As expected, the AI recognized that the mission statement is about technologies similar to itself, without having any feelings about it. ChatGPT also highlighted concerns with Altman's line of carefully edited thinking.
Altman's view of the post-AGI world
Altman started the blog by explaining AGI, after making it clear that OpenAI's mission is to ensure that AGI benefits humanity. As you're about to see, the exec didn't offer a perfectly objective explanation of AGI or of what AGI means in the context of the Microsoft-OpenAI business relationship:
Altman then explained the rapid progress in AI development, noting that the cost to use a given level of AI tends to fall about tenfold every 12 months, leading to increased usage. The price of GPT-4 from early 2023 fell roughly 150-fold by the time ChatGPT reached the GPT-4o model in mid-2024.
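The compounding behind those two figures is easy to check. A quick sketch (assuming "early 2023" to "mid-2024" spans roughly 18 months, and taking the 150x figure at face value) shows the GPT-4-to-GPT-4o drop actually outpaced the 10x-per-year trend:

```python
# Rough sanity check on the cost-decline claims, not an exact model.
months = 18          # assumed span: early 2023 -> mid-2024
observed_drop = 150  # the ~150x GPT-4 -> GPT-4o price drop cited above

# What a 10x-per-12-months trend predicts over that span:
predicted_drop = 10 ** (months / 12)            # ~31.6x

# The annualized rate actually implied by a 150x drop in 18 months:
implied_annual = observed_drop ** (12 / months)  # ~28x per year

print(f"trend predicts ~{predicted_drop:.0f}x over {months} months")
print(f"150x in {months} months implies ~{implied_annual:.0f}x per year")
```

In other words, a steady 10x/year decline would only yield about a 32x drop over 18 months; a 150x drop implies closer to 28x per year for that particular model family.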
The CEO also made it clear that OpenAI won't stop investing in AI hardware in the near future, which is likely a needed remark in a post-DeepSeek world. A few weeks ago, the Chinese AI stunned the world with its ChatGPT-like abilities obtained at much lower cost.
All these AI developments will lead to the next phase of AI evolution, including AI agents, on the way to the age of AGI. That's where Altman gave an example of an AI agent working as a software engineer:
Altman didn't say this engineer would take the job of a human, but he might just as well have said it. Imagine millions of AI agents taking over jobs in countless fields:
Yes, that's a nightmare scenario for some people, and it's easy to understand why, even though Altman paints an overall rosy picture of what's coming ahead while downplaying the bad side effects. Altman said the world won't change immediately this year, but AI and AGI will change it in the more distant future. We'll inevitably have to learn new ways of making ourselves useful (read: work) once AI takes over:
Altman also mentioned that the impact of AGI will be uneven, which is probably a massive understatement. He also explained how day-to-day life might change for people:
He said the road for OpenAI "looks fairly clear," but it depends on public policy and collective opinion.
Altman also mentioned there "will likely be some major decisions and limitations related to AGI safety that will be unpopular, directionally, as we get closer to achieving AGI," without disclosing what they would be.
It is reassuring to see Altman talk about AGI safety, but as a ChatGPT user myself, I'd want more specifics. Altman did mention the need to empower individuals with AI rather than having it used by authoritarian regimes for mass surveillance and loss of autonomy.
https://bgr.com/tech/sam-altman-on-...ls-your-job-and-what-chatgpt-thinks-about-it/
[Bonus: check out ChatGPT's responses to the author's concerns via the link. Very interesting.]