The current MAGA administration + AGI achieved within the term = perfect storm?

RoryN

You're screwed.
Joined
Apr 8, 2003
Posts
60,311
Deplorables seem to be having problems with the basic concepts of A.I. đŸ€”
 
Case in point. Skynet isn’t real. It was made up for the Terminator movies. Conservatives always believe fake things are real.

LOL!

All Deplorables have to do to save a little face is keep their mouths shut and avoid unforced errors.

And they can't even do that right. 🙂
 
Welp... đŸ€·â€â™‚ïž

Gullible Trump Cronies Losing Their Minds Over Fake AI Slop on Twitter

Multiple pro-Trump influencers, including the self-described "investigative journalist" Laura Loomer, were fooled by an AI-generated image designed to depict post-Hurricane Helene aftermath.

https://futurism.com/the-byte/trump-allies-fall-for-ai-slop

The incredibly gullible Chloe shared that girl-in-a-boat AI image. Did any of our other gullible dimwits share the same one, or does Chloe get the undisputed gullibility crown?
 
Currently anything from an image made in MS Paint to a TikTok cat-filter is called AI by deplorables.
 
Primitive people might not have understood the physical processes underlying fire, but they definitely didn't think it was make-believe. Skynet is a fantasy boogieman made up by Hollywood.
You're not known for your sense of humor. :)
 
Primitive people might not have understood the physical processes underlying fire, but they definitely didn't think it was make-believe. Skynet is a fantasy boogieman made up by Hollywood.

Cue the "I was joking" playbook Deplorables use when they post something stupid and get called out in 3...2...1...

(Oops; too late. nvm.) đŸ€Ł
 
Elon Musk leads an offer to buy ChatGPT’s parent company [OpenAI] for nearly $100 billion

https://www.cnn.com/2025/02/10/tech/openai-elon-musk-purchase/index.html
The CEO of OpenAI, Sam Altman, graciously declined Elon Musk's offer this afternoon, and countered by offering to buy Elon Musk's Twitter for its residual value of $9.74 billion!
https://i.imgur.com/MOweSdZ.png

Fun historical fact: Musk was one of the original investors in OpenAI in 2015, and his pledge to fund it with one billion dollars of his own money gave the company badly needed "street cred" to attract venture capital.

Musk saw the future in AI. Unfortunately, Musk also has ADHD and a notoriously short attention span, which made him push hard for unrealistic deadlines.

When OpenAI's progress didn't meet his aggressive timelines, he declared the project a failure, left its board in 2018, and decided to build his own AI at Tesla, poaching OpenAI researcher Andrej Karpathy on his way out (he never fulfilled his one-billion-dollar pledge either). Cash-strapped OpenAI partnered with Microsoft (which had the infrastructure to host AI but not the intellectual bandwidth), and ironically it became a win-win scenario for OpenAI and Microsoft, while Musk's string of investment "victories" came to a crushing end.

Obviously, Musk never got over his first technology failure.

Side note: AI by Tesla obviously never came to pass, but the researcher Musk poached from OpenAI was instrumental (pun semi-intended) in developing the framework that became Tesla's "driverless" technology.
 

Sam Altman on how life will change once AGI steals your job


Reading a blog post from OpenAI CEO Sam Altman on this particular Monday in February makes perfect sense, considering what’s happening in the world right now. The AI Action Summit in Paris has world leaders and tech execs in attendance, discussing AI’s future and the potential regulation needed to safeguard the space.

Sam Altman penned a blog post titled Three Observations, sharing a mission statement for the future of ChatGPT and other OpenAI technology, with a clear focus on AGI (Artificial General Intelligence). The CEO gives us his incredibly optimistic view of what AGI and AI agents will mean for the world in the near and more distant future and what life might be like once AGI and AI agents steal your jobs.

The mission statement came at the end of an incredibly important period for OpenAI, and it was all the more important considering the headwinds OpenAI had to face.

In the past few weeks, the company released its first AI agents (Operator and Deep Research) and made two ChatGPT o3 models available, all part of a massive money-raising campaign that proved successful. That’s despite the ongoing tradition of ChatGPT safety researchers leaving the company and the unexpected competition from Chinese rival DeepSeek.

I wondered what OpenAI’s main creation would think about the blog, so I went to ChatGPT (GPT-4o) to ask the AI how it felt about the blog post. As expected, the AI recognized that the mission statement is about technologies similar to itself, without having any feelings about it. ChatGPT also highlighted concerns with Altman’s carefully edited line of thinking.


Altman’s view of the post-AGI world​

Altman started the blog by explaining AGI after making it clear that OpenAI’s mission is to ensure that AGI benefits humanity. As you’re about to see, the exec didn’t offer a perfectly objective definition of AGI, or of what AGI means for the Microsoft-OpenAI business relationship:

Systems that start to point to AGI* are coming into view, and so we think it’s important to understand the moment we are in. AGI is a weakly defined term, but generally speaking we mean it to be a system that can tackle increasingly complex problems, at human level, in many fields.

Altman then explained the rapid progress in AI development, indicating that the cost of an AI product tends to fall by ten times every 12 months, leading to increased usage. GPT-4 prices from early 2023 dropped by about 150 times by the time ChatGPT reached the GPT-4o model in mid-2024.

The CEO also made it clear that OpenAI won’t stop investing in AI hardware in the near future, which is likely a needed remark in a post-DeepSeek world. A few weeks ago, the Chinese AI stunned the world with its ChatGPT-like abilities obtained at much lower costs.

All these AI developments will lead to the next phase of AI evolution, including AI agents, on the way toward the age of AGI. That’s where Altman gave an example of an AI agent working as a software engineer:

Let’s imagine the case of a software engineering agent, which is an agent that we expect to be particularly important. Imagine that this agent will eventually be capable of doing most things a software engineer at a top company with a few years of experience could do, for tasks up to a couple of days long. It will not have the biggest new ideas, it will require lots of human supervision and direction, and it will be great at some things but surprisingly bad at others.

Altman didn’t say this engineer would take the job of a human, but he might have just as well said it. Imagine millions of AI agents taking over jobs in countless fields:

Still, imagine it as a real-but-relatively-junior virtual coworker. Now imagine 1,000 of them. Or 1 million of them. Now imagine such agents in every field of knowledge work.

Yes, that’s a nightmare scenario for some people, and it’s easy to understand why, even though Altman paints an overall rosy picture of what’s ahead and downplays the bad side effects. Altman said the world won’t change immediately this year, but AI and AGI will change it in the more distant future. We’ll inevitably have to learn new ways of making ourselves useful (read: work) once AI takes over:

The world will not change all at once; it never does. Life will go on mostly the same in the short run, and people in 2025 will mostly spend their time in the same way they did in 2024. We will still fall in love, create families, get in fights online, hike in nature, etc.
But the future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge. We will find new things to do, new ways to be useful to each other, and new ways to compete, but they may not look very much like the jobs of today.
Agency, willfulness, and determination will likely be extremely valuable. Correctly deciding what to do and figuring out how to navigate an ever-changing world will have huge value; resilience and adaptability will be helpful skills to cultivate. AGI will be the biggest lever ever on human willfulness and enable individual people to have more impact than ever before, not less.

Altman also mentioned that the impact of AGI will be uneven, which is probably a massive understatement. He also explained how day-to-day life might change for people:

The price of many goods will eventually fall dramatically (right now, the cost of intelligence and the cost of energy constrain a lot of things), and the price of luxury goods and a few inherently limited resources like land may rise even more dramatically.

He said the road for OpenAI “looks fairly clear,” but it depends on public policy and collective opinion.

Altman also mentioned there “will likely be some major decisions and limitations related to AGI safety that will be unpopular, directionally, as we get closer to achieving AGI,” without disclosing what they would be.

It is reassuring to see Altman talk about AGI safety, but as a ChatGPT user myself, I’d want more specifics. Altman did mention the need to empower individuals with AI rather than having it used by authoritarian regimes for mass surveillance and loss of autonomy.

https://bgr.com/tech/sam-altman-on-...ls-your-job-and-what-chatgpt-thinks-about-it/


[Bonus: check out ChatGPT's responses to the author's concerns via the link. Very interesting.]
 

Altman Summary: We need more investment to excavate this massive pit of bullshit, but there really is a magical pony at the bottom, I promise.
 
Sam Altman is the technology equivalent of co-President Donald J. Trump: a P.T. Barnum-like huckster for the modern age. Altman has grifted his way from one technology trend to another over the past two decades.

An Altman-Musk dustup has no winners, but the clear loser would be the end consumer. Chinese AI DeepSeek has effectively turned the profitable-for-hedge-funds AI investment models upside down, and there is a huge amount of chaos in the market right now. I really can't blame a tech-savvy investor like Musk for swooping in and attempting to talk down OpenAI with the usual Fear/Uncertainty/Doubt rhetoric that Bill Gates and Steve Ballmer perfected at Microsoft in the 1990s. That's the way the tech market runs.

At one level, the Chinese made lemonade out of a very sour batch of lemons: the Biden administration had restricted them from purchasing the state-of-the-art Nvidia GPUs that drive the soulless heart of AI. So the Chinese pivoted and made the best of what they had: using 2021-era technology, they created a THIRD way to process AI, a hybrid way.
The three models of artificial intelligence are now:
  ‱ Using enormous, integrated supercomputer back-end processing, relying on an ever-expanding (scalable) infrastructure to handle increasingly complex tasks.
  ‱ Using a desktop/phone/whatever as a conduit to a single node that solves every query as if it had never been seen before, calling on supercomputers as needed.
  ‱ The new hybrid Chinese model: maintaining a vast infrastructure of "subject matter experts" (e.g., apple growing seasons, Korean dating preferences) and using a supercomputer to arbitrate any inconsistencies among the results gathered from the SMEs.
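That "SME plus arbiter" idea in the third bullet maps loosely onto the mixture-of-experts routing that DeepSeek's models are reported to use. Here's a toy sketch of the routing concept; every expert, keyword check, and confidence score here is invented purely for illustration:

```python
# Toy "mixture of experts" router. Each "subject matter expert" is a
# specialized function returning (answer, confidence); a gate scores
# every expert on the query, and an arbiter keeps the best answer.
# All experts and scoring rules are made up for illustration.

def expert_fruit(query):
    return ("apples ripen in autumn", 0.9 if "apple" in query else 0.1)

def expert_dating(query):
    return ("dating norms vary by region", 0.9 if "dating" in query else 0.1)

def expert_general(query):
    # Fallback generalist with middling confidence on everything.
    return ("no specialist matched; generic answer", 0.5)

EXPERTS = [expert_fruit, expert_dating, expert_general]

def route(query, top_k=1):
    # Gate: rank all experts by confidence, keep the top_k candidates.
    ranked = sorted((fn(query) for fn in EXPERTS),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

def arbiter(query):
    # Arbiter: resolve disagreement by taking the most confident expert.
    answer, _confidence = route(query, top_k=1)[0]
    return answer

print(arbiter("when do apples ripen?"))  # -> apples ripen in autumn
```

Real mixture-of-experts systems do this routing inside the network, per token, with learned gates rather than keyword checks, but the division of labor is the same: many cheap specialists, one mechanism for picking among them.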

With the benefit of 20/20 hindsight, this was a brilliant compromise, especially because the Chinese government basically gave it away and kneecapped virtually the entire American AI industry, in which Wall Street hedge funds had invested as the "next big cash cow".

The Chinese also apparently had uncanny timing (no doubt due to their ability to compromise American security protocols during Republican presidential administrations): a few days after Pumpkinhead announced his half-a-trillion-dollar government "Stargate" AI project (a grift for his hedge-fund billionaire buddies) on January 22nd, China announced DeepSeek. DeepSeek is already available in the iPhone app store and hosted on Amazon AWS.

Oh, did I mention that DeepSeek periodically "phones home" to the Chinese motherland when it finds something juicy? But who the fuck cares anymore; Elon Musk's stooges can walk into the Treasury department and copy every American's financial data onto thumb drives.
 