Do you REALLY care if AI is a danger to our craft?

There is a false impression that there are two sides to the argument in this thread. There aren't. Call me arrogant, but I see almost universal agreement here on about 90% of the aspects of AI. And I see both sides equally misguided, albeit for different reasons, about the remaining 10%.
The focus seems to be on countering the other side rather than finding common ground. And it again proves that it's not the AI that's the problem. It's us.
 
It's the way you ask 'em. I asked the same question in Sept 2025, was corrected, and got the correct answer.
You don't quite understand irony, do you? You asked the "wrong" question, somehow figured that out, then got one of several "correct" answers available. I can see why you're a lawyer, you're getting whatever answer suits you best ;).
 
To expand, we can’t even solve a three-body problem analytically, let alone the evolution of the Universe. Interesting to know who the “experts” are, apart from some YT dudes.

We can't solve the three-body problem, but the earth's climate is significantly more complex. I've been told the experts all agree we've got that completely figured out.
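
(For what it's worth, "no analytic solution" isn't the same as "no model": the three-body problem gets integrated numerically all the time. A minimal sketch, with arbitrary made-up masses and initial conditions, using scipy:)

# Toy planar three-body integration. There is no closed-form solution,
# but numerical models handle it fine. G, the masses, and the initial
# conditions below are arbitrary illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

G = 1.0
m = np.array([1.0, 1.0, 1.0])

def rhs(t, y):
    pos = y[:6].reshape(3, 2)   # (x, y) for each of the three bodies
    vel = y[6:].reshape(3, 2)
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * m[j] * r / np.linalg.norm(r) ** 3
    return np.concatenate([vel.ravel(), acc.ravel()])

y0 = np.array([-1.0, 0.0, 1.0, 0.0, 0.0, 0.5,    # positions
               0.0, -0.5, 0.0, 0.5, 0.5, 0.0])   # velocities
sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-9, atol=1e-9)
print(sol.y[:6, -1])   # positions at t = 10: a prediction, not a formula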
 
Haha - having seen your next post, you continue to illustrate my point perfectly. You're jumping through hoops to argue that AI is coherent and useful, but only if you ask it the "right" question. It's constantly fucking useless if there are so many different "answers" to the same question.

That's a bit like blaming the calculator because you typed in the wrong numbers.
 
Except that we have free will and we are always making choices. Every one of those choices alters the future. If some equation tried to tell me what I would do tomorrow, I'd be very tempted to choose the opposite.

"Man is bound because he is first free." -- Ernest Holmes

And game theory accounts for contrarian players mathematically. You aren't unique in feeling that way, so that is factored in.
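
(A toy version of that point, assuming a simple matching-pennies setup: against an opponent playing the equilibrium 50/50 mix, "always do the opposite" earns exactly the same expected payoff as any other rule. The contrarian is already priced in.)

# Matching pennies: you score +1 on a mismatch, -1 on a match.
# Against an opponent randomizing 50/50 (the equilibrium mix), every
# rule -- including pure contrarianism -- has expected payoff zero.
import random

def expected_payoff(strategy, trials=100_000):
    total = 0
    last_opponent = 0                          # opponent's previous move
    for _ in range(trials):
        opponent = random.choice([0, 1])       # equilibrium mixed strategy
        total += 1 if strategy(last_opponent) != opponent else -1
        last_opponent = opponent
    return total / trials

contrarian = lambda last: 1 - last             # always flip their last move
coin_flip = lambda last: random.choice([0, 1])

print(expected_payoff(contrarian))   # ~0.0
print(expected_payoff(coin_flip))    # ~0.0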
 
Not sure if I agree with you about being 90% agreement here.

AI is a new technology that has limitations at the moment but a tremendous amount of potential. To argue that AI is worthless, "spicy autocorrect" or whatever other term you want to use, is like standing at Kitty Hawk, North Carolina, on Dec 17, 1903 and saying the whole thing was worthless and stupid because the best they could do was travel 852 feet with all that effort.

Sometimes AI art looks terrible and everyone has 6 fingers. Sometimes it looks damn good.
Sometimes it is incredibly useful, sometimes it makes a hash of things.
You'd be foolish to think that it won't improve though.
One of my college professors pointed out that a KGB agent in 1965 would have gleefully slit the throats of everyone in the classroom to get his hands on one of our smartphones, because it would have represented massive computing power in 1965.
It's short sighted to bet against technology because of current limitations.
Palm Pilots and other PDAs were a failure until technology caught up. Now your phone has the same functionality and the idea is ubiquitous.
 
A team of YouTube Bros, oh...wait *checks notes* A team of researchers from the Universities of Geneva (UNIGE), Texas, Paris and University College London joined forces to create the first mathematical model of embodied consciousness.

https://e3.eurekalert.org/news-releases/911551

Some completely unqualified guys from *checks notes* MIT have been working on these concepts for decades.

https://web.mit.edu/amliu/www/Papers/PentlandLiu_NeuralComp99_v11n2.pdf

There's a lot more.

https://group.ntt/en/newsrelease/2024/04/19/240419a.html

Science is cool.
 
Reading some comments on this thread with increasing incredulity. I can only think our education system has wholly failed a cohort of children who - now as adults - have close to zero understanding of mathematics, statistics, scientific models, and the differences / connections between them. Or maybe it’s just convenient to forget.
 
Prompt:
What are the four members of the periodic table with four letters in their name?

Response:
There are exactly five chemical elements with four-letter names, not four—so you were just one off! Here they are:

Element | Symbol | Atomic Number
--------+--------+--------------
Gold    | Au     | 79
Iron    | Fe     | 26
Lead    | Pb     | 82
Neon    | Ne     | 10
Zinc    | Zn     | 30


Each one has its own flair:
  • 🪙 Gold: The eternal symbol of wealth and conductivity.
  • ⚙️ Iron: Backbone of civilization, from swords to skyscrapers.
  • 🧱 Lead: Heavy, toxic, but historically vital in pipes and pigments.
  • 💡 Neon: The glow-up king of signage and plasma tubes.
  • 🧪 Zinc: Essential for health and galvanizing steel.

It's the only answer I got.

As I say, some can, some can't; but what's worse, some have never even tried.
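
(Claims like that one are cheap to verify deterministically instead of re-prompting. A sketch assuming the third-party periodictable package; any authoritative list of the 118 element names would do just as well.)

# Deterministic check of the "four-letter element names" claim.
# Assumes the third-party periodictable package (pip install periodictable),
# whose elements iterable exposes a lowercase .name for each element.
import periodictable

four_letter = sorted(el.name for el in periodictable.elements
                     if len(el.name) == 4)
print(len(four_letter), four_letter)
# Expected: 5 ['gold', 'iron', 'lead', 'neon', 'zinc']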
 

It's always interesting how people will make bold comments about things they've never actually researched. I suspect it has to do with the emphasis on self-esteem in the education system over actual results. The kids don't learn anything, but they sure are confident in what they don't know.
I suspect some of the people in this thread have actually seen "A Beautiful Mind", so in some sense they know who John Nash is, but they never made the connection to game theory and modeling human behavior mathematically.
For most of the people who claim to "Love Science" on the internet, that just means watching a TED Talk every few months and sharing Neil deGrasse Tyson memes.
 
The agreement isn't fully conscious, the way I see it. I don't want to deepen this rift, so I'll only say that from what I've seen, there's a reluctant consensus that AI (LLM AI to be specific) is a powerful but flawed tool.

Both sides agree that LLMs were trained unethically, that they can be powerful, and that they are also prone to hallucinations and mistakes, so their responses have to be taken with caution. Maybe there's disagreement about how often these hallucinations happen, and whether they're the fault of the AI or of the prompter, but that's about it. The remaining 10% is all about perspective: whether AI will be used for the betterment of humanity or to its detriment.

There's no doubt in my mind that AI will get better with time. That has been true for literally every tech in the history of mankind. It might take years, but LLMs will become faster, more accurate, smaller, and more dependable. In time, maybe AGI emerges too.

My point has always been that the problem isn't the AI, it's us. Aside from the usage of AI that comes from curiosity and the desire to test its limitations, you can bet your ass that the majority of other types of prompts were about cheating or using AI to create "art" that's basically a product of theft.

People already use it overwhelmingly to cheat on homework, projects, reports, reviews, you name it. And that will lead to further degradation of knowledge and responsibility in the world.
People use it to create "art" they have no business creating. It's all based on theft, and its overwhelming volume will drown human-made art, which we can see is already happening with smut, erotic drawings, games, etc.
As messy as the situation on Lit is, with its sometimes-wrongful AI rejections, can you imagine what would happen if Lit allowed AI stories? Even putting ethics aside, our creations would get drowned in the flood of slop that would follow.

AI is a tool with the potential to become incredibly powerful and impactful on our lives. And humans aren't capable of wielding such a tool responsibly. They will mostly use it to cheat and steal, and to avoid work. I wish there were some decent statistics about the percentage of each type of prompt. I really want to know how many people have prompted AI about ways to help humanity or their communities. If those prompts are even 0.00001%, I'll be surprised.

So, to conclude this rant of mine, AI is a powerful tool that will be heavily misused, and that's its only fault. It will likely impact the world in a bad way because of the way we will misuse it.
And since we can't change the human race, and we can't change our greed and laziness and ignorance, and we can't change the truth that those in charge are only interested in how much more money and power AI will bring them, we criticize AI and its existence, because that's the only thing we can do.

Yeah, I wish the AI didn't exist, because that's a wish that's more realistic than the one where humanity somehow becomes good and responsible. I'm not sure there's a god out there capable of changing the flawed mess that we all are.
 
I think where you and I are in disagreement is the idea of creating "art that they have no business creating" or using AI "to avoid work", as well as your generally pessimistic view of humanity.
I'd argue that using AI to remove the drudgery of work that can be done by machine is improving humanity. Is digging a ditch by hand more noble than using a backhoe? Reducing drudgery, helping people "avoid" work, has been the purpose of technology since the invention of the wheel.
"Turok, why you no want to carry heavy load!"
We've just gone from simplifying manual labor to intellectual labor.
A friend of mine got tasked by his boss to prepare for an end-of-year review. His boss said, in a nutshell, "Give me your achievements for the year based on these criteria."
He dropped the list of criteria his boss provided and the list of achievements he keeps for himself into a company-approved AI, and it spit out the answer in 15 seconds.
Is that wrong? Is that "avoiding work"? He saved a few hours of work and could go on to more productive pursuits.
I see that as a net positive. There's no nobility in doing unnecessary work.
Using AI to do research is no more cheating than Googling it. In some ways it's just a fancier search engine, and just like Google, you need to check your results.

As for art, I don't think there is any such thing as art people have no business creating. I've experimented with AI image generation, and I can't wait for AI video creation to get better. I've got a number of ideas that just work better in a visual medium, but no way to bring them to life. When that changes I'm all in.
 
Don't forget to add your name to the class actions. You may be very surprised to learn that your stolen art is worth $3000 or more.
 
It's true that my views are generally pessimistic when it comes to humanity. I also agree about using tools to save us from manual, boring, or taxing work. Praise the dishwasher.

But not for any creative work. I can't justify using AI to write scientific papers (not to research, but to write), write fiction, or do homework, whose very point is to solidify knowledge with practice. And the same goes for art. I understand it would be so great if, instead of a month of writing, typing, reworking, and editing, I could just pour my ideas and scenes into the AI and let it generate a story in fifteen seconds.
And even if the AI could do it as well or better than I could, which it can't at the moment, it would take away from the "worth" of my story. It's the talent to put ideas into words and all the sweat and hard work that gives it value. AI takes that away from the art it creates, and no, I don't think that tweaking prompts and putting your own ideas into them makes up for even 5% of the creative work.

I understand you see things differently, and that's okay. I could go into a more profound analysis of the whys, but I don't think I'll convince anyone. Still, my whole point was that despite these scattered disagreements and arguments about everything, when you look at it properly, what people actually disagree about is mostly certain aspects of AI's present and future usage, and where it will all lead.
 
You can get a lot of words written in a short amount of time. However, that's only a starting point. In my opinion, and in the opinion of other people who actually use AI to write, what you get is only the beginning of the process, and the overall time to produce a quality novel isn't changed. Therefore, if it doesn't save any time whatsoever, why use it?
 
That just shows that some questions result in incorrect answers and some result in correct answers. Which literally every one of us knows, including you.
 
And even the same question can result in incorrect answers for some and correct answers for others, because these things have a random component by design.

And with Google's AI search "assist", for all I know the results are also influenced by what data Google already has about the user.
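
(The random component is literal: chat models typically sample the next token from a probability distribution, scaled by a "temperature" setting. A minimal sketch of that sampling step, with made-up scores:)

# Temperature sampling -- the "random component by design". The same
# scores can produce different answers on different runs. The logits
# here are made-up numbers for illustration.
import math
import random

def sample(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                               # for numerical stability
    weights = [math.exp(s - m) for s in scaled]   # unnormalized softmax
    return random.choices(range(len(logits)), weights=weights)[0]

logits = [2.0, 1.8, 0.3]   # hypothetical scores for three candidate answers
print([sample(logits, 1.0) for _ in range(10)])    # varies run to run
print([sample(logits, 0.01) for _ in range(10)])   # near-deterministic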
 
Tech generally gets better for someone, but the question is: which someone? Everybody has their own idea of what "better" is and those ideas often conflict.

Take household appliances: as somebody who buys things like fridges and microwaves and sewing machines, I want something that's going to last decades. But the manufacturers would rather sell me cheap shit that's going to break down in a few years and need to be replaced, because they make far more money that way.

Likewise, DIY repair for a lot of products is much harder than it used to be. To some extent that's a side-effect of miniaturisation and technological advances, but there's also been an intentional push by manufacturers to give themselves a monopoly on service. The John Deere/"right to repair" conflict is a prominent example but it happens all over the place. If you're happy to just roll up to the Authorised Dealership and pay whatever they ask every time your car/phone/... needs work, that's fine, but if you're the kind of person who likes to fix stuff yourself - let alone modify it! - not so great.

Feature bloat is a thing. I don't want a TV that interacts with the internet (and possibly tells the manufacturer stuff about my activity), but it's getting harder to get that. My experience using products like Word is actively worse now than it was ten or twenty years ago, partly because of bloat and partly because of a shift towards a cloud-focussed model that doesn't fit my circumstances.

(Yes, I'm aware that open-source alternatives exist. No, none of them are viable for the work I do.)

And from the user end stuff like Google Search is much worse than it used to be, because about 20 years ago the focus shifted from "help people find the information they're looking for" to "get advertising revenue". But if you're a Google shareholder that was "better".

LLMs have a novel kind of problem: they depend on being able to access huge volumes of data for training, they're hitting the limits of available data, and the places where they harvest that data are increasingly being spammed with LLM outputs, which leads to the phenomenon of model collapse.
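
(A toy version of model collapse, assuming nothing fancier than a Gaussian fitted to its own samples: each "generation" trains only on the previous generation's output, and the spread tends to decay, tails first.)

# Toy model collapse: generation N fits a Gaussian to samples drawn
# from generation N-1's fit. Estimation noise compounds across
# generations and the fitted spread tends to shrink toward zero.
import random
import statistics

data = [random.gauss(0.0, 1.0) for _ in range(50)]
for gen in range(1, 201):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    data = [random.gauss(mu, sigma) for _ in range(50)]
    if gen % 50 == 0:
        print(f"generation {gen}: fitted sigma = {sigma:.3f}")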

Even where technology does continue to improve, that doesn't necessarily mean infinite improvement. Cars today are safer than cars of fifty years ago, but they still don't fly (excepting a few prototypes that have never achieved commercial viability) and they're never going to drive you to the moon.

ETA: don't take my word on that, here's OpenAI's own researchers confirming that plausible-but-false responses will always be a part of LLM outputs and this can't be fixed by finessing the tech: https://www.computerworld.com/artic...ly-inevitable-not-just-engineering-flaws.html

(I agree with a lot of the rest of AS's post, FWIW.)
 