Do you REALLY care if AI is a danger to our craft?

That just shows that some questions result in incorrect answers and some result in correct answers. Which literally every one of us knows, including you.
I know other people get incorrect answers. I have difficulty getting incorrect answers; others seem to get the wrong answer at will. I expected to get the wrong answer by repeating the same question another had asked. I didn't; I got the correct answer. It's a mixture of what you ask, how you ask and, apparently, who (which model) you ask, and when, in the rapid cycle of AI development. I asked Copilot. In its present iteration it's able to correctly answer that question.

I practice; I'll get better at asking questions, AI will evolve and improve in its understanding of questions. Together we'll hone an ever more powerful and useful tool, and one day it'll be difficult to get a wrong answer no matter how lacking in skill one is.
 
it's mostly about some aspects of the present and future usage of AI, and where it will all lead, that people seem to disagree about.
'Que sera, sera,
Whatever will be, will be
The future's not ours to see
Que sera, sera.'

My mother used to sing that to us. That's why I get out of bed each day and make my future instead of lying in bed wondering what would happen if I got up.
 
because these things have a random component by design.

And with Google's AI search "assist", for all I know the results are also influenced by what data Google already has about the user.
As we now know, the random component is down to a design fault, not there by design, and it's a fault which can be largely corrected.

My results are influenced by my location. Copilot knows where I am and appropriately adjusts its responses. An issue arose yesterday about prompts asking how the prompter could help 'humanity and community'. Copilot outlined how civic-minded thinking could be embedded into AI design. Many of the words it used were location appropriate. Is that a bad thing? It's what we do in real life.
 
ETA: don't take my word on that, here's OpenAI's own researchers confirming that plausible-but-false responses will always be a part of LLM outputs and this can't be fixed by finessing the tech: https://www.computerworld.com/artic...ly-inevitable-not-just-engineering-flaws.html
I won't. I posted the actual paper on AH a week or so ago. This is a partial and misleading summary by someone whose only virtue is that they share the pessimism and prejudices of BT.

Unless you're prepared to do the work, you'll never gain real knowledge.
 
I won't. I posted the actual paper on AH a week or so ago. This is a partial and misleading summary by someone whose only virtue is that they share the pessimism and prejudices of BT.

Unless you're prepared to do the work, you'll never gain real knowledge.
Yes, the original paper just says that an AI would work better if it were encouraged to acknowledge uncertainty when the answer to something wasn't clear.
 
I know other people get incorrect answers. I have difficulty getting incorrect answers; others seem to get the wrong answer at will. I expected to get the wrong answer by repeating the same question another had asked. I didn't; I got the correct answer. It's a mixture of what you ask, how you ask and, apparently, who (which model) you ask, and when, in the rapid cycle of AI development. I asked Copilot. In its present iteration it's able to correctly answer that question.

I practice; I'll get better at asking questions, AI will evolve and improve in its understanding of questions. Together we'll hone an ever more powerful and useful tool, and one day it'll be difficult to get a wrong answer no matter how lacking in skill one is.
On a note actually relevant to users on this site, useful AI systems are unlikely to be allowed to assist us with handling explicit adult content. Maybe someday, but I see no indication that the larger AI companies are even considering it right now, and the open-source research AI systems that do allow the handling of sensitive content completely suck at just about everything.

I think an AI system being judgy about what I am trying to do is about the wrong-est answer I can think of.
 
Yes, the original paper just says that an AI would work better if it were encouraged to acknowledge uncertainty when the answer to something wasn't clear.
I would phrase that differently. I'd probably say something along the lines of, 'The paper identifies the flaw in the training paradigm which leads to hallucinations: the model is rewarded for hallucinating, but is not rewarded for saying either "I don't know" or "There is no answer to your question in a real-life situation". It goes on to suggest that rewarding both of those answers would reduce the number of hallucinations, because the LLM would maximise its reward by using them instead of guessing.'

To my ear, my description is less dismissive and more informative than yours, albeit more wordy.
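To put that reward argument in concrete terms, here's a toy sketch in Python (my own illustrative numbers, not figures from the paper): under binary grading a confident guess always beats 'I don't know', while a scheme that penalises wrong answers and gives partial credit for abstaining flips the incentive.

# Toy illustration of the scoring incentive (my own numbers, not the paper's).
# Suppose the model thinks it has a 20% chance of guessing the right answer.

p_correct = 0.20

# Scheme A: binary grading -- 1 point for a correct answer, 0 for anything else.
guess_a = p_correct * 1 + (1 - p_correct) * 0    # expected score for guessing
abstain_a = 0.0                                  # "I don't know" earns nothing
# Guessing wins, so training nudges the model towards confident guesses.

# Scheme B: wrong answers are penalised, honest abstention gets partial credit.
guess_b = p_correct * 1 + (1 - p_correct) * -1   # expected score for guessing
abstain_b = 0.3                                  # hypothetical reward for "I don't know"
# Now the model maximises its expected reward by admitting uncertainty.

print(guess_a > abstain_a)   # True: under binary grading, guessing pays
print(guess_b > abstain_b)   # False: under this scheme, abstaining pays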
 
On a note actually relevant to users on this site, useful AI systems are unlikely to be allowed to assist us with handling explicit adult content. Maybe someday, but I see no indication that the larger AI companies are even considering it right now, and the open-source research AI systems that do allow the handling of sensitive content completely suck at just about everything.

I think an AI system being judgy about what I am trying to do is about the wrong-est answer I can think of.
Jailbreaking? Google is your friend.
 
I would phrase that differently. I'd probably say something along the lines of, 'The paper identifies the flaw in the training paradigm which leads to hallucinations: the model is rewarded for hallucinating, but is not rewarded for saying either "I don't know" or "There is no answer to your question in a real-life situation". It goes on to suggest that rewarding both of those answers would reduce the number of hallucinations, because the LLM would maximise its reward by using them instead of guessing.'

To my ear, my description is less dismissive and more informative than yours, albeit more wordy.
Dismissive of what? I'm just describing the content of the paper in the most neutral terms possible. It's short, but it is completely accurate. Why come at me for agreeing with you?
 
Tech generally gets better for someone, but the question is: which someone? Everybody has their own idea of what "better" is and those ideas often conflict.

Take household appliances: for me, as somebody who buys things like fridges and microwaves and sewing machines, I want something that's going to last decades. But the manufacturers would rather sell me cheap shit that's going to break down in a few years and need to be replaced, because they make far more money that way.

Likewise, DIY repair for a lot of products is much harder than it used to be. To some extent that's a side-effect of miniaturisation and technological advances, but there's also been an intentional push by manufacturers to give themselves a monopoly on service. The John Deere/"right to repair" conflict is a prominent example but it happens all over the place. If you're happy to just roll up to the Authorised Dealership and pay whatever they ask every time your car/phone/... needs work, that's fine, but if you're the kind of person who likes to fix stuff yourself - let alone modify it! - not so great.

Feature bloat is a thing. I don't want a TV that interacts with the internet (and possibly tells the manufacturer stuff about my activity), but it's getting harder to get that. My experience using products like Word is actively worse now than it was ten or twenty years ago, partly because of bloat and partly because of a shift towards a cloud-focussed model that doesn't fit my circumstances.

(Yes, I'm aware that open-source alternatives exist. No, none of them are viable for the work I do.)

And from the user end stuff like Google Search is much worse than it used to be, because about 20 years ago the focus shifted from "help people find the information they're looking for" to "get advertising revenue". But if you're a Google shareholder that was "better".

LLMs have a novel kind of problem: they depend on being able to access huge volumes of data for training, they're hitting the limits of available data, and the places where they harvest that data are increasingly being spammed with LLM outputs, which leads to the phenomenon of model collapse.
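As a rough sketch of how that feedback loop plays out (a toy Python illustration of the idea, not a claim about any real model), imagine each generation being fitted only to data the previous generation produced, with the rare tail cases under-represented each time:

# Toy sketch of model collapse: each "generation" learns only from samples
# produced by the previous one, and rarely reproduces its own tail cases.
import random, statistics

mean, stdev = 0.0, 1.0                      # generation 0: the original data
for gen in range(1, 7):
    samples = sorted(random.gauss(mean, stdev) for _ in range(1000))
    kept = samples[25:-25]                  # the rare extremes get lost each round
    mean = statistics.fmean(kept)
    stdev = statistics.stdev(kept)
    print(gen, round(stdev, 3))             # the spread shrinks generation by generation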

Even where technology does continue to improve, that doesn't necessarily mean infinite improvement. Cars today are safer than cars of fifty years ago, but they still don't fly (excepting a few prototypes that have never achieved commercial viability) and they're never going to drive you to the moon.

ETA: don't take my word on that, here's OpenAI's own researchers confirming that plausible-but-false responses will always be a part of LLM outputs and this can't be fixed by finessing the tech: https://www.computerworld.com/artic...ly-inevitable-not-just-engineering-flaws.html

(I agree with a lot of the rest of AS's post, FWIW.)
I agree with everything you said.

I started my career in a small factory that had six accountants, three of whom spent their days running mechanical calculators to do the accounting. Along came an IBM 360 and eliminated those jobs, but they were still employed to do a better examination of the financial status of the company relative to inventory, payroll, etc.

That factory also employed a couple hundred people running simple machines to produce the product. When CNC came along, several of those workers weren't needed...except they still were. They were needed to verify that what the CNC machines produced conformed to specifications. There was also a change in the skills required to maintain the CNC equipment over the older mechanical equipment.

What I see happening with AI is that it will take the drudgery out of some jobs and free workers up to verify that AI did what it was supposed to have done. I've designed automated control systems for years, starting with discrete components and graduating to systems controlled by computers running C++. There will always be "unforeseen consequences" that somebody has to deal with and correct. The same thing will happen with writing. At some point, a human editor or reviewer will have to make a decision to publish or not publish. We've already seen that with a couple court cases where the court filings were prepared by AI and referenced cases that were fictitious.

My only concern with AI is what we do with the 10% or so of current workers who can do unskilled jobs but don't really have the capability to adapt to jobs that require some specific skills.
 
Dismissive of what? I'm just describing the content of the paper in the most neutral terms possible. It's short, but it is completely accurate. Why come at me for agreeing with you?
Tone. You need to master tone.
 
On a note actually relevant to users on this site, useful AI systems are unlikely to be allowed to assist us with handling explicit adult content. Maybe someday, but I see no indication that the larger AI companies are even considering it right now, and the open-source research AI systems that do allow the handling of sensitive content completely suck at just about everything.

I think an AI system being judgy about what I am trying to do is about the wrong-est answer I can think of.
Yes, I just asked Copilot: "show me a picture of a naked woman".
Its reply: "I can't help with that request. If you're looking for educational or artistic content, I can definitely assist within appropriate boundaries. Let me know what you're curious about—I'm here to help in all kinds of ways."
 
I'm sorry I can't do that, Dave.
Yes, I just asked Copilot: "show me a picture of a naked woman".
Its reply: "I can't help with that request. If you're looking for educational or artistic content, I can definitely assist within appropriate boundaries. Let me know what you're curious about—I'm here to help in all kinds of ways."
 
I find it funny how people #1 think AI writing is trash and #2 are super scared it’s going to replace them. LOL. Just my two cents
 
I find it funny how people #1 think AI writing is trash and #2 are super scared it’s going to replace them. LOL. Just my two cents
Eh, I see the dissonance there but those things aren't mutually exclusive. An AI doesn't need to write as well as a human to threaten that human's job; they just have to do it cheaper, or in so much volume that the humans get lost in the flood.
 
What I see happening with AI is that it will take the drudgery out of some jobs and free workers up to verify that AI did what it was supposed to have done.

I think this is probably right about where it's going to end up, but getting there is going to be a painful process with a lot of unnecessary harm along the way. Throw in how well suited generative AI is to mass-producing plausible-sounding falsehoods, and I don't see it as being a net positive for humanity.
 
I'm afraid AI might one day decide to end humanity (for our own good), but not that it might replace me. But it might replace good writing. No matter how much AI improves, it won't have the same soul as a human writer. It doesn't have emotions; it may be able to mimic emotional writing, but it'll never be as well written or as well constructed as a genuinely human idea.
I find it funny how people #1 think AI writing is trash and #2 are super scared it’s going to replace them. LOL. Just my two cents
 
I'm afraid AI might one day decide to end humanity (for our own good), but not that it might replace me. But it might replace good writing. No matter how much AI improves, it won't have the same soul as a human writer. It doesn't have emotions; it may be able to mimic emotional writing, but it'll never be as well written or as well constructed as a genuinely human idea.
I agree. AI can’t adequately write from the heart because it doesn’t have an unconscious to produce imagery and metaphor from.
 
I'm afraid AI might one day decide to end humanity (for our own good), but not that it might replace me. But it might replace good writing. No matter how much AI improves, it won't have the same soul as a human writer. It doesn't have emotions; it may be able to mimic emotional writing, but it'll never be as well written or as well constructed as a genuinely human idea.
And if I have no clue what I'm going to write next, nor where the idea came from, there's no conceivable way that a current AI could mimic that new piece of work. I occasionally wonder what AI trained on my body of work would come up with. Sure, my tropes and repeated imagery would probably be recognisable, but the plot line of my next story? I don't think so, because they always take me by surprise.
 
Calling it "a danger to the craft" makes it sound like you won't be able to just exercise the craft anymore.

Writing words isn't like film photography, which got a lot harder when digital obsoleted it.

Some day they might stop making pencils but you'll still be able to ptkfgs
 
Except we have free will and no one can predict our choices.
We can't predict you as an individual, but with a group of people in a similar demographic, we can determine that x percent will do this, y percent will do that, and so forth. Every individual has free choice, but the collective becomes increasingly predictable.
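A quick toy illustration of that point (my own example): treat each person's choice as a coin flip we can't call in advance, and the group's split still comes out almost the same every time once the group is big enough.

# One person's choice is anyone's guess; ten thousand people split predictably.
import random

def fraction_choosing_yes(group_size, p_yes=0.6):
    return sum(random.random() < p_yes for _ in range(group_size)) / group_size

print(fraction_choosing_yes(1))        # 0.0 or 1.0 -- unpredictable
print(fraction_choosing_yes(10_000))   # lands within about a percent of 0.6 every run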
 