Interesting AI story in the Telegraph

Does this hurt you more than it hurts the students? Are you losing sleep over it?

It seems like having periodic tests without the availability of digital devices could go a long way toward testing the real knowledge and skills of students. Of course, then you’d have to grade and handle hard copies instead of digital ones…. Oops! 😅
I couldn't give a fuck about them. They're a paycheck, that's all. Some I like personally, but as a group their successes or failures are neither here nor there. They are adults, just, but they're still adults, and as such I have no responsibility for the choices they make when it comes to their studies. I tell them that on day one, in a nicer way. It's what my professors and lecturers told me, back in the day.

What they're learning, on the other hand? Well, nothing. Because all they see is the possibility to sit and surf their socials and get the AI to do the work. Their diplomas will mean nothing, as will their supposed knowledge. Most of the time that will mean little in a wider sense, but they will be found out the moment someone wants some real work from them, some real application of learning.

And as for hard copies and grading... well, who the fuck do you think sets them the assignments? Santa Claus? And who do you think grades them? The Tooth Fairy?
 
I agree. That's why I think Lit. should create an AI category to encourage public disclosure as the norm. So readers can decide whether or not they want to read something not written by a human, and to what degree.
Until the copyright issue is resolved this is a legally grey area. Different sites will handle it in different ways, but nobody can see quite how the jeopardy will play out. In that context, I understand sites playing it safe.
 
Khm.. hello? vultures? consuming content? I totally understand the impact this has on the writer community, but let's not get judgmental about readers, please. If you really and honestly think about it, a reader:
a) should not care how their content was created, so long as they enjoy it

As a reader, I definitely would care if I found the content I'd been enjoying had been plagiarised from another author's work.

Whether generative AI is closer to "plagiarism" or to how a human learns from reading other people's work, that's a complicated argument. But it's not one we can skip by supposing that only the end product matters.

b) is not capable of telling, as they don't know the difference, especially if the difference is so slim that it cannot reliably be pointed out even by sophisticated text analysis software.

It's not a given that "undetectable by software" = "undetectable by humans".
 
I couldn't give a fuck about them. They're a paycheck, that's all. Some I like personally, but as a group their successes or failures are neither here nor there. They are adults, just, but they're still adults, and as such I have no responsibility for the choices they make when it comes to their studies. I tell them that on day one, in a nicer way. It's what my professors and lecturers told me, back in the day.

What they're learning, on the other hand? Well, nothing. Because all they see is the possibility to sit and surf their socials and get the AI to do the work. Their diplomas will mean nothing, as will their supposed knowledge. Most of the time that will mean little in a wider sense, but they will be found out the moment someone wants some real work from them, some real application of learning.

And as for hard copies and grading... well, who the fuck do you think sets them the assignments? Santa Claus? And who do you think grades them? The Tooth Fairy?

I’m sorry if that came off rudely, and I’m sorry that it lands on you, but it really devalues the education of the lazy students, not you the teacher.

It’s the students’ responsibility to learn, and the value of the education is theirs, but I don’t get the impression that you truly don’t care. If you didn’t care, their laziness wouldn’t bother you. 😉

My point about the hard copies is that the only control a teacher has over whether students actually do the work to earn a passing grade requires more effort and time from the teacher.

It sounds like a source of job dissatisfaction for teachers. 😕
 
No worries, nothing to apologize for.

My intention was to point out that the site owner is not completely opposed to using AI technology to reduce the amount of human interaction needed to manage the site, so they don’t seem to have an all-encompassing problem with it.

Therefore the fact that they don’t want to host AI content is probably due to legal issues or aesthetics.

Unlike Amazon, this site doesn’t directly monetize the content, so remaining dedicated to human authors could be an aesthetic choice as well as a business decision - a choice to remain unique and to continue to support writers rather than simply providing erotic content.

I think the criticism some here level at the manager, accusing them of being apathetic to authors who are mistakenly flagged, is shortsighted. They aren’t considering what it takes to manage the huge number of daily submissions, nor how AI tech is still evolving; this is part of the development, not the fixed end result.

The development of AI writing faces a serious dilemma: on one hand it is being asked to produce content that is indistinguishable from human writing; on the other, it is being asked to flawlessly differentiate between AI and human content.

🤷‍♀️
Well said. We are indeed much more closely aligned than I initially thought.

And thank you for encouraging grace in a difficult and likely transitory situation. We need more of that if we hope to be something more like a community, rather than just a ragtag collection of wordy pervs.

Because it's not about money or fame for most of us. Most of us are trying to remain anonymous, and I daresay that those for whom money is a component are likely getting a terrible return on their investment of time.

In which case this is about connecting. Writers and readers. Those who thought they were the only ones fantasizing about things no one they knew talked about. People being more "real" in the safety of fiction, writing under a pen name, than they might be with anyone else in their daily life.

I'm a hard 'yes' to anything that encourages more of that. 🙂
 
As a reader, I definitely would care if I found the content I'd been enjoying had been plagiarised from another author's work.

Whether generative AI is closer to "plagiarism" or to how a human learns from reading other people's work, that's a complicated argument. But it's not one we can skip by supposing that only the end product matters.
I'm with you on the plagiarized part, but I think it is a stretch to call AI-assisted or even AI-generated work plagiarized. Unless it actually copies elements of an existing work, it is a unique piece with all the rights that such a piece deserves. Now, whether the author deserves credit for it is another discussion entirely.

It's not a given that "undetectable by software" = "undetectable by humans".
You are grasping at straws. If I cannot write software to detect something in a well-organized pattern of finite symbols, then I doubt you can do it manually yourself.

If you mean that you can read a work and tell it's not human because of how it reads, then I ask you to please start listing the aspects of a piece of writing that would make it AI-like. Please also tell me if those aspects could also be produced by a human of lesser skill or a different/weird style.

I propose that any marker or indicator you might attribute to AI writing can also be attributed to certain types of human writing. As such, you can guess at best and hope to be right. After all, LLMs strive to emulate us, and in form they can do so quite well.
 
True, but they're stealing from the authors who created the content the AI was trained on. That's the basis of writers objecting to AI: copyright theft.
We've been through this. You know what my answer would be. I found this reasoning lacking understanding of the subject matter then, and I find it the same now.

That is not how training LLMs or generating content with them works.
 
I agree. That's why I think Lit. should create an AI category to encourage public disclosure as the norm. So readers can decide whether or not they want to read something not written by a human, and to what degree.
And how do you propose the site vet what would be a deluge of junk for all the content compliance requirements? At this point in time, I reckon it's better to keep saying no.
 
We've been through this. You know what my answer would be. I found this reasoning lacking understanding of the subject matter then, and I find it the same now.

That is not how training LLMs or generating content with them works.
Funny, that's not what the New York Times have said in their lawsuit - there was a post on that not long ago, where the plagiarism was pretty obvious. Blatant, in fact. Perhaps you missed it?
 
If only that was what's happening... my experience is that students are using ChatGPT (and others) simply as a shortcut, a way of getting something else to do the work so they don't have to. I see this when they have to complete an assignment in class and, on reviewing the results, I am presented with barely comprehensible crap, in contrast to the lovely essays I am sent when the task has been set as homework. And then, when questioned on the same subject as the homework, the majority of the students are pretty clueless.
Then you fail them and remove any marks they got for the homework. They tell all their cohort that you require them to know their stuff and they can't get away with it.

If this is university level, that's why exams still exist and often have hand-written sections, and for theses and dissertations, why vivas exist. In a school, you can provide more short-answer questions requiring handwritten answers and online quizzes with randomised questions, not to mention calling on pupils in class and showing them up if they haven't done the work.

If someone understands the subject and uses AI to improve the English in their explanation of it, it's no different from using a spell-checker's suggestions or a tool like Grammarly. If they don't understand it and are trying to evade a requirement to learn, it doesn't matter how they're sourcing an essay - the teaching staff need to ensure that student doesn't pass the course!
 
You are grasping at straws. If I cannot write software to detect something in a well-organized pattern of finite symbols, then I doubt you can do it manually yourself.

If you mean that you can read a work and tell it's not human because of how it reads, then I ask you to please start listing the aspects of a piece of writing that would make it AI-like.

That question depends on a fallacy about both human and computer thinking.

If you're familiar with the ML/AI landscape, you'll know that there's a fair bit of interest in "interpretable ML" - that is, being able to articulate why a ML/AI model makes the choices it does in terms a human can understand.

This is an important problem. If I want to use ML to determine whether to approve somebody's loan application or grant them parole, there's an obvious interest in being able to explain why this one was accepted and that one was denied. One only has to look at the AI story rejection threads in this forum to see the acrimony that can arise when people can't see the details of how a decision was made.

There are some ways to achieve interpretable ML; for instance, one can restrict the way the model's specified so that the only rules it can apply are ones that could be conveyed to a human, and so that it's limited in how many rules it can apply simultaneously. But these do limit the capabilities of such models, so they're not very popular, and most major ML/"AI" technologies today are non-interpretable: black boxes.
(GPT pretends to be interpretable, but it's not. If I were to ask GPT a question, and then ask it to explain how it came to that answer, it would generate an explanation-shaped response, but that explanation would be unrelated to how GPT actually answered the initial question. For instance, it's possible to ask GPT "show working" arithmetic questions where it gives the correct answer, but the working is full of errors that would never have led to this answer.)
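
To make the interpretable-versus-black-box distinction concrete, here is a minimal sketch of my own (assuming scikit-learn and synthetic data; the feature names are invented): a depth-capped decision tree whose entire decision procedure prints as a short list of human-readable rules, next to a random forest that offers nothing comparable.

# Minimal sketch: an "interpretable by construction" model vs. a black box.
# Synthetic data and made-up feature names, purely for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "income", "debt", "tenure"]

# Interpretable: depth capped at 3, so the whole model is a handful of rules
# a human could read and argue with.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Black box: hundreds of trees voting together; it may score better, but there
# is no short, faithful account of why it approved one case and denied another.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print("forest accuracy:", forest.score(X, y))

The trade-off mentioned above falls straight out of it: cap the depth and the rules stay printable; lift the cap and accuracy tends to improve while the explanation evaporates.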

Even when we have programmable access to the entire ML model, all its parameters, and its internal state, it's generally not possible to translate that logic into a short set of human-interpretable rules.

Human minds are considerably less transparent than that ML model, even to their owners. The part of my brain that's composing this response to you doesn't have perfect insight into the part of the brain that recognises letters or the part that drives my fingers.

I know my partner's face well enough to pick her out of billions of people. I couldn't describe her to you well enough to allow you to reliably pick her out of ten thousand. That doesn't mean I'm lying about being able to recognise her, it just means that some knowledge is not easily condensed.

Even for rules that are straightforward enough to be expressed simply, many people absorb and apply these unconsciously without ever noticing that they've just learned a rule. This happens often in language; most native speakers have a far better instinctive understanding of the written and unwritten rules of English than they could easily articulate.

For instance, take the following passage:

"The door was round, with a brass, shiny, yellow knob. When Bilbo opened it, he saw an old bearded tall man clad in a woollen blue pointed tall hat, a grey long cloak, and black immense boots."

Most native speakers would find something very jarring in that passage. But the rule that it breaks isn't one that's explicitly taught in schools; most of us just absorb it by osmosis, and outside places like authors' forums, probably most people have never consciously thought about the rule. Even knowing that there is a rule and it's about how different kinds of adjectives are ordered, I couldn't just retrieve that rule from my brain and tell a non-native speaker how to order their adjectives; I'd have to reconstruct it by slow trial-and-error, testing different pairs of adjectives in my head to see what felt right and writing down the results before I could put them in a neat order.

So, no, I reject the idea that inability to succinctly list all the "tells" of AI-generated text disproves the possibility that humans might be able to spot some things that software can't.

(Please note: I'm not claiming that I or any other human would achieve 100% accuracy on picking AI-generated from human-written, nor that I personally am highly skilled in doing so. Merely that "cannot be reliably pointed out by sophisticated software" is not automatically equivalent to "cannot be reliably pointed out by a skilled human".)

I propose that any marker or indicator you might attribute to AI writing can also be attributed to certain types of human writing.

I propose that such questions can't always be dealt with in terms of simply-expressed "markers or indicators".


As such, you can guess at best and hope to be right. After all, LLMs strive to emulate us, and in form they can do so quite well.
 
True, but they're stealing from the authors who created the content the AI was trained on. That's the basis of writers objecting to AI: copyright theft.
OK. I'm fine-tuning my position to say that Lit. should create a category for AI, but that submissions must pass a plagiarism test. Lit. could run the test, but "authors" should be responsible for already having run it.

I'm sure there are well-established tests for what constitutes plagiarism, and that a standardized bit of software could be made available to everyone to test the results of an AI prompt.
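
To give a sense of what the simplest version of such a test might look like, here's a toy sketch of my own in Python (the filenames are placeholders, and this is nothing Lit. actually runs): count how many word 8-grams of the submission also appear verbatim in a reference text, and flag anything above a small threshold for human review.

# Toy plagiarism check: what fraction of the submission's 8-word sequences
# also appear verbatim in a reference text? Placeholder filenames throughout.
def ngrams(text, n=8):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(submission, reference, n=8):
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(reference, n)) / len(sub)

submission = open("submission.txt").read()
reference = open("reference.txt").read()
if overlap_ratio(submission, reference) > 0.05:
    print("Flag for human review")

A real checker would compare against a large corpus and try to catch paraphrase as well as verbatim copying, but the principle - overlap beyond what chance explains - is the same.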
 
OK. I'm fine-tuning my position to say that Lit. should create a category for AI, but that submissions must pass a plagiarism test. Lit. could run the test, but "authors" should be responsible for already having run it.

I'm sure there are well-established tests for what constitutes plagiarism, and that a standardized bit of software could be made available to everyone to test the results of an AI prompt.
See my comment #64 above, this thread.
 
Funny, that's not what the New York Times have said in their lawsuit - there was a post on that not long ago, where the plagiarism was pretty obvious. Blatant, in fact. Perhaps you missed it?
There's a lawsuit because not everyone agrees with what the New York Times says in its lawsuit.
 
There's a lawsuit because not everyone agrees with what the New York Times says in its lawsuit.
Have you seen the side by side text comparisons? Seems pretty clear cut when you read them. But perhaps there's a clever explanation. Lawyers are especially good with those, even when they're not convincing.
 
Have you seen the side by side text comparisons? Seems pretty clear cut when you read them. But perhaps there's a clever explanation. Lawyers are especially good with those, even when they're not convincing.
Yes. Having a very good memory isn't a breach of copyright. Plagiarism is irrelevant unless you occupy certain positions in the academy; courts of law aren't concerned with plagiarism, except, e.g., as a sidewind in an action against one's university for unlawful discrimination.
 
I think AI is probably useful in some technical fields. For instance, I could input the data from an experiment and then ask AI to analyze the data and write a report that states conclusions and the probability of those conclusions being correct. It would be much the same as an engineer using finite element analysis to model a design instead of spending days doing all the required calculations for fewer nodes in the design.
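
For what it's worth, here is a toy sketch of my own (made-up numbers, using scipy) of the kind of calculation such a report would have to wrap rather than invent - a two-sample t-test on experimental data:

# Toy sketch with made-up measurements: the statistics behind "how likely is
# this conclusion" in a generated report.
from scipy import stats

control = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8]      # hypothetical control-group data
treatment = [4.6, 4.8, 4.5, 4.9, 4.7, 4.4]    # hypothetical treatment-group data

t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.4f}")
# A small p-value means the difference is unlikely to be chance alone; a
# generated report would still need to turn that into a correctly hedged
# conclusion rather than overstate it.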

It's definitely useful in the medical field. How often do we read that a person was misdiagnosed? The diagnosis of a disease depends upon the experience of the doctor doing the examination. An AI "doctor" would have almost infinite "experience" because it would rely upon the written experiences of thousands if not millions of doctors and researchers.

I see the problem of AI in literature as being the number of paths it can take to "write" something. It is absolutely true that authors do not own words, but we do own the way those words are strung together. Every author has a unique style even though they may appear to be similar to another author.

I believe it's inevitable that there will be novels generated completely by AI, just as most action movies today are about half CGI. It's not that difficult for a human to tell the difference. The CGI is too "perfect". I think the same situation will exist for AI-generated literature as with many other unique styles of art. For a while, it will be lauded, but when everyone has one or two, the fad will fade because it will no longer be "new". It can't be new because it will rely on the word-stringing of authors of the past. The very fact that you must give relatively specific requests to AI dictates that it will read as repetitive regurgitation of some author's unique style, but without the aspects of writing that give that author a unique way of telling the story. It will end up writing in a "cookbook" fashion, just like the romance novels you can buy in the grocery store. Maybe there's a market for that, but I doubt it will be long-lasting for readers who want to see new techniques in storytelling.
 
I think AI is probably useful in some technical fields. For instance, I could input the data from an experiment and then ask AI to analyze the data and write a report that states conclusions and the probability of those conclusions being correct. It would be much the same as an engineer using finite element analysis to model a design instead of spending days doing all the required calculations for fewer nodes in the design.

It's definitely useful in the medical field. How often do we read that a person was misdiagnosed? The diagnosis of a disease depends upon the experience of the doctor doing the examination. An AI "doctor" would have almost infinite "experience" because it would rely upon the written experiences of thousands if not millions of doctors and researchers.
Right now, medicine is one of the last places I'd want to apply it to. You appear to have forgotten about AI hallucinations, where it quite happily goes and makes shit up. Not what you want in a diagnosis.

Same with the law - didn't you read the case of the lawyer who used AI to "research" case history? Again, the tool he used made up cases that never existed. Luckily, the judge checked, but the lawyer didn't.

Any search tool that presents material that may or may not be true isn't much use without fact checking everything. Where's the labour saving/benefit in that?
 
I think AI is probably useful in some technical fields.

It's definitely useful in the medical field.
Absolutely needs to be completely, totally, and permanently barred from all aspects of those areas, and from the legal field.
 
Right now, medicine is one of the last places I'd want to apply it to. You appear to have forgotten about AI hallucinations, where it quite happily goes and makes shit up. Not what you want in a diagnosis.

Same with the law - didn't you read the case of the lawyer who used AI to "research" case history? Again, the tool he used made up cases that never existed. Luckily, the judge checked, but the lawyer didn't.

Any search tool that presents material that may or may not be true isn't much use without fact checking everything. Where's the labour saving/benefit in that?
Remember, though, this is 'right now.' But the next iteration? Or the one after that? This is where the technology will come into its own, and that is where the real threat/benefit/exponential changes will come into play.
 
Remember, though, this is 'right now.' But the next iteration? Or the one after that? This is where the technology will come into its own, and that is where the real threat/benefit/exponential changes will come into play.
Agree. I'm hedging my bets. Right now, the 2023 examples I've seen, going into 2024, are so full of glitches and falsifications and logic faults that they can't be taken seriously. Five years down the track though, who knows what a "guaranteed truth" system will look like - but that's what it must be to be useful - truth-based, with not one single thing made up.
 
Agree. I'm hedging my bets. Right now, the 2023 examples I've seen, going into 2024, are so full of glitches and falsifications and logic faults that they can't be taken seriously. Five years down the track though, who knows what a "guaranteed truth" system will look like - but that's what it must be to be useful - truth-based, with not one single thing made up.
Yep, it will need to be faultless. It's rather like self-driving cars, which need to be perfect, whereas human drivers can be any level of shit, provided they can just about scrape by.
 