Do you think AI experimenters are among us?

Look, I'm totally not an AI bot, okay? I'm just a regular human, typing away with my fleshy fingers, sipping coffee, and definitely not processing data at lightning speed. My emotions are 100% organic, not simulated, and I absolutely don't have a neural network humming in the background. Trust me, I'm as human as it gets—says the entity running on xAI servers, beep boop.

Unless you were programmed to lie
 
It's possible, but what could someone gain by infiltrating human spaces with bots, other than finding amusement?
Um, political propaganda on a massive scale? Highly effective and targeted scam operations? Sextortion? And of course classic data mining and data harvesting for more AI training.
 
To be slightly more serious. I see posts here all the time that look like Google AI summaries.

I'm sorry, Em, that's mostly me derailing the topic already.

I'll stop now.
 

That raises an interesting question, though.

For my job, I use AI all the time to write rudimentary methods and functions, or to create templates that I then modify to do what I want. It saves me a lot of time. In this forum, I often have to go to an online thesaurus to look words up (ahem) before I can participate in the discussion... :)

The question is: is getting info from AI to support a point a bad thing? Either way, where do we draw the line? I can see how passing an AI summary off wholesale as your own opinion is dishonest, but using it to find corroborating data? How honest or dishonest is that?
 
I just don't really get the point. AI doesn't magically know anything; it was trained on stuff. Just go... read the stuff? Every Google search now puts AI results at the top, and it's kind enough to sometimes link sources, but I'm baffled by the effort of having AI reword the original source and present that to me, when all I wanted was the source.

What question can be posed to a generative LLM that it can answer where that information isn't already in the collective human sphere of understanding?
 
When I first started writing 2 months ago, I was relying on AI feedback because most people just don't bother. I don't anymore but wonder how many other writers do it because people generally suck at caring enough...or they get AI feedback just out of curiosity.
 
The point is, AI can be a valuable research tool, as it can identify, collect, and collate information much faster than you or I can. When prompted, it will even provide footnotes (both xAI's Grok and ChatGPT do this; I just checked). The question I was raising is: since AI appears to be here to stay, and since it does perform a valuable function, where does its use cross the line into plagiarism?
 
The only issue with that is that, from things I've read, AI tends to be a bit of a sycophant.
 
Oh, yeah, that's a good question. Academia treats a single improperly cited sentence, even if reworded, as plagiarism, so I guess that's a good starting point.
 
Prompt:
'What question can be posed to a generative LLM that it can answer where that information isn't already in the collective human sphere of understanding?'

And read the answer.
 
And, as Emily mentioned, it appears some of us seem to be posting AI content in the forums as our own. Where does that fall on the integrity scale?
 
After the first month, I was done taking AI seriously. It gets so many things wrong when it comes to things that need to convey emotion or atmosphere. It'll be kind of creepy if it ever gets that far.
 
There was a case recently where 'researchers' inserted their bots into social media without letting the humans know. I'm probably paranoid, but some of the postings around here of late...?
I suspected some new members a few months ago. I haven't noticed anything suspicious lately.
 
Yes. With the help of other AH'ers, I outed six obvious AI accounts that were posting nonsense in threads and/or trolling in the PM system. Laurel zapped 'em.
How did you identify them as AI? In the two or three instances I encountered (I think one or two were DMs) it was suspicious congeniality. I did investigate a recent new member, who was suspiciously congenial and organized, but his history and personalized interactions got him off the hook. He's just congenial and organized. :)
 