ShelbyDawn57
Fae Princess
Joined: Feb 28, 2019
Posts: 4,078
"Linking to djmac1031... please wait... linking... linking... I mean... if my profile picture wasn't already a dead giveaway..."
Look, I'm totally not an AI bot, okay? I'm just a regular human, typing away with my fleshy fingers, sipping coffee, and definitely not processing data at lightning speed. My emotions are 100% organic, not simulated, and I absolutely don't have a neural network humming in the background. Trust me, I'm as human as it gets—says the entity running on xAI servers, beep boop.
"It's possible, but what could someone gain by infiltrating human spaces with bots, other than finding amusement?"
Um, political propaganda on a massive scale? Highly effective and targeted scam operations? Sextortion? And of course classic data mining and data harvesting for more AI training.
To be slightly more serious. I see posts here all the time that look like Google AI summaries.
Or AI profiles posing as real people?
I have received slight variations of the message below three or four times in the last two weeks from "women".
"I'm currently in Las Vegas, NV with my business trip"
"I am 38 years old this year. How old are you this year?"
And this from Popular Science that I mentioned above: https://www.popsci.com/technology/chatgpt-turing-test/
"I have received slight variations of the message below three or four times in the last two weeks from 'women'."
Sadly, AI would be embarrassed by the grammar and spelling in some of the DMs I get.
"To be slightly more serious. I see posts here all the time that look like Google AI summaries."
That raises an interesting question, though.
"Unless you were programmed to lie"
Well, I'm definitely not ChatGPT...
Real-world interaction with people. Fine-tuning the model for future use.
"That raises an interesting question, though."
I just don't really get the point. AI doesn't magically know anything; it was trained on stuff, so just go... read the stuff? Every Google search now has AI results at the top, and it's kind enough to sometimes link sources, but I'm just baffled by the effort of having AI reword the original source and present that to me anyway, when all I wanted was the source?
For my job, I use AI all the time to write rudimentary methods and functions or to create templates that I then modify to do what I want. It saves me a lot of time. In this forum, I often have to go to an online thesaurus to look words up (ahem) before I can participate in the discussion...
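For instance, this made-up skeleton is the sort of thing I mean: the AI drafts something this bare-bones, and I fill in the real logic and naming myself.
[CODE]
# Made-up illustration only: the kind of bare-bones helper an AI chatbot
# might draft as a starting point, which I then rework for the actual job.
import csv
from pathlib import Path

def load_records(path, required_fields):
    """Read a CSV file and return only the rows with every required field filled in."""
    records = []
    with Path(path).open(newline="", encoding="utf-8") as handle:
        for row in csv.DictReader(handle):
            if all(row.get(field) for field in required_fields):
                records.append(row)
            # TODO: decide how to handle incomplete rows (skip, log, or raise)
    return records
[/CODE]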
Question is, is getting info from AI to support a point a bad thing? Either way, where do we draw the line? I can see passing an AI summary off completely as your own opinion as dishonest, but using it to find corroborating data? How honest or dishonest is that?
"I just don't really get the point. AI doesn't magically know anything; it was trained on stuff, so just go... read the stuff? Every Google search now has AI results at the top, and it's kind enough to sometimes link sources, but I'm just baffled by the effort of having AI reword the original source and present that to me anyway, when all I wanted was the source?"
The point is, AI can be a valuable research tool, as it can identify, collect, and collate information much faster than you or I can. When prompted, it will even provide footnotes (both xAI and ChatGPT do this; I just checked). The question I was raising is: since AI appears to be here to stay, and since it does perform a valuable function, where does its use cross the line into plagiarism?
What question can be posed to a generative LLM that it can answer where that information isn't already in the collective human sphere of understanding?
"When I first started writing 2 months ago, I was relying on AI feedback because most people just don't bother. I don't anymore, but I wonder how many other writers do it because people generally suck at caring enough... or they get AI feedback just out of curiosity."
The only issue with that is that, from things I've read, AI tends to be a bit of a sycophant.
"The point is, AI can be a valuable research tool, as it can identify, collect, and collate information much faster than you or I can. When prompted, it will even provide footnotes (both xAI and ChatGPT do this; I just checked). The question I was raising is: since AI appears to be here to stay, and since it does perform a valuable function, where does its use cross the line into plagiarism?"
Oh, yeah, that's a good question. Academia treats a single improperly cited sentence, even if reworded, as plagiarism, so I guess that's a good starting point.
"I just don't really get the point. AI doesn't magically know anything; it was trained on stuff, so just go... read the stuff? Every Google search now has AI results at the top, and it's kind enough to sometimes link sources, but I'm just baffled by the effort of having AI reword the original source and present that to me anyway, when all I wanted was the source?"
Prompt: What question can be posed to a generative LLM that it can answer where that information isn't already in the collective human sphere of understanding?
"Oh, yeah, that's a good question. Academia treats a single improperly cited sentence, even if reworded, as plagiarism, so I guess that's a good starting point."
And, as Emily mentioned, it appears some of us seem to be posting AI content in the forums as our own. Where does that fall on the integrity scale?
"And, as Emily mentioned, it appears some of us seem to be posting AI content in the forums as our own. Where does that fall on the integrity scale?"
Pretty low.
"I'm just baffled by the effort of having AI reword the original source and present that to me anyway, when all I wanted was the source?"
It's literally advertising Google's AI services to people who have budgets for AI.
"There was a case recently where 'researchers' inserted their bots into social media, without letting the humans know. I'm probably paranoid, but some of the postings around here of late...?"
I suspected some new members a few months ago. I haven't noticed anything suspicious lately.
"Yes. With the help of other AH'ers I outed six obviously AI accounts that were either posting nonsense in threads and/or trolling in the PM system. Laurel zapped 'em."
How did you identify them as AI? In the two or three instances I encountered (I think one or two were DMs), it was suspicious congeniality. I did investigate a recent new member, who was suspiciously congenial and organized, but his history and personalized interactions got him off the hook. He's just congenial and organized.