How do you use AI, and what do you recommend be avoided?

AG31

Literotica Guru
Joined
Feb 19, 2021
Posts
4,112
I can't believe I haven't created a thread like this, but I did create a thread asking if you're polite to AI systems. I'm believing more and more that we need to familiarize ourselves with AI so we know what to avoid, in our own practices and in public policy.

How I use it.
Where can I watch a particular TV show.
How can I find a particular feature on some web site, e.g., find all my transactions for a particular account. (Some websites are just awful.)
Here is the most spectacular, although I haven't yet tested the results. It's what gave me the idea for this thread. (Originally I had "It's what prompted this thread," but I realized it made me sound like an AI tool....)

Over two dozen units in our condo have gotten new thermostats. The "manual" is unbelievably confusing. It is basically useless. It's "programmable" so there are lots of buttons and instructions, but most of us opted for the simple, non-programmable version. So today I prompted ChatGPT with this: "Can you generate a user friendly manual for a Salus ST100ZB thermostat when it is not programmable?" It did just that. I can't wait to try it. You can enter that prompt to see the kind of output. At the end it said, "If you want, I can also generate a quick one page guide." I asked for that too. If it's not loaded with hallucinations, this will be major for me!

How I don't use it.
To generate or edit stories. It might get better and better, but I want the sense of human interaction.
To generate a companion. (Yuch!!!)

Curiosity
What is the meaning of life?
It gave me a fairly respectable summary of major religions and philosophies.
 
I use it as a particularly stupid and ineffectual search engine. At Nightcafe I use it to generate images, the best of which become book covers. Sometimes I get silly and convert photographs of luddite friends to Anime images. I will sometimes use it to find the proper meter of a poem. When I'm in an evil mood I ask it to find a rhyme for "Nantucket." That's about it.
 
I can't believe I haven't created a thread like this, but I did create a thread asking if you're polite to AI systems. I'm believing more and more that we need to familiarize ourselves with AI so we know what to avoid, in our own practices and in public policy.

You haven't? It seems like you're creating an AI thread every week.
 
I can't believe I haven't created a thread like this, but I did create a thread asking if you're polite to AI systems. I'm believing more and more that we need to familiarize ourselves with AI so we know what to avoid, in our own practices and in public policy.

How I use it.
Where can I watch a particular TV show.

https://www.justwatch.com/ will do that for you more reliably.

Over two dozen units in our condo have gotten new thermostats. The "manual" is unbelievably confusing. It is basically useless. It's "programmable" so there are lots of buttons and instructions, but most of us opted for the simple, non-programmable version. So today I prompted ChatGPT with this: "Can you generate a user friendly manual for a Salus ST100ZB thermostat when it is not programmable?" It did just that. I can't wait to try it. You can enter that prompt to see the kind of output. At the end it said, "If you want, I can also generate a quick one page guide." I asked for that too. If it's not loaded with hallucinations, this will be major for me!

That's an important "if"...

I use non-generative forms of "AI" all over the place, though I'd usually call them "machine learning". Interpreting voice input, estimating how many widgets we're likely to sell next quarter, lots of applications.

I use generative AI mostly inadvertently (if I forget to include "-ai" in my search terms), or sometimes to generate cautionary examples for showing colleagues why they should be wary of generative AI.
 
I think I've said most of this before in various threads.

Professionally I use AI for voice recognition and transcription purposes to comply with the Americans with Disabilities Act.

In my writing process, I use it for plausible nonsense. Mostly that means the creation of constructions with similar phonemes to existing real-world phrases for science fiction and fantasy purposes -- doesn't make sense to have a character say Jesus Christ! in a world without Jesus. Here's a sample prompt:
Five strings of four syllables each. The first syllable should begin with a fricative. The second should begin with a sibilant and end with a sibilant or a fricative. The third should begin with a hard sound, and the fourth should end with a plosive.
Similarly, I use it to generate syllable matrices I can use for names. Often, as a background exercise while I'm writing, I'll plug a character description I've already written into an AI image generator (I'm using Stable Diffusion XL as my model right now) and see if anything interesting happens. Usually the answer is no, but earlier today a character description for a Valentine's story resulted in something I could use for an urban fantasy/paranormal romance.

Outside that, I don't use AI.
 
I use non-generative forms of "AI" all over the place, though I'd usually call them "machine learning". Interpreting voice input, estimating how many widgets we're likely to sell next quarter, lots of applications.
I'm generally in the same boat. AI designed as tools for specific functions to analyze data and make crunching numbers easier is fine.

Generative AI is awful, however, and I will never use it. Feeding prompts into a machine that steals from things humans have created and regurgitates them back at me is not something I'm interested in, and it's quite disheartening how many authors here are okay with it. Even if you're against using it to create stories, using it to create AI imagery (which honestly looks cheap and tacky 100% of the time) is still stealing from human artists and is both morally and ethically indefensible. Yes, even if it's used to promote your "free" stories.
 
Knowingly:
  • To generate images for Birthday and Christmas cards that have a particular meaning to the recipients
  • Transcribing and translating meetings (the latter with decidedly mixed results)
Probably:
  • By allowing Grammarly to check my work for grammar & spelling
  • By using a recent OS on my phone, tablet, laptop, and desktop
  • On assorted websites, including chatbots that want to help but don't
Not:
  • To generate stories or presentations

Whether you like it or not, AI is already an almost unavoidable part of our lives. It is here; it is not going away. Like any new technology, the smart thing is to find good uses for it. Trying to avoid it reminds me of the Luddites.
 
Whether you like it or not, AI is already an almost unavoidable part of our lives. It is here; it is not going away. Like any new technology, the smart thing is to find good uses for it. Trying to avoid it reminds me of the Luddites.
A widely misunderstood group of people, the Luddites.
 
Generative AI is awful, however, and I will never use it. Feeding prompts into a machine that steals from things humans have created and regurgitates them back at me is not something I'm interested in, and it's quite disheartening how many authors here are okay with it. Even if you're against using it to create stories, using it to create AI imagery (which honestly looks cheap and tacky 100% of the time) is still stealing from human artists and is both morally and ethically indefensible. Yes, even if it's used to promote your "free" stories.

Yes, it looks cheap and tacky. I don't care. That's not the point. I'm not using it for promotional purposes; no one sees it but me. It's there to show me random potential images of my characters and of the environments they might be in. It's a thinking tool because I don't know how to draw and wouldn't have time to anyway.

As an aside: I think it's interesting the outrage over generative AI usage, which I basically share when it's used for creative/commercial purposes, and the collective shrug about the way modern authors are being allowed to blatantly rip off others. It came up in the romantasy thread, but Sarah Maas has made quite a bit of money directly using lines from movies and other people's books. Sable Sorenson's Direbound features a character named Stark who lives in Sturmfrost in the north and has a direwolf. Ali Hazelwood's editor sends her a list of tropes to assemble into novels (this one's going to be enemies-to-lovers and there's a room with only one bed which causes the grumpy guy to fall in love with the happy-go-lucky girl). Entangled is cranking out paranormal smut with four authors simultaneously working on one manuscript under a single pen name, and they're getting sued over it because it turns out they're plagiarizing from their own authors. And that doesn't even include the amount of fan fiction that's getting lightly reworked to remove IP references and published as wholly original work. Love Hypothesis, Hurricane Wars -- I'm sorry, it's just ReyLo and Dramione fanfic all the way down.

There is a whole cottage industry of HumanGPT content that's just regurgitations of works by other people, and no one really cares about it, and it sucks.
 
The only way I use AI is where it's baked in to things I use with non-AI motivations and often with no awareness that there's AI in it.
 
I've actually seen quite a bit of AI generated art that looks quite nice. The "It is all slop" crowd is just demonstrating their own bias.
Sure, plenty of it is, but one can say the same of the stories on Lit.
 
Some are using diffusion models for art and video; some are using attention models for language. The next big thing is to use diffusion models for language. Apparently, it's more accurate and much cheaper. Time will tell.
 
I've had an overall great experience with ChatGPT on a few topics. I've been getting more interested in philosophy, and it gives these great introduction-style lists of things to look into. When I asked it 'Why does anything exist rather than nothing?' it was able to give me about a dozen theories. When I asked it to reduce that to a list that doesn't include religion/spirituality, it narrowed it down by about half while also explaining the origins of those philosophies and where they're most common among different scientific fields. I was able to do the same thing with subjects like consciousness and ethics, and based on the ones I aligned with personally, I asked it for easy-to-read introduction books, and it spit out a long list of popular books on each subject.

I find myself using ChatGPT more than Google now when it comes to asking random questions. And because it can provide links to things like recipes for cooking or guides/walkthroughs for games, I just like being able to avoid the google search engine where the top few results are just paid 'on-topic' advertisements instead of the actually best or most relevant results.
 
I don't know whether it's formally called AI anymore, but the closest I come is using algorithms on music streaming services. That, and maybe search engines inadvertently.

I don't use it in any other way, and I don't recommend anyone use it for anything your own brain can do just as well. Our brains are important. We evolved them for specific reasons, and we should be trying to use them well enough and often enough to understand those reasons. The brain is a "use it or lose it" device; it prunes neurons we don't use, meaning we end up with fewer neural capabilities if we substitute AI for our own native abilities.

I don't know about anything else, but I need all the brainpower I can get.
 
Most recently, I've used it to prepare for a math exam - with great success! I can feed it a whole question and it can provide not just the answer, but also step-by-step explanations of what is done and why. And provide links to sites that can verify/provide background to the methods.

For work, I use it for generating complex formulas in various applications. I can do the same myself, but it's just much faster and less likely to have silly mistakes.

For ... adult purposes, I like creating chatbots/scenarios that can give me an interactive experience - somewhere between a pick-your-own-path story and a roleplay. It isn't as creative or on point as a good RP partner, but it's better than most. It's always available, reliable and fast. And it (almost) always shares your kinks ;)

The only thing I actively avoid using AI for is searches where I know it is likely to get things wrong (e.g. things with lots of newer details). And for fact-based searches, I always click through and check the links the answers are based on.
 
I don't know whether it's formally called AI anymore, but the closest I come is using algorithms on music streaming services.
Why is it that the website where I play sudokus has learned that I like to finish with the 9s, but Spotify just can't seem to understand that I'll click past any song by Michael Jackson? And that I don't want to listen to The Waterboys?

I'm almost starting to think that the algorithms are for their benefit, not mine.
 
I use it as a search engine as a starting point. Then I research its output.

Work has shoved Copilot down our throats. I cannot stand it. I tried it in Outlook, and what it suggested was NOT ME at all. So I do not use it.
I also laugh when you have a large document and it offers to summarize it for you. What is the point of that?

I freaking hate AI, but it is here to stay and ruining our lives.
 
Some are using diffusion models for art and video; some are using attention models for language. The next big thing is to use diffusion models for language. Apparently, it's more accurate and much cheaper. Time will tell.
What's a diffusion model? I know, I know. I could Google it and let AI tell me, but I'd rather continue in a human conversation.
 
I don't recommend anyone use it for anything your own brain can do just as well.
Probably no one uses it for things their brain can do just as well. I view it mostly as a much more efficient search engine. It would take me much longer (2 minutes instead of 2 seconds? 2 days instead of 2 seconds?) to search the web for the same info.
 
Over two dozen units in our condo have gotten new thermostats. The "manual" is unbelievably confusing. It is basically useless. It's "programmable" so there are lots of buttons and instructions, but most of us opted for the simple, non-programmable version. So today I prompted ChatGPT with this: "Can you generate a user friendly manual for a Salus ST100ZB thermostat when it is not programmable?" It did just that. I can't wait to try it. You can enter that prompt to see the kind of output. At the end it said, "If you want, I can also generate a quick one page guide." I asked for that too. If it's not loaded with hallucinations, this will be major for me!
Turns out the "manual" it generated was for the model with batteries. Ours don't have batteries. I never did dig deep enough to know if it was otherwise useful. It apologized and suggested I send it a copy of the manual I had. I did, and it generated another one page guide. I sent that around to other thermostat holders, and am hoping one of the more engineering-minded will assess it again.

@ElectricBlue, you keep insisting that I can do better without AI. The installer's only help is to give us the useless manual. Do you have a suggestion for where we should turn other than AI?
 
What's a diffusion model? I know, I know. I could Google it and let AI tell me, but I'd rather continue in a human conversation.
Basically, in a diffusion model, you start with a prompt and a batch of random noise -- static, basically. Over a series of steps using one of many methods of statistical sampling, the model denoises the batch, removing randomness until what's left is something like the prompt. The model operates on a series of checkpoints that have been trained using other people's data -- this is the stealing of creative works that folks don't like. I give the model a canvas covered in layers of paint and a knife, and tell it to remove everything from the canvas that's not a house. At the end, I have a house, or maybe I have Dr. House, or maybe I have Dr. House in a house, and he's reading House, the 2006 novel, and watching House!, the comedy, while listening to house music. While using Stable Diffusion last night for some concept art, it took me a couple of iterations to realize that "duster coat" was causing the model to look for feather dusters, leading the character to always be holding an oversized dusting device, at least when it wasn't poking out the back of his head.

In text, large language models basically predict one word at a time, sequentially; diffusion models predict blocks of text in parallel. That has advantages for speed but not currently for accuracy. But this is very much a live R&D avenue for the big players in the AI world.
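For the curious, the denoise-toward-the-prompt loop described above can be sketched in a few lines of Python. This is strictly a toy under simplifying assumptions: a real diffusion model trains a neural network to predict the noise at each step and never sees the target directly, whereas here the target is handed straight to the loop purely to show the shape of the process (start from pure static, remove randomness over a schedule of steps).

```python
import random

def toy_denoise(target, steps=50, seed=0):
    """Toy diffusion-style denoising loop.

    Starts from pure random noise (the 'static') and, over a schedule
    of steps, blends out the randomness until only the target remains.
    A real model would *predict* the noise to remove; here we cheat and
    use the known target, purely to illustrate the iterative loop.
    """
    rng = random.Random(seed)
    x = [rng.uniform(-1, 1) for _ in target]   # pure static
    for step in range(steps):
        alpha = (step + 1) / steps             # denoising schedule: 0 -> 1
        # each step removes a bit more noise, keeps a bit more signal
        x = [(1 - alpha) * xi + alpha * ti for xi, ti in zip(x, target)]
    return x

# a tiny "image": three numbers standing in for pixels
target = [0.2, -0.5, 0.9]
result = toy_denoise(target)
# the final step (alpha == 1) lands exactly on the target
```

The analogy to the knife-and-canvas description: the schedule decides how much "paint" gets scraped away per pass, and the target plays the role of the prompt guiding what survives.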
 
Turns out the "manual" it generated was for the model with batteries. Ours don't have batteries. I never did dig deep enough to know if it was otherwise useful. It apologized and suggested I send it a copy of the manual I had. I did, and it generated another one page guide. I sent that around to other thermostat holders, and am hoping one of the more engineering-minded will assess it again.
I'd do this, involve an engineer who knows what they're doing. Right from the start.
@ElectricBlue, you keep insisting that I can do better without AI. The installer's only help is to give us the useless manual.
Didn't you just say up above that the AI produced the wrong manual?
Do you have a suggestion for where we should turn other than AI?
The AI is only as good as the raw material it's given. If the installation manual is useless, the AI composite of all similar installation manuals is going to contain an element of that wrongness. You've said that yourself, it gave you the wrong manual.

If there's wrong information in one part of its response: a) how do you know that; and b) how can you be assured that everything else is correct?

That's the fundamental issue with AI hallucinations - if you don't know yourself, you have no way of knowing what's correct, what's not. And if one element is shown or known to be wrong, that surely casts immediate doubt on all the other content.

It's like one of those quadrant models:

Know you know / Don't know you know

Know you don't know / Don't know you don't know

AI makes up all the rest.
 