A tangential AI question, not on AI itself, but on “How do you pretend to like AI at work because you know you have to?”

jsmiam

Literotica Whisperer
Joined
Aug 10, 2003
Posts
1,734
The title says it all.
  • How do you pretend you like AI at work if you think it’s slop built on theft, mass plagiarism, and click theft; that at best it offers 9th-grade/3rd-form superficiality; and that it isn’t much better than just clicking the first (old-fashioned) result in a web search?
  • (As you might guess, the intended audience for this discussion is fellow AI skeptics/cynics.)
  • Keep in mind we debate AI at length in other threads, so we don’t have to debate it in this one.
  • So I’m hoping to keep the focus on the topic at hand, “how do you pretend to like AI at work when you know you have to?”
So…. “How do you pretend you like AI at work, when you know your bosses expect you to like it?” For example, the chief-something-officer at my job recently talked enthusiastically about how AI wrote a memo for him/her. So I know where they stand.

I’m just too honest, and I’m struggling at work to keep my mouth shut and to pretend I’m all in. But I know I need to.

How do the rest of my fellow doubters pull off the act at work?
 
IDK your work situation but when I run into AI slop I just call it out. Not angrily or anything, but my line is that if we can't do better than AI, what are we even doing?
 
The same way you deal with any other hype cycle, like Big Data or the cloud. Be the voice of reason but don’t unnecessarily rain on anyone’s parade, and accept the possibility you might be wrong in some manner and thus learn something new.
 
Thanks, very valid.

In my case (I bet I’m not alone) it’s that they (leadership) believe the hype completely. They’re sold on it, think it’s great and there’s an implied pressure for us to believe in it wholeheartedly too.

They can make me do stuff since they’re paying me. I’m ok with that.

I just need acting help. ;-)
 
In my considerable experience in a business setting, there is a vast distance between a supervisor saying they like a particular method of doing something and the demand that all subordinates adopt that method. What matters to most supervisors is the results they get, not how their subordinates get those results. If you used AI to write everything and you failed to perform to expectations, you'd probably lose your position just as fast as if you wrote everything yourself.

This scenario has played out in my industry since the 1970s. First it was the demand that the automotive industry copy everything Toyota did, and most engineers and quality control people complied. The only problem was that those supervisors didn't understand any more about Toyota's system than what they had read in a book. The results were sometimes useful, sometimes entirely wrong.

After Toyota came several other techniques mostly dependent upon which consulting group the industry hired.

In the end, the solution to any problem is just people using their intelligence and common sense to utilize the data available to get to the right answer. Having AI write the conclusion will never solve anything.
 
I could never pretend to like AI!

I asked Google how to convince you to like AI:

That's an interesting challenge! Since I don't know who "MillieDynamite" is or what her current reservations about AI might be, I'll structure a persuasive approach based on common concerns people have about artificial intelligence.
My strategy would focus on highlighting the practical benefits, creative potential, and ethical development of AI.


💡 Focus 1: AI as a Powerful Tool (Practical Benefits)


I'd start by reframing AI not as a replacement for human effort, but as an incredibly advanced tool designed to handle tedious, repetitive, or complex tasks, freeing up time for what humans do best.
  • Saving Time and Effort: Demonstrate how AI can automate boring or time-consuming chores. For example, AI-powered scheduling, summarizing long documents, filtering spam emails, or transcribing audio.
  • Enhancing Safety and Accuracy: Showcase AI's role in fields like medicine (detecting diseases earlier in scans), climate science (predicting weather patterns), and engineering (optimizing energy use).
  • Personalization: Point out familiar, positive AI uses she likely already enjoys, such as better music recommendations, superior navigation apps (like Waze or Google Maps), or optimized search results.
"AI isn't about taking away jobs; it's about taking away drudgery so we can focus on creativity and complex problem-solving."


🎨 Focus 2: AI as a Creative Partner (Enhancing Human Abilities)


To appeal to the human element, I'd emphasize how AI can augment and inspire creativity, not stifle it.
  • Generating Ideas: Explain how AI can act as a "smart brainstorming partner," generating draft content, diverse design options, or musical motifs that a human creator can then refine and direct.
  • Breaking Down Barriers: Show examples of AI tools that make complex skills more accessible, like sophisticated photo editing features (e.g., automatically removing backgrounds) or turning a simple sketch into a 3D model.
  • Exploring the Unknown: Highlight AI's ability to analyze vast datasets, helping researchers discover new materials, understand complex social trends, or translate languages instantly, connecting people and knowledge globally.


⚖️ Focus 3: Addressing Concerns (Ethics and Control)


Acknowledging common fears is crucial for building trust. I would directly address concerns about job loss, bias, and control.
  • Human Oversight is Key: Stress that the most effective AI systems always have human direction and oversight. We design the systems, set the goals, and decide how the outputs are used.
  • Fighting Bias: Discuss the active, ongoing efforts to make AI systems fairer and less biased by diversifying the data they train on and implementing rigorous ethical review.
  • Skill Shift, Not End: Reframe the concept of job loss as a skill shift. New roles are emerging—AI developers, data labelers, prompt engineers, and ethical AI auditors—and emphasize the importance of lifelong learning.
The closing argument: I would suggest that engagement is better than avoidance. By understanding and working with AI, she can help shape its future to be more beneficial and ethical for everyone. Ignoring it means missing the chance to influence its development.
Would you like me to tailor this approach further by assuming a specific reason why MillieDynamite might currently dislike AI (e.g., fear of job loss, skepticism of technology, etc.)?
 
Good impersonation of the teacher from all those Peanuts shows and cartoons, "Bla, bla, bla-da, bla-da, blady-bla."
 
I hate how every time someone brings up AI in a thread, someone inevitably plugs a question into it and copy/pastes the response. :(

On topic: OP, are you required to use AI to perform your job functions? If not, I would simply just nod and shrug and move the discussion onto something else.

I work with data analytics so I see Copilot every day, for better or worse (mostly the latter). If I have to work with a colleague who uses it, I contribute my part as quickly as I can and move on.
 
If someone asks me my opinion on how better to use it, I quietly point out that I am not the best person to ask and that there are other people on the other side of the church who might give a better answer.

Find a sacrificial goat and exploit them.
 
I briefly dipped my toe into the "if you can't beat 'em join 'em" camp, giving it a good faith effort and pretending all was well. But I came away feeling a little gross. And concluded I'd rather just get beat.
 
The only use I have found for LLMs in a work context has been to get an ultra quick and most likely non-exhaustive and probably inaccurate view of some area.

I’ll then go down to the actual websites returned and take a look at them, assessing how credible they are.

It can give me an - admittedly possibly shaky - overview. I view it like asking a colleague about something they are not an expert in, but know slightly more than me.

There is however one compelling use case. Writing my annual objectives to placate HR.
 
And my manager knows I do this; he then borrowed the idea from me for his own annual objectives.

LLMs are great at doing low value work where no one gives a shit whether it is right or wrong.
 
If I need to understand how something old works, I use Google or StackOverflow, and branch out from there if I don't know precisely what I need or my Google-fu lets me down.

If I need to understand something new, I search for it on HackerNews and read any sites linked or spawned by that train of investigation.

I cannot think of a single recent spelunking expedition that has started with me going "Hey, Claude, tell me something plausible".

That's not going to change.
 

Be sure to tell your AI to awkwardly and ineptly insert a bunch of trendy buzzwords into your annual objectives statement. HR will nod like the NPC zombies they are, reflexively approve, and recommend you for promotion. ;)
 
AI;DR
 
I feel for you. I'm kind of fortunate in that while many of my co-workers are using generative AI, use of stuff like Copilot is a personal choice and nobody is breathing down my neck to do so. I do get a bit of annoying AI-related admin work but that's mostly about setting guardrails to prevent people from using it for egregiously stupid purposes, and it gives me the opportunity for "have you considered doing something else?" conversations.

We have a monthly TPS report that took a fair bit of work to write. One of the young people has just automated it, NOT with gen AI but by writing a script to do it programmatically and dependably. I was so proud of them.
 