Is AI actually helpful?

It was DEC, so of course it was octal.
My first system in the Navy was an AN/UYK-7.

It was all octal. I learned to read octal for that. That thing was a beast: no chips, all discrete components on little cards, each card a separate logic circuit.

The display was just blinking lights showing the contents of the registers and the current command. One fun thing is that the display had a lamp test button that would light up all the register positions, and it would write those as ones in the main register, which would promptly crash whatever was running at the time.
 
And here's a dilemma: Companies hire hacker whizbangs to automate all kinds of functions. The functions change as the devices 'improve'. Companies replace customer service with robot voices, bots, and FAQs that don't know the user is on a three-year-old machine and can't help. Gosh, it makes sense that we write about sex.
 
I remember an I've Been Moved salesman telling me that when customers asked, "Yes, but how does it work?" his response was, "Just fine."
As long as I'm being dragged down memory lane, I have two related stories.

My first real job out of college, I was helping the company choose its first in-house computer. We needed it to run both accounting s/w and in-house engineering stuff.

IBM responded to our RFP with something that would have been fine for the accounting but simply didn't have the beef to run the engineering stuff, so they did not make it out of the first round. The salespeople arranged a visit to the President and told him there was a problem in the s/w group, implying that we should all be replaced. Fortunately for me, the prez's former job was as a veep at a s/w company, so he knew enough to talk to us and then laughed it off. Otherwise, I might have had a very short career. It was a while before I would say anything nice about IBM. Fast forward several decades and my SO worked for one of the IBM research labs. Oh well.

Also decades later, a small DB company I was well familiar with was selling against Oracle to a group at Boeing (acoustical engineers). When the small company won the contract, Boeing told them why -- Boeing had asked the Oracle salesperson if Oracle could handle complex arithmetic (numbers with real and imaginary components, which are used heavily in acoustical engineering). The salesperson answered that "Some of the numerical applications written on Oracle are very complicated." Of course, Oracle salespeople were only slightly more honest than ChatGPT about what they know.
 
Boomers need teenagers to help them pay their bills.

Gosh honest truth. As commented elsewhere, my wife is technologically illiterate. She is unable to independently navigate a web page. Many, many tearful sessions trying, but if [whatever] isn't in the same place it was on another page she "learned", she'll throw her hands up in defeat. We bought a Roku TV three years ago... ohgawd... that's been whole bunches of (NOT!) fun. At least once a week she pokes at the remote trying to make something happen, and I have to untangle it for her, usually with two or three button presses, which I show her, and she summarily ignores/forgets the lesson.

I have all bills come to the house hardcopy because if something were to happen to me, even a short hospital stay, she'd be hosed. "But I learned to drive a manual transmission car in San Francisco!"

...sigh...

And I get accused of de-[Ruby on]-railing threads…

This one was due.
 
Hm, haven't read the whole thread. But for coding, I find AI extremely useful for learning. I ask questions about functions and syntax. I ask about working with Excel or Access. I'll get told something that mentions something else I don't know about, and I look that up. It's like an interactive reference book and Stack Exchange, except with those I'm always looking up a similar situation and trying to apply it to my situation; with ChatGPT it talks about my exact situation. And if it gets it a little bit wrong, I go "no, blah blah," and then it gets it right.

So it's extremely useful for learning and figuring things out. But if I were so knowledgeable that I didn't need to look things up, then it would probably slow me down, because I'd be stopping to ask it stuff when I could just go.
 
Hm, haven't read the whole thread. But for coding, I find AI extremely useful for learning. I ask questions about functions and syntax. I ask about working with Excel or Access. I'll get told something that mentions something else I don't know about, and I look that up. It's like an interactive reference book and Stack Exchange, except with those I'm always looking up a similar situation and trying to apply it to my situation; with ChatGPT it talks about my exact situation. And if it gets it a little bit wrong, I go "no, blah blah," and then it gets it right.

So it's extremely useful for learning and figuring things out. But if I were so knowledgeable that I didn't need to look things up, then it would probably slow me down, because I'd be stopping to ask it stuff when I could just go.
I'm ignernt, but isn't a compiler essentially A.I.? Who actually writes in machine language now...?
 
I'm ignernt, but isn't a compiler essentially A.I.? Who actually writes in machine language now...?

Lots of folks in the microcontroller world. You know, things like Arduinos. There are higher-level languages, of course, but when you have something twiddling hardware where timing is crucial, you have to move the bits by hand, virtually speaking.
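For the curious, a rough sketch of what that looks like on an Arduino-class AVR. I'm assuming an ATmega328P (the classic Uno chip) and the avr-libc register names; the pin choice is just for illustration:

/* Sketch only: assumes an ATmega328P built with avr-gcc/avr-libc. */
#include <avr/io.h>

int main(void)
{
    DDRB |= (1 << DDB5);        /* make PB5 (the Uno's pin-13 LED) an output    */
    for (;;) {
        PINB = (1 << PINB5);    /* writing a 1 to PINB toggles the pin; this    */
                                /* typically compiles to a single instruction,  */
                                /* so the timing is predictable per clock cycle */
    }
}

No library calls, no scheduler -- just poking a register, which is why folks in that world still count instructions and drop to inline assembly when even this isn't tight enough.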
 
I'm ignernt, but isn't a compiler essentially A.I.? Who actually writes in machine language now...?
The scary thing is that someone will ask ChatGPT to write a compiler and people will use it, essentially wasting a huge amount of resources to produce buggy code that doesn't work, because it's normal for things to not work.
 
I'm ignernt, but isn't a compiler essentially A.I.? Who actually writes in machine language now...?
Not at all. It's a program that converts human-readable code into various levels of code that the computer can execute. You don't need AI for that, and the first compiler was written in 1951-52 by Grace Hopper.

As for machine language, it's an older game, but RollerCoaster Tycoon was written in assembly. Not many are still writing directly in it, but embedded programmers still write in C.
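To make the "converts human-readable code" part concrete, here's a toy example -- one line of C and roughly the x86-64 assembly a compiler emits for it at -O2. Exact output varies by compiler and flags, so treat it as a sketch, not gospel:

/* add.c -- the human-readable side */
int add(int a, int b)
{
    return a + b;
}

/* Roughly what gcc -O2 produces on x86-64 (Intel syntax):
 *
 *   add:
 *       lea eax, [rdi + rsi]   ; sum the two argument registers into eax
 *       ret                    ; eax carries the return value
 */

Every step of that translation is deterministic pattern matching and bookkeeping -- no statistics, no training data.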
 
As long as I'm being dragged down memory lane, I have two related stories.

My first real job out of college, I was helping the company choose its first in-house computer. We needed it to run both accounting s/w and in-house engineering stuff.

IBM responded to our RFP with something that would have been fine for the accounting but simply didn't have the beef to run the engineering stuff, so they did not make it out of the first round. The salespeople arranged a visit to the President and told him there was a problem in the s/w group, implying that we should all be replaced. Fortunately for me, the prez's former job was as a veep at a s/w company, so he knew enough to talk to us and then laughed it off. Otherwise, I might have had a very short career. It was a while before I would say anything nice about IBM. Fast forward several decades and my SO worked for one of the IBM research labs. Oh well.

Also decades later, a small DB company I was well familiar with was selling against Oracle to a group at Boeing (acoustical engineers). When the small company won the contract, Boeing told them why -- Boeing had asked the Oracle salesperson if Oracle could handle complex arithmetic (numbers with real and imaginary components, which are used heavily in acoustical engineering). The salesperson answered that "Some of the numerical applications written on Oracle are very complicated." Of course, Oracle salespeople were only slightly more honest than ChatGPT about what they know.
Similar tale: I was working for an educational company whose products had lots of components. I figured we could use a handy-dandy database program to catalog the products -- pretty much spreadsheet stuff -- and then pour the results into a marketing/sales program that would use the resulting data. Nope. Everything had to be custom-tailored. What was unforgettable was that all the software engineer 'tailors' were Orthodox Jewish guys in full regalia.
 
I'm ignernt, but isn't a compiler essentially A.I.? Who actually writes in machine language now...?
Short answers: no, and almost no one.


Longer answers:

What most people mean by AI in these discussions is actually LLMs, which is most definitely not what compilers are doing. There are lots of algorithmic ways of translating code from one language to another. Even by the broader Russell and Norvig sense of AI, most compilers have no AI -- some optimizations now do.

@MrPixel already gave a better answer for the second half. Knowing machine language is also very useful for trying to reverse-engineer viruses and the like.
 
I'm ignernt, but isn't a compiler essentially A.I.? Who actually writes in machine language now...?
Somewhat like a word processor highlighting the misspelled words or recommending the correct spelling when your cursor hovers over it.

The next evolution might be to offer a better way of phrasing a sentence, then the whole paragraph, and possibly the entire chapter.

But maybe I'm getting ahead of myself.

EDIT: But then again, my parents struggled to comprehend that the cellphone I gave them could call me anywhere in the country at no extra cost! They were forever trapped in their local-call-area mindset.
 
Short answers: no, and almost no one.


Longer answers:

What most people mean by AI in these discussions is actually LLMs, which is most definitely not what compilers are doing. There are lots of algorithmic ways of translating code from one language to another. Even by the broader Russell and Norvig sense of AI, most compilers have no AI -- some optimizations now do.

@MrPixel already gave a better answer for the second half. Knowing machine language is also very useful for trying to reverse-engineer viruses and the like.
This raises a question that is hard to articulate but that most beneficiaries of A.I. are curious about: How to put it... A.I. is booming as a growth industry because it has the potential to give 'added value' to many fields. Shades of the first Silicon Valley boom. Today the garage, tomorrow that multi-billion-dollar IPO. So what are humans actually doing to drive A.I.? There are lots of ways to throw data at a computer. But maybe the question is: How does the algorithm that I write turn into one a computer writes for itself? How are programmers deciding what intelligence is? And like Asimov, are they charged with telling computers what idiocy and stupidity are?
 
Somewhat like a word processor highlighting the misspelled words or recommending the correct spelling when your cursor hovers over it.

The next evolution might be to offer a better way of phrasing a sentence, then the whole paragraph, and possibly the entire chapter.

But maybe I'm getting ahead of myself.

EDIT: But then again, my parents struggled to comprehend that the cellphone I gave them could call me anywhere in the country at no extra cost! They were forever trapped in their local-call-area mindset.
Do we assume that the million monkeys writing Shakespeare are telling the computer what is a smooth Shakespearean phrase and what is a clumsy one? Because just sampling all of Shakespeare doesn't do that. Maybe sampling dozens of papers about Shakespeare. But there are dozens of 'better' ones in dusty vaults that were never digitized. And who is hiring Shakespeare A.I. programmers and justifying their salaries? And how are they vetted? I just asked Google what the best car of 1957 was. Google's original algorithm followed searches in terms of frequency; oversimplifying, more hits was the answer. Google named Detroit cars. Was that an 'intelligent' response? Who is defining intelligence?
 
Here is a Google response to "Who is the best author on Literotica?" My interest was to see if I could puzzle out how programmers pushed the A.I. search into drawing conclusions. Who wrote the model for literary evaluation, or pointed the app at how to shape that?

"It is impossible to name a single "best" author on Literotica, as "best" is a subjective judgment based on personal tastes in style, genre, and storytelling
. The site features thousands of amateur and semi-professional writers, making it difficult to identify an objective standout. What one reader considers a well-crafted narrative, another might find boring.

Instead of a single name, here are some authors and criteria often mentioned by readers and communities for writing that goes beyond simple popularity.
Noteworthy authors and characteristics
  • Fenella Ashworth: A British author who published her books on Literotica before moving to Kindle Unlimited, where they became bestsellers. She is recognized for strong storytelling and building anticipation.
  • Gabthewriter: Mentioned by Reddit users for writing dark and non-consensual themes with good craftsmanship.
  • Long-standing authors with consistent ratings: Because Literotica ranks stories based on user ratings, following authors who have consistently received high marks over many stories is a good way to discover quality writers.
  • Writers with published works: Some authors use Literotica as a proving ground before publishing professionally. An example is the site's own anthology series, The Very Best of Literotica.com, which features a curated collection of high-quality stories.
How to discover authors
Given the site's massive library, a good approach is to define what "best" means and use the site's features to find authors who match specific preferences. It's important to be aware that the content on this site is explicit and may include themes some readers find offensive or disturbing.
  1. Use advanced search filters. Literotica's filters allow searches by specific themes, categories, and writing styles. If one values plot over pure steam, searching for categories tagged with more story-focused elements can be helpful.
  2. Sort by rating. While looking beyond the most-read is the goal, sorting by high rating can help find stories with a proven record of reader satisfaction. The most-read stories often benefit from a popularity feedback loop, but a highly rated story with fewer readers may be a hidden gem.
  3. Read feedback from other readers. The site's message boards and story comment sections can offer insight into which authors are celebrated for their skill. Many long-time users are happy to recommend hidden talent.
  4. Try different genres. The erotica genre is vast. An author who writes excellent BDSM fiction may not be as skilled at romance. Trying highly rated stories from a range of genres can help identify authors who are masters of their specific domain."
 
Do we assume that the million monkeys writing Shakespeare are telling the computer what is a smooth Shakespearean phrase and what is a clumsy one? Because just sampling all of Shakespeare doesn't do that. Maybe sampling dozens of papers about Shakespeare. But there are dozens of 'better' ones in dusty vaults that were never digitized. And who is hiring Shakespeare A.I. programmers and justifying their salaries? And how are they vetted? I just asked Google what the best car of 1957 was. Google's original algorithm followed searches in terms of frequency; oversimplifying, more hits was the answer. Google named Detroit cars. Was that an 'intelligent' response? Who is defining intelligence?
Perhaps much like Webster's dictionary defines the correct spelling of any word in English?
 
This raises a question that is hard to articulate but that most beneficiaries of A.I. are curious about: How to put it... A.I. is booming as a growth industry because it has the potential to give 'added value' to many fields. Shades of the first Silicon Valley boom. Today the garage, tomorrow that multi-billion-dollar IPO. So what are humans actually doing to drive A.I.? There are lots of ways to throw data at a computer. But maybe the question is: How does the algorithm that I write turn into one a computer writes for itself? How are programmers deciding what intelligence is? And like Asimov, are they charged with telling computers what idiocy and stupidity are?
AI is booming mostly because an unprecedented amount of money has been invested in it and a non-trivial portion of that has gone to media blitzes.

There has been enormous work done on how to get programs to improve themselves. (If you care about it, the first name that pops into my head is the work Josh Bongard was doing 20 years ago.) Almost none of the effective stuff involves LLMs. The point of LLMs is that they take little human effort (in theory), just lots of data. But it turns out that observation alone, with no generalization or logical processing, has limited effectiveness. It will asymptotically approach a solution but never reach it, requiring ever more enormous amounts of data.

Imagine trying to learn a moderately complicated board game, like Settlers of Catan, by only watching other people play. You cannot ask for any explanations. You cannot read the rules. How many games would you have to watch before you mostly got it right? How many before you got every nuance right? Now make that game a million times more complicated. There will never be enough games played to fully learn it. Without reasoning, it is hopeless.

LLMs have already reached this point, investing many billions of dollars in training (even when stealing all the training data) to get minimal improvements -- and in many evaluations, regressions. This generation of the hype cycle should have already crashed, but too much money is betting on it to let it fall that easily. They are just waiting for new marks to take the white elephant off their hands, so it's more hype, hype, hype.
 
AI is booming mostly because an unprecedented amount of money has been invested in it and a non-trivial portion of that has gone to media blitzes.

There has been enormous work done on how to get programs to improve themselves. (If you care about it, the first name that pops into my head is the work Josh Bongard was doing 20 years ago.) Almost none of the effective stuff involves LLMs. The point of LLMs is that they take little human effort (in theory), just lots of data. But it turns out that observation alone, with no generalization or logical processing, has limited effectiveness. It will asymptotically approach a solution but never reach it, requiring ever more enormous amounts of data.

Imagine trying to learn a moderately complicated board game, like Settlers of Catan, by only watching other people play. You cannot ask for any explanations. You cannot read the rules. How many games would you have to watch before you mostly got it right? How many before you got every nuance right? Now make that game a million times more complicated. There will never be enough games played to fully learn it. Without reasoning, it is hopeless.

LLMs have already reached this point, investing many billions of dollars in training (even when stealing all the training data) to get minimal improvements -- and in many evaluations, regressions. This generation of the hype cycle should have already crashed, but too much money is betting on it to let it fall that easily. They are just waiting for new marks to take the white elephant off their hands, so it's more hype, hype, hype.
It’s going to be bad for all of us when the bubble pops.
 
It’s going to be bad for all of us when the bubble pops.
Almost certainly. Worse than when the dotcom bubble burst. But as a teenager, you don't remember that...

As an oldster, I remember when the tech industry had to react to general economic disruptions. Now it causes them.

Of course, in my grumpy state tonight, I see lots of other looming catastrophes, most of which belong on the politics board, which I am unwilling to visit, so they will be left unsaid.
 
I am not a teenager now, nor was I then. I wasn't yet ten at the time of the crash, so it wasn't that high on my list of priorities 😊.
I think I know how old you are, within a reasonable margin of error. I was teasing you for making me feel old earlier.
 