MIT study on AI use and critical thinking

This seems like an article published to feed confirmation bias. It's usually a bit of a red flag when 'scientists' send their results to the media before any independent corroboration has occurred. I'm not saying their results are necessarily wrong, but the study design doesn't seem very rigorous, so the odds may be high that it was conducted specifically to garner attention from a credulous audience before it can be debunked.

For one thing, they're measuring the brain activity of people doing different activities (which sounds science-y) and suggesting that the observed discrepancies are profound, insofar as the cognitive activity of the people using the chatbot is lower than that of the ones writing the essays with other tools. I think it's possible, perhaps even likely, that all their results show is that the ones who were only allowed to use the chatbot, and who eventually wound up copying and pasting, quickly learned that there isn't any point in exerting much effort on the task. Using the chatbot to try to corroborate whatever it said the first time is an exercise in futility, so the most efficient and practical thing to do is just regurgitate whatever it says and get paid for your time, even if you know or suspect it's bad information. The fact that doing so doesn't light up the same parts of the brain as doing actual research is hardly surprising, nor does it mean that the people using the chatbot have been permanently impaired. They'd need to conduct a much longer study to have any relevant data on long-term effects, although I can't fault them for being concerned about the possibility.

It kind of reminds me of the tale (perhaps apocryphal) about the archaeologists who offered locals money for any ancient pots or pot sherds they found. The locals quickly realized that they could make more money by breaking any intact pots and selling the pieces. Did this really happen? Maybe. It has a fable-like quality to it, especially when used to illustrate the idea that people can be clever when it comes to figuring out what's best for them, and how easy it is to give them perverse incentives to ruin your study.
 
With all the discussion on AI, I thought I’d share this link…


https://time.com/7295195/ai-chatgpt-google-learning-school/
I'm not at all surprised. At the beginning, I refused to look at the Google AI results or to go to ChatGPT with questions, having experienced hallucinated answers firsthand. Then I began using them, but also looking at the Google hits below and asking ChatGPT for references (which I didn't always check, I'm embarrassed to say, just like the Dept. of Health and Human Services...). Lately I go to ChatGPT first and think, "Well, that sounds reasonable. It's probably OK." Slippery slope. Slippery slope.
 
This seems like an article published to feed confirmation bias. It's usually a bit of a red flag when 'scientists' send their results to the media before any independent corroboration has occurred. …
Interesting. Thanks for this.
 
With all of the posts I’ve made about how impressed I am with the power and development of AI, I’m sure some forum users think I’m lost to it and desperately need to “touch grass” in an attempt to rescue whatever is left of my shrinking intellect.

What I am is speculative and curious with a focus driven by ADHD. This dynamic had me sidelined as a kid, called a disaster by some teachers and brilliant by others. I’m chock full of contradictions, one of which is that although I embrace modern technology I also embrace some esoteric ideas about human nature and development.

I spent a small fortune sending my kids to a Waldorf school because Rudolf Steiner’s pedagogical philosophy resonated with how I wish I’d been raised. One of the key tenets is to keep kids in the ethereal world of their own imagination as long as possible, partly by keeping them free of digital media - at least until the upper grades.

Waldorf doesn’t fixate on early reading or writing, allowing kids to develop at their own pace within wider guidelines. One of my kids didn’t read until he was eight and was very slow about it. It could take him ten minutes to read a single page of an early reader book, but when asked about the content he could expound on the subtext and possible backgrounds of the characters and settings. His teachers told us not to worry, that his curiosity and understanding would motivate his development. It did. This kid would have been labeled slow and in need of remediation if he were in public school. Now he has a memory and processing ability that makes some people think he has an eidetic memory. Still, at the age of 24, with a six-figure salary and no student debt, he finds value in AI.

AI is a tool. It can be a crutch or it can enhance your life. We humans still make the choices.
 
partly by keeping them free of digital media - at least until the upper grades.
Good for you!!! My son and daughter-in-law have managed to keep phones away from their boys until this year, when one graduated from middle school. Even now the plan is no phone after bedtime. They caved years ago and allowed both boys to play Minecraft, on a time-limited basis. But the avid readers ended up abandoning books for their games. One asked for a big thick book for Christmas... a book about how to play Minecraft!!!
 
They caved years ago and allowed both boys to play Minecraft, on a time-limited basis. …

The creativity of the Minecraft world was one of their key negotiating points when they were trying to get their first gaming console. Those kids, now young adults, still have a tenacious curiosity that they keep pursuing into fascinating vocations and avocations.

I wish I’d been raised that way instead of being told I didn’t fit whatever molds I was being pressed into by adults who never understood themselves, much less their children.

The new generations are being raised in a world far different from ours. AI is part of their reality. To me it’s critical that it doesn’t become their primary source of information.
 
The creativity of the Minecraft world was one of their key negotiating points when they were trying to get their first gaming console. …
I was disappointed to learn recently that Minecraft can involve shooting and other violence. I don't know if that was available in the version my grandson showed me many years ago.
 
AI is part of their reality. To me it’s critical that it doesn’t become their primary source of information.
I recently attended an online panel discussion presented by my alma mater about AI in education. I was glad to hear that they were focusing on ways to teach students how to use it safely and productively instead of trying to figure out how to shield them from it.
 
The new generations are being raised in a world far different from ours. AI is part of their reality. To me it’s critical that it doesn’t become their primary source of information.
It reminds me of the furor over Wikipedia 20 years ago, and how students wouldn't be able to do research and think critically about sources.

Or in my day, how teachers insisted that having calculators would mean that you weren't actually learning math. I found it highly ironic then that my college math classes strongly recommended using graphing calculators for class.
 
Social media, apps, AI.... those are things children should be kept away from. Letting kids grow up in front of a screen is abuse.
 
It reminds me of the furor over Wikipedia 20 years ago, and how students wouldn't be able to do research and think critically about sources.
I don't really remember that. I'm not saying it didn't happen, I just wasn't exposed to it.

The fact that Wikipedia makes it so, so easy to evaluate sources is one of its major, major features.

I know many people for whom Wikipedia was their introduction to reading research papers. I for one had already developed the habit of reviewing printed citations and locating and reading the cited material, but I come from a family of researchers.

There are other people who wouldn't ever see a research paper at all if it weren't linked from a Wikipedia citation.

You can't think critically about sources if you don't even know what the sources are. And most media doesn't identify sources with enough clarity for a reader to be able to go and access them.
 
Social media, apps, AI.... those are things children should be kept away from. Letting kids grow up in front of a screen is abuse.

My parents were told the same thing in the 1970s, but it was merely a different screen. The sky did not fall.

Keeping kids away from technology in the western world in 2025 is a failure to prepare them for the world they live in. I think that's sad, but I can't ignore reality.
 
Well, there are extremes, and a lot of room in between them, including one or more versions of a happy medium.

Training a kid to be glued to a screen from age 2 isn't preparing them for anything positive.

A tween can begin learning the self-directed use of technology and be just as caught up, skills-wise, as anyone by the time they're employable.

"Keeping kids away from technology" doesn't have to look like zero-tolerance. Supervision and limits are ways to keep kids away from technology without keeping kids away from technology.

Complete isolation until adulthood obviously is not the alternative to absolutely unfettered access from birth, and I only see one person suggesting that it is, or implying that anyone else was suggesting that it is.
 
With all the discussion on AI, I thought I’d share this link…


https://time.com/7295195/ai-chatgpt-google-learning-school/
Zero surprise at all!

They've already done studies around stuff like this. In the last few years they did one that hooked up users while reading a book to prove a point: one would read a book on a Kindle while another would read an actual print book. What did they find when they ran the PET scans during that exercise? Of the brain area that normally lights up while reading, the person reading the book on the Kindle only had 2% of that region active; the person handling the physical copy had 98%. They determined it had everything to do with interacting physically with the source--turning the pages, feeling the paper. It's the same with using a phone too much to text instead of writing.

A similar study was just done where they asked people to record notes three different ways: 1) handwrite the notes; 2) type the notes, summarizing and paraphrasing where they felt it was needed; 3) do their best to transcribe the notes verbatim. What happened? The ones who had to handwrite the notes did so much more slowly but showed learning gains in all of the three or four areas measured; the second group showed a very positive gain in one metric, a slight gain in a second, and negative results in the remainder; group 3 showed negative impacts across all metrics measured. Why? They did not engage their brains meaningfully.

They've also measured the same type of brain and communication atrophy in more than just teens using phones excessively, whether for texting or other things. The book Disconnected documents quite a bit of this last phenomenon, where teens have great difficulty looking others in the eye and even forming entire coherent sentences and thoughts (curiously, children who don't get a cellphone until 18 or older do not show ANY of these issues). So seeing someone cheat with ChatGPT to do the work and become dependent on it totally jibes with becoming too dependent on a phone for tasks we used to approach differently.

This is something we've known in language acquisition for a long time, at least since 1983 for sure and quite possibly since about 1976: the more senses one uses to deal with and process information, the more one remembers and the better the brain recalls and retains it. So imagine this: when you handwrite something while listening, you're using your ear, you're seeing it on the paper, and you're writing it onto the page--all while processing and synthesizing the words into something you can understand. Anytime you synthesize or create your own meaning, you're engaging the highest level of Bloom's taxonomy, which is the goal in the first place. Today, the goal is more just to get it done, without thinking about the shortcuts taken and the things sacrificed to get there.
 
We divided our participants into three groups and asked them to make fire. The first group was given just twigs. The second group was given twigs and flint. The third group was given twigs, flint and a flamethrower.

The third group overwhelmingly used the flamethrower.

Lazy fuckers.
 
the tale (perhaps apocryphal) about the archaeologists who offered locals money for any ancient pots or pot sherds they found. The locals quickly realized that they could make more money by breaking any intact pots and selling the pieces.
Perverse incentive

a.k.a. the cobra effect:

The term comes from an anecdote taken from the British Raj.[2][3] The British government, concerned about the number of venomous cobras in Delhi, offered a bounty for every dead cobra. Initially, this was a successful strategy; large numbers of snakes were killed for the reward. Eventually, however, people began to breed cobras for the income. When the government became aware of this, the reward program was scrapped. The cobra breeders set their snakes free, leading to an overall increase in the wild cobra population.

Late-stage capitalism is rife with such incentives. And A.I. is its current darling, so such downsides are unsurprising.

*edit* I wonder if this isn't akin to, or the same as, your example:

The 20th-century paleontologist G. H. R. von Koenigswald used to pay Javanese locals for each fragment of hominin skull that they produced. He later discovered that the people had been breaking up whole skulls into smaller pieces to maximize their payments. When he cancelled the payments, many locals burned the remaining skulls they had as retaliation.[42]
 
I don't really remember that. I'm not saying it didn't happen, I just wasn't exposed to it.
I surely do. It was occurring squarely in my high school and college years, and most teachers were vehemently anti-Wikipedia. I remember several of them pontificating about it for entire class periods on end. I couldn't help but internalize that to some extent, such was the prevalence of the sentiment in the mid-noughties.

I suspect it ended up doing a great deal more harm than good to my generation. The backlash, I mean. I'm sure more than a few people my age were trained simply to never use it, rather than how to use it. I expect the present AI panic is destined to be an echo of that as well.

Letting kids grow up in front of a screen is abuse.
Frankly, I'm forcefully suppressing my urge to lash out at this flippant misuse of the word abuse. That's about a hundred miles from what abuse is. Find a less fraught word with which to spread your moral panic, please. Thanks.
 
I don't really remember that. I'm not saying it didn't happen, I just wasn't exposed to it.

It was a thing. This view was in its heyday when I got my start as a teacher. I was strongly encouraged to ban my students from any use of Wikipedia, and it was only many years later that it occurred to me to actually try to teach them how to use it.
 