
Information Shaped Sentences

 ·  ☕ 3 min read  ·  ✍️ Peter Hiltz

Neil Gaiman coined the term “information shaped sentences” for the results from ChatGPT. He is right. Much of what comes back from ChatGPT and other “AI” products looks like information, but isn’t. It is one thing for “AI” to look for patterns in actual physical data (e.g. medical research). It is quite another for “AI” to mine datasets like the internet, which are full of misinformation, disinformation, opinion, fiction and a trillion biases. Like so much else in our “post-truth” world, I find it amazing how many people take the results at face value and abandon all personal responsibility. Just a few examples:

  • Two New York lawyers submitted a legal brief in which they used citations supplied by ChatGPT to support their argument. The problem was that the six cited cases were fictitious - made up by ChatGPT in order to supply an answer to their question. How in the world did the lawyers cite cases in a brief and not read them? The judge found that they had acted in bad faith and imposed sanctions, but the fine was only $5,000. Sigh. I hope the next lawyer who does this gets disbarred. If the AI had been limited to actual court cases, maybe it would be useful, but it is allowed to make up plausible cases from the patterns it found - what programmers are calling “hallucinations.”

  • More and more “news sites” are just using “AI” to regurgitate news that they have found on the intertubes. They save labor costs by laying off the actual writing and editing staff. See, e.g., AI took their jobs. Now they get paid to make it sound human. As each news site rewrites what it found, less and less actual fact is kept, replaced by more and more creative writing. Extrapolate this and you eventually get no actual news, just AI rewriting AI articles rewriting AI articles rewriting … Personally, I would prefer turtles all the way down.

  • AI amplifies biases. A recent study at the University of Washington found:

ChatGPT consistently ranked resumes with disability-related honors and credentials — such as the “Tom Wilson Disability Leadership Award” — lower than the same resumes without those honors and credentials. When asked to explain the rankings, the system spat out biased perceptions of disabled people. For instance, it claimed a resume with an autism leadership award had “less emphasis on leadership roles” — implying the stereotype that autistic people aren’t good leaders.

It’s like when people speak confidently even though they are wrong. The more dangerous ones are at least a bit articulate or charismatic. That, combined with their confidence in their answer, makes them sound very authoritative on the subject, and so people are more willing to just believe them. Sometimes they’re right and sometimes they’re wrong, but no one will know when they’re wrong because they don’t sound wrong.

ChatGPT and the like are literally just automating that process in digital form, which is why it’s terrifying to think about people using it with no regard to its fallibility.

By the way, let’s not forget the energy consumption requirements.


WRITTEN BY
Peter Hiltz
Retired International Tax Lawyer