ChatGPT generates Ugaro names

Okay, so I thought sure, I’d try generating names using ChatGPT. 

Here’s what I told it: “Generate names that are three, four, five, or six letters long; that alternate consonants and vowels; that use only the letters w, e, r, t, y, u, i, o, p, a, s, d, g, h, k, l, v, b, and n; that end in a, o, or u.”

Can anybody immediately see what I did wrong? I shouldn’t have used the word “names” because naturally ChatGPT generated lists of actual names. Of course it did! Totally my fault. I immediately said, “Same criteria, but the words should be meaningless in English.”

Too many of the resultant words looked silly, and also ChatGPT had trouble keeping all the rules in mind, so I got names like

  1. Nivak
  2. Lurty
  3. Hosea

And isn’t it interesting that ChatGPT couldn’t keep track of the rules? I reminded it that the names must end with a, o, or u. Then I reminded it that vowels and consonants must alternate. I actually got a handful of names that might work, maybe with tweaking. But here’s what happened when I asked for names that fit the same criteria, began with vowels, and were five letters long:

  1. Arova
  2. Utieu
  3. Esofi
  4. Irala
  5. Ovito
  6. Eniwo
  7. Uruwa
  8. Isiou
  9. Ovula
  10. Aneia

ChatGPT could not remember that words cannot end in “i” or that vowels must alternate with consonants. Isn’t that interesting? I think that’s interesting.
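The rules are mechanical enough that a few lines of code can check any candidate name against them. Here's a minimal sketch (the function name and violation messages are mine, not from the post):

```python
VOWELS = set("aeiou")
ALLOWED = set("wertyuiopasdghklvbn")  # the allowed-letter list from the prompt

def check_name(name: str) -> list[str]:
    """Return a list of rule violations for a candidate name (empty = valid)."""
    w = name.lower()
    problems = []
    if not 3 <= len(w) <= 6:
        problems.append("length must be 3-6")
    if not set(w) <= ALLOWED:
        problems.append("uses letters outside the allowed set")
    if w and w[-1] not in "aou":
        problems.append("must end in a, o, or u")
    # alternation: no two vowels or two consonants in a row
    for a, b in zip(w, w[1:]):
        if (a in VOWELS) == (b in VOWELS):
            problems.append("vowels and consonants must alternate")
            break
    return problems
```

Run against the list above, it passes "Arova" but flags "Esofi" (ends in i) and "Utieu" (two vowels in a row).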

A few of the names I got might be okay, but honestly, it’s probably easier just to generate names myself and then fiddle with them.

10 thoughts on “ChatGPT generates Ugaro names”

  1. I don’t think ChatGPT is able to really recognise the uncommon rules you set it AS rules, and program itself to follow them while generating text. With common formulations of rules it will have seen them in its training materials, and have learned how to respond ‘correctly’; but it doesn’t UNDERSTAND the content of what you are saying.

    So it will recognise and answer questions like “how much is 2 plus 2” (though I saw an example where an early user kept trolling it, insisting that 4 was wrong and the answer should be 5, and after about 20 back-and-forths it consistently switched to answering 5).

    Programming languages have clear rules, so it could be taught to respect those, but human language is not so consistent. And when you make up a whole new set of rules like that, it really cannot follow them. It is just a text generator, where the next bit of text is triggered by the previous bit of text, and where its choice for the next word is influenced by how often that word followed the previous ones in its training data. So even when asked for words that are meaningless in English, it will have a tendency to gravitate towards existing words and names – I recognise at least three probable existing non-English words or (tribal) names in the few you’ve listed above.

  2. I know that, Hanneke, but it’s amazingly difficult to remember that ChatGPT has zero ability to comprehend anything and is strictly a text predictor. Then you do something like this and it’s suddenly much more obvious.

  3. Something like this might work better:
    It generates a “language” but you can define precisely the parameters of what words look like, and then ignore the paradigms and definitions it puts them in and just pick the ones that have the right feel.

    So for example, I used the letters you listed to define “phoneme classes”
    C = w, r, t, p, s, d, g, h, k, l, v, b, n
    E = a, o, u
    V = a, e, i, o, u, jo

    and “word patterns”

    You can make more complicated rules as well, to specify that certain patterns should appear more frequently than others, and so on.
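A minimal sketch of that phoneme-class approach in Python, using the classes listed above. The pattern "CVCE" is an assumed example, since the comment's actual word patterns aren't shown:

```python
import random

# Phoneme classes copied from the comment above; "jo" is kept as a
# two-letter unit in class V, as listed.
CLASSES = {
    "C": "w r t p s d g h k l v b n".split(),
    "E": "a o u".split(),
    "V": "a e i o u jo".split(),
}

def make_word(pattern: str = "CVCE") -> str:
    """Expand each class symbol in the pattern to a randomly chosen phoneme."""
    return "".join(random.choice(CLASSES[sym]) for sym in pattern)

print([make_word() for _ in range(5)])  # five candidate names to pick from
```

Weighting certain patterns or phonemes more heavily than others (the "more complicated rules") would just mean swapping `random.choice` for `random.choices` with a weights list.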

  4. Classic machine learning seems better suited to this. Feed in all the (male) names and words you can find, and see what comes out. I doubt there are enough female names to make a useful training set.
