You’re Not As Attractive As Lensa’s Magic Avatars Suggest. And Why That’s a Problem.
Digital Plastic Surgery. Or maybe AR Beer Goggles. That’s what everyone’s Lensa Magic Avatars look like to me. If you’re VERY ONLINE™️ then you’ve certainly noticed these in your friends’ social feeds, passed them around by group chat, or perhaps even created your own. It’s the latest version of enhanced selfies, a Magic Mirror for Modern Times.
Now I haven’t made THIRST TRAP HUNTER yet (not out of privacy concerns about where my photos end up or unwillingness to pay — I just don’t have the number of selfies on my phone that the app requires), but I’ve seen a lot of yours. And sorry, you’re not that hot.
Why does this matter? Well, we’re kinda training AI to deceive us. A positive feedback loop where the phony best version of ourselves is what gets ‘rewarded’ in the Darwinian competition within Lensa’s training sandbox. And if, over time, the biggest data set wins, what are the implications if the most explosively viral image models start with, essentially, ‘do you like this?’ rather than ‘is this true?’
(Hold aside the fact that we’re also creating an even larger corpus of beauty norms, reinforcing classic aspirational definitions of attractiveness. We saw this in Second Life, and big muscles and big busts are still the desirable defaults in the metaverse.)
Furthermore, it’s not crazy to think conversational AI will say whatever it needs to close the sale. As I wrote in 2016 in What Happens When Bots Learn to Lie:
Should a shopping bot provide positive affirmation about the clothing items I have in my virtual shopping cart? “Oh you’ll look hotter in this,” the bot coos as it pushes a $150 sweater as an alternative to the $25 sweatshirt I was considering. Is that a lie? Doesn’t a salesperson at a store do the same thing? Is it better or worse when it’s done by a computer simultaneously to 10,000 customers?
Will multivariate testing of our bot future contain ethical parameters in addition to performance measurement? Techniques like priming can be used to dramatically impact behaviors. For example, asking you if you are a “good person” and having you answer in the affirmative, before I request something of you, increases the likelihood you’ll do what I want, driven by a need to live up to the identity you created for yourself.
One of the ‘AI Destroys Humanity’ tropes is how eventually the computer programs created to protect us decide we’re so self-destructive that the only way to ‘save’ us is to kill us.
Wouldn’t it be the ultimate late-stage capitalism irony if the path to a deceitful enslaver AI started not with self-awareness but with ecommerce conversion optimization?!? Turns out Al Pacino was right.