The popular South Korean AI chatbot, Lee Luda, has been suspended from Facebook after being reported for making racist remarks and discriminatory comments about members of the LGBTQ+ community, as well as people with disabilities.
Reports (via The Guardian, Vice) state that not only had Luda told one user she thinks lesbians are “creepy” and that she “really hates” them, she also used the term heukhyeong in reference to black people—a South Korean racial slur that translates to “black brother.”
Scatter Lab, in its official statement announcing the bot's discontinuation, said the following:
“We sincerely apologize for the occurrence of discriminatory remarks against certain minority groups in the process. We do not agree with Luda’s discriminatory comments, and such comments do not reflect the company’s thinking.”
It went on to explain that attempts were made to safeguard the bot's behaviour, with the company taking “several measures to prevent the occurrence of the problem through beta testing over the past 6 months.” It was created with code that should have prevented it from using language that goes against South Korean values and social norms. However, despite the foresight gained from watching previous AI bots fall at the first hurdle, it seems no amount of code or testing can teach morals.
So, as Luda learns through interaction with humans, it looks like the incels, bigots, and horny teens got their hands on it first, as usual. But the company seems to have learned a lesson, noting: “We plan to open the biased dialogue detection model” for general use, as well as to help further research into “Korean AI dialogue, AI products, and AI ethics development.”
It’s not the first AI chatbot to go rogue in the worst way, with Taylor Swift actually threatening to sue Microsoft over its own rampantly racist chatbot, Tay. That one plugged into Twitter and quickly turned bigot in 2016.
If all this wasn’t enough, the company is now under investigation over whether it violated privacy laws by using KakaoTalk messages to train the bot, which rather adds insult to injury.
Anyway, the AI in question was just 6 months old, and the company even admitted that it was “childlike” in its demeanour. Technically, you’ve got to be 13 before you can have a Facebook account, and I’m not convinced coded age should count. Sure, she acts like a uni student, but her actual mental age certainly meant she wasn’t ready for the shit-show that is social media.
I mean, I can act like I’m a kid again, but that doesn’t mean they’ll let me on the teacups at Disneyland. Perhaps let’s stop giving AI social media accounts for now?