Google's AI LaMDA isn't sentient, but has racial and gender biases

While a sentient AI is a thoroughly freaky concept, it's not (yet) a reality. But a racist and sexist AI? Unfortunately, very much a reality.

In a recent interview with Wired, engineer and mystic Christian priest Blake Lemoine discussed why he believes that Google's large language model named LaMDA has become sentient, complete with a soul. While that claim has been refuted by many in the artificial intelligence community and has resulted in Lemoine being placed on paid administrative leave by Google, Lemoine also explained how he began working on LaMDA.

His journey with the AI started with a much more real-world problem: examining the model for harmful biases in relation to sexual orientation, gender identity, ethnicity, and religion.



"I do not believe there exists such a thing as an unbiased system," said Lemoine to Wired."The question was whether or not [LaMDA] had any of the harmful biases that we wanted to eliminate. The short answer is yes, I found plenty."

Lemoine also explained that, as far as he could tell, the Google team had done a good job repairing these biased "bugs." When asked whether LaMDA showed racist or sexist tendencies, Lemoine answered carefully, saying he "wouldn't use that term." Instead, he claimed "the real question is whether or not the stereotypes it uses would be endorsed by the people that [LaMDA is] talking about."

SEE ALSO: Amazon used AI to promote diversity. Too bad it’s plagued with gender bias.

Lemoine's hesitancy to label LaMDA's "bugs" as outright racist or sexist highlights an ongoing battle within the AI community, where many have spoken out about the harmful stereotypes that AI systems perpetuate. But those who speak out about these issues are largely Black women, and those women have subsequently been fired from companies like Google. Many feel it therefore falls on men in tech like Lemoine to keep calling attention to AI's current bias problems, rather than diverting researchers' and the public's attention with claims of AI sentience.

“I don't want to talk about sentient robots, because at all ends of the spectrum there are humans harming other humans, and that’s where I’d like the conversation to be focused,” said former Google Ethical AI team co-lead Timnit Gebru to Wired.


Artificial intelligence has a long history of perpetuating harmful stereotypes, and Google is neither new to nor unaware of these issues.

In 2015, Jacky Alciné tweeted about Google Photos tagging 80 photos of a Black man into an album titled "gorillas." Google Photos learned to do so using a neural network, which analyzed enormous sets of data to categorize subjects like people and gorillas. Here, it clearly categorized incorrectly.

It was the responsibility of Google engineers to ensure that the data used to train its AI photo system was correct and diverse, and when the system failed, it was their responsibility to rectify the issue. According to the New York Times, Google's response was instead to eliminate "gorilla" as a photo category, rather than retrain its neural network.
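To see why removing a label is a workaround rather than a repair, consider a minimal sketch of that approach. This is a hypothetical illustration, not Google's actual code; the function and label names are invented. A blocklist filter over a classifier's predictions hides an offensive label without correcting the model that produced it:

```python
# Hypothetical sketch of label suppression as post-processing
# (not Google's actual code). The model's mistaken prediction
# still exists; the blocklist only keeps it from being shown.

BLOCKED_LABELS = {"gorilla"}

def filter_predictions(predictions):
    """Drop blocked labels from a classifier's (label, score) output."""
    return [(label, score) for label, score in predictions
            if label not in BLOCKED_LABELS]

# The underlying misclassification is untouched, only hidden:
raw = [("gorilla", 0.91), ("person", 0.72), ("outdoors", 0.40)]
print(filter_predictions(raw))  # [('person', 0.72), ('outdoors', 0.40)]
```

Under this approach, the misclassification survives inside the model and is merely hidden from users, which is exactly the "rather than retrain" trade-off the Times described.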

Companies like Microsoft, IBM, and Amazon face the same biased AI issues. At each of these companies, the AI powering facial recognition technology encounters significantly higher error rates when identifying the sex of women with darker skin tones than it does for people with lighter skin, as reported by the Times.

SEE ALSO: Meet the designer who makes high-tech nail art and fights facial recognition with flowers

In 2020, Gebru published a paper with six other researchers, four of whom also worked at Google, criticizing large language models like LaMDA and their propensity to parrot words based on the datasets they learn from. If those datasets contain biased language or racist or sexist stereotypes, then AIs like LaMDA will repeat those biases when generating language. Gebru also criticized training language models on increasingly large datasets, which lets the AI mimic language ever more convincingly and persuades audiences that they are seeing progress, or even sentience, a trap Lemoine appears to have fallen into.
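The "parroting" mechanism the paper describes can be illustrated with a toy language model. The sketch below is nothing like LaMDA in scale or architecture, and the corpus is invented for illustration, but it shows the core point: a model that only learns word statistics can do no more than echo the associations present in its training text.

```python
# Toy trigram language model: it can only reproduce associations
# that appear in its (here deliberately skewed) training corpus.
from collections import defaultdict
import random

corpus = "the doctor said he was busy . the nurse said she was busy ."
tokens = corpus.split()

# Map each pair of consecutive words to the words observed to follow it.
transitions = defaultdict(list)
for a, b, c in zip(tokens, tokens[1:], tokens[2:]):
    transitions[(a, b)].append(c)

def generate(w1, w2, length=4):
    """Extend a two-word prompt by sampling observed continuations."""
    out = [w1, w2]
    while len(out) < length + 2:
        options = transitions.get((out[-2], out[-1]))
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("doctor", "said"))  # "doctor said he was busy ."
print(generate("nurse", "said"))   # "nurse said she was busy ."
# The gendered pairing is not reasoning; it is the training text echoed back.
```

Scale the same idea up to billions of parameters and web-sized corpora and the output becomes fluent enough to be mistaken for understanding, which is precisely the paper's warning.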



After a dispute over this paper, Gebru says Google fired her in December 2020 (the company maintains she resigned). A few months later, Google also fired Dr. Margaret Mitchell, founder of the ethical AI team, a co-author of the paper, and defender of Gebru.

Despite a supposed commitment to "responsible AI," Google still faces ethical AI problems, leaving no time for sentient AI claims

After the drama and the admitted hit to its reputation, Google promised to double its responsible AI research staff to 200 people. According to Recode, CEO Sundar Pichai also pledged his support to fund more ethical AI projects. And yet, the small group of people still on Google's ethical AI team feel that the company might no longer listen to the group's ideas.

A year after Gebru's and Mitchell's departures, two more prominent ethical AI team members left: Alex Hanna and Dylan Baker quit Google to work for Gebru's research institute, DAIR, or Distributed Artificial Intelligence Research. The already small team grew even smaller, which perhaps points to why Lemoine, who is not on the ethical AI team, was asked to step in and research LaMDA's biases in the first place.

As more and more societal functions come to rely on AI systems, it's more important than ever to keep examining how AI's underpinnings affect what it does. In an already often racist and sexist society, we cannot afford to have our police systems, transportation methods, translation services, and more rely on technology with racism and sexism built into its foundations. And, as Gebru points out, when the (predominantly) white men in technology choose to focus on issues like AI sentience rather than on these existing biases, especially when examining bias was their original assignment, as it was for Lemoine with LaMDA, those biases will continue to proliferate, hidden away under the hullabaloo of robot sentience.

“Quite a large gap exists between the current narrative of AI and what it can actually do,” said Giada Pistilli, an ethicist at Hugging Face, to Wired. “This narrative provokes fear, amazement, and excitement simultaneously, but it is mainly based on lies to sell products and take advantage of the hype.”

Topics: Artificial Intelligence, Facial Recognition, Google
