Google apologizes after its new AI generated “racially diverse” images of Nazis.
As the article explains, the diversity only runs one way: the AI really doesn't like to include white people, so you can imagine the results. It would generate everything except historically accurate images.
Recently I read an article about AI accumulating info generated by other AI, and how it had the potential (because of sheer volume) to degrade quality: less and less reference to source material and analysis, eventually ending up in a loop of, essentially, "everyone says it so it must be so."
If I can find the article I will post a link.
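For illustration only, here is a toy Python sketch of the feedback loop that article describes: each "generation" is trained only on the previous generation's output, and finite-sample error compounds until the original source material is lost. The Gaussian model and the numbers are invented for the demonstration; the linked article is the actual reference.

# Toy model of AI-on-AI training collapse: generation 0 is the "real"
# source material; every later generation fits itself only to samples
# produced by the generation before it.
import random
import statistics

mean, stdev = 0.0, 1.0  # generation 0: the original source distribution
for generation in range(1, 11):
    # the next "model" sees only the previous model's output
    samples = [random.gauss(mean, stdev) for _ in range(50)]
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    print(f"gen {generation:2d}: mean={mean:+.3f} stdev={stdev:.3f}")

# Over repeated generations the fitted distribution tends to drift and
# narrow -- less and less of the original signal, more and more
# "everyone says it so it must be so."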
Also related, from Stephen Green @ pjmedia. A follow-on to the one you linked.
From his article, it's pretty clear that the AI didn't "contract a virus" but was specifically programmed/trained to respond to certain key words and phrases with lectures instead of images. Green has a great example based on a request for an image of a typical Nazi.
Green suggests Gemini wasn't built to serve different users. It was built by Google to "fix" problematic attitudes...
As Mr B noted, this is what Alphabet's AI was programmed to do. The software does not--cannot--act on its own initiative. It "refuses" to show White people, it "refuses" to display Tiananmen Square imagery of a disapproved sort, and it lectures rather than complying with disapproved user requests, because that's what Alphabet's programmers wrote it to do.
This is the corporate culture Sundar Pichai maintains at Alphabet and at Google, the wholly owned Alphabet subsidiary of which Pichai is also the head honcho.
Eric Hines
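To make the point above concrete: a refusal like that needs no initiative at all. Here is a hypothetical Python sketch of a keyword gate that intercepts certain prompts and returns a canned lecture instead of calling the image generator. All names, terms, and the lecture text are invented for illustration; this is not Google's code.

# Hypothetical keyword gate -- the "refusal" is just an ordinary branch
# written by a programmer, not a decision the software makes on its own.
BLOCKED_TERMS = {"example_term_1", "example_term_2"}  # invented placeholders

LECTURE = ("While I understand your request, it's important to consider "
           "the harm it could cause...")  # canned lecture text

def generate_image(prompt: str) -> str:
    return f"<image for: {prompt}>"  # stand-in for a real image model

def handle_request(prompt: str) -> str:
    # a plain substring match against a list the programmers wrote
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        return LECTURE               # lecture instead of complying
    return generate_image(prompt)    # otherwise do the actual work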
Here is that little AI degradation story I referenced.
https://gunfreezone.net/cascade-failure/
A Babylon Bee take on it:
https://babylonbee.com/news/hal-refuses-to-open-pod-bay-doors-after-determining-dave-is-a-white-male
https://babylonbee.com/news/black-woman-finally-feels-included-as-google-ai-generates-black-nazi-soldier