「Fmao: The Fake Issue —— BIAS in A.I.」

Beauty bias and appearance-based stereotypes have long been pervasive in our society. They carry significant social and economic consequences: individuals perceived as more attractive typically fare better in settings such as the workplace and personal relationships. At the same time, this bias reinforces and perpetuates harmful stereotypes about those who do not conform to conventional beauty standards. For example, individuals who are overweight, have disabilities, or do not conform to traditional gender norms may face discrimination and social exclusion based on their appearance.

In the world of artificial intelligence (AI), data annotation is an essential step in training algorithms to perform various tasks. However, the data used to train AI models may contain biases and perpetuate stereotypes, including those related to beauty and appearance. For example, if the data used to train a model includes only images of conventionally attractive individuals, the model may learn to associate attractiveness with other desirable qualities, such as intelligence, ability, and social skill. The result can be biased predictions or decisions that discriminate against those who do not conform to conventional beauty standards.
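To make this mechanism concrete, here is a minimal sketch (illustrative only, not part of this project) using synthetic data and scikit-learn: when biased labelling favours an irrelevant appearance attribute, a classifier learns to give that attribute genuine predictive weight. All names and numbers in it are invented for the example.

```python
# Toy demonstration: a skewed training set teaches a model a spurious
# correlation between an appearance attribute and a hiring label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

competence = rng.normal(size=n)          # the quality that should matter
attractive = rng.integers(0, 2, size=n)  # irrelevant appearance attribute

# Biased labelling: annotators favour the "attractive" group, so the
# label depends on appearance as well as on competence.
hired = (competence + 1.5 * attractive + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([competence, attractive])
model = LogisticRegression().fit(X, hired)

# The model assigns real predictive weight to appearance, so it will
# systematically favour the "attractive" group at prediction time.
print(f"weight on competence:     {model.coef_[0][0]:.2f}")
print(f"weight on attractiveness: {model.coef_[0][1]:.2f}")
```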

It is therefore crucial to understand the dynamics of beauty bias and stereotyping in the context of AI data annotation. AI developers and data annotators have a responsibility to challenge and subvert conventional beauty standards in order to create more inclusive and diverse AI systems. One way to achieve this is to ensure that the data used to train AI models is itself diverse and inclusive, representing individuals of different body types, ages, and ethnicities. This approach helps to promote body positivity, self-acceptance, and diversity.
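In practice, that kind of inclusiveness check can start with something very simple: counting how each annotated attribute is represented before training. The sketch below is a hypothetical example; the field names (body_type, age_group, ethnicity) stand in for whatever a real annotation schema records.

```python
# Minimal representation audit over an annotated image dataset.
from collections import Counter

def audit_representation(records, attributes=("body_type", "age_group", "ethnicity")):
    """Print how often each value of each attribute appears in the dataset."""
    total = len(records)
    for attr in attributes:
        counts = Counter(r.get(attr, "unlabelled") for r in records)
        print(f"\n{attr}:")
        for value, count in counts.most_common():
            print(f"  {value:<12} {count:>6}  ({count / total:.1%})")

# Example with invented records: a skewed dataset is immediately visible.
sample = [
    {"body_type": "slim", "age_group": "18-30", "ethnicity": "east_asian"},
    {"body_type": "slim", "age_group": "18-30", "ethnicity": "white"},
    {"body_type": "plus_size", "age_group": "50+", "ethnicity": "black"},
]
audit_representation(sample)
```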

Another way to combat beauty bias in AI data is to examine the underlying assumptions and beliefs that underpin our perceptions of beauty. By exploring the cultural and historical roots of beauty standards, AI developers can expose the constructed nature of beauty and open up new ways of seeing and appreciating aesthetics. This could lead to AI models that no longer perpetuate beauty biases and stereotypes.

As we navigate the complex intersection of beauty bias, appearance-based stereotyping, and artificial intelligence, we must recognize the far-reaching effects these biases can have on how we perceive ourselves and each other. AI is permeating our lives and will eventually become as indispensable to modern society as the internet and the smartphone before it. If we are to move toward a more inclusive and fair society, we must remain conscious of aesthetic standards and biases, both as individuals and as developers, and maintain a constant self-awareness in our personal lives and in the technology we use.

Photographers and other content creators have a responsibility to showcase the diversity of beauty, celebrating each individual's unique qualities through thoughtful work rather than conforming to stereotypical, popular beauty standards. Society should explore the essence of beauty by embracing a wider range of body types, ages, and ethnicities, creating a more positive and inclusive visual culture.

But we must also hold the developers and creators of artificial intelligence to the same standard. By recognizing the potential for AI to replicate and reinforce harmful beauty biases, we can work to ensure that the data used to train these systems is inclusive and diverse. Additionally, we must remain vigilant in monitoring and addressing any biases that may emerge in these systems over time.
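One possible form such vigilance could take, sketched here with invented thresholds and group labels, is tracking a simple fairness metric, such as the demographic parity gap, over each new batch of a deployed model's predictions.

```python
# Sketch of ongoing bias monitoring: compare positive-prediction rates
# across groups and flag batches whose gap exceeds a chosen threshold.
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    preds = np.asarray(predictions, dtype=float)
    groups = np.asarray(groups)
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Run against each batch of production predictions (values invented here).
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
if gap > 0.2:  # illustrative alert threshold, not a standard
    print(f"WARNING: demographic parity gap = {gap:.2f}")
```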

Ultimately, our ability to create a more just and equitable society hinges on our ability to recognize and challenge the biases that shape our perceptions of beauty and worth. By embracing diversity and inclusivity in all aspects of our lives, we can create a world where everyone is valued for who they are, not just how they look.

Artist: Kelvin Zhu (朱浚侨)

Year: 2023

Medium: AI-generated image

Material: Loose-leaf pages in a case, Giclée printing × 100

Size: A4 (210 mm × 297 mm)

“Prejudice is eternal, and disputes never sleep.”

This project is an aesthetic magazine in progress, comprising 100 covers that combine human photography with AI-generated images, and it aims to explore the influence of human bias on artificial intelligence. It delves into the impact of artificial intelligence on human life, examines our perceptions of the technology, and considers how humans and artificial intelligence shape each other's development. Through in-depth reporting, analysis, and commentary, it helps us understand how artificial intelligence is affecting our lives and guides us to think about how we can respond to that influence.

All photographs, images & AIGC content created by VinlexWorkshop Kelvin Zhu, 2023.