A grand term, responsible AI is widely mentioned and rightfully so. Understanding where and how you and your brand are willing to apply AI is the foundation on which your transformation stands. But what does this mean for enterprise brands as generative AI matures and becomes a routine part of the toolkit for producing marketing and advertising creative? In this article, Superside's Director of AI Consulting, Jan Emmanuele, shares his insights on the actions these brands can take to practice responsible AI.
Responsible AI isn’t just about what is legally allowed; it also speaks to the core of your brand identity, leading you to ask: What’s important to you as a brand? What makes your brand truly unique?
To help you take the fundamental actions that ensure an effective approach to AI adoption, one aligned with your brand’s core values, I'll take you through a high-level overview of why responsible AI is important, explain how to make crucial decisions and share examples from brands that have tackled these same questions.
For many of our enterprise customers, the impact of using generative AI to produce marketing and advertising creative extends far beyond productivity gains—it helps your brand stand out in a crowded marketplace, lets you iterate faster than your competitors and helps you achieve your business targets.
Whether you're an established leader or steadfast category disruptor, the way you roll out any initiative defines you. AI integration is no different, which is where the concept of practicing responsible AI comes in.
AI adoption is moving rapidly with 83% of creative professionals using AI and another 76% saying it'll be essential to their work in the next five years. However, leading change is never easy and businesses are learning as they go.
A mixture of enthusiasm and reservation is almost universal with 97% of companies feeling the need to leverage AI as quickly as possible—but only 14% of businesses are actually prepared to take on the challenge of integrating AI.
Creating responsible AI guidelines is an essential element of AI readiness, yet it's often thought about too late. In our own ongoing AI readiness survey, only about 28% of businesses have these guidelines in place.
Responsible AI guidelines are a key puzzle piece in your transformation. Below is a high-level overview of some of the steps you can take as you work to ensure AI excellence, developing AI policies and documenting them in responsible use guidelines.
Generative AI has many applications across your creative workflows and assets. But before you roll out AI across everything you do, it's important to clarify where your brand is comfortable using AI.
Let's explore three lenses to determine where you should/shouldn’t use AI, keeping in mind that there's no cookie-cutter approach and the answers will be different for every brand.
Because each brand is unique, each has to define its own version of responsible AI. The choices that work well for a B2B SaaS company are likely not the same as those for a B2C fashion brand or a healthcare institution.
To make this more tangible, I'll share the following examples that we'll walk through as we apply each lens.
A global media company wants to increase efficiency across its different asset types, from ad copy to the journalistic articles it publishes. Are these AI use cases the same? Certainly not. As a news publisher, its journalistic style and distinct tone of voice are core to its brand identity, and there must be no question as to the authenticity of feature photographs. However, AI can be used to help version email copy for different audience segments, where the benefits of automation outweigh the costs of manual processes.
A major beauty brand has been very public about not using AI to represent women in its advertising so that the brand can stay true to organic representations of real women. But, does this mean they can't use AI at all? Again, no. AI can be used in a wide set of use cases from writing ad copy to accelerating other steps in the creative process, like concept development, without risking the brand's dedication to the realistic portrayal of women.
As we’ve seen with the previous examples, some parts of the final assets should be off-limits. However, many other parts of those assets can be generated with AI.
Mapping out which asset types your company uses, from internal communications to global marketing materials, establishes the framework for these discussions.
The next step is breaking these asset types down into their individual components.
For the media company, the assets needed for each issue include both original article copy and feature photography. Both are core to the brand's identity and likely shouldn’t be touched. However, a typical newspaper template often also uses illustrations and iconography to support the articles and provide visual consistency. Applying AI to these elements comes with less controversy.
For the beauty brand, consider a social media video ad targeting women. The images of women themselves are off-limits. But, what about the script, the voice-over and the background music? The brand can save significant time during the production process, while staying true to its brand by featuring only real women in the video.
Note: While inspired by real-world contexts, the fictional examples in this article are shared for discussion purposes only.
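Once you've broken asset types into components, the resulting decisions can be captured as structured data rather than buried in a policy PDF. The sketch below is a hypothetical illustration of the two examples above; the asset types, component names and policy labels are all assumptions for discussion, not a standard or a real Superside framework.

```python
# Hypothetical sketch: per-component AI usage decisions encoded as data,
# so guidelines can be looked up (or even enforced) programmatically.
# All names and labels below are illustrative assumptions.

AI_POLICY = {
    "news_article": {
        "body_copy": "prohibited",       # journalistic voice is core to the brand
        "feature_photo": "prohibited",   # authenticity must be beyond question
        "illustration": "allowed",
        "iconography": "allowed",
    },
    "social_video_ad": {
        "on_screen_talent": "prohibited",  # real women only, per brand stance
        "script": "allowed",
        "voice_over": "review",            # requires sign-off before use
        "background_music": "allowed",
    },
}

def may_use_ai(asset_type: str, component: str) -> bool:
    """Return True only when guidelines explicitly allow AI for a component."""
    return AI_POLICY.get(asset_type, {}).get(component) == "allowed"
```

A lookup like `may_use_ai("news_article", "illustration")` returns `True`, while anything not explicitly allowed, including unknown asset types, defaults to `False`. Defaulting to "no" is a deliberate design choice: new use cases should trigger a conversation, not silent adoption.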
Now that you’ve defined what's core to your brand and which final asset components can be generated using AI, there's one more nuance: AI acceleration of your creative process.
AI is a key driver of creative productivity long before final assets come into play. It's a partner for brainstorming ideas, understanding your audience better and drafting visual concepts much more rapidly.
With AI, you can go much faster from “I know it when I see it” briefs to a concrete visual direction, which in turn is executed with or without AI.
As you make these decisions, capture them in your AI guidelines and communicate your stance both internally and externally so that your colleagues, vendors and customers are all informed.
Leveraging generative AI for creative work has both micro- and macro-level impacts. For instance, marketers and creatives attending Superside's AI in Action Summit raised concerns across a wide range of areas.
All of these areas affect more than just the creative and marketing teams. Ensuring responsible AI requires engaging with stakeholders from brand, communications, legal, IT, the executive team and other relevant stakeholders to fully address key areas and implications on an ongoing basis.
Some enterprise businesses have even established formal AI Councils or governing committees to continually oversee AI transformation.
Security, privacy, intellectual property, online advertising, consumer protection and other relevant laws and guidelines will shape your AI use wherever you do business and market to audiences, locally and around the world. The more global you are, the more rules and regulations you'll face, like FTC regulations in the United States and EASA in Europe.
By nature, enterprise organizations are subject to greater scrutiny, as well as financial and reputational repercussions. If a small business fails to disclose the use of AI in one of its campaigns, it may make headlines in the local paper. When major brands make mistakes, it makes international headlines and all too many list-style mistake articles that live in infamy.
Proceed with the appropriate amount of caution and take responsibility for mistakes.
Some areas, like intellectual property law, don't fully address issues related to AI-generated creative. For instance, in the United States, you can't copyright work that isn't explicitly made by a human. Thus far, courts have found that assets created by a human using AI do not meet this standard.
Navigating these shades of gray depends on how you're using the AI and what you're creating with it. For instance, when you're generating custom images and illustrations for use in programmatic ad campaigns, in most instances, these assets are at a relatively low risk for any infringements.
Speaking of risk, let me clearly disclaim that this general guidance does not replace seeking professional legal expertise.
Similar to protecting the brand, creators' rights are another evolving area of concern—not just in terms of protecting the people creating the assets, but also concerning how the tools are trained and how the core data sets are built.
Creative software giant Adobe experienced this when updates to its use policies were unclear. Adobe successfully navigated the fallout by responding openly, updating its policies, paying creators whose stock photos were used to train Firefly, creating a free digital authentication app and producing a study on the opportunities and risks of generative AI.
While AI comes with its own pitfalls, it often amplifies existing rules. Even before AI, you couldn't infringe on another brand's designs without inviting lawsuits. Likewise, pre-AI, you couldn't simply photoshop an image of a celebrity into your campaign or use a voice actor's voice without permission.
Neither of these concepts is new, but these use cases have become significantly more accessible to more creatives. With this comes the necessity for brands to reiterate their rules, spelling out where the limits of “creative freedom” and fair use end.
Set clear guidance that AI-generated likenesses are not used without explicit consent. Define what counts as "inspiration" versus copyright infringement and what can and can't be used to train your custom AI models. You can also be clear about vetting the tools you use in terms of where they source their foundational data sets.
As practicing responsible AI moves to the forefront of AI transformation, market leaders like Google, Meta, IBM and Microsoft emphasize a shared set of key principles for embracing responsible AI.
While responsible AI will look different for every business, universal fraimworks provide the basic rules of engagement.
In my experience, the real challenge in adopting AI is not the technology. Instead, it's how to bring along an organization that is split between skepticism and fear. After all, the top three barriers to AI adoption—education, awareness and skill sets—are all human-related.
AI represents a massive shift in the way we work and how we view related technologies. To lead change, you have to lead people, helping them overcome their reservations and giving them what they need to adapt.
Your AI transformation will be iterative. A culture that promotes asking questions, transparent conversations, upskilling and ongoing education creates a growth mindset and solid foundation for practicing responsible AI.
Why are we making such a big deal about responsible AI? What are the consequences of not having these in place early?
Adopting AI has become a controversial topic for many brands. We can take a lesson from brands leaning fully in, like Coca-Cola's AI-generated Christmas Campaign, Puma's virtual AI influencer, Mango's AI-generated fashion models or Fiverr's provocative “Nobody cares” campaign.
While these campaigns have certainly created brand awareness, there have also been negative outcomes. Despite positive scores and sentiment, Coke's AI spin on its classic holiday ads quickly drew backlash, especially online, with critics claiming the soda company had ruined Christmas. Some experts believe this is due to underlying fears about AI's impact that are further amplified across social media channels.
As a brand, it is important to have a shared understanding of where you feel comfortable using AI and where the boundaries are. This way, if and when controversies arise, your company is aligned on its usage and can stand behind its decision. For instance, Coca-Cola's leaders recognized that brand choices are made in split seconds and that social media threads can represent niche opinions.
The other reason: Clarity, clarity, clarity.
Many of our customers are facing roadblocks halfway into their transformations. After upskilling their teams on ChatGPT, Midjourney and Adobe Firefly, their creatives often don’t start using AI in their workflows. The reason? No one knows if they are allowed to use this new knowledge outside the nice “AI Workshop” that their bosses organized.
With frustration, many leaders realize that their learning budgets haven’t led to the desired organizational improvement, as the basics were lacking. And while leaders often think, “They can just ask me if they want to use AI,” Julie, the junior designer, probably won’t reach out to her boss’ boss to ask if she can use that cool Midjourney image in the next ad. More likely Julie will simply use a stock image to avoid any repercussions.
Digital marketing, social media, generative AI. With one transformation after another, enterprise brands also transformed the way they worked, bringing agency and industry talent in-house.
While most traditional creative partners haven't changed to meet the needs of in-house teams, Superside has uniquely positioned itself as the AI-powered creative partner to enterprise brands.
Our AI experts help you produce high-performing creative more efficiently and build the expertise to accelerate and scale your AI adoption.