Rev. 2/2/26
Marketing and communications professionals across the University of Nebraska–Lincoln play a critical role in amplifying stories, protecting the university's reputation and crafting messages that reflect the heart of Nebraska and its people. AI, when used thoughtfully, can support this mission, not replace it.
The university's long-term vision remains rooted in storytelling, collaboration, and service. Exploring how AI aligns with these goals will continue as we learn, experiment, and determine where it can responsibly enhance our work.
Goals for incorporating AI:
- Efficiency: Streamlining repetitive tasks to free up time for strategic thinking.
- Accuracy: Supporting fact-checking, proofreading and data-driven insights.
- Ideation: Generating creative concepts, headlines and visual inspiration.
Current uses include brainstorming, layout refinement and project-specific experimentation. AI tools and technologies are continually evolving. Stay adaptable and be prepared to adjust strategies as new AI innovations emerge.
Human Oversight and Responsible Use
AI will never replace human judgment or the work we produce. Individuals are responsible for:
- Reviewing AI-assisted work for accuracy and appropriateness
- Understanding that AI output reflects the data a model was trained on and the information provided, so content can be inaccurate, misleading, fabricated or contain copyrighted material; you are responsible for any content you publish or share that includes AI-generated material
- Disclosing AI involvement when relevant
- Ensuring any AI-generated content adheres to Nebraska's brand guidelines
- Avoiding over-reliance: AI should enhance, not replace, human creativity and institutional knowledge
- Encouraging open dialogue about AI use within teams
All AI-generated content must be reviewed before publication so that our work remains authentic, ethical and aligned with brand standards, university values and institutional priorities.
Privacy, Security and Compliance
The office will follow University of Nebraska System policies and best practices to ensure responsible AI use, including protecting user data, maintaining compliance and minimizing risk. Refer to the NU System's statement on generative AI.
Key guidelines include:
- Data protection protocols: Do not input sensitive information (student records, personal identifiers, unpublished research, finance, HR, medical information, etc.) into public or third-party AI tools. Information shared with generative AI tools using default settings is not private and could expose proprietary or sensitive information to unauthorized parties.
- Avoid uploading proprietary assets (e.g., unreleased campaign materials or internal strategy documents) unless the tool is secure and authorized.
- Compliance
- FERPA: AI tools must not be used to process or generate content that includes student education records.
- Copyright: When using AI to generate or remix content (e.g., images, copy), ensure that outputs do not infringe on copyrighted materials.
- Accessibility: AI-generated content (e.g., alt text, transcripts, visual designs) must meet accessibility standards for digital communications.
- Authorized vs. unauthorized tools: Institutions may provide access to approved AI tools with enterprise-level licenses, which offer more robust data protection. This reduces the risk of students and staff using unauthorized, public-facing tools that pose security and privacy risks.
- Enterprise AI tool seats at the University of Nebraska–Lincoln are currently limited. While we use public-facing tools, keep in mind that any sensitive or confidential data entered could become part of the model's public knowledge base. For this reason, assume that information entered into these tools is not private.
Content-Specific Guidelines
Photography / Image Usage
Acceptable Uses
- It is acceptable to use generative AI tools as helpers in image-creation processes, such as editing an existing, non-AI image in Photoshop with that program's AI-enabled features. Use judgment as to whether AI-enabled edits alter the image significantly enough that they should be ethically disclosed.
Unacceptable Uses
- AI-generated and third-party stock imagery should not be used to represent people, places and experiences at the university. Audiences expect our content to be authentic, and that expectation must be honored.
- Content that is wholly generated using AI, such as an image created using Adobe Stock AI Generator or Midjourney, is discouraged as it is inauthentic.
- An exception is using AI as a tool to create conceptual or illustrative content rather than representative content (e.g., an image of an amoeba, or a non-representational background image). Use AI only when original images or graphics are not available.
- Do not use AI to generate images of Herbie, Lil' Red or other university trademarks.
Note: If AI-generated or stock photography is used for final content, the content should be clearly labeled.
Written Content (web, print, news, marketing)
Acceptable Uses
- Brainstorming story angles, headlines, subheads or navigation structures
- Summarizing background documents or reports (provided they contain no sensitive or confidential data) to inform story development
- Assisting with editing, word choice or conciseness
- Drafting initial versions of meeting notes, outlines, etc.
Unacceptable Uses
- Publishing AI-generated content without human review
- Inputting personal, student or proprietary university data into public AI tools
- Using AI to bypass editorial standards or misrepresent institutional values
- Relying on AI to produce final messaging for high-stakes or public-facing campaigns
Social Media
Acceptable Uses
- Drafting post ideas or adapting messages for platform-specific tone
- Enhancing clarity or engagement of captions
- Generating content calendars or engagement prompts, or brainstorming content ideas
- Summarizing news stories (provided they contain no sensitive or confidential data) to inform content development
- Using built-in AI features of social media scheduling tools (such as Buffer), with human review
Unacceptable Uses
- Posting AI-generated content without fact-checking or brand review
- Misrepresenting AI-generated content as authentic campus experiences
- Using AI-generated images or video
Video
Acceptable Uses
- Tools used for cleaning up images/audio (unwanted background objects, distorted audio)
- Assisting with idea generation (for in-office use only)
- Captions (with human review afterwards)
Unacceptable Uses
- AI-generated and third-party stock video should not be used to fully represent actual people, places and experiences at the university. Audiences expect our content to be authentic, and that expectation must be honored. Content that is wholly generated using AI is discouraged as it is inauthentic.
- An exception is using AI as a tool to create conceptual or illustrative content rather than representative content (e.g., footage of an amoeba, or a non-representational background). Use AI only when original footage or graphics are not available.
- Placing, replacing, or altering individuals in scenes
- Creating “deep fakes” or misleading edits
- Dubbing audio to misrepresent speaker identity or intent
- Completely AI-generated content for video is highly discouraged
- If AI-generated video is used extensively in final content, it should be clearly labeled within the video edit and given context on the platforms where it is distributed.
Audio
Acceptable Uses
- Trimming or extending music to fit video duration
- Generating transcripts or captions for accessibility
- Tools for cleaning up audio quality
Unacceptable Uses
- Creating audio that mimics someone’s voice to mislead
- Using AI to fabricate interviews or statements
AI Checklist
When using AI in your work, ask:
- Is the purpose clear? Am I using AI to support creativity, efficiency or exploration, not to replace human judgment?
- Is the content reviewed? Will a human review and approve all AI-assisted outputs before they're shared or published?
- Is it brand-aligned? Does the content reflect UNL's tone, values and messaging standards?
- Is it accurate and inclusive? Have I checked for bias, misinformation or misrepresentation?
- Is it secure? Am I avoiding the use of sensitive or confidential data in public AI tools?
- Is it compliant? Does the use follow FERPA, copyright and accessibility policies?
- Is it transparent? Have I disclosed AI involvement when appropriate (internally or externally)?
- Is it shared? Have I communicated my AI use or learnings with my team or campus partners?