Embracing an
AI University

Today’s AI encompasses modern academic and consumer-focused generative AI tools that are being developed faster than ever before. As a leader in innovation and technology, FIU strives to provide students, faculty, and employees with the fundamental AI learning, teaching, and operational tools to enhance the overall student, faculty, and research experience.

Overview

What is Artificial Intelligence?

Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. It involves the development of computer systems capable of performing tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.  As AI continues to advance, it raises ethical and societal considerations, such as privacy concerns, job displacement, and the responsible use of AI systems.

Technologies like image recognition and classification are built on artificial intelligence at their core. AI is also transforming decision-making processes – such as predicting traffic light patterns or anticipating when your morning coffee will be ready.

Across industries ranging from finance to healthcare, artificial intelligence plays a significant role. Everyday applications include facial recognition, smart vehicles, mobile digital assistants, social and entertainment apps, online banking, and e-commerce platforms.

Statistics

97% of mobile users are using AI-powered voice assistants

83% of companies claim that AI is a top priority in their business plans

The US AI market is forecast to reach $299.64 billion by 2026

AI market size is expected to grow by at least 120% year-over-year

FIU’s Core Principles for the use of AI

Promote AI literacy among FIU Students, Faculty and Staff. 
Encourage the FIU community to seek out training and resources for AI best practices. 
Protect the privacy and data of the FIU community members and FIU systems. 
Promote transparency for the use of all forms of AI.  

Enhance FIU’s AI technological footprint.

Teaching & Learning

Use of AI Applications for Education

When using and acquiring AI-powered systems or applications, it’s critical to follow a thorough review process that includes evaluating the tool’s purpose, functionality, data usage, and alignment with FIU’s goals and privacy policies.

Below we’ve organized some resources to help Faculty and Students in the critical evaluation of AI for teaching and learning.

Technology Evaluation Group (TEG)

Resources for FIU approved AI (and other) technologies, policies, FAQs and other useful information.

Visit The TEG Page →

Artificial Intelligence Now

FIU Libraries’ AI Repository. A guide to AI information related to libraries and literacy, equipping students and educators with the knowledge and skills necessary to harness the power of AI responsibly and effectively.

Visit ‘Artificial Intelligence Now’ Repository →

Prohibited Technologies List

FIU IT Security’s list of prohibited technologies.

See The Prohibited Technologies List →

Evaluating AI Tools

Important questions to ask include:

  • What data does the system collect and how is it stored?
  • Is the data shared with third parties?
  • How does the AI make decisions, and is the process transparent and explainable?
  • Does the vendor comply with relevant security and privacy regulations?
  • Are there risks of bias or misuse?

Generative AI tools like ChatGPT, Perplexity, and others should be used with caution. These AI models may retain data provided by users, so any data provided to the model should be treated as public.

Faculty Resources

Integrating AI in Your Course

The Center for the Advancement of Teaching (CAT) provides FIU faculty with many services to assist their teaching and student learning, including the integration of AI within your course!

See the options offered below:

Student Resources

There are a growing number of AI learning opportunities for FIU students. Please check back regularly as this list will continue to grow.

 

AI-Focused Programs, Microcredentials/Badges & Courses at FIU

Programs Offered in Electrical and Computing Engineering
Programs Offered in Computing and Information Sciences
Programs Offered in the College of Business

FIU AI Badges/Microcredentials

Additional Offerings

Artificial Intelligence in Crime Analysis and Detection – Knight Foundation School of Computing and Information Sciences
CIERTA | Education

Artificial Intelligence Strategy for Business Leaders – Department of Information Systems and Business Analytics-College of Business, Executive Education

Advanced Hospitality Technology: Integrating AI and Machine Learning – Executive Education department of the Chaplin School of Hospitality & Tourism Management

Student Security

Artificial intelligence (AI) offers transformative potential for university research, but its adoption must be accompanied by stringent measures to ensure data security and privacy. Below are key guidelines and considerations for using AI responsibly in academic research. It is also important to only use products that have been vetted and approved for use at FIU.

Visit the TEG website to research approved software vendors →

AI & Student Conduct

For any questions related to student conduct or the honor code, please reach out directly to the Office of Student Conduct and Academic Integrity (SCAI).

At FIU, submitting AI-generated content as original work is considered plagiarism. As stated in the Student Conduct and Honor Code (Section 6, article ix):

“[Plagiarism is defined as] the submission of any work authored by another person or generated by an automated tool without proper acknowledgement of the source, whether that material is paraphrased or copied in verbatim or near-verbatim form.”

See The Student Conduct and Honor Code (PDF) →.

Research

Responsibility in AI Research

AI is a powerful tool for advancing FIU’s research but must be used responsibly across all disciplines. By adhering to a set of guidelines—focusing on data security, ethical practices, and institutional oversight—FIU can harness the benefits of AI while safeguarding sensitive information and upholding academic integrity.

FIU is working toward creating a variety of area-specific workgroups that will provide the necessary guidance, direction, and oversight for the responsible use of artificial intelligence in research.

Top 5 Industries with AI Job Growth

Technology

Cybersecurity

Finance

Healthcare

Manufacturing

Data Security and Privacy

Data Privacy and Security Best Practices

Understand Data Privacy Regulations

Researchers must comply with relevant regulations such as GDPR, HIPAA, or FERPA. Institutions should provide clear guidance on these requirements.

Data Minimization and Anonymization

Limit the data shared with AI models to the minimum required for analysis. Anonymize personally identifiable information (PII) to prevent misuse.
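As a rough illustration of data minimization and pseudonymization, the Python sketch below keeps only the fields an analysis needs and replaces PII with hashed tokens. The field names and salt are placeholders for illustration, not FIU conventions; a real pipeline would manage the salt as a protected secret.

```python
import hashlib

# Hypothetical PII field names; real datasets vary.
PII_FIELDS = {"name", "email", "student_id"}

def pseudonymize(value: str, salt: str = "replace-with-a-secret-salt") -> str:
    """Replace a PII value with a stable, non-reversible token."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def minimize_record(record: dict, needed_fields: set) -> dict:
    """Keep only the fields the analysis needs; pseudonymize any PII among them."""
    return {
        key: pseudonymize(record[key]) if key in PII_FIELDS else record[key]
        for key in needed_fields
        if key in record
    }

record = {"name": "Jane Doe", "email": "jdoe@fiu.edu", "gpa": 3.8}
safe = minimize_record(record, {"email", "gpa"})
# 'safe' carries the GPA plus a hashed email token, never the raw address.
```

Because the same input always maps to the same token, pseudonymized records can still be joined across datasets without exposing the underlying identity.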

For researchers submitting proposals that include the collection and retention of data, the IRB Frequently Asked Questions site provides helpful answers and guidance. 

Secure Deployment

Use secure, private cloud environments or on-premises servers to host AI tools, reducing exposure to external threats.

Encryption

Ensure all data transmissions to and from AI systems are encrypted using robust protocols. Employ secure channels like VPNs for added protection.
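As one small example of enforcing encryption in transit, the Python standard library's `ssl` module can build a client context that rejects unverified certificates and legacy protocols before any data is sent to an AI endpoint. This is a minimal sketch, not a complete hardening guide.

```python
import ssl

def secure_context() -> ssl.SSLContext:
    """Build a client TLS context that verifies certificates and hostnames."""
    ctx = ssl.create_default_context()            # sets CERT_REQUIRED + hostname checks
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy protocols
    return ctx

ctx = secure_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

A context like this can be passed to standard networking clients so that connections failing certificate or hostname verification are refused outright.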

Access Control

Implement strict access controls, including role-based permissions and multi-factor authentication, to restrict who can interact with sensitive data or AI systems.
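The core of role-based permissions can be sketched in a few lines: a mapping from roles to allowed actions, with everything else denied by default. The role names and actions below are illustrative, not FIU policy.

```python
# Minimal deny-by-default, role-based access sketch; roles, actions,
# and the permission map are hypothetical examples.
ROLE_PERMISSIONS = {
    "researcher": {"read_dataset", "run_model"},
    "admin": {"read_dataset", "run_model", "manage_users"},
    "student": {"run_model"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant an action only if the role explicitly lists it (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("researcher", "read_dataset"))  # True
print(is_allowed("student", "read_dataset"))     # False
```

The deny-by-default shape matters: an unknown role, or an action missing from the map, yields no access rather than silent permission.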

Ethical Use of AI in Research

Transparency

Disclose the use of AI in research processes, including its role in data analysis, literature reviews, or other tasks. Ensure that outputs are verified by human oversight to avoid inaccuracies or biases.

Accountability

Researchers remain responsible for their work, regardless of AI involvement. This includes ensuring ethical compliance and the accuracy of results.

Bias Mitigation

Be aware of potential biases in AI models and take steps to address them, especially when analyzing sensitive or diverse datasets.

Applications of AI in University Research

Literature Reviews

AI tools can streamline the process of identifying relevant studies, extracting insights, and summarizing findings. However, researchers must verify results due to risks like fabricated citations.

Data Analysis

Generative AI can analyze large datasets efficiently but should only be used with verified and cleaned data to avoid privacy breaches.

Collaborative Tools

Use AI for brainstorming, editing, and formatting research outputs while ensuring compliance with institutional privacy policies.

Institutional Guidelines

Approved Tools Only

FIU recommends that researchers use vetted AI tools that meet FIU’s security and privacy standards. Free public tools may lack adequate protections and should not be used for sensitive research tasks.

Periodic Audits

Regularly review AI systems to ensure compliance with privacy regulations and institutional policies.

Training Program

Researchers should be informed and trained on the ethical use of AI, focusing on its limitations, potential risks, and proper integration into academic workflows.

Addressing Broader Risks

Data Scraping Concern

Avoid using datasets obtained through unauthorized scraping of private information. Always ensure informed consent when collecting data from individuals.

For researchers submitting proposals that include the collection and retention of data, the IRB FAQs site provides helpful answers and guidance: https://research.fiu.edu/irb/faqs/

Federated Learning & Differential Privacy

Explore advanced techniques like federated learning (training models locally without centralizing data) and differential privacy (adding noise to data) to protect individual privacy while enabling meaningful insights.
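To make the "adding noise" idea concrete, here is a minimal sketch of the Laplace mechanism for a counting query, using only the Python standard library. The function names are ours, and a production system would use a vetted differential-privacy library rather than hand-rolled sampling.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-transform sampling."""
    u = 0.0
    while u == 0.0:          # avoid log(0) at the distribution's edge
        u = random.random()
    u -= 0.5                 # u is now in (-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(values, epsilon: float = 1.0) -> float:
    """Release len(values) with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace(1/epsilon) noise suffices.
    """
    return len(values) + laplace_noise(1.0 / epsilon)

noisy = private_count(["rec1", "rec2", "rec3"], epsilon=0.5)
# 'noisy' is close to 3 but randomized, masking any single individual.
```

Smaller epsilon values add more noise and give stronger privacy; the trade-off between accuracy and protection is tuned per analysis.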

Technology & Innovation

Microsoft 365 Copilot
Analytics AI Assistant (Proof-of-Concept)
High Performance Computing (HPC) Refresh

Copilot is an advanced AI-powered assistant developed by Microsoft, designed to enhance productivity and streamline workflows for users across various platforms. Integrating cutting-edge machine learning and natural language processing, Copilot is a versatile tool that assists users in writing, coding, and performing complex tasks with greater efficiency. While Microsoft offers a free version of Copilot, FIU’s M365 Copilot licenses extend the functionality and use of generative AI.

Copilot is the only approved tool at FIU for the safe use of FIU data. Please ensure that you use Copilot for all your data-related tasks to maintain security and compliance.

For inquiries about accessing FIU M365 Copilot, please email it@fiu.edu.

The Analytics AI Assistant Proof-of-Concept will leverage data-driven insights to enhance decision-making, optimize resource allocation, improve student outcomes, and support faculty research by automating analysis, identifying trends, and providing real-time recommendations.

The initiative, which began in late 2023, focuses on developing AI services across a variety of AI platforms and technologies, from Oracle Cloud Infrastructure (OCI) to OpenAI. Additionally, the proof-of-concept will:

  • Design and develop use cases with FIU data
  • Test scenarios for different types of users
  • Focus on AI and analytics for administrative benefits
  • Include generative AI prompts for frequently asked questions
  • Determine continuous improvements to related technologies over time

The goal of FIU’s High Performance Computing (HPC) Refresh initiative is to develop and brand an AI platform for our students, faculty, and staff that will provide the following benefits:

  • Building AI Clusters for faculty, researchers, and students
  • On-premises use for curriculum development and research
  • Potential OpenAI integration for on-demand use
  • Minimizing per license fee and overall costs

Business Processes

AI for Employees
Protecting FIU's Data and User Privacy
Procurement of AI-Based Systems and Products
Generative AI can assist employees by boosting productivity through task automation, improving communication with writing and translation tools, and enhancing creativity with idea generation and design support. It aids in learning by offering personalized tutoring and real-time assistance, while also supporting better decision-making through data analysis and scenario modeling. In customer service, it powers chatbots and helps prioritize support tickets. By reducing time spent on repetitive or low-value tasks, generative AI allows employees to focus on more meaningful work, ultimately helping reduce burnout and improve work-life balance.

The Division of Information Technology, along with other academic, administrative and research teams on campus are working towards defining and procuring a comprehensive set of purpose-built AI tools for students, faculty and staff.

Protecting FIU’s data and user privacy is crucial for maintaining trust, ensuring compliance with legal and institutional policies, and safeguarding sensitive information from breaches or misuse. Employees play a key role in upholding these standards, as mishandling data can lead to serious consequences such as identity theft, financial loss, reputational damage, and legal penalties for both the individual and the university. Prioritizing data protection helps create a secure environment where academic, research, and administrative activities can operate safely and effectively.

Generative AI tools like ChatGPT, Perplexity, and others should be used with caution. These AI models may retain data provided by users for model training and engineering enhancements, so any employee information, financial data, or other institutional information provided to the model should be treated as public.

If you have any questions or concerns around the use of academic, employee or research data in combination with consumer-based AI applications, please contact the FIU IT Security Office at itso@fiu.edu.

When procuring AI-powered systems or applications, it’s essential to follow a thorough review process that includes evaluating the tool’s purpose, functionality, data usage, and alignment with institutional goals and privacy policies.

Key questions to ask include:

  • What data does the system collect and how is it stored?
  • Is the data shared with third parties?
  • How does the AI make decisions, and is the process transparent and explainable?
  • Does the vendor comply with relevant security and privacy regulations?
  • Are there risks of bias or misuse?

Involving FIU IT, Technology Evaluation Group, General Counsel, Compliance and data privacy teams in the review ensures responsible and ethical AI adoption.

Resources

LinkedIn Learning
Training Resources
Microsoft Copilot
Online Training at learn.microsoft.com
Oracle AI
Training on Oracle University

FIU’s partnership with LinkedIn Learning provides all Panthers with access to thousands of AI courses and even offers certain certifications!

 

Microsoft offers free, in-depth online training on effectively using and extending Copilot for academic and administrative purposes. For inquiries about accessing FIU M365 Copilot, please email it@fiu.edu.

Oracle University provides a range of role-specific learning paths and specialized certifications designed to help organizations fully leverage artificial intelligence within Oracle Cloud Infrastructure (OCI) for enhanced innovation and efficiency in enterprise applications. Through digital training, OCI users gain the knowledge to implement best practices for OCI Generative AI, OCI AI Services, OCI machine learning solutions such as OCI Data Science, and OCI AI infrastructure. Additionally, users of Oracle Cloud Applications will explore AI fundamentals and learn how to integrate AI capabilities within Oracle Fusion Applications.

Events

FAQs

What is Generative AI?

Generative AI refers to a class of artificial intelligence systems that have the ability to generate new content, whether it be text, images, audio, or other forms of data. These systems are trained on large datasets and learn patterns and structures within the data, enabling them to create new, realistic content on their own.

What is a Large Language Model (LLM)?

A Large Language Model (LLM) is a type of artificial intelligence trained to understand and generate human language. An LLM reads and analyzes large amounts of data (books, websites, articles, and other sources), learns patterns in how people write and speak, and ultimately responds to prompts, answers questions, writes essays, translates languages, generates code, and much more.

LLMs in the market:

  • OpenAI ChatGPT
  • Google Gemini
  • Anthropic Claude
  • Meta’s LLaMA
  • Mistral

There are also companies like Perplexity that directly connect and integrate searches and inquiries with other LLMs like OpenAI’s GPT-4, Claude, and LLaMA.

Are there any privacy or personal data-related concerns around Generative AI?

Generative AI raises several privacy and personal data concerns due to its reliance on large datasets, which may include sensitive or personal information collected without explicit consent. Users may unknowingly share personal data when interacting with AI tools, risking potential exposure or misuse. Additionally, generative AI can unintentionally produce responses containing confidential or real-world details present in its training data. Risks such as deepfakes, identity theft, and misinformation are also prevalent, as AI-generated content can be used to manipulate information or impersonate individuals.

Other concerns involve data security, compliance with privacy regulations like GDPR and CCPA, and the ethical collection and usage of personal data. Many users are unaware that their input may be stored and used to improve AI models, raising transparency and consent issues. If AI models are trained on biased data, they can perpetuate discrimination and make unfair predictions. Mitigating these risks requires transparency about data use, user control over shared information, stronger regulations, and security measures to protect sensitive data.

If you have any questions or concerns around the use of academic, employee or research data in combination with consumer-based AI applications, please contact the FIU IT Security Office at itso@fiu.edu.

What AI and non-AI applications are prohibited from being used at FIU?

The Department of Management Services has issued a list of prohibited applications, based on Section 112.22, Florida Statutes.
Prohibited Applications List on MyFlorida.com

As a student, can I use ChatGPT for my classes?

Before using any AI tool for your coursework, please review your syllabi and check with your professor for guidance.

As a professor, whom should I contact for any generative AI or AI tools-related questions?

Please contact the Center for the Advancement of Teaching (CAT). CAT can provide additional information on the proper way to incorporate generative AI into courses and advise professors on the tools that may provide the greatest value for their courses.

Teaching with AI on CAT Website

How can I get access to Microsoft Copilot?

Microsoft 365 Copilot is only available for FIU faculty and staff. If you are interested in a license, please email it@fiu.edu.