link to the published version: in IEEE Computer, September, 2025; archive copy



An Overview of Generative AI Acceptable Use Policies by Universities with Top 25 Computer Science Programs (2025)

Hal Berghel

ABSTRACT: The overwhelming popularity and widespread use of Generative AI have forced universities to revise their academic policies. This is a brief summary of the changes made by universities with leading computer science programs as of summer 2025.

Academics have been struggling to find an acceptable use strategy for Generative AI (GenAI) for several years. Just a few years ago, universities were overcome with ambivalence. The fervor over GenAI changed that. Within a few months of the release of ChatGPT 4 in early 2023, faculty began to take notice of well-written student reports, program code, and projects that were inconsistent with the same students' more controlled and monitored exam performance. At that point, faculty senates and university administrations felt compelled to address the issue formally. The resulting rush to judgment typically fell short of universal satisfaction, but it was a start. At this writing, the need for a carefully articulated policy on acceptable GenAI use is beyond question. However, the exact purpose, scope, and detail of such policies remain in flux as institutions attempt to deal with the most recent implications of GenAI. Institutions grapple with such issues as:

1. Which units should take primary responsibility for defining GenAI policy (e.g., libraries, individual colleges and schools, centers for teaching and learning, offices of student affairs/guidance, the provost's office)?

2. Which stakeholders should be affected by the policy (e.g., students, faculty, staff, affiliates)?

3. What penalties might be imposed for policy violations?

4. How much flexibility should be given to the instructor in interpreting institutional policies in the classroom?

5. What would constitute transparency in the disclosure of GenAI use?

6. What would qualify as acceptable use of GenAI in various academic domains (e.g., research, publication, student scholarship, administrative reporting)?

 

Thus, GenAI institutional policies remain works in progress. So, with the initial wave of GenAI enthusiasm behind us, this may be a good time to revisit the policies that we've set over the past two years, compare our work with our peers, and take an important second pass at the process to achieve greater clarity and consistency with institutional mission. My intention is to both encourage this process and make it more convenient.

The first sidebar provides twenty-five links to websites that outline the acceptable Generative AI use policies of some major universities. Our filter was the UK Times Higher Education Supplement ranking of U.S. universities with the top 25 undergraduate computer science programs ( https://www.timeshighereducation.com/student/best-universities/best-universities-us-computer-science-degrees ). Although most top-25 rankings of CS programs are very similar, this one has the advantage of having been created outside the US, and is perhaps more objective for it. I verified that the usnews.com and csrankings.org lists produce essentially the same rankings.

Our goal is twofold: first, to direct attention to important issues uncovered by reviewing the policies developed by the leading universities in our field, and second, to provide links to the policies themselves to facilitate convenient perusal. The first goal provides a contextual background against which we may place our own policies and possibly enhance them. The second provides convenient access to the source data as well as a rich resource of boilerplate that may suggest improvements to our existing policies. In addition, we provide a few illustrative syllabus language recommendations from selected universities in a second sidebar.

CONSENSUS

Although some institutions refer to artificial intelligence resources generally, the primary concern of institutions is without question the use of GenAI. Not surprisingly, the common objective of these policies is to support the “responsible” use of GenAI as a tool for educational enhancement and enrichment. While some institutions emphasized student use of GenAI in their policies, [25] others took an inclusive approach by drawing students, faculty, staff, and affiliates under the same policy, [24] and some include the administration. [8] Some institutions recognized that distinctions needed to be made among GenAI use in instruction, scholarship, publication, and staff work. [17] All institutions were careful to emphasize that GenAI output is not an acceptable substitute for scholarship. And all policies emphasized that users are required by existing standing rules and policies to observe institutional guidelines for academic integrity and intellectual honesty. In addition, the following provisions were added by some, but not all, universities. In some cases, I have provided a link to exemplars of institutions whose policy is particularly illustrative of a provision. Note that what appears below is intended as a composite, not a literal, representation of the various provisions.

A. RATIONALE: Typically, a non-prescriptive orientation was taken, encouraging community members to carefully consider:

1. Recognition that GenAI is here to stay [12]
2. Recognition that the use of GenAI must be compatible with institutional mission and standards [common to all]
3. Consideration of the legal and ethical implications of GenAI use [8]
4. Consideration of the possible impact of GenAI on pedagogy, [18] by asking questions such as:
   a. Is GenAI providing something that is being assessed? [18]
      1. YES – GenAI is not appropriate [e.g., using GenAI on TOEFL tests that measure language skills]
      2. NO – GenAI may be appropriate [e.g., using GenAI to assist with the narrative on a math or science project]
   b. Does use of GenAI undermine integrity standards?
   c. Is the very use of GenAI something that the student would prefer to remain undetected?

B. POLICY SCOPE
1. RANGE OF ACTIONS
   a. Proclivity:
      1. Opposition – policy tends to view the use of GenAI in much the same way as plagiarism [23]
      2. Advocacy – policy tends to view the use of GenAI in much the same way as the use of electronic devices and the Internet [2]
      3. Balanced [3] [5] [12]
   b. Suggested applications of acceptable GenAI use (e.g., generating ideas, editing, translating, outlining, brainstorming, summarizing)
   c. Provisional acceptance (as long as students document the use of GenAI and clearly distinguish between their work and GenAI output) – (cf. G. TRANSPARENCY, below)
   d. OPT IN vs. OPT OUT
      1. INCLUSIVE – GenAI is allowed only with instructor permission [7][23]
      2. EXCLUSIVE – GenAI is allowed unless specifically prohibited by the instructor [2]

C. FULL OWNERSHIP PROVISION: Students are responsible and accountable for academic product as overseen by officials/instructors. Responsibility includes:
1. Ensuring the accuracy of content
2. Compliance with institutional, state, and federal rules and regulations [24] that relate to
   a. Data privacy [1][9]
   b. Information security [1][6]
   c. Equal access [5][24]
   d. Confidentiality restrictions (e.g., third-party information) [1][10]
   e. Compliance with intellectual property laws (patent/copyright) [1][9]
   f. Avoiding biases, stereotypes, and barriers for protected classes [10]
   g. Respecting institutional commitment to DEI (anachronistic?)
3. Responsibility to report abuses regarding [11]
   a. Plagiarism
   b. Cut-copy-paste assemblages
   c. Offensive content
4. Common theme: any information provided to a GenAI tool should be assumed to be public

D. COMPLIANCE PROVISION: GenAI use must comply with all applicable institutional rules and relevant federal, state, and local laws (e.g., FERPA, HIPAA, state privacy laws)

E. COMPATIBILITY PROVISIONS (common theme, but with differing details and emphasis)
1. GenAI policies must align with institutional academic mission and governance policies as appropriate (e.g., at campus-wide, college, school, and department levels). Course policies must be compatible with
   a. Disclosed course learning outcomes and objectives
   b. Course goals and expected competencies

F. FLEXIBILITY PROVISIONS
1. Instructors are given considerable latitude in defining policies for their courses, including distinguishing between use of GenAI for
   a. graded vs. ungraded assignments
   b. in-class vs. take-home assignments
   c. required reports and presentations
   d. exams
   e. group projects
2. However, in all cases, the instructor's expectations must be clearly articulated to the students and consistent with institutional principles

G. TRANSPARENCY PROVISIONS
1. Categories
   a. Academic projects and assigned course-related activities, where submission without disclosure of GenAI use may be prohibited
   b. Research-related use and use relating to performance of university duties
2. Proposed standards for disclosure (varies widely between institutions)
   a. Does GenAI output qualify as source material?
      i. YES – citations provide the appropriate transparency [2][11]
      ii. NO – disclosure of use is adequate [4]
   b. What constitutes acceptable disclosure?
      i. Citations [15]
      ii. Generic disclosure [9][3]
      iii. Clear identification of GenAI content [3]
      iv. Providing a list of AI tools used [15][19]
   c. Disclosure should include specific details of GenAI use (e.g., platform, prompts/queries/keywords used, sample output, session dates, etc.) [3][11]
   d. Disclosure should specifically highlight content derived from GenAI in submitted documents (including the possible use of document comparators) [11]
   e. Disclosure should treat GenAI content as one would a quotation from a published source [21]
   f. Disclosure should include a link to the imported GenAI content [9]
   g. Disclosure should include academic integrity affirmations that are implied by submission [8] [10] [24], including
      i. Acknowledgement that the student has verified GenAI claims
      ii. Acknowledgement that the student has respected legal and ethical standards required by the institution
      iii. Acknowledgement that the student takes full responsibility for content
      iv. Acknowledgement that the student agrees to retain records relating to GenAI use (especially when personally identifiable information (PII) was involved) [10]
      v. Suggested minimalist verbiage: “this doc was created with assistance from ChatGPT4. For further information, contact the author” [4]

H. PROHIBITIONS
1. Prohibitions regarding the submission of data to GenAI platforms:
   a. Entering protected data without appropriate internal review (e.g., FERPA-protected data, intellectual property, trade secrets, export-controlled data) [24][25]
   b. Entering data for which use has not been authorized [9][10]
   c. Submitting any input that might produce illegal (or unethical) content (e.g., computer malware, deepfakes) [26]
   d. Entering sensitive data (e.g., aerial photographs of secure facilities, topographic maps of environmentally sensitive geography) [26]
   e. Entering PII on individuals without their permission [8][9]
2. Prohibitions regarding the use of GenAI output
   a. Use of GenAI program code without institutional review [5][9]
   b. Use of GenAI to circumvent institutional policies on harassment, stalking, etc. [26]
3. Prohibitions regarding the use of GenAI (aka plagiarism) detectors
   a. Prohibited by the institution [21]
   b. Discouraged by the institution [12]

I. PENALTY PROVISIONS AND CONSEQUENCES FOR VIOLATIONS [10][15][6]
1. Typically, a student-centric emphasis, with little or no consideration for research, faculty, and administrative use
2. Requirement that instructor-imposed student penalties for violations be explicitly stated (e.g., in the syllabus)
3. Tendency to subsume penalties under pre-existing policies with no particular mention of GenAI [9]

DISCUSSION:

As mentioned above, a consistent philosophy runs through the twenty-five GenAI policies reviewed, but the details differ in interesting ways. For example, Princeton resists the temptation to unilaterally treat GenAI output as a ‘source document': “Generative AI is not a source… because the output is not created by a person. If generative AI is permitted by the instructor, students must disclose its use rather than cite or acknowledge the use, since it is an algorithm rather than a source.” ( https://rrr.princeton.edu/students-and-university/24-academic-regulations , section 2.4.7) This aligns with reservations held by major publishers of academic scholarship. So far as we can tell, Princeton is unique in weaving an important epistemological issue into its GenAI policy. We mention in passing that discussion of the core issue of whether trained large language models should be accepted as reliable knowledge sources is noticeably absent from the GenAI policies under review. Princeton should be commended for even asking the question. Interestingly, the Princeton online library guide recognizes that some publishers may treat GenAI tool creators as “authors” or even “publishers,” which seems somewhat at odds with the spirit of the general statement referenced above. This epistemological discord is a testament to academe's lack of preparedness for the onslaught of GenAI, and to how difficult it is to come to terms with the implications of a new technology de novo.

There is a wide variety of policies regarding acceptable disclosure. Carnegie Mellon recommends generic wording such as “I generated this work through ChatGPT and edited the content for accuracy.” [3] The University of Washington, on the other hand, recommends that a record of GenAI output be retained for possible subsequent review [10] – a records-retention policy somewhat reminiscent of the Arthur Andersen document-retention controversy during the Enron collapse. Some universities, e.g., the University of Pennsylvania and Columbia University, require listing the AI tools involved in student projects. [15][10] These are interesting alternatives worthy of consideration.

With regard to data privacy, Cornell offers that “any information you provide to public generative AI tools is considered public and may be stored and used by anyone else.” [8] This reminds me of the 1970s mantra ‘don't put anything in email that you wouldn't post on your office door.' Good advice, then and now.

I am pleased to report that the University of California, San Diego was exemplary in tying its GenAI policy to pedagogy. UCSD encourages every student to ask two questions before using AI tools: 1. Is the resource/tool doing the thing for you that is being assessed? 2. Is the resource/tool allowed by the instructor? [18] They clearly want the student to distinguish between the use of an automated spelling checker for a social studies term paper and the use of one to take a spelling test. A point well taken and, once again, worthy of consideration.

A further issue concerns the use of ‘detectors' to disclose the unacknowledged use of GenAI in submitted work. Most of the surveyed policies avoided the issue. Some discouraged detector use (“Trusting AI detectors or trying to otherwise ‘catch' students using generative AI tools may also lead to unproductive, adversarial relationships with students.” [12]), while another seemed to regard it as perfidious (“UM does not currently support the use of surveillance and plagiarism detection tools as they cannot be reliably counted upon.” [21]). This contrast illustrates that the issue deserves serious consideration.

The University of California, Los Angeles is notable for making recommendations for GenAI developers, such as: 1. GenAI systems should be regularly evaluated for bias, fairness, discrimination, etc., to reduce the possibility of social harm; 2. GenAI systems should be transparent about how they make decisions; and 3. GenAI systems should be used to enhance positive social change and encourage sustainability and environmental responsibility. [9] While this is noteworthy, it is unlikely to have much effect on its own unless universities share this concern in their policies.

Finally, Georgia Tech included the following reality check in its GenAI policy: [12]

“Generative AI tools are here to stay.”

CONCLUSION

Our survey is designed to emphasize the issues that leading institutions find important in forming GenAI policies. While this is a useful starting point, much more can be gleaned from a more careful analysis of these policies, especially regarding geographical distribution, demographics, institutional missions, differences between professional schools and those that emphasize the sciences and humanities, etc. In preparing this survey, we noted that policy provisions seem to cluster around some of these characteristics. It would be interesting to discover and explain this clustering.

In addition, some under-represented policy provisions seem to us to deserve more emphasis – such as the epistemological question of whether GenAI output should qualify as a scholarly source. Another example is the concern that the University of Wisconsin's policy language devotes to the use of GenAI for producing malware or for violating civil rights by harassing, stalking, doxing, bullying, etc. [ https://it.wisc.edu/generative-ai-services-uw-madison/generative-ai-uw-madison-use-policies/ ]. As far as we can tell, this issue was largely ignored by the twenty-five policies we covered, even though it seems to be an issue worth considering.

Our impression from this survey is unequivocal in one sense: we are convinced that much more work remains to be done if we are to make GenAI policies responsible, realistic, balanced, and optimally effective. The consequences of GenAI use are so far-reaching that anything less than serious study and a national effort would be an injustice to education.

 

ACKNOWLEDGEMENT: We express appreciation to Ernesto Dones Sierra for his assistance with data collection.

<<BEGIN SIDEBAR #1>>

SELECTIVE LINKS TO UNIVERSITY POLICIES ON GENERATIVE AI from the top-25 U.S. universities listed in the March 5, 2025, issue of the UK Times Higher Education Supplement (links active as of June 2025)

•  MIT: https://ist.mit.edu/ai-guidance

•  Stanford: https://tlhub.stanford.edu/docs/course-policies-on-generative-ai-use/

•  Carnegie Mellon University: https://www.cmu.edu/teaching/technology/aitools/academicintegrity/index.html

•  Princeton University: https://libguides.princeton.edu/generativeAI/disclosure

•  University of California, Berkeley: https://rtl.berkeley.edu/ai-teaching-learning-overview

•  Harvard University: https://oue.fas.harvard.edu/ai-guidance

•  California Institute of Technology (Humanities & Social Sciences): https://www.hss.caltech.edu/hss-policies/hss-policy-on-generative-ai

•  Cornell: https://it.cornell.edu/ai/ai-guidelines

•  University of California, Los Angeles: https://genai.ucla.edu/guiding-principles-responsible-use

•  University of Washington: https://it.uw.edu/guides/security-authentication/artificial-intelligence-guidelines/

•  University of Illinois Urbana-Champaign: https://ldlprogram.web.illinois.edu/academic-integrity-statement/

•  Georgia Institute of Technology: https://sites.gatech.edu/bfhandbook/requirements-for-developing-generative-ai-tool-policies-in-wcp-courses/

•  Yale University: https://poorvucenter.yale.edu/AIguidance

•  Johns Hopkins University: https://it.johnshopkins.edu/ai/

•  Columbia University: https://provost.columbia.edu/content/office-senior-vice-provost/ai-policy [relatively content-light]

•  New York University: https://teachingsupport.hosting.nyu.edu/teaching-guides/teaching-with-genai/ [a curated source document broken out by colleges/schools/units]

•  University of Texas, Austin: https://ctl.utexas.edu/generative-ai-teaching-and-learning-policies

•  University of California, San Diego: https://ucsd.libguides.com/AI/academicintegrity

•  University of Pennsylvania: https://cetli.upenn.edu/resources/generative-ai/course-policies-communication/

•  University of Chicago: https://genai.uchicago.edu/about/generative-ai-guidance

•  University of Michigan: https://genai.umich.edu/resources/faculty/course-policies

•  Purdue University: https://www.purdue.edu/teaching-learning/instructors/ai.php

•  University of Massachusetts Amherst: https://www.umass.edu/studentsuccess/guidance-generative-artificial-intelligence

•  University of Maryland: https://ai.umd.edu/resources/guidelines

•  Duke University: https://lile.duke.edu/ai-and-teaching-at-duke-2/artificial-intelligence-policies-in-syllabi-guidelines-and-considerations/

<<END SIDEBAR #1>>

<<BEGIN SIDEBAR #2>>

 

 

APPENDIX: Sample GenAI syllabus language recommendations from selected universities.

 

•  Carnegie Mellon University [3]

The following examples represent a range of options one could adapt or adopt, based on their teaching context and course's student learning objectives.

Example 1: Students may NOT use generative AI in any form.

To best support your own learning, you should complete all graded assignments in this course yourself, without any use of generative artificial intelligence (AI). Please refrain from using AI tools to generate any content (text, video, audio, images, code, etc.) for an assignment or classroom exercise. Passing off any AI generated content as your own (e.g., cutting and pasting content into written assignments, or paraphrasing AI content) constitutes a violation of CMU's academic integrity policy. If you have any questions about using generative AI in this course please email or talk to me.

Example 2: Students may NOT use generative AI in any form.

I expect that all work students submit for this course will be their own. I have carefully designed all assignments and class activities to support your learning. Doing your own work, without human or artificial intelligence assistance, is best for your achievement of the learning objectives in this course. In instances when collaborative work is assigned, I expect for the submitted work to list all team members who participated. I specifically forbid the use of ChatGPT or any other generative artificial intelligence (AI) tools at all stages of the work process, including brainstorming. Deviations from these guidelines will be considered violations of CMU's academic integrity policy.

 

Note that expectations for “plagiarism, cheating, and acceptable assistance” on student work may vary across your courses and instructors. Please ask me if you have questions regarding what is permissible and not for a particular course or assignment.

Example 3: Students are fully encouraged to use generative AI.

I encourage students to explore the use of generative artificial intelligence (AI) tools, such as ChatGPT, for all assignments and assessments. Any such use must be appropriately acknowledged and cited, following the guidelines established by the APA Style Guide, including the specific version of the tool used. Submitted work should include the exact prompt used to generate the content as well as the AI's full response in an Appendix. Because AI generated content is not necessarily accurate or appropriate, it is each student's responsibility to assess the validity and applicability of any generative AI output that is submitted. You may not earn full credit if inaccurate, invalid, or inappropriate information is found in your work.

Example 4: Students are fully encouraged to use generative AI.

You are welcome to use generative AI programs (ChatGPT, DALL-E, etc.) in this course. These programs can be powerful tools for learning and other productive pursuits, including completing some assignments in less time, helping you generate new ideas, or serving as a personalized learning tool.

•  Harvard [6]

Below is sample language you may adopt for your own policy. Feel free to modify it or create your own to suit the needs of your course.

A maximally restrictive draft policy:

We expect that all work students submit for this course will be their own. In instances when collaborative work is assigned, we expect for the assignment to list all team members who participated. We specifically forbid the use of ChatGPT or any other generative artificial intelligence (AI) tools at all stages of the work process, including preliminary ones. Violations of this policy will be considered academic misconduct. We draw your attention to the fact that different classes at Harvard could implement different AI policies, and it is the student's responsibility to conform to expectations for each course.

A fully-encouraging draft policy:

This course encourages students to explore the use of generative artificial intelligence (GAI) tools such as ChatGPT for all assignments and assessments. Any such use must be appropriately acknowledged and cited. It is each student's responsibility to assess the validity and applicability of any GAI output that is submitted; you bear the final responsibility. Violations of this policy will be considered academic misconduct. We draw your attention to the fact that different classes at Harvard could implement different AI policies, and it is the student's responsibility to conform to expectations for each course.

Mixed draft policy:

Certain assignments in this course will permit or even encourage the use of generative artificial intelligence (GAI) tools such as ChatGPT. The default is that such use is disallowed unless otherwise stated. Any such use must be appropriately acknowledged and cited. It is each student's responsibility to assess the validity and applicability of any GAI output that is submitted; you bear the final responsibility. Violations of this policy will be considered academic misconduct. We draw your attention to the fact that different classes at Harvard could implement different AI policies, and it is the student's responsibility to conform to expectations for each course.

•  University of Pennsylvania [19]

Models for Syllabus Language

Use of Generative AI is Prohibited:

Students may not use ChatGPT or any other generative AI tools for any assignment in this class. If we discover that you have used generative AI, we will follow the procedures for academic dishonesty as outlined in the Pennbook. If you are unsure whether something counts as use of generative AI, please ask before submitting your work.

Use of Generative AI is Permitted in Limited Ways:

Students may use generative AI tools, such as ChatGPT, for certain assignments in this course, but not for others. If AI is permitted, this will be clearly stated in the assignment guidelines. For assignments where AI is allowed, students must disclose how the tool was used and cite any AI-generated content. Misuse or failure to acknowledge the use of generative AI tools will be treated as academic dishonesty.

Use of Generative AI is Encouraged:

Students are encouraged to explore generative AI tools, such as ChatGPT, in the completion of assignments. When using these tools, students must critically evaluate AI-generated content, verify facts, and properly cite the AI tool. Transparency is essential. Please include a brief note describing how the tool was used.

AI as a Learning Partner:

Students may treat generative AI as a learning partner. This includes using it to brainstorm, organize, summarize, or critique ideas. Students should not submit unedited AI output. Final work should reflect the student's own voice and ideas. Any use of AI should be acknowledged in a short note at the end of the assignment.

<<END SIDEBAR #2>>