AI-supported Research Skills
For college students, generative AI offers opportunities for creativity, personalized feedback, and efficient drafting. Generative AI is appropriate for:
- Brainstorming ideas
- Clarifying confusing concepts
- Practicing explanations
- Getting feedback on writing structure
- Generating questions to guide research
But unlike Google Scholar or library-subscribed database content, generative AI introduces serious academic risks that can weaken the credibility of your work.
How AI Differs From Google Scholar and Library Databases
| Feature | Generative AI | Google Scholar / Library Databases |
|---|---|---|
| Accuracy | May produce errors or fabricated facts | Sources are real and verifiable |
| Citations | Often invents citations | Provides authentic, traceable citations |
| Bias | Reflects training-data bias | Scholarly sources undergo peer review |
| Academic integrity | Use may be restricted | Always permitted |
| Transparency | Cannot show original sources | Shows full articles, authors, and publication data |
Here are issues that a college student should be aware of when using a Generative AI tool for research:
1. AI is not an accountable source
- AI cannot take responsibility for accuracy. Academic work requires human authors who stand behind their claims. (For this reason, citation styles cite AI content as software, not as a source.)
- Responses may change every time you ask. This makes AI output impossible to verify or trace back to a fixed source. Academic research must be traceable, reproducible, and credible.
2. Generative AI does not fact-check
- It predicts what text should come next based on training data.
- It may present false information confidently.
- It cannot distinguish between verified research and inaccurate patterns.
- It may oversimplify complex academic topics.
- Library databases and Google Scholar, by contrast, link to real, peer‑reviewed sources.
Bottom line: AI output must always be verified with real scholarly sources.
3. AI frequently invents citations
One of the most serious risks:
- AI tools often fabricate journal articles, authors, DOIs, and page numbers.
- These fake citations look real but do not exist anywhere.
- Academic research must be traceable, reproducible, and credible. AI fails these basic requirements.
Bottom line: Library databases and Google Scholar never do this — they link to actual publications.
4. AI reproduces biases in its training data
Generative AI reflects the biases of the internet and other sources on which it was trained. This can lead to:
- Stereotypical or biased explanations
- Limited perspectives
- Misrepresentation of marginalized groups
Bottom line: Sources in scholarly databases undergo peer review and editorial oversight, which reduces bias.
5. AI cannot replace critical thinking or academic judgment
AI can summarize or rephrase, but it cannot:
- Evaluate the quality of evidence.
- Compare scholarly arguments.
- Build a nuanced thesis.
- Understand disciplinary standards.
Bottom line: Students must still do the intellectual work of research and analysis.
6. Unauthorized AI use can be academic misconduct
Most colleges require students to follow instructor‑specific rules about AI use. Using AI without permission can result in:
- Academic integrity violations
- Grade penalties
- Formal misconduct reports
Bottom line: Library research is always permitted; AI use depends on your instructor's policies.
7. AI may plagiarize without warning
- Generative AI can unintentionally produce text that closely resembles copyrighted or proprietary material. If a student submits this content, the student, not the AI, is responsible for the plagiarism.
8. Privacy and data security risks
Students should never upload:
- Unpublished work
- Class materials
- Personal information
- Research data
- Confidential documents
Bottom line: AI tools may store or use input data in ways students cannot control.
9. Overreliance weakens research skills
AI can make research feel easier, but it can also:
- Reduce information‑literacy skills.
- Limit students’ ability to evaluate sources.
- Undermine long‑term academic development.
Bottom line: Library research builds essential skills that AI cannot replace.
10. AI-generated works create uncertainty about authorship and ownership
- Canadian law requires original expression created through human “skill and judgment” for copyright protection.
- If a human contributes meaningful creative input (e.g., selecting prompts, editing output), they may be considered the author, but this is not yet settled in law.
Related reading: "Hallucinated citations are polluting the scientific literature. What can be done?" Tens of thousands of publications from 2025 might include invalid references generated by AI, a Nature analysis suggests. (Source: Nature)
Learn more: Consultation on Copyright in the Age of Generative Artificial Intelligence: What we heard report.