From Copiers to AI: The Technology Evolution of Research Administration and Why Adoption Matters
By Jeanne Viviani-Ayers, MPA, CRA
A Profession Built on Reaction
Research administration has always been a profession defined by reaction. New regulations arrive and we react. Audit findings surface and we react. Sponsor requirements change—sometimes overnight—and once again, we react. Even organizations that appear calm and polished are often scrambling behind the scenes to keep up with shifting compliance landscapes.
With every wave of new technology, we hope it will finally relieve some of that pressure. We hope it will help us move from reactive to proactive. Today, that hope has a new name: artificial intelligence (AI). And yet, for many research administrators and institutional leaders, AI feels less like a source of relief and more like another moving target. I’ve been around long enough to know this feeling isn’t new.
From Paper to Platforms: A Familiar Pattern
Not so long ago, research administration was confined to what could be printed and mailed. Federal Registers arrived at our offices by post. Proposals were completed on carbon copy paper, printed at university print shops, and mailed in bulk to funding agencies (always with tracking, because something could and usually did go wrong).
Then, technology advanced and copy machines arrived. Xerox machines, specifically—because Xerox was the copy machine. Suddenly, research offices no longer needed appointments at the print shop. We could make copies in-house. Twelve copies of a proposal? No problem. Efficiency gold. Or so we thought.
Faculty assumed that if printing was easier, deadlines must be more flexible. And when the printer jammed ten minutes before the mail pickup, the time pressure didn’t disappear; it simply changed form. The stress moved from the print shop to research administration.
That same pattern repeated itself with computers, then email, and eventually the internet. Email made communication faster and revision easier, but it also created the illusion of endless time. Online submission systems removed mailing delays but introduced new expectations. As with the Xerox machine, these new technologies made changes quicker, which often translated into faculty working closer to the deadline.
While funding agencies modernized rapidly, institutional processes struggled to keep pace. Many of the improvements we experienced over the years were driven more by what agencies required than by how universities fundamentally redesigned their workflows. And through it all, the core challenge remained: we were still chasing deadlines instead of shaping strategy. Now, we find ourselves at another inflection point.
Enter AI: Promise, Pressure, and Reality
Fast forward to today. We operate in a world of enterprise systems, electronic research administration platforms, dashboards, and integrated workflows. And yet, despite decades of technological advancement, research administration remains deeply reactive. The volume and complexity of sponsor requirements are crushing. Artificial intelligence is being marketed as the next great solution. And to be fair, there is real opportunity here.
AI has the potential to reduce repetitive administrative work, accelerate document review, identify bottlenecks before they become crises, and allow research administrators to spend more time on strategic, high-value activities. In practice, many research administrators are already using AI, quietly and informally.
Ask around and most will admit to using it in small ways: rewriting emails they know they shouldn’t send, summarizing dense RFPs, or creating internal checklists to stay organized. These are reasonable entry points. They’re also revealing. AI is only as useful as the skill of the person using it.
The Skill Gap No One Talks About
Prompting matters. Context matters. Knowing when not to use AI matters just as much as knowing when to use it. Something as simple as telling an AI tool to rely only on source material and not to be creative can dramatically change the quality of the output. Despite these variables in AI performance, many users have never been taught even these basics.
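For readers who want to see what those basics look like in practice, here is a minimal sketch of a “source material only, don’t be creative” instruction. It assumes the OpenAI Python SDK, an illustrative model name, and a hypothetical nonsensitive file; the same wording can simply be typed into Copilot, Gemini, or ChatGPT.

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai),
# an illustrative model name, and a hypothetical nonsensitive RFP saved as
# sample_rfp.txt. The point is the instruction, not the particular tool.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

with open("sample_rfp.txt") as f:  # hypothetical, nonsensitive source document
    rfp_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    temperature=0,        # dial down "creativity" so answers stay close to the text
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using only the document provided. "
                "If the document does not contain the answer, say so. "
                "Do not invent requirements, dates, or dollar amounts."
            ),
        },
        {
            "role": "user",
            "content": f"Document:\n{rfp_text}\n\nQuestion: What are the submission deadlines?",
        },
    ],
)

print(response.choices[0].message.content)
```

Two choices do most of the work here: the system instruction tells the tool to stay inside the document, and the low temperature setting reins in the improvisation.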
Meanwhile, tools like Microsoft Copilot, Google Gemini, and ChatGPT are already embedded—officially or unofficially—into research administration workflows. The potential upside is enormous. The risk comes when adoption happens without guidance, training, or shared expectations. So, if the opportunity is so apparent, why is adoption so uneven?
Why Adoption Feels So Hard
Part of the answer is risk tolerance among leaders and decision makers in research administration. AI feels opaque. Security, privacy, and data governance concerns are legitimate. In response, some institutions have issued blanket prohibitions against AI use. While well intentioned, this approach ignores reality. AI use doesn’t disappear; it simply goes underground.
But the deeper issue isn’t the technology itself. It’s the lack of standardized work.
From a Lean Six Sigma perspective, inconsistent processes and undocumented workflows are far bigger barriers to responsible AI adoption than the tools themselves. AI amplifies whatever system it is placed into. If the underlying process is messy, the output will be, too. Layer cultural resistance on top of that and progress slows even further. We have seen this before. I remember a faculty member who resisted submitting electronic letters of recommendation for their students. Even some staff resisted word processing and spreadsheets that could be shared on computers. None of those tools failed. What failed—temporarily—was trust.
When AI Goes Wrong (and Makes More Work)
The reality today is that AI is already here. Some research administrators are experimenting thoughtfully. Others are avoiding it entirely. This uneven adoption creates skill gaps, workflow inefficiencies, and frustration on both sides.
Poorly reviewed AI output—what many now call “wordslop”—adds noise instead of clarity and can actually increase workload for those on the receiving end. I’ve seen this firsthand: a simple request for an SOP ballooning into an 80+ page document full of polished-sounding gobbledygook. I genuinely felt bad for the person who had to sift through it.
AI does not eliminate work. It changes the nature of work. And when adoption is uneven, misconceptions flourish.
Why Training Has to Change Too
One-hour webinars raise awareness, but they do not build confidence. Research administrators need opportunities to practice, experiment safely, and learn from one another—just as we always have.
Moving forward, successful AI adoption in research administration will require a more human approach. Clear policies are necessary, but they are not sufficient. Training must focus on practical application, not abstract theory. And perhaps most importantly, institutions must create spaces where research administrators can learn together.
Communities of Practice (CoPs) matter because they lower the stakes. They allow people to see peers—not vendors or leadership—using AI responsibly and effectively. No one likes being told what to do, but everyone watches what trusted colleagues are doing. Confidence grows through shared experience, not top-down mandates.
The Next Leap
Research administrators live in an “it depends” world. We did not enter this field fully trained. We learned through case studies, examples, and each other. AI should be no different.
Research administration has evolved from paper to platforms. Artificial intelligence is the next leap. Adoption isn’t optional; it’s inevitable. The real question is whether we will lead the change or lag behind it.
What can you do now?
You can’t learn to ride a bike by just watching others—you have to try it yourself. The same goes for understanding AI: hands-on experience is key. Many people have already used AI tools without realizing it (for instance, you’ve likely interacted with Google’s AI Overviews). Experimentation is often free, since most AI platforms offer free versions, so for many of us there are accessible pathways for getting started.
But it’s also important that we get started responsibly and mitigate the risks these tools present. Begin with a simple task, using nonsensitive information, the kind that would be safe to disclose to the world. You might strip names and sensitive details from an old spreadsheet on a closed grant and ask the AI questions about it. You could ask it to draft a budget justification or generate a report on how the funds were allocated. Another approach is to upload a notice of funding opportunity and pose questions; you could even experiment with prompts that ask for a checklist and see what responses you get.
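As one illustration of that first step, here is a minimal sketch of stripping identifying columns from a hypothetical closed-grant spreadsheet before any of it is shared with an AI tool. The file and column names are assumptions, and the same cleanup can be done by hand in Excel.

```python
# A minimal sketch, assuming pandas and a hypothetical spreadsheet named
# closed_grant_expenses.xlsx with identifying columns such as "PI Name".
# The goal is simply to remove anything identifying before sharing the data.
import pandas as pd

df = pd.read_excel("closed_grant_expenses.xlsx")  # hypothetical file name

# Drop columns that identify people; keep only what your questions need.
identifying_columns = ["PI Name", "Email", "Department Contact"]
df = df.drop(columns=[c for c in identifying_columns if c in df.columns])

df.to_csv("closed_grant_expenses_deidentified.csv", index=False)
print(df.head())  # look over the result yourself before uploading it anywhere
```

Reviewing the cleaned file yourself before uploading it anywhere is part of the exercise.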
There are a lot of ways to start experimenting, but the important thing is to do it the right way. Always prioritize safety, much like you would around water. AI isn’t perfect—review its output carefully. Just as you wouldn’t let an intern submit a document to leadership without oversight, you should always check AI-generated content before using it. Customize your interaction with AI by instructing it to respond in a style that suits you, such as adding encouragement, sarcasm, or humor. Engaging with AI doesn’t have to be dull! But it does have to be responsible.
