University of Waterloo AI Professional Development
March 2026  ·  70 Respondents
Pre-Session Survey Analysis  ·  Your Colleagues Speak

Your Group. Right Now.

Here's what your 70 colleagues told us before they walked in today — in their own words.

57% — Use AI often or daily
89% — Unclear on what's allowed
67% — Worried about hallucinations
69% — Want real UW use-cases

01  /  Familiarity
Where the Group Stands
57% — Use AI often or daily
30 use it "often," 5 call themselves addicted. This is not a beginner room — it's a room that wants to go further.

4% — Have never used AI
Only 3 respondents. But ~33% are infrequent users who haven't made it a routine yet.

80% — Use Microsoft Copilot
The University's sanctioned tool. But 64% also use ChatGPT — almost certainly without formal guidance.

44% — Editing & summarising
Most treat AI as a polish layer, not a full creative or analytical partner. There's headroom to grow.

🔍 The shadow tool problem

45 people — nearly two-thirds of the room — use ChatGPT alongside Copilot. This is almost certainly happening without policy guidance. It's not a behaviour to shame; it's a signal that people are hungry to do more than Copilot lets them. Worth addressing directly today.

02  /  Tools in use
What They're Working With
MS Copilot — 56
ChatGPT — 45
Google Gemini — 22
Grammarly / Canva / etc. — 19
Claude — 8
03  /  Jobs to be done
What They Want AI to Handle

In their own words — the tasks they most want to hand off at Waterloo:

📧 Emails & comms

Follow-up templates, drafting announcements, summarising inboxes, automating routine responses. The #1 repeated theme.

📄 Documents & contracts

Reviewing counterparty contracts, summarising grant requirements, extracting key clauses, checking against playbooks.

📊 Data & spreadsheets

Cleaning datasets, automating macros, generating formulas, converting formats, preparing summaries and visualisations.

📅 Admin & scheduling

Turning meeting notes into summaries, tracking tasks, building agendas, managing tickets. High-volume, low-complexity drains on time.

04  /  Concerns
What's Holding the Group Back

Two fears tied for #1 — at 67% each

Hallucinations / incorrect output and data privacy were both flagged by 47 out of 70 people. These are rational, well-founded concerns — and they deserve honest, direct answers today, not reassurance.

67% — Hallucinations: AI confidently getting things wrong
67% — Data privacy: where does my input actually go?
60% — Confidential info: what can I safely enter?
44% — Overreliance: losing skills, losing judgment
43% — Bias: AI reflecting or amplifying prejudice
41% — Unclear on rules: no policy, no permission, no direction
36% — Job impact: what this means for their role long-term
7% — No concerns: only 5 people, a confident minority

05  /  The critical gap
How Clear Are They on What's Allowed?
Very unclear — 16%
Somewhat unclear — 21%
Somewhat clear — 51%
Very clear — 11%

89% of the room lacks full confidence

This is the single clearest, most actionable finding in the data. People who want to use AI well are being held back not by ability — but by the absence of clear institutional guidance. Governance isn't a dry compliance exercise today. It's the unlock for everything else.

06  /  Their priorities
What They Want From Today
Real university-specific use-cases — 48 / 70
Practical prompting techniques — 35 / 70
Automation & agents — 35 / 70
Hands-on exercises — 35 / 70
Data governance & risk clarity — 34 / 70
Tool comparison (Copilot vs others) — 31 / 70
Strategic implications of AI — 30 / 70
Building AI-enabled teams — 11 / 70
07  /  Recommendations
How to Make Today Count
01 · Open with governance — lead with it

Don't save the policy conversation for the end. 89% need clarity on what's allowed before they can use anything confidently. A clear, practical data classification framework is the unlock for the rest of the day.

02 · Ground every example in this building

The #1 ask (48 people) is Waterloo-specific use-cases. Every demo should come from work they actually do: grant follow-ups, contract review, meeting notes, spreadsheet automation.

03 · Make prompting a hands-on workout

Prompting techniques and hands-on exercises each drew 35 votes. Teach by doing — build practice sessions around real tasks so people leave with skills they've used, not just heard about.

04 · Be honest about hallucinations

Don't minimise the top concern. Show what human-in-the-loop verification looks like in practice. Practical verification skills build far more trust than reassurance.

05 · Hold space for real skepticism

Several respondents are thoughtfully reluctant — citing job displacement, environmental cost, or roles where privacy genuinely limits AI use. Informed literacy, not enthusiasm, is the goal.

06 · Include a short automation preview

35 people flagged automation & agents. The free-text responses are full of email and process automation requests. Even 30 minutes on what's possible — with one worked example — will land strongly.

This analysis — raw survey data → structured insights → interactive report

Built in Minutes.
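For the curious: every percentage in this report is just a raw count divided by the 70 respondents, rounded to a whole number. A purely illustrative sketch (not the actual analysis pipeline; labels and names are hypothetical) of that tally step:

```python
# Illustrative only: how the rounded percentages in this report
# follow from raw counts (n = 70). Labels are hypothetical names
# for figures that appear in the report above.
N = 70  # respondents

def pct(count: int, n: int = N) -> int:
    """Convert a raw count to the whole-number percentage shown in the report."""
    return round(count / n * 100)

counts = {
    "Use MS Copilot": 56,                # 80%
    "Also use ChatGPT": 45,              # 64%
    "Worried about hallucinations": 47,  # 67%
    "Want real UW use-cases": 48,        # 69%
}

for label, c in counts.items():
    print(f"{label}: {c} / {N} = {pct(c)}%")
```

Running the sketch reproduces the headline figures above (56/70 → 80%, 47/70 → 67%, 48/70 → 69%).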

University of Waterloo  ·  AI PD Day  ·  n = 70  ·  March 2026