Here's what your 70 colleagues told us before they walked in today — in their own words.
Use AI
often or daily
Unclear on
what's allowed
Worried about
hallucinations
Want real
UW use-cases
30 use it "often," 5 call themselves addicted. This is not a beginner room — it's a room that wants to go further.
Only 3 respondents have never used AI. But ~33% are infrequent users who haven't made it a routine yet.
The University's sanctioned tool. But 64% also use ChatGPT — almost certainly without formal guidance.
Most treat AI as a polish layer, not a full creative or analytical partner. There's headroom to grow.
45 people — nearly two-thirds of the room — use ChatGPT alongside Copilot. This is almost certainly happening without policy guidance. It's not a behaviour to shame; it's a signal that people are hungry to do more than Copilot lets them. Worth addressing directly today.
In their own words — the tasks they most want to hand off at Waterloo:
Follow-up templates, drafting announcements, summarising inboxes, automating routine responses. The #1 repeated theme.
Reviewing counterparty contracts, summarising grant requirements, extracting key clauses, checking against playbooks.
Cleaning datasets, automating macros, generating formulas, converting formats, preparing summaries and visualisations.
Turning meeting notes into summaries, tracking tasks, building agendas, managing tickets. High-volume, low-complexity drains on time.
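The spreadsheet requests above are the kind of task where a short live demo pays off. A minimal sketch of the cleaning-and-summarising step, assuming a pandas environment; the column names and values are entirely hypothetical:

```python
import pandas as pd

# Hypothetical raw export of the kind respondents want help cleaning
raw = pd.DataFrame({
    "Dept ": ["Math", "math", "ENG", "Eng"],
    "Amount": ["1,200", "950", "2,000", "ERR"],
})

# Normalise headers and department names
raw.columns = raw.columns.str.strip().str.lower()
raw["dept"] = raw["dept"].str.strip().str.title()

# Coerce amounts to numbers, dropping unparseable rows
raw["amount"] = pd.to_numeric(raw["amount"].str.replace(",", ""), errors="coerce")
clean = raw.dropna(subset=["amount"])

# Summarise spend per department
summary = clean.groupby("dept")["amount"].sum()
print(summary)
```

In a session, the interesting move is asking an AI assistant to draft this kind of script from a plain-English description, then verifying it line by line against the real data.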
Hallucinations / incorrect output and data privacy were each flagged by 47 of 70 people (67%). These are rational, well-founded concerns, and they deserve honest, direct answers today, not reassurance.
AI confidently getting things wrong
Where does my input actually go?
What can I safely enter?
Losing skills, losing judgment
AI reflecting or amplifying prejudice
No policy, no permission, no direction
What this means for their role long-term
Only 5 people — a confident minority
This is the single clearest, most actionable finding in the data. People who want to use AI well are being held back not by ability but by the absence of clear institutional guidance. Governance isn't a dry compliance exercise today. It's the unlock for everything else.
Don't save the policy conversation for the end. 89% need clarity on what's allowed before they can use anything confidently. A clear, practical data classification framework is the unlock for the rest of the day.
The #1 ask (48 people) is Waterloo-specific use-cases. Every demo should come from work they actually do: grant follow-ups, contract review, meeting notes, spreadsheet automation.
35 people want both prompting techniques AND hands-on exercises. Teach by doing — build practice sessions around real tasks so people leave with skills they used, not just heard about.
Don't minimise the top concern. Show what human-in-the-loop verification looks like in practice. Practical verification skills build far more trust than reassurance.
Several respondents are thoughtfully reluctant — citing job displacement, environmental cost, or roles where privacy genuinely limits AI use. Informed literacy, not enthusiasm, is the goal.
35 people flagged automation & agents. The free-text responses are full of email and process automation requests. Even 30 minutes on what's possible — with one worked example — will land strongly.
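If that segment needs a worked example, a plain-Python mail merge is one candidate starting point. Every name, grant, and date below is hypothetical, and in a real demo the AI contribution would be drafting and refining the template wording itself:

```python
import csv
import io
from string import Template

# Hypothetical contact export; a real session would load a CSV file instead
CONTACTS = io.StringIO(
    "name,grant,due\n"
    "Dana,NSERC Discovery,March 3\n"
    "Lee,SSHRC Insight,April 10\n"
)

# A follow-up template of the kind respondents asked for
FOLLOW_UP = Template(
    "Hi $name,\n\nJust a reminder that the $grant report is due $due.\n\nThanks!"
)

def draft_follow_ups(source):
    """Produce one drafted follow-up message per contact row."""
    return [FOLLOW_UP.substitute(row) for row in csv.DictReader(source)]

drafts = draft_follow_ups(CONTACTS)
print(drafts[0])
```

The deterministic merge stays under human control; the drafts still get read before anything is sent, which keeps the example aligned with the human-in-the-loop message elsewhere in the day.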