What is AI?
Artificial Intelligence refers to technology that can perform tasks such as learning, problem-solving, decision-making, and language understanding.
Generative AI is a type of AI that can generate original content such as text, images, music, code, and more.
This interactive site turns the workshop into a self-guided learning experience. It focuses on what AI is, how to choose the right tool, how to prompt effectively, how to work safely with University data, and how teams move toward confident, responsible AI use.
Understand the basics before chasing tools. The workshop frames AI as something to use critically, practically, and safely.
At a high level, generative AI systems learn patterns from huge amounts of data, are tuned to align to human preferences, and then generate outputs by predicting what comes next based on the prompt and context.
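The "predicting what comes next" idea can be illustrated with a toy sketch. This is not how production models work (they use neural networks over vast corpora, not word counts); the tiny corpus below is purely illustrative:

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: count which word follows
# which in a small corpus, then pick the most frequent follower.
corpus = "the model predicts the next word and the next word follows the prompt".split()

follow_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    follow_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    candidates = follow_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "next" is the most common follower of "the"
```

Real models do the same kind of "what comes next" estimation, but over tokens rather than words and conditioned on the entire prompt and context.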
The materials emphasize that model capabilities are rising, costs have fallen sharply, and agentic systems are improving fast. The practical takeaway: experimentation is increasingly worthwhile, but oversight still matters.
Examples from the session include knowledgebase assistants, deep research, agents that act on your behalf, legal research automation, adaptive learning platforms, student support agents, tutors, and course design assistants.
Three broad ways to use generative AI: create net-new content, augment content you already have, and collaborate with it to think through decisions, tradeoffs, and options.
The content repeatedly comes back to one idea: use-case discovery is hard work, and sharing real examples across colleagues is the fastest way to turn experimentation into practical gains.
Not every AI tool is the same. Start with the job to be done, the level of reasoning needed, whether action is required, and the sensitivity of the data.
Everyday chat assistants: great for drafting, summarizing, quick questions, image generation, and everyday productivity tasks.
Reasoning models: better for harder analysis, tradeoffs, structured planning, and deeper research tasks.
Agents: useful when the system needs to search, retrieve, transform, and act across multiple systems.
Self-hosted and open-source tools: higher complexity, but helpful when infrastructure control, customization, or local hosting matters.
Free SaaS is easy to start with but limited. Paid SaaS usually offers stronger features or capacity. Open source offers control but requires more technical overhead.
Microsoft Copilot, OpenAI ChatGPT, Google Gemini, and Anthropic Claude are covered alongside Ollama as an open-source path, illustrating how the AI interface layer differs from the underlying models.
Answer three questions and get a recommendation aligned to the workshop's logic.
The workshop emphasizes clarity, context, examples, and iteration over magic phrasing. Use AI to strengthen your work, not replace your judgement.
Replace vague asks with defined audience, length, tone, and outcome. Specific prompts reduce drift and improve usefulness.
Zero-shot is fastest, one-shot adds guidance, and few-shot prompting helps when you need a pattern, structure, or style to be followed closely.
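The difference between these styles comes down to how many worked examples go into the prompt. A minimal sketch, where the task text and example pairs are placeholders:

```python
def build_prompt(task, examples=()):
    """Assemble a prompt: zero-shot with no examples, few-shot otherwise."""
    parts = []
    for example_input, example_output in examples:
        parts.append(f"Input: {example_input}\nOutput: {example_output}")
    parts.append(f"Input: {task}\nOutput:")
    return "\n\n".join(parts)

# Zero-shot: just the task, nothing else.
zero = build_prompt("Summarize this policy in one sentence.")

# Few-shot: prior examples establish the pattern and style to follow.
few = build_prompt(
    "Summarize this policy in one sentence.",
    examples=[
        ("Long travel policy text...", "Staff must book travel through the approved portal."),
        ("Long expense policy text...", "Receipts are required for all reimbursements."),
    ],
)
print(few)
```

One-shot is simply the middle case: a single example pair before the task.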
Tell the model what role to assume and what situation it is operating in. More relevant context usually produces better results.
Ask the model to reason about the broader problem first, then feed that thinking into the specific task you want completed.
You can ask the model to generate prompt options, compare strengths and drawbacks, and create reusable templates with variables.
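A reusable template with variables can be as simple as a format string. The field names below (role, audience, tone, and so on) are illustrative choices, not a prescribed schema:

```python
from string import Template

# A reusable prompt template; each placeholder is filled in per task.
PROMPT_TEMPLATE = Template(
    "You are a $role. Write a $length $format for $audience "
    "about $topic, in a $tone tone."
)

prompt = PROMPT_TEMPLATE.substitute(
    role="communications assistant",
    length="150-word",
    format="announcement",
    audience="new students",
    topic="library opening hours",
    tone="friendly",
)
print(prompt)
```

Templates like this make the "defined audience, length, tone, and outcome" advice repeatable across a team rather than reinvented for every request.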
One of the strongest lessons in the material is to draft first, then use AI to refine grammar, tone, structure, and concision.
Build a prompt using the techniques from the session, then copy it into your tool of choice.
A vague request produces a generic result. Adding details improves relevance. Starting with your own draft and asking for grammar and tone improvements often produces the strongest outcome because the AI has both your intent and your boundaries.
AI is still just another information system owned by a vendor. The safest way to work is to match the tool to the data classification and keep humans accountable.
Public: university publications, public websites, social media channels, the university calendar, published RFPs, and salary disclosure data where applicable.
Restricted: personal information, usernames, student or employee numbers, WatCard numbers, IP addresses, and combinations of data that can identify a person.
Highly Restricted: examples given include SIN, PHI, credit card data, and similar highly sensitive information.
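The "match the tool to the data classification" rule can be sketched as a simple lookup. The tool tiers and the mapping below are assumptions for illustration only, not university policy; real decisions should follow institutional guidance:

```python
# Illustrative only: which tool tiers might be acceptable for each data
# classification. The mapping is an assumption, not official policy.
ALLOWED_TOOLS = {
    "public": {"free_saas", "paid_saas", "self_hosted"},
    "restricted": {"paid_saas", "self_hosted"},  # assumes a vendor agreement
    "highly_restricted": set(),                  # keep out of general AI tools
}

def tool_permitted(classification, tool_tier):
    """Return True if the tool tier is acceptable for this classification."""
    return tool_tier in ALLOWED_TOOLS.get(classification, set())

print(tool_permitted("public", "free_saas"))             # True
print(tool_permitted("highly_restricted", "paid_saas"))  # False
```

Unknown classifications fall through to an empty set, so anything unrecognized is denied by default, which mirrors the workshop's cautious stance.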
Hallucinations, bias, data accuracy, explainability, copyright, accountability, value alignment, cost, energy, and sustainability are all listed as live concerns in the session materials.
Click each scenario, make a judgement, and then reveal the recommended handling.
You want AI to rewrite public event information for a broader audience.
You want a summary of internal notes that include identifiable staff information.
You want AI to extract totals from a folder of reimbursement receipts containing full card numbers.
You want AI to identify risks and improve clarity in a request for proposals that is already public.
You want AI to sort and classify a spreadsheet that includes student IDs and usernames.
You want AI to summarize medical details included in a student support record.
The workshop treats AI adoption as a team journey, not just an individual skill. Progress comes from making time, sharing examples, and building guardrails.
Focus: foundational training, personal experimentation, and early use-case discovery.
Roadblocks: lack of interest, lack of time, fear of mistakes.
Countermeasures: make it relevant, engage leadership and peers, empower champions, provide protected learning time, and create safe sandbox use-cases.
Repeated use leads to better prompts, clearer patterns, and growing awareness of how AI intersects with automation.
Roadblocks: productivity gains without quality gains, uncertain data access, poor tool fit.
Countermeasures: define what “done” looks like, use spot checks and peer review, create green-light zones, and maintain approved tool lists.
Usage becomes habitual and the team evaluates, shares, and improves openly.
Focus: responsible AI principles, team norms, data guidance, and continuous improvement.
Choose the option that feels most true right now.
The advanced material explains that a chatbot is a model plus an interface, while an agent adds tools, data access, and memory so it can complete more complex work.
Define clear objectives, constraints, and success criteria.
Limit permissions and require approval for high-impact actions.
Apply least-privilege access and monitoring controls.
Assign named business, technical, and risk owners.
Maintain meaningful human review and train users on limitations.
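The permission and approval guardrails above can be sketched as a simple action gate. The action names and the "high-impact" list here are hypothetical examples, not a real agent framework:

```python
# Illustrative sketch of least-privilege access plus human approval for
# high-impact agent actions. Action names are hypothetical.
GRANTED_PERMISSIONS = {"search_kb", "draft_email", "send_email"}
HIGH_IMPACT_ACTIONS = {"send_email", "delete_record"}

def authorize(action, human_approved=False):
    """Allow an action only if granted, with approval for high-impact ones."""
    if action not in GRANTED_PERMISSIONS:
        return "denied: not in granted permissions"
    if action in HIGH_IMPACT_ACTIONS and not human_approved:
        return "pending: human approval required"
    return "allowed"

print(authorize("search_kb"))                        # allowed
print(authorize("send_email"))                       # pending: human approval required
print(authorize("delete_record"))                    # denied: not in granted permissions
print(authorize("send_email", human_approved=True))  # allowed
```

The key design point mirrors the workshop's guidance: deny by default, and make high-impact actions wait for a human even when the permission itself has been granted.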
Use this to see what stuck. The feedback is designed to reinforce the workshop’s core messages.