AI for Work
You are likely excited, if apprehensive, about the opportunities that Generative and Assistive AI technologies present for our institution. We, in ITS, have been thinking a lot about this, and on this page we will discuss how to safely use these new tools.
Generative AI and Assistive AI Concepts
Generative AI and Assistive AI represent two distinct concepts within the field of artificial intelligence, each with its own focus and applications.
Generative AI (also known as Gen AI) creates new content (text or media) by analyzing patterns in the data it has been trained on, then mimicking those patterns in response to a request to create something new. Examples include creating an image based on a text description or composing music based on existing examples. The ability to create new content presents exciting opportunities across many disciplines; however, it also introduces concerns about potential misuse, as well as regulatory and ethical considerations.
Assistive AI aims to increase productivity by automating tasks or processes that would normally require manual effort to accomplish. Examples include automated closed captioning of videos, support chat bots, predictive typing, or virtual assistants such as Siri or Alexa.
While both use complex logic to solve problems, the key difference is that Generative AI creates new content, while Assistive AI automates processes for productivity gains.
Is AI “New Technology”?
Not exactly. AI tools, including Generative and Assistive AI, have been around for decades. What’s new, and why AI is getting so much attention now, is that companies have figured out how to add a natural, human-like language layer (called natural language processing), which allows us to interact with large, complex data sets as if we were speaking to a friend. This means you no longer need to be a data scientist or have specialized knowledge to access this data. Anyone with a computer can now participate.
Is AI “Disruptive Technology”?
AI—like the Internet, mobile phones, and SaaS (software as a service) before it—will impact the way we work for generations to come. It’s called “disruptive technology,” but that doesn’t mean it’s a bad thing. There are, though, a lot of unanswered questions as we navigate this new space, and we need to be thoughtful and intentional in our approach. Here in ITS, we’ve dealt with many disruptive technologies over the years, and we have well-established policies and practices for analyzing, evaluating, and operationalizing these new offerings.
How is ITS using AI?
ITS uses assistive AI technology to help keep our systems and data secure and reliable, allowing us to respond faster to issues and support an increasingly complex and technology-rich institution. As good stewards of resources, ITS uses AI in alignment with Middlebury’s mission, always guided by ethical and responsible practices.
How is AI used in our Enterprise systems?
Large companies like Microsoft, Google, Adobe, and others, are already introducing AI functionality into their software. In some cases, these features are implemented without an option to shut them off. However, where we have more control over that functionality, ITS uses a formal evaluation process to determine if that software is safe, secure, accessible, and sustainable for long-term use. Our goal is to help educate the community to ensure everyone engages with AI tools in a safe and responsible way.
When an evaluation of an AI tool is conducted, we focus on the following areas:
- Is it safe? (Are proper data protections in place? Will our data remain private?)
- Is there a clear method for our community members to get support?
- Is access provided in an equitable way?
- Is the current cost model in alignment with our financial sustainability efforts?
- Is the functionality in alignment with our academic and administrative policies?
Which AI tools are available to the community?
Microsoft Copilot
Copilot (formerly called Bing Chat) is an AI chatbot powered by OpenAI’s GPT models, the same technology behind ChatGPT. It accepts chat-based input from a user and uses AI to respond to those instructions, frequently creating new content. Copilot can be accessed by visiting https://www.microsoft365.com/chat and signing in with your Middlebury credentials.
Google Gemini
Similar to Microsoft Copilot, Google Gemini is an AI chatbot that can be used to brainstorm ideas, create text and images, and analyze data. Gemini can be accessed by visiting https://gemini.google.com/ and signing in with your Middlebury credentials.
Adobe Firefly
A collection of AI tools, primarily in Adobe Photoshop, that allow for generative AI image manipulation. Firefly tools are automatically included, where available, in Adobe products. To access Firefly directly, you may visit https://firefly.adobe.com and sign in with your Middlebury credentials.
What opportunities exist for the workplace?
Search the internet for AI and you will find thousands of articles, videos, and examples of amazing work being done with these new tools. Some are inspiring, and some raise questions around the ethics of certain activities.
For our administrative offices, we’re focused on three main areas where we see opportunities for leveraging AI tools to improve organizational nimbleness and productivity.
I. Making existing systems easier to use.
Reviewing ways to introduce AI tools into our existing platforms to make using that platform, or the data within it, easier.
II. Using AI as a Creative Partner
Collaborating with an AI tool as a thought partner to brainstorm, review, or create first draft material where appropriate.
As a rule, AI-generated content should never be used “as-is” but rather within the H-A-H model (Human-AI-Human):
- A human guides and instructs the AI tool in a thoughtful, ethical, and safe manner.
- The AI agent generates its output.
- A human reviews, corrects, and approves the output.
III. Realizing Efficiency Gains
With an eye towards efficiency, examining existing processes that are ripe for automation or where we engage in repeated tasks with minor differences (we call these serial processes).
How do I use AI safely?
Community members who use AI technologies are expected to adhere to Middlebury’s existing Responsible Use and Information Security policies: engage in safe, ethical, and law-abiding behavior, conserve shared resources, and treat others with respect. ITS will continue to support the Middlebury community in adhering to these policies by communicating specifically what they mean for AI technologies.
Some basic guidelines to follow are:
- Always consider data security, privacy, and the ethics of the work. If you wouldn’t want that information published on a public website, then do not enter that data into an AI tool.
- Never enter your account credentials into an AI chatbot. This includes usernames, passwords, and special access tokens or keys.
- Never use AI on “auto-pilot”. AI features are not a “set it and forget it” toolset. Human judgment and guidance are needed to properly instruct an AI agent and to review its output.
How do I get support for these tools?
For assistance with these AI tools, please contact the ITS Helpdesk:
- Email: helpdesk@middlebury.edu
- Phone: (802) 443-2200
You can also review our available Knowledge Base articles.
We will be updating and expanding this resource with new information as we all explore and grow in our understanding of what AI means for Middlebury. ITS is committed to supporting you. If you have any questions, please contact us to get a conversation going.