Exploring The AI Frontier Isn’t Easy, But Common Sense Offers a North Star
By Jason Cahoon
Hi,
I’m Jason, a creative writing graduate student at the University of Idaho and a content writer for AI Trail Blazer. When I started learning about AI tools, they felt utterly mystifying to me. But soon, with some research and practice, I started to recognize AI tools as just that: tools. From there, I was able to take a step back, draw from my approaches using other tools, and begin to map common sense onto the AI landscape.
So, in the spirit of orientation, let’s consider how our common sense can direct us toward safe and constructive exploration with these tools.

For starters, when we use any tool, whether it be a jackhammer or a spreadsheet, we don’t start with the tool itself, but rather with an assessment of the job at hand. There are questions to consider: Where are the potential hazards for this job? What measures must I take to avoid them? What specific tasks does this job comprise? For which tasks might this tool be effective?
Similarly, addressing these questions is a crucial first step to using any AI tool, so let’s dedicate some time to them, starting with our first and most important point of assessment: security and privacy.
We don’t need to know everything about an AI tool to begin anticipating its security and privacy hazards. When we use these tools, identifying the entities at play is a great starting point, especially for commercially available tools. Here’s why…
Behind most AI tools is a commercial enterprise hosting them. Just like the companies that host search engines, AI companies are positioned to benefit from harnessing our data. They can leverage this information to improve their products: companies that host predictive AI tools can use our inputs for machine learning, the process by which AI models are trained on immense volumes of data to improve their performance. Technology companies can also profit from selling our data to other corporations, and from there, there’s no telling where it can go.
Collectively speaking, our data is gold to these companies, and with limited transparency into how our data is processed and circulated, it is crucial that we evaluate what we share with these tools, and when warranted, protect that information as if it were gold.
Privacy and security remain highly technical in the AI sector, from both a technological and a legal standpoint.

These technicalities introduce thorny territory, and while identifying the stakeholders provides some bearings for secure usage, the risks aren’t always obvious. When getting started with AI, it’s safest to assume that we relinquish ownership of our data whenever we exchange it through these tools, especially those hosted by commercial entities.
Avoiding costly hazards should always be our top priority when exploring AI, and with a foundational understanding of these tools, we can also direct our exploration toward territory that is not only safe, but also fruitful. So, to orient ourselves toward constructive use cases, let’s take a closer look at the tools themselves…
Just like any other tool, AI tools work most effectively when their purpose and design align with the task at hand. Not all AI tools are created equal. Some AI tools are designed to tackle a multitude of tasks, while others are designed for a more specific purpose such as title recommendation or object recognition.
We do, however, have Swiss Army knives available in our AI toolbox. These are the general-use AI chatbots like ChatGPT, Claude, or Gemini. AI chatbots can help with a vast assortment of tasks. They can generate information, including text, images, and plausible statistical data; they can also make inferences about the data we feed them, allowing them to revise, summarize, or reformat our inputs. The breadth of their capabilities is remarkable, making them great starting points to discover the potential for AI more broadly.
Though AI chatbots are widely capable, there are times where alternative tools, including our human minds, are better suited to the task at hand. In addition to privacy and security hazards, there are points of complication that we navigate when we employ these chatbots, complications which alternative tools might avoid altogether. Let’s touch upon some of these points…
Accuracy: AI chatbots don’t produce accurate outputs 100% of the time. Chatbots have been observed to “hallucinate,” meaning they produce information that is incorrect, nonsensical, or fabricated while presenting it as factual. Moreover, the training data that informs a model’s calculations may not reflect recent updates in information, creating more opportunities for inaccurate results.
Flexibility: AI chatbots have limited flexibility to work in different data environments and on different jobs, especially complex jobs that involve multiple steps.
Reproducibility: By default, these chatbots leave some space for variability in the calculations under the hood. In effect, if you give a chatbot the same prompt more than once, its answer is likely to be a little different each time.

These points of complication aren’t always dead ends, however. With practice and research, one can turn to the chatbots themselves to navigate them. For example, one can “turn the dials” of an AI chatbot to reduce the variability of its calculations, making its outputs far more consistent from one run to the next. Chatbots can also be used to check the accuracy of an output, helping identify and remove hallucinated data.
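The names of the dials vary from tool to tool, but the most common one is “temperature,” a setting that scales a model’s next-word scores before it picks one. Here is a minimal, self-contained sketch of the idea; the word list, scores, and the `sample_with_temperature` helper are all made up for illustration, and real chatbots expose temperature as an account or API setting rather than a function like this:

```python
import math
import random

def sample_with_temperature(scores, temperature, rng):
    """Pick an index from raw model scores after temperature scaling.

    Lower temperature sharpens the distribution; at zero, sampling
    collapses to always choosing the highest-scoring option.
    """
    if temperature <= 0:
        # "Dial turned all the way down": greedy, fully deterministic choice.
        return max(range(len(scores)), key=lambda i: scores[i])
    scaled = [s / temperature for s in scores]
    top = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - top) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw one index according to the probabilities.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Hypothetical next-word scores for the prompt "The sky is ..."
words = ["blue", "clear", "falling"]
scores = [2.0, 1.5, 0.1]
rng = random.Random()

# At temperature 0, the same prompt yields the same word every time:
deterministic = {words[sample_with_temperature(scores, 0.0, rng)] for _ in range(5)}
print(deterministic)  # {'blue'}

# At a higher temperature, repeated runs can yield different words:
varied = {words[sample_with_temperature(scores, 1.5, rng)] for _ in range(50)}
print(varied)
```

With the temperature above zero, each run samples from the weighted distribution, which is exactly the “space for variability” described above; turning it to zero removes that space.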
But we don’t need to get technical with these tools to begin discovering what works. There are plenty of low-stakes tasks that we can turn to, ones that don’t involve sensitive information and offer wiggle room for accuracy, flexibility, and reproducibility. If common sense is your North Star, let your curiosity be your fuel. At the beginning, exploring AI should be fun and exciting. Do you want to learn a new recipe? Do you need help writing a song? a poem? a skit? AI can help!
While AI tools are complex and powerful, we can rely on our common sense, some foundational knowledge, and our commitment to safety to launch our exploration of these tools. And some of you might have already started putting these tools to work. The AI4RA team wants to know more about how others in our community of practice are using AI. What tasks have you found AI tools reliable for? Do you have any “getting started” tips for others in our AI4RA community? Let us know in the comments below!
