Total Seminars

AI Tools Every IT Professional Should Learn

IT has always required staying current. New vulnerabilities, new platforms, new certifications, and shifting vendor stacks arrive before you have finished absorbing the last wave. AI tools do not remove that pressure, but they change how quickly you can close gaps, draft documentation, generate scripts, and work through research. For IT professionals, these tools are not a novelty. They are a practical upgrade to the way the job gets done.

The first thing to understand is that an AI model is not a faster search engine. Google acts like a librarian. It finds documents and pages that already exist and points you toward them. An AI model works more like a student. It has read an enormous volume of text, learned the structure of language, and can now write a new response from scratch based on your specific question and context. When you paste an error message into ChatGPT along with your OS version, your configuration, and the exact log output, you are not searching for a solution that already exists. You are building one in real time.

That distinction matters for how IT professionals approach these tools. The ones getting the most useful output are not the most experienced or the most technically advanced. They are the ones writing the most specific prompts. Add your environment details. Include the constraints. Define what a good answer looks like before you ask for it. The more context you give, the sharper the response.
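As a sketch of what "specific" means in practice, here is one way to assemble a troubleshooting prompt from environment details, constraints, and the shape of a good answer. The structure is the point; the field names, the sample error, and the wording are illustrative assumptions, not a required format:

```python
# Illustrative helper for building a well-contextualized troubleshooting
# prompt. The three extra fields (environment, constraints, answer shape)
# are what separate a specific prompt from a vague one.
def build_prompt(error, environment, constraints, answer_shape):
    return (
        f"I am troubleshooting this error:\n{error}\n\n"
        f"Environment: {environment}\n"
        f"Constraints: {constraints}\n"
        f"A good answer would: {answer_shape}"
    )

# Hypothetical example values; substitute your own details.
prompt = build_prompt(
    error="DNS_PROBE_FINISHED_NXDOMAIN on all internal hostnames",
    environment="Windows Server 2022, AD-integrated DNS, two forwarders",
    constraints="cannot restart the DNS service during business hours",
    answer_shape="list diagnostic steps in order, least disruptive first",
)
print(prompt)
```

The same four pieces of context work whether you type the prompt by hand or generate it from a ticket.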

For scripting and code generation, ChatGPT and Claude are both strong starting points. You do not need to be a developer to use them effectively. A clear description of what the script should do, the platform it runs on, and any edge cases you want handled is often enough to get a working first draft. Treat that draft the way you would treat a junior colleague’s work. Review it, test it in a safe environment, and refine before you apply it anywhere that matters.
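To make the "first draft" idea concrete, here is the kind of script a model might hand back from a prompt like "warn me when a mount point is over 90% full." This is a hypothetical sketch, not actual generated output, and the threshold and paths are assumptions to review in a safe environment before the script touches anything that matters:

```python
# Hypothetical first draft of an AI-generated disk-space check.
# Review points before production use: the threshold, the list of
# mount points, and what should happen when one is flagged.
import shutil

def over_threshold(mount_points, threshold=0.90):
    """Return mount points whose disk usage exceeds the threshold."""
    flagged = {}
    for path in mount_points:
        usage = shutil.disk_usage(path)
        fraction_used = usage.used / usage.total
        if fraction_used > threshold:
            flagged[path] = round(fraction_used, 3)
    return flagged

if __name__ == "__main__":
    # "/" exists on any Unix-like host; adjust for Windows drive letters.
    print(over_threshold(["/"]))
```

Reviewing a draft like this means asking the same questions you would ask a junior colleague: what happens on an unmounted path, and who gets notified when something is flagged?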

Documentation and runbook drafting are two of the highest-return uses in day-to-day IT work. Incident summaries that used to take thirty minutes to write can be drafted in a few minutes with the right prompt. Runbooks that never got written because there was never time can now be started from a rough outline and refined from there. The AI does not know your specific environment, but it knows the structure and language of technical documentation, and that combination saves real time.
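One minimal sketch of "the right prompt" for an incident summary, assuming a simple section layout. The section names and word limit are illustrative; swap in whatever your team's incident template actually uses:

```python
# Hypothetical incident-summary prompt builder. The sections mirror a
# common incident-report layout; adapt them to your own format.
def incident_summary_prompt(timeline, impact, resolution):
    return (
        "Draft a concise incident summary with the sections "
        "Timeline, Impact, Root Cause (if known), and Resolution.\n\n"
        f"Timeline notes: {timeline}\n"
        f"Impact notes: {impact}\n"
        f"Resolution notes: {resolution}\n"
        "Keep it under 200 words and flag anything that needs verification."
    )

# Example with made-up incident details.
print(incident_summary_prompt(
    timeline="09:14 alerts fired; 09:30 failover; 10:02 resolved",
    impact="checkout API returned 503s for roughly 45 minutes",
    resolution="rolled back the 09:00 config push",
))
```

Because the model supplies structure and phrasing while you supply the facts, the draft arrives in minutes and your review time goes into accuracy, not wording.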

For troubleshooting research, Perplexity is worth adding to your toolkit. It is built for research that requires cited, real-time sources, which makes it useful for CVE lookups, vendor documentation, and incident analysis where you need traceability. For general troubleshooting conversations, ChatGPT and Claude handle ambiguity better and can walk through diagnostic logic step by step.

For certification study, Google’s NotebookLM is one of the most useful tools available right now. Upload your study guides, textbook chapters, or practice exam notes and it generates Q&A sessions, summaries, and even an audio podcast-style overview of the material. For anyone preparing for CompTIA, Cisco, or any vendor certification, it dramatically shortens the time between reading dense material and actually retaining it.

One of the more powerful moves an IT professional can make is building a custom AI assistant for a task you repeat regularly. In ChatGPT these are called custom GPTs. In Claude they are called projects. In Microsoft Copilot they are called agents. The concept is the same in each case: you define a role, set specific instructions, and give the tool the context it needs to produce consistent output without you re-explaining every time. Incident summary templates, CVE research prompts, and runbook starters are all good candidates. Once built, you stop starting from scratch.
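The "define a role once, reuse it" idea can be sketched in plain code. The instruction text below is an assumption, not a template from any vendor; the same text could be pasted into a custom GPT, a Claude project, or a Copilot agent:

```python
# Hypothetical reusable instruction block for a CVE-research assistant.
# Defining the role once means each request only needs the new details,
# not a re-explanation of the job.
ASSISTANT_INSTRUCTIONS = (
    "You are a CVE research assistant for an IT team. For each CVE, "
    "report affected versions, severity, known exploitation status, and "
    "available mitigations. State clearly when a detail is uncertain "
    "and should be verified against the vendor advisory."
)

def request_for(cve_id, environment):
    """Combine the fixed role with the per-request context."""
    return (
        f"{ASSISTANT_INSTRUCTIONS}\n\n"
        f"CVE: {cve_id}\nOur environment: {environment}"
    )

print(request_for("CVE-2024-0000", "Ubuntu 22.04 fleet, nginx 1.24"))
```

Whether the role lives in a saved assistant or a snippet like this, the payoff is the same: consistent output without retyping the setup.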

The two skills that make the biggest difference are prompting and output judgment. Prompting is the practice of writing clear, specific, well-contextualized instructions. Output judgment is your ability to read a response and quickly assess what is accurate, what is plausible but wrong, and what needs verification before you act on it. AI models can generate confident, fluent responses that are technically incorrect. They are not lying. They are predicting the most likely answer based on training data, and that process produces errors. Your job is to catch them before they reach production.

The IT professionals who will get the most from these tools are the ones who treat AI output the way they treat any untested script: review it, run it in a lab environment, and verify before it touches anything real. The bar to start is low. The skill ceiling is high. And the compounding advantage of building these habits now is significant.

Talk to you next week!

Check out an episode of our series, Getting to Know AI, on sale now for $19.99!
