Eamonn O'Neill
Contributor

Are you ready for the era of AI-powered, self-service IT?

Opinion
Oct 1, 2025
6 mins
Artificial Intelligence, IT Skills, Staff Management

AI makes it easier than ever for employees to do tech-heavy work on their own — but companies still need guardrails to stay safe.


For years, IT self-service — the practice of allowing users to complete tasks for themselves, without having to ask staff for help — has been an important part of strategies for making IT operations more efficient and scalable, while also enhancing user productivity.

Now, thanks to AI technology, it has become easier than ever to implement self-service workflows. Using AI, employees without specialized technical skills can do technically complex things, like write and deploy applications.

Yet, AI-powered self-service workflows also introduce a host of new risks in areas like security and compliance. When employees use AI without understanding what they’re building, they may end up doing things that could harm the business.

Thus, a priority for businesses aiming to adopt AI is finding ways to balance AI-powered self-service capabilities with risk mitigation. This is something my company has been doing for some time. Here’s what we’ve learned along the way, and the solution we’ve settled on.

AI as the next frontier in self-service IT

Self-service IT has a long history. Starting decades ago, IT departments began deploying basic self-service solutions, like online portals that allowed employees to recover forgotten passwords without having to involve IT staff. Then came low-code/no-code platforms, which enabled non-technical users to build software applications with minimal help from professional programmers. In more recent years, platform engineering — a practice in which companies build ready-made IT solutions that employees can deploy on demand — has become another facet of the self-service movement.

Today, thanks to generative and agentic AI, businesses have access to a whole host of new opportunities for implementing self-service solutions. With AI, users who know nothing about programming can build and deploy software. They can also create AI agents to automate workflows across virtually any area of the business — finance, HR, engineering and so on.

In this respect, AI has opened up a new era in self-service IT. Traditional solutions like those I’ve mentioned above won’t go away. But companies that adopt AI successfully can now support all manner of novel self-service use cases that would not otherwise be feasible.

When businesses leverage AI to this end, they gain more than time savings for IT staff. AI-powered self-service also leads to faster innovation. Indeed, the most efficient and productive businesses going forward will be ones that take an AI-first approach by allowing workers to kick off workflows autonomously with help from AI, rather than making a technical request and waiting for someone to fulfill it.

Doubling down on self-service risks

Yet, while AI is making it possible to supercharge self-service IT initiatives, it’s also supercharging the potential risks.

At their core, these risks stem from the fact that AI can work in unpredictable ways, and most users cannot fully understand what AI-generated solutions do.

For example, imagine an AI-generated application that handles input insecurely, leading to code injection vulnerabilities. A non-programmer would not be able to identify this risk. For that matter, even someone who does know how to code may not necessarily be able to recognize the vulnerability, especially if the app consists of unwieldy “spaghetti code” that is hard for humans to read. So long as the app works, the user may happily deploy it without taking steps to mitigate the security risk.
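
To make this concrete, below is a minimal, hypothetical sketch (in Python, not taken from any real AI output) of the kind of injection flaw an AI code generator could plausibly produce, alongside the parameterized version a security reviewer would expect. A non-programmer checking only whether the app "works" would see no difference between the two.

    import sqlite3

    # Insecure pattern an AI assistant might plausibly generate: user input
    # is interpolated straight into the SQL string, so an input such as
    # "x' OR '1'='1" rewrites the query itself (a classic injection flaw).
    def find_user_insecure(conn: sqlite3.Connection, name: str):
        query = f"SELECT id, name FROM users WHERE name = '{name}'"
        return conn.execute(query).fetchall()

    # Safer pattern: a parameterized query keeps data separate from SQL,
    # so the driver treats the input strictly as a value, never as code.
    def find_user_safe(conn: sqlite3.Connection, name: str):
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (name,)
        ).fetchall()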

Agentic AI (meaning the use of AI agents to carry out tasks autonomously) introduces similar risks. When users build AI agents themselves without understanding exactly which data the agents can access or which actions they can take, they run the risk of deploying agentic solutions that violate compliance or security rules, or that break mission-critical systems.
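
One common mitigation is to expose agents only to an explicit allow-list of pre-approved actions. The sketch below is illustrative only — the tool names and registry are hypothetical, not drawn from any particular agent framework — but it shows the basic shape of the control:

    from typing import Callable

    def search_docs(query: str) -> str:
        return f"results for {query!r}"        # stub implementation

    def create_ticket(summary: str) -> str:
        return f"ticket created: {summary}"    # stub implementation

    # Only pre-approved, low-risk actions are registered; anything the
    # agent invents (deleting records, moving money) simply isn't callable.
    TOOL_REGISTRY: dict[str, Callable[..., str]] = {
        "search_docs": search_docs,
        "create_ticket": create_ticket,
    }

    def dispatch_tool_call(tool_name: str, **kwargs) -> str:
        """Run an agent-requested action only if it is on the allow-list."""
        tool = TOOL_REGISTRY.get(tool_name)
        if tool is None:
            raise PermissionError(f"Agent requested unapproved tool: {tool_name}")
        return tool(**kwargs)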

Risks like these have long been among the challenges stemming from self-service IT. Low-code/no-code apps could also contain insecure code, for example. However, these risks tend to be more profound in the context of AI-driven self-service solutions because it’s more challenging to predict what AI will do or which type of solution it will implement.

With traditional self-service solutions, IT departments could implement guardrails to prevent certain actions; for example, they could prohibit apps developed using low-code/no-code tools from accessing certain types of data. But with AI systems powered by complex, opaque large language models (LLMs), imposing restrictions is much harder.

A practical approach to implementing AI-powered self-service

The question facing companies is: How can they adopt AI to enable self-service while keeping risks in check?

At my business, where we rely extensively on AI to help modernize and document legacy application code, we’ve settled on an approach that focuses on managing which AI solutions our users adopt and how they use them. Instead of allowing employees to use an AI model of their choice without any type of guardrails in place, we route all calls to AI models on our network through a management platform we’ve built. The platform mediates the calls and responses to flag those that may present security or compliance risks.

With this approach, we can offer users the flexibility to work with a variety of AI models, since our platform supports offerings from all of the major vendors (OpenAI, Anthropic, Google and so on). We also allow an open-ended approach to how employees use AI. They’re not restricted to a limited range of predefined tasks or solutions; they can issue whichever prompts they want on a completely self-service basis. Yet, at the same time, routing calls through our platform allows us to enforce enterprise-level controls over what employees do with AI tools.
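
Our platform is proprietary, but the core routing idea is simple. As a rough sketch only — the policy patterns and the provider call below are placeholders, not our actual implementation — a mediating gateway looks something like this:

    import re

    # Illustrative pre-call policy: block prompts that appear to contain
    # credentials. These patterns and call_provider are placeholders, not
    # the author's actual platform logic.
    SECRET_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key id shape
        re.compile(r"-----BEGIN (RSA )?PRIVATE KEY-----"),  # PEM private key header
    ]

    def call_provider(model: str, prompt: str) -> str:
        """Stand-in for a real vendor SDK call (OpenAI, Anthropic, Google, etc.)."""
        return f"[{model}] response to: {prompt[:40]}"

    def mediated_completion(model: str, prompt: str) -> str:
        """Route every model call through a policy check before it leaves the network."""
        for pattern in SECRET_PATTERNS:
            if pattern.search(prompt):
                raise ValueError("Blocked: prompt appears to contain a credential")
        response = call_provider(model, prompt)
        # A production gateway would also scan and flag the response, log the
        # exchange for audit, and enforce per-user model and data policies.
        return response

The design point is that the gateway, not the end user, is the control plane: users keep free-form prompting across vendors, while the business keeps one place to inspect, flag and log every exchange.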

This is what I expect the future of AI-powered self-service to look like for IT departments. Rather than simply placing third-party AI tools in the hands of users and leaving them to their own devices, businesses will need to implement hubs that govern the way employees interact with AI. They’ll want to keep AI use cases flexible, while still retaining the ability to mediate security, compliance and performance risks.

The future of user-driven IT

AI is poised to transform the way businesses approach self-service IT. In many ways, that's a great thing: AI lets users complete tasks on a self-service basis that would be impossible with more conventional solutions.

But from the perspective of security, compliance and reliability, AI-driven self-service can be a terrifying thing — which is why now is the time for businesses to begin investing in strategies that allow them to manage the way users interact with AI tools and services. They must allow workers to take full advantage of the power of AI, while still erecting guardrails to manage AI-related risks.

This article is published as part of the Foundry Expert Contributor Network.

Eamonn O'Neill

Eamonn O'Neill is the co-founder and CTO of Lemongrass, with more than 28 years of experience in SAP. He brings strong technical leadership and focused expertise in enterprise software and architecture, and he leads a global team in the design, development, implementation and support of the Lemongrass Cloud Platform (LCP), which companies use to migrate and manage their SAP applications running in the cloud.