Using AI as a Service: Overcoming the Challenges and Reaping the Rewards

using-ai-as-a-service_1000x500

 

At GoTo, we're dedicated to delivering user-friendly, IT-focused solutions that provide exceptional experiences for agents and end-users. With the emergence of AI technology, we've been focused on harnessing its power to support IT teams. That's why we've integrated OpenAI's generative language models into GoPilot, our innovative, chat-based AI assistant that we believe will transform IT management and support. GoPilot, a native feature of our GoTo Resolve product, represents a big step in IT management, offering 24/7 task efficiency, accuracy, and analysis across managed devices.

The decision to utilize OpenAI’s models brings numerous benefits to our customers. OpenAI provides the best models available in the market and continuously improves them at a faster pace than we could achieve independently. This ensures that our customers always have access to cutting-edge technology which can in turn enhance the functionality of GoPilot. OpenAI’s infrastructure also scales very well, allowing us to meet customer needs without compromise. However, as with any opportunity, there are inherent challenges that come with utilizing AI as a service (AIaaS). These challenges primarily revolve around security, vendor lock-in, and ensuring that the Large Language Model (LLM) performs as desired. Let's explore each of these aspects in more detail.

Security Challenges with AIaaS

Security challenges with employing third-party models and services, such as those offered by OpenAI, need to be evaluated and addressed. Ensuring data privacy and protection against potential vulnerabilities becomes paramount when incorporating external systems into our platform architecture.

With GoPilot, we hope to assist agents in comprehending what is occurring on users’ devices, provide guidance on troubleshooting, and ultimately resolve difficulties on their behalf. To accomplish this, we need information from the devices as well as our GoTo Resolve app. We’ve made certain that the LLMs only have access to the data required to complete the requested task, that the data is not stored outside of our systems or used for training the OpenAI models. Only the data you can view as an agent will be available for the AI assistant to read.

Our focus is not only on safeguarding data but also on protecting the managed devices. We always involve a human in the process, meaning that GoPilot will not take any action without your approval, and you can verify each step before it is carried out. We also secure all actions with our zero trust technology that makes it impossible for others, even our systems or the AI assistant, to execute actions on your managed devices. Furthermore, GoPilot does not have write access to our databases and can only execute the same actions as humans.

How did we achieve this? We have restricted the LLM’s access to only the data necessary for their specific task, and we have built specific integrations that can only perform one task. This allows us to have control over the data and actions required. Our backend then combines these functions to complete complex tasks.

Solving for Vendor Lock-in

Vendor lock-in represents another challenge stemming from relying heavily on external services like OpenAI's models. While such a prominent product greatly enhances GoPilot's abilities, relying on a single provider means there is a risk of restricted or unavailable features during a major outage. It could also limit our flexibility and adaptability if alternative solutions arise or customization beyond the vendor's offerings is needed. To mitigate these consequences, we have adopted the following strategies.

Continuous assessment of alternative vendors. As with any nascent technology, it is crucial to constantly review and evaluate the technical and market landscape. We regularly conduct proofs of concept with various AI vendors to determine the optimal model, based on factors such as pricing, performance, and technical requirements.

Decoupling the API from the product. With the fast-paced evolution of AIaaS offerings in the market, we must be prepared to switch vendors if necessary. To ensure this, we are building our solution to be vendor-agnostic, introducing an abstraction layer that handles the communication and interaction between GoPilot and external APIs within a standardized framework. This layer will handle any differences when switching to a different model or vendor, ensuring minimal impact on user experience and functionality.

Adopting a multi-model approach. Certain features within GoPilot may require different models to fully utilize the underlying AI. For instance, while Helpdesk tasks like ticket management would benefit from a text analysis and summarization model, remote support sessions would require a model capable of interpreting on-screen information and providing real-time recommendations to agents. In such cases, we integrate with multiple vendors to ensure all necessary capabilities are available. This will become even more crucial as more AI vendors, models, and capabilities emerge both in the short and long term.

Getting Your Desired Outcomes from AI

When GoTo decided to venture into the world of AI, we aimed to stand out from the crowd of generic ChatGPT-like plugins and sidebars. Our goal is not only to offer a helpful search assistant that can summarize information, but to fully utilize the potential of LLMs by understanding large amounts of data and user intentions, while also being able to perform complex tasks within our IT support software. We rely on our unique data to keep GoPilot informed about our customer's infrastructure and enable it to complete tasks on their behalf. 

Getting AI to deliver our desired outputs required numerous adjustments and improvements to make sure that an off-the-shelf LLM would operate as needed within our software. This involved setting appropriate guardrails to strike a balance between preventing misuse or malicious intent and providing reliable assistance. 

Another crucial factor in the performance of LLM-based AI is the system prompt and context. In our case, we wanted a reliable and smart assistant that can answer questions about GoTo Resolve and the broader IT domain while also being able to execute tasks on behalf of customers. However, we also wanted to protect our proprietary information from curious users. Therefore, we had to be very specific and clear about the information our assistant is authorized to disclose, especially when it comes to safeguarding our "secret sauce".

Lastly, let's discuss functions. We took advantage of LLMs' ability to understand specific pre-defined operations or functions, allowing us to offer powerful features through natural language prompts. For example, if a user asks GoPilot to check a device, the LLM will know to call our diagnostics function, which is a complex analysis listing relevant device parameters and their condition, as well as providing action items and recommendations on how to resolve potential issues. We plan to drastically expand the range of functions that GoPilot can perform, so that every operation in GoTo Resolve can be executed through conversational chat, including bulk and multi-step actions.

The future of AIaaS

While we can control many factors of how an LLM behaves, there are inherent shortcomings we must acknowledge and prepare for. For example, you cannot really stop the model from hallucinating (i.e., producing false or irrelevant information), but we can include a validation step to reduce this phenomenon.

All in all, AIaaS bears huge potential for companies to deliver innovative and valuable new capabilities to customers. While token limits, context size, and the operational cost will always be something to monitor closely, the possibilities, in my opinion, outweigh those shortcomings by far.

Post correlati

  • GoTo Introduces GoPilot for GoTo Resolve, the First AI Assistant for End-to-End IT Management and Support

    Da GoTo
    Read Article
  • In che modo gli strumenti di intelligenza artificiale aumentano la produttività dell’helpdesk

    Da Kim Zupancic
    Read Article
  • Human + AI Collaboration: How AI Lowers the Skill Bar for Technical IT Support

    Da Chris Savio
    Read Article