In 2017, shared services conferences buzzed with anticipation as banners proclaimed, “The robots are coming!” Robotic process automation (RPA) and chatbot technologies created a wave of excitement, soon followed by promises of conversational agents, blockchain, and artificial intelligence (AI). Though these technologies have created value in particular parts of our shared services processes, none has yet revolutionized how we work. While some shared services organizations have successfully embraced RPA, particularly in finance, they have struggled to scale conversational AI, virtual agents, and chatbots across processes because establishing governance and designing and implementing these solutions is complex and time consuming.
Fast forward to 2023, and we can confidently proclaim, “The robots are here!” The advancement of AI technologies, including generative AI, has paved the way for organizations to deliver greater value at an accelerated pace. Unlike its predecessors, which recognize patterns in a defined set of data, generative AI creates new content from a vast body of unstructured data that continues to grow as people use generative AI every day. By harnessing its capabilities, shared services organizations can increase productivity by automating routine tasks more easily and empowering their staff and customers to resolve issues more quickly and effectively.
Many service centers are just beginning to explore generative AI to elevate the customer experience, drive productivity and efficiency, improve controls to enhance compliance, and optimize location strategies.
Leveraging cutting-edge generative AI algorithms can elevate the customer experience by providing personalized responses tailored to each customer’s role, region, and language, enabling virtual agents to provide more customer support with less manual training. When generative AI solutions are connected directly to traditional knowledge management systems, customers can find answers faster (e.g., generative AI can return a specific, contextual answer rather than a list of possibly relevant knowledge articles for the user to research). Connecting generative AI directly to internal, proprietary knowledge management solutions also reduces the risk of sharing intellectual property with the rest of the world, since the company’s knowledge base is not included in the overall global generative AI data model.
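The grounding pattern described above can be sketched in a few lines. This is a toy illustration, not a production design: the knowledge articles, the keyword-overlap scoring, and the prompt wording are all hypothetical stand-ins for a real knowledge management integration.

```python
# Minimal sketch: ground the model's answer in an internal knowledge base
# rather than the public training data. All content below is illustrative.

def score(query: str, article: str) -> int:
    """Toy relevance score: count of query words that appear in the article."""
    words = set(query.lower().split())
    return sum(1 for w in words if w in article.lower())

def build_prompt(query: str, articles: list[str]) -> str:
    """Retrieve the most relevant internal article and wrap it in a prompt,
    so the model answers from company knowledge, not from the open web."""
    best = max(articles, key=lambda a: score(query, a))
    return (
        "Answer using ONLY the internal article below.\n"
        f"Article: {best}\n"
        f"Question: {query}"
    )

# Hypothetical internal knowledge articles.
articles = [
    "Expense reports are submitted in the finance portal by the 5th.",
    "New hires enroll in benefits within 30 days of their start date.",
]
prompt = build_prompt("When do new hires enroll in benefits?", articles)
```

Because the prompt carries only the retrieved internal article, the proprietary knowledge stays behind the organization’s interface rather than flowing into a shared global model.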
With this technology, organizations can generate more human-like AI responses by pairing natural-sounding speech with continuous customer sentiment analysis, adapting the AI’s reactions to customer needs. When generative AI cannot resolve an issue, customers can be transferred seamlessly to Tier 1 and Tier 2 agents, with the AI passing contextual information to the human agent for a seamless customer experience.
Generative AI may continue to play a role after customers are transferred to human agents. Envision equipping agents with tools that allow a mid-conversation prompt, “How should I respond?” and immediately return an exceptional answer based on masses of valid internal (or external) data. But wait… there’s more. Large language models can also translate customers’ native languages into Tier 1 and Tier 2 agents’ languages in near real time. Barriers that have forced organizations to locate service centers in various parts of the world, primarily for language capability, are quickly being dismantled.
As generative AI becomes increasingly robust, businesses will be able to achieve dramatic gains in productivity and efficiency. It can automatically learn and streamline processes (within the organization’s parameters and rules), substantially reducing the cost of improving processes, enabling organizations to realize value much faster, and freeing up time and resources for more strategic objectives. Tier 1 and Tier 2 agents can be empowered with generative AI assistance (an AI “co-pilot”) that generates potential solutions or troubleshooting steps based on customers’ problem descriptions. Generative AI leapfrogs its AI predecessors with the ability to manage process discrepancies that historically stopped a robot in its tracks: it can realize, “I’m missing this information,” and go get it. Resolution of process “jams” may shrink from days or even weeks to just seconds.
Generative AI can be used to ensure compliance with rule-based protocols, minimizing human error and driving higher compliance standards. The data available to generative AI includes SOX compliance guidelines, process-leading practices, tax regulations from multiple countries, and country-, state-, and county-level regulations that may impact processes.
Be warned that the software can produce wrong answers, and not all responses come with a data source. Still, generative AI can act as an assistant to pressure test controls and compliance questions within shared services processes.
As we continue through uncertain economic times, many organizations are reexamining their location strategies and delivery models, rethinking their delivery center locations to gain immediate savings and scale. Location analysis has traditionally entailed laborious research on country and city socio-economic data, labor costs, turnover rates, infrastructure, and several other criteria. Generative AI gives us much quicker access to this data and the ability to run scenarios that optimize existing or new, low-cost locations. Additionally, augmenting workers already in lower-cost centers with AI “co-pilots” (i.e., AI-assisted support) empowers them to handle more complex work effectively. This strategic approach not only leads to significant cost reductions but also ensures that service quality is maintained and customer demands are met.
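The scenario-running idea above boils down to scoring candidate locations against weighted criteria. Here is a deliberately simple sketch; the cities, criteria, normalized metrics, and weights are invented for illustration, not real benchmarks.

```python
# Toy weighted scoring of candidate delivery center locations.
# Metrics are assumed to be pre-normalized to a 0-1 scale (1 = most favorable).

def location_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum over normalized criteria; higher is better."""
    return sum(weights[k] * metrics[k] for k in weights)

# Hypothetical candidate cities with illustrative metric values.
candidates = {
    "City A": {"labor_cost": 0.9, "talent_pool": 0.6, "infrastructure": 0.7},
    "City B": {"labor_cost": 0.6, "talent_pool": 0.9, "infrastructure": 0.8},
}

# One scenario's weighting; rerunning with different weights models
# different strategic priorities (e.g., cost-first vs. talent-first).
weights = {"labor_cost": 0.5, "talent_pool": 0.3, "infrastructure": 0.2}

ranked = sorted(candidates,
                key=lambda c: location_score(candidates[c], weights),
                reverse=True)
```

Changing the weights dictionary and re-ranking is what “running scenarios” amounts to: a cost-weighted scenario may favor one city while a talent-weighted scenario favors another.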
Here are several examples of how these benefits can translate into process-based work within a shared services center (along with expanding the scope of work service centers can provide):
To unlock the full potential of generative AI, continuous improvement is essential. Shared services organizations must proactively update and retrain the model with new data and customer interactions to foster enhanced accuracy and performance. This iterative process enables the model to learn from real-world experiences, effectively managing an expanding range of inquiries.
It is crucial to exercise caution when implementing generative AI. The technology is largely unregulated, so organizations must develop reasonable guardrails around how they use it. Many generative AI tools retrain their models on the questions and answers users provide. To protect sensitive company data, it is advisable to establish a “wall” around the data through direct interfaces, preventing it from being shared with external databases. It is also vital to inform users that they are engaging with AI rather than a human agent; transparent communication fosters trust and helps manage user expectations. Finally, conduct periodic reviews of the system to ensure its effectiveness, and keep human agents available to address complex or specialized issues that cannot be adequately resolved through self-service options.
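Two of the guardrails above, keeping sensitive data inside the “wall” and disclosing the AI to users, can be sketched as simple pre-processing steps. This is a simplified illustration; the regex patterns and messages are assumptions, and a real deployment would use a proper data loss prevention layer.

```python
import re

# Illustrative patterns for sensitive tokens (not exhaustive).
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL = re.compile(r"\b[\w.]+@[\w.]+\.\w+\b")

def redact(text: str) -> str:
    """Mask sensitive identifiers before any text leaves the company 'wall'
    toward an external model that may retrain on user inputs."""
    return EMAIL.sub("[EMAIL]", SSN.sub("[SSN]", text))

def ai_greeting() -> str:
    """Disclose the AI up front to set user expectations."""
    return "You are chatting with an AI assistant. How can I help?"

masked = redact("Employee 123-45-6789 (jane.doe@example.com) asked about payroll.")
```

Running redaction before every outbound prompt means that even if the vendor retrains on submitted questions, the identifiers never leave the organization.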
By approaching generative AI implementation with early exploration of use cases, shared services organizations can better understand where the tool will add the most value and begin to address its inherent challenges effectively. This gives organizations a head start on developing faster analyses, better forecasts, and exceptional customer service.
Contact us to discuss how you can take the next step to enhance the services provided by your shared services organization and improve the employee experience.