Case Study: SurePeople Brings Generative AI to Psychometric Assessments with Lyzr on AWS

Overview

SurePeople is an HR tech firm known for its expertise in psychometric assessments for individuals and teams, powered by its proprietary PRISM technology. The company aimed not only to deepen its psychometric analysis but also to make its application more interactive for users through the integration of generative AI.

To achieve this, SurePeople explored various strategies for incorporating generative AI into its operations and initiated a partnership with Lyzr, a low-code agent framework. Lyzr provides agent SDKs that run locally on AWS, ensuring complete data privacy and security. AWS is recognized as one of the most reliable cloud platforms for developing generative AI applications, making it an ideal choice for this initiative.

The collaboration between SurePeople, AWS, and Lyzr led to the introduction of several key generative AI features for SurePeople's users.

Niko Drakoulis, CEO of SurePeople, announced the collaboration with Lyzr:

“SurePeople is delighted to announce our partnership with Lyzr.ai, a key player in fortifying the scalability, security, and future-readiness of our AI infrastructure. Thanks to their versatile SDKs, we're empowered to operate at the forefront of innovation, underpinned by a robust framework that bolsters our AI applications. In an ever-evolving landscape of artificial intelligence, Lyzr.ai's SDKs ensure we remain at the cutting edge. Additionally, our collaboration has been enriched by their exceptionally skilled and cooperative team.”

The Problem

Incorporating generative AI into its platform was a strategic decision for SurePeople, a company with a nearly eight-year history. Throughout its operation, SurePeople has accumulated terabytes of data, including detailed psychometric analyses and individual personal preferences. Given the sensitive nature of this data, which includes Personally Identifiable Information (PII) and, to some extent, Protected Health Information (PHI), establishing a robust security architecture was paramount.

Ensuring data security and privacy was only one aspect of SurePeople's strategic initiative. Another critical decision was selecting the right enterprise AI stack: one that could support the development of an intelligent AI coach and other AI modules while integrating seamlessly with SurePeople's existing applications.

Furthermore, SurePeople prioritized fail-safe mechanisms, failover strategies, and the scalability of the entire application. Although Lyzr's RAG SDK ships with auto-scaling capabilities, the challenge extended beyond the SDKs to the availability of Large Language Model (LLM) API endpoints. SurePeople therefore had to select LLMs for both primary and secondary roles and devise a comprehensive failover strategy on the LLM side, underscoring its commitment to operational integrity and scalability.

The Solution

[Figure: SurePeople architecture diagram]

To address the paramount concern of security and privacy, a strategic decision was made to operate Lyzr's private SDKs directly within SurePeople's AWS account. This approach keeps all customer data and sensitive information securely contained within SurePeople's Virtual Private Cloud (VPC), maintaining compliance with standards such as SOC 2 and GDPR, among others.

In addition to security considerations, the following measures were implemented to address other operational priorities:

  1. Failover Strategy: OpenAI's GPT-4 was selected as the primary Large Language Model (LLM) to drive the generative AI features. To ensure resilience and operational continuity, a failover mechanism was established, allowing a seamless switch to alternative models such as Anthropic's Claude or Mistral, hosted on Amazon Bedrock, if necessary.
  2. Scalability Measures: To accommodate growth and enhance operational efficiency, the customer can choose between Amazon EC2 and AWS Fargate, the latter enabling a serverless architecture in which the application backend scales automatically in response to fluctuations in customer traffic, ensuring a robust and responsive user experience.
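The failover strategy in point 1 can be sketched as a simple try-in-order pattern. This is an illustrative sketch, not Lyzr's actual implementation: the `complete_with_failover` helper and the stub provider functions are assumptions standing in for real clients (the OpenAI SDK for GPT-4, and boto3's `bedrock-runtime` client for Claude or Mistral on Amazon Bedrock).

```python
from typing import Callable, Sequence, Tuple

def complete_with_failover(
    prompt: str,
    providers: Sequence[Tuple[str, Callable[[str], str]]],
) -> Tuple[str, str]:
    """Try each (name, call) provider in order; return (name, completion)
    from the first provider that succeeds."""
    failures = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # e.g. timeout, rate limit, 5xx from the API
            failures.append(f"{name}: {exc}")
    raise RuntimeError("all LLM providers failed: " + "; ".join(failures))

# Hypothetical stubs for illustration only; in production these would wrap
# the OpenAI and Amazon Bedrock runtime clients respectively.
def gpt4_primary(prompt: str) -> str:
    raise TimeoutError("primary endpoint unavailable")  # simulate an outage

def bedrock_fallback(prompt: str) -> str:
    return f"[bedrock] response to: {prompt}"

if __name__ == "__main__":
    used, text = complete_with_failover(
        "Summarize this team's strengths.",
        [("gpt-4", gpt4_primary), ("bedrock", bedrock_fallback)],
    )
    print(used, "->", text)
```

Ordering the providers as a list keeps the primary/secondary roles explicit and makes it easy to add a third model or reorder priorities without touching the failover logic.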