How Should IT Services Firms Build Their Generative AI Practice?

Estimated reading time: 11 minutes

How do you kick off a Generative AI practice? Before we get to the 'how', let's dive into why launching a Generative AI practice is essential today, especially if you're in the IT services sector. To shed light on this, let's rewind to 2015. That's when I embarked on my journey with Powerupcloud, my previous venture, a cloud consulting firm. Over the next four years, we skyrocketed to become one of the fastest-growing cloud consulting companies. We bagged numerous awards, developed a range of products, collaborated with major enterprise customers and unicorn startups, and ultimately, in 2019, merged with L&T Infotech.

In 2015, I grappled with a similar question: Why start a cloud company? What makes it so special? Why not stick to app development, an evergreen field still thriving today? I realized the answer lies in the technology adoption trend of large enterprises. Take the Fortune 500 and Fortune 2000 companies; they often dictate which B2B providers thrive over time. AWS launched its first commercial cloud in 2006, followed by Azure around 2009, and Google Cloud in 2011, and it became abundantly clear by 2015 that major companies had started adopting cloud technology.

Cloud was set to grow exponentially, and the evidence was right there for everyone to see. And this led me to start Powerupcloud. In hindsight, our timing for starting and selling the company could have been better – perhaps a year or two earlier for launching and a year or two later for selling. Yet, I have no regrets about those decisions. Looking at the present, I see Generative AI in a similar light. This technology is massively disruptive, far more so than cloud computing or big data. Enterprises are adopting Generative AI at a pace that dwarfs their adoption of cloud technology.

Enterprises Are Leading the GenAI Adoption Wave

A recent study by Goldman Sachs revealed a staggering fact: nearly 67% of enterprise companies are either discussing, implementing, or have already integrated Generative AI into their operations. This shift aims to enhance processes and boost automation. It’s become abundantly clear that when enterprises lead the charge in embracing new technologies, it’s time to sit up and take notice. And Generative AI? Well, it’s not just another tech trend. Icons like Jeff Bezos are hailing it as a groundbreaking discovery. Generative AI is poised to eclipse the impact of cloud computing and big data – it might even surpass both combined.

This seismic shift signals a crucial moment for IT service providers. It’s not just about hopping on the bandwagon; it’s about driving it. The time is ripe for the solution providers to take bold steps in developing and integrating Generative AI practices. By bringing this revolutionary technology to their clients, they’re not just keeping pace with the times – they’re defining the future for their clients and not letting the competition walk away with new clientele.

What Does Generative AI Mean for Enterprises

Now that we’ve tackled the ‘why’ of Generative AI, let’s zoom in on the ‘what’. What exactly does Generative AI mean for enterprises? Rewind to early 2023, when GPT-3.5 was the talk of the town, along with a handful of open-source models. Back then, the adoption of Generative AI mostly involved leveraging GPT-3.5 or other open-source models, fine-tuning them to specific needs, and utilizing them locally.

However, as the market rapidly evolves, a trend unlike any we've witnessed in other technologies, we're seeing a significant shift. Major enterprises are now eagerly adopting closed-source or so-called frontier models, such as OpenAI's GPT, the Azure OpenAI Service, or Anthropic's Claude. The reason? These closed-source LLM giants have made a public commitment to compliance standards like SOC 2, ensuring that customer data isn't used to train their models. This aligns perfectly with enterprise requirements, eliminating the intricate process of fine-tuning models.

So, what does Generative AI look like today in the enterprise sphere? It boils down to pinpointing a customer’s use case, understanding the problem, and identifying the most fitting solution. The solutions can generally be categorized into three types:

  1. Prompt-based
  2. Prompt + RAG (Retrieval Augmented Generation)
  3. Fine-tuning, or either of the above combined with fine-tuning

The key is to choose the solution that addresses the problem effectively and then deploy it on a suitable cloud platform, like AWS. Of course, there are complexities. For example, some solutions necessitate a comprehensive ETL (Extract, Transform, Load) process, revamping the data lake, or creating a new data store to feed data into AI engines. Additionally, custom applications and APIs may need to be developed for seamless integration with the LLM platforms. These aspects are par for the course, but broadly speaking, this is what Generative AI represents from the perspective of enterprise customers.
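To make the three solution types concrete, here is a minimal sketch of how they differ in what actually reaches the model. This is illustrative only, not any vendor's SDK: `call_llm` is a hypothetical stand-in for a real chat-completion API, and the word-overlap retriever is a toy; production RAG systems use a vector database.

```python
import re

def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call; echoes the prompt so the shapes are visible."""
    return prompt

def prompt_based(question: str) -> str:
    # 1. Prompt-based: instructions plus the user's question, nothing else.
    return call_llm(f"You are a helpful enterprise assistant.\n\nQuestion: {question}")

def prompt_plus_rag(question: str, documents: list[str], top_k: int = 1) -> str:
    # 2. Prompt + RAG: retrieve grounding text first, then build the prompt around it.
    def words(text: str) -> set[str]:
        return set(re.findall(r"\w+", text.lower()))
    # Toy retriever: rank documents by word overlap with the question.
    ranked = sorted(documents, key=lambda d: len(words(question) & words(d)), reverse=True)
    context = "\n".join(ranked[:top_k])
    return call_llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}")

# 3. Fine-tuning changes the model's weights rather than the prompt; at
#    inference time the call still looks like (1) or (2), just against a tuned model.
```

The practical takeaway: categories (1) and (2) differ only in prompt construction, which is why many enterprise projects never need fine-tuning at all.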

Getting Started With Building The Generative AI Practice

Diving into the ‘how’ of Generative AI practice development for IT services companies, let’s demystify the process. The starting point? Education at the leadership level. That means getting everyone from the founder and CEO to the executive and senior management teams up to speed on the basics of Generative AI and its business applications. A great place to begin is the free course offered by Google, designed to lay down the foundational knowledge of Generative AI. 


Link to the course: https://www.cloudskillsboost.google/course_templates/536

This step is critical for securing top-level buy-in. Remember, in IT services firms, the leadership team often handles client interactions, so they need to be clued up on market trends. If budget allows, delving into in-depth reports from industry analysts like Gartner and Forrester is also smart. These reports provide a more concise, accelerated learning curve that can empower your team to discuss Generative AI fluently with clients.

Considering the longevity of this technology, I’d advise jumping in now. It’s an investment of about six to eight hours – something you can easily tackle over a weekend. And that’s just the first step. The next move is to familiarize your entire tech team with Generative AI, from the CTO to the junior engineers. The learning curve might start with programming in familiar languages like Python or JavaScript, supported by platforms like Lyzr AI and Langchain.

Building is the Best Way to Learn Generative AI

Start building basic Generative AI applications, such as a chatbot, a knowledge search engine, or even simpler – a chat with PDF application. This approach ensures that every tech team member, regardless of their level, has hands-on experience creating at least one Generative AI application. This practical exposure shatters any myths about the complexity of Generative AI and lays a solid foundation for further development.
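The chatbot in particular teaches a lesson every engineer needs early: LLM APIs are stateless, so the application must carry the conversation history itself. Below is a hedged sketch of that pattern; `call_llm` is again a hypothetical stand-in for any chat API, and the truncation policy is a simple illustration, not a recommendation.

```python
def call_llm(messages: list[dict]) -> str:
    """Stub for a real chat API; echoes the latest user message."""
    return f"echo: {messages[-1]['content']}"

class ChatBot:
    def __init__(self, system_prompt: str, max_turns: int = 10):
        self.system = {"role": "system", "content": system_prompt}
        self.history: list[dict] = []
        self.max_turns = max_turns

    def ask(self, user_text: str) -> str:
        self.history.append({"role": "user", "content": user_text})
        # Replay only the most recent turns so the prompt fits the context window.
        recent = self.history[-2 * self.max_turns:]
        reply = call_llm([self.system] + recent)
        self.history.append({"role": "assistant", "content": reply})
        return reply
```

Swapping the stub for a real API client turns this into a working chatbot; the knowledge-search and chat-with-PDF variants add a retrieval step before each call.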

For the non-technical crowd, it’s not as daunting as it seems. The non-coders should focus on mastering the art of prompting (called Prompt Engineering) and building GPTs. Tools like OpenAI’s GPT Builder are accessible to almost everyone and insightful for understanding the nuances of Retrieval-Augmented Generation (RAG), prompt engineering, and how GPT responds to queries. Prompt engineering is also an absolute necessity for developers as it’s the primary way to interact with Large Language Models (LLMs).
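At its core, prompt engineering is just disciplined string construction: an instruction, a few worked examples (few-shot prompting), and the query. A minimal sketch, with illustrative names and example data of my own invention:

```python
def build_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: instruction, worked examples, then the real query."""
    shots = "\n\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{instruction}\n\n{shots}\n\nInput: {query}\nOutput:"

prompt = build_prompt(
    "Classify the sentiment of each support ticket as positive or negative.",
    [("The rollout went smoothly, thanks!", "positive"),
     ("Still broken after three patches.", "negative")],
    "The new dashboard is a huge improvement.",
)
```

Ending the prompt with a dangling `Output:` nudges the model to complete the pattern the examples establish, which is the essence of few-shot prompting.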

At Lyzr, we also build free tools that accelerate the learning curve for Generative AI enthusiasts. Lyzr's Prompt Studio, built along the lines of the Prompt Engineering Guide, is a great place to start learning the techniques and best practices of prompting, and to quickly build production-grade prompts from pre-built templates and production GenAI apps.


Link to the tool: https://promptstudio.lyzr.ai/login

By equipping developers and non-developers with these skills, your team will grasp what’s possible with Generative AI and engage more effectively in brainstorming sessions with clients, understanding their needs, and proposing feasible solutions.

Assembling the Core GenAI Team

Following the foundational steps, the next crucial phase is team formation. This core team, which will spearhead the Generative AI initiatives, should resemble a blend of an app development team and a product engineering team. Here is the makeup of the team:

AI Architect (LLM): Designs the application architecture and decides on the applicable Generative AI stack.

GenAI Consultant: Understands the possibilities and limitations of LLMs and ensures that projects stay within the scope of development.

AI Engineers: Generally backend Python developers, but any backend engineer with a leaning towards the GenAI stack would do.

Full Stack Engineers: Build the application layer, including the UI/UX, which plays a major role in the success of Generative AI projects.

Data Engineers: A lot of preprocessing is required to get data LLM-ready, so data engineers play a key role in the lifecycle.

DevOps Engineers: Deployments are mostly on the cloud, whether on servers, containers, or serverless architecture, so DevOps engineers play their part.

Project Manager: Stitches the development process together and keeps scope creep, which is almost unavoidable in GenAI projects, in check.

You might wonder why there’s a focus on application engineering. The reason is straightforward: the effectiveness of Generative AI heavily relies on the applications through which it’s accessed. A well-designed app enhances user experience for internal or external customers and ensures the success of Generative AI-based solutions. UI/UX design, in this context, is inseparable from the success of Generative AI applications. Equally important is preparing, training, and possibly fine-tuning the data. These two aspects – application development and data handling – are both critical, each contributing about 50% to the success of a robust Generative AI application.

Once the team is in place, the focus shifts to building conceptual apps based on your understanding of customer needs. This can be achieved in two ways: 

  1. Developing apps based on your team’s experience or
  2. Directly engaging with customers to identify and address their pain points. 

Early projects might be non-commercial (free POCs), but they provide invaluable learning experiences that shape your Generative AI practice and team development.

Why partner with Lyzr?

Strategic partnerships play a pivotal role in this journey. At the foundational level, partnering with cloud service providers like AWS, Azure, or Google Cloud is advantageous, given their significant investments. At the application layer, enterprise agent frameworks like Lyzr, which run dedicated partner programs for building enterprise-grade Generative AI apps, are invaluable. These partnerships – a cloud provider, an enterprise agent framework like Lyzr, and a capable vector database company like Weaviate or Pinecone – form the ideal combination for a successful Generative AI practice.

While LLM providers like OpenAI or Anthropic might not offer partner programs, leveraging their services is still critical. The ideal partnership model involves collaboration with AWS or a similar cloud provider, Lyzr, for agent frameworks and a leading vector database platform like Weaviate. Lyzr’s AI Management System allows IT service providers to build, deploy, and manage ‘private and secure’ Generative AI apps for their customers.


Lyzr offers tools and training programs to help IT services companies establish a Generative AI practice: Prompt Studio, Magic Prompts Builder, Lyzr Academy, the Lyzr AI Management System, pre-built Colab files, and more. Training programs are available for executives, senior leadership, and technical teams, ensuring all levels of the partner organization are well-equipped to handle the massive wave of Generative AI projects. Lyzr's partner-friendly commercial model allows partners to build and grow a multimillion-dollar Generative AI practice in under a year.

“For every $10,000 ARR Lyzr makes on an Agent SDK subscription, our implementation partners make between $100,000 and $500,000 in professional services charges based on the complexity and customization needs of the customer’s GenAI application. This doesn’t include the possible multi-year managed services deal the partner may sign up directly with the customer. Think of us like Snowflake but for Generative AI Agents.”

Moreover, the Lyzr team is proactive in the initial stages of lead generation and deal qualifying, joining partner calls with customers to assist in deal closures. Lyzr also offers training to the partner’s pre-sales and sales teams, focusing on qualifying opportunities and setting realistic customer expectations. This comprehensive support system aims to facilitate the rapid scaling of Generative AI practices among partner organizations.

In 2024, Lyzr plans to onboard 10 active partners and help each build a $1M – $3M Generative AI business. If you want to join the cohort, please email us at partnerships@lyzr.ai.
