We need to build our own AI solutions: Shailesh Davey, CEO, Zoho Corp

In an interview with YourStory, Shailesh Davey, CEO, Zoho Corp, discusses his vision for the company in his new role, elaborates on the company’s AI strategy, and outlines the $20 million bet on indigenous foundational models.

Thursday, April 03, 2025, 7 min read

The development of artificial intelligence (AI) models in India has been among the hottest debates of 2025. While several companies are racing to ride the new wave, SaaS company Zoho Corp has placed its latest bet with the launch of two foundational models, which will go live before the year-end. 

For Shailesh Davey, the newly-appointed CEO of Zoho Corporation, building AI models is like running experiments–while some succeed, others don’t. The SaaS unicorn, which has embraced change ever since pivoting its original network management platform into a CRM solution decades ago, isn’t shying away from AI. 

Davey, who previously served as the Vice President of Engineering at ManageEngine, Zoho’s IT management arm, co-founded the firm in the mid-1990s. 

With Zoho founder Sridhar Vembu now serving as the Chief Scientist, the company is focused on AI-driven automation, foundational models, and right-sized solutions for enterprises. 

In an interview with YourStory, Davey discussed his leadership style, the company’s AI strategy, and its next big bet on indigenous foundational models.


Edited excerpts from the interview:

YourStory [YS]: Having been a part of Zoho for years, what has been the biggest shift in your learning as you take on the role of CEO?

Shailesh Kumar Davey [SKD]: There’s been a strong focus on AI. At this stage, AI is beginning to shape how a software organisation reacts to these new changes, with projects like DeepSeek emerging alongside our efforts. We also had our own conviction about how we wanted to make it available to the customers. All these forces were coming together, and one of the steps was to bring in more minds and eyes. 

We’ve always welcomed challenges. That’s when we decided that Sridhar [Vembu] should start looking into it. Independently, he has been focusing a lot more on programming languages and how they fit into this whole AI world.

While that work has been going on for some time, it came at a time when there were a lot of upheavals. We felt it was a good time to reorganise a bit and have him focus as a Chief Scientist on this issue.

From a structural perspective, one advantage is that as an organisation, we’ve all worked together for more than 20 to 25 years—myself, Rajesh, Tony, and Mani, who are the new leaders now. Most of us have already been handling these roles.

Essentially, we wanted to distribute responsibilities so Sridhar could focus more on AI and programming language initiatives.

I’m still in the observation phase, because it is still early days–I’m observing, learning, and picking up things as we go.

YS: Everyone’s talking about AI agents. Could you give a broad overview of your plans for AI agents over the next three to four years? 

SKD: There have been two major changes in the past year—the advent of reasoning engines, where an LLM can explain why it took a particular path, and the introduction of tool calling, allowing the LLM to access your environment directly. 

Suddenly, new technological developments are emerging, so I’d say we’re still in the early stages of this agentic approach.

Drawing on our experience with object-oriented principles—and the fact we’ve been in business for over two decades—we understand how enterprises operate. Our goal is to design a system that works well within an enterprise setting. 

YS: Zoho has also invested in data centers and is developing its own foundational models. Where do you see India’s foundational model space heading?

SKD: I believe we’re still in the early stages of exploring LLMs and the concept of “intelligence”. More technologies will emerge. Our brain uses just 20 watts (20 joules per second), while a GPU can consume 300–350 watts. While training an LLM, we compress time (the equivalent of a child’s 15 years of learning condensed into three months), so the energy usage may be understandable. 

Secondly, technologies once deemed highly complex, like relational databases (Oracle, MySQL, Sybase, etc.), are now fairly ubiquitous. It’s not hard to imagine every country eventually having its own system, much like they do with databases.

I believe many of these technologies will follow a similar path. As a third point, if we’re to embrace new developments in intelligence and AGI (Artificial General Intelligence), we need to understand our current systems first, which is why we must build them.

There are also national and security concerns, and not every country will respect or align with your own priorities. This makes it essential to have a few homegrown efforts.

We need to build our own solutions. It doesn’t necessarily have to be at the same scale as the largest models. We can determine the specific context where the model will be used and build accordingly. That’s what DeepSeek showed us: despite running on larger models, they had resource constraints and chose a specific path.


YS: Could you share more about the company’s investment in AI infrastructure?

SKD: We’ve spent around $20 million over the past two years, and we expect our spending to remain in that range moving forward. This includes both direct investments and renting GPU machines in data centers, which we plan to continue.

That said, our biggest constraint isn’t just the budget—it’s about learning and understanding the technology better. We have the talent, but we need to build expertise and move forward strategically. 

We have been training a 3–13 billion parameter model and have been working on it for the past eight months... Whatever DeepSeek did was experimental. Similar experiments were conducted by OpenAI, Gemini, and Anthropic, but DeepSeek did a different set of experiments because the constraints it faced were very different. 

From an intellectual challenge perspective, I would consider all of them to be almost equal; what differs is which constraints each one prioritises. 

We are more constrained by our learning pace than our ability to spend on GPUs. As we learn more, gather speed in our implementation, and gain more clarity, we would be willing to spend on it. 

YS: You earlier mentioned some unique projects, such as building an LLM with minimal compute. Can we expect more such innovations coming out of the R&D efforts at Zoho AI Labs? 

SKD: We are largely focusing on right-sizing a model, which is all about understanding the customer use case and selecting models that are efficient enough to solve the problem effectively. It’s more of a product management approach. 

We are also focused on training LLMs in resource-constrained environments, particularly where data availability—not just compute power—is limited. For example, there are certain languages where the amount of training data itself is very scarce.

The challenge is making an LLM learn in an environment where there isn’t a large volume of training data available. 

YS: ManageEngine is expected to hit $1 billion revenue soon. To what extent is AI contributing to this growth—particularly in the IT security segment? 

SKD: At ManageEngine, we operate across multiple pillars, including IT service management, observability, IT security, IT analytics, and low-code platforms. Among these, IT security—especially unified endpoint management and security information management—is growing rapidly for us.

Security generates vast amounts of data, and AI excels at identifying anomalies—essentially finding the needle in the haystack. This makes AI and security a perfect match. We are investing heavily in this space as we believe IT security will be a huge growth area for us. 

YS: Is traditional SaaS dying? With AI coming into the picture, do you think that firms are forced to pivot from their previous offerings?

SKD: We need to stay highly aware of where the industry is headed—keeping our ears to the ground and reacting quickly. That’s exactly why we’ve embraced change.

The key is to accept and embrace your worst fears. Right now, the entire industry is undergoing a transformation, and the question is: where do we go from here?

Zoho itself has gone through multiple shifts. Initially, we were focused on network management platforms, but when that market disappeared in 2001, we pivoted, and ManageEngine was born. Later, when the cloud wave hit, we adopted that as well.

What sets us apart is our ability to adapt. As a privately-held, engineering-driven company, we have the time and mindset to embrace these shifts. We’re watching the space closely, embracing change, and ensuring we take the right actions based on where the industry is heading.


Edited by Megha Reddy