DOCN One on One with CEO – Q2 2024 (transcript)
Lede
DigitalOcean (DOCN) reported earnings on 8-8-2024 and we reviewed it five days ago.
The stock has risen 30% off of that report.
I have completed my one-on-one conversation with the CEO and include that transcription below.
As a reminder, all of our CEO and CFO interviews for all companies are available on the Interviews Tab.
Takeaways
Without speaking about specific companies, in general we do not feel that generative AI is overhyped; we feel it is, in fact, underappreciated. It will be larger than the “hype,” quite possibly by several fold or even an order of magnitude.
Analysts who doubt this are, in my opinion, wrong.
Separately, based on the conversation with DigitalOcean’s CEO, we feel quite bullish on 2H 2024.
Here are some comments that stood out to us from the call. These are all from the CEO:
- We are doing quite well in attracting high quality customers from the front end of the funnel, and there are multiple reasons for it, and I think it is durable over the next several quarters.
- [O]ur product velocity looks very, very good, and more importantly, with Wade Wegner now joining, we are starting to wave our flags in the developer community quite nicely.
- The good news is we seem to be on the other side of contraction and churn, our contraction and churn are actually either steady or improving slightly.
- That’s one of the first things I did, was I said, we have to pick up product velocity, and you can already see that in how many features we are shipping quarter over quarter.
- Regarding SMBs: this is probably the worst hit they have been over the last three or four years.
- I think the Fed cut is, by itself, yes, it will release some liquidity, and enable them to access funding and things like that, but I think the bigger thing is what a Fed cut would enable them to do, or their customers to do.
Please enjoy the full conversation below.
One-on-One with the CEO of DigitalOcean (DOCN)
Ophir Gottlieb:
All right, let’s get into it. It was a great quarter, congratulations again. So, Paddy, I’m going to start with questions about the quarter and then like usual I’ll move to larger evergreen questions, and I’ll ask about the GPU Droplet in a little bit, but I first want to cover some other highlights.
There was revenue acceleration in back-to-back quarters, so 11% to 12%, then 12% to 13%, and you mentioned on the call that there were signs of success from product and go-to-market.
So, let’s talk about go-to-market first, what can you tell me about how your go-to-market momentum is changing or increasing?
Paddy Srinivasan (CEO, DigitalOcean):
Yeah. So, I’ll get to that in a second, I just want to make sure that — the numbers you quoted are right, but when you look at the next quarter, we will be lapping the whole Paperspace acquisition from last year.
OG:
Yes, yes.
PS:
Next quarter will be the full quarter. So, it will, on the surface, it will look a little muted, but some of the stuff that we are about to talk about are evergreen, and they will start re-accelerating the business after that.
OG:
Yes, yes, absolutely.
PS:
So, if you look at, fundamentally, there are three pillars that I look at on a daily basis, Ophir.
One is the ARR [annual recurring revenue] from net new customers, which are, we call them the month one to month 12.
So, essentially, anyone that is signing up with a credit card today, it takes them 12 full months for them to become part of our NDR cohort.
So, anyone that is not in our NDR [net dollar retention] cohort, how are they doing?
That’s really a super leading indicator for the health of our business, so are we bringing in healthy customers at the top of the funnel?
The second thing, of course, is NDR, and the three components of NDR, as we have talked about before, expansion, churn, and contraction of existing M12 plus cohort of customers.
The third thing, which is, or maybe third and a fourth thing, third thing is cloud-based business, and fourth is the new AI ML business.
So, I would say the pillar number one and four are doing quite well.
So, we are doing quite well in attracting high quality customers from the front end of the funnel, and there are multiple reasons for it, and I think it is durable over the next several quarters, I’m not prepared to start reporting on exactly what that number looks like, but it looks very promising and very healthy.
And the primary reasons for that is, I would say, twofold. One is our product velocity looks very, very good, and more importantly, with Wade Wegner now joining, we are starting to wave our flags in the developer community quite nicely.
So, we conducted our deploy conference and so forth, we are starting to get quite active in social channels, and all that adds to the top end of our funnel, and we are also starting to improve the experience on the website, the onboarding experience in the products, still early days, but even small changes to these onboarding workflows have big impact in the conversion rates.
So, we feel fairly good about that, so that’s number one.
Pillar number two is NDR. NDR, I wish we had a lot more positive traction to talk about, but the good news is contraction and churn both look either stable or improving steadily.
Expansion is steady, but the best case would be for that to start growing back.
It’s recovering, and it is absolutely steady, but it’s not going up to the right as fast as they would want to.
And there are two drivers. One is what we control, which is, hey, are we building and shipping great products and keeping the platform stable and secure for our customers to grow? Yes.
And number two is, are customers’ businesses growing, and how is the macro environment doing?
That’s where there’s a little bit of sluggishness given the market overhead and things like that.
So, I have no questions, if we keep doing our jobs, which is to ship great products and take care of our customers, the expansion element of the NDR will eventually start going up to the right, but we are still not there yet.
So, that’s two, three is cloud-based is steady, nothing really to report there, we are working on a bunch of cool AI-related features, which will be shipping between now and end of the year.
Cloud-based is doing what we said it was going to do, and AI, we reported 200% year over year, quarter-over-quarter growth, from the time we acquired the Paperspace business.
So, that was a big part of our revenue base in the previous quarter.
So, I would say product led growth at the top of the funnel, it’s looking very robust, AI growth is very good, and the middle two pillars are doing what we said they were going to do, and I think NDR will start picking back up when the economy starts going back a little bit.
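For readers newer to the metric, net dollar retention nets expansion against contraction and churn across the existing, month-12-plus customer cohort, exactly as Paddy lays out. A minimal sketch, using made-up dollar figures rather than DigitalOcean’s actual components:

```python
# Net dollar retention (NDR) decomposed into expansion, contraction, and
# churn. All dollar figures below are invented for illustration only.

def ndr(starting_arr: float, expansion: float,
        contraction: float, churn: float) -> float:
    """NDR for a cohort over a period: ending ARR divided by starting ARR."""
    ending_arr = starting_arr + expansion - contraction - churn
    return ending_arr / starting_arr

# Hypothetical cohort: $100M starting ARR, $9M expansion,
# $4M contraction, $8M fully churned.
print(f"NDR: {ndr(100.0, 9.0, 4.0, 8.0):.0%}")  # NDR: 97%
```

At 97%, expansion is not yet outpacing contraction plus churn, which is exactly the dynamic described: churn and contraction are stable to improving, and expansion is the lever that would push NDR back above 100%.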
OG:
Okay, that’s fantastic color on the pre-NDR cohort, that was excellent, thank you.
All right, so just to put a pin in this one. DigitalOcean reported its largest organic incremental ARR step up in nearly two years, it was 158% higher than incremental ARR in Q2 2023, and it’s also back-to-back quarters of ARR growth acceleration.
Was this incremental ARR mostly Paperspace?
PS:
So, it was mostly AI/ML, and I want to be careful not to conflate the two.
So, yes, we acquired Paperspace business, but right now there’s not a whole lot of Paperspace only business, so we have completely absorbed that, and I would say a large portion of that net ARR increase is our AI/ML business.
So, yes, and that’s why it’s also a little bit lumpy because of the infrastructure being put in place.
So, Q2 was a good part of that lumpiness, and Q3 is going to be more of the steady part of that lumpiness, and we expect another slight spike in Q4 when the next generation of gear gets here.
OG:
Okay. So, the GPU Droplet, virtualized, fractional on-demand GPUs is a big deal, and some market participants see it, some don’t get it.
I have told investors why I think it’s a big deal, but I like it better when you tell them.
So, tell investors why this offering has such meaning and potential for the future.
PS:
Yeah, no, I appreciate that.
Yeah, I think it’s a pretty big deal for a number of reasons. One is, right now, if you are not a super high-end funded startup, or a large enterprise, like a Goldman, or Walmart, or something like that, it is really impossible.
It is really hard to get access to GPUs, and no one wants to return your call if you don’t want to take a minimum of a 32 x 8 GPU cluster, or something in that order of magnitude.
So, what this does is a couple of different things.
One is for those use cases which don’t require foundational model training, but just need some fine-tuning with your own data, whether it is for RAG [retrieval augmented generation], or even in some cases very bare minimum tweaks or fine-tuning to the weights and biases of an existing open source model, you can do that pretty nicely with fairly minimal GPU horsepower.
Because you are not trying to load up petabytes of data, but instead you are injecting gigabytes worth of data, of your custom data, and tweaking the parameters of the base model a little bit to make sure that the answers are tailor-made for your use case.
So, for those types of scenarios, you don’t need mega clusters of GPUs, what you need is access to enough GPUs on an ongoing on-demand basis, because you’re not running long-running model optimizers, you are running some fine-tuning algorithms which run for a few hours, and then you want to put it to test, you want to see how the answers are, and then you want to do the same thing in a day or two.
So, having this on-demand, fractional access to GPUs, democratizes the use case and brings it to people who are not doing foundational model scaling, but instead are doing fine-tuning and extension of existing open source foundational models.
We believe the vast majority of our customers are in that camp, so it’s a pretty big deal for companies that typically are on platforms like DO [DigitalOcean].
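To make the GPU arithmetic behind this concrete, here is a back-of-envelope sketch of why fine-tuning an open source model needs far less GPU memory than training every parameter. The model size, bytes-per-parameter constants, and the 0.1% adapter fraction are illustrative assumptions, typical of LoRA-style parameter-efficient fine-tuning, not DigitalOcean or GPU Droplet specifications:

```python
# Rough GPU memory (in GB) to fine-tune an 8B-parameter open source model.
# Illustrative assumptions: fp16 weights (2 bytes), fp16 gradients (2 bytes),
# and Adam optimizer state (~8 bytes) per trainable parameter.

def full_finetune_gb(params_b: float) -> float:
    """Full fine-tuning: every parameter needs weights + grads + optimizer state."""
    return params_b * (2 + 2 + 8)

def lora_finetune_gb(params_b: float, trainable_frac: float = 0.001) -> float:
    """LoRA-style fine-tuning: frozen fp16 base weights, plus gradients and
    optimizer state only for a tiny fraction of adapter parameters."""
    frozen = params_b * 2
    adapters = params_b * trainable_frac * (2 + 2 + 8)
    return frozen + adapters

print(f"full fine-tune of 8B model: ~{full_finetune_gb(8):.0f} GB")  # ~96 GB
print(f"LoRA fine-tune of 8B model: ~{lora_finetune_gb(8):.1f} GB")  # ~16.1 GB
```

Under these assumptions, full fine-tuning wants a multi-GPU cluster, while the LoRA-style run fits on a single GPU for a few hours at a time, which is precisely the fractional, on-demand pattern described above. (Activation memory, which depends on batch size and sequence length, is ignored here for simplicity.)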
OG:
I completely agree, I think the future is that there’s going to be, I don’t know, 3, 5, 4, 2… I don’t know, major LLMs, and there’ll be like the operating systems of generative AI.
And then the rest of the money that’s going to be made, and it’s going to be a lot, is going to be on exactly this.
It’s exactly this fractional on-demand GPU usage, to tweak the model such as they work for your company.
You’re taking the base model, that LLM was already built, and I see that as just a massive, massive opportunity.
And that’s why I was telling investors this was such a big deal. This is the market. This is where-
PS:
Absolutely. Yes.
OG:
… DigitalOcean can grow 10x, not in a quarter or a year, I’m just saying broadly speaking.
This is where we are headed in the future.
There’s only going to be so many ChatGPTs, right? I don’t think that’s where the competition is.
There’s Windows, there’s Linux, and there’s macOS. Okay, so there’s three operating systems, I think LLMs will be seen as operating systems.
Where the money is made is when you customize them.
PS:
Exactly, yeah. Using the applications where you take the operating system and you actually drive business value, I completely agree.
And I think the other top bet there, Ophir, is this concept of open source models versus closed source models, right?
Closed source models, you can live with some APIs and you can tweak them to a certain degree, but I feel those are also going to be few and far between because LLMs are not going to cater to every use case.
If you want a demand forecasting model, or you want a model that does a very specific thing, you need models that are not closed source; you need a fork of an open source model that you can do something with.
And even today, like Llama 3.1, the 405-billion-parameter model is actually pretty good for a fraction of the cost, compared to OpenAI and Anthropic.
OG:
I think that’s true; really, really good RAG can bring these closed LLMs, as you’re calling them, up to speed for a use case.
But another huge boon for DigitalOcean, which is where it’s really going to grow, is that, you know my view…
Okay, forget the LLMs and forget the training of them, just they exist. Okay.
It’s going to increase the amount of digital product created, I think, 10x, 100x, 1000x.
I think estimates for the amount of, just broadly, the amount of digital product available, are orders of magnitude too low, maybe two orders of magnitude too low.
And that all has to sit in the cloud.
This is what platform is, that has to be hosted, that has to be managed, that has to be observed, that has to be secured.
That’s what’s coming.
So, it’s both training open LLMs for specific use cases, like your GPU Droplets, and then just broadly speaking, infrastructure, which is what DigitalOcean provides.
And so, I’m so excited about the possibilities for DigitalOcean.
PS:
And if I may ask, why do you think the number of digital products is going to explode?
Do you feel like AI will enable applications to go where SaaS was unable to go, because now we can automate multi-stage workflows, and things that only humans were able to do so far?
OG:
Absolutely.
I think it comes from two places, one is let’s take senior developers, a true computer scientist, they can now do 10x what they used to be able to do.
Even if it’s as simple as, like our top developer, he’s obviously writing code, but at times, he just doesn’t want to write certain methods, and he’ll just describe it in Notepad, and put it in an LLM.
The LLM will write most of it for him… Okay, you need to be a senior computer scientist to really turn that into good GA code, but it does it. It also helps with any kind of tricky coding, when the debugging isn’t so trivial, like when you can’t just step through the debugger.
And then on the other side, for DigitalOcean, it’s these citizen developers.
Citizen developers have been empowered in a way that I don’t even think they know yet.
They can create full stack products now, basically, with really, really good prompting.
So, it’s going to be a profound, profound increase in the amount of digital product, which, by the way is why I’m very bullish on the cybersecurity market because the thing about all of this product, it has to sit on infrastructure, and it has to be secured.
These are two fundamental things, right? So, that’s my view.
All right. One of the original moats for DigitalOcean, Paddy, was the DigitalOcean ecosystem, specifically, DigitalOcean had, it was the community.
DigitalOcean had these demo projects and tutorials, and in the tutorials, DigitalOcean cloud was being used for these run-throughs, and that’s how it was demonstrated.
It was this cycle of like, we’ll help you, we’ll teach you how to do what you need to do, and we’ll teach you how to do it on DigitalOcean, people become attached to DigitalOcean, it’s the ease of use and then these customers grow.
And that’s in the good days, when SMBs weren’t getting choked off by the Fed, and your NDR was actually positive.
So, I think you answered this, but I just want to ask very directly, do you view that same kind of ecosystem for the GPU Droplets too?
Because this is harder now, this is not just, I don’t know, CSS tutorials, this is a little bit harder.
Is that how DigitalOcean is going to maintain this moat?
Say, okay, here’s how you use GPUs, here’s how you can train the models.
PS:
Absolutely. Absolutely, that’s why we got Wade Wegner as the chief ecosystem officer, to do exactly that, and he was a driving force for the Windows Azure ecosystem, and then subsequently Heroku.
So, the guy knows how to build developer advocacy teams, build a community, community of influencers, do all the tutorials that you were talking about, build sandboxes, canonical applications…
I think this is a huge opportunity for us, and I completely agree with you, it is a super complex and super expensive path, so what a great opportunity for us to run that playbook again.
Of course, it’s easier said than done, so we have to actually do it, but that’s exactly why I got Wade to step into that role because I completely agree with you, it is a phenomenal opportunity for us.
OG:
Okay. Yeah, it’s such a big deal if people can come to DigitalOcean, first, as the resource, like how do I train Llama? How do I specify this to my use case?
Then, how do I do it…
That’s a little bit of code, but how do I do this with the infrastructure?
Can you give me managed infrastructure?
And if that’s what DigitalOcean does, honestly, that’s bigger than many of the other opportunities with LLMs, I think.
This is it, it’s empowering… LLMs have empowered people, now who’s going to show people that they are empowered?
That’s where there’s $100 billion, because that’s where there are hundreds of millions of users already.
All right. So, my second to last question.
So, I know that NDR is at 97%, it was the same last quarter, up from 96%, and I think Matt [CFO] actually said if you went out to decimal points, it’s actually doing a little better.
I see this as a tech SMB problem, not as a DigitalOcean problem.
But I would like you to paint the whole picture for me.
I’m going to ask you this every quarter for a while, how are tech SMBs looking to you, Paddy, what are you hearing?
PS:
Yeah, I think I’m hearing everything you are hearing.
So, there’s absolutely no question that there’s a macro overhang, the liquidity crunch, and companies are a little bit in a bind.
The good news is we seem to be on the other side of contraction and churn, our contraction and churn are actually either steady or improving slightly.
It is expansion, and expansion is a direct consequence of how well their business is doing.
So, we are at least not any longer in the phase where these companies are doing hygiene cleanups, and aggressively managing their instances, and stuff like that.
They went through that phase, they’ve all done that, and all the cloud platforms have great monitoring tools now.
So, you eliminate or reduce wastage to the extent possible on the cloud.
But now, what we are seeing is, as I was previously explaining, there are two things, one we control, one we don’t control.
What we don’t control is how well our customers’ businesses are doing, and that’s a factor of the overall demand of the market, the liquidity in the market, and so forth.
The second thing which we absolutely control is are we taking care of our existing customers, and are we earning the right to get bigger share of their wallets?
And I think that journey, that’s one of the first things I did, was I said, we have to pick up product velocity, and you can already see that in how many features we are shipping quarter over quarter.
And the second thing is we need to take that message to our customers and say, hey, look, we have shipped all this, you have daily back-ups, but have you thought about doing this extra thing to protect you even further?
So, we are not really world-class at that yet, we write blogs, we are publishing videos explaining our products, but being able to do this in a delightful and consistent and persistent way, both outside the product and inside the product, is something that we are getting better at on a week-over-week basis.
So, I have no question that once we do that over the next couple of quarters, we’ll put ourselves in the best position to earn the right to get more share of the wallet as our customers’ businesses recover and they have the ability to spend more.
OG:
Okay. So, I think I heard from you that, quarter over quarter, the SMBs in your purview, these tech SMBs, are not hurting worse and might be doing a little bit better, broadly speaking, in their view of their own companies.
Is that a fair characterization?
PS:
So, that’s how they look to us, and I don’t know if I want to be bold enough to say that that is their business outlook.
I think if you ask them, they would say that this is the crunchiest they have felt in terms of the lack of liquidity and some of the challenges that they’re facing in the market, this is probably the worst hit they have been over the last three or four years.
But the reason why I was saying that for us it looks a little bit better is because I think we are controlling what we control better than we have over the last few quarters, in terms of ensuring that we are managing churn, we are helping our customers by shipping more products, and getting them to get better ROI on our platform, I feel we are doing a better job of controlling what we control.
OG:
Okay, fair enough. And do you get the sense, Paddy that it’s something like they just want to see the Fed cut, or it’s really, it’s simply what they’re experiencing as a business, and a Fed cut is a derivative of what their business might feel in the future?
I’m just curious how closely they’re watching that.
PS:
Yeah. Yeah, it’s a good question.
I think the Fed cut is, by itself, yes, it will release some liquidity, and enable them to access funding and things like that, but I think the bigger thing is what a Fed cut would enable them to do, or their customers to do.
Many of them are retailers or streaming video providers of some sort and things like that, so for them, the consumer spending has to start going up.
OG:
Yeah. We’re on a razor’s edge. Okay. So, my last question is always the same, is there anything I didn’t ask that I should have asked, or anything that you wanted to say but you didn’t because I didn’t give you a chance to say it?
PS:
Yeah. So, I think the one thing I will say, Ophir, I think I pointed this out a little bit in the call as well.
So, I just want to make sure that investors understand that our AI strategy is the strategy for our customers.
And it is easy to get carried away by the hyperscalers AI strategy, which actually has two parts.
One is, of course, they want to be the home where these operating-system LLMs run, and they want to control that closed walled garden of LLMs, but there’s also another aspect of hyperscalers: they all need to invest very aggressively to improve their own respective products.
Like Google has a search franchise and a YouTube ad revenue franchise to protect.
Meta has its recommendations engine and social engagement metrics to improve.
They all have use cases that justify investing at the order of magnitude they’re investing.
On the other extreme, you have the CoreWeaves and Lambda Labs of the world, that are going after the operating systems, or the LLMs, to be their GPU provider.
Our strategy is different.
Our strategy, as you were saying, is to skate to where the puck is going, not necessarily chase the infrastructure hungry foundation model developers, but rather go after the platform and the application providers that need different types of infrastructure, and I feel very strongly that our strategy is tailor-made for our customers.
And this strategy is software and inferencing centric, which translates to, do we need CapEx and hardware?
Yes, of course we do, anyone that is pursuing an aggressive AI strategy needs that, but the order of magnitude is different, and the center of gravity for us is more software oriented rather than hardware heavy.
So, I want to make sure that investors understand that we want and have the right takeaways for a company like ours.
OG:
Okay. Yeah. So, this is essentially a clarification, allaying the fears of anyone who doesn’t understand what DigitalOcean is doing: you don’t have to start increasing CapEx 5x to try to catch up to the hyperscalers, and that’s precisely not what you’re trying to do anyway.
That’s essentially what you’re saying?
PS:
Exactly. Will it increase? Yes, it will increase, modestly, as we have done all through the year, but we’ll be super responsible and make sure that we’re investing enough to enable us to pursue that software strategy.
OG:
Yeah. What I like to tell investors is that OpenAI is not going to be a customer of DigitalOcean, but [the 200 million] customers of OpenAI will be DigitalOcean customers.
PS:
That’s a great way to explain it. Yeah, absolutely. Yeah, exactly.
Awesome, great, thank you so much for great questions as always, and do let us know if you have any other questions.
OG:
Will do. Okay. Talk to you next quarter, Paddy, thank you.
PS:
Okay, take care. Bye.
OG:
Bye.
Conclusion
The precious few thematic top picks, research dossiers, and alerts are available for a limited time at a 60% discount.
Thanks for reading, friends.
The author is long DOCN at the time of this writing.
Please read the legal disclaimers below and, as always, remember, we are not making a recommendation or soliciting a sale or purchase of any security, ever. We are not licensed to do so, and we wouldn’t do it even if we were. We’re sharing our opinions, and providing you the knowledge to make your own decisions.
Legal
The information contained on this site is provided for general informational purposes, as a convenience to the readers. The materials are not a substitute for obtaining professional advice from a qualified person, firm or corporation. Consult the appropriate professional advisor for more complete and current information. Capital Market Laboratories (“The Company”) does not engage in rendering any legal or professional services by placing these general informational materials on this website.
The Company specifically disclaims any liability, whether based in contract, tort, strict liability or otherwise, for any direct, indirect, incidental, consequential, or special damages arising out of or in any way connected with access to or use of the site, even if we have been advised of the possibility of such damages, including liability in connection with mistakes or omissions in, or delays in transmission of, information to or from the user, interruptions in telecommunications connections to the site or viruses.
The Company makes no representations or warranties about the accuracy or completeness of the information contained on this website. Any links provided to other server sites are offered as a matter of convenience and in no way are meant to imply that The Company endorses, sponsors, promotes or is affiliated with the owners of or participants in those sites, or endorse any information contained on those sites, unless expressly stated.