Saturday, October 22, 2011

Adding Adequacy to the Growth Variable in DCF

It’s always a challenge to make quality estimates of the inputs to a valuation model like the one used in the Startup Valuator. If your company plans to raise venture capital, you should be ready to gather the most adequate information about the market. That’s a pressing problem for emerging market segments with low entry barriers and high rates of business failure, and e-commerce is a good example.

There is no better way to prove that your company is worth the value you expect than the method of comparables. But multiples are not always available, or they aggregate information that is hardly relevant. That’s why I recommend using a DCF approach as a backup. Use the Startup Valuator with carefully estimated parameters and you’ll get a lot of useful information about the value of your company.

This post is about estimating growth during the post-forecast period. This parameter is extremely important because it is used in the calculation of terminal value. For most internet companies, terminal value has the biggest share of the total value of the firm, due to high expectations of future growth and a low need for capital-intensive production facilities.

My claim is that we can estimate this parameter with some confidence. We will need the following instruments: Microsoft Excel (or similar software) and NLREG. The latter is an excellent program for nonlinear regression analysis.

Let me say again that our goal is to calculate g (future market growth) and to be as realistic as we can. That’s quite challenging, but we’ll give it a try.

We take a Russian internet company that operates in the segment of contextual advertising aggregators. It is useful to know what its revenue looks like. The data is presented by month on the graph below:


What drives revenue to change in this way? There is no sophisticated answer. Clearly, the more people are surfing the web, the higher the probability of their arriving at the company’s website. Let’s see if this assumption is adequate.


Both variables are growing. We can gather more information by taking the first difference of each series. Look at the example in this Google spreadsheet. My results are the following:
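As a quick sketch of the differencing step (with made-up numbers; the real series are in the spreadsheet linked above):

```python
import numpy as np

# Hypothetical monthly series (revenue and internet users);
# the real figures are in the spreadsheet referenced above.
revenue = np.array([100.0, 120.0, 115.0, 140.0, 160.0, 155.0, 190.0])
users = np.array([50.0, 54.0, 53.0, 59.0, 64.0, 63.0, 70.0])

# First differences approximate the discrete "first derivative"
# of each series: how much it changed month over month.
d_revenue = np.diff(revenue)
d_users = np.diff(users)

print(d_revenue)  # month-over-month revenue change
print(d_users)    # month-over-month change in users
```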


The finding is not trivial. We have almost confirmed that our assumption is down to earth: there is some sort of correlation between these two factors, and it is not linear. Next we’ll use some econometrics. To figure out what sort of dependence there is, we build a scatter plot. The scatter plot lets us conduct the interpolation, which is the crucial point of the analysis in our case.


Hm… That’s not as good as I expected. A human being is still much cleverer than any algorithm, so we’ll help the computer a little to get a better estimate.


Now it’s a whole new ball game! I can’t help thinking that this data looks very much like a sine wave with a linear trend:


So we download the demo of NLREG and enter the following commands:

Variables RIU,Revenue;
Parameters p0,p1,Amplitude,Period=2.3,Phase=0;
Function Y = p0 + p1*X + Amplitude*sin(2*pi*(X-Phase)/Period);
Plot Residual,Grid;
Data;
[data array]

I won’t publish the full report here. The most important thing for us is that the p-value for every parameter except p0 is less than 0.001. That’s very good!
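If you prefer open tooling, the same functional form can be fitted with SciPy’s curve_fit. This is a sketch on synthetic data (the real (RIU, Revenue) pairs would replace it); the starting value Period=2.3 mirrors the NLREG script:

```python
import numpy as np
from scipy.optimize import curve_fit

# Same functional form as the NLREG model:
# Y = p0 + p1*X + Amplitude*sin(2*pi*(X - Phase)/Period)
def model(x, p0, p1, amplitude, period, phase):
    return p0 + p1 * x + amplitude * np.sin(2 * np.pi * (x - phase) / period)

# Synthetic monthly data standing in for the real series
rng = np.random.default_rng(0)
x = np.arange(24, dtype=float)
y = model(x, 10.0, 2.0, 5.0, 2.3, 0.5) + rng.normal(0.0, 0.3, x.size)

# Nonlinear fits need sensible starting values, especially for the period
p_opt, _ = curve_fit(model, x, y, p0=[0.0, 1.0, 1.0, 2.3, 0.0])
print(p_opt)
```

As in NLREG, a poor initial guess for the period can send the optimizer to a local minimum, so it pays to eyeball the plot first.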



Sunday, June 12, 2011

Monte Carlo #2 | The Practical Implementation of The Model

Let's continue the topic. Today I am going to show an example of the practical implementation of the model described in the previous post. The calculations are made in Maple. If you are not familiar with this framework, I highly recommend reading J.S. Dagpunar's book 'Simulation and Monte Carlo'. You can also download samples here.

Mathematicians say that simulation is rather time-consuming in Maple. That is not our case, because simulation in finance is not as complicated as in physics or neurobiology. We are going to generate only 10,000 scenarios for 48 periods, which is not much. Before getting to the algorithm, I'd like to answer two good questions I was asked after the publication of the previous post:

> The definition of conversion among practitioners is a bit different from the one you operate with in your model. What is the conversion rate in your opinion?

Thanks a lot for your comment. I had not noticed that conversion is traditionally understood as the ratio of new customers to unique visitors. I prefer the conversion of unique visitors into customers, in order to show the share of all customers in the total number of unique visitors.

> You predict unique visitors and their conversion into customers. Why don't you predict customers directly instead of all that?

This question is quite controversial. If we deal with a model like the one described in the previous post, the answer may seem evident: predicting the future flow of customers is the more direct approach. But there is an idea I want to share with you. In my research I found a correlation between the number of unique visitors and the conversion rate, yet the linear regression model was not significant. So I decided it would be more correct to analyze the functional dependence between the flow of unique visitors and various contingency factors. The stochastic nature of conversion is my assumption, supported by my observations. I see the volatility of conversion as a sort of noise that doesn't allow us to make more accurate predictions. That is why I find it rational to exclude that noise at the very beginning, before building more sophisticated prediction models.

Let's go to the implementation of the model, described in the previous post.

First, we generate scenarios for the flow of future visitors. We define the parameters from equation (3) in the previous post: S[0] is the initial point, r is the growth rate, sigma is the level of volatility, T is the number of periods (in months), and M is the number of scenarios. N is the number of 'dots', which makes the graph more detailed; we don't need extra detail, so we set it equal to T.
restart;
with(Statistics):
Vis[0]:= 5000: rvis:= 0.15: sigmavis:= 0.05: N:= 48: M:= 10000: T:= 48: h := T/N: 
randomize():
U := RandomVariable(Normal(0, 1)):
W := exp((rvis-(1/2)*sigmavis^2)*h+sigmavis*sqrt(h)*U):
Visitors := [seq(Sample(W, N), i = 1 .. M)]:
for i to M do Visitors[i][1] := Vis[0] end do:
Visitors := map(CumulativeProduct, Visitors):
Xrng := [seq(h*i, i = 1 .. N)]:
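For comparison, the same GBM scenario generation can be sketched in Python with NumPy, using the parameter values from the Maple snippet above:

```python
import numpy as np

# GBM scenario generation mirroring the Maple snippet: each month the
# series is multiplied by exp((r - sigma^2/2)*h + sigma*sqrt(h)*Z), Z ~ N(0,1).
vis0, r, sigma = 5000.0, 0.15, 0.05   # same parameters as the Maple code
N, M, h = 48, 10_000, 1.0             # periods, scenarios, step (months)

rng = np.random.default_rng(42)
z = rng.standard_normal((M, N))
steps = np.exp((r - 0.5 * sigma**2) * h + sigma * np.sqrt(h) * z)
steps[:, 0] = 1.0                      # the first period is the known start
visitors = vis0 * np.cumprod(steps, axis=1)

print(visitors.shape)  # (10000, 48)
```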

Then we generate scenarios for the conversion rate. The parameters are the same kind, but with different values. There is a very important thing I want you to notice: the conversion rate is taken as the mean over all its scenarios in each particular period.
Conv[0]:= 0.07: rconv:= 0.0001: sigmaconv:= 0.05: N:= 48: M:= 1000: T:= 48: h := T/N:
randomize():
U := RandomVariable(Normal(0, 1)):
W := exp((rconv-(1/2)*sigmaconv^2)*h+sigmaconv*sqrt(h)*U):
SPr := [seq(Sample(W, N), i = 1 .. M)]:
for i to M do SPr[i][1] := Conv[0] end do:
SPr := map(CumulativeProduct, SPr):
Xrng := [seq(h*i, i = 1 .. N)]:
SD := 50:
for k to N do
Conv[k]:=Mean([seq(SPr[i][k],i=1..M)])
end do:

After that we try to predict the dynamics of the changing portfolio structure.
Cast[0]:= 0.15: rind:= 0.0045: sigmaind:= 0.000001: N:= 48: M:= 1000: T:= 48: h := T/N:
randomize():
U := RandomVariable(Normal(0, 1)):
W := exp((rind-(1/2)*sigmaind^2)*h+sigmaind*sqrt(h)*U):
SPr := [seq(Sample(W, N), i = 1 .. M)]:
for i to M do SPr[i][1] := Cast[0] end do:
SPr := map(CumulativeProduct, SPr):
Xrng := [seq(h*i, i = 1 .. N)]:
for k to N do
AvPers[k]:=Mean([seq(SPr[i][k],i=1..M)]):
AvSelf[k]:=1-AvPers[k]:
end do:
To predict sales revenue we'll need ARPU (average revenue per user) in each product category:
arpuSelf:=500:
arpuPers:=1500:
Cost of revenue is considered to be 85% of an average revenue per customer.
cogsSelf:=0.85*arpuSelf:
cogsPers:=arpuPers*0.85:
I didn't have enough time to put a simple exponential function in place of the randomizer,
so I set sigma to a minimal level and the number of scenarios to 1.
FCost[0]:= 3000: rfcost:= 0.02: sigmafcost:= 0.00001: N:= 48: M:= 1: T:= 48: h := T/N:

randomize():
U := RandomVariable(Normal(0, 1)):
W := exp((rfcost-(1/2)*sigmafcost^2)*h+sigmafcost*sqrt(h)*U):
SPr := [seq(Sample(W, N), i = 1 .. M)]:
for i to M do SPr[i][1] := FCost[0] end do:
SPr := map(CumulativeProduct, SPr):
Xrng := [seq(h*i, i = 1 .. N)]:
for k to N do
OpC[k]:=Mean([seq(SPr[i][k],i=1..M)])
end do:
The same goes for depreciation. That's just an example; I recommend excluding this factor from the model, because it is not so important for an internet company at the early stage.
Dep[0]:= 250: rdep:= 0.005: sigmadep:= 0.00001: N:= 48: M:= 1000: T:= 48: h := T/N:
randomize():
U := RandomVariable(Normal(0, 1)):
W := exp((rdep-(1/2)*sigmadep^2)*h+sigmadep*sqrt(h)*U):
SPr := [seq(Sample(W, N), i = 1 .. M)]:
for i to M do SPr[i][1] := Dep[0] end do:
SPr := map(CumulativeProduct, SPr):
Xrng := [seq(h*i, i = 1 .. N)]:
for k to N do
Dep[k]:=Mean([seq(SPr[i][k],i=1..M)])
end do:
Next, the amount of money we reinvest. Here is a weak spot of the model: we cannot reinvest with negative cash flow. That's why it would be more correct to add a condition similar to the tax calculation further below. I'll fix this bug pretty soon! :)
Cap[0]:= 30000: rcap:= 0.01: sigmacap:= 0.0001: N:= 48: M:= 1000: T:= 48: h := T/N:
randomize():
U := RandomVariable(Normal(0, 1)):
W := exp((rcap-(1/2)*sigmacap^2)*h+sigmacap*sqrt(h)*U):
SPr := [seq(Sample(W, N), i = 1 .. M)]:
for i to M do SPr[i][1] := Cap[0] end do:
SPr := map(CumulativeProduct, SPr):
Xrng := [seq(h*i, i = 1 .. N)]:
for k to N do
Cap[k]:=Mean([seq(SPr[i][k],i=1..M)])
end do:
with(SumTools):
r:=0.42/12:
k:=1:
To show the distribution of results, a special function called 'Valuator' was built.
Valuator:=proc(i)local TotalCust, CustPers, CustSelf, SalesPers, SalesSelf, GrossMargin, OperatingProfit, OperatingProfitAT, tax, FCF, PV, PVn, PrVal;
global k,N,r, arpuPers, arpuSelf,cogsSelf,cogsPers,Visitors,Conv,AvPers,AvSelf,OpC,Cap,Dep;
for k to N do
TotalCust[k]:=abs(Visitors[i][k]*Conv[k]):
CustPers[k]:=abs(TotalCust[k]*AvPers[k]):
CustSelf[k]:=abs(TotalCust[k]*AvSelf[k]):
SalesPers[k]:=CustPers[k]*arpuPers:
SalesSelf[k]:=CustSelf[k]*arpuSelf:
GrossMargin[k]:=SalesPers[k]+SalesSelf[k]-cogsSelf*CustSelf[k]-cogsPers*CustPers[k]:
OperatingProfit[k]:=GrossMargin[k]-OpC[k]:
if OperatingProfit[k] > 0 then tax[k]:= OperatingProfit[k]*0.2: else tax[k]:=0 end if:
OperatingProfitAT[k]:=OperatingProfit[k]-tax[k]:

FCF[k]:= OperatingProfitAT[k]+Dep[k]-Cap[k]:
PV[k]:=FCF[k]/(1+r)^k:
end do:
PV[49]:=(PV[48]/(r-(0.05)/12))/(1+r)^48:
PVn:=[seq(PV[k],k=1..N)]:
DefiniteSummation(PV[t],t=1..N)+PV[49];
end proc:
Finally, we can build a histogram, illustrating the result.
with(Statistics):
q:=1:
M:=10000:
for q to M do
Value[q]:= Valuator(q):
end do:
ValueList:=[seq(Value[q],q=1..M)]:
with(Statistics):
A := ValueList:
B:=Mean(A);
Q := Histogram(A, averageshifted=3, color=grey, title="The value distribution according to 10 000 scenarios"):
plots[display](Q);
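For readers without Maple, the whole pipeline can be sketched in Python with NumPy. This is only an illustration under the parameter values from the listings above; the terminal-value line deliberately reproduces the Maple formula as written:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 48, 10_000          # months, scenarios

def gbm_paths(s0, growth, sigma, n, m):
    """Multiplicative GBM scenarios; the first column is fixed at s0."""
    steps = np.exp((growth - 0.5 * sigma**2)
                   + sigma * rng.standard_normal((m, n)))
    steps[:, 0] = 1.0
    return s0 * np.cumprod(steps, axis=1)

# Parameter values taken from the Maple listings above
visitors = gbm_paths(5000.0, 0.15, 0.05, N, M)                 # all scenarios
conv = gbm_paths(0.07, 0.0001, 0.05, N, 1000).mean(axis=0)     # mean path
share_pers = gbm_paths(0.15, 0.0045, 1e-6, N, 1000).mean(axis=0)
op_cost = gbm_paths(3000.0, 0.02, 1e-5, N, 1).mean(axis=0)
dep = gbm_paths(250.0, 0.005, 1e-5, N, 1000).mean(axis=0)
capex = gbm_paths(30000.0, 0.01, 1e-4, N, 1000).mean(axis=0)

arpu_self, arpu_pers = 500.0, 1500.0
cogs_rate = 0.85            # cost of revenue as a share of ARPU
r = 0.42 / 12               # monthly discount rate
g = 0.05 / 12               # monthly terminal growth

customers = visitors * conv
cust_pers = customers * share_pers
cust_self = customers * (1.0 - share_pers)
# GrossMargin = Sales - COGS = (1 - cogs_rate) * Sales in each category
gross = (cust_pers * arpu_pers + cust_self * arpu_self) * (1.0 - cogs_rate)
op_profit = gross - op_cost
tax = np.where(op_profit > 0, 0.2 * op_profit, 0.0)   # tax only when profitable
fcf = op_profit - tax + dep - capex

t = np.arange(1, N + 1)
pv = fcf / (1.0 + r) ** t
terminal = (pv[:, -1] / (r - g)) / (1.0 + r) ** N      # as in the Maple code
values = pv.sum(axis=1) + terminal
print(values.mean())
```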

That's it!

Friday, June 3, 2011

Valuation of startups with the Monte Carlo simulation model

The valuation of internet companies at the seed stage is quite a controversial issue. Companies of this category are considered quite risky from the investor's point of view, but the risk implies not only a probability of loss but also a chance of big capital gains.

If you want to measure the value of a company of this sort with the discounted cash flow approach, I’d recommend the following sources:

My goal in this post is to describe my valuation model with Monte Carlo simulation instruments. In my opinion, it is an optimal choice for those who want to measure the value of an internet company while taking contingency factors into consideration. Moreover, this method can be easily modified for a real options approach, which also allows considering the specific risks of the company.

Let me go straight to the point. The model was built for the valuation of companies in e-commerce. For this reason, my approach is based on the assumption that the company’s customers in each time period are calculated from the number of unique visitors and the conversion of these people into customers:


It’s important to note that if we deal with startups there is a very high volatility in the flow of visitors.

The daily visitor reach for Mixpanel and Zemanta provides good examples:



The conversion rate of visitors into customers is volatile as well. Traditionally it varies from 5% to 7%. Let’s assume it is in that range, though of course there are higher and lower cases. It fully depends on the business model of the company and the way sales are organized, but that is not the subject of this post.

Almost undetermined behavior of future visitors can be described with the differential equation of geometric Brownian motion:


Although this SDE has a well-known closed-form solution, the practical way to get information about the distribution of possible outcomes is to generate a big number of scenarios, and that is the basic idea of the Monte Carlo simulation approach. The number of unique visitors in period t can be described in the following way:


μ is the expected growth and σ is the volatility. The parameters are estimated on the historical data.

Z is the random variable with a normal distribution.

Almost the same holds for the conversion rate:


Thanks to the conversion rate and the number of unique visitors, we get the number of customers. Let’s call this variable ‘TotalCustomers’.

The customers of the company are the sum of customers in each product category i, with the total number of product categories N.

Let’s consider that there are two categories of services provided by the company, then i=2.


The calculation of %Cast allows us to work with the portfolio structure, which can be constant, change exponentially, or even change stochastically. I recommend making your choice according to the historical data of the company you want to value.

After that we can start calculating sales revenue:


arpu_i is the average revenue per user in product category i. GrossMargin in period t can be calculated according to the following equation:


Where COGS is the cost of goods sold.

After that we calculate OperatingProfit which is the GrossMargin minus FixedCosts:


We can also calculate income after taxes, which depends on the value of OperatingProfit:


I decided to express depreciation and investments exponentially, but any other functional form is also possible.


Free cash flow is calculated in the following way:


The value of the firm can be calculated according to the formula:


r is the cost of acquired capital, identified according to the ROI of the venture capitalist who provides the money. g is the future growth rate, which varies from 5% to 15% depending on the industry.
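The value formula image is missing; a reconstruction consistent with the surrounding definitions (FCF, r, g, with T forecast periods) would be the standard DCF with a Gordon-growth terminal value:

```latex
% DCF value with a Gordon-growth terminal value (reconstruction)
V = \sum_{t=1}^{T} \frac{FCF_t}{(1+r)^t}
  + \frac{FCF_T \,(1+g)}{(r-g)\,(1+r)^T}
```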

The idea is that we generate a big number of scenarios for the flow of website visitors. Conversion can be taken as the mean of its distribution.

That’s the theoretical model I wanted to share with you. The practical implementation of the model will follow in the next post.

Monday, May 2, 2011

Using comparables for the valuation of internet startups at emerging markets

PART #2. Country Risks

As mentioned in the previous post, we are dealing with a hypothetical Russian company being compared to a set of mature rivals in the USA. Obviously, the markets are different: there are higher political and economic risks in Russia, and we should take this fact into account. Let’s give it a try.

There are several different approaches. To make a cross-border adjustment we can:
  • Calculate Sovereign Bonds Yield;
  • Find a correction coefficient through the market multiples ratio;
  • Use smart multiples;
  • Do something more adequate.
You can easily find descriptions of the first three approaches in the articles already mentioned in the previous post. I just want to make several comments on them.

Sovereign Bonds Yield
Adjusting market multiples according to a non-market variable seems quite strange. Nevertheless, nobody can forbid us from trying it on our own. Let’s take Russian Eurobonds and U.S. Treasuries with a five-year maturity.





The correction multiple is 0.675. As we can see, the yield of the Russian Eurobond is higher. We can assume that the spread between euro-5 and tn-5 was provoked by the higher level of risk in Russia. On the other hand, I don’t see any logic in treating this method as a fundamental one.

Market multiples ratio
Nothing bad can be said about this approach. To do a cross-border adjustment with the Market Multiples Ratio method, you have to gather information about multiples from both markets; in our case, those are Russia and the USA. To get a result, you conduct the calculation according to the following formula:


Calculating the market multiples ratio can be justified by simple logic, which can’t be said of the Sovereign Bonds Yield Ratio. But I don’t have the time to collect everything needed for the calculation, which is why this method seems unreasonably resource-intensive to me.

Smart multiples
The farther into the forest, the thicker the trees. In a few words, smart multiples aggregate information about both market and specific risks. I can describe them in more detail on request. To find out more about them, I recommend reading [Bhojraj, Sanjeev, Charles M.C. Lee, and David T. Ng. 2003. International Valuation Using Smart Multiples. Working Paper (March 14)].


These were the methods I managed to find information about; I am sure there are many others. And now I’d like to share with you my own approach, based on calculating the ratio between market indices.

This method seems quite useful to me for several reasons. Among them:
  • It doesn’t require gathering a lot of information.
  • It has a good theoretical underpinning supported by classical statistical analysis.
  • It is based on market variables rather than loosely related non-financial data.
The idea is that we can make cross-border adjustments according to stock market indices. We will calculate the ratio between the S&P 500 and the RTS Classic. Though there are plans to build an international financial center in Moscow, we should face reality: the liquidity of the Russian stock exchange is dramatically lower compared with that in the US. Moreover, there is no good alternative to NASDAQ in Russia, though it is well known everywhere that Russia has extraordinary technological potential. It's a pity, but let's get straight to the point.

where t ranges over the months from September 2009 to April 2011.

Let’s make a cross-border estimation with this method. The information needed is presented in the table below. All calculations were made in Stata.


Nothing is easier than getting the estimate of the correction multiple we need:



Let’s get to the statistical justification of this method. It seems reasonable to say that if there is a significant linear regression with RTSI and GSPC as variables, our approach is statistically sound.
The scatter plot for these two indices looks as follows:



The correlation is evident. Now it’s time to look at the results of the Stata analysis.


We can observe that the probability of obtaining such a test statistic for the model by chance is almost 0, and the same holds for RTSI as a factor. As a result, we can conclude that our model is significant. Bingo!
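The same significance check can be sketched outside Stata. Here is a Python version on synthetic index data (the figures are made up for illustration; the real series would cover September 2009 to April 2011):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic monthly index levels standing in for GSPC and RTSI
gspc = np.linspace(1000.0, 1500.0, 20) + rng.normal(0.0, 10.0, 20)
rtsi = 1.3 * gspc + rng.normal(0.0, 30.0, 20)

# Simple linear regression of RTSI on GSPC, as in the Stata check:
# a near-zero p-value means the linear relation is significant.
res = stats.linregress(gspc, rtsi)
print(res.slope, res.pvalue)

# The correction multiple can then be taken as the mean ratio between
# the indices (which index goes in the numerator depends on the
# direction of the adjustment you need).
ratio = float(np.mean(rtsi / gspc))
print(ratio)
```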

In conclusion, I include histograms for RTSI and GSPC, because they show how the variables are distributed.


Summarising, I can say that cross-border adjustments are always subjective. Different approaches offer different combinations of complexity and accuracy, and it is up to you to decide which method to choose. What methods do you use? Would you like to try mine?

Using comparables for the valuation of internet startups at emerging markets

PART #1. Introduction to the problem.

In the next series of posts I will talk about the well-known method of comparables, a good way to get a quick estimation of the value of a company or its equity.

Let’s make a set of assumptions which are very important in our case:

• The firm is a typical e-business;
• It is not global and operates on the local market;
• It is located in Russia;
• Comparable analogues are found abroad, e.g. in the USA.

I hope that will be enough. Note that despite taking a Russian company as the example, the methodology is essentially the same for other developing markets. It should not be difficult for you to make some adjustments to the model, but if there is any problem you can describe it in the comments and challenge us.

Let me tell you what puzzled me when applying comparables to the valuation of the company described above.

First: how to estimate country risks. If we compare the Russian company with one in the US, it is evident that the value of American companies is mostly higher than that of Russian ones. This may be caused by the state of the capital market, which is more developed in America: there are more deals, the market is more liquid, and more people are willing to invest.

Second: how to compare a public company with a private one. An IPO can increase the value of equity up to 10 times. It's fine when the object of valuation also goes public, but the situation may be different. What about the seed, first, second, or other financing stages before the bridge round?

Finally: I decided to use an industry-specific multiple rather than traditional financial ones, because of the lack of information for the latter: private companies do not publish much financial information. Moreover, there is a strong need for some sort of adjustment that could make the multiples approach more realistic. The number of unique visitors was used as the variable.

If you are interested in good articles on this topic I’d like to recommend you the following:

• Goedhart, M., Koller, T. & Wessels, D. 2005. The Right Role for Multiples in Valuation. The McKinsey Quarterly, March. New York: McKinsey & Company Inc.: 1-3.
• Damodaran, A. Damodaran on Valuation. There is also a short introduction to the problem on his home page.
• Fernandez, P. Valuation Using Multiples: How Do Analysts Reach Their Conclusions?
• Lie, E. & Lie, H. 2002. Multiples Used to Estimate Corporate Value. Financial Analysts Journal, Mar/Apr.
• Liu, J., Nissim, D. & Thomas, J. 2000. International Equity Valuation Using Multiples. Working Paper, Columbia University.
• Ivashkovskaya, I. & Kuznetsov, I. 2007. An Empirical Study of Country Risk Adjustments to Market Multiples in Emerging Markets: The Case of Russia.
• Bhojraj, S., Lee, C.M.C. & Ng, D.T. 2003. International Valuation Using Smart Multiples. Working Paper (March 14).
• Bhojraj, S. & Lee, C.M.C. 2002. Who Is My Peer? A Valuation-Based Approach to the Selection of Comparable Firms. Journal of Accounting Research, Vol. 40, No. 2 (May), pp. 407-439.

You can try to find them on Google Scholar. Maybe there is something you want to recommend as well. You're welcome!

In a couple of days I’ll publish a short insight on how to overcome the problem of taking risk factors into consideration. Until the next time!

Saturday, April 30, 2011

Hello World!

Hi there. This is the first post in this blog.

Yours sincerely,
Maxim