For the past two months, we’ve been maniacally focused on one metric: the week-over-week growth of our Kickstarter pre-registrations (Cf. How focus on growth changed our business). Moving forward, we will expand our focus to cover a few more metrics:
Once our software becomes available, we will replace our primary registration metric with a composite adoption metric, defined as the sum of the number of paying customers c and the total number of users u, divided by two.
α = (c + u) / 2
We constructed this composite metric in order to take into account both the number of customers and the total number of users at a time when each customer will have a small number of users, usually just one (the original app maker who subscribed to STOIC). Eventually, we will track c and u individually.
Most importantly, we will track the weekly growth of this adoption level (Cf. Startup = Growth).
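As a minimal sketch, with hypothetical counts plugged in, the composite adoption level and its week-over-week growth could be computed as:

```python
def adoption_level(customers: int, users: int) -> float:
    """Composite adoption metric: alpha = (c + u) / 2."""
    return (customers + users) / 2

def weekly_growth(previous: float, current: float) -> float:
    """Week-over-week growth rate of the adoption level."""
    return (current - previous) / previous

# Hypothetical example: 10 customers / 12 users last week,
# 13 customers / 16 users this week.
last_week = adoption_level(10, 12)    # 11.0
this_week = adoption_level(13, 16)    # 14.5
growth = weekly_growth(last_week, this_week)  # ≈ 0.318, i.e. ~31.8% weekly
```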
Average Revenue Per Unit
This metric tracks the average revenue per customer, on a monthly basis.
Customer Acquisition Cost
This metric tracks the cost of acquiring new customers from a sales and marketing standpoint. A simple way to measure it is to divide the total amount spent on lead generation and deal-closing activities over a long enough period (ideally a year) by the number of new customers signed over the same period (possibly with a small time offset).
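The division above is straightforward; a sketch with hypothetical spend and customer counts:

```python
def customer_acquisition_cost(sales_marketing_spend: float, new_customers: int) -> float:
    """CAC: total sales & marketing spend over a period, divided by
    the number of new customers signed over the same period."""
    return sales_marketing_spend / new_customers

# Hypothetical example: $120,000 spent over a year, 400 new customers.
cac = customer_acquisition_cost(120_000, 400)  # 300.0 → $300 per customer
```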
Customer Lifetime Value
In order to measure the return on investment from our marketing campaigns, we will track the value of a customer over its lifetime. The absolute value of this metric is not important. What matters is how it evolves over time, and how it contributes to much more critical metrics, such as the Time to Return on Investment.
Time to Return on Investment
This metric tracks the time it takes to get a positive return on marketing investments. If the Customer Acquisition Cost is $X and the Average Revenue Per Unit is $Y/month, the average Time to Return on Investment equals X divided by Y, in months. Obviously, a business for which the Customer Lifetime (Customer Lifetime Value divided by Average Revenue Per Unit) is shorter than the Time to Return on Investment is not profitable and is doomed to fail eventually.
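Putting the two ratios side by side makes the viability test explicit; a sketch with hypothetical figures:

```python
def time_to_roi(cac: float, arpu_per_month: float) -> float:
    """Months needed to recoup the acquisition cost: X / Y."""
    return cac / arpu_per_month

def customer_lifetime(ltv: float, arpu_per_month: float) -> float:
    """Average customer lifetime in months: LTV / ARPU."""
    return ltv / arpu_per_month

def is_viable(cac: float, arpu_per_month: float, ltv: float) -> bool:
    """The business recoups its marketing investment only if the
    customer lifetime exceeds the time to return on investment."""
    return customer_lifetime(ltv, arpu_per_month) > time_to_roi(cac, arpu_per_month)

# Hypothetical example: CAC of $300, ARPU of $50/month, LTV of $900.
# Time to ROI = 6 months, lifetime = 18 months → viable.
```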
Churn Rate
This metric measures the ratio of customers who are not renewing their subscriptions.
Revenue per Employee
While we will do everything we can to maximize our total revenue, the revenue per employee ratio will be even more important to us. And once we become profitable, we will replace this metric with a profit per employee ratio, which is a pretty good measure of the value created by every member of our team. Until we’re profitable, we will closely monitor our funding runway.
In order for our customer development process to work effectively, we need as much participation from users as possible, especially at the validation stage. For this purpose, we will track the number of individual answers we’re getting from surveys. This metric will be directly influenced by the number of surveys and polls we run, but actual participation is what matters.
According to our customer development process, the development of most major product features should be funded through dedicated Kickstarter campaigns. In such a context, we will track both the number of backers and the number of individual pledges we’re getting for our Kickstarter projects. These metrics will be directly influenced by the number of Kickstarter campaigns we organize, but actual participation (backers and pledges) is what matters.
This metric tracks the ratio of crowdfunding for the financing of all R&D activities.
This metric tracks the ratio of new customers who came through referrals.
This metric tracks the ratio of new customers who came through partners.
The velocity of our product development process as defined by Pivotal Tracker will be tracked on a weekly basis. While it can be used for forecasting purposes, we will use it mostly as a way to learn from the past and improve our project management skills.
We will monitor the completion level for our next product release, also using Pivotal Tracker.
The number of lines of code covered by automated tests divided by the total number of lines of code is one of the primary metrics we will use to estimate the theoretical quality of our code. Over time, we might add similar metrics for the documentation coverage of our code.
This metric tracks the aggregated utilization rate of individual product features across the entire user base. It is measured by instrumenting the product with as many usage monitoring and metering sensors as possible. Such sensors capture anonymized and obfuscated data points that are both qualitative and quantitative. This metric’s evolution over time indicates whether the product is getting better at addressing real user needs, or is subject to uncontrolled feature creep.
In order to assess the quality of our product, documentation, and community-led support process, we will measure the average number of support tickets created by users. Obviously, we will want this ratio to be as low as possible, and to decrease over time.
Moving forward, more and more metrics will be added, many as the aggregation of sub-metrics. While we can’t promise that we will share all of them publicly, we will do our best to share the most meaningful of them, including our weekly growth rate. Stay tuned for cool dashboards!