Building the Network Steward Productivity Ratio
An alternative analysis methodology for public networks
Most frameworks for analyzing blockchain projects rely on high-level metrics like market cap, circulating supply, and trading volume, alongside easily manipulated metrics like GitHub commits, repositories, and Twitter followers. While these provide some insight into project and network activity, they fail to examine the beating heart of a project: the productivity of the organization that raised the funds.
At Aion we continuously collaborate with and monitor the activity of our fellow projects building public networks. Many projects share similar fundamental objectives, such as:
- Developer acquisition
- Network security
- Network activity
With each project having raised varying levels of funding to achieve similar fundamental goals, how effectively have they deployed the capital they raised? We want to understand our own productivity, but also that of our peers.
Enter the Network Steward’s Productivity Ratio (NSP).
Each project has its own unique governance design (if you can find out publicly what it is), so the term ‘steward’ refers to the organizational structure that collected funding (whether through a private sale, ICO, equity, etc.) and deploys that capital toward reaching the objectives of its network.
The NSP ratio is broken down into two major categories: inputs (capital) and outputs (technical, accessibility, financial).
Note to reader:
- This is v1.0 of the NSP ratio and does not cover all potential outputs; the set of available information is very limited.
- All categories are scored on a 0–5 scale; simplicity was the priority to start.
- The weighting of categories is open to feedback and is influenced by the common objectives set out above.
- Project-specific data can be inaccurate or out of date.
- As an employee of the Aion Foundation, I carry many inherent biases.
Funds Raised ($USD)
This is relatively straightforward as a metric: how much capital did the organization raise? It becomes complex, however, when you consider which assets they raised, at what valuation, and how they managed that funding. Given the widespread opacity of project financials, the one metric we can state with relative confidence is the USD amount raised at the time of the sale or round. We hope the ecosystem becomes increasingly transparent so this input can be improved.
It is also important to note that we cannot accurately measure the capital held or deployed by projects in the form of their own asset. For some projects, this represents the majority of their funding, but we cannot with any consistency measure this amount. As transparency increases, this data can be included.
Technical
1. Network Status: Is there a live public network in production? (5 = yes, 0 = no)
2. Active Clients: How many unique client implementations exist on the network? (# of implementations)
3. Open Consensus: Is the network’s consensus mechanism open for public participation? Can an individual or organization participate in validating the blockchain without approval from some authority? Can the network be unilaterally controlled by a single entity (e.g., validator approval, coordinator validators, authority approval)? (5 = yes, 0 = no)
4. Fundamental Tools: Do the basic tools needed for developers to effectively build an application on the network exist? (/5)
Example tools: testnets, faucets, CLIs, IDEs, debuggers, APIs, node hosting, Docker images, user-interaction tools (e.g., MetaMask), SDKs
Accessibility
1. Documentation: Based on a quick Google search, how easy is it to find, understand, learn from, build with, and troubleshoot the development of an application on the network? (/5)
2. Languages: The number of languages in which developer, marketing, and key information is available (# of languages)
3. Asset Accessibility: How easy is it to access the project’s asset? (volume, fiat pairings, # of exchanges)
Financial & Organization
1. Market Cap: Current market cap (circulating supply × price per coin)
2. Transparency: Is there transparent, routine reporting or documentation about the project’s financial statements, organizational structure, shareholders, etc.?
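Taken together, the scored outputs above form a simple scorecard. The sketch below is a minimal illustration of how such a scorecard might be recorded; the category names follow this article, but the data layout, the `capped` helper, and every number are assumptions for a hypothetical project.

```python
def capped(count, cap=5):
    """Map a raw count (e.g., # of client implementations or languages)
    onto the article's 0-5 scale by capping it at 5."""
    return min(count, cap)

# Hypothetical example project -- all numbers below are made up.
outputs = {
    # Technical
    "network_status": 5,          # live public network: yes
    "active_clients": capped(2),  # two client implementations
    "open_consensus": 5,          # permissionless validation: yes
    "fundamental_tools": 3,       # some tooling gaps
    # Accessibility
    "documentation": 4,
    "languages": capped(3),
    "asset_accessibility": 2,
    # Financial & Organization
    "market_cap": 3,
    "transparency": 1,
}

total = sum(outputs.values())
print(total)  # unweighted total out of 45 (9 categories x 5)
```

Keeping every category on the same 0–5 scale is what makes the weighting step that follows straightforward.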
Time since Raise
How many months has the project been in operation? The longer a project has been funded, the higher the expectation of outputs.
As a first attempt, the weighting of each output is influenced by its relevance to the common objective of network adoption that projects share. Outputs that are critical to or directly correlate with adoption are given a higher weighting, as are outputs we can measure with higher confidence.
Network Steward’s Productivity Ratio Formula
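The original formula graphic is not reproduced here, so the sketch below is one plausible formalization rather than the exact equation: it assumes the NSP ratio is a weighted sum of the 0–5 output scores divided by funds raised (in $M), deflated by months since raise so that longer-funded projects face higher expectations. The function name, the weights, and the sample numbers are all assumptions.

```python
def nsp_ratio(scores, weights, funds_raised_musd, months_since_raise):
    """Weighted output score per million USD raised, deflated by time.

    One plausible reading of the NSP formula, not the article's
    exact equation: more output per dollar-month scores higher.
    """
    weighted_outputs = sum(weights[k] * scores[k] for k in scores)
    return weighted_outputs / (funds_raised_musd * months_since_raise)

# Hypothetical project: scores, weights, raise size, and age are made up.
scores = {"network_status": 5, "fundamental_tools": 4,
          "documentation": 4, "transparency": 2}
weights = {"network_status": 3, "fundamental_tools": 3,
           "documentation": 2, "transparency": 1}

print(round(nsp_ratio(scores, weights, 5.8, 24), 3))  # → 0.266
```

Under this reading, a team that ships more with a smaller raise scores higher, which matches the pattern discussed below of lightly funded teams outperforming heavily funded ones.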
Putting it into practice
To test the ratio, we took a sample set of public blockchain projects that are well known and that have at least the bare minimum of public information needed to evaluate them with reasonable confidence.
To evaluate the technical outputs of each project, John, our in-house developer guinea pig, navigated through every website, readme, and repo of the projects below to put himself in the shoes of an interested developer motivated to build.
The ratios are all over the place, clearly demonstrating the wide range of productivity between teams. The first obvious insight from the ratio is the outliers that have created a significant amount of output with limited initial funding. However, this also highlights one of the caveats of the ratio: it does not take into consideration the token/coin treasuries held by teams and used as operational capital. Projects like Stellar, IOTA, and NEO raised very little in their ICO or seed rounds, resulting in a large productivity variance; instead, they have relied heavily on their treasuries or initial distributions as operational capital. Nonetheless, with a small amount of initial liquid capital (early token markets provide little liquidity), they have significantly outperformed.
The second immediate insight is the projects that are barely visible. These projects have either raised an astronomical amount of funding and produced marginal output (as defined in the methodology) since their raise, or have simply not done much, independent of how much they raised. For example, Tezos has been operating for over 18 months since raising a massive amount of capital, and it has significantly underperformed its less-funded peers on basic outputs like developer documentation and tooling, key outputs for gaining any developer mindshare.
Taking out the outliers, we can see the productivity variance among projects with similar capital and time since raise.
Lisk is the dominant project within this group. Having raised only $5.8M (USD value at the time of raise), it has produced a developer ecosystem with comprehensive documentation and tooling, built an accessible and robust market around its coin, and begun to increase organizational transparency.
Aion is a strong performer in this comparable group. Despite a lower market cap and asset volume, they have outperformed many of their peers on technical execution, development fundamentals, and transparency.
What’s under the covers?
Raising big sums of capital, especially from brand-name VCs, brings a level of clout and prestige to many of the notable projects. At first glance, it seems like they are rushing toward mainstream adoption and crushing engineering milestones, calling for software engineers far and wide to apply their creativity on their platforms. But when you examine a sample set of the fundamental building blocks required to build a technology and a community focused on developing the applications and payment rails of the future, many have yet to lay the groundwork.
A few projects that raised considerably smaller sums of capital have been very strategic in their deployment, focusing on building fundamentals, shipping code, and driving underlying value into their coin markets in order to leverage that capital over the long term.
Lever up with 3rd parties
Projects that have focused on deploying their capital into a loyal and productive ecosystem of third-party companies have been able to significantly lever up their productivity. Ethereum is a prime example of this model, and such ecosystem outputs aren't effectively captured in this first version of the ratio.
Many projects do not have a comprehensive network dashboard that provides real-time and historical data on transactions, contracts, nodes, hash rate, and so forth. Having consistent on-chain data points across networks would enable further insight into network adoption, dapps, contracts, and users, and would provide additional output categories by which we can evaluate how effectively projects have deployed their capital to gain network adoption. As tools like State of the Dapps and Amberdata continue to build out, they will provide additional consistent data points for reference.
Project Financial Transparency
Funds raised and their USD equivalent at the time of raise provide only a small glimpse into how projects are funded. Each project employs a unique treasury management policy that determines how it manages its raised crypto, holdings of its own coins, liquidation events, and lock-ups. We are just starting to see more emphasis and grassroots activism from users and investors demanding additional transparency into projects’ finances. Precedents are beginning to form around reporting guidelines and frequency, and companies like Messari are leading the way in making this information available. This data will greatly improve our view of how much capital projects actually have at their disposal.
This is a first pass at a potential framework for comparing the productivity of teams that have raised capital to build public blockchain networks. The objective of this article and the ratio is to highlight alternative metrics of success and open the door to a deeper discussion of how we evaluate projects and teams. If we want to continue to grow the Web3 ecosystem, as a community we must ask for increased transparency into the projects building these networks, not only to understand their progress and performance, but to build collective trust and credibility in this new technological paradigm.
Do you have any suggestions for additional outputs or inputs to include in the ratio? Send them along!