Discussion: Potential Changes to Compensation Program for Cycle 5

Table of contents

  1. Projection and Reflection
  2. Contributor Tracks and Skill Domains
  3. Contributor Performance Evaluation

Objectives of these proposed changes

Our approach is informed by the results of the survey.

  • Enable decisions to be made at the level where context is highest
  • Create process and valuation parity across all contributors
  • Enable more / better evaluation of workstreams, priorities, and outcomes
  • Minimize popularity contests and the impact of visibility on evaluation
  • Facilitate more effective allocation of resources, both for the DAO (workstream priorities) and contributors (time/attention)
  • Diversify our evaluation tools from one to multiple modalities
  • Avoid concentration of power without accountability
  • Create more modularity in assessing priorities, commitments, progress, performance/contribution.

Out of scope

Variables left to be specified in subsequent proposals

  • Monthly burn rate
  • Pay rate for different market value/skill ratings
  • Actual compensation amounts per individual
  • Relative ratings of skill domains


As we see it, there are four big categories of challenges the DAO currently faces. None of these are new, but in recent months several have become significantly more acute, e.g. as our DAO has expanded and especially amid recent unfavorable market conditions.

A) Resource Constraints (budget, runway, etc)
B) DAO Objectives and Priorities
C) Contributor Compensation and Engagement Options
D) Performance Evaluation and Accountability

This proposal focuses on mechanisms and processes for how to allocate resources and to whom, addressing categories B, C, and D. It leaves the question of how many resources to allocate – category A – as a separate decision.

High level changes

  • Extend cycles to 3 months
  • Keep the tracks, but use the same evaluation primitives and methods for both
  • Introduce a projection and reflection process for bottom-up priority-setting and performance evaluation
  • Change value levels to Market Value Levels
  • Add granularity (10 levels) and more concrete definitions of each level, based on skill domains

Concept 1: Projection and Reflection

The first part of our proposal addresses the problem of how to set DAO objectives and priorities (category B) and forms part of a process for facilitating performance evaluation and creating accountability (category D).

The approach is split into four phases, with two looking forward as “projection” and two looking backwards as “reflection”. Both projection and reflection include a DAO phase and a contributor phase.

The goal here is to facilitate bottom-up determinations of the following:

  • Prioritization of DAO objectives and workstreams (“importance”)
  • Allocation of contributor commitment to workstreams (“commitment”)
  • Evaluation of workstream success (“progress”)
  • Evaluation of value created by contributors in relation to each workstream (“performance”)

Each of these measures is valuable directly and also as a primitive that can be composed into additional mechanisms and processes.

For illustrations of the concept, see the following resources:


The projection phases occur at the beginning of the cycle. They can also be updated by individual contributors at any subsequent point.

DAO Phase (A)

The DAO uses a collective signalling mechanism to weight high-level workstreams.

Prototype case: We use the 4 circles as starting workstreams.

  • Each contributor is given a number of “Importance Points” in proportion to their Warcamp DAO shares.
  • Contributors allocate Importance Points to workstreams, as a prediction or projection of how valuable that workstream will be during the cycle, i.e. as reflected in Phase (C).
  • Relative Importance scores among workstreams serve as a signal for where the DAO should allocate funding and/or where contributors should allocate their attention/energy.
  • Data collection could be as simple as using a google form.
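As one illustration, the steps above can be sketched as a simple share-weighted tally. All contributor names, share counts, and workstream names below are hypothetical, not part of the proposal:

```python
# Hypothetical sketch of Phase (A): Importance Points in proportion to
# Warcamp DAO shares, allocated across workstreams.

def importance_scores(shares, allocations, points_per_share=1):
    """shares: {contributor: DAO shares}
    allocations: {contributor: {workstream: fraction of their points}}
    Returns each workstream's share of total Importance Points."""
    totals = {}
    for person, share in shares.items():
        budget = share * points_per_share  # points proportional to shares
        for workstream, fraction in allocations.get(person, {}).items():
            totals[workstream] = totals.get(workstream, 0) + budget * fraction
    grand_total = sum(totals.values()) or 1
    return {w: pts / grand_total for w, pts in totals.items()}

shares = {"alice": 100, "bob": 50}
allocations = {
    "alice": {"DAO ops": 0.6, "Product": 0.4},
    "bob": {"Product": 1.0},
}
scores = importance_scores(shares, allocations)
# alice contributes 60 ops / 40 product points, bob 50 product points,
# so relative Importance is ops 60/150 and product 90/150
```

The relative scores are the signal: they say nothing about absolute funding amounts, only where attention should flow.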

Contributor Phase (B)

Each contributor sets their individual commitment to workstreams (now rated by importance in the previous Phase).

  • Each contributor is given a number of “Commitment points”
    • Commitment trackers use their Commitment % as points
    • Retroactive trackers can allocate up to 100 points
  • Contributors allocate their Commitment points across workstreams, signalling their intended areas of focus for the upcoming period


The reflection phases occur at the end of each month and at the end of the cycle. Within a cycle, the monthly checkpoints serve as preliminary measures and may also determine base compensation for retroactive trackers as well as bonus compensation for all contributors.

DAO Phase (C)

Contributors reflect on the value created by each workstream

  • Each contributor is given a certain number of “Progress points”
  • Contributors allocate Progress points across workstreams according to individual assessments of progress made within and value created by each workstream. In other words, closing the loop on Phase (A) by evaluating the “actual” importance of each workstream.
    • This could be done with a Warcamp Coordinape circle, with 4 recipients (the workstreams)

Contributor Phase (D)

Each workstream outputs a list of contributors who added value to the workstream and a Performance score for their contributions to the workstream. The method for deriving these scores is left to each workstream.

One example is to use a workstream-specific Coordinape circle. Another would be to allow individual contributors to self-evaluate.

The resulting Performance scores are then weighted by the Progress scores from Phase (C) to normalize individual contributor Performance scores across all workstreams.
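The weighting described above can be sketched as follows; the contributor names, point values, and the specific normalization (each contributor receives their share of the workstream's Progress weight) are illustrative assumptions, not prescribed by the proposal:

```python
# Hypothetical sketch: weight Phase (D) Performance scores by
# Phase (C) Progress scores to get cross-workstream contributor scores.

def normalized_performance(progress, performance):
    """progress: {workstream: Progress points from Phase (C)}
    performance: {workstream: {contributor: raw Performance score}}
    Returns {contributor: weighted score across all workstreams}."""
    total_progress = sum(progress.values()) or 1
    combined = {}
    for workstream, raw_scores in performance.items():
        weight = progress.get(workstream, 0) / total_progress
        raw_total = sum(raw_scores.values()) or 1
        for person, score in raw_scores.items():
            # each contributor gets their share of the workstream's weight
            combined[person] = combined.get(person, 0) + weight * score / raw_total
    return combined

progress = {"DAO ops": 30, "Product": 70}
performance = {
    "DAO ops": {"alice": 80, "bob": 20},
    "Product": {"bob": 50, "carol": 50},
}
scores = normalized_performance(progress, performance)
# the weighted scores sum to 1.0 across all contributors
```

Under this reading, strong performance in a low-Progress workstream counts for less than the same raw score in a high-Progress workstream, which is exactly the normalization the proposal is after.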

DAO factors: Importance & Progress
Contributor factors: Commitment & Performance

Concept 2: Contributor Tracks and Skill Domains

This approach maintains both Commitment and Retroactive tracks, but modifies them in several ways:

2.1 Market Value Levels (MVLs)

Value Levels are replaced with Market Value Levels

  • There are 10 Market Value Levels
  • Each MVL now corresponds to a particular relative “market value” rather than “predicted value created”. This means that contributors with skill domains that are valued/priced more highly by the market will tend to be at higher MVLs.

2.2 Skill Domains

Each MVL has a more concrete definition based on skill domains.

The specific skill domains and ratings are to be established separately from this proposal (see some discussion here), but the idea is to create common grounding for more articulate peer feedback and evaluation. Each skill domain will likely have a 1-10 scale.

Contributors with multiple skill domains may determine their MVL as a balance of ranges from each skill domain.

Illustrative skill domain example:

  • Smart Contract Programming - may fall within MVLs 5-10
  • Web Programming - MVLs 4-9
  • Graphic & Other Design - MVLs 4-9
  • Technical Documentation - MVLs 4-8
  • Copywriting - MVLs 1-7
  • Administrative Function - MVLs 1-7
  • Operational Organization Design & Implementation - MVLs 4-10
  • Project Management - MVLs 3-10
  • Meeting Facilitation - MVLs 1-6
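The proposal leaves the combination rule for multi-domain contributors open. One hypothetical reading of “a balance of ranges” is to average the low and high ends of each relevant domain's MVL range; the ranges below mirror the illustrative table above, and the averaging rule itself is an assumption:

```python
# Hypothetical combination of skill-domain MVL ranges into one range.
# Domain ranges copied from the illustrative example in this proposal.

DOMAIN_RANGES = {
    "Web Programming": (4, 9),
    "Technical Documentation": (4, 8),
    "Copywriting": (1, 7),
}

def mvl_range(domains):
    """Average (floor) the low and high ends of each domain's range."""
    lows, highs = zip(*(DOMAIN_RANGES[d] for d in domains))
    return (sum(lows) // len(lows), sum(highs) // len(highs))

combined = mvl_range(["Web Programming", "Copywriting"])
# averages (4, 9) and (1, 7) into a single candidate range
```

Other rules (e.g. taking the range of the contributor's primary domain, or the intersection of ranges) would be equally consistent with the text.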

2.3 MVLs for all

All contributors (on both tracks) have an MVL evaluation. This puts retroactive compensation rates on the same scale as the commitment track.

Concept 3: Contributor Performance Evaluation

Multiple modalities of inputs feed into the process for how contributor MVLs are determined

  1. Intersubjective evaluation scores, i.e. from Reflection
  2. Qualitative feedback, i.e. from fellow contributors
  3. Self-advocacy
  4. Peer facilitation

3.1 Intersubjective Evaluation Scores

These are the normalized Performance scores from the Reflection contributor phase (1.2.D).

3.2 Qualitative Peer Feedback

This proposal does not specify a mechanism here, but it does establish it as a first-class input into performance evaluation.

Suggested mechanisms should likely be developed in conjunction with the skill domains from 2.2.

3.3 Self-advocacy

  • Contributors should advocate for themselves at a particular MVL
  • Show your work
  • Peers should give feedback (including as part of 3.2)

3.4 Peer facilitation

We present two options here: a) peer-3-peer facilitation, or b) an evaluation committee

Option A – Feedback Facilitator Triangles

Each contributor is placed in a Triangle with two other contributors. To start, this can be done randomly. Whenever the number of contributors is not divisible by three, the leftover contributors can join existing Triangles to form foursomes, ensuring nobody is left out.
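The random grouping with foursome fallback can be sketched as below; the contributor names are placeholders, and the proposal leaves the exact grouping method open:

```python
# Hypothetical sketch of random Triangle assignment (Option A).
import random

def make_triangles(contributors, seed=None):
    """Randomly partition contributors into groups of 3. When the count
    isn't divisible by 3, the 1 or 2 leftover contributors join
    existing Triangles, forming foursomes so nobody is left out."""
    pool = list(contributors)
    random.Random(seed).shuffle(pool)
    cutoff = len(pool) - len(pool) % 3
    groups = [pool[i:i + 3] for i in range(0, cutoff, 3)]
    for i, leftover in enumerate(pool[cutoff:]):
        groups[i].append(leftover)  # foursome fallback
    return groups

groups = make_triangles(["a", "b", "c", "d", "e", "f", "g"], seed=1)
# 7 contributors -> one Triangle of 3 and one foursome of 4
```

Random assignment avoids self-selected cliques; a later refinement could instead match by skill domain overlap.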

At the end of each month, each contributor facilitates a conversation with their facilitee. They work through all inputs – including those described in 3.1-3.3 – and work out preliminary skill levels and overall performance.

  • for retro trackers, this includes coming up with an appropriate payment amount to request

At the end of each cycle, the same conversation happens, but this time the outcome is that the facilitee comes up with a revised set of skill levels for their skill domains and an appropriate value level.

Those value levels are placed into an omnibus proposal for ratification after a 3-day comment and dispute period.

Option B – Evaluation Committee

Under this option, a committee of 5 will be elected by the DAO, with 1 representative from each circle and 1 from Warcamp overall.

With the start of each cycle, the committee will assume the responsibility for ensuring that each contributor’s performance is reviewed and that a value level is suggested for them.

The committee will not have the authority to set value levels for any contributor. Rather, they will have the responsibility for facilitating the review process and ensuring a value level is suggested for each contributor.

That responsibility will certainly come with some influence, so…

  • The committee will rotate every 2 cycles, on a staggered basis
  • Each committee member will be required to stake DAO shares, which can be slashed by the DAO should they not carry out their responsibility appropriately.

Opinionated Suggestion: Prioritize compensating contributors who dedicate 80-100% of their time to DAOhaus, even if that means doing some role shuffling.

In the projection phase, this is essentially a quadratic system, maybe even close to a conviction system. There are tools available that would enable us to hold quadratic votes. I think the hardest issue is surfacing what should be placed on a ballot.

Can you say more about why you’d recommend quadratic voting here?

My initial sense is that I don’t think QV would be appropriate in this scenario. In my view, our DAO shares are the best measure we have for something like this. While they are certainly not perfect, they are a reflection of:

  1. accumulation of value created for DAOhaus, as measured by peers (think about this as DAOhaus-specific reputation)
  2. commitment to and alignment with DAOhaus, since shares are backed by HAUS tokens contributors have staked into the DAO
  3. context, knowledge, and experience with DAOhaus, which is an important input into best understanding what DAOhaus needs

In a purely financial case (e.g. voting with liquid tokens, or money), QV helps dampen the effect of plutocracy while preserving some ability to sense stronger opinions. But in our case, I think it would be a mistake to dampen the effects of the above three properties.

I don’t know if I was recommending so much as seeing a similarity, in that there are systems that are meant to surface multiple choices amongst multiple participants.
Quadratic voting is meant as a system in which, if people care about an issue enough, they are willing to pay more for it (in this case, vote costs). I do understand what you are saying about vote weighting, and I don’t disagree with you about the reasons for it. Ideally, if votes could be distributed in a weighted fashion and plugged into a quadratic vote, that would be interesting. Can’t say I know how to do that atm.
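For reference, the mechanics under discussion can be sketched: give each voter a credit budget proportional to their DAO shares, then apply the standard quadratic rule where casting v votes costs v**2 credits. The share counts and the 1-credit-per-share rate are purely illustrative assumptions:

```python
# Purely illustrative sketch of share-weighted credits plugged into a
# quadratic vote. Not part of any proposal here.
import math

def effective_votes(shares, credits_per_share=1):
    """Credits scale linearly with shares, but votes scale with
    sqrt(credits), so share weight is preserved yet dampened."""
    credits = shares * credits_per_share
    return math.isqrt(credits)  # whole votes affordable at cost v**2

votes = {s: effective_votes(s) for s in (25, 100, 400)}
# e.g. 100 shares buy 10 votes; 4x the shares buy only 2x the votes
```

This makes the trade-off above concrete: the square root is exactly the dampening of the three share-backed properties that the earlier reply argues against.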