What we built and why we built it.
aGoodProject was a platform built to help people make better decisions about which “good” projects, charities and companies to support in the world. This is a retrospective on what it was, why we did it, and what it became.
Problem statement: It’s hard to know who is making a difference.
We started aGoodProject to answer a fairly simple question that most people have faced at some point in their lives: What organization can I support that is really making an impact? This was the basis for aGoodProject: we wanted to help donors understand whom to support, and to increase the flow of funds to the best projects possible.
Our team came together from disparate backgrounds but rallied around solving this problem. Based on our experience doing international aid work in a number of developing countries, we found it difficult to evaluate a charity from the outside and determine whether it was worthy of financial support or collaboration. Many foundations and donors face the same problem: there are thousands of small charities, NGOs and social enterprises operating across sectors, and very few agreed-upon metrics for “high-impact” success. This is especially true in disaster relief scenarios, when many donors first discover that such evaluations do not exist.
Through personal experience, we developed a theory of how to identify good organizations using social proof. Both qualitatively and quantitatively, the highest-quality organizations tend to attract, and be made up of, the highest-quality people.
Hypothesis I: Good organizations are made up of good people.
We identified a group of high-impact organizations. This initial trusted “seed” group consisted of organizations whose people exhibited several key traits:
- Emphasis on programmatic impact evaluation
- Objective self-criticism
- Clarity in vision and mission
- Specialization around one region or sector (e.g., three villages, or water sanitation)
Through observation of nonprofit workers, we began developing a second hypothesis:
Hypothesis II: Good people only want to work with good people.
Through our research we found that with social proof (i.e., contacts through trusted friends) we could positively identify the individuals working within other charities and gauge the quality of their projects. We observed that most organizations with objectively high-impact projects tend to attract good people (solid critical-thinking skills, personal alignment with the organization’s mission, strong follow-through) to staff them. In turn, these good people tend to partner with people at charities of similar quality. Since individuals who work within charities and nonprofits tend to be highly idealistic, they also tend to make strong, principled decisions about whom they work with.
Based on these findings on peer-evaluation amongst individuals, we established a third hypothesis, and one that became the foundation of aGoodProject.
Hypothesis III: Good organizations work with other good organizations.
Social proof looked to be one of the final and best indicators of charity quality, and we saw the potential to scale this type of evaluation across the internet. Our best external example of a service that built an algorithm on social-proof indicators was Google.
A Google PageRank for Nonprofits
Encouraged by promising initial results, we looked at ways to scale these findings into a service that would help donors find the best organizations to support. We wanted to make it easy for people to quickly understand who was doing the best work.
An early image from our splash page:
Following the example of platforms that have built quality indexes on machine intelligence and social proof (e.g., Google and Netflix), we generated a registry of over 800,000 nonprofit organizations in the US. We then built a scraping algorithm to find, categorize and index the link relationships on each of these sites.
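As a rough sketch of what the indexing step looks like, here is a minimal parser that keeps only outbound links pointing at other charities in the registry. This is illustrative only, not our original code, and the domain names are invented:

```python
# Hypothetical sketch of the link-indexing step: parse one charity's page
# and record outbound links that point at other charities in the registry.
from html.parser import HTMLParser
from urllib.parse import urlparse

class CharityLinkParser(HTMLParser):
    """Collects the domains of outbound <a href> links that belong to
    known charity sites; everything else (ads, widgets) is ignored."""
    def __init__(self, known_domains):
        super().__init__()
        self.known_domains = known_domains  # the charity registry
        self.out_links = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value:
                domain = urlparse(value).netloc.lower()
                if domain in self.known_domains:
                    self.out_links.add(domain)

registry = {"water.example.org", "health.example.org"}
parser = CharityLinkParser(registry)
parser.feed('<a href="https://water.example.org/about">partner</a>'
            '<a href="https://ads.example.com/banner">ad</a>')
print(sorted(parser.out_links))
```

Running this over every site in the registry yields the charity-to-charity link graph that the ranking step consumes.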
We expected the best organizations to have the most quality in-links (links from other quality charities), which would serve as a fairly accurate meta-indicator of quality. We also planned to include a few external evaluators to help ensure fidelity. We would call this CharityRank.
- Find out which charities linked to each other most often in specific categories.
- Using several previously evaluated charities as “seeds”, create a quality index of the most-referenced, top-25 organizations on the internet.
- Build a donation tool to let these top organizations properly receive funds, and take a small fee for these services.
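The seeded ranking described above amounts to a seed-biased (“personalized”) PageRank over the charity link graph. The following is an illustrative sketch only, with an invented graph and organization names, not the original implementation:

```python
# Illustrative sketch of seed-biased PageRank -- the idea behind
# CharityRank. Graph structure and names are invented.
def charity_rank(links, seeds, damping=0.85, iterations=50):
    """links: dict mapping each charity to the charities it links to.
    seeds: trusted organizations. The random-jump mass lands only on
    seeds, so rank flows outward from vetted nodes, and clusters with
    no trusted in-links decay toward zero."""
    nodes = set(links) | {t for ts in links.values() for t in ts}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    jump = {n: 1.0 / len(seeds) if n in seeds else 0.0 for n in nodes}
    for _ in range(iterations):
        new = {n: (1.0 - damping) * jump[n] for n in nodes}
        for source, targets in links.items():
            if targets:
                share = damping * rank[source] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

graph = {
    "seed_org":   ["water_org", "health_org"],  # vetted seed
    "water_org":  ["health_org"],
    "health_org": ["water_org"],
    "spam_org":   ["spam_org2"],  # mutual links, no trusted in-links
    "spam_org2":  ["spam_org"],
}
ranks = charity_rank(graph, seeds={"seed_org"})
```

Because the random jump only lands on seeds, the mutually linked spam pair decays toward zero rank, while the organizations the seed endorses absorb most of the rank mass.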
We saw this as a big market: $270 billion is given to charities annually by individuals in the US alone. Taking a small fee for providing access to this data could hypothetically create a great business while providing a tremendously beneficial service.
Our first iteration generated a massive amount of data. We ran Hadoop on our server for seven days, indexed 60,000 organizations, and built a real-time list of the best-ranked organizations based on the number of quality nodes reachable from our seeds (organizations previously identified as high-quality by us and others). This process gave us a top-25 list, and the results looked promising.
In an effort to develop our technology and funding prospects, we applied to Y Combinator and, after several exchanges, had a face-to-face interview with Paul Graham and his team. They seemed genuinely interested in the problem we were trying to solve and asked plenty of questions about how we were developing our solution. By this point we had created a matching system for individual donors, allowing people to search the top-ranked nonprofits in particular categories (e.g., disaster relief in Burma). They were impressed with what we had built.
The Metrics Problem:
Unfortunately, we made the mistake of talking about how we anticipated extracting metrics from the best charities to ensure impact of contributions. This was, in retrospect, a fatal mistake in the interview. Since we were providing services and donations to these individual charities, we expected to be able to ask them for metrics in return.
Paul Graham wrote back the following after our interview:
I’m sorry to say we decided not to fund you guys. We liked you personally, but we’re skeptical about whether it’s possible for anyone to solve the problem you’re trying to solve. It seems to us that metrics are more suited to large charities than small ones. Intros to small ones seem likely to continue to happen the way they do now, through friends’ recommendations. We worry that if you tried to make recommendations for small charities depend on metrics, you’d just add to the workload of the people running them, and the result would be GIGO [Garbage In Garbage Out] — or more precisely, donations would end up being won by whoever was best at gaming metrics, just as grants tend now to be won by the people who are good at proposal writing.
We had only barely emphasized metrics in the interview, but the issue ran deep enough to nullify the broader platform solution our hypotheses were built on.
Based on the YC feedback, we started sifting through our data. What we found was that the existing web presence of these organizations was low-fidelity. In essence, some of the best organizations had some of the worst websites, and at that moment (early 2010) there was simply not enough charitable data on the internet for us to feel we were tracking real impact. Though we could identify good charities, the linkages between the highest-quality ones were still somewhat weak.
We felt like it would take several years before the best organizations established enough visible relationships online to ensure our results were of the highest quality. What we thought was low-hanging fruit was, as we discovered, very high up.
We also found that PG tended to be right on the other front: people make decisions about whom to give to based on personal recommendations rather than algorithms.
As bootstrapped founders, facing a tenuous market and several years of big-data crunching before we had a viable product, we split up. One member of our initial team folded her work on aGoodProject into her PhD at Stanford, which continues to this day. Our engineer used the big-data IP built around aGoodProject to start a social media analytics company, which has become very successful. We took away three lessons:
- Charities do not yet have incentives to put their data online.
- Metrics are profoundly difficult to standardize and track across organizations.
- Donors say they want data about their impact, but oftentimes don’t actually respond to it.
We still strongly feel this nut can be cracked, but it’s an expensive one. To properly build an evaluation system around charities, there needs to be more of a charitable web presence on the internet. Without it, charities need clearly aligned incentives to participate; financial incentives work well, as do marketing incentives. For donors, ease is key. All our research with donors suggested that giving is, for most people, a luxury item, and the nature of this luxury ensures that there is not much donor bandwidth for the kind of data we were generating.
A successful donor-nonprofit platform in this space will require an exceptionally simple UI and very little raw data. Drawing charities out of passivity and into participation will require a lot of design finesse, and plenty of technical chops to ensure the right data is tracked. We continue to believe this problem can be solved, but it will require time, money and effort on the scale of a very well-funded startup.
For questions about this project and thoughts, feel free to email me.
Some links to our research partners: