What we built and why we built it.

aGoodProject was a platform built to help people make better decisions about which “good” projects, charities and companies to support in the world. This is a retrospective on what it was, why we did it, and what it became.

Problem statement: It’s hard to know who is making a difference.

We started aGoodProject to answer a fairly simple question that most people have faced at some point in their lives: What organization can I support that is really making an impact? This was the basis for aGoodProject - we wanted to help donors understand whom to support, and to increase the flow of funds to the best projects possible.

History:

Our team came together from disparate backgrounds but rallied around the cause of solving this problem. Based on our experiences doing international aid work in a number of developing countries, we found that it was difficult to look at charities from the outside and determine whether they were worthy of financial support or collaboration. This is a problem faced by many foundations and donors, since there are thousands of small charities, NGOs and social enterprises operating across sectors, and very few agreed-upon metrics for “high-impact” success. This was especially true during disaster-relief scenarios, when many people first realize that these evaluations do not exist.

Through personal experience, we developed a theory of how to identify good organizations using social proof: both qualitatively and quantitatively, the highest-quality organizations tend to attract, and be made up of, the highest-quality people.

Hypothesis I: Good organizations are made up of good people.

We identified a group of high-impact organizations. This initial trusted “seed” group consisted of organizations whose people consistently displayed several key traits.

Through observation of nonprofit workers, we began developing a second hypothesis:

Hypothesis II: Good people only want to work with good people.

Through our research we found that, using social proof (i.e., contacts through trusted friends), we could positively identify the individuals working within other charities and determine the quality of their projects. We observed that most organizations with objectively high-impact projects tend to attract good people (solid critical-thinking skills, personal alignment with the organization’s mission, strong follow-through) to staff them. In turn, these good people tend to partner with people at charities of similar quality. Since individuals who work within charities and nonprofits tend to be highly idealistic, they also tend to make strong, principled decisions about whom they work with.

Based on these findings about peer evaluation among individuals, we established a third hypothesis - one that became the foundation of aGoodProject.

Hypothesis III: Good organizations work with other good organizations.

Social proof looked to be one of the final and best indicators of charity quality. We saw the potential to scale this type of evaluation across the internet. Our best external example of a service that built an algorithm on social-proof indicators was Google.

A Google PageRank for Nonprofits

Encouraged by promising initial results, we looked at ways to scale these findings into a service that would help donors find the best organizations to support. We wanted to make it easy for people to quickly understand who was doing the best work.

[Image: an early version of our splash page]

The Tech:

Following the example of platforms that built quality indexes on machine intelligence and social proof (e.g., Google and Netflix), we generated a registry of over 800,000 nonprofit organizations in the US. We then built a scraping algorithm to find, categorize and index the link relationships on each of these organizations’ sites.
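
Our crawler is long gone, but the indexing step is simple to sketch. Below is a minimal illustration in Python - a hedged reconstruction, not our production code, with function names invented for this post - that fetches one organization’s page and records which external domains it links to:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class LinkParser(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def outbound_domains(url):
    """Fetch one organization's page and return the external domains it links to."""
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="ignore")
    parser = LinkParser()
    parser.feed(html)
    own = urlparse(url).netloc
    # Keep only links that point off-site; those are the graph edges we index.
    return {urlparse(href).netloc
            for href in parser.links
            if urlparse(href).netloc and urlparse(href).netloc != own}

# e.g. outbound_domains("https://www.example.org")
```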

We expected the best organizations to have the most quality in-links (links from other quality charities), which would give us a fairly accurate meta-indicator of quality. We also planned to include a few external evaluators to help ensure fidelity. We would call this CharityRank.
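
We never published the CharityRank algorithm itself, but the idea maps closely onto personalized PageRank: rank mass flows along the links between charity sites, and the teleport step returns mass only to our trusted seeds, so a high score means strong link-proximity to vetted organizations. A minimal sketch of that idea in Python, over a hypothetical toy graph (the domain names are made up):

```python
def charity_rank(graph, seeds, damping=0.85, iterations=50):
    """Personalized PageRank over a charity link graph.

    graph: dict mapping each org's domain to the set of domains it links to.
    seeds: trusted, pre-vetted organizations; they receive all teleport mass.
    Rank mass at dangling pages (no out-links) is dropped for simplicity.
    """
    nodes = set(graph) | {t for targets in graph.values() for t in targets}
    teleport = 1.0 / len(seeds)
    rank = {n: (teleport if n in seeds else 0.0) for n in nodes}
    for _ in range(iterations):
        nxt = {n: ((1 - damping) * teleport if n in seeds else 0.0)
               for n in nodes}
        for source, targets in graph.items():
            for target in targets:
                nxt[target] += damping * rank[source] / len(targets)
        rank = nxt
    return sorted(rank.items(), key=lambda kv: -kv[1])

# Toy graph with made-up domains; the real input was the 800,000-org registry.
links = {
    "seedcharity.org": {"fieldngo.org", "reliefgroup.org"},
    "fieldngo.org": {"reliefgroup.org"},
    "reliefgroup.org": {"seedcharity.org"},
}
top = charity_rank(links, seeds={"seedcharity.org"})[:25]  # the top-25 list
```

Concentrating the teleport vector on the seeds, rather than spreading it uniformly as classic PageRank does, is what turns the score into a measure of quality-by-association instead of raw popularity.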

Our goals were to:
  1. Find out which charities linked to each other most often within specific categories.
  2. Using several previously evaluated charities as “seeds”, create a quality index of the top-25 most-referenced organizations on the internet.
  3. Build a donation tool to allow these top organizations to properly receive funds, and take a small fee for the service.

We saw this as a big market: individuals alone give $270 billion to charities in the US annually. Taking a small fee for providing access to this data could hypothetically create a great business and provide a tremendously beneficial service.

The Metrics:

Our first iteration generated a massive amount of data. We ran Hadoop on our server for seven days, indexed 60,000 organizations, and built a real-time list of the best-ranked organizations based on the number of quality links we could trace back to our seeds (organizations previously identified as high-quality by us and others). This process gave us a top-25 list, and the results looked promising.
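
For context on the shape of that job: it boiled down to counting, for every organization, the quality links traceable to the seed set. Here is a minimal Hadoop Streaming-style sketch in Python - the tab-separated “source target” input format and the seed list are assumptions for illustration, and the real pipeline also propagated quality beyond first-hop links:

```python
import sys

# Illustrative seed set; the real list was curated by us and outside evaluators.
SEEDS = {"seedcharity.org", "fieldngo.org"}

def mapper():
    """Emit 'target<TAB>1' for every link that originates at a seed org."""
    for line in sys.stdin:
        source, target = line.rstrip("\n").split("\t")
        if source in SEEDS:
            print(f"{target}\t1")

def reducer():
    """Sum per-target counts; Hadoop Streaming delivers input sorted by key."""
    current, total = None, 0
    for line in sys.stdin:
        target, count = line.rstrip("\n").split("\t")
        if target != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = target, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    # Run as two streaming steps: `script.py map`, then `script.py reduce`.
    mapper() if sys.argv[1] == "map" else reducer()
```

Hadoop Streaming would fan mapper() out across the link records and feed the key-sorted output into reducer(), yielding a ranked in-link count per organization.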

In an effort to develop our technology and funding prospects, we applied to Y Combinator, went through several exchanges, and then had a face-to-face interview with Paul Graham and his team. They seemed genuinely interested in the problem we were trying to solve, and asked plenty of questions about the way we were developing our solution. By this point we had created a matching system for individual donors, allowing people to search the top-ranked nonprofits in particular categories (e.g., disaster relief in Burma). They were impressed with what we had built.

Since we were providing services and donations to these individual charities, we expected to be able to ask them for impact metrics in return. The YC team disagreed. We had only lightly emphasized these metrics, but the issue ran deep enough that it nullified the broader platform solution our hypotheses were built on. Paul Graham was still optimistic, but felt that people make more significant giving decisions based on personal recommendations than on metrics.

Based on the YC feedback we started sifting through our data. What we found was that the web presence of existing organizations was low-fidelity. In essence, some of the best organizations had terrible websites, and at that moment (late 2009) there was simply not enough charitable data on the internet for us to feel we were tracking the best relationships. Though we could identify good charities, the linkages between the highest-quality ones were still weak.

We felt like it would take several years before the best organizations established enough visible relationships online to ensure our results were of the highest quality. What we thought was low-hanging fruit was, as we discovered, very high up.

We also found that Paul Graham tended to be right on the other front - that people tend to make decisions about whom to give to based on personal recommendations instead of algorithms. This is backed up by Hope Consulting’s great report on donor behavior.

For bootstrapped founders, a tenuous market and several years of big-data crunching before a viable product didn’t look ideal, but we continued to do product/market testing. One of our initial team members folded our existing research from aGoodProject into her PhD at Stanford, which continues to this day. Our engineer used the big-data experience we built around aGoodProject to start a social-media analytics company, which has become very successful. Several of us continue to do consulting in this space.

Key Findings:

  1. Charities do not yet have incentives to put their data online. This is slowly changing due to social media, but the data that does appear is still cherry-picked.
  2. Contextual metrics are very hard to track across organizations.
  3. Donors say they want data about their impact, but often don’t actually respond to it.

The Future:

We still strongly feel this is a nut that can be cracked, but an expensive one. To properly build a system for evaluating charities, great data is needed. Without it, there must be clearly aligned incentives for nonprofits to participate; both financial and marketing incentives work well.

A successful donor/nonprofit evaluation platform in this space will require exceptionally good relationships, clear marketing channels and a solid foundation of referential data. Given the emergence and popularity of the social web, there is a fantastic opportunity to explore evaluation inside existing graphs; the data may now actually be there. For donors, easy is key. All our research with donors suggested that giving is, for most people, a luxury item. The nature of this luxury means there is not much donor bandwidth for anything but simplicity.

We continue to believe this is a problem that can be solved, but one that will require time, money and effort on the scale of a very well-funded startup.

For questions and thoughts about this project, feel free to DM me on Twitter.

http://twitter.com/tobiasrose

aGP Team:

Tobias Rose-Stockwell of Human Translation
Robert Bailey of Google
Jason Toy of SocMetrics
Karina Kloos, PhD candidate at Stanford
Sara Olsen of SVT Group

Some links to our research partners:

Money for Good by Hope Consulting

Amex Charitable Gift Surveys

GiveWell

Charity Navigator

eNonprofit Benchmarks Study

Giving USA

MIT’s J-PAL