
Our Mistakes | GiveWell
Our Mistakes
This page logs mistakes we’ve made and lessons we’ve learned. We share this information so that others can benefit from our experience and evaluate us as an organization.
GiveWell is dedicated to finding and funding outstanding opportunities in global health and development, publishing the full details of our research on this website for donors to review. The organizations we fund must be open to our intensive review process and transparent public discussion of their track record and progress, both the good and the bad. We expect the same of ourselves.
We focus on issues that could affect the impression that people have of our work and its reliability, including errors in our research, grantmaking, organizational strategy, and operations. We have done our best to include mistakes that may have affected our decisions, that were preventable, and that resulted in lessons leading us to make meaningful changes.
We have especially tried to include mistakes that we think might lead donors to reconsider donating to us. We haven’t listed missteps where the main cost was to our productivity or growth. If you know of other items you think should be listed here, please contact us.
Last updated: November 2025 (December 2024 version, April 2024 version, 2019 version, and 2015 version)
Table of Contents
Major issues
2016 to ongoing (first posted in 2018): Failure to publish all relevant intervention research
2020: Privacy Policy–related misstep
2017 to 2019: Failure to publish charity reviews
2007 to 2014: Failure to prioritize staff diversity in hiring
2014 to 2016: Failure to prioritize hiring an economist
2013 to 2016: Failure to address misconceptions organizations have about our application process
2009 to 2012: Errors in publishing private material
2006 to 2011: Tone issues
July 2009 to November 2010: Quantitative charity ratings that confused rather than clarified our stances
December 2007: Overaggressive and inappropriate marketing
June 2007: Poorly constructed “causes” led to suboptimal grant allocation
Smaller issues
For several years and ongoing (posted in 2024): Failure to estimate the interactions and overlap between programs
For several years up to June 2024: Failure to fully account for individuals receiving interventions from other sources
For several years up to January 2024: Failure to more frequently engage with outside experts
For several years up to November 2023: Failure to sense-check all raw data
Late 2020 to early 2022: Overestimated funds raised
April 2022: Failures of training and communication left us vulnerable to a crypto scam
2021: Miscalculation of and subsequent miscommunication around rollover funds
November 2018: Spreadsheet errors led to additional funding for one Top Charity
2017: Failure to publish internal metrics report
November 29, 2016 to December 23, 2016: Poor communication about Top Charity recommendations restricted to a specific program
December 2014: Errors in our cost-effectiveness analysis of Development Media International (DMI)
November to December 2014: Lack of confidence in the cost-effectiveness analyses we relied on for our Top Charities recommendations
January to December 2014: Completed fewer intervention reports than projected
November 2014: Suboptimal grant recommendation to Good Ventures
November 2014: Not informing candidate charities of our recommendation structure prior to publishing recommendations
July 2014: Published an update to the intervention report on cash transfers that misstated our view
February 2014: Incorrect information on homepage
January to November 2013: Social (non-family, non-financial) relationship between GiveWell staff members and staff of a recommended program not publicly disclosed
February to September 2013: Infrequent updates on our top-ranked charity
May to June 2013: Unpublished website pages intermittently available publicly
April to December 2012: Taking too much of job applicants’ time early in the recruiting process
March to November 2012: Poor planning led to delayed 2012 charity recommendations release
June 2012: Failure to discuss sensitive public communication with a board member
July 2007 to March 2012: Phone call issues
December 2011: Poor communication to donors making larger donations (e.g., greater than $5,000) via the GiveWell website
December 2011: Problems caused by GiveWell’s limited control over the process for donating to our Top Charities
December 2011: Miscommunicating to donors about fees and the deductibility of donations to our Top Charity
Late 2009: Misinterpreted a key piece of information about an organization to which we gave a $125,000 grant
August 1, 2009, to December 31, 2009: Grant process insufficiently clear with applicants about our plans to publish materials
November 25, 2009: Mishandling incentives to share information
May 2009: Failed to remove two private references from a recording that we published
January to September 2008: Paying insufficient attention to professional development and support
Major issues
2016 to ongoing (first posted in 2018): Failure to publish all relevant intervention research
How we fall short: In early 2016, we began to review the evidence base for a large number of programs to determine how we should prioritize programs for further evaluation. Our 2016 research plan discusses this priority (referred to as "intervention prioritization"). Since then, the vast majority of the work that we’ve done to review interventions remains private, in internal documents that we have not shared because we have not put in the time to ensure the work is of publishable quality.
We prioritized spending time to assess additional opportunities more highly than spending time to prepare our work for publication.
While we don’t believe that publishing this work is likely to have changed any of the recommendations we make to donors, we see this body of private materials as a substantial failure to be transparent about our work. We believe that transparency is important for explaining our process and allowing others to vet our work. The process of formally writing up our research and seeking internal and external feedback has also on occasion changed our conclusions.
This remains an area for improvement.
Steps we are taking to improve (posted December 2018): We plan to make progress on this work in 2020. Our research team has built into its plans for the year more time for publishing research we completed in the past as well as newer investigations.
Update (posted September 2023): Though we didn’t post an update on our progress in 2020, we’ve taken several steps since 2016 to publish more of our research. As our research team has grown, we’ve generated a greater volume of research. We’ve made progress on publishing these findings, but we still have more work to do.
Areas of progress. We’ve made substantial progress in publishing more of our research.
Grant pages. For example, in 2016, we were not yet publishing the full rationale behind each of our funding decisions; we now expect to publish a page on every grant we recommend for funding. A list of all pages we’ve published on grants since 2014 is available here.
Deprioritization decisions. We began publishing short notes that explain our decisions to stop or pause investigation on programs that don’t appear promising after an initial review (example here). This format allows us to more quickly communicate our views about a deprioritized program so that people can evaluate and respond to our reasoning.
You can find all our short deprioritization notes in the program reviews dashboard.
We’re also publishing more quickly. In 2022, we began setting internal timeline targets for publishing new grant pages. Since the initial goals were set, we have published grant pages for Top Charities more quickly than before; they are now usually published less than three months after making a grant.
We are tracking timelines for all research publishing, so in the future we will be able to assess whether we met our goals. We sped up our publication process, in part, by eliminating unnecessary review steps and streamlining communication with grantees to make their review and signoff easier.
Areas for improvement. While we have shortened our timelines for publishing relative to 2016, especially for grants to Top Charities, we still have more work to do to publish research quickly, particularly research on interventions.
We are also working to increase the legibility of our research, which we have prioritized in 2023. As part of our value of transparency, we want readers to be able to understand our reasoning, evaluate the ways we might be wrong, and provide feedback that will improve our research.
Toward that end, we’ve added new summaries to our research and grant pages that describe what the program or grant does, identify our key assumptions, and clearly explain the program or grant’s cost-effectiveness and what our largest sources of uncertainty are. You can see examples of these features on this grant page .
Update (posted November 2025): Our primary goals with respect to this mistake have been to publish our research more quickly, to share more of our research, and to publish research materials that are more legible. We have made incremental progress in all three areas since our last update, though we continue to have areas for improvement.
Publishing more quickly
We have had timeline targets for publishing new grant pages since 2022. Over the past year, we have substantially increased the number of grant pages we publish within three months after grant approval, though we remain significantly short of our goal. As of November 2025, 22 grant pages remain unpublished after three months, and 8 of those 22 grants were approved more than six months ago.
We have improved by assigning team leaders responsibility for moving grant pages forward and reviewing progress toward our goal each quarter. We are also tracking the steps in the process that take the longest and identifying strategies to streamline them. For example, in order to speed up the initial drafting of the grant page, which is among the longest steps in the process, researchers are now required to have a first draft of a grant page completed at the time they request approval for the grant.
Publishing more of our research
We have made substantial progress in publishing more of our research: Our overall publishing volume has nearly doubled over the past year. For example, during the first eight months of this metrics year (February 1, 2025 through September 30, 2025), we published 50 grant pages—already more than we published during all of 2024. In addition, we published 11 reports on specific programs or research questions—about as many as we published in all of 2024.
We have also developed new ways for sharing our work beyond our website. For example, we launched a podcast in March 2025 to provide updates on the impact of aid cuts on health programs and to share information about other aspects of GiveWell’s work.
Improving legibility
In 2023 and 2024, we made substantial progress in making our work more legible, and legibility is now a guiding principle for our research team. As noted in the update above, we added new summaries (like this one) to our research and grant pages that describe what the program or grant does, identify our key assumptions, explain our cost-effectiveness estimate, and share our largest sources of uncertainty. In addition, our grant pages now include a more complete walk-through of our cost-effectiveness model (as can be seen here and on other pages).
Nevertheless, our grant pages remain lengthy and complex. The primary goal of our legibility effort has been to enable outside experts to review and critique our work. We believe our research publications are now close to accomplishing this goal, but they are often not legible to non-researchers.
While we aim to serve this broader audience through other communications, such as our blog and podcast, we are beginning to invest more into making the main findings and processes of our research more accessible.
2020: Privacy Policy–related misstep
How we fell short: We have gradually expanded our marketing efforts since 2018. In May 2020, as part of these efforts, we updated our Privacy Policy.
Our updated policy included the ability to share personal information with service providers to assist with our marketing efforts. Our contracts required them to keep the information confidential and only use it to assist us with their contracted services.
We decided to use Facebook as such a service provider, and on July 12, 2020, we used email addresses of some donors to create a Facebook Custom Audience to help us identify other potential donors. We understand this to be a common tool for social media marketing. The email addresses were hashed, or converted locally into irreversible fixed-length codes, before they were uploaded to Facebook for processing to create a Custom Audience.
Facebook was required by our contract to delete the email addresses promptly after the Custom Audience was created and was not allowed to use the email addresses for other purposes.
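For readers curious about the mechanics: Custom Audience uploads conventionally normalize each email address and hash it with SHA-256, so the platform receives only the digests rather than the raw addresses. The sketch below is illustrative of that convention, not GiveWell’s actual code; the function name and normalization details are assumptions.

```python
import hashlib

def hash_email(email: str) -> str:
    """Normalize an email address and return its SHA-256 hex digest.

    Following the usual Custom Audience convention: trim whitespace and
    lowercase before hashing, so the same address always yields the same
    digest. Note that hashing is deterministic, not random -- the privacy
    property is that the digest cannot be reversed into the address,
    though a party that already knows an address can re-hash and match it.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Only digests like this would leave the uploader's machine; the platform
# matches them against hashes of its own users' email addresses.
digest = hash_email("  Donor@Example.org ")
```

Because the hash is deterministic, matching works without the platform ever seeing the plaintext address, which is the trade-off the text above describes.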
We regret not having offered all donors a chance to opt out before we used their email addresses for this purpose.
How we addressed our mistake (posted June 2021): We deleted our Custom Audience on July 30, 2020, after realizing some of our donors may have wanted the chance to opt out before their email address was used to create a Custom Audience in order to identify potential new donors. This realization was prompted by our CEO asking for an update on our approach to privacy protection.
We notified donors whose email addresses were used about what happened. We emailed others about the update to our Privacy Policy and how to opt into or out of information-sharing in the future. We also added an opt-out form to our Privacy Policy page. We don’t plan to proactively contact our audience prior to each future marketing effort, though we may decide to on a case-by-case basis.
We completed an internal assessment of what led to this misstep. To avoid similar missteps in the future, we piloted a formalized process for scoping projects with a goal, among others, of ensuring the right level of review for very new types of work (as social media marketing was in 2020).
2017 to 2019: Failure to publish charity reviews
How we fell short : Since early 2017, we have had a significant number of conversations with organizations about applying for a GiveWell recommendation. We also completed a preliminary evaluation of a number of applications. Much of this work remains private.
In some cases, this is because we did not get permission to publish information from those we spoke to. In other cases, this is because we did not put in the time to write up what we have learned in a format that we believed the organizations would allow us to publish.
We do not plan to publish these reviews, as they are outdated and likely would not represent the current organizations accurately. We do not think it would be a good use of the organizations’ time to review our outdated work, nor would we expect to be successful in asking them to do so.
However, as we say above: “While we don’t believe that publishing this work is likely to have changed any of the recommendations we make to donors, we see this body of private materials as a substantial failure to be transparent about our work. We believe that transparency is important for explaining our process and allowing others to vet our work. The process of formally writing up our research and seeking internal and external feedback has also on occasion changed our conclusions.”
How we addressed our mistake (posted October 2020): Our research team built additional time for publishing into its process.
2007 to 2014: Failure to prioritize staff diversity in hiring
How we fell short: From 2007 to 2014, we did not prioritize diversity in our hiring, and our staff composition reflects the lack of attention we paid to this issue.
We believe a more diverse staff will make GiveWell better and more effective. We believe broadening our candidate pipeline and reducing any bias that exists in our hiring process will increase our likelihood of hiring the best people to achieve GiveWell’s mission. And, we believe that having a diverse staff and an inclusive culture will make GiveWell more attractive to prospective staff and improve retention.
How we addressed our mistake (updated October 2020): We have made progress, but we still consider staff diversity an area in which to improve.
Since 2014, we have taken a number of steps to increase diversity in our hiring. Those efforts include advertising open roles with professional groups that focus on underrepresented audiences and working with consultants to recruit candidates from underrepresented backgrounds. We also use a hiring process that aims to limit bias by focusing on work samples that are graded blindly where possible.
As of 2020, our team is significantly more diverse in terms of gender, race, and ethnicity than it was in GiveWell’s early years. It still is not as racially or ethnically diverse as we would like it to be. People from low- and middle-income countries, in which our Top Charities primarily operate, are not well represented on staff.
As of mid-2020, we continue to undertake specific projects to increase diversity on our staff, such as exploring whether our recruitment processes differ from best practices related to recruiting for a diverse workforce and then working to ensure that we’re following those best practices.
2014 to 2016: Failure to prioritize hiring an economist
How we fell short: From 2014 to 2016, we produced relatively few intervention reports, a crucial part of our research process. Our low production may be explained by the fact that we tasked relatively junior, inexperienced staff with these reports. We did not prioritize hiring a specialist, likely someone with a PhD in economics or the equivalent, who would have likely been able to complete many more reports during this time.
This delayed our research and potentially led us to recommend fewer Top Charities than we otherwise might have.
How we addressed our mistake (posted June 2017): In September 2016, we began recruiting for a Senior Fellow to fill this role. The role was filled in May 2017.
2013 to 2016: Failure to address misconceptions organizations have about our application process
How we fell short: We realized in 2016 that some organizations had misconceptions about our criteria for grantmaking and our research process. For example, some organizations told us that they thought programs could only be recommended for three years; others weren’t aware that we had recommended million-dollar “incentive grants” to Top Charities.
How we addressed our mistake (posted December 2016): We assigned a staff member the duties of charity liaison and made them responsible for communicating with organizations that are considering applying, to help them with our process and correct misconceptions.
2009 to 2012: Errors in publishing private material
How we fell short: There were two issues, one larger and one smaller:
Since 2009, we’ve made a practice of publishing notes from conversations with organizations, subject matter experts, and other stakeholders. Our practice is to share the conversation notes we take with the other party before publication so that they can make changes to the text. We only publish the version of the notes that the other party approves and will keep the entire conversation confidential if the party asks us to.
In November 2012, a staff member completed an audit of all conversations that we had published. He identified two instances where we had erroneously published the pre-publication (i.e., not-yet-approved) version of the notes. We have emailed both organizations to apologize and inform them of the information that we erroneously shared.
In October 2012, we published a blog post titled “Evaluating people.” Though the final version of the post did not discuss specific people or organizations, a draft version of the post had done so. We erroneously published the draft version, which discussed individuals. We recognized our error within five minutes of posting and replaced the post with the correct version; the draft post was available in Google’s cache for several hours and was likely available to people who received the blog via RSS if they had their RSS reader open before we corrected our error (and did not refresh their reader).
We immediately emailed all of the organizations and people that we had mentioned to apologize and included the section we had written about them. Note that none of the information we published was confidential; we merely did not intend to publish this information and it had not been fully vetted by GiveWell staff and sent to the organizations for pre-publication comment.
How we addressed our mistake (posted December 2012): In November 2012, we instituted a new practice for publishing conversation notes. We began to internally store both private and publishable versions of conversation notes in separate folders (to reduce the likelihood that we upload the wrong file) and assigned a staff member to perform a weekly audit to check whether any confidential materials have been uploaded. As of this writing, we have performed three audits and found no instances of publishing private material.
We take the issue of publishing private materials very seriously because parties that share private materials with us must have confidence that we will protect their privacy. We have therefore reexamined our procedures for uploading files to our website and are planning to institute a full scale audit of files that are currently public as well as an ongoing procedure to audit our uploads.
Update (posted October 2016): We established a publishing process that clearly separates publishable versions of conversation notes from private versions of notes, periodically auditing published notes to ensure that all interviewees’ suggestions have been incorporated. At the time of this update, our process requires that explicit approval to publish is given for each file we upload, and we periodically audit these uploads to ensure that private information has not been uploaded to our server.
2006 to 2011: Tone issues
How we fell short: We continue to struggle with an appropriate tone on our blog, one that neither understates nor overstates our confidence in our views (particularly when it comes to charities that we do not recommend). An example of a problematic tone is our December 2009 blog post, Celebrated charities that we don’t recommend. Although it is literally true that we don’t recommend any of the organizations listed in that post, and although we stand by the content of each individual blog post linked, the summaries make it sound as though we are confident that these organizations are not doing good work; in fact, it would be more accurate to say that the information we would need to be confident isn’t available, and we therefore recommend that donors give elsewhere unless they have information we don’t.
We wish to be explicit that we are forming best guesses based on limited information, and always open to changing our minds, but readers often misunderstand us and believe we have formed confident (and, in particular, negative) judgments. This leads to unnecessary hostility from, and unnecessary public relations problems for, the groups we discuss.
How we addressed our mistake (posted July 2010): We feel that our tone has slowly become more cautious and accurate over time. At the time of this update, we are also resolving to run anything that might be perceived as negative by the group it discusses before we publish it publicly, giving them a chance to make any corrections to both facts and tone. (We have done this since our inception for charity reviews, but now intend to do it for blog posts and any other public content as well.)
July 2009 to November 2010: Quantitative charity ratings that confused rather than clarified our stances
How we fell short: Between July 2009 and November 2010, we assigned zero- to three-star ratings to all programs we examined. We did so in response to feedback from our fans and followers—in particular, arguments that people want easily digested, unambiguous “bottom line” information that can help them make a decision in a hurry and with a clean conscience. Ultimately, however, we decided that the costs of the ratings—in terms of giving people the wrong impression about where we stood on particular programs—outweighed the benefits.
How we addressed our mistake (posted November 2010): By December 2010, we will replace our quantitative ratings with more complex and ambiguous bottom lines that link to our full reviews.
More information:
September 2010 blog post on the problems with quantitative charity ratings
October 2010 blog post on why these ratings don’t fit with our mission
December 2007: Overaggressive and inappropriate marketing
How we fell short: As part of an effort to gain publicity, GiveWell’s staff (Holden and Elie) posted comments on many blogs that did not give adequate disclosure of our identities (we used our first names, but not our full names, and we didn’t note that we were associated with GiveWell); in a smaller number of cases, we posted comments and sent emails that deliberately concealed our identities. Our actions were wrong and rightly damaged GiveWell’s reputation. More detail is available via the page for the board meeting that we held in response.
Given the nature of our work, it is essential that we hold ourselves to the highest standards of transparency in everything we do. Our poor judgment caused many people who had not previously encountered GiveWell to become extremely hostile to it.
How we addressed our mistake: We issued a full public disclosure and apology, and directly notified all existing GiveWell donors of the incident. We held a Board meeting and handed out penalties that were publicly disclosed, along with the audio of the meeting. We increased the Board’s degree of oversight over staff, particularly with regard to public communications.
June 2007: Poorly constructed “causes” led to suboptimal grant allocation
How we fell short: For our first year of research, we grouped charities into causes (“Saving lives,” “Global poverty,” etc.) based on the idea that programs within one cause could be decided on by rough but consistent metrics: for example, we had planned to decide Cause 1 (saving lives in Africa) largely on the basis of estimating the “cost per life saved” for each applicant. The extremely disparate nature of different programs’ activities meant that there were major limits to this type of analysis (we had anticipated some limits, but we encountered more).
Because of our commitment to make one grant per cause and our overly rigid and narrow definitions of “causes,” we feel that we allocated our grant money suboptimally. For example, all Board members agreed that we had high confidence in two of our Cause 1 (saving lives) applicants, but very low confidence in all of our Cause 2 (global poverty) applicants. Yet we had to give equal-sized grants to the top applicant in each cause (and give nothing to the second-place applicant in Cause 1).
How we addressed our mistake (posted 2007): We shifted our approach to more broadly defined “causes,” which gave us more flexibility to grant to organizations that appeal to us most. We also switched to exploring broad sets of programs that intersect in terms of the people they serve and the research needed to understand them, rather than narrower causes based on the goal of an “apples to apples” comparison using consistent metrics.
Smaller issues
For several years and ongoing (posted in 2024): Failure to estimate the interactions and overlap between programs
How we fall short: We did not adequately consider or model the potential interactions and overlaps between different health programs we fund or that are being implemented in the same regions. This oversight could lead to inaccurate estimations of the combined impact of these programs. Specific examples include:
In regions like Northern Nigeria, multiple programs by GiveWell or others are being delivered simultaneously, including insecticide-treated nets (ITNs), seasonal malaria chemoprevention (SMC), vaccines, oral rehydration solution (ORS), and azithromycin distribution. 1 We have not thoroughly assessed how these overlapping interventions might interact or affect each other’s efficacy.
In our vitamin A supplementation (VAS) cost-effectiveness analysis, we did not account for the potential interaction between VAS and the expected scale-up of azithromycin distribution in high-mortality settings. This oversight could be leading us to overestimate VAS cost-effectiveness by approximately 20%. 2
More broadly, we have not sufficiently examined how our focus on funding vertical programs (those that deliver a specific intervention) might impact overall health systems and the delivery of other essential health services.
By not considering these interactions, we risk overestimating the combined impact of multiple interventions and potentially missing opportunities to achieve greater impact through more integrated approaches.
Steps we’re taking to improve:
We plan to develop an approach to modeling overlapping effects of programs and address this issue in upcoming grant investigations where overlap is most likely, such as considering the interaction between azithromycin distribution and VAS.
We plan to publish our view on why we typically support vertical over horizontal programs to solicit feedback and encourage discussion on this approach.
By addressing these issues, we aim to improve the accuracy of our impact estimates, identify potential synergies between programs, and ensure our funding decisions consider the broader context of health systems in the regions where we work.
This issue was raised as a part of our “red teaming” of Top Charities. You can read more about this mistake and our broader red-teaming process here.
For several years up to June 2024: Failure to fully account for individuals receiving interventions from other sources
How we fell short: We did not adequately investigate or account for the possibility that individuals might receive interventions like insecticide-treated nets (ITNs), vaccines, or vitamin A supplementation (VAS) from sources other than the programs we fund. While we discussed the possibility that recipients receive interventions from other sources in our pages on ITNs, seasonal malaria chemoprevention (SMC), VAS, and vaccines, we now believe that the adjustments we made were insufficient. For example:
In our analysis of ITN distribution campaigns in the Democratic Republic of the Congo (DRC), we assumed that only 5% of the population would obtain nets from alternative (non-campaign) sources, 3 based on trials conducted about 30 years ago. However, more recent evidence suggests this figure may be significantly higher, potentially 25-50% for children under 3 years old. 4
For New Incentives’ conditional cash transfer program for vaccinations in Nigeria, we may have underestimated the rate at which vaccination coverage was increasing in the absence of the program. 5 Our adjustment was equivalent to assuming coverage increased by roughly 1.5 percentage points per year, while some surveys indicate it may have been increasing by 5 percentage points per year in several Nigerian states prior to New Incentives’ entry. 6
In our evaluation of vitamin A supplementation (VAS) programs, we relied on outdated surveys and modeled estimates to determine vitamin A deficiency rates, without fully accounting for the potential impact of vitamin A fortification programs introduced in many countries since those surveys were conducted. 7
These oversights could have led to overestimation of our programs’ impact and cost-effectiveness. For instance, in the case of ITN distribution in DRC, this issue could have potentially lowered our estimate of cost-effectiveness by 15-30%. 8
How we addressed our mistake (updated October 2025):
We updated our analysis of ITN distributions to account for higher rates of routine distribution, and have investigated counterfactual coverage in other countries where we fund net distributions, such as the Democratic Republic of the Congo.
We revised our estimates of counterfactual vaccination coverage for New Incentives’ program to account for the underlying increase in vaccination rates over time.
We explored funding additional surveys of vitamin A deficiency in countries where we expect to consider large VAS grants, to get more up-to-date and accurate data.
In the Top Charity cost-effectiveness analyses we use for decision-making, we explicitly state our assumptions about the percentage of individuals who would receive interventions from other sources.
We engaged with experts to better understand how campaigns for health commodities we fund interact with routine distribution systems, and we have considered supporting routine distribution in some areas.
We took these steps to improve the accuracy of our impact estimates and ensure we’re directing funding to where it can have the greatest additional benefit.
This issue was raised as a part of our “red teaming” of Top Charities. You can read more about this mistake and our broader red-teaming process here.
For several years up to January 2024: Failure to more frequently engage with outside experts
How we fell short: We did not consistently or frequently enough seek input from external experts, including implementation experts, researchers, individuals with in-country experience, and fellow funders. This limited engagement may have caused us to miss important perspectives and insights that could have improved our analyses and funding decisions. Specific examples include:
During our red teaming process, external experts pointed out that we may be using overly optimistic or outdated assumptions on insecticide-treated net (ITN) durability. 9 This insight, which we had not previously identified, could significantly affect our cost-effectiveness estimates for ITN programs.
Conversations with malaria experts and program implementers, conducted in parallel with our red teaming, revealed that more individuals were likely receiving nets via routine distribution than we had previously estimated. This information could have important implications for our funding decisions related to mass net distribution campaigns.
By not consistently seeking external input, we risked operating with incomplete or outdated information, potentially leading to suboptimal funding decisions or missed opportunities for greater impact.
How we addressed our mistake (posted November 2024):
We now more regularly attend conferences with experts in areas where we fund programs, such as malaria, vaccination, and nutrition, to stay current with the latest research and implementation insights.
We increased our outreach to experts as a standard part of our grant investigations and intervention research processes. While we have always consulted with program implementers and researchers to some extent, we now allocate more time to these conversations than we had in the past.
We implemented new approaches for soliciting feedback on our work from a wider range of experts and stakeholders.
By increasing our engagement with outside experts, we aim to broaden our perspective, challenge our assumptions, and ultimately improve the quality and impact of our grantmaking decisions.
This issue was raised as a part of our “red teaming” of Top Charities. You can read more about this mistake and our broader red-teaming process here.
For several years up to November 2023: Failure to sense-check all raw data
How we fell short: Note that we don’t list every small research mistake we make and correct. This page lists mistakes that “affect the impression that people external to the organization have of our work and its reliability.” We list these two examples because they’re representative of a category of research error we have made.
In brief, we estimated some parameters in our cost-effectiveness models by plugging in raw data at face value without subjecting the numbers to common-sense scrutiny or examining how they could be inaccurate.
This is a quote from our writeup on how we address uncertainty:
To estimate insecticide resistance across countries, we look at bioassay test results on mosquito mortality. These tests essentially expose mosquitoes to insecticide and record the percentage of mosquitoes that die. Results span from 0% to 100% mortality, the maximum range possible.
To come up with country-specific estimates, we take the average of all tests that have been conducted in each country and do not make any further adjustments to bring the results more in line with our common-sense intuition.
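The averaging step described in the quote above amounts to a simple per-country mean, taken at face value with no further adjustment. A minimal sketch, using made-up bioassay numbers (not real data):

```python
# Hypothetical bioassay results: mosquito mortality percentages per test,
# grouped by country (illustrative values only).
bioassay_results = {
    "Nigeria": [42.0, 55.0, 38.0],
    "DRC": [60.0, 70.0],
}

# Country estimate = simple mean of all tests conducted in that country,
# with no sense-check or adjustment applied — the approach described above.
country_estimates = {
    country: sum(tests) / len(tests)
    for country, tests in bioassay_results.items()
}

print(country_estimates["Nigeria"])  # 45.0
```

Because each estimate is just an unadjusted mean of noisy test results, a country with a few unrepresentative tests can receive an implausible resistance estimate.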
Another example comes from that same page:
Another major program area we support is childhood immunization… To model the cost-effectiveness of these programs, we need to take a stance on the share of deaths that a vaccine prevents for a given disease. This assumption enters our cost-effectiveness estimates through our etiology adjustments…. To estimate an etiology adjustment for the rotavirus vaccine, which targets diarrhoeal deaths, we do the following:
Take raw IHME data on the number of deaths from diarrhea among under 5s in the sub-regions where these programs operate
Take raw IHME data on the number of deaths from rotavirus (a subset of diarrheal deaths)
Divide the two to get an estimate of the % of diarrhea deaths in each region that could be targeted by the rotavirus vaccine
As Figure 5 shows, this leads to implausibly large differences between countries; we effectively assume that the rotavirus vaccine is almost completely ineffective at preventing diarrhoeal deaths in India. This seems like a bad assumption; the rotavirus vaccine is part of India’s routine immunization schedule, and a randomized controlled trial in India that administered the rotavirus vaccine to infants showed a 54% reduction in severe gastroenteritis.
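The three-step calculation quoted above reduces to a single ratio of raw death counts. A minimal sketch with hypothetical IHME-style numbers (illustrative only, not GiveWell's actual figures):

```python
# Hypothetical death counts for one region (illustrative values only).
diarrhea_deaths_under5 = 120_000   # all diarrheal deaths among under-5s
rotavirus_deaths_under5 = 36_000   # subset attributed to rotavirus

# Etiology adjustment: share of diarrheal deaths the rotavirus vaccine
# could plausibly target, taken at face value from the raw data.
etiology_adjustment = rotavirus_deaths_under5 / diarrhea_deaths_under5

print(etiology_adjustment)  # 0.3
```

The missing step was to sense-check this ratio against outside evidence, such as trial results, before letting it drive a cost-effectiveness estimate; a region whose raw data implies a near-zero ratio warrants scrutiny rather than acceptance.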
How we addressed our mistake (posted April 2024): We began taking the steps described in our writeup on uncertainty to address this issue. We are now notably more attentive to the data we aggregate to arrive at our estimates, thus ensuring that we don’t follow the (sometimes noisy) data we have without sense-checking the numbers.
Late 2020 to early 2022: Overestimated funds raised
How we fell short: In late 2021, we believed (and we wrote) that we would raise $1 billion annually by 2025. This was a massive overestimate (which we corrected in this mid-2022 post), and this mistake led to the following long-term problems:
In late 2021, we worried that our research might not be able to keep up with the volume of donations we expected. That is, we thought we’d raise significantly more funding than the cost-effective funding needs we would identify. Because we’re committed to being transparent with donors, we wrote that we were holding onto funds we had received (and that we expected to hold funds in the future) because we weren’t finding enough grant opportunities to give them to.
Unfortunately, the way we communicated about this led to a long-standing, hard-to-correct belief in our audience that we have more funding than we can spend.
Because we believed that we would raise so much money, we put significantly more attention on building our research team than on building our outreach team, leading to a further imbalance between the volume of highly cost-effective funding opportunities we’ve identified and our ability to raise sufficient money to fill those funding gaps.
How we addressed our mistake (posted April 2024):
We were very explicit publicly about two facts. First, we expect to find cost-effective programs to which we can direct all funding we receive. Second, the organizations we recommended are in fact funding-constrained.
We hired for senior roles across our outreach team to build outreach capacity so that we can raise more money and fill more of the most cost-effective funding gaps we find.
We previously shared another mistake related to this episode. For more detail, see below in the section titled “2021: Miscalculation of and subsequent miscommunication around rollover funds.”
April 2022: Failures of training and communication left us vulnerable to a crypto scam
How we fell short: In April 2022, we received an email requesting a refund of a cryptocurrency donation, and we decided to grant it despite our no-refunds policy. We later realized that this request hadn’t come from the real donor. We credited the real donor with the gift and lost $4,600, which we made up for by drawing on our unrestricted funding.
Cryptocurrency donations are especially fertile ground for scams because information about all crypto transactions is publicly available online, except for the identity of the person initiating the transaction. The email we got in this case largely fit the description of a common scam: a person claims that they’ve accidentally transferred a larger amount than they intended, often providing screenshots of public details of the transaction as "proof," and asks for a refund, though they didn’t actually make the donation themselves.
GiveWell had safeguards in place against this, including requesting that all crypto donors fill out a donation report form against which to verify such claims and maintaining a no-refunds policy (for all types of donations, but particularly for crypto). However, the donor relations staff handling requests like this were relatively new to their roles at the time and unfamiliar with this type of scenario, and they decided to override the no-refunds policy in light of what they felt was a straightforward request.
We think this mistake was largely caused by a failure of training and knowledge sharing with the new donor relations staff:
We had made exceptions to the no-refunds policy in the past, but we hadn’t adequately documented the specific and limited reasons for which exceptions could be made. We should have made these clearer in our internal training materials so new staff would be less reliant on judgment calls. We should also have communicated the no-refunds policy more clearly on our website.
Former donor relations staff had encountered this type of scam before, but we hadn’t included information about it in our training materials.
How we addressed our mistake (posted September 2022): To avoid this in the future, we did the following:
Provided extra training on crypto scams to the donor relations team and incorporated this information into our training materials for new staff.
Revised the cryptocurrency donation pages on our website to clearly highlight that crypto donations are non-refundable and that donation report forms should be submitted prior to a donation.
Circulated an internal memo clarifying our no-refunds policy for relevant staff.
Discussed our cryptocurrency donation practices with experts and implemented best practices for both straightforward and more complicated transactions to reduce the incidence of fraud.
If you are considering making a cryptocurrency donation and want to know more about the steps we take to prevent fraud, please reach out to donations@givewell.org.
2021: Miscalculation of and subsequent miscommunication around rollover funds
Rollover funds are funds that we raise in a certain year but choose not to spend in that year, instead “rolling them over” to the following year because we believe those funds will have a greater impact if spent in the future. For background on rollover funds, see the page we published here.
How we fell short: In November 2021, we announced that we expected to roll over about $110 million in funding to grant to future opportunities. We ultimately rolled over substantially less. We rolled over $18 million that was available for grantmaking as of the end of metrics year 2021 (i.e., January 31, 2022). We also carried over an additional approximately $40 million that was received in metrics year 2021 but was not yet available for granting; this was a combination of:
unrestricted funds that were designated by the Board for grantmaking in mid-2022, in accordance with our excess assets policy
donations given to the Top Charities Fund in January 2022, which were allocated alongside donations given to the Top Charities Fund in the rest of Q1 2022
While our forecast was roughly accurate about both funds raised and funds directed, we failed to define the question well enough to predict how much of our available funding we would have left over.
Much of the discrepancy came from:
Including funds given through GiveWell and designated for specific organizations (e.g., a donation given through our website for the Against Malaria Foundation) on one side of the ledger but not the other. These funds were granted out to the organizations to which they were designated, but we had erroneously been considering them as adding to the total amount of funds that would be available for granting at our discretion. This led to approximately