by Martin Jucker

We learn from our mistakes. It’s a bit of a cliché, but it’s also well supported by research on learning: we often learn more from our mistakes than from the things we got right. So how is it that in academia, where constant learning is the highest ideal, mistakes don’t get published?

I started to think about that question after I attended an overview lecture on the history of geophysical fluid dynamics by one of the most illustrious figures of climate dynamics, Prof. Isaac Held. He introduced his lecture by reminding us that “history is important, as we need to know history to learn from previous mistakes”. But, unfortunately, the entire lecture that followed was about the great minds and breakthroughs of our field, with no mention of mistakes. The reason is that academic mistakes do not get published. Thus, we can’t learn from them.

I think there are two main social reasons for not publishing anything that went wrong (I interpret the word “mistake” in a very broad sense here). First, we don’t want others to know we failed. Our whole “outside” life, and in particular professional life, is about showing our brightest self to the world, selling ourselves, and hiding any possible weakness. This is how we get jobs.

Second, we want others to lose time making the same mistakes. We lost precious time due to mistakes and trial-and-error while our peers went on and published shiny new results. So, once we’ve solved our problems and corrected our mistakes, we want to make sure everybody else trying to follow us encounters the same pitfalls. This is the main reason not to publish our data and, maybe more importantly, our code.

Then, there is a good scientific reason not to publish anything erroneous. The scientific community wants published results to be trustworthy, so that they don’t need to be repeated for confirmation. Rather, we want to build on previous results, whoever obtained them, to move forward and push the boundaries. This is why peer review is so important for research. Having potentially publishable work scrutinised by other experts is a powerful mechanism for ensuring the accuracy of published work.

However, publishing wrong results is very different from publishing mistakes, dead ends, or difficulties.

A downside of rigorous peer review is that material describing something that went wrong will not be accepted. The machinery is set up to separate positive results from negative ones, and to publish only the former.

For-profit publishing houses are not interested in publishing anything that didn’t work. They want their articles to be discussed in the media and in academic circles as the latest major breakthrough, the big study, the game changer, because they need to sell papers to make a profit.

This is surprising, because in publishing, bad news sells much better than good news, and humans react much more strongly and quickly to bad news than to good news. This is why the daily papers and newsfeeds are full of bad news: it’s what people read. So why is academia so different? I think it isn’t, and many researchers would be happy to know about paths not to take and approaches that should be avoided. But this information is passed along by chatting with people at conferences and workshops; it’s not something people read about.

As a result, one of the greatest benefits of working at a prestigious university is the personal network one gains, and with it access to unpublished research experience. Simply being able to chat with the person who published something interesting (rather than only reading the paper) can save years of unsuccessful research, because one can ask how the results were obtained and how to avoid time-consuming technical issues. This is, I think, socially unfair.

I believe that a much fairer process would be to publish research experience in addition to results, and make it available for the world to see. If we can make this happen, we can kill two birds with one stone: we can save funding agencies around the world millions, if not billions, of dollars (unfortunately, I couldn’t find a relevant study to cite here) in time lost repeating someone else’s mistakes. And we can give academic recognition to otherwise unpublished research experience and expertise by making it citable.

This is why I decided to found Iceberg (research-iceberg.github.io, @ResearchIceberg). It is a collection of contributions, much like a scientific journal, but not limited to traditional articles. Iceberg lives on GitHub, and offers open source, open access publishing (under a CC BY license) with a trackable and open review process. Every submitted paper will be reviewed on GitHub, and the final version will also receive a license-protected PDF with a citable Digital Object Identifier (DOI). If you don’t know how GitHub works, or don’t want to create yet another account, don’t worry: there is also a mechanism in place where you simply submit your manuscript and receive all correspondence via email, just as you do with any other journal.

The aims of Iceberg are to promote the dissemination of useful scientific knowledge and give academic credit to otherwise unpublished findings and expertise. It wants to publish the bulk of the iceberg, not the tip. Each contribution (or “paper”) can have a very different form, depending on its content. It could be seen as a science community blog, a collection of scientific logbooks, a platform of scientific exchange, a collection of manuals for experiments and code, or a knowledge base of best practices and things to avoid. It doesn’t have a fixed form, but simply a minimal structure to make it work. This way, it can be turned into whatever time and the scientific community want it to be.

Similar efforts have been undertaken around publishing scientific code. For instance, the Journal of Open Research Software (JORS) has been around for many years and has successfully published peer-reviewed papers that are openly accessible and citable. Some JORS articles are very successful, for instance Hoyer and Hamman (2017) or my own Jucker (2016), with Altmetric scores that place them in the top 5% of all research tracked by that metric (which includes the entire Web of Science catalogue).

There are also the open science and open research movements, but these again mostly concern the tip of the iceberg, namely published journal articles and the data and code immediately attached to them. They do not consider the bulk of the iceberg described above.

I am not aware of anyone building a platform similar to what Iceberg aims to achieve.

This is an experiment, but I believe the idea is important, and today’s technology lets me run it without costing me too much time. All that’s needed is some innovative thinking and an open mind.

Iceberg is open for business, and this is a general call for manuscripts from the entire research community. Let your peers know what doesn’t work and what you have tried and failed at, so that they don’t have to repeat the same mistakes. You will be grateful the day you can’t get a future experiment to work and find all the necessary information on Iceberg. Wouldn’t it be great if we could do more than just learn from our peers’ successes, and learn from their mistakes as well?