There is a certain beauty, and a corresponding reverence, to the scientific method. The “problem” is that humans are the ones using it: we are subject to biases and, most importantly, to incentive systems. I highlight a Wired article about a young billionaire who has declared “a war on bad science”. His foundation’s work seeks to shed light on misuses of the scientific method: cases where data was perhaps ignored and corners perhaps cut. Analysts know what I am talking about: brushing away data points that seem “incoherent” with a theme, weighting independent factors disproportionately… We have all been there, no matter how hard we fight to be 100% rational and intellectually honest.
I get repetitive about the number one issue on my mind at any given time: how do I know if and when I know something? How do I distinguish superficial from deep knowledge, and what does it take to get from one to the other? I have posted about this before and allude to it every now and then, and so does the brilliant Shane Parrish at uber-source Farnam Street Blog. His post on Richard Feynman’s famous TV special focuses on what it means to truly know something.
It may seem like a strange topic to get Buysiders.com rolling again, but it feels good to start writing, and this subject really caught my attention as an analyst who attends several events every year. It’s about a recent, preliminary study on the benefits (for learning purposes) of taking notes on paper vs. on a laptop. I am guilty as charged, but I also highlight the issue of lectures themselves vs. a more interactive learning experience.
Pretty good book list by Dealbook editor Andrew Ross Sorkin. I’d also recommend his own book on the 2007-08 crisis, “Too Big To Fail”, along with “The Essays of Warren Buffett: Lessons from Corporate America”, “When Genius Failed” and “Fooled by Randomness” as other can’t-miss books. I know I am leaving way too many great books unmentioned, so there are a few links inside to previous “reading list” posts on Buysiders.com (and a bonus for those who click through).
Almost two years ago I posted about spotting lying CEOs, which was a reader suggestion. Now the same reader (keep’em coming!) has sent me an article from the CFA Institute, based on the same research on spotting liars, also mentioning the researcher’s book. I also link to other Buysiders.com posts on related subjects.
I love getting reader suggestions, this one via LinkedIn and a first from this reader. Given the quality of the article, I hope for many more. His recommendation is a 2009 article by Jonah Lehrer in the New Yorker magazine about “that famous marshmallow study”. A great read that raises interesting questions.
Recurring themes in Buysiders.com: skepticism, intellectual honesty, independent thinking and so on. Seth Godin’s “False Metrics” post is useful to think about management and incentive systems, but I chose to focus just on the investment angle. It’s easy to fall into the trap of false knowledge or, in investment terms, “unearned convictions”. Those can kill you in the long run.
What does an article about the fringes of Physics – that “unexplored territory where truth and fantasy are not yet disentangled” – have to do with investing? Not much, but that’s OK for this blog as long as there are some building blocks or models to be taken away. In this case, the importance of balancing both an abstract imagination and a rigorous experimental discipline seems to apply to investing as well as science.
DLD 2012 has started today in Munich and runs until Jan. 24th. In it, people as diverse as Sheryl Sandberg, Arianna Huffington, the Dyson family and Hiroshi Mikitani share their views on what matters to them. The themes are varied and the program is packed with interesting talks and panels. In the age of multi-disciplinary events, this is one of the best.
Less than 24h after publishing our rant on economic models, we get John Kay’s brilliant piece in our inbox: “A wise man knows one thing – the limits of his knowledge”. It is the ultimate summary of the many dangers of modelling in general, not just in economics. Among those dangers we count over-complicating things, but most importantly over-estimating a model’s value as a predictive/forecasting tool. In fact, as the article argues and as we’ve seen countless times, we tend to over-estimate models even in their ability to analyze the past, especially if one is asking the wrong questions. We also cite a few quotes by Taleb and an article on Edge.org by Emanuel Derman.