The New York Public Library
[pic]
NASSIM TALEB & DANIEL KAHNEMAN
February 5, 2013
LIVE from the New York Public Library
live
Celeste Bartos Forum
PAUL HOLDENGRÄBER: Good evening. I would like to thank Flash Rosenberg, our artist in residence, who, together with our videographer, Jared Feldman, provide an afterlife for our LIVE from the New York Public Library events. And it is the afterlife of these conversations which intrigue me dearly, as you can see from the note we have included in your program.
My name is Paul Holdengräber, and I’m the Director of Public Programs here at the New York Public Library, known as LIVE from the New York Public Library. You have all heard me say this many times, for those of you who have come before. My goal here at the Library is simply to make the lions roar, to make a heavy institution dance, and if possible to make it levitate. I’d like to say a few words about the upcoming season. We will have Adam Phillips coming. George Saunders with Dick Cavett. Anne Carson. Supreme Court Justice Sandra Day O’Connor, William Gibson, Nathaniel Rich, Junot Díaz, Daniel Dennett, David Chang, and many, many others. We have just added two visual artists to this spring season, the ever-cool Ed Ruscha on March 6 and in May we will have Matthew Barney. I encourage you to join our e-mail list so that you know what is my fancy at the latest possible moment.
Daniel Kahneman and Nassim Taleb will be signing some books after their conversation. As usual I wish to thank our independent bookseller, 192 Books. Preceding the signing, there will be time, if the mood permits it, to take some questions, brief questions, only questions, only good questions. (laughter) No pressure. Now, I’ve said this also before. A question can be asked usually in about fifty-two seconds.
I have wanted to invite Nassim Taleb back to the New York Public Library to a LIVE program, and his most enticing new book, The Antifragile: Things That Gain from Disorder, seemed like the perfect occasion. Nassim and I have contemplated many, many times having a conversation onstage here about things French, particularly André Malraux, who we both like and feel has not gotten his share, and also we might want to talk about Michel de Montaigne or even Pascal. We may someday Nassim indulge ourselves, I promise you. I promise to try. But for now.
When I asked Nassim who his utmost desired interlocutor would be tonight, not his interviewer, but someone who would be in true conversation with him, he said without hesitation as if I knew who he meant, “Danny!” (laughter) He meant Daniel Kahneman, the 2002 winner of the Nobel Prize in Economics, professor emeritus of psychology at Princeton University, and the author of Thinking Fast and Slow. I’m so delighted that Daniel Kahneman agreed to this conversation, where I hope these two gentlemen will goad each other sufficiently.
Now, over the last five, six years I’ve asked my various guests to provide me a biography in seven words rather than reading their accomplishments, which are many in each case, to give me seven words which might define them or not at all. A haiku of sorts, or if one wants to be very modern, a tweet. Nassim Taleb provided me with this, which I think is tremendously enlightening: “A convexity metaprobability and heuristics approach to uncertainty,” “which is best explained,” and then he provided me a three-line link and I clicked on the link, hoping for some enlightenment, maybe some of you don’t need it, but upon opening the link I came upon a 395-page document, (laughter) which after the seven words mentioned, which were part of it, has the following, which I hope will help you: “Metaprobability consisting of published and unpublished academic-style papers, short technical notes, and comments.” I did not print the close to 400-page document, because I was hoping for some clarification tonight.
Daniel Kahneman sent me his seven words, but from the outset negotiated with me and asked me if I would accept five. (laughter) He had I’m sure more to offer, but he offered me five. That was a relief after the 395-page document. Here are the five words: “Endlessly amused by people’s minds.” Ladies and gentlemen, please welcome warmly to this stage Daniel Kahneman and Nassim Taleb.
(applause)
NASSIM TALEB: In the book Antifragile, I discuss something called the Lindy Effect, that time is a great tester of fragility, with the following law. When you see an old gentleman next to a young man, you know, based on actuarial tables, that the young man will outlive the old gentleman, but for anything nonperishable—like an idea, a book, a movie—it’s the exact opposite. If you see an old book, we just saw the Gutenberg, a 500-year-old book, quite impressive. If you see a book that has been in print say, 2,000 years, odds are you can guess it will be in print for 2,000 years. Been in print for ten years, ten years. If an idea has been in fashion for fifty years, then fifty years. That is why we’re using this glass, this three-thousand-year-old technology.
Danny’s ideas are forty years old. People think that Thinking Fast and Slow is a new book, and effectively, before you published your book, I was discussing it here, as exemplified, you can predict the life of an idea or the importance of an idea based on its age, so therefore I should be talking about your book, not mine, given it has forty years. This is a great way to show how time can detect fragility and take care of it. This is introduction, so, about why you should be one running the show and your ideas should be the ones—
DANIEL KAHNEMAN: Well, I’m not going to run the show because I think the focus of this conversation should be your recent book, I mean, mine is already old, I mean, it was out I think in October 2011. Yours is a lot newer, so let’s talk about what Antifragile is.
NASSIM TALEB: Okay, let me introduce the idea of antifragility with the following. I was an option trader before I met Danny. We met in 2002, 2003, and of course it changed a lot of things for me. I decided then to become a scholar, all right, right after meeting him, immediately, all right, I’m going to be a full-time scholar. It took me a couple of years to become a scholar and of course I went and I had a book that I had to revise.
Anyway, before that I was an option trader. And option traders aren’t very sophisticated people. They know two things: volatility and alcohol, all right? (laughter) So—and they classify things into things that like volatility and things that hate volatility. There are packages that like volatility, packages that hate variability, volatility, they call it long gamma or short gamma. That was my specialty. When I entered the scholarly world I realized that there was no name for things that benefit from volatility. “Robust” isn’t it, you see. Things that are robust, things that are adaptable, things that are agile, resilient, destructively resilient, creatively resilient, all these things, they are not the opposite of fragile, and they’re not the equivalent of things that gain from volatility, so I decided, figured out, that what is short volatility is fragile, this doesn’t like volatility, because if there is an earthquake in New York—you never know with, you know, Paul—there may be earthquakes here, but if there is an earthquake, this is not going to gain from the earthquake, you see? So it does not like disorder, it does not like volatility, it doesn’t like these things.
So I figured out that the fragile is a category of object and the opposite of fragile was not robust. The opposite of fragile had to be a different category. If I’m sending a package to Siberia, “fragile,” you know, you translate into Siberian or whatever Russian they use, “handle with care.” The opposite wouldn’t be, you know, a robust package, you write nothing on it, the opposite would be something on which you write, “please mishandle.” There’s no name for that category, so I called it antifragile, what benefits from volatility has antifragility, and I realized that somehow the people who would interview wouldn’t get it. When we shoot for something, we shoot for resilience. That’s not it. If you aim for resilience, you’re not going to do very well. So, you know, I decided to classify, and this book is about classification of things into the three categories: fragile, robust, antifragile. So Danny, there you go.
DANIEL KAHNEMAN: Well, I mean, you’re almost forcing me to define what antifragile is because you haven’t done it.
NASSIM TALEB: It gains from variability, disorder, stress, what else, stressors, harm, things that benefit from—
DANIEL KAHNEMAN: In an early chapter of the book you have a very long table with three columns: Fragile, Robust, and Antifragile, and you can pick any of the rows in that table and elaborate on it. For example, tourists and flâneur, one of your favorite words. Why are tourists—and there is something quite deep in that discussion.
NASSIM TALEB: I was an option trader. Options you like optionality, you see, you like uncertainty, you benefit from uncertainty, you like some disorder. When you’re a tourist, you’re on the track. If the bus is late, you’re in trouble. If you’re an adventurer, you benefit, you’re opportunistically taking advantage of uncertainty so therefore you’re in that category of antifragile, and if you’re robust you don’t care. So the idea of entrepreneurs are in the category of antifragile and the people who have you know very rationalistic thing, you’re put on track, you’re following a, you know, a certain code, you are fragile because if something breaks you’re in trouble. Nothing good can happen. Things can go wrong, but they can’t get better.
So the fragile has more downside than upside. Uncertainty harms them. Take a plane ride. Inject uncertainty into a plane ride. I just came from Europe, it was eight hours, ended up to be sixteen, so I was eight hours late. Have you ever landed in JFK coming from Europe eight hours early? (laughter) So you inject uncertainty the travel time extends. The opposite would be a system in which if you inject uncertainty, you have benefits, and that’s the antifragile, entrepreneurship. If you’re an adventurer, you like uncertainty.
And people, you know, couldn’t get it. And I have a way to explain it. Someone tells me, “what’s the difference with resilient?” I says there’s a big difference, if I buy insurance on my house in ten times the amount needed, I want an accident to happen. It’s—every day, every morning I’d wake up, I’d say there’s no trouble, earthquake, no nothing, I get paid ten times the damages. So that’s like inverse insurance. So insurance companies of course are short volatility are harmed by disorder, they are fragile and someone who has ten times insurance would benefit from uncertainty.
DANIEL KAHNEMAN: There are many ways in which robust and antifragile sort of contrast in your world that I’ve been trying to understand. So you’re opposed to—you’re in favor of decentralization, you’re opposed to planning, you are opposed to—
NASSIM TALEB: I’m not quite opposed to planning. I’m opposed to planning like you’re on a highway and with no exits. You suddenly have a problem you can’t exit, that’s what I’m against. I’m in favor of planning if you have optionality to exit. You see, an option benefits from uncertainty. You need options out, so there’s something technical. A five-year plan, like a five-year option is not as good as a series of one-year, very adaptable, you’re not, you can’t take advantage of changes in the environment if you’re planned.
DANIEL KAHNEMAN: So one of the points that you make which resonates with everybody is that “too big to fail” is actually the theme that you anticipated in your previous book, anticipated the crisis in—you know, one hates to use the word prediction, but you came as close as anyone I think to anticipating that crisis, so—but you take that very far. I mean you wouldn’t be satisfied with just breaking up the banks. You are really questioning globalization—you are questioning a great deal of what—there seems to be a logic within modern economy of things getting bigger, of people searching for economies of scale and people searching for power, and the drive for power and for economies of scale causing things to grow, and you are really, it seems to me, fundamentally opposed to it, or am I pushing you too far?
NASSIM TALEB: No, no, no, definitely. A system that gains from disorder has to have some attributes, and a system that’s not harmed from disorder, has to have some, follow some, has a certain structure. So let me explain, you know, the Greenspan story. The mistake we made, you know, ever since the Enlightenment, but it’s very exacerbated now because we’re more capable of controlling our environment. We want to eliminate volatility from life. We make the categorical mistake, I call it mistaking the cat for a washing machine. I don’t know if you own a cat or a washing machine, they’re very different. (laughter) The washing machine will never self-heal. An organism needs some amount of variability and disorder, otherwise it dies. It actually—an organism communicates with the environment via stressors. You see, you have stressors, you lift weights, your bones are going to get thicker. If you don’t do that, if you go on the space shuttle, you’re going to lose bone density. So you need stressors.
So people made the mistake of thinking that the economy is more like some machine that needs constant maintenance, will never get better, and like this, you put it on a table, okay, it will be harmed by any disorder. And the human body that needs a certain amount of variability. That huge mistake led us to micromanage the economy into chaos. Take forest fires. If you extinguish every forest fire, single forest fire, a lot of flammable material will accumulate, and the big one would be a lot worse.
So what Mr. Greenspan did, actually he’s writing a book, he probably will come here. What Greenspan did is he micromanaged the economy and had you given him the seasons or the nature, he would have fixed nature at 68.7 degrees year-round, no more seasons, so of course things got weaker and you had the pseudostability. Danny himself, you mention in your writings that when human beings are chicken, okay, usually, they claim to have a lot of courage, so they take risks they don’t know. They like to take risks they don’t understand, but when you show them variability and risk they get scared. We try to manage things, overmanage things, our health, the economy, a lot of things, into complete chaos and fragilize it. That’s a mistake, the first mistake is mistaking something organic for an engineered product, that’s the first mistake, and of course it has the psychological—there’s a psychology behind it, no?
DANIEL KAHNEMAN: Well, I think there is an interesting psychological issue. My sense is that people by and large prefer robustness to antifragility and that there are deep reasons for that. So what are you advocating is not intuitive. What you are advocating, it fits you very well. (laughter) I mean, it’s not—but that is because you know, you have what you call “fuck you” money.
(laughter)
NASSIM TALEB: Sorry.
DANIEL KAHNEMAN: You can afford to be antifragile and to live in that particular way. Not many people do. Not many people would want to. You like unpredictability, moderate unpredictability. Most people really don’t like it very much. They like much less unpredictability than you do. And some of your prescriptions push the boundary a lot. That is, this is less obvious perhaps in this book than in your previous one, The Black Swan. You are very much, well, you’re very much a standard economic profession and economic models and models of options, and people trying to predict the future. That is, for you, the attempt to predict is a sort of arrogance, and here I think both of us, this is something that we certainly agree on, both of us, this is something that we certainly agree on, it’s a sort of arrogance, you have a system it seems to me in which probability plays very little role in this book because you don’t believe we can do it, we can actually say much, it seems to me, about the future, we shouldn’t try about the things that really matter and so you have a system that would guide people by the outcomes, by the range of outcomes, so basically the major prescriptions is limit the losses, don’t limit the gains, it’s called convexity.
NASSIM TALEB: Exactly. I have two things to say here. Look at this coffee cup. We know why it is fragile and we can measure the fragility. I can’t predict the event that would break the coffee cup, but I know if there is an event, what will break first. The table will break after the coffee cup, or after the glass, so we can measure pretty much very easily fragility, and let me link it to size, and then I’ll talk about prediction. And, you know, to illustrate too big to fail.
I was trying to explain for a long time why too big to fail was not a good idea, why too large is not good, why empirically companies never become large, they go bust before unless governments prop them up, like the banks. I figured out something from Midrash Tehillim. There’s a story of a king who had a mischievous son. And he was supposed to punish the son, and the standard punishment was to crush him with a huge stone. I don’t know if you’ve had to crush your son with a huge stone, but it’s not a very—he was definitely looking—looked for a way to get out of that. So what did he do? What do you think he did?
He crushed the big stone into small pebbles and then had some coffee and then pelted the son with small pebbles, all right? So you see if you’re harmed by a 10 percent deviation more than twice a 5 percent deviation, you are fragile. If a ten-pound stone harms you more than twice a five-pound stone, you are fragile, it means you have acceleration of harm, so you can measure fragility simply through acceleration of harm, if harm accelerates, if I smash my car against a wall at fifty miles an hour, once, I’m going to be a lot more harmed than if I smash it fifty times at one mile per hour. Don’t try because—definitely a lot more than if I smashed it five thousand times at a one millimeter per hour speed, you realize. So that’s acceleration, you can figure out.
So from that, we can have rules of what’s fragile. We can measure fragility. And we can eliminate fragility, and we know that size brings fragility. For example as projects, Danny and a lot of his disciples have figured out that people tend to underestimate the cost of projects. It’s chronic. Projects tend to last longer. I don’t know if you had to renovate a kitchen, (laughter) but you probably will experience that, okay. It’s getting worse with complexity. One common friend, you know, I asked him to provide me with numbers, and we looked at it, we realized that in the UK a hundred-million-pound project had 30 percent more cost overruns than a ten-million-pound project. I mean, size brings some fragility. Which is why we don’t have that many elephants. An elephant breaks a leg much very quickly compared to a mouse. I don’t know if you have a mouse, but if you play with a mouse, it doesn’t care. An elephant breaks a leg very quickly. So this is why I don’t like size.
A decentralized government makes a lot of small mistakes. It seems messy, because you see a lot of mistakes. They’re in the New York Times front page every day, all right, so it scares people. A large centralized government doesn’t seem to make mistakes, it’s smoother, but guess what, when they make mistakes, we had two of them last decade, we had the guy who went to Iraq, about three trillion dollars so far and counting, and we had Mr. Greenspan, two big mistakes. When you have decentralization multiplies the mistakes, they’re smaller, pretty much like the pebbles, you know? They’re going to bother you, but they’re not going to kill you.
DANIEL KAHNEMAN: I want to raise some discomfiting questions. One of them, you know, goes back to something that you don’t emphasize in this book quite as much as you did in The Black Swan, but that’s your turkey example, and I think it’s important, so you tell the turkey story, and then I’ll respond to you.
NASSIM TALEB: In The Black Swan, there’s a turkey that is fed every day by a butcher and every day confirms to the turkey, to the turkey statistical department, to the turkey’s, you know, policy wonks, to the turkey’s office of management and budget, that the butcher loves turkeys with increased statistical confidence every day. And that goes on for a long time. There is a day in November when it’s never a good idea to be a turkey, which is T minus 2 we call it, all right, Thanksgiving minus two days, all right? So what happens, there’s going to be a surprise, it’s going to be a black swan, a large surprise event, but it would be a black swan for the turkey, not a black swan for the butcher. So this is the turkey story, and my whole idea from the turkey story to explain my black swan problem, which was misinterpreted for five years is the whole idea is let’s not be turkeys, that’s the whole point of The Black Swan. (laughter) Danny sort of, you know, has a psychological interpretation of the turkey problem.
DANIEL KAHNEMAN: I mean, I have a problem with it because when I look at this, at your story, I think the turkey has a pretty good life, until, you know. I think that this sort of worry-free life that the turkey enjoys until Thanksgiving, this is something that we aspire to. That is, people do want robustness, they do want predictability, they do dislike risk and this is very clear in your case, the focus being on extreme events, so you don’t put a lot of weight on however many days the turkey has to enjoy life without worry. You put a lot of stress on the disaster, and it turns out, and the same I think you have this as a general point about your orientation that you put a lot of stress on events that are very rare and extremely important, both the big ones—the good ones and the bad ones. You wrote several examples of bad ones.
Examples of good ones. You bring your own life story. Where I think it is fair to say that you’re a pretty wealthy man, and you made your wealth in two periods, in two brief periods of time, and most of the rest of the time you either broke even or lost some. And you had it set up so that in some sense it was predictable that this policy of waiting, of arranging things so that you left yourself open for a very large positive accident while preventing your losses from becoming very severe. That is clearly your ideal view of how to run things. Whether it is the ideal view for most people I’m not sure. That is, for most people we have a mechanism that will allow us, that makes us extremely sensitive to the small losses that you incur along the way in a way that may not be compensated by the extreme win, and the story’s similar on the other side.
NASSIM TALEB: Yeah, and effectively that’s what, this explanation is why you went to Stockholm in 2002, that’s exactly prospect theory. Danny discovered the following: you’d rather make a million dollars over a year in small amounts and get more pleasure. If you’re going make money, make it in small amounts, and if you’re going to going to lose money or have a bad event, have it all at once, because losing, you’d rather lose a million dollars in one day than lose a little bit of money, even a smaller amount, okay, over two years, because by the third day it’s like Chinese torture, and that’s prospect theory, that’s the reason he had the Nobel, it’s the reason he—you know, the whole thing. And people still haven’t absorbed that point of prospect theory, and when we met in 2003 I immediately found the embodiment of my idea right there in prospect theory and that equation.
DANIEL KAHNEMAN: In a way what you are, and if we link it to prospect theory, what you’re prescribing goes against the grain. That is, you are prescribing a way of being or a way of doing things that exposes an individual to a long series of small losses in the hope of a very large gain.
NASSIM TALEB: Not quite. What I’m doing is the opposite, trying to tell people, “do not mortgage your future for small gains now,” and they get it, 20 percent that way, and there’s another psychological thing that people tend to create a narrative that the large event will never happen, so—and there is another dimension of why do we have bankers, for example? They make pennies, they make pennies, they make pennies, and when a crisis happens, they lose everything they made before, and effectively in 1982, I was writing in The Black Swan, and I got so many hate letters from bankers. In 1982 I showed why the banks lost in one quarter more money than they’d made in the history of banking until then, history of money center banking, they repeated in 1991 and of course 2007 you need more examples, so and every time that there’s another dimension, it’s not their money they’re losing, it’s your money. Everyone here who files that piece of paper on April 15th is subsidizing them, because when they make the money, theirs, and when they lose the money, it’s ours on April 15th, taxpayers sponsored it, so we have here what I call a transfer of antifragility. The banker is antifragile, he has upside, no downside; taxpayer has downside no upsides, so you’re going to have a lot of risk-hiding on top of the psychological thing.
So and then you told me, you know, there is two effects, and I call them one, fooled by randomness and two, crooks of randomness, okay, so there’s the fools of randomness and the crooks of randomness, you see, it’s a combination that we see presently in society. If you removed one, the crooks of randomness, you would have much less of it, this of course phenomenon would prevail, although there is something called dread risk, people may be scared by extreme events if you present it to them properly as you have shown, but the phenomenon, crooks of randomness is what you definitely can remedy.
DANIEL KAHNEMAN: There is something else that troubles me, and is psychologically troubling, I think. You take to task and very severely in The Black Swan and, you know, you repeat it to some extent in Antifragile. You take to task people who attempt to predict in the economic domain because they do not predict the big events. You call them names. You call them charlatans and other.
NASSIM TALEB: They are.
(laughter)
DANIEL KAHNEMAN: And yet those people are quite popular, and their services are very much in demand and I—my image about that is that if you have a system of weather forecasting that does pretty well in the everyday. It does pretty well at telling you when to take an umbrella and when not to, but it completely misses out on hurricanes. You would reject that system altogether.
NASSIM TALEB: No, I mean, we definitely. When we get on planes, all right, when we get on a plane, we focus on the plane crashing, not on comfort. So it’s not like we’re going to have bad coffee on the plane. The risk is well defined, the plane crashing, not having an uncomfortable ride. So what you’re talking about is different. Someone is predicting an event, you have to predict the full span of events, you see, you have to be protected. And people understand it in the military, in the military where they routinely spend eight hundred billion dollars a year to deal with extreme events, we haven’t had a big war here for about fifty-some years, sixty years. So we have the problem of people who predict who are not harmed by their mistakes. So what are they going to do? They’re going to predict the regular very well, and when the extreme event happens, guess what, it was unpredictable, you see? Yet, you’re going to rely on it. And you showed in your research, actually I was inspired by it in the story of the Wheel of Fortune, how if someone flips the Wheel of Fortune randomly in front of you and then you get a number and makes you estimate any variable, whatever you’re going to estimate will be correlated to what you saw on the Wheel of Fortune.
DANIEL KAHNEMAN: That’s a phenomenon called anchoring. Any number that somebody puts in your mind is likely to have an effect. Well, if you consider it as a possible solution to an estimation problem, it’s going to have an effect on how you think about the problem, so if I give you, if I ask you are there more or less than well 65 percent African nations in the UN and then I ask you what is the proportion of African nations in the UN you are going to end up with a different estimate than if I first ask you are there more or less than 15 percent African nations in the UN. You know that those numbers are wrong, but they influence you. They tell you how to think about it.
NASSIM TALEB: The conclusion is that if I produce a forecast, I’m going to harm you, you see. No different—I call it iatrogenics, harm done by the healer. No different from when you go to pharmacy to get a drug, if it doesn’t work, it may harm you. Actually, it will harm you, so this is the fact of forecasting, it is harmful. And my idea is to build a society that can sustain forecast errors, so that charlatans can forecast all they want, they should not be able to harm us, that’s the idea.
How to build such a society? Less government debt, because when you have debt you have to be very precise about the future. You can’t make forecast errors, so you need to have less debt. You’re more robust when you have less debt. Actually, to become antifragile, first you have to become robust, less debt, decentralization, and the third one is elimination of moral hazard. How do you eliminate moral hazard? I call it the Bob Rubin problem. Bob Rubin made 120 million dollars from Citibank from hiding risk, or when Citibank was hiding risks and then when Citibank went bust, he didn’t show up to work with his checkbook you see to return the money, no, we’re paying for it retrospectively. So here we have truck drivers paying for his bonus. So we have to eliminate these three. If you eliminate these three, then we’re a lot better off. Then would come we sit down and look at what psychological problem we have left that we should monitor.
DANIEL KAHNEMAN: I think you know the problem of skin in the game and in a way what you’re saying there is more generally acceptable, that incentives have to be aligned. That would not be a very controversial statement, and you make it in a controversial way.
NASSIM TALEB: No, no, no, it is, because their definition of skin in the game is a bonus for a lot of people. They don’t understand. Skin in the game is not a bonus. Skin in the game is a bonus and malice. You have to be penalized by your mistake, small amount, nevertheless you have to be penalized. You know they used to behead bankers when they made a mistake. And the best risk management rule actually, we discovered that since Hammurabi. Hammurabi had a very simple rule, who is the best risk manager is you, or can be the best risk manager. If an architect, and I’m sure this architect took care of construction, if an architect or engineer builds a building and the building collapse and kills the owner, Hammurabi’s law, the architect is put to death. Why? Not because they wanted to kill architects, look, they build a lot of ziggurats, all right, what they wanted to prevent is risk hiding, because they realized that no cop, no regulator will ever know more about the risks than the person who created them and where are they are going to hide the risks? In tail events, in rare events that happen infrequently, so that’s why we have to have some disincentive, not very large. Even Adam Smith by accepting the limited liability company was not crazy about the fact that people could have no liability. He wanted people to always have some liability. So capitalism doesn’t work unless you have liabilities, and here we have a class of people, bankers, they got us in trouble and they made all the bonuses after. In 2010 they paid themselves a larger bonus pool than before the crisis. It’s crazy.
DANIEL KAHNEMAN: I don’t think, you won’t get an argument from me on this issue, (laughter) so you know we can argue in Antifragile this is robust because the implications of your argument are, I think, are costly. I mean, there is a cost.
NASSIM TALEB: They’re necessary. They’re beyond necessary. If I put someone, a child, completely deprived of stressors in a sterilized room for five years and then bring him here to visit the New York subway, you know, next door, how many minutes will he last? So obviously, if you’re not exposed to stressors, you’re going to be weakened, okay? We’re getting—we have a society that is obsessed by eliminating small stressors, small risks at the expense of large risks. And this manifests itself in a lot of things. The fact that we didn’t have a name for antifragile means half of life doesn’t have a name. It didn’t matter in the old days because we had stressors all the time. And now we control our environment, and we control the wrong thing about the environment. We try to make the ride comfortable but don’t eliminate the large risks. In fact, it’s the opposite.
In any field. We’ve been harmed, I call it denial of antifragility. Denial of antifragility, Greenspan, without the economy created no more boom and bust, it’s natural to have boom and bust in some amount. He created the big bust. Okay? I was discovering. There’s something called Jensen’s inequality. Jensen’s inequality means some properties of nonlinear response that makes you benefit from variability. Your food intake. If you eat food, if you always have the same amount of calories all the time you’re going to be injured, you know. If you stay on a chair, no stressor, all right, your back is going to become weaker. Stuff like that that you can generalize. So what I’ve done here is try to identify the blind spot we have today that matter today did not matter in the past because in the past, the environment was providing us with stressors.
Take, for example, temperature. It’s not healthy to stay at 68.7 degrees twenty-four hours, it’s not healthy. You need variability. We’re made for variability. We’re made for an environment where you had big thermal fluctuation. So you injure yourself and effectively now have a catalog of things, harm you’re getting, from not getting variability. Likewise in the economy. If you don’t have—think of the restaurant business. If you don’t have bankruptcies in the restaurant business, what would happen? Tonight we’re going out to dinner, all right? The quality of the meal is a direct result of the fragility of individual restaurants. A system, to be robust and antifragile, needs to recycle and improve the fragility of the components. Okay, otherwise, you know, look at Russia, restaurants didn’t go bust. I don’t know if you ate food in the seventies in Russia, but some people are still trying to recover from the experience. (laughter)
But I have one more thing to say. A system that does not convert stressors, problems, variability into fuel is doomed. Let me give you a system that’s perfectly adapted to converting stressors into improvement. The air transport. In the last seven years we had one plane crash, commercial plane crash. I mean, I’m not talking about individuals flying on weekends half drunk, plane crash for, you know, major airlines. Why? Every plane crash that has happened has made the next plane ride safer, lowered the probability of the next plane crash. Every plane crash. That’s a system that is overall antifragile where you benefit, you never let a mistake go to waste. Now, compare that to the banking system. A bank crash would make the probability of the next bank crash much more likely, all right, much higher. That’s not a very good system, all right? Now we can based on that compare things.
DANIEL KAHNEMAN: I keep going back to the same point. This is not really what people want to do. That is, I certainly—many of us certainly go almost directly in New York from heating to air conditioning and vice versa because we do like the constant temperature. So you are making a point which I think is true and deep that in some situations that we are made for variability, that is, we are designed by evolution to be able to cope with stressors and indeed as you—a point you made absolutely correctly to benefit from stressors. But we’re also designed to avoid stressors, to try to avoid stressors.
NASSIM TALEB: Identical to randomness. We’re made to hate randomness because the environment was giving us randomness and it prevented us from dying, prevented us from encountering the big, you know, large-scale event so we are made to hate all kind of randomness because we’re not, you know, fine-tuned, you know, for such subtlety. Okay, that some randomness in small amounts is good for us, so we realize randomness is bad, is bad, okay, so it’s the same way, the same way we think that stressors are bad, when in fact big stressors are bad, small stressors are beneficial. This is the nonlinearity that we don’t capture intellectually.
DANIEL KAHNEMAN: Well, the psychology of it is the following, that we’re actually relatively more sensitive to small losses than to big losses and to small harms than to big harms. That is, we have a limited ability, actually, to feel pain and we feel a lot of pain for very little harm and then it doesn’t get worse proportionally. So that in some, in a very real sense we’re designed against what you want.
NASSIM TALEB: Except that guess what saved us from this? Religion. I mean, I’m Greek Orthodox, all right, I’m not practicing, but sometimes practicing for dietary reasons, okay? (laughter) We—mean think about it, religions force you to fast, force you to have variability in your food, especially we have forty days for Lent, forty days plus every Wednesday and Friday, no animal—you’re vegan. So you’re vegan so many days, why? To prevent you from having protein, because we’re part lion and part cow. The lion in us gets the protein with a random frequency whereas the cow in us eats salad without dressing every day all the time, all right, so you see the boring and the hunter, so if you’re made to get protein episodically, intermittently, and you get it all the time, you may be harmed. Religions have evolved to prevent us from that by banning us from eating protein seven days a week, you see, you can look at the fasts in Ramadan has the very same purpose. So you see all these rituals were there to help us cope, force us on grounds of—religion is like someone packaging a story, you know, giving you a story in fact to force you to do something else, and that was a good thing, so we have had mechanism to correct for this—for the diseases of abundance. I mean, we live in a world today where a lot more people are dying of overnutrition than undernutrition, you see? We have seven hundred million people supposed to be underfed, but of these the really ill are a very small number.
DANIEL KAHNEMAN: I want to change the subject (laughter) and I want to tell the story of what I have learned from you, and it’s going to be only part because I’ve learned a fair amount. Nassim really changed the way I think about the world in quite significant ways by making me realize that fundamental unpredictability of truly important things in our lives, this idea, the Black Swan idea of rare events, extreme events dominating what actually happens in our lives, profoundly important insight and uniquely original, I think, and it certainly had a very large impact on how I think about uncertainty. The skepticism about professional predictions. The fundamental skepticism and there are really sort of two personalities and when you read Nassim’s books you’ll encounter several characters. Two of them I think are part of you, that’s Nero Tulip and Fat Tony and Fat Tony is quite an interesting character. He is a trader and he is a cynic and he really fundamentally irreverent. Nero Tulip is the intellectual, and he is very much that part of you and both of them are in you.
NASSIM TALEB: This is a psychologist for me.
DANIEL KAHNEMAN: He is also irreverent and he also doesn’t take nonsense but he’s very interested in ideas, whereas Fat Tony is not particularly interested in ideas at all and the skepticism of both of these, both of Fat Tony and of Nero Tulip, was very instructive for me. I mean, you are rude to economists, frankly, and it’s not you are more rude than you need to be, in my opinion. But there is something really refreshing and something very instructive in seeing a free spirit, and those two kinds of approaches, the approach of the trader and you keep emphasizing that, and then the approach of a scholar who is really a self-taught scholar. You’re not—you don’t really respect academics very much.
NASSIM TALEB: I do, I do.
DANIEL KAHNEMAN: Some of them. (laughter) Very selectively.
NASSIM TALEB: I’m an academic now. I fake it.
DANIEL KAHNEMAN: So that was up to The Black Swan and then I remember we had a conversation in which I said to him, “Okay, you’re raising fundamental questions and you’re creating or focusing our attention on the specter of unpredictability. What are we going to do about? What are you going to do about it?” And Antifragile I think is to some extent an attempt to answer that question; it’s not a question because I raised it.
NASSIM TALEB: Danny forced a subtitle on me, I don’t know what Nassim’s book, next book, is going to be called, and the subtitle will be How To Live in a World We Don’t Understand, so I had to get to work and I had to spend three and a half years locked up trying to work out the antifragile, nonfragile, and it became the subtitle of my UK edition. In the UK they want more strange subtitles, (laughter) in the U.S. they want precise subtitles. The idea was he said, okay I said, “If you know that there is unpredictability in some domains and can identify the domains where there’s unpredictability, you’re done.” He said, “No, no, you had to take the horse to water and now you have to make it drink.” So I had to come up with precise rules, so this one is a little more precise.
DANIEL KAHNEMAN: So that’s what emerges, and what emerges is fairly surprising, actually. The Antifragile is a set of prescriptions and it’s a set of prescriptions how when you have achieved antifragility you really don’t need to predict. I mean, I think that is a very fundamental, you don’t need to predict in detail, this is where probability doesn’t feature very much in this book. It features, you know, in the more technical discussions, but fundamentally, this is about outcomes. This is about making losses small and allowing for full-out gains. That is your recipe, it is avoiding the major crises so you don’t have to anticipate them, so that you don’t have to try to predict them because in fact they’re unpredictable. And the distinction between robustness and antifragility is in some sense really giving up on the possibility of prediction.
NASSIM TALEB: There’s something technical I want to mention here. There’s a domain that’s purely antifragile and it’s entrepreneurship, okay? And I was calling for National Entrepreneur Day, why? Because, you know, as you were saying, losing a little bit of money all the time to big gains isn’t part of human nature, except in California, it is. Where it’s respectable to fail, because you fail early, so you fail seven times before your big thing, so, collectively, you have thousands of people failing for every one succeeding and the person succeeding also has failed probably seven or eight times before, okay, so they deal with failure. But they have small upside. How did they do it? One, they made it respectable, and I want to make it more respectable.
But there’s a mathematical property that’s quite shocking that came out from options that I realized and I call it the Philosopher’s Stone and nobody’s getting the following: that trial and error isn’t really trial and error. Trial and error is trial with small error. If you have small error and big upside. What is antifragile has small losses, big gains. Trial and error has to have small losses, big gains, so if you model it you realize that to outperform trial and error intellectually you need at least a thousand IQ points, which, I mean, I’m sure, you know, you’re close, but even—nobody, you know, gets close to a fraction of that. So you realize that’s my main idea when I say that you’d rather be antifragile than intelligent anytime.
And you look at the data and your realize all the big gains we have had, in any field, almost any field, except the nuclear, and even in medicine except for AZT drugs came from trial and error by people who didn’t have much of a clue about the process. Trying. You try, you discover, you’re rational enough to know that what you found is better than we had before. And this is where, and in this Fat Tony story, Fat Tony, I’m just—like talking to a shrink, I realized there was a Fat Tony in me, Fat Tony is in a sense a self-character, I didn’t know I was Fat Tony, but now I realize that I am Fat Tony. I am Fat Tony half the day now. Things comes out, you know, in these conversations. Watch out when you talk publicly with a psychologist next time. (laughter)
So the Fat Tony—where I got this idea of Fat Tony from Nietzsche, I don’t know if you’ve heard of the notion of crazy disruption, but it’s in Nietzsche, Nietzsche had Dionysus, the Dionysian in us and he has the Apollonian. The Apollonian likes order, knowledge, serenity, harmony, and, of course, creditability and see things, and there’s that dark force that’s hard to understand, the Dionysian, all right, in us and he found that when the balance got disrupted with Socrates, so Fat Tony goes to argue with Socrates. So there’s two poles, Fat Tony who doesn’t believe in knowledge, he believes in tricks and no theory, but he’s doing, and you keep trying and keep trying until something works, and you get rich and then you go have lunch, and this is why he’s called Fat Tony, he has a lot of meals.
So he was arguing with Socrates and he was able to express that sentence that Nietzsche really understood. He said, “The mistake people tend to make is to think that whatever you don’t understand is stupid.” That was from Fat Tony, to say that the unintelligible is not necessarily unintelligent, and antifragility is harvesting the unintelligibles, is harvesting what we don’t understand and this is what was done. Take the Industrial Revolution, take California, you know, the Silicon Valley, take in medicine the discoveries, is harvesting the unintelligible, with small errors and big gains and doing it on an industrial scale. And the problem is education. That’s the only thing I don’t like about academia is one, had we put Bill Gates through an entire, you know, college experience we wouldn’t have had Microsoft, all right, okay, so the problem is the Industrial Revolution happened with people who weren’t really academics and then once we got there, then we wanted the state, you know, to create invention from top down, not bottom up. That’s the problem; education inhibits risk-taking, that’s my only point, it disrupts that balance.
DANIEL KAHNEMAN: Well, when you read Antifragile, some of you, many of you I hope have read it already, but what you see is that many of the concepts that we normally admire are questioned in the book, and the book even if you don’t completely buy the argument, because I think the argument is quite extreme, but even if you don’t completely buy it, it is bound to lead you to questions, to questions about the relative value of theory and practice as you identified, to questions about the value of planning versus trial and error. So we normally favor theory, we favor general understanding, we favor deductive reasoning over induction, that’s the way our values are. These are questions in this book. We normally tend to large size, and it is built in, this thing that you oppose, it is really built in that enterprises tend to grow and they tend to try to grow in part because of hubris, in large part, it turns out because of hubris. You have leaders who seek power, and they seek power by growing. The market wants organizations to grow. I mean, the value of stocks is not determined by a stable input, it is really determined by the anticipation of whether somehow that firm will grow, even when it is already gigantic, like Apple or Microsoft, its value is determined by the expectation that it will grow. So all of these issues, which all of these topics, which we normally accept and we normally consider fairly obviously the way to go in the modern world all of these are really questioned in Antifragile. That makes it—that makes it worth taking a look even if sometimes it—the book is going to make you quite uncomfortable. Hassim has that way. He sometimes makes his friends squirm, and it remains worthwhile. I think we should open it up.
(applause)
NASSIM TALEB: Thanks.
PAUL HOLDENGRÄBER: The rule is here that you come up to the mike and you look at the two distinguished speakers and ask a question which is less than a minute long. Go ahead.
Q: One of the things Mr. Taleb said can be conveyed in the way of an aphorism. The best risk management is management’s own risk. And recently a great scholar surprised me telling scholars resist aphorisms. So in aphorisms often can be wisdom. So my question is: Does it mean that modern scholarship is against wisdom, what is the relation between modern scholarship and wisdom and you write aphorism and Mr. Kahneman studies mind, so it might be you both might contribute to the answer.
NASSIM TALEB: There’s a long line, so let me answer quickly. You have—it’s just like this imbalance that Nietzsche was talking about. We need scholarship, but not too much. We need wisdom a lot more—I mean, we need street smarts, a lot more than we got. We got here from street smarts, and now what we’re shooting for is to preserve the system by transforming it into an academic thing. It doesn’t work that way. You need a balance between Dionysian Fat Tony and Nero Tulip, between the Dionysian and the Apollonian. Can we take another because we don’t have a lot of minutes.
Q: Thank you. This is primarily for Professor Kahneman, but both of you. Why do humans have such a hard time computing probability? It’s very for a child to learn to add, to subtract, to divide, and then the easiest probability—is there an evolutionary advantage to not being able to predict or compute probability?
DANIEL KAHNEMAN: Well, you know, there is a qualitative difficulty in thinking about probability, that you have to maintain several possibilities in the mind at once. And this is really something that turns out to be quite difficult for people. So normally, we have a story, it’s a single story, it’s a single story with a theme and thinking probabilistically in a proper way is extremely difficult for people. People do have a sense of propensity; that is, that a system has a tendency to do something or other, but that’s quite different from thinking about probability, where you do have to think of two possibilities and weigh their relative changes of happening.
Q: You had mentioned Alan Greenspan. What is your opinion of what the current Chairman of our Reserve is doing in expanding—
NASSIM TALEB: This is my nightmare. The one of my ideas of expanding—it’s my nightmare, because debt, we’re living off debt, government debt, centralization more than ever, top-down, government debt, my nightmare, Bernanke, my nightmare, I can’t see his—don’t tell me he looks like me, I can’t see his face. (laughter) My nightmare. He’s my nightmare. I think it’s a big problem. Paul Krugman and this guy are my two nightmares. (laughter)
Q: So there’s no way to resolve what they’re doing, any resolve—
NASSIM TALEB: The point is that you’re creating—you’re not even creating growth, if you print money, it should translate into growth, all right, so the GDP coming from printing money and GDP otherwise, what is it doing? The middle class—the middle class, the standard of living is dropping because income for middle class is dropping, they’re fragilizing the economy. The thing we should have done in 2008. We actually had a similar conversation where I was massively angry, and now look how polite and nice I am. In 2008—we were—2009, early 2009, I was shouting, I was angry, all right. We should have turned debt into equity because for the system, we should not have debt, we should have equity, and equity makes a system antifragile. Look at California, they don’t use debt, they use equity. So instead of doing that, we moved into turning private debt into government debt, and there’s nothing worse than government debt, they’re going to create inflation that they can’t control later, pay the price by—horrible stories. Can we have another question? No Greenspan.
Q: I want to thank you both for having us in your living room this evening. Despite the presence of diabetes and Walmarts, humans persevere, and we’ve been doing antifragility to some level successfully. Could you talk about the heuristics that has made antifragility sexy and let us continue to exist for so many years and also, I mean, you succeeded in correlating Black Swan to a very sexy movie, but antifragility doesn’t roll off the tongue and when I convinced my friends to live in an antifragile way, they’re like, “Oh, could we rebrand this in some way?” And we could use heuristics to do so. Could you speak to that?
NASSIM TALEB: The heuristics of course to achieve antifragility, that is the first one I call the barbell heuristic, is walking and sprinting is better than just jogging, having a very risky portion of your portfolio and very, very, very, very minimal risk on the other side is a lot better than having medium risk. Having in the UK government Conservative plus Liberal Democrats is a lot better than having Labour. Things—there’s some rules, I have about forty-nine explicit heuristics and about a hundred implicit ones of how you can inject antifragility in a system, okay, for example one instead of being an academic all the time be a lensmaker, you know, like a famous lensmaker and then writing treatises on logic at night you do better than being an academic 100 percent. That kind of stuff, so there are some heuristics and then also fasting, try to fast as much as you can, okay, to compensate. If you’re religious it may come automatically, if not, then all right, the problem is the environment that sustains us is harming us somewhere, making us live longer but sicker. You see?
Q: Thank you. In my way of thinking you had focused upon too big to fail. At least in my thinking and I’m an economist, I think too big to fail by definition is too big to exist and once you take a good look at that, I’m wondering what you think about the pension liability debt which I think will be far greater coming down the road and it’s really not being focused upon.
NASSIM TALEB: It’s a nightmare. Let me not be negative, let me propose something positive, all right. There’s one way to mitigate too big to fail that I propose is a very simple heuristic for governments. You take the list of companies. You say, if that company fails, is it a national emergency? Yes/no. If the answer is yes, then all employees should earn civil servant equivalent salary, all right. If it is not bailable out, they can do whatever they want, they can pay themselves whatever, it’s not our problem, taxpayers. If you have that rule, it would eliminate too big to fail, because too big to fail exists because they know you’re going to bail them out, so they take and they hijack the government as the banks did, too big, come, we’re too big, so you force them to be small by controlling their compensation.
Q: Hi, my name is [inaudible] I’m a recent high school graduate and author. My question is. I know you write a lot about in the book about autodidacts and lecturing birds how to fly. How do you think we can bring this autodidactic culture and tinkering back to our schools and our universities?
NASSIM TALEB: You have it automatically. When you see big scholars, they’re—it’s not like what they learned they learned in class, they’re autodidacts or big scholars, you can see it. The problem is at the level of—people start shooting for grades, that’s a problem, instead of using grades like driver’s licenses, you know, where you don’t try to shoot for grades beyond passing and then focusing their time on really what interests them, you see. The problem is racing for grades, make people optimize the system, exactly like we do with ratings, bank ratings, and stuff like that, people game the system, so you’re going to have the most stupid person in everywhere except for that grade, you see, eventually, just like athletic performances, so you should move scholarship away from a competitive sport, okay, into something more, you know, less measurable immediately or at least less tractable that way. But if you look at what we got, the individual scholar has always had a huge advantage over the institutionalized one in history.
DANIEL KAHNEMAN: I mean, you know, there is a theme here. Antifragility is a deeply conservative idea.
NASSIM TALEB: Burkean conservative.
DANIEL KAHNEMAN: Burkean.
NASSIM TALEB: Burkean conservative. Not conservative à la Bush, no, no, not that stuff.
(laughter)
PAUL HOLDENGRÄBER: We’re going to take now three questions.
Q: My question is actually for Daniel Kahneman. I know you poke fun at the irrational quite a bit and you sort of allude to it in your five words. I wonder if you could talk about in what ways creativity and intuition have benefited your career other than serving as a straw man for your research?
DANIEL KAHNEMAN: Creativity has never served as a straw man for my research and in fact intuition hasn’t either. I’m a firm believer in intuition. I have done a lot of researching explaining where intuition leads us astray. But intuition most of the time works extremely well for us. That book that I wrote on system one and system two, fast thinking is really how we live. System two is a commentary and system two sort of keeps us in shape but basically what keeps us moving is system one and it’s intuitive, and mostly it’s fine, occasionally it leads us into amusing errors, and I’ve spent my life studying those amusing errors, but you have to see the perspective of it and clearly the perspective is not the bumbling fools running around. You know, we are very well adapted to our world.
Q: Thank you for your presentation. We tend to look at the future as something ahead of us, before us. The ancients looked at the future as something that comes behind us and sort of carries us along. Is this really the essence behind what—Antifragility and Black Swan and Fooled by Randomness.
NASSIM TALEB: Let me mention now how to find a solution to all of these problems, try to go away from the story of narrative. Of course the Greeks had Prometheus and his brother Epimetheus and one guy was in the past, the other, the next was in the future. How was he in the future? He was in the future not through narrative but through optionality. Tinkering, trial and error, frees you from having a narrative, you see. So it’s trying to avoid not a narrative form of life, all right, opportunistically, taking advantage of things in a structured way. So we need a huge amount of reason but at the same time you don’t need a narrative. It’s much harder to live that way but it’s much more rewarding, much safer.
Q: Apologies. I’m wearing a tie, so I’m a little bit nervous. Just a brief question. In Antifragility you mention something terrible about antifragility around sort of the weak at sort of the expense of the strong. Have you been able to reconcile that?
NASSIM TALEB: Yes. With the barbell. You should protect the very weak, protect the very weak, help the risktaker take more risks and worry less about the middle, which is not very popular, but overprotect the weak. This is why the point is you have to allow people to fail. National Entrepreneur Day is part of this idea, is that to encourage people to become risktakers. What does the government do? They bail out the corporations, bail out the strong. No. You should bail out individuals not corporations, by giving them a backup to take more risks in life, because we got here thanks to risk taking, complex risk taking, optionality and stuff.
DANIEL KAHNEMAN: We should limit it I think to the next two questions.
Q: At any rate, it seems to me, though I’ve read neither of your books, that this was billed as a discussion of making decisions. I have—I think I have been able to understand about half of what you’ve said, and on that half, I would like to ask a question that is not a financial question and not an economic question ad not a business question but a real-life question at least to me and I suspect to many here. How does your—how do your philosophies help somebody who is trying to think about gun control?
(laughter)
DANIEL KAHNEMAN: You know, you can have intellectuals writing books, they don’t have answers to all questions, and I don’t think that either Nassim or I have answers to the gun control questions. We have opinions but I don’t think that anything we’ve said has a direct line to—we don’t have solutions to all decisions.
Q: Change the people’s consciousness first. Yeah, quick question, gentlemen, can you please tell me what do you think what the philosophical mind is, the true philosophical mind is?
DANIEL KAHNEMAN: I didn’t hear the question.
NASSIM TALEB: What the true philosophical mind is.
Q: It doesn’t necessarily have to be a scholar or something, just as a human being, how would you describe a philosopher?
NASSIM TALEB: I don’t know. I personally don’t answer these.
DANIEL KAHNEMAN: I would say that you’re closer to being a philosopher than I am—too hard a question.
NASSIM TALEB: I have something related I may, it’s the last question—something related. There’s two forms of knowledge. My idea of antifragile, a lot of people mistook it as coming from some narrative based on past historical data. No. I use history for examples, but there’s something called an a priori link that you can figure out in your armchair. From necessary relationships, necessary mathematical relationships. If this is fragile, it has to hate volatility. Okay, necessary relationships. It has to be more harmed by, you know, increasingly more harmed as deviation gets larger, so this is how—
DANIEL KAHNEMAN: A philosophical mind is someone who looks at life and looks for patterns and sees them and see large patterns, you know, we’ve been exposed to a fair amount of that this evening. This is a philosophical mind. And you are the last one.
NASSIM TALEB: Practical.
Q: Nassim, this is a question for you. During Hurricane Sandy, New York City basically ran out of gasoline and it appears that the gasoline distribution system is fragile. I can imagine what a robust system would look like for gasoline distribution, you know, the gas stations would not have run out of gasoline.
NASSIM TALEB: Let me answer up to this point.
Q: You see where it’s going, right? So my question is, I can imagine what a more robust system would look like you know with more stockpiles or something. Does it even make sense to talk about an antifragile system for say gasoline distribution, and are there other systems or industries where it just doesn’t make sense to talk about antifragility?
NASSIM TALEB: Okay, an antifragile system improves from stressors. So visibly if they improve from stressors, they have a structure that probably has some amount of antifragility, but you need to be robust, so the problem you moved toward robustness, and robustness requires some kind of redundancy to have inventory. People think it’s silly to have cash in the bank, you’re more robust if you have cash in the bank, see, and actually even antifragile, because you can capture opportunities that way, so you see the idea of having more—I wrote in the book, actually. I wrote in this book, about if you have excess inventory, people think there are costs, okay, if you have excess hummus in your basement, okay, the kind of inventory I have in my basement, being Lebanese, you know, people think it’s a cost. It’s not a cost, to have extra stockpile for companies, because if there’s a crisis, there’s a squeeze and other people don’t have what you need, the price of the commodity shoots up massively and you can sell it, so having inventory is antifragile.
DANIEL KAHNEMAN: I would add by, I would have answered, if I were you, I was expecting you to give a different answer. The antifragile, antifragile would not be at the level of gasoline distribution, it would be at the level of transportation. I mean, gasoline is for a purpose and the more various, the more varied the means of transportation are, the more antifragile you’re going to be.
NASSIM TALEB: You give a better answer—we should have horses and bicycles! Thanks.
(applause)
................
................
In order to avoid copyright disputes, this page is only a partial summary.
To fulfill the demand for quickly locating and searching documents.
It is intelligent file search solution for home and business.
Related searches
- new york public school calendar 2019 20
- new york public school jobs
- new york public schools employment
- new york public schools calendar 2019
- new york public schools jobs
- new york public school calendar 2020 2021
- new york public school holidays
- new york public service commission
- new york public service commission filings
- new york public court records search
- new york public school dashboard
- new york public schools closings