Sunday, April 29, 2012

Economic Research per country and US state

A few posts ago I showed a world map of economics departments that I made using online software. Today I want to show the next map that occurred to me (made with the same software and data source). Here I split the USA into states, and research output was measured using the aggregated impact of academic papers. It can be seen that many US states are, by themselves, ahead of most of the world in economic research.

Countries map US-World Economics

It would be great to know which variables can explain a state's or country's amount of research. I guess GDP may be a basic explanatory variable. So, if we rank all countries and states by GDP (PPP) and by economic research output and plot the two ranks, we get the following chart.

Economic Research vs GDP

Dots on the black line are countries ranked in the same position for GDP and for economic research. Dots above the black line are doing more economic research than we would expect given their GDP, and the opposite holds for dots below it. The District of Columbia is arguably a special case, but Israel, Switzerland, Connecticut, Missouri, Massachusetts, ... are examples of overachievement in economic research given their income. Japan and Texas are in the underachievement zone.

To be honest, I think this is misleading, because countries like China and India have a huge GDP but are basically very poor, so we shouldn't expect much economic research from them. Using GDP per capita wouldn't solve the problem either, because Luxembourg is rich but too small.

So, the following chart only includes the richest countries/states (excluding small ones like the District of Columbia) and its regression line. Note that the axes are no longer ranks but simply ln(GDP PPP) and the Economic Research value (the lower, the more they research):

enlargement 2 of GDP vs Eco.Res.

Sunday, April 22, 2012

One thing you should never predict is the future

The following piece is my first post at New Economic Centre, a friend's blog:

Sala-i-Martin, a Columbia University professor, wrote not long ago that if you want to know how the economy is going to perform in the coming years, you should not ask an economist but a fortune-teller.

The truth is that economists are usually mistaken for fortune-tellers. Economists' main duty should not be long-term forecasting but analysing the effectiveness of policies. Like weathermen, some economists make short-term forecasts, but those rest basically on the principle that 'what lies ahead must resemble what's left behind'. That is, they use statistical models that predict the future from the past, which is like driving a car while looking in the rear-view mirror. In short, economists are as ignorant of future events as anyone else.

Yet there are exceptions to any rule. In particular, I found two exceptions worth mentioning: Paul Krugman's masterly 1996 article in the New York Times, and Alan Blinder's 2005 academic paper. Both forecasted the same future events, but to do so they used not a crystal ball but basic economic principles. Here I just want to focus on their prediction about the end of higher education. This prediction, they suggest, will come about as a result of two economic events or factors.

The first event is about the information age and its importance. Krugman disagreed with all those prophets who argued that information would be a key sector: “In general, when the economy becomes extremely good at doing something, that activity becomes less, rather than more, important. […] When something becomes abundant, it also becomes cheap. A world awash in information is one in which information has very little market value.” Today’s world is supremely efficient at growing food; that is why it has hardly any farmers. The future world, and to some extent the present one, is supremely efficient at processing routine information; that is why traditional white-collar workers are going to virtually disappear. Many of the jobs that once required a college or postgraduate degree will be eliminated. College provides knowledge and information to its students, but since, as has been said, information is going to lose its value, computers, which are proficient at analysing and processing information, will replace white-collar professionals.

The second event concerns the possibility that human analysts still play a (small) part in the information sector. If that future happens, the white-collar workers of Europe or America won’t have an opportunity either: information will be transmitted easily to poor countries and analysed there for a fraction of the cost in Boston or London. This is what has been called downsizing and outsourcing. Both are already affecting college-educated, white-collar workers for the first time, and will affect them even more in the future. Alan Blinder gives us a clue as to why this is happening: “…because technology is constantly improving, and because transportation seems to grow easier and cheaper over time, the boundary between what is tradable and what is not tradable is constantly shifting. Over time, more and more items will become tradable. Many services are now tradable and many more will surely become so”.

Consequently, wage trends are clear: educated jobs will diminish. In relative terms, personal, face-to-face jobs (that is, jobs that cannot be delivered electronically) will see their wages increase. Face-to-face jobs like paranursing, waiting tables, firefighting, policing, carpentry, household maintenance and so on “[will] pay nearly as much as if not more than a job that requires a master's degree, and pay more than one requiring a Ph.D.”

This should be rather obvious already: Steve Jobs and Bill Gates were college dropouts, Krugman argued. He finished by saying that white-collar, college-educated workers will be fired in large numbers, even while skilled machinists and other blue-collar workers are in demand. This will signal that the days of ever-rising wage premiums for people with higher education are over. Without a return on students' investment, higher education will lose its clients, and without clients universities will disappear or, in the best-case scenario, become what they were back in the 19th century: a club for the children of the rich, a social institution “to refine their social graces and befriend others of their class.”

So, in conclusion, “education, full stop, cannot be the answer anymore,” Blinder says. Don’t get too scared; Dr. Blinder also offers a solution: “Want to get ahead today? Forget what your parents told you. Instead, do something foreigners can’t do cheaper. Something computers can’t do faster.” For example, playing a beautiful piece of music live cannot be done faster or abroad.

Sunday, April 15, 2012

Rats and Easterly

I must confess, William Easterly is probably my favourite economist. He's an expert in development economics with a scientific approach. There are others, but Prof. Easterly combines deep theoretical and empirical knowledge of development economics with a gift for turning difficult concepts into easy ones. An old article that I found in my own library, available here, is a good example. Below is a summary of his article. "Laboratory experiments show that rats outperform humans in interpreting data," he writes. The experiment appears in an amazing book by Leonard Mlodinow.
The experiment consists of drawing green and red balls at random, with the probabilities rigged so that green occurs 75 percent of the time. The subject is asked to watch for a while and then predict whether the next ball will be green or red. The rats followed the optimal strategy of always predicting green (I am a little unclear how the rats communicated, but never mind). The human subjects, however, did not always predict green: they usually want to do better and predict when red will come up too, engaging in reasoning like “after three straight greens, we are due for a red.” As Mlodinow says, “humans usually try to guess the pattern, and in the process we allow ourselves to be outperformed by a rat."
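The arithmetic behind the rat's advantage is easy to check with a quick simulation (my own sketch, not from the article or the book): always predicting green is right 75% of the time, while "probability matching" (predicting green 75% of the time and red 25%) is right only 0.75² + 0.25² = 62.5% of the time.

```python
import random

random.seed(0)

N = 100_000
draws = [random.random() < 0.75 for _ in range(N)]  # True = green, 75% of draws

# Rat strategy: always predict green.
rat_correct = sum(draws) / N

# Human "probability matching": predict green 75% of the time, red otherwise.
human_correct = sum((random.random() < 0.75) == d for d in draws) / N

print(f"always-green accuracy:         {rat_correct:.3f}")   # ~0.750
print(f"probability-matching accuracy: {human_correct:.3f}")  # ~0.625
```

Guessing the pattern costs the humans about 12 percentage points of accuracy.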
Unfortunately, spurious patterns show up in some important real-world settings, like research on the effect of foreign aid on growth. Such research looks for an association between economic growth and some measure of foreign aid, controlling for other likely determinants of economic growth. Of course, since there is some random variation in both growth and aid, there is always the possibility that an association appears by pure chance. The usual statistical procedures are designed to keep this possibility small. The convention is that we believe a result if there is only a 1 in 20 chance that the result arose at random. So if a researcher does a study that finds a positive effect of aid on growth and it passes this “1 in 20” test (referred to as a “statistically significant” result), we are fine, right? Alas, not so fast. A researcher is very eager to find a result, and such eagerness usually involves running many statistical exercises (known as “regressions”). But the 1 in 20 safeguard only applies if you did ONE regression. What if you did 20 regressions? Even if there is no relationship between growth and aid whatsoever, on average you will get one “significant result” out of 20 by design. Suppose you only report the one significant result and don’t mention the other 19 unsuccessful attempts. You can easily do twenty different regressions by varying the definition of aid, the time periods, and the control variables.
This practice is known as “data mining.” It is NOT acceptable practice, but this is very hard to enforce since nobody is watching when a researcher runs multiple regressions. It is seldom intentional dishonesty by the researcher. Because of our non-rat-like propensity to see patterns everywhere, it is easy for researchers to convince themselves that the failed exercises were just done incorrectly, and that they finally found the “real result” when they get the “significant” one. Even more insidious, the 20 regressions could be spread across 20 different researchers. Each of these obediently does only one pre-specified regression, 19 of whom do not publish a paper since they had no significant results, but the 20th one does publish their spuriously “significant” finding (this is known as “publication bias.”)
So imagine 20 researchers, each running 20 regressions. That's 400 regressions, of which, even with no true relationship at all, about 20 would come out "statistically significant" by pure chance and about 380 would not. We could then end up with published papers claiming a significant relation, while the unsuccessful attempts remain unfairly unpublished.
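The "1 in 20" arithmetic is easy to verify with a quick simulation (my own sketch, not from Easterly's article): regress a pure-noise "growth" series on 20 independent pure-noise "aid" variables, and check how often at least one regression comes out significant at the 5% level. Theory says the chance is 1 - 0.95²⁰ ≈ 64%.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)

n_obs, n_regressions, trials = 50, 20, 2000
hits = 0  # trials in which at least one of the 20 regressions looks "significant"

for _ in range(trials):
    growth = rng.standard_normal(n_obs)
    significant = 0
    for _ in range(n_regressions):
        aid = rng.standard_normal(n_obs)  # unrelated to growth by construction
        p = linregress(aid, growth).pvalue
        significant += p < 0.05
    hits += significant >= 1

print(f"P(at least one spurious 'significant' result): {hits / trials:.2f}")
# theory: 1 - 0.95**20 ≈ 0.64
```

So a researcher who quietly runs 20 specifications has roughly a two-in-three chance of finding something to report even when there is nothing there.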
But don’t give up on all damned lies and statistics: there ARE ways to catch data mining. A “significant result” that is really spurious will only hold in the original data sample, with the original time periods and the original specification. If new data become available as time passes, you can test the result on the new data, where it will vanish if it was spurious “data mining”. You can also try different time periods, or slightly different but equally plausible definitions of aid and the control variables.
Unfortunately, journals are not keen to publish such replication papers.

Tuesday, April 10, 2012

World map Wikipedia

This is probably one of the most amazing maps I have ever seen, made by TraceMedia in collaboration with the Oxford Internet Institute. It is an interactive map of the 7 million articles of the world's encyclopedia, Wikipedia. You can select a language and hit search to see the world distribution of its articles. You can click on each dot, too. Just brilliant.

Sunday, April 1, 2012

Would I lie to you?

Do I lie to you? According to a recent paper by Raúl López-Pérez and Eli Spiegelman, I do. The authors carried out experiments to find out which personality traits and other variables may explain why some people are more inclined to lie than others. The results are surprising. For instance, gender* and religiosity have no predictive value, contrary to what previous studies found. The most significant variable is the major of study: apparently, those who studied Economics or Business (E&B) tend to lie significantly more than other people.

Another covariate that correlates very well with one's honesty is the expectation of others' honesty:
expectations are highly predictive of behavior: an increase in reported expectations of other people’s dishonesty decreases the probability of an honest choice, roughly one-for-one
However, even after taking expectations into account, an E&B major still increases dishonesty:
This is true even after controlling for subjects’ beliefs about the overall rate of deception, which predict behavior very well: Although B&E subjects expect most others to lie in our decision problem, the effect of major remains.
In other words, previous studies tell us that people lie about as much as they think others lie; but if your major is E&B, you add a few more lies on top of that.

This is already embarrassing, but there is more. You could argue that there is an endogeneity problem here: maybe those who are more prone to lie are exactly those who end up studying E&B. Apparently not. López-Pérez and Spiegelman used an instrumental variable (political position) to get rid of this effect, and the results show that, indeed, an E&B major has a significant effect in increasing the amount of lying.

[FYI, an instrumental variable is a method used to estimate the causal effect of some variable x on another y. An instrument is a third variable z that is correlated with x but affects y only through its effect on x.]
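To see the idea in numbers, here is a minimal simulated sketch (my own, with made-up coefficients, not the authors' specification): an unobserved confounder u makes the naive OLS slope of y on x biased, while the simple instrumental-variable (Wald) estimator cov(z, y)/cov(z, x) recovers the true effect, because z moves x but is unrelated to u.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 2.0  # true causal effect of x on y

u = rng.standard_normal(n)  # unobserved confounder (e.g., lying propensity)
z = rng.standard_normal(n)  # instrument: shifts x, unrelated to u
x = 0.5 * z + u + rng.standard_normal(n)   # x (e.g., choosing the E&B major)
y = beta * x + u + rng.standard_normal(n)  # outcome, also pushed up by u

# Naive OLS slope: biased upward because x and the error term share u
ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# IV (Wald) estimator: cov(z, y) / cov(z, x)
iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"true beta: {beta}, OLS: {ols:.2f}, IV: {iv:.2f}")
```

With these coefficients OLS lands around 2.44 while the IV estimate comes back close to the true 2.0, which is exactly the kind of check the authors' political-position instrument provides.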

Finally, it may be worth noting that Raúl López-Pérez, Eli Spiegelman and I are all economists.

* There is a tricky aspect to gender: López-Pérez and Spiegelman found that gender is not significant once majors are controlled for. However, E&B majors are usually chosen more often by men than by women.