Wednesday, April 27, 2016
This month I found this very interesting paper by Aghion, Algan, Cahuc and Shleifer (AACS) published in 2008.
For a long time, it has been argued that regulation hampers economic performance; these authors, however, go one step further: they argue that regulation also hinders social capital, i.e. trust in others.
The authors developed a model where there are two equilibria: a good one with a large share of civic individuals and no regulation, and a bad one, where a large share of uncivic individuals supports heavy regulation.
As the authors put it “when people expect to live in a civic community, they expect low levels of regulation and corruption, and so invest in social capital. Their beliefs are justified, and investment leads to civicness, low regulation, and high levels of entrepreneurial activity. When in contrast people expect to live in an uncivic community, they expect high levels of regulation and corruption, and do not invest in social capital. Their beliefs again are justified, as lack of investment leads to uncivicness, high regulation, high corruption, and low levels of entrepreneurial activity”.
The consequences of distrust are tremendous: “distrust generates demand for regulation even when people realize that the government is corrupt and ineffective; they prefer state control to unbridled activity by uncivic entrepreneurs”. The most fundamental implication of the model is the self-reinforcing loop:
Distrust -> demand for regulation -> regulation -> corruption & unfairness -> distrust
The causality runs both ways: from distrust to regulation and from regulation to distrust.
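The two-way feedback can be made concrete with a toy simulation. This is a minimal sketch of best-response dynamics inspired by the AACS story; the functional forms and parameters below are my own illustrative assumptions, not the paper's actual model:

```python
import math

def step(s, k=10.0, cost=0.5):
    """One period: regulation tracks the uncivic share, and the payoff to
    investing in civic capital falls as regulation rises (assumed forms)."""
    regulation = 1.0 - s                # an uncivic majority demands regulation
    payoff = s - cost * regulation      # assumed net benefit of being civic
    # Logistic best response: a higher payoff means more people invest in civicness
    return 1.0 / (1.0 + math.exp(-k * payoff))

def simulate(s0, periods=50):
    """Iterate beliefs about the civic share until they settle."""
    s = s0
    for _ in range(periods):
        s = step(s)
    return s

low = simulate(0.2)    # pessimistic beliefs -> uncivic, high-regulation trap
high = simulate(0.8)   # optimistic beliefs -> civic, low-regulation equilibrium
```

Starting from pessimistic beliefs the dynamics settle near zero civicness (heavy regulation); starting from optimistic beliefs they settle near full civicness (light regulation), mirroring the paper's two equilibria.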
This model reminds me of Alesina (2004) (or here), where “If a society believes that individual effort determines income, and that all have a right to enjoy the fruits of their effort, it will choose low redistribution and low taxes. In equilibrium, effort will be high and the role of luck will be limited, in which case market outcomes will be relatively fair and social beliefs will be self-fulfilled. If instead a society believes that luck, birth, connections and/or corruption determine wealth, it will tax a lot, thus distorting allocations and making these beliefs self-sustained as well”.
AACS also state that liberalization in a low-trust environment, such as the ex-Soviet bloc in the 1990s, triggers a rise in corruption at a given level of regulation, leading people to demand re-regulation.
A liberalization shock in a regulated economy doesn’t improve people’s trust, quite the opposite, it increases corruption and as a result a demand for re-regulation.
The origins of distrust and regulatory systems are set in history and deep-rooted in countries’ cultures. Hence, a change in regulation will not by itself bring a change in trust or corruption. This is the opposite of what Alesina argued in his paper, and in that sense this new view is a much more deterministic model.
The cross-country data on distrust come from the World Values Survey, while the data on regulation come from La Porta et al. (2002), Djankov et al. (2002), Botero et al. (2004) and Aghion, Algan and Cahuc (2008).
Tuesday, March 1, 2016
This week I saw two graphs that summarize quite well how economics has changed in recent times. The first one, seen in Bloomberg View but originally from this paper, shows the trend in academic economics journals to publish more and more empirical, data-intensive papers instead of theoretical ones. The trend has been observed since the 1980s, but today it is much more evident. The twist is that in the last decade or so, empirical papers using the authors’ own data rather than borrowed data (i.e. data from the IMF or World Bank) have become a majority. Field-experiment and own-data empirical papers now represent 42% of the total; only 20 years ago they were barely 11%.
That shows the major change economics has been through, from a theoretical field to an empirical one. Economics has become a social science based on experiments and data, alongside the likes of biology and the natural sciences.
Obviously, a field that uses data and experiments requires a good amount of statistics and mathematics. Their use in economics has increased dramatically, and as a consequence they are a prerequisite for any economics student-to-be. The second chart points in that direction: it shows the results of the GRE quantitative test by field of study. The GRE is a standardized test that is an admissions requirement for most graduate schools. For those who don’t know, a graduate school is a school that awards advanced academic degrees (i.e. master’s and doctoral degrees).
Source: Julie Posselt Twitter account
The GRE scores distribution of economics students is closer to those of physics than to any of the other social science subfields.
Monday, February 22, 2016
I recently read a critique of the economics field, by Timothy Garton Ash, based on the field’s supposed failure to meet the basic principles of science. The author stresses the need in economics for more humble and modest assertions, given the uncertainty of the field.
I went to Wikipedia to find the definition of science and it says “Science is a systematic enterprise that creates, builds and organizes knowledge in the form of testable explanations and predictions about the universe.”
And it goes on:
“Popper proposed replacing verifiability with falsifiability as the landmark of scientific theories, and replacing induction with falsification as the empirical method. Popper further claimed that there is actually only one universal method, not specific to science: the negative method of criticism, trial and error. It covers all products of the human mind, including science, mathematics, philosophy, and art. […] A scientific theory is empirical, and is always open to falsification if new evidence is presented. That is, no theory is ever considered strictly certain as science accepts the concept of fallibilism.”
This is probably one of the main deficiencies of economics as a science: the lack of replicability. Not that it can’t be done, but there is no incentive whatsoever for economic researchers to do so. That creates a big black hole: without new evidence there is no falsification.
So, when Andrew C. Chang and Phillip Li decided to start digging into replicability and falsifiability in economics, this is what happened:
Sunday, February 14, 2016
David Cuberes and Jennifer Roberts published this paper in October last year. It deals with the geographic distribution of household income within the main British cities (excluding London).
They study the extent to which distance of residence from the city centre is a function of income; this is, apparently, the first study of its kind for British cities. They control for the main potential factors influencing location, such as household characteristics and access to transport, as well as city and time effects, and they account for both spatial and serial correlation.
They state four main findings: (i) there is a strong positive association between household income and distance from the city centre; (ii) there is no evidence that richer households locate further from the city centre mainly because they prefer larger dwellings; (iii) poorer households living closer to the city centre experienced increasing real incomes over the period relative to those living further away, which supports the view that cities in Britain attract poor people rather than generate poverty; and (iv) public transport availability cannot explain the spatial distribution of income.
These results are very similar to those found for the US, and, at least for the case of Britain, they contrast with those who argued that in Europe richer households tend to live in the city centres where amenities are concentrated.
To analyse the relationship between household income and distance from the CBD they estimated a panel regression of distance on income and controls, where i indexes households, t indexes survey waves, D is distance (in km) from the CBD (Central Business District), Y is the household’s real equivalent net income and X is a set of control variables.
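The equation itself is not reproduced here; from the variable definitions, a plausible sketch of a log-linear panel specification (my notation, not necessarily the authors’ exact functional form) would be:

```latex
\ln D_{it} = \alpha + \beta \ln Y_{it} + X_{it}'\gamma + \mu_i + \varepsilon_{it}
```

where $\mu_i$ is a household-specific effect and $\varepsilon_{it}$ the error term, with $\beta > 0$ corresponding to their first finding that richer households live further from the centre.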
These are the regression results:
It is interesting to see that children and age have a large and significant effect pushing households away from the CBD, while a more highly educated local population attracts people to the CBD. The negative sign on higher education looks counterintuitive, but the authors note that "this may reflect the fact that household’s income and education are highly correlated."
Sunday, February 7, 2016
A year ago I wrote about statistical software, data analysis languages and their impact on salaries. I decided to give an update today.
Using LinkedIn US, I searched for jobs that had any of those languages in their description in addition to the words ‘data’ and ‘economics’. Then I worked out the average salary for those jobs where wages were available.
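The averaging step can be sketched as a small pandas exercise. The table below is a hypothetical stand-in for the scraped listings (LinkedIn has no public salary feed, so assume the postings were collected by hand); column names and figures are illustrative only:

```python
import pandas as pd

# Hypothetical structure: one row per job posting, with the language
# mentioned in the description and the advertised salary (None if absent).
jobs = pd.DataFrame({
    "language": ["R", "R", "Eviews", "Stata", "Excel", "R"],
    "salary":   [65000, 62000, 64000, 55000, 58000, None],
})

# Average salary per language, counting only postings that list a wage
avg_salary = jobs.dropna(subset=["salary"]).groupby("language")["salary"].mean()

# Number of open positions per language (with or without a listed wage)
openings = jobs["language"].value_counts()
```

Note that the salary average and the openings count deliberately use different denominators, since most postings do not disclose a wage.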
Surprisingly, EViews came out on top, followed by R. Salaries for those two are, on average, slightly above US$63,000. Yet a big difference remained: R had more than 3,000 open positions across the US while EViews had only dozens. Excel ranked, not surprisingly, the lowest; its average salary was US$58,000.
It was strange to see Stata performing so badly, yet that is something we already saw in last year’s analysis. The reason could well be that Stata is the data language of academia, where salaries tend to be lower than in the financial and consultancy sectors for a given set of skills.
Tuesday, January 5, 2016
Last month we saw in the news that New Zealanders (or Kiwis) voted in a referendum to choose a new national flag.
New Zealand confirmed that a blue, white, red and black fern-and-stars design won the referendum, becoming the challenger to the current flag. A second referendum will be held in March to decide whether to adopt the new flag or keep the old one.
As far as I know, no research has previously analysed the relationship between flag designs and economic or political performance (all I found is this). That is because there is no point whatsoever in doing so: any relationship would be spurious or the consequence of omitted variable bias (i.e. the assumed specification omits an independent variable that is correlated with both the dependent variable and one or more included independent variables). Nonetheless, I thought it could make a light-hearted albeit interesting analysis.
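Omitted variable bias is easy to demonstrate with a small simulation (the numbers below are purely illustrative): when the true driver is left out, a correlated regressor picks up a large "effect" it does not actually have.

```python
import random

random.seed(0)
n = 10_000

# z drives both x and y; x has NO direct effect on y.
xs, ys = [], []
for _ in range(n):
    z = random.gauss(0, 1)
    xs.append(z + random.gauss(0, 1))      # x is correlated with the omitted z
    ys.append(2 * z + random.gauss(0, 1))  # y is caused by z only

# OLS slope of y on x alone, omitting z
mx = sum(xs) / n
my = sum(ys) / n
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
var = sum((x - mx) ** 2 for x in xs) / n
slope = cov / var  # biased toward 2*Cov(x,z)/Var(x) = 1, despite a true effect of 0
```

A regression of, say, democracy on flag colours would behave the same way if some cultural or historical variable drove both.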
I ran three regressions. In the first, I used flag-design features to predict the quality of democracy (as measured by the Economist Intelligence Unit) by country. The dependent variable in the second regression is the logarithm of GDP per capita, and in the third it is economic complexity as measured by MIT’s Observatory of Economic Complexity. The reference category in all three regressions is a flag with the colour red, horizontal stripes and no symbols (these were, independently, the most common features).
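The dummy-variable setup can be sketched as follows. The flag table here is a hypothetical placeholder, not my actual dataset; the point is that the reference features (red, horizontal stripes, no symbol) are the dropped categories, so every coefficient reads relative to that baseline:

```python
import pandas as pd
import numpy as np

# Hypothetical flag-feature table (placeholder rows, not the real data)
flags = pd.DataFrame({
    "main_colour": ["red", "green", "red", "white"],
    "layout":      ["horizontal", "vertical", "horizontal", "vertical"],
    "symbol":      ["none", "star", "cross", "moon"],
    "democracy":   [7.1, 4.2, 8.0, 6.5],   # e.g. EIU democracy index
})

# One-hot encode the design features; the dropped columns are the baseline
X = pd.get_dummies(flags[["main_colour", "layout", "symbol"]])
X = X.drop(columns=["main_colour_red", "layout_horizontal", "symbol_none"])
X = X.astype(float)
X.insert(0, "const", 1.0)

# Least-squares fit; each coefficient is the shift relative to the baseline flag
beta, *_ = np.linalg.lstsq(X.values, flags["democracy"].values, rcond=None)
coefs = pd.Series(beta, index=X.columns)
```

So a positive coefficient on, say, `symbol_cross` means countries with a cross score higher than otherwise-similar red, horizontally striped, symbol-free flags.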
Interestingly, having some sort of cross in a country flag has the largest positive impact on that country’s democracy. Adding some kind of weapon such as machine guns (Mozambique) or swords has the largest negative impact. A substantial amount of green colour does, as well, have a negative impact.
In GDP per capita terms, the most negative impacts come from having a star or other symbols and a dominant green or yellow colour. On the other hand, the only positive and statistically significant impact comes from having a moon on the flag.
Finally, the economic complexity, a good predictor of economic development and future performance, is positively correlated with a cross and the colour white. Conversely, it is negatively correlated with the colour green and stars.