Shifting Centre of Gravity of US Fossil Fuel Production

My research project Fracking Growth investigates the localised impacts of the current oil and gas production boom in the US. The boom is driven by the extraction of oil and gas deposits that could previously not be exploited economically. These shale deposits have become technologically recoverable through the combination of horizontal drilling and hydraulic fracturing, which is extremely controversial as it involves injecting large amounts of water and sand laced with chemicals beneath the surface.

The fact that these shale deposits are concentrated in a few distinct places within the US can be highlighted with a very simple graph.

Taking Energy Information Administration state-level crude oil and natural gas (marketed) production data, we can compute a “centre of gravity” of US fossil fuel production, simply by weighting the state centroid coordinates by the energy content of the fossil fuels being extracted (or by their dollar value).

The resulting two pictures are worth sharing, because they highlight the dramatic changes that have happened within just a few years.

Plotting Centre of Gravity by Energy Content 

[Figure: centre of gravity of US fossil fuel production, weighted by energy content (BBTU)]

Note that here I am plotting data from 1998 to 2012. What you can see is striking: the centre of gravity of fossil fuel production by heat content has moved steadily northwards over the whole period, but has made a decided shift towards the east within the past four years. This reflects the dramatic expansion of natural gas production in the Marcellus shale formation.

Natural gas is now becoming significantly cheaper in the US, as the country approaches self-sufficiency and is set to become a net exporter; this allows domestic prices to be far lower than elsewhere. US natural gas is now cheaper by a factor of roughly three compared to the UK. This prompts policy makers around the world to consider revising fracking legislation in the hope of stimulating their economies in a similar way. In the UK, most recently, land access rules have been changed to make fracking easier.

The fact that US natural gas prices are now dramatically lower than crude oil prices, when compared in terms of their respective energy content, has implications for how you plot the centre of gravity.

If you plot it by fossil fuel value, the picture is dramatically different.

Plotting Centre of Gravity by Fossil Fuel Value (in USD)

[Figure: centre of gravity of US fossil fuel production, weighted by the value of production (USD)]

Here, the move in the centre of gravity knows only one direction: it is moving decidedly northwest. This is driven by the Bakken oil boom.

The two pictures together indicate that there may be dramatic limitations when it comes to physical infrastructure. The sudden and rapid relocation of production has put strains on the US physical infrastructure that carries crude oil and natural gas to market. This is a core argument in my research, and I think it is bound to stay relevant for a while, as the regulatory process and the construction time for new pipelines and for upgrading refinery capacity will take several years.

Note that my approach to the graph is similar to Danny Quah’s work on the shifting centre of global economic activity, except that I do not have to worry about projection issues too much (in a truly 3-dimensional world, the centre of gravity could lie beneath the surface of the earth).

Given a shapefile of US states, which you can obtain e.g. from GADM, you can compute the centroids of the geographies using the gCentroid function from the rgeos package.

Computing Centroids

library(rgdal)        # readOGR
library(rgeos)        # gCentroid
library(data.table)

STATES <- readOGR(dsn="States", layer="states")
STATES.pts <- do.call("rbind", lapply(1:length(STATES), function(x) gCentroid(STATES[x,])@coords))
STATES.data <- data.table(cbind(STATES@data, STATES.pts))
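With the state centroids in hand, the centre of gravity itself is just a production-weighted mean of the centroid coordinates. A minimal sketch, assuming a data.table PROD with hypothetical columns state, year and bbtu (the energy content of state-level production) that can be merged onto STATES.data:

## hypothetical merge key "state"; x and y are the centroid coordinates from above
DT <- merge(STATES.data, PROD, by = "state")

## energy-weighted centre of gravity by year
GRAVITY <- DT[, list(x = weighted.mean(x, w = bbtu),
                     y = weighted.mean(y, w = bbtu)), by = year]

## trace the path of the centre of gravity over time
plot(GRAVITY$x, GRAVITY$y, type = "b", xlab = "longitude", ylab = "latitude")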

Is rainfall reporting endogenous to conflict?

For my paper on the impact of social insurance on the dynamics of conflict in India, I use some new remote sensed weather data. The data comes from the Tropical Rainfall Measuring Mission (TRMM) satellites. The satellite carries a set of five instruments, and is essentially a rainfall radar located in outer space.

As a robustness check I needed to verify that my main results go through using other rainfall data. In the paper I try to make a humble case in favour of using remotely sensed data where possible. The key reason is that the TRMM data comes from the same set of instruments over time, rather than from input sources that could vary with, e.g., economic conditions. This is a problem that has been identified by climatologists, who try to correct for systematic biases that can arise from the fact that weather stations are more likely to be located in places with a lot of economic activity.

At first I was a bit reluctant, as it is quite heavy data that needs to be processed. Nevertheless, a thorough analysis required me to jump through the hoop and obtain a secondary rainfall data source. I chose the GPCC monthly rainfall data to verify my results, since it has been used by many other authors in similar contexts. The data is based on rain gauge measurements and is available for the past 100 years.

The raw data is quite heavy; the monthly rainfall rate data for the whole world at 0.5 degree resolution amounts to about 150 million rows for the period 1961–2010. If you drop the non-land grid cells, this reduces the size dramatically to only 40 million rows. Below is a bit of code that loads the data once you have downloaded the ASCII source files from the GPCC website. On my personal website, I make a dta and an rdata file available for the whole world. There are three variables, appearing in that order: (1) the rainfall rate, (2) the rainfall normals and (3) an integer that gives the number of reporting rain gauges falling in a grid cell in a particular month.

It turns out that all my results are robust to using this data. However, I do find something that is quite neat: if a district experienced some insurgency-related conflict in the previous year, it is less likely that this district has an active rain gauge reporting data in subsequent years. While it is a no-brainer that places with severe conflict do not have functioning weather reporting, these results suggest that reporting may also be systematically affected in places with relatively low-intensity conflict, as is the case in India.

While I do not want to overstate the importance of this, it provides another justification for why it makes sense for economists to use remotely sensed weather data. This is not to say that ground-based data is not useful. Quite the reverse: ground-based data is more accurate in many ways, which makes it very important for climatologists. As economists, however, we worry about systematic measurement error that correlates with the economic variables we are studying. This is where remotely sensed data provides advantages, as it does not “decide” to become less accurate in places that are, e.g., less developed, suffer from conflict or simply have nobody living there.

Here is the function to read in the data and match it to district centroids; you will need the data.table and geosphere packages.

######### LOAD GPCC -- NOTE THAT YOU NEED TO SUBSET THE DATA
######### IF YOU DONT WANT TO END UP WITH A HUGE DATA OBJECT
library(data.table)
library(geosphere)   # distHaversine

loadGPCC <- function(ff, COORDS) {
yr <- as.numeric(gsub("(.*)\\_([0-9]{2})([0-9]{4})", "\\3", ff))
month <- as.numeric(gsub("(.*)\\_([0-9]{2})([0-9]{4})", "\\2", ff))
temp <- data.table(data.frame(cbind(COORDS,
read.table(file=paste("Rainfall/gpcc_full_data_archive_v006_05_degree_2001_2010/",
ff, sep=""), header=FALSE, skip=14))))

### YOU COULD SUBSET THE DATA BY EXTENT HERE IF YOU DONT
### WANT TO GET IT FOR THE WHOLE WORLD,
### E.G. SUBSET BY BOUNDING BOX
## temp <- temp[x>=73 & x<=136 & y>=16 & y<=54]

temp <- cbind("year"=yr, "month"=month, temp)
gc()
temp
}

################
#####
ffs <- list.files("Rainfall/gpcc_full_data_archive_v006_05_degree_2001_2010")

### THIS DEFINES THE GRID STRUCTURE OF THE DATA
### YOU MAY NEED TO ADJUST IT IF YOU WORK WITH A COARSER GRID
xs <- seq(-179.75, 179.75, .5)
ys <- seq(89.75, -89.75, -.5)
COORDS <- do.call("rbind", lapply(ys, function(x) cbind("x"=xs, "y"=x)))

system.time(GPCC <- do.call("rbind", lapply(1:length(ffs), function(x) loadGPCC(ffs[x], COORDS))))

### MATCHING THIS TO A SHAPEFILE?
### YOU COULD MATCH CENTROIDS OF DISTRICTS TO THE NEAREST
### GRID CELL -- THE FOLLOWING LOOP WOULD DO THAT

### find the nearest lat / lon pair
## you may want to vectorise this
NEAREST <- NULL
for(k in 1:nrow(CENTROIDS)) {
cat(k, " ")
temp <- distHaversine(CENTROIDS[k, c("x","y"), with=F],
GPCC.coords[, c("delx","dely"), with=F])
NEAREST <- rbind(NEAREST, cbind(CENTROIDS[k], GPCC.coords[which(temp==min(temp))]))
}

 

Deploying Shiny Server on Amazon: Some Troubleshooting and Solutions

I really enjoyed Treb Allen‘s tutorial on deploying a Shiny server on an Amazon Cloud instance. I used this approach for my Shiny app, a map highlighting the economic impact of the recent shale oil and gas boom on the places where the actual extraction happens. The easiest way to proceed is to use the AMI image, which is basically like a VirtualBox image running on the Amazon Cloud and comes with a basic Shiny Server up and running. Along the way, I came across a few problems for which there are simple solutions.

I can’t seem to access the Shiny server through the browser?

Right after installing and setting up the Amazon instance, I tried to access the Shiny server using the public DNS, which in my case was

Public DNS: ec2-54-84-227-28.compute-1.amazonaws.com

However, this did not work, since Shiny Server listens on port 3838 and you need to allow incoming traffic on that port. The way to manage this in the EC2 dashboard is to change the security group that is assigned to the instance you are running: you need to add a rule allowing incoming traffic on port 3838.

Once this is done, you should be able to reach your app via the public DNS; in my case the request URL in the browser is now:

ec2-54-72-74-90.eu-west-1.compute.amazonaws.com:3838/shale-economic-impact/

Where are the shiny-apps located?

The standard Shiny apps that come preinstalled are located in “/var/shiny-server/www”. If you ssh into your EC2 instance, you can go to that folder.

I installed packages, but my Shiny application cannot load them?

The problem is most likely that you are logged in as ec2-user, where you have your own dedicated library path. In order to install R packages system wide, you need to change to root by doing:

sudo -i
##install now your R packages,
R CMD INSTALL ...
exit

The exit part is important, as it drops the administrator rights again.

When I run the app, I get the JavaScript error “The application unexpectedly exited. Diagnostic information has been dumped to the JavaScript error console.”?

It could be that your EC2 instance is not powerful enough. I had that problem because the dataset being loaded was too big, which creates a time-out. One way to overcome this is to start a medium instance rather than a micro instance; be aware that this is not part of the free usage tier and you will be billed for usage. An alternative, however, is a simple fix in the config file, since it could be that you are merely hitting a time-out.

In the shiny-server configuration help, there are two timeouts that can be set in the free shiny server version.

app_init_timeout — Describes the amount of time (in seconds) to wait for an application to start. After this many seconds if the R process still has not become responsive, it will be deemed an unsuccessful startup and the connection will be closed.

app_idle_timeout — Defines the amount of time (in seconds) an R process with no active connections should remain open. After the last connection disconnects from an R process, this timer will start and, after the specified number of seconds, if no new connections have been created, the R process will be killed.

It could be that the JavaScript error is thrown because the R process was killed. You can edit the configuration file to increase the time-out periods, adding:

# Instruct Shiny Server to run applications as the user "shiny"

run_as shiny;

# Define a server that listens on port 3838

server {

listen 3838;

# Define a location at the base URL

location / {

# Host the directory of Shiny Apps stored in this directory
site_dir /var/shiny-server/www;
# Log all Shiny output to files in this directory
log_dir /var/log/shiny-server;
# When a user visits the base URL rather than a particular application,
# an index of the applications available in this directory will be shown.

directory_index off;
app_init_timeout 250;

}

}

This brings us right to the next question,…

Where do I find my shiny server configuration file?

There is a hard-coded configuration file, but the one in the search path is located in:

/etc/shiny-server/shiny-server.conf

Here you can make the edits described above. After you have made the edits, you will want to reload the configuration…

How do I reload the Configuration File, how to start or stop the shiny server?

#reload without restarting
sudo reload shiny-server
#stop the shiny server
sudo stop shiny-server
#start it...
sudo start shiny-server

Copying files from your local machine to the AWS Instance?

You can use “scp” for secure copying, e.g.

To download files from your instance:
scp -i frackingshiny-eu-west-1.pem ec2-user@ec2-54-72-74-90.eu-west-1.compute.amazonaws.com:/var/shiny-server/www/shale-economic-impact.zip .

To upload files to your instance:
scp -r -i frackingshiny-eu-west-1.pem "/Users/thiemo/shale-economic-impact.zip" ec2-user@ec2-54-72-74-90.eu-west-1.compute.amazonaws.com:/var/shiny-server/www/

I plan to add more troubleshooting tips – if you have come across an error for which you had to find a solution, feel free to comment and I’ll amend the list.

Stacking Regressions: Latex Tables with R and stargazer

In my paper on the impact of the shale oil and gas boom in the US, I run various instrumental variables specifications. For these, it is nice to stack the regression results on top of each other – in particular, to have one row for the IV results, one row for the reduced form and maybe one row for plain OLS, to see how the IV may turn coefficients around.

I found that, as of now, there is no way to do this directly; please correct me if I am wrong.

The layout I have in mind is as in the screenshot of my table.

Sample Table with Stacked Regression Results

In Stata, this can be accomplished through esttab with the fragment and append options. These append successive rows to an existing table, and you can label the rows using the “refcat” option.

However, in R this is not yet possible. I have mainly worked with stargazer, since Marek has added support for the felm objects produced by high-dimensional fixed effects estimation to his stargazer package.

The following functions are a “hack” that extracts particular rows from the LaTeX code generated by the stargazer package.

You can then piece the table together by combining the individual elements. The idea is that a single stargazer command is passed to the various functions, each of which extracts a different feature. Obviously, this could be done far more elegantly – the code is extremely hacky – but it does work.

Marek has said that he is thinking of incorporating a stacking option into stargazer, but for now my hack works reasonably well. The key thing to realise is that stargazer calls have an option to return only specific parts of a table: the following character strings can be used in the table.layout and omit.table.layout arguments of the stargazer command.

“-” single horizontal line
“=” double horizontal line
“-!” mandatory single horizontal line
“=!” mandatory double horizontal line
“l” dependent variable caption
“d” dependent variable labels
“m” model label
“c” column labels
“#” model numbers
“b” object names
“t” coefficient table
“o” omitted coefficient indicators
“a” additional lines
“n” notes
“s” model statistics


The following functions will simply extract the rows that are being returned.

stargazer.keepcolnums <- function(call="") {
command <- gsub("\\)$", ", table.layout='#'\\)", call)
call <- eval(parse(text=command))
bounds <- grep("tabular", call)
row1 <- call[(bounds[1]+1):(bounds[2]-1)]
row1 <- gsub("^\\\\\\\\\\[-1\\.8ex\\]\\\\hline $", "", row1)
row1 <- gsub("^\\\\hline \\\\\\\\\\[-1\\.8ex\\] $", "", row1)
row1 <- row1[row1 != ""]
row1
}

stargazer.keeprow <- function(call="") {
command <- gsub("\\)$", ", table.layout='t'\\)", call)
call <- eval(parse(text=command))
bounds <- grep("tabular", call)
row1 <- call[(bounds[1]+1):(bounds[2]-1)]
row1 <- gsub("^\\\\\\\\\\[-1\\.8ex\\]\\\\hline $", "", row1)
row1 <- gsub("^\\\\hline \\\\\\\\\\[-1\\.8ex\\] $", "", row1)
row1 <- row1[row1 != ""]
row1
}

stargazer.keepstats <- function(call="") {
command <- gsub("\\)$", ", table.layout='s'\\)", call)
call <- eval(parse(text=command))
row1 <- gsub("(.*)\\\\\\\\\\[-1\\.8ex\\](.*)(\\\\end\\{tabular\\})(.*)", "\\2", paste(call, collapse=" "))
row1
}

stargazer.begintable <- function(call="") {
command <- gsub("\\)$", ", table.layout='m'\\)", call)
call <- eval(parse(text=command))
row1 <- paste("\\begin{tabular}", gsub("(.*)\\\\begin\\{tabular\\}(.*)(\\\\end\\{tabular\\})(.*)", "\\2", paste(call, collapse="\n")), sep="")
row1
}

stargazer.varlabels <- function(call="") {
command <- gsub("\\)$", ", table.layout='d'\\)", call)
call <- eval(parse(text=command))
row1 <- paste("\\begin{tabular}", gsub("(.*)\\\\begin\\{tabular\\}(.*)(\\\\end\\{tabular\\})(.*)", "\\2", paste(call, collapse="\n")), sep="")
row1
}

stargazer.keepcollabels <- function(call="") {
command <- gsub("\\)$", ", table.layout='c'\\)", call)
call <- eval(parse(text=command))
row1 <- gsub("(.*)\\\\\\\\\\[-1\\.8ex\\](.*)(\\\\end\\{tabular\\})(.*)", "\\2", paste(call, collapse=" "))
row1
}

stargazer.keepomit <- function(call="") {
command <- gsub("\\)$", ", table.layout='o'\\)", call)
call <- eval(parse(text=command))
row1 <- gsub("(.*)\\\\\\\\\\[-1\\.8ex\\](.*)(\\\\end\\{tabular\\})(.*)", "\\2", paste(call, collapse="\n"))
row1
}

It is easiest to see how you can use these functions to construct stacked regression output with a simple example.

###the global command to be passed to the hacky ###functions that extract the individual bits
## OLS is a list of OLS results from running the
## lfe command
command<-'stargazer(OLS , keep=c("post08shale:directutilityshare","post08shale"), covariate.labels=c("Energy Intensity x Shale","Shale"), header=FALSE, out.header=FALSE, keep.stat=c("n","adj.rsq"))'

begintable<-stargazer.begintable(command)
###some multicolumn to combine 
collcombine<-c("& \\multicolumn{4}{c}{Tradable Goods Sector Only} & \\multicolumn{3}{c}{Additional Sectors} \\\\ \\cmidrule(lr){2-5} \\cmidrule(lr){6-8}")
collabel<-stargazer.keepcollabels(command)
colnums<-stargazer.keepcolnums(command)
##plain OLS row
row1<-stargazer.keeprow(command)
##the stats part for the OLS (number of Obs, R2)
stats<-stargazer.keepstats(command)
##the rows for the Fixed effect indicators
omitted<-stargazer.keepomit(command)

##the IV command passing a list of IV results in ## IV object
command<-'stargazer(IV , keep=c("post08anywell:directutilityshare","post08anywell"), covariate.labels=c("Energy Intensity x Anywell","Anywell"), header=FALSE, out.header=FALSE, keep.stat=c("n","adj.rsq"))'

##IV row
row2<-stargazer.keeprow(command)
footer<-c("\\end{tabular}\n" )

###now combine all the items
cat(begintable, collcombine, collabel, colnums, "\\hline \\\\ \\emph{Reduced Form} \\\\", row1, "\\hline \\\\ \\emph{IV} \\\\", row2, stats, "\\hline\\hline", footer, file="Draft/tables/energyintensity.tex", sep="\n")

I know the solution is a bit hacky, but it works and does the trick.

 

Fracking and House Prices on the Marcellus Shale

Starting last summer, I worked on a short project that set out to estimate the potential cost of externalities from unconventional shale gas production in the Marcellus shale, as capitalised in local house prices, using a dataset of roughly 150,000 recently sold houses in Ohio, West Virginia and Pennsylvania.

The data suggests that proximity to a natural gas well is correlated with lower housing prices, which confirms previous studies.

[Figure: house prices and proximity to natural gas wells]

I have since stopped working on that project, which looked at the impact of nearby shale gas extraction on property prices on the Marcellus shale. Instead, I focused on my paper “Fracking Growth”, which evaluates the employment consequences of the shale oil and gas boom in the US more generally.

Anybody can have a look at the data and the document as it stands on ShareLaTeX, where I also tried ShareLaTeX’s knitr capabilities, which are still somewhat limited, as a lot of the R packages I usually work with are not yet installed.

The public sharelatex file, the data and the R script can be accessed here:

https://www.sharelatex.com/project/534d232b32ed2b25466b2541?r=6f73efc4&rs=ps&rm=d

Here are some preliminary snippets. The data used in this study comes from Zillow.com. In Fall 2013 I downloaded data for recently sold houses, focusing the download on all or most of the counties that are reasonably near the Marcellus shale in West Virginia, Ohio and Pennsylvania. The list goes back to 2011 and provides data for 151,156 sold properties.

load(file = "HOUSES.rdata")
library(xtable)
table(HOUSES$year)
####  2011  2012  2013
#### 40087 63248 47821

A simple tabulation suggests that most data is for 2012.

Some characteristics included in the data are the sale price in USD, the number of bedrooms, the number of bathrooms, the built-up size in square feet, the year the property was built and, for some properties, the lot size.

The properties have geo-coordinates, which are used to intersect the location of each property with census-tract shapefiles. This allows adding further census-tract level variables to control for general local conditions.

The geo-coordinates are further used to compute the exact distance of a property to the nearest actual or permitted well in the year the property was sold. Distances are computed in meters as Haversine distances on a sphere with radius r = 6378137 meters.
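For illustration, the geosphere package implements exactly this distance; the coordinates below are made-up lon/lat pairs.

library(geosphere)

## Haversine distance in meters between a hypothetical property and well,
## using the same earth radius as in the text
property <- c(-80.00, 40.40)
well     <- c(-80.10, 40.50)
distHaversine(property, well, r = 6378137)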

The following graph plots the average price per square foot as a function of distance to the nearest well in the year in which the property was sold. I group distances into 500 meter bins.

plot(HOUSES[, list(soldprice =sum(soldprice)/sum(sqft)), by=distancecat ], xlab="Distance to Well", ylab="Price per sqft")

A first inspection suggests a positive gradient in distance, which is, however, quite non-monotonic.

[Figure: non-monotonic relationship between distance to the nearest oil or gas well and price per sqft]

Does this relationship hold up when running a hedonic pricing regression?

\log(y_{ict}) = \gamma \times welldistance_{i} + X_{i}'\beta + a_c + \eta_t + e_{ict}

These are estimated using the lfe package, as I introduce quite demanding fixed effects (census tract and county-by-year). The lfe package sweeps these fixed effects out iteratively before running the actual regression on the demeaned data.
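A sketch of such a regression is below, using the G() syntax of the lfe version used elsewhere on this blog; the fixed-effect and clustering variable names (tractid, county_year, countyid) are hypothetical placeholders, not the ones in my actual script.

library(lfe)

## hedonic regression sketch: census-tract and county-by-year fixed effects,
## standard errors clustered at the county level (names are made up)
est <- felm(log(soldprice) ~ welldistance + bedrooms + bathrooms + sqft
            + G(tractid) + G(county_year),
            data = HOUSES, cluster = c("countyid"))
summary(est)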

The results for two chosen bandwidths are presented in the table below. There appears to be a positive gradient – being further away from a well correlates with higher prices per square foot.

[Figure: regression results comparing sold houses near unconventional gas wells on the Marcellus shale]

Clearly, the important question is whether one can separate the property price appreciation that is likely to occur due to the local economic boom from the price differentials that may arise due to local externalities – and, within the latter, whether one can separate externalities due to environmental degradation from price differentials arising from the factors discussed at the beginning, such as restricted access to mortgage lending or insurance.

Unfortunately, I cannot spend more time on this right now, but I think a short paper is still feasible…

 

Regressions with Multiple Fixed Effects – Comparing Stata and R

In my paper on the impact of the recent fracking boom on local economic outcomes, I estimate models with multiple fixed effects. These fixed effects are useful because they take out, e.g., industry-specific heterogeneity at the county level – or state-specific time shocks.

The models can take the form:

y_{cist} = \alpha_{ci} + b_{st} + \gamma_{it}+ X_{cist}'\beta + \epsilon_{cist}  

where \alpha_{ci} is a set of county-industry, b_{st} a set of state-time and \gamma_{it} a set of industry-time fixed effects.

Such a specification takes out arbitrary state-specific time shocks and industry-specific time shocks, which are particularly important in my research context, as the recession hit tradable industries harder than non-tradable sectors, as suggested in Mian, A., & Sufi, A. (2011), What Explains High Unemployment? The Aggregate Demand Channel.

How can we estimate such a specification?
Running such a regression with lm in R or reg in Stata will not make you happy, as you will need to invert a huge matrix. An alternative in Stata is to absorb one of the fixed effects by using xtreg or areg. However, this still leaves you with a huge matrix to invert, as the time fixed effects are numerous; inverting this matrix will still take ages.

However, there is a way around this: applying the Frisch–Waugh–Lovell theorem iteratively (remember your econometrics course?). This basically means you sweep out each of the fixed effects in turn by demeaning the data with respect to that fixed effect. The iterative procedure is described in detail in Gaure (2013), but also appears in Guimaraes and Portugal (2010).
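To see the logic of the theorem in miniature, here is a small simulated example with a single fixed effect (the data and variable names are made up purely for illustration):

## toy check of Frisch-Waugh-Lovell: the coefficient on x from the full
## dummy-variable regression equals the one from the demeaned regression
set.seed(1)
f <- factor(sample(1:50, 1000, replace = TRUE))  # a fixed effect with 50 levels
x <- rnorm(1000)
y <- 2 * x + rnorm(50)[f] + rnorm(1000)          # outcome with a group effect

full   <- coef(lm(y ~ x + f))["x"]                                   # full regression
within <- coef(lm(residuals(lm(y ~ f)) ~ residuals(lm(x ~ f))))[2]   # demeaned data

all.equal(unname(full), unname(within))  # TRUE up to numerical precision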

Simen Gaure has developed an R package called lfe, which performs the demeaning for you and also provides the possibility to run instrumental variables regressions; it theoretically supports any dimensionality of fixed effects. The key benefits of Simen Gaure’s implementation are its flexibility, the use of C in the background for some of the computation, and its support for multicore processing, which speeds up the demeaning dramatically, especially as your samples get larger.

In Stata there are packages called reg2hdfe and reg3hdfe, which have been developed by Guimaraes and Portugal (2010). As the names indicate, these support fixed effects of only up to two or three dimensions.

Let’s see how – on the same dataset – the runtimes of reg2hdfe and lfe compare.

Comparing Performance of Stata and R

I am estimating the following specification

y_{cist} = \alpha_{ci} + b_{sit} + \gamma_{it}+ X_{cist}'\beta + \epsilon_{cist}  

where \alpha_{ci} is a set of county-industry and b_{sit} a set of state-by-industry-by-time fixed effects. There are about 3000 counties in the dataset and 22 industries. Furthermore, there are 50 states and the time period covers about 50 quarters. This means that – in total – there are 3000 x 22 = 66,000 county-industry fixed effects and 22 x 50 x 50 = 55,000 state-by-industry-by-time fixed effects to be estimated. The sample I work with has sufficient degrees of freedom to allow the estimation of such a specification – I work with roughly 3.7 million observations.

I have about 10 covariates that are in X_{cist}, i.e. these are control variables that vary within county x industry over state x industry x time.

Performance in Stata

In order to time the length of a Stata run, you need to run set rmsg on, which turns on a timer for each command that is run.

The command I run in Stata is

reg2hdfe logy x1-x10, id1(sitq) id2(id) cluster(STATE_FIPS)

You should go get a coffee, because this run is going to take quite a bit of time. In my case, it took t=1575.31 seconds, or just about 26 minutes.

Performance in R 
In order to make the runs of reg2hdfe and lfe comparable, we need to set the tolerance level of the convergence criterion to be the same in both. The standard tolerance in Stata is 1e-6, while for the lfe package it is 1e-8. You can set the tolerance in the R package lfe explicitly:

library(lfe)
options(lfe.eps=1e-6)

The second change we need to make is to disallow lfe to use multiple cores, since reg2hdfe uses only a single thread. We can do this by setting:

options(lfe.threads=1)

Now let’s run this in R using:

system.time(summary(felm(log(y) ~ x1 + x2 +x3 +x4 + x5 + x6 + x7 +x8 + x9 + x10 + G(id)+G(sitq),
data=EMP, cluster=c("STATE_FIPS"))))

The procedure converges a lot quicker than Stata…

 user system elapsed 
208.450 23.817 236.831 

It took a mere 4 minutes. Now suppose I run this in four separate threads…

user system elapsed 
380.964 23.540 177.520

 

Running this on four threads saves about one minute in processing time; not bad, but not a huge gain. The gains from multi-threading increase the more fixed effects are added and the larger the samples are.

Classi-Compare of Raster Satellite Images – Before and After

For my research on the effect of power outages on fertility, we study a period of extensive power rationing that lasted for almost a whole year and affected most of Latin America, but in particular Colombia. The key difficulty was to determine which areas were exposed to the power outages and to what extent. This is not straightforward, since there is no household- or even municipality-level consumption data.

But here is how R and satellite data can help. In particular, we study the night light series obtained from the Defense Meteorological Satellite Program, which has been discussed by Jeffrey before.

We simply look for abnormal variation in municipality level light-emitting intensity from 1992 to 1993.

Here is some code that generates raster maps using the rasterVis package and uses jQuery to generate a fancy before-and-after comparison to highlight the year-on-year changes in light intensity between 1992 and 1993.

###load the required packages and the raster images
library(raster)
library(rasterVis)
## COLPOB (used below) is a SpatialPolygons object with the municipality boundaries

tif<-"F101992.v4b_web.stable_lights.avg_vis.tif"
f151 = raster(tif)

tif<-"F101993.v4b_web.stable_lights.avg_vis.tif"
f152 = raster(tif)

##crop a smaller window to plot

e = extent(-78,-72,2,8)
#e = extent(-80,-78,-4.6,-2)
rn= crop(f151, e)
rn2= crop(f152, e)

### do a logarithmic transformation to highlight places that receive not much, but some light.

rn<-log(rn+1)
png("1992.png")
p <- levelplot(rn, layers=1, margin=FALSE,col.regions = gray(0:100/100))
p + layer(sp.polygons(COLPOB, lwd=.25, linetype=2, col='darkgray'))
dev.off()

rn2<-log(rn2+1)
png("1993.png")
p <- levelplot(rn2, layers=1, margin=FALSE,col.regions = gray(0:100/100))
p + layer(sp.polygons(COLPOB, lwd=.25, linetype=2, col='darkgray'))
dev.off()

With this in place, you can create a fancy slider as I have seen on KFOR — comparing satellite pictures of towns before and after a tornado went through them.

The code is essentially borrowed from that TV station and loads the JavaScript from their server; it is basically just a clever use of jQuery and is maybe something that could be – or already is – implemented in an R reporting package? Do you know of such a function?

Anyway, all you need is a slider.html page that contains the code referring to the two picture sources; the code is simple:

 

<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
    <script src="http://cache.ltvcms.com/localtv/tornado2/js/jquery.classycompare.js"></script>
    <link rel="stylesheet" type="text/css" href="http://cache.ltvcms.com/localtv/tornado2/css/jquery.classycompare.css">
    <style type="text/css">.sample1 {width:725px; height:725px;}.sample2 {width:725px; height:725px;}.sample3 {width:725px; height:725px;}</style>
  </head>
  <body>
    <div id="wrapper">
      <div class="container_6 clearfix">
        <section class="main-section grid_6">
          <div class="main-content">
            <section class="clearfix">
              <div>
                <div class="container" style="position:relative">
                  <div class="sample1"> <img src="1992municio.png"
                      alt="Before"
                      height="725px"
                      width="725px">
                    <img src="1993municio.png"
                      alt="After"
                      height="725px"
                      width="725px">
                  </div>
                </div>
                <script>
                                        $(window).load(function() {
                                            $('.sample1').ClassyCompare({
                                                defaultgap:50,
                                                leftgap:0,
                                                rightgap:10,
                                                caption: true,
                                                reveal: 0.5
                                            });
                                        });
                                    </script> </div>
            </section>
          </div>
        </section>
      </div>
    </div>

  </body>
</html>

 

This is how it looks — I know the images are not perfectly aligned, partly because I made a mistake when cropping the pictures and could not be bothered to fix it.

Have fun!

 

 

rgeos: TopologyException – found non-noded intersection between..

I have been having some issues generating spatial unions and intersections using the rgeos package. The package is extremely useful, as it serves as an R interface to the powerful GEOS engine.

However, when working with shapefiles or polygons, you will quite often come across a whole range of errors, typically topology exceptions. These occur in all kinds of applications and typically throw errors of the type:

TopologyException: found non-noded intersection between LINESTRING (-59.0479 -1.85389, -59.048 -1.854) and LINESTRING (-59.0482 -1.854, -59.0477 -1.854) at -59.048000000000002 -1.8540000000000001

As is evident from this error message, the problem occurs far down in the decimals, so it should arguably not be an error at all. Similar issues may arise if you try to create a spatial intersection of two polygon layers that have different precisions.

What typically works in resolving these issues is a combination of two things.

 

1. Round the polygon coordinates so that you end up with the same precision if you are creating spatial intersections of polygons coming from different sources. A function that implements this is, for example:

roundPolygons <- function(shptemp, digitss=3) {
for(i in 1:length(shptemp)) {
shptemp@polygons[[i]]@Polygons[[1]]@coords <- round(shptemp@polygons[[i]]@Polygons[[1]]@coords, digits=digitss)
}
shptemp
}

2. A second quick fix is to create a buffer area around the polygons you are trying to intersect; here rgeos has a predefined gBuffer function. You just need to specify the width of the buffer and then run the spatial union or intersection with the buffered objects, as in the sketch below.
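As a minimal sketch of the buffer trick (shp1 and shp2 stand for any two SpatialPolygons objects you are trying to overlay; the zero width is just one common choice):

library(rgeos)

## a zero-width (or very small) buffer often repairs the offending geometries
## before the overlay; width is in the units of the projection
shp1.buf <- gBuffer(shp1, byid = TRUE, width = 0)
shp2.buf <- gBuffer(shp2, byid = TRUE, width = 0)

inter <- gIntersection(shp1.buf, shp2.buf, byid = TRUE)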

In most applications the combination of these two solved all my rgeos spatial join issues.

Computing Maritime Routes in R

Thanks to the attention my paper on the cost of Somali piracy has received, a lot of people have asked me how I computed the maritime routes. It is not a very difficult task using R. The key ingredients are a map of the world that can be rasterized into a grid, with all landmass assigned an infinite cost of crossing, and – last but not least – an algorithm that computes the actual routes.

What packages do I need?

library(gdistance)
library(maptools)
data(wrld_simpl)
library(data.table)

The package gdistance does most of the actual work of computing the routes. The wrld_simpl map provides what is needed to generate a raster.

Generating a Raster

#create a raster from shape files
shp <- wrld_simpl
r <- raster()
r <-rasterize(shp, r, progress='text')

After the raster is generated, we can proceed by making landmass impassable for vessels.

#make all sea = -999
r[is.na(r)] <- -999
#this turns all landmass to missing
r[r>-999] <- NA
#assign unit cost to all grid cells in water
r[r==-999] <- 1

There are a few more things to do, such as opening up the Suez Canal and some other maritime passages — one needs to find the right grid cells for this task. In the next step we can transform the raster into a transition layer, a data construct from the gdistance package that essentially tells us how one can move from one cell to another — you can allow diagonal moves by letting the vessel move into all 8 adjacent grid cells. A geo-correction is also necessary, as the diagonal moves cover longer distances than the straight-line moves.

tr <- transition(r, mean, directions = 8)
tr <- geoCorrection(tr, "c")

Well — and that’s basically it. Of course, there are a few bits and pieces that need additional work — like adding heterogeneous costs, which one can imagine exist due to maritime currents and so on. Furthermore, there is a whole logic surrounding the handling of the output and its storage in a local database for further use.

But not to bore you with that — how can I obtain the route between A and B? This uses Dijkstra’s algorithm and is called through the gdistance function “shortestPath”.

AtoB <- shortestPath(tr, as.numeric(start[1:2]), as.numeric(end[1:2]), output = "SpatialLines")
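For completeness, a hypothetical call with made-up coordinates (the lon/lat pairs are merely illustrative, not taken from the original application):

## illustrative start and end points
start <- c(4.0, 52.0)     # roughly off the Dutch coast
end   <- c(103.8, 1.2)    # roughly off Singapore

AtoB <- shortestPath(tr, start, end, output = "SpatialLines")

## plot the route on top of the world map
plot(wrld_simpl, col = "grey85", border = NA)
lines(AtoB, col = "red", lwd = 2)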

Using this output, you can then generate fancy graphs such as …

 

 

Starting Multiple Stata Instances on Mac

I find it useful to have multiple Stata instances running on my Mac, in particular if I use one instance to clean the data before running merge commands. It is always annoying if the merge does not work out or throws an error, and one then has to clear the current instance and open the DTA file that was messing up the merge.

It’s a simple command that allows you to open multiple Stata instances on a Mac:

open -n /Applications/Stata12_OSX/StataSE.app

You can also define an alias command in your .bash_profile:

alias stata='open -n /Applications/Stata12_OSX/StataSE.app'
Good luck!

 

R function: generate a panel data.table or data.frame to fill with data

I have started to work with R and Stata together. I like running regressions in Stata, but I do the graphs and the dataset construction in R; R clearly has a strong comparative advantage here. I wrote a function that gives me a (balanced) panel structure in R, into which you can then simply join the additional data.tables or data.frames you want.

It consists of two functions:

library(data.table)   # provides the month() and year() helpers used below

timeVector <- function(starttime, endtime, timestep="months") {

starttime <- as.POSIXct(strptime(starttime, '%Y-%m-%d'))
endtime <- as.POSIXct(strptime(endtime, '%Y-%m-%d'))
if(timestep=="quarters") {
timestep <- "months"
ret <- seq(from=as.POSIXct(starttime), to=as.POSIXct(endtime), by=timestep)
quarter <- gsub("(^[123]{1}$)", 1, month(ret))
quarter <- gsub("(^[456]{1}$)", 2, quarter)
quarter <- gsub("(^[789]{1}$)", 3, quarter)
quarter <- as.numeric(gsub("(^[102]{2}$)", 4, quarter))

ret <- paste(year(ret), quarter, sep="-")
ret <- unique(ret)
} else {

ret <- seq(from=as.POSIXct(starttime), to=as.POSIXct(endtime), by=timestep)
}
ret

}

This first function generates the time vector; you need to tell it what time steps you want it to have.

panelStructure <- function(group,timevec) {
tt<-rep(timevec,length(group))
tt2 <- as.character(sort(rep(group,length(timevec))))
mat <- cbind("group"=data.frame(tt2),"timevec"=data.frame(tt))
names(mat)<-c("group","timevec")
mat
}

This second function then generates the panel structure. You need to give it a group vector, for example a vector of district names, and pass it the time vector that the first function created, as in the example below.
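A minimal usage sketch (the district names are made up):

## quarterly panel skeleton for three hypothetical districts
timevec <- timeVector("2001-01-01", "2010-12-31", timestep = "quarters")
panel   <- panelStructure(group = c("districtA", "districtB", "districtC"),
                          timevec = timevec)
head(panel)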

Hope this is helpful to some of you.

 

 

 

Salmon Fishing in Yemen: Somali Piracy and the Fishing industry

We are working on finalizing our paper on Somali piracy and its effects on the shipping industry. We believe that our paper is the first serious attempt to identify the cost of piracy using a novel dataset and a novel approach. We find that the direct and indirect costs may be between $1.8 and $3.0 billion. This is a lot of money compared with the mere $150 – $250 million that the piracy activity generates for the pirates. This highlights how large the welfare gains from a functioning state with working institutions and an established monopoly of power can be.

Clearly, the paper does not capture all the adjustments that are taking place. In particular, we find that local and regional trade is much more strongly affected by piracy than, e.g., trade from Asia to Europe. This suggests that the piracy burden is borne especially by regional economies, such as Yemen, Kenya, the Seychelles and so on. This got me thinking, and I started looking at the impact of piracy on the fishing industry — I first started with Yemen, but the data quality is very poor.

Here is something very preliminary… The graph depicts quarterly reported fish catches by the Ministry of Fisheries of the Seychelles and the number of piracy attacks in the vicinity of the Seychelles (roughly within a radius of around 500 miles). Do we believe that the rise in piracy caused this drop in fish catches, which appears to be persistent?

Exploring Heterogenous Treatment Effects: Returns to Capital

Jon de Quidt and I recently took a look at the data from the paper by De Mel, Woodruff and McKenzie (2008) in Sri Lanka. It is quite an influential paper, experimentally administering capital shocks to microenterprises in Sri Lanka in order to estimate the returns to capital.

Their paper highlighted that the returns to capital “at the bottom of the pyramid” can be very large indeed. In their favourite specification they find that these could be as high as 50 – 63% per year (in real terms), which suggests that these investments pay off on average. It was one of the first papers to use experimentally generated variation in capital stocks to estimate these returns; that is why it became so influential, and the authors have a set of papers on Mexico and Ghana performing similar estimates.

They point out that there is a lot of heterogeneity in the estimated treatment effects. In particular, they observe that for female entrepreneurs there is virtually no (average) treatment effect. This observation, and the reasons that could underlie it, are explored in a second paper.

We had a look at the De Mel et al. (2008) data, which is available here. We essentially ran their regressions of (reported) real profits on the treatment dummy. However, we did this iteratively for each treated individual, using the whole group of non-treated individuals as the counterfactual. This allows us to get an estimate of the treatment effect for each treated individual.
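A rough sketch of that loop is below; the variable names (id, treat, realprofit) are hypothetical placeholders rather than the names used in the De Mel et al. dataset.

## df: one row per firm-wave with real profits, a treatment dummy and a firm id
treated_ids <- unique(df$id[df$treat == 1])
control     <- df[df$treat == 0, ]

indiv_te <- sapply(treated_ids, function(i) {
  sub <- rbind(df[df$id == i, ], control)   # one treated firm vs. all controls
  coef(lm(realprofit ~ treat, data = sub))["treat"]
})

mean(indiv_te)     # should be close to the reported average treatment effect
median(indiv_te)   # the median turns out to be much smaller
hist(indiv_te)     # distribution of the individual point estimates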

From this we get a distribution of treatment effect estimates. The average of this distribution should be the treatment effect that De Mel et al. (2008) report in their paper, and indeed it comes quite close. What intrigues us, however, is the significant heterogeneity in the point estimates of the treatment effect across individuals.

A key observation is that the treatment effects are very heterogeneous – the mass of enterprises that saw a drop in profits is almost as large as the mass that saw a rise; however, some saw a very significant and large rise in real profits. The vertical line is the average treatment effect, which here, as we lump the cash treatments together, is around 900 rupees. The median treatment effect, however, is only 332 rupees. That’s some food for thought.

[Figure: histogram and kernel density estimate of treatment effects for cash treatments]