Leveraging R for Econ Job Market

I wanted to describe a little helper I am using to refine the list of places I want to apply to, since I am going to be on the Economics Job Market this year.

The two main websites where job openings are advertised are JOE (Job Openings for Economists) and EJM (EconJobMarket.org).

Now JOE has a really nice feature where you can simply download all job openings into a spreadsheet. This allows you to browse through the openings and refine your search. Econ Job Market does not have such a feature.

The Listing page is quite annoying…

[Figure: ejm-listing]

If you want more details for a job opening, such as a description of the fields and application requirements, you will have to click on “more info…”. In the JOE spreadsheet, you have all that information at once.

I wanted to create a JOE-like spreadsheet using the openings from EJM. The two sites do, of course, overlap to some extent, but for some reason some jobs appear only on EJM and not on JOE.

So how can we use R to help us do that?

The first thing I wanted to do is turn the simple listings on the main application page of EJM into a spreadsheet. You can simply download the main listings file and extract the pieces of information. Most important is the "posid" field, which is the position ID contained in the EJM URL. This gives you a direct link to the HTML page of the job opening, and it also tells you whether you can apply through EJM.
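A rough sketch of this step, assuming the listings page has been saved locally as "ejm-listings.html"; the "posid=" pattern and the URL template are assumptions about the EJM page structure rather than verified details.

listings <- readLines("ejm-listings.html")
ids <- unique(unlist(regmatches(listings, gregexpr("posid=[0-9]+", listings))))
EJM <- data.frame(posid = as.numeric(gsub("posid=", "", ids)))
EJM$url <- paste0("https://econjobmarket.org/postings.php?posid=", EJM$posid)  # assumed URL template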

This leaves you with a listing of EJM links to jobs, their position IDs and the EJM application link in case the job opening accepts applications through EJM. Now you can simply download all the HTML files using a batch downloader such as DownThemAll. If you want to do that, you can print out a list of the URLs:

cat(EJM$url, sep="\n")

and paste them into DownThemAll. Alternatively, you can iterate through the list of links and send an HTTP request for each job separately to retrieve its details.
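A minimal sketch of that loop, reusing the EJM$posid and EJM$url columns from above; download.file simply writes each job page to disk for later parsing.

dir.create("ejm-pages", showWarnings = FALSE)
for (i in 1:nrow(EJM)) {
  destfile <- paste0("ejm-pages/", EJM$posid[i], ".html")
  try(download.file(EJM$url[i], destfile = destfile, quiet = TRUE))
  Sys.sleep(1)  # be polite to the server
}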

The end product is a nice CSV that we can quickly browse through in Excel. For convenience, you can download the listings as of 23-10-2014 below. Good luck with your applications!

econjobmarket-23-10-2014

Spatial Clustering: Conley Standard Errors for R

I have been working quite a lot with climate and weather data to study the impact of rainfall shocks on violence in India, and how this relationship changed after the social insurance scheme NREGA was introduced.

In my context, it becomes particularly relevant to adjust for spatial correlation if you find yourself in a situation with either too few or too many clusters. When you have too few clusters, such as states, clustered standard errors are likely to be too small; when you have too many clusters, your standard errors may again be too small. This is often the case when you work with small geographies, where shocks to your dependent variable are likely to be spatially correlated (such as natural disasters or resource booms).

A feasible alternative may be to compute Conley standard errors following the approaches suggested in Conley (1999) and Conley (2008).

Solomon Hsiang has provided Stata and MATLAB code to compute such standard errors; here is my attempt to compute them in R.

Spatial and Serial Correlation Correction

If you use Sol's code, you need to be cautious about computing the standard errors for fixed-effects models, because his code uses plain OLS to get the estimates and the residuals. This is, of course, problematic if you have a dataset with a large number of fixed effects: the procedure would then compute Conley errors for a whole lot of coefficients (i.e. your fixed effects) that you typically do not care about.

The way to proceed in Stata is to demean the data by your fixed effects and then pass the demeaned left-hand-side and right-hand-side variables for which you want standard errors to the "ols_spatial_HAC" procedure. I have written a Stata function that does this, but there are still some caveats and it needs to be thoroughly tested.

My R functions below are almost literal translations of Sol's function, except that they are not (yet) as flexible and that they basically just take the residuals directly rather than running the regression themselves. This saves time and also gives you a high degree of flexibility in the type of R functions you use for your regressions.
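To give a flavour of what the functions do, here is a heavily stripped-down sketch of the spatial part only: a uniform kernel within a distance cutoff, no serial-correlation correction and no handling of fixed effects. X is the model matrix of the coefficients you care about, e holds the residuals from your (demeaned) regression, and the cutoff is in kilometres. This illustrates the logic rather than replicating Sol's code.

library(geosphere)

conley.sketch <- function(X, e, lat, lon, cutoff = 100) {
  n <- nrow(X); k <- ncol(X)
  meat <- matrix(0, k, k)
  for (i in 1:n) {
    # distance (in km) from observation i to all observations
    d <- distHaversine(c(lon[i], lat[i]), cbind(lon, lat)) / 1000
    w <- as.numeric(d <= cutoff)                      # uniform kernel weights
    meat <- meat + e[i] * (X[i, ] %o% colSums(X * (w * e)))
  }
  bread <- solve(crossprod(X))
  sqrt(diag(bread %*% meat %*% bread))                # Conley standard errors
}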

testspatial.dta

Fracking in your neighborhood? Shale Oil and Gas Economic Impact Map

I have been working on a visualisation of the results from my paper Fracking Growth, highlighting the local economic impact of the recent oil and gas boom in the US. There are two key insights. First, there are strong spillover effects from the oil and gas sector: I estimate that every oil and gas sector job created roughly 2.17 other jobs, mainly in the transport, construction and local service sectors. Aggregate employment in the oil and gas sector more than doubled between 2004 and 2013, increasing from 316,700 to 581,500. Given the estimated multiplier effect, this suggests that aggregate employment increased by a further 500,000 to 600,000 jobs.

The second insight from my paper is that the structural transformation away from tradable goods sectors that a resource boom usually induces need not happen. The classical argument in the development literature is simple: a resource boom drives up local prices, which is a problem for all those sectors of the economy that cannot pass higher input costs on to their final goods consumers (Corden and Neary, 1982). In development economics, such mechanisms are also described by the term "Dutch disease". I argue that such structural transformations need not occur if there are significant trade costs for the extracted resource.

[Figure: shale-impact-explorer]

I thought it would be nice to visualise these econometric results through an interactive Shiny application. It allows you to see where fracking is happening and what its local economic impacts are.

North Dakota: An Interactive Visualisation of the two findings

To illustrate the two results from my research, I developed this Shiny app, which allows you to study key economic aggregates at the county level and compare them to nationwide averages or levels. This is essentially what the difference-in-differences estimators in my paper do, while controlling for a whole lot of fixed effects.

In the Shiny app, if you zoom, for example, into North Dakota, you will see a lot of blue dots. Each dot represents an unconventional oil or gas well. The panel shows the time variation in the number of wells constructed. Overall oil and gas sector employment across the counties visible at that zoom level has gone up from around 3,169 in early 2004 to more than 25,000 by 2012. Employment in the non-oil-and-gas sectors follows suit, but increases more smoothly. The weighted average unemployment rate of the visible counties looks dramatically different from the rest of the US: unemployment rates in North Dakota hover around 3 percent, while they were above 7 percent in the rest of the US in 2012. The recession increased unemployment in North Dakota only marginally; the boom in the oil and gas industry meant that little of the recession was felt there. This highlights the first finding of my paper: there are significant spillovers from the mining sector into the non-mining sectors.

[Figure: North Dakota Shale Economic Impact]

The second finding from my research can be illustrated using the case of North Dakota as well. While monthly average wages have increased significantly, catching up with the US-wide average, natural gas prices in North Dakota have gone down significantly, with the average natural gas price for industrial use in North Dakota now about 20-30% lower than in the rest of the US. This highlights the second case made in my paper.

But enough of that, let's turn to the Shiny app itself.

Shiny App using the Leaflet Javascript Library

I was inspired by the interactive map that visualises the super-zip codes, and I reuse a lot of the code from there. The key feature of the Shiny app is a reactive function that returns summary statistics for the county centroids that are currently visible at the user's zoom level.

countiesInBounds <- reactive({
    if (is.null(input$map_bounds))
      return(EMP[FALSE,])
    bounds <- input$map_bounds
    latRng <- range(bounds$north, bounds$south)
    lngRng <- range(bounds$east, bounds$west)

    EMP[latitude >= latRng[1] & latitude <= latRng[2] & longitude >= lngRng[1] & longitude <= lngRng[2]]
  })

This function takes the map bounds as currently set by the zoom level in the input object and uses them to subset the data.table EMP, which contains most of the statistics that are displayed.

There is a second similar function for the number of wells in the bounds. This function is called by the functions that create the plot panels.

The server.R file is pasted here:

library(shiny)

#toinst<-c('jcheng5/leaflet-shiny','trestletech/ShinyDash')
#sapply(toinst, function(x) devtools::install_github(x))

library(leaflet)
library(ShinyDash)
library(RColorBrewer)
library(scales)
library(lattice)
library(plyr)
require(maps)
shinyServer(function(input, output, session) {

## Interactive Map ###########################################

# Create the map
map <- createLeafletMap(session, "map")

# A reactive expression that returns the set of zips that are
# in bounds right now
wellsInBounds <- reactive({
if (is.null(input$map_bounds))
return(WELLS.data[FALSE,])
bounds <- input$map_bounds
latRng <- range(bounds$north, bounds$south)
lngRng <- range(bounds$east, bounds$west)

subset(WELLS.data,
latitude >= latRng[1] & latitude <= latRng[2] &
longitude >= lngRng[1] & longitude <= lngRng[2])
})

countiesInBounds <- reactive({
if (is.null(input$map_bounds))
return(EMP[FALSE,])
bounds <- input$map_bounds
latRng <- range(bounds$north, bounds$south)
lngRng <- range(bounds$east, bounds$west)

EMP[latitude >= latRng[1] & latitude <= latRng[2] & longitude >= lngRng[1] & longitude <= lngRng[2]]

})

output$plotSummaryLocal <- renderPlot({
LOCAL<-countiesInBounds()[,list(NonMiningEmpC=sum(NonMiningEmpC, na.rm=TRUE), EmpCClean21=sum(EmpCClean21, na.rm=TRUE)), by=c("year")]
WELLS<-wellsInBounds()[,.N, by=c("year")][order(year)]
WELLS<-data.table(cbind("year"=WELLS$year,"N"=cumsum(WELLS$N)))
LOCAL<-join(LOCAL,WELLS)
if(any(!is.na(LOCAL$EmpCClean21)) & any(!is.na(LOCAL$N))) {
par(mfrow=c(1,3),oma=c(0,0,0,0),mar=c(4,1,3,0))

plot(LOCAL[,c("year","NonMiningEmpC"),with=F], xlab="Year",main="Non-Mining Sector",type="l",col="red",cex.main=1.5,cex.lab=1.5)
plot(LOCAL[,c("year","EmpCClean21"),with=F], xlab="Year",main="Oil and Gas Sector",type="l",col="red",cex.main=1.5,cex.lab=1.5)
plot(LOCAL[,c("year","N"),with=F], xlab="Year",main="Number of Wells",type="l",col="red",cex.main=1.5,cex.lab=1.5)
}
else {

paste("Please zoom out")

}

})

output$plotSummaryMacro <- renderPlot({
par(mfrow=c(1,3),oma=c(0,0,0,0),mar=c(4,1,3,0))
###plot unemployment rate, earnings per worker,

LOCAL<-countiesInBounds()[,list(indgasprice=sum(indgasprice * overallemp,na.rm=TRUE)/sum(overallemp,na.rm=TRUE), unemploymentrate=100*sum(unemploymentrate*labourforce,na.rm=TRUE)/sum(labourforce,na.rm=TRUE), EarnS=sum(EarnS*labourforce, na.rm=TRUE)/sum(labourforce,na.rm=TRUE)), by=c("year")]
LOCAL<-join(NATIONAL,LOCAL)
if(any(!is.na(LOCAL$unemploymentrate))) {
plot(LOCAL[,c("year","unemploymentrate"),with=F], ylim=c(min(min(LOCAL$unemploymentrate),min(LOCAL$natunemploymentrate)), max(max(LOCAL$unemploymentrate),max(LOCAL$natunemploymentrate))), xlab="Year",main="Unemployment",type="l",col="red",cex.main=1.5,cex.lab=1.5)
lines(LOCAL[,c("year","natunemploymentrate"),with=F], col="blue", lty=2)
plot(LOCAL[,c("year","indgasprice"),with=F], ylim=c(min(min(LOCAL$indgasprice),min(LOCAL$natindgasprice)), max(max(LOCAL$indgasprice),max(LOCAL$natindgasprice))), xlab="Year",main="Natural Gas Price",type="l",col="red",cex.main=1.5,cex.lab=1.5)
lines(LOCAL[,c("year","natindgasprice"),with=F], col="blue", lty=2)
plot(LOCAL[,c("year","EarnS"),with=F], ylim=c(min(min(LOCAL$EarnS),min(LOCAL$natEarnS)), max(max(LOCAL$EarnS),max(LOCAL$natEarnS))), xlab="Year",main="Monthly Wages",type="l",col="red",cex.main=1.5,cex.lab=1.5)
lines(LOCAL[,c("year","natEarnS"),with=F], col="blue", lty=2)
} else {
paste("Please zoom out")

}
})

####FOR DATA EXPLORER
output$datatable <- renderDataTable({
EMP[, c("year","Area","population","shaleareashare","TPI_CI","EarnS","unemploymentrate","EmpCClean21","NonMiningEmpC","indgasprice","peind"),with=F]
})

session$onFlushed(once=TRUE, function() {
paintObs <- observe({

# Clear existing circles before drawing
map$clearShapes()
# Draw in batches of 1000; makes the app feel a bit more responsive
chunksize <- 1000
wellplot<-wellsInBounds()
if(nrow(wellplot)>0) {
if(nrow(wellplot)>5000) {
wellplot<-wellplot[WID %in% sample(wellplot$WID,5000)]
}
for (from in seq.int(1, nrow(wellplot), chunksize)) {
to <- min(nrow(wellplot), from + chunksize)
chunkplot <- wellplot[from:to,]
# Bug in Shiny causes this to error out when user closes browser
# before we get here
try(
map$addCircle(
chunkplot$latitude, chunkplot$longitude
)
)
}
}
})

# TIL this is necessary in order to prevent the observer from
# attempting to write to the websocket after the session is gone.
session$onSessionEnded(paintObs$suspend)
})

showWellPopup <- function(event) {
content <- as.character(paste(event, collapse=","))
map$showPopup(event$lat, event$lng, content)
}
# When map is clicked, show a popup with city info
clickObs <- observe({
map$clearPopups()
event <- input$map_shape_click
if (is.null(event))
return()

isolate({
showWellPopup(event)
})
})

session$onSessionEnded(clickObs$suspend)
})

 

The ui.R file:

library(leaflet)
library(shiny)
library(ShinyDash)
shinyUI(navbarPage("Fracking Growth", id="nav",

tabPanel("Interactive map",
div(class="outer",

tags$head(
# Include our custom CSS
includeCSS("styles.css"),
includeScript("gomap.js")
),

leafletMap("map", width="100%", height="100%",
initialTileLayer = "//{s}.tiles.mapbox.com/v3/jcheng.map-5ebohr46/{z}/{x}/{y}.png",
initialTileLayerAttribution = HTML('Maps by <a href="http://www.mapbox.com/">Mapbox</a>'),
options=list(
center = c(37.45, -93.85),
zoom = 5, maxZoom=9,
maxBounds = list(list(15.961329,-129.92981), list(52.908902,-56.80481)) # Show US only
)
),

absolutePanel(id = "controls", class = "modal", fixed = TRUE, draggable = TRUE,
top = 60, left = "auto", right = 20, bottom = "auto",
width = 450, height = "auto",

h2("Shale Oil and Gas Impact Explorer"),

# selectInput("color", "Color", vars),
# selectInput("size", "Size", vars, selected = "welltype"),
h4("Shale Boom over Time"),
plotOutput("plotSummaryLocal", height = 150) ,
plotOutput("plotSummaryMacro", height = 150) ,
p("Note that all plots and summary statistics are calculated for the wells- or county centroid visible in your current zoom-level")
),

tags$div(id="cite",
'Data compiled for ', tags$a('Fracking Growth',href="http://www.trfetzer.com/fracking-growth/"), ' by Thiemo Fetzer.'
)
)
) ,

tabPanel("Data explorer",
hr(),
dataTableOutput("datatable")
)
)

)

 

Shifting Centre of Gravity of US Fossil Fuel Production

My research Fracking Growth investigates the localised impacts of the current oil and gas production boom in the US. The boom was triggered by the extraction of oil and gas deposits that it was previously not possible to exploit. These shale deposits have become technologically recoverable through a combination of horizontal drilling and hydraulic fracturing, which is extremely controversial as it involves injecting significant amounts of water and sand mixed with chemicals beneath the surface.

The fact that these shale deposits are located in geographically distinct places within the US can be highlighted through a very simple graph.

Taking Energy Information Administration state-level crude oil and natural gas (marketed) production data, we can compute a "centre of gravity" of US fossil fuel production, simply by weighting the state centroid coordinates by the energy content of the fossil fuels being extracted (or by their dollar value).
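The computation itself is a one-liner once the production data has been merged with the state centroids. A sketch, assuming a data.table PROD with one row per state and year and columns year, lon, lat (state centroid) and btu (energy content of production); all of these names are illustrative.

library(data.table)
gravity <- PROD[, .(lon = weighted.mean(lon, w = btu),
                    lat = weighted.mean(lat, w = btu)), by = year]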

The resulting two pictures are worth sharing, because they highlight the dramatic changes that have happened in only recent years.

Plotting Centre of Gravity by Energy Content 

[Figure: bbtu-gravity-small]

Note that here I am plotting 14 years of data, from 1998 to 2012. What you can see is striking: the centre of gravity of fossil fuel production by heat content has moved clearly towards the north over the whole period, but has made a decided shift towards the east within the past four years. This reflects the dramatic expansion of natural gas production in the Marcellus shale formation.

Natural gas is now becoming significantly cheaper in the US, as the US essentially becomes self-sufficient and a net exporter. This allows domestic prices to be significantly lower than elsewhere: in the US, natural gas is now cheaper by a factor of three compared to the UK. This has prompted policy makers around the world to consider revising fracking legislation in order to stimulate their economies in a similar way. In the UK, most recently, land access rules have been changed to make fracking easier.

The fact that US natural gas prices are now dramatically lower than crude oil prices, when compared in terms of their respective energy content, has implications for how you plot the centre of gravity.

If you plot it by fossil fuel value, the picture is dramatically different.

Plotting Centre of Gravity by Fossil Fuel Value (in USD)

[Figure: value-gravity-small]

Here, the move in the centre of gravity knows only one direction: it is moving decidedly northwest. This is driven by the Bakken oil boom.

The two pictures indicate that there may be dramatic limitations when it comes to physical infrastructure. The sudden and quick relocation of production has put strains on the US physical infrastructure for carrying crude oil and natural gas to the market. This is a core argument in my research, and I think it is bound to stay relevant for a while, as the regulatory process and the construction of new pipelines and upgraded refinery capacity will take a few years.

Note that my approach to the graph is similar to Danny Quah's work on the shifting centre of global economic activity, except that I do not have to worry about projection issues too much (in a truly 3-dimensional world, the centre of gravity could lie beneath the surface).

Given a shapefile of US states, which you can obtain e.g. from GADM, you can compute the centroids of the geographies using the gCentroid function from the rgeos package.

Computing Centroids

library(rgdal)       # readOGR
library(rgeos)       # gCentroid
library(data.table)

STATES <- readOGR(dsn="States", layer="states")
STATES.pts <- do.call("rbind", lapply(1:length(STATES), function(x) gCentroid(STATES[x,])@coords))
STATES.data <- data.table(cbind(STATES@data, STATES.pts))

Is rainfall reporting endogenous to conflict?

For my paper on the impact of social insurance on the dynamics of conflict in India, I use some new remote-sensed weather data. The data comes from the Tropical Rainfall Measuring Mission (TRMM) satellite, which carries a set of five instruments and is essentially a rainfall radar located in outer space.

As a robustness check I needed to verify that my main results go through using other rainfall data. In the paper I try to make a humble case in favour of using remote-sensed data where possible. The key reason is that the TRMM data comes from the same set of instruments over time, rather than from input sources that could vary with, e.g., economic conditions. This is a problem that has been identified by climatologists, who try to correct for systematic biases that could arise from the fact that weather stations are more likely to be located in places with a lot of economic activity.

At first I was a bit reluctant, as it is quite heavy data that needs to be processed. Nevertheless, thorough analysis required me to jump through the hoop and obtain secondary rainfall data. I chose the GPCC monthly rainfall data to verify my results, since it has been used by many other authors in similar contexts. The data is based on rain gauge measurements and is available for the past 100 years.

The raw data is quite heavy; the monthly rainfall rate data for the whole world at 0.5 degree resolution amounts to about 150 million rows for the period 1961-2010. If you drop the non-land grid cells, this reduces the size dramatically to only 40 million rows. Below is a bit of code that loads in the data once you have downloaded the ASCII source files from the GPCC website. On my personal website, I make a dta and an rdata file available for the whole world. There are three variables, appearing in this order: (1) the rainfall rate, (2) the rainfall normals and (3) an integer giving the number of reporting rain gauges that fall in a grid cell in a particular month.

It turns out that all my results are robust to using this data. However, I do find something that is quite neat: if a district experienced some insurgency-related conflict in the previous year, it is less likely that this district has an active rain gauge reporting data in subsequent years. While it is a no-brainer that places with severe conflict do not have functioning weather reporting, these results suggest that reporting may also be systematically affected in places with relatively low-intensity conflict, as is the case in India.

While I do not want to overstate the importance of this, it provides another justification for why it makes sense for economists to use remotely sensed weather data. This is not to say that ground-based data is not useful. Quite the reverse: ground-based data is more accurate in many ways, which makes it very important for climatologists. As economists, we are worried about systematic measurement error that correlates with the economic variables we are studying. This is where remote-sensed data provides advantages, as it does not "decide" to become less accurate in places that are, e.g., less developed, suffer from conflict or simply have nobody living there.

Here is the function to read in the data and match it to district centroids; you need a couple of packages (data.table and geosphere).

#########LOAD GPCC. NOTE THAT YOU NEED TO SUBSET THE DATA
#########IF YOU DONT WANT TO END UP WITH A HUGE DATA OBJECT
library(data.table)
library(geosphere)   # distHaversine, used for the centroid matching below

loadGPCC<-function(ff, COORDS) {
  yr<-as.numeric(gsub("(.*)\\_([0-9]{2})([0-9]{4})","\\3",ff))
  month<-as.numeric(gsub("(.*)\\_([0-9]{2})([0-9]{4})","\\2",ff))
  temp<-data.table(data.frame(cbind(COORDS,
    read.table(file=paste("Rainfall/gpcc_full_data_archive_v006_05_degree_2001_2010/",
    ff,sep=""), header=FALSE, skip=14))))

  ###YOU COULD SUBSET THE DATA BY EXTENT HERE IF YOU DONT
  ###WANT TO GET IT FOR THE WHOLE WORLD, E.G. BY BOUNDING BOX
  ##temp<-temp[x>=73 & x<=136 & y>=16 & y<=54]

  temp<-cbind("year"= yr, "month"=month, temp)
  gc()
  temp
}

################
ffs<-list.files("Rainfall/gpcc_full_data_archive_v006_05_degree_2001_2010")

###THIS DEFINES THE GRID STRUCTURE OF THE DATA
###YOU MAY NEED TO ADJUST IT IF YOU WORK WITH A COARSER GRID
xs=seq(-179.75,179.75,.5)
ys=seq(89.75,-89.75,-.5)
COORDS<-do.call("rbind", lapply(ys, function(x) cbind("x"=xs,"y"=x)))

system.time(GPCC<-do.call("rbind", lapply(1:length(ffs), function(x) loadGPCC(ffs[x], COORDS))))

###MATCHING THIS TO A SHAPEFILE?
##YOU COULD MATCH CENTROIDS OF DISTRICTS TO THE NEAREST
##GRID CELL - THE FOLLOWING LOOP WOULD DO THAT

###find nearest lat / lon pair
##you may want to vectorise this
NEAREST<-NULL
for(k in 1:nrow(CENTROIDS)) {
  cat(k," ")
  temp<-distHaversine(CENTROIDS[k,c("x","y"),with=F],
                      GPCC.coords[, c("delx","dely"), with=F])
  NEAREST<-rbind(NEAREST, cbind(CENTROIDS[k],GPCC.coords[which(temp==min(temp))]))
}

 

Deploying Shiny Server on Amazon: Some Troubleshoots and Solutions

I really enjoyed Treb Allen's tutorial on deploying a Shiny server on an Amazon cloud instance. I used this approach for my Shiny app, a map highlighting the economic impact of the recent shale oil and gas boom on the places where the actual extraction happens. The easiest way to proceed is to use the AMI image, which is basically like a VirtualBox image running on the Amazon cloud, with a basic Shiny server up and running. Along the way, I came across a few issues for which there are simple solutions.

I can't seem to access the Shiny server through the browser?

Right after the installation and setting up of the Amazon Instance, I tried to access the shiny server using the public DNS, in my case that was

Public DNS: ec2-54-84-227-28.compute-1.amazonaws.com

However, this did not work, since the Shiny server listens on port 3838 and you need to allow incoming traffic on that port. The way to manage this in the EC2 dashboard is to change the security group assigned to the instance you are running: you need to add a rule allowing incoming traffic on port 3838.

Once this is done, you should be able to go to your public DNS, in my case the request URL in the browser now is:

ec2-54-72-74-90.eu-west-1.compute.amazonaws.com:3838/shale-economic-impact/

Where are the shiny-apps located?

The standard Shiny apps that come preinstalled are located in "/var/shiny-server/www". If you ssh into your EC2 instance, you can go to that folder.

I installed packages, but my Shiny application cannot load them?

The problem is most likely that you are logged in as ec2-user, which has its own dedicated library path. In order to install R packages system-wide, you need to change to root:

sudo -i
##install now your R packages,
R CMD INSTALL ...
exit

The exit part is important, as it drops the administrator rights again.

When I run the app, I get a JavaScript error: "The application unexpectedly exited. Diagnostic information has been dumped to the JavaScript error console."?

It could be that your EC2 instance is not powerful enough. I had that problem because the dataset being loaded was too big, which creates a time-out. One way to overcome this is to start a medium instance rather than a micro instance; be aware that this is not part of the free usage tier and you will be billed for usage. Alternatively, there is a simple fix: editing the config file, since you may just be hitting a time-out.

In the shiny-server configuration help, there are two timeouts that can be set in the free shiny server version.

app_init_timeout — Describes the amount of time (in seconds) to wait for an application to start. After this many seconds if the R process still has not become responsive, it will be deemed an unsuccessful startup and the connection will be closed.

app_idle_timeout — Defines the amount of time (in seconds) an R process with no active connections should remain open. After the last connection disconnects from an R process, this timer will start and, after the specified number of seconds, if no new connections have been created, the R process will be killed.

It could be that the JavaScript error is thrown because the R process was killed. You can edit the configuration file to increase the time-out periods, adding:

# Instruct Shiny Server to run applications as the user "shiny"
run_as shiny;

# Define a server that listens on port 3838
server {
  listen 3838;

  # Define a location at the base URL
  location / {
    # Host the directory of Shiny Apps stored in this directory
    site_dir /var/shiny-server/www;
    # Log all Shiny output to files in this directory
    log_dir /var/log/shiny-server;
    # When a user visits the base URL rather than a particular application,
    # an index of the applications available in this directory will be shown.
    directory_index off;
    app_init_timeout 250;
  }
}

This brings us right to the next question,…

Where do I find my shiny server configuration file?

There is a hard-coded default configuration, but in the search path there is a configuration file located at:

/etc/shiny-server/shiny-server.conf

Here you can make the above edits. After you have made them, you want to reload the configuration…

How do I reload the Configuration File, how to start or stop the shiny server?

#reload without restarting
sudo reload shiny-server
#stop the shiny server
sudo stop shiny-server
#start it...
sudo start shiny-server

Copying files from your local machine to the AWS Instance?

You can use “scp” for secure copying, e.g.

To download files from your instance:
scp -i frackingshiny-eu-west-1.pem ec2-user@ec2-54-72-74-90.eu-west-1.compute.amazonaws.com:/var/shiny-server/www/shale-economic-impact.zip .

To upload files to your instance:
scp -r -i frackingshiny-eu-west-1.pem "/Users/thiemo/shale-economic-impact.zip" ec2-user@ec2-54-72-74-90.eu-west-1.compute.amazonaws.com:/var/shiny-server/www/

I plan to add more troubleshooting items. If you have come across an error for which you had to find a solution, feel free to comment and I'll amend the list.

Stacking Regressions: Latex Tables with R and stargazer

In my paper on the impact of the shale oil and gas boom in the US, I run various instrumental variables specifications. For these, it is nice to stack the regression results on top of each other; in particular, to have one row for the IV results, one row for the reduced form and maybe one row for plain OLS, to see how the IV may turn coefficients around.

I found that as of now – there is no way to do that directly; please correct me if I am wrong.

The layout I have in mind is as in the screenshot of my table.

[Figure: Sample table with stacked regression results]

In Stata, this can be accomplished through the use of esttab with the fragment and append options. This appends successive rows to an existing table, and you can label the rows using the "refcat" option.

However, in R this is not possible as of yet. I have mainly worked with stargazer, as Marek has added support for felm objects (for high-dimensional fixed effects) to his stargazer package.

The following functions are a "hack" that extracts particular rows from the LaTeX code generated by the stargazer package.

You can then piece the table together by combining the individual elements. The idea is that you have a single stargazer command that is passed to the various functions, each of which extracts a different feature. Obviously, this could be done a lot more elegantly, as the code is extremely hacky, but it does work.

Marek has said that he is thinking of incorporating a stacking option into stargazer, but for now my hack works reasonably well. The key thing to realise is that stargazer calls have the option to return only selected components of the table, via the table.layout argument.

The following character strings can be used in the table.layout and omit.table.layout arguments of the stargazer command.

“-” single horizontal line
“=” double horizontal line
“-!” mandatory single horizontal line
“=!” mandatory double horizontal line
“l” dependent variable caption
“d” dependent variable labels
“m” model label
“c” column labels
“#” model numbers
“b” object names
“t” coefficient table
“o” omitted coefficient indicators
“a” additional lines
“n” notes
“s” model statistics


The following functions will simply extract the rows that are being returned.

## each helper injects the relevant table.layout option into the stargazer call
## and keeps only that part of the generated LaTeX output
stargazer.keepcolnums<-function(call="") {
  command<-gsub("\\)$",", table.layout='#'\\)",call)
  call<-eval(parse(text=command))
  bounds<-grep("tabular",call)
  row1<-call[(bounds[1]+1):(bounds[2]-1)]
  row1<-gsub("^\\\\\\\\\\[-1\\.8ex\\]\\\\hline $", "", row1)
  row1<-gsub("^\\\\hline \\\\\\\\\\[-1\\.8ex\\] $","",row1)
  row1[row1!=""]
}

stargazer.keeprow<-function(call="") {
  command<-gsub("\\)$",", table.layout='t'\\)",call)
  call<-eval(parse(text=command))
  bounds<-grep("tabular",call)
  row1<-call[(bounds[1]+1):(bounds[2]-1)]
  row1<-gsub("^\\\\\\\\\\[-1\\.8ex\\]\\\\hline $", "", row1)
  row1<-gsub("^\\\\hline \\\\\\\\\\[-1\\.8ex\\] $","",row1)
  row1[row1!=""]
}

stargazer.keepstats<-function(call="") {
  command<-gsub("\\)$",", table.layout='s'\\)",call)
  call<-eval(parse(text=command))
  gsub("(.*)\\\\\\\\\\[-1\\.8ex\\](.*)(\\\\end\\{tabular\\})(.*)","\\2",paste(call,collapse=" "))
}

stargazer.begintable<-function(call="") {
  command<-gsub("\\)$",", table.layout='m'\\)",call)
  call<-eval(parse(text=command))
  paste("\\begin{tabular}",gsub("(.*)\\\\begin\\{tabular\\}(.*)(\\\\end\\{tabular\\})(.*)","\\2",paste(call,collapse="\n")),sep="")
}

stargazer.varlabels<-function(call="") {
  command<-gsub("\\)$",", table.layout='d'\\)",call)
  call<-eval(parse(text=command))
  paste("\\begin{tabular}",gsub("(.*)\\\\begin\\{tabular\\}(.*)(\\\\end\\{tabular\\})(.*)","\\2",paste(call,collapse="\n")),sep="")
}

stargazer.keepcollabels<-function(call="") {
  command<-gsub("\\)$",", table.layout='c'\\)",call)
  call<-eval(parse(text=command))
  gsub("(.*)\\\\\\\\\\[-1\\.8ex\\](.*)(\\\\end\\{tabular\\})(.*)","\\2",paste(call,collapse=" "))
}

stargazer.keepomit<-function(call="") {
  command<-gsub("\\)$",", table.layout='o'\\)",call)
  call<-eval(parse(text=command))
  gsub("(.*)\\\\\\\\\\[-1\\.8ex\\](.*)(\\\\end\\{tabular\\})(.*)","\\2",paste(call,collapse="\n"))
}

It is easiest to see how to use these functions to construct stacked regression output with a simple example.

###the global command to be passed to the hacky functions that extract the individual bits
## OLS is a list of OLS results from running the
## lfe command
command<-'stargazer(OLS , keep=c("post08shale:directutilityshare","post08shale"), covariate.labels=c("Energy Intensity x Shale","Shale"), header=FALSE, out.header=FALSE, keep.stat=c("n","adj.rsq"))'

begintable<-stargazer.begintable(command)
###a multicolumn header row to group the columns, plus the cmidrules
collcombine<-c("& \\multicolumn{4}{c}{Tradable Goods Sector Only} & \\multicolumn{3}{c}{Additional Sectors} \\\\ \\cmidrule(lr){2-5} \\cmidrule(lr){6-8}")
collabel<-stargazer.keepcollabels(command)
colnums<-stargazer.keepcolnums(command)
##plain OLS row
row1<-stargazer.keeprow(command)
##the stats part for the OLS (number of Obs, R2)
stats<-stargazer.keepstats(command)
##the rows for the Fixed effect indicators
omitted<-stargazer.keepomit(command)

##the IV command, passing a list of IV results in the IV object
command<-'stargazer(IV , keep=c("post08anywell:directutilityshare","post08anywell"), covariate.labels=c("Energy Intensity x Anywell","Anywell"), header=FALSE, out.header=FALSE, keep.stat=c("n","adj.rsq"))'

##IV row
row2<-stargazer.keeprow(command)
footer<-c("\\end{tabular}\n" )

###now combine all the items
cat(begintable,collcombine,collabel,colnums,"\\hline \\\\ \\emph{Reduced Form} \\\\",row1,"\\hline \\\\ \\emph{IV} \\\\",row2,stats,"\\hline\\hline",footer, file="Draft/tables/energyintensity.tex", sep="\n")

I know the solution is a bit hacky, but it works and does the trick.

 

Fracking and House Prices on the Marcellus Shale

Starting last summer, I worked on a short project that set out to estimate the potential cost of externalities from unconventional shale gas production in the Marcellus shale on local house prices, using a dataset of roughly 150,000 recently sold houses in Ohio, West Virginia and Pennsylvania.

The data suggests that proximity to a natural gas well is correlated with lower housing prices, which confirms previous studies.

[Figure: fracking-houseprices]

I stopped working on the project that looks at the impact of nearby shale gas extraction on property prices on the Marcellus shale. Instead, I focused on my paper "Fracking Growth", which evaluates the employment consequences of the shale oil and gas boom in the US more generally.

Everybody can have a look at the data and the document as it stands on ShareLaTeX, where I also tried ShareLaTeX's knitr capabilities, which are still somewhat limited as a lot of the R packages I usually work with are not yet installed.

The public sharelatex file, the data and the R script can be accessed here:

https://www.sharelatex.com/project/534d232b32ed2b25466b2541?r=6f73efc4&rs=ps&rm=d

Here are some preliminary snippets. The data used in this study comes from Zillow.com. In fall 2013 I downloaded data for recently sold houses, focusing on all or most of the counties that are somewhat near the Marcellus shale in West Virginia, Ohio and Pennsylvania. The list goes back to 2011 and provides data for 151,156 sold properties.

load(file = "HOUSES.rdata")
library(xtable)
table(HOUSES$year)
####  2011  2012  2013
#### 40087 63248 47821

A simple tabulation suggests that most data is for 2012.

Some characteristics that are included in the data are the sale price in USD, the number of bedrooms, number of bathrooms, the built up land size in square feet, the year the property was built and for some properties also the lot size.

The properties have geo-coordinates, which are used to intersect the location of the property with census-tract shapefiles. This will allow the adding of further characteristics at the census-tract level to control for general local characteristics.

The geo-coordinates are further used to compute the exact distance of a property to the nearest actual or permitted well in the year that the property was sold. Distances are computed in meters by computing the Haversine distance on a globe with radius r = 6378137 meters.
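A sketch of that computation, assuming data.tables HOUSES and WELLS that each carry lon and lat columns (illustrative names); for 150,000 properties you would want to pre-filter candidate wells rather than brute-force every pair.

library(geosphere)
HOUSES$distwell <- sapply(1:nrow(HOUSES), function(i)
  min(distHaversine(c(HOUSES$lon[i], HOUSES$lat[i]),
                    cbind(WELLS$lon, WELLS$lat), r = 6378137)))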

The following graph plots the average price per square foot as a function of distance to the nearest well in the year in which the property was sold. I group distances into 500 meter bins.

plot(HOUSES[, list(soldprice =sum(soldprice)/sum(sqft)), by=distancecat ], xlab="Distance to Well", ylab="Price per sqft")

A first inspection suggests a positive gradient in distance, which is, however, quite non-monotone.

Non-monotonic relationship between distance to the nearest oil or gas well and price per sqft.

Does this relationship hold up when running a hedonic pricing regression?

[math]log(y_{ict}) = \gamma \times welldistance_{i} + \beta \times X_i + a_c + \eta_t + e_{ict}[/math]

These are estimated using the lfe package, as I introduce quite demanding fixed effects (census-tract and county by year). The lfe package takes these fixed effects out iteratively before running the actual regression on the demeaned data.
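A sketch of such a felm call, written in the G() notation that lfe used at the time; the variable names (soldprice, welldistance, tract, countyyear, county) are placeholders rather than the actual column names in my data.

library(lfe)
hedonic <- felm(log(soldprice) ~ welldistance + bedrooms + bathrooms + log(sqft)
                + G(tract) + G(countyyear),
                data = HOUSES, cluster = c("county"))
summary(hedonic)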

The results for two chosen bandwidths are presented in the table below. There appears to be a positive gradient – being further away from a well correlates with higher prices per square foot.

Regression results comparing sold houses nearby unconventional gas wells on the Marcellus shale.

Clearly, the important question is whether one can separate the property price appreciation that is likely to happen due to the local economic boom from the price differentials that may arise due to local externalities, and whether one can separate externalities due to environmental degradation from price differentials arising due to the factors discussed in the beginning: no access to mortgage lending or insurance.

Unfortunately, I do not have the time to spend more time on this now, but I think a short paper is still feasible…

 

Regressions with Multiple Fixed Effects – Comparing Stata and R

In my paper on the impact of the recent fracking boom on local economic outcomes, I am estimating models with multiple fixed effects. These fixed effects are useful because they take out, e.g., industry-specific heterogeneity at the county level or state-specific time shocks.

The models can take the form:

[math]y_{cist} = \alpha_{ci} + b_{st} + \gamma_{it}+ X_{cist}'\beta + \epsilon_{cist}[/math]  

where [math]\alpha_{ci}[/math] is a set of county-industry, [math]b_{st}[/math] a set of state-time and [math]\gamma_{it}[/math] a set of industry-time fixed effects.

Such a specification takes out arbitrary state-specific and industry-specific time shocks, which are particularly important in my research context, as the recession hit tradable industries harder than non-tradable sectors, as suggested in Mian and Sufi (2011), "What Explains High Unemployment? The Aggregate Demand Channel".

How can we estimate such a specification?
Running such a regression with lm in R or reg in Stata will not make you happy, as you will need to invert a huge matrix. An alternative in Stata is to absorb one of the fixed effects by using xtreg or areg. However, this still leaves you with a huge matrix to invert, as the remaining time fixed effects are numerous; inverting this matrix will still take ages.

However, there is a way around this: applying the Frisch-Waugh-Lovell theorem iteratively (remember your econometrics course?). This basically means you iteratively take out each of the fixed effects in turn by demeaning the data by that fixed effect. The iterative procedure is described in detail in Gaure (2013), but also appears in Guimaraes and Portugal (2010).
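A toy illustration of the idea (this is not what lfe does internally): alternately sweep out the group means of two fixed-effect factors f1 and f2 until the variable stops changing.

demean2 <- function(v, f1, f2, tol = 1e-8) {
  repeat {
    v.old <- v
    v <- v - ave(v, f1)   # remove group means of the first fixed effect
    v <- v - ave(v, f2)   # remove group means of the second fixed effect
    if (max(abs(v - v.old)) < tol) break
  }
  v
}

Applying this to the dependent variable and to each regressor, and then running OLS on the demeaned data, recovers the within estimates by the Frisch-Waugh-Lovell theorem.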

Simen Gaure has developed an R package called lfe, which performs the demeaning for you and also provides the possibility to run instrumental variables regressions; it theoretically supports any dimensionality of fixed effects. The key benefits of Simen Gaure's implementation are its flexibility, the use of C in the background for some of the computation, and its support for multicore processing, which speeds up the demeaning dramatically, especially as your samples get larger.

In Stata there are packages called reg2hdfe and reg3hdfe, developed by Guimaraes and Portugal (2010). As the names indicate, they support fixed effects in only two or three dimensions.

Let's see how the runtimes of reg2hdfe and lfe compare on the same dataset.

Comparing Performance of Stata and R

I am estimating the following specification

[math]y_{cist} = \alpha_{ci} + b_{sit} + X_{cist}'\beta + \epsilon_{cist}[/math]

where [math]\alpha_{ci}[/math] is a set of county-industry and [math]b_{sit}[/math] a set of state-by-industry-by-time fixed effects. There are about 3,000 counties in the dataset and 22 industries. Furthermore, there are 50 states and the time period covers about 50 quarters. This means that, in total, there are 3,000 x 22 = 66,000 county-industry fixed effects and 22 x 50 x 50 = 55,000 state-industry-time fixed effects to be estimated. The sample I work with has sufficient degrees of freedom to allow the estimation of such a specification; I work with roughly 3.7 million observations.

I have about 10 covariates in [math]X_{cist}[/math], i.e. control variables that vary at the county-industry-time level.

Performance in Stata

In order to time the length of a Stata run, you need to run
set rmsg on, which turns on a timer for each command that is run.

The command I run in Stata is

reg2hdfe logy x1-x10,  id1(sitq ) id2(id) cluster(STATE_FIPS )

You should go get a coffee, because this run is going to take quite a bit of time. In my case, it took t=1575.31, or just about 26 minutes.

Performance in R

In order to make the runs of reg2hdfe and lfe comparable, we need to set the tolerance level of the convergence criterion to the same value in both. The standard tolerance in Stata is 1e-6, while for the lfe package it is 1e-8. You can set the lfe option explicitly:

options(lfe.eps=1e-6)

The second change we need to make is to disallow lfe to use multiple cores, since reg2hdfe uses only a single thread. We can do this by setting:

options(lfe.threads=1)

Now let's run this in R:

system.time(summary(felm(log(y) ~ x1 + x2 +x3 +x4 + x5 + x6 + x7 +x8 + x9 + x10 + G(id)+G(sitq),
data=EMP, cluster=c("STATE_FIPS"))))

The procedure converges a lot quicker than in Stata…

 user system elapsed 
208.450 23.817 236.831 

It took a mere 4 minutes. Now suppose I run this in four separate threads…

user system elapsed 
380.964 23.540 177.520

 

Running this on four threads saves about one minute in processing time; not bad, but not too much gained; the gains from multi-threading increase, the more fixed-effects are added and the larger the samples are.

Classi-Compare of Raster Satellite Images – Before and After

For my research on the effect of power outages on fertility, we study a period of extensive power rationing that lasted for almost a whole year and affected most of Latin America, but Colombia in particular. The key difficulty was to determine which areas were exposed to the power outages and to what extent. This is not straightforward, since there is no household-level or even municipality-level consumption data.

But here is how R and satellite data can help. In particular, we study the night light series obtained from the Defense Meteorological Satellite Program, which has been discussed by Jeffrey before.

We simply look for abnormal variation in municipality level light-emitting intensity from 1992 to 1993.

Here is some code that generates raster maps using the rasterVis package and uses jQuery to generate a fancy before-and-after comparison highlighting the year-on-year changes in light intensity from 1992 to 1993.

###load the raster images (COLPOB, used below, is a SpatialPolygons object with
###the municipality boundaries, loaded separately)
library(raster)
library(rasterVis)

tif<-"F101992.v4b_web.stable_lights.avg_vis.tif"
f151 = raster(tif)

tif<-"F101993.v4b_web.stable_lights.avg_vis.tif"
f152 = raster(tif)

##crop a smaller window to plot

e = extent(-78,-72,2,8)
#e = extent(-80,-78,-4.6,-2)
rn= crop(f151, e)
rn2= crop(f152, e)

### do a logarithmic transformation to highlight places that receive not much, but some light.

rn<-log(rn+1)
png("1992.png")
p <- levelplot(rn, layers=1, margin=FALSE,col.regions = gray(0:100/100))
p + layer(sp.polygons(COLPOB, lwd=.25, linetype=2, col='darkgray'))
dev.off()

rn2<-log(rn2+1)
png("1993.png")
p <- levelplot(rn2, layers=1, margin=FALSE,col.regions = gray(0:100/100))
p + layer(sp.polygons(COLPOB, lwd=.25, linetype=2, col='darkgray'))
dev.off()

With this in place, you can create a fancy slider like the one I have seen on KFOR, comparing satellite pictures of towns before and after a tornado went through them.

The code is essentially borrowed from that TV station and it loads the JavaScript from their server; it is just a clever use of jQuery and maybe something that could be, or already is, implemented in an R reporting package? Do you know of such a function?

Anyways, all you need is a slider.html page that contains the code referring to the two picture sources; the code is simple:

 

<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
    <script src="http://cache.ltvcms.com/localtv/tornado2/js/jquery.classycompare.js"></script>
    <link rel="stylesheet" type="text/css" href="http://cache.ltvcms.com/localtv/tornado2/css/jquery.classycompare.css">
    <style type="text/css">.sample1 {width:725px; height:725px;}.sample2 {width:725px; height:725px;}.sample3 {width:725px; height:725px;}</style>
  </head>
  <body>
    <div id="wrapper">
      <div class="container_6 clearfix">
        <section class="main-section grid_6">
          <div class="main-content">
            <section class="clearfix">
              <div>
                <div class="container" style="position:relative">
                  <div class="sample1"> <img src="1992municio.png"
                      alt="Before"
                      height="725px"
                      width="725px">
                    <img src="1993municio.png"
                      alt="After"
                      height="725px"
                      width="725px">
                  </div>
                </div>
                <script>
                                        $(window).load(function() {
                                            $('.sample1').ClassyCompare({
                                                defaultgap:50,
                                                leftgap:0,
                                                rightgap:10,
                                                caption: true,
                                                reveal: 0.5
                                            });
                                        });
                                    </script> </div>
            </section>
          </div>
        </section>
      </div>
    </div>

  </body>
</html>

 

This is how it looks — I know the stuff is not perfectly aligned, partly because when cropping the picture I made a mistake and could not be bothered with fixing it.

Have fun!


rgeos: TopologyException – found non-noded intersection between..

I have been having some issues generating spatial unions and intersections using the rgeos package. The package is extremely powerful, as it serves as an R interface to the GEOS engine.

However, when working with shapefiles or polygons, quite often you will come across a whole range of errors, typically around topology exceptions. These occur in a whole range of applications — they typically throw errors of the type:

TopologyException: found non-noded intersection between LINESTRING (-59.0479 -1.85389, -59.048 -1.854) and LINESTRING (-59.0482 -1.854, -59.0477 -1.854) at -59.048000000000002 -1.8540000000000001

As is evident from this error, the problem occurs far down in the decimals, so it should not really be an error at all. Related issues may arise if you try to create a spatial intersection of two polygons that have different precisions.

What typically works in resolving these issues is a combination of two things.

 

  1. Round the polygon coordinates so that you end up with the same precision if you are creating spatial intersections of polygons coming from different sources. A function that implements this is, for example:

roundPolygons<-function(shptemp, digitss=3) {
  for(i in 1:length(shptemp)) {
    shptemp@polygons[[i]]@Polygons[[1]]@coords<-round(shptemp@polygons[[i]]@Polygons[[1]]@coords,digits=digitss)
  }
  shptemp
}
  2. A second quick fix is to create a buffer area around the polygons you are trying to intersect; rgeos has a predefined gBuffer function for this. You just need to specify the width of the buffer and then run the spatial union or intersection with the buffered objects, as in the sketch below.
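A minimal sketch of that buffer trick, assuming two SpatialPolygons objects poly1 and poly2 (illustrative names); a zero-width buffer is a common way to clean up self-intersections before an overlay.

library(rgeos)
poly1b <- gBuffer(poly1, byid = TRUE, width = 0)
poly2b <- gBuffer(poly2, byid = TRUE, width = 0)
inters <- gIntersection(poly1b, poly2b, byid = TRUE)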

In most applications the combination of these two solved all my rgeos spatial join issues.

Computing Maritime Routes in R

Thanks to the attention my paper on the cost of Somali piracy has received, a lot of people have asked me how I computed the maritime routes. It is not a very difficult task using R. The key ingredients are a map of the world that can be rasterized into a grid, an infinite cost of crossing assigned to all landmass, and, last but not least, a routine to compute the actual routes.

What packages do I need?

library(gdistance)
library(maptools)
data(wrld_simpl)
library(data.table)

The package gdistance does most of the actual work of computing the routes. The wrld_simpl map provides what is needed to generate a raster.

Generating a Raster

#create a raster from shape files
shp <- wrld_simpl
r <- raster()
r <-rasterize(shp, r, progress='text')

After the raster is generated, we can proceed by making landmass impassable for vessels.

#make all sea = -999
r[is.na(r)] <- -999
#this turns all landmass to missing
r[r>-999] <- NA
#assign unit cost to all grid cells in water
r[r==-999] <- 1

There are a few more things to do, such as opening up the Suez Canal and some other maritime passages; one needs to find the right grid cells for this task. In the next step we can transform the raster into a transition layer, an object from the gdistance package that essentially tells us how one can move from one cell to another. You can allow diagonal moves by letting the vessel move into all 8 adjacent grid cells. A geo-correction is also necessary, as the diagonals are longer distances than the straight-line moves.

tr <- transition(r, mean, directions = 8)
tr <- geoCorrection(tr, "c")

Well, and that's basically it. Of course, there are a few bits and pieces that need additional work, like adding heterogeneous costs, which one can imagine exist due to maritime currents and so on. Furthermore, there is a whole logic surrounding the handling of the output and its storage in a local database for further use.

But not to bore you with that: how can I obtain the route between A and B? This uses Dijkstra's algorithm and is called through the gdistance function "shortestPath".

AtoB <- shortestPath(tr, as.numeric(start[1:2]), as.numeric(end[1:2]), output = "SpatialLines")
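If you want the accumulated cost or the length of the route rather than the line object itself, two hedged one-liners, assuming the same tr, start, end and AtoB objects as above:

costDistance(tr, as.numeric(start[1:2]), as.numeric(end[1:2]))  # least-cost distance between the two points
library(sp)
SpatialLinesLengths(AtoB, longlat = TRUE)                       # route length in kilometres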

Using the resulting routes, you can then generate fancy graphs such as …