Ridge Regression and the Lasso

In my last post Which linear model is best? I wrote about using
stepwise selection as a method for selecting linear models, which turns
out to have some issues (see this article, and Wikipedia).
This post will be about two methods that slightly modify ordinary least
squares (OLS) regression – ridge regression and the lasso.

Ridge regression and the lasso are closely related, but only the lasso
has the ability to select predictors. Like OLS, ridge regression
attempts to minimize the residual sum of squares (RSS) of the model.
However, ridge regression includes an additional ‘shrinkage’ term – the
sum of the squared coefficient estimates – which shrinks the coefficient
estimates towards zero. The impact of this term is controlled by a
tuning parameter, lambda (determined separately, typically by
cross-validation). Two interesting implications of this design are that
when λ = 0 the OLS coefficients are returned, and as λ approaches ∞ the
coefficients approach zero.
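
Concretely, for a given λ the quantity being minimized is the RSS plus λ
times the sum of the squared coefficients. Here is a minimal sketch of
that objective written out by hand, purely for illustration (the
function and argument names are made up; glmnet does the real fitting
below and scales things slightly differently internally):

#ridge objective for a given lambda, written out by hand
ridge_objective <- function(beta0, beta, X, y, lambda) {
  rss <- sum((y - (beta0 + X %*% beta))^2) #the ordinary least squares criterion
  rss + lambda * sum(beta^2)               #plus the squared-coefficient shrinkage term
}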

To take a look at this, set up a model matrix (removing the intercept
column), store the response (Fertility) as y, and create a vector of
lambda values.

swiss <- datasets::swiss
x <- model.matrix(Fertility~., swiss)[,-1]
y <- swiss$Fertility
lambda <- 10^seq(10, -2, length = 100)

First, let's verify that when λ = 0 we get (essentially) the same
coefficients as the OLS model. While we're at it, we'll also create test
and training sets for later.

#create test and training sets
library(glmnet)
## Loading required package: Matrix

## Loading required package: foreach

## Loaded glmnet 2.0-10
set.seed(489)
train = sample(1:nrow(x), nrow(x)/2)
test = (-train)
ytest = y[test]

Fit your models.

#OLS
swisslm <- lm(Fertility~., data = swiss)
coef(swisslm)
##      (Intercept)      Agriculture      Examination        Education 
##       66.9151817       -0.1721140       -0.2580082       -0.8709401 
##         Catholic Infant.Mortality 
##        0.1041153        1.0770481
#ridge
ridge.mod <- glmnet(x, y, alpha = 0, lambda = lambda)
predict(ridge.mod, s = 0, exact = T, type = 'coefficients')[1:6,]
##      (Intercept)      Agriculture      Examination        Education 
##       66.9365901       -0.1721983       -0.2590771       -0.8705300 
##         Catholic Infant.Mortality 
##        0.1040307        1.0770215

The differences here are negligible. Let's see if we can use ridge to
improve on the OLS estimate.

swisslm <- lm(Fertility~., data = swiss, subset = train)
ridge.mod <- glmnet(x[train,], y[train], alpha = 0, lambda = lambda)
#find the best lambda from our list via cross-validation
cv.out <- cv.glmnet(x[train,], y[train], alpha = 0)
## Warning: Option grouped=FALSE enforced in cv.glmnet, since < 3 observations
## per fold
bestlam <- cv.out$lambda.min
#make predictions
ridge.pred <- predict(ridge.mod, s = bestlam, newx = x[test,])
s.pred <- predict(swisslm, newdata = swiss[test,])
#check MSE
mean((s.pred-ytest)^2)
## [1] 106.0087
mean((ridge.pred-ytest)^2)
## [1] 93.02157

Ridge performs better for this data according to the MSE.
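
It is also worth looking at the full cross-validation curve rather than
just lambda.min; cv.glmnet objects have a plot method for exactly this:

#cross-validated MSE as a function of log(lambda)
plot(cv.out)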

#a look at the coefficients
predict(ridge.mod, type = "coefficients", s = bestlam)[1:6,]
##      (Intercept)      Agriculture      Examination        Education 
##      64.90631178      -0.16557837      -0.59425090      -0.35814759 
##         Catholic Infant.Mortality 
##       0.06545382       1.30375306

As expected, most of the coefficient estimates are more conservative.

Let's have a look at the lasso. The big difference here is in the
shrinkage penalty – the lasso penalizes the sum of the absolute values
of the coefficient estimates rather than their squares, which is what
allows some coefficients to be shrunk all the way to zero.

lasso.mod <- glmnet(x[train,], y[train], alpha = 1, lambda = lambda)
lasso.pred <- predict(lasso.mod, s = bestlam, newx = x[test,])
mean((lasso.pred-ytest)^2)
## [1] 124.1039

The MSE is a bit higher for the lasso estimate. Let's check out the
coefficients.

lasso.coef <- predict(lasso.mod, type = 'coefficients', s = bestlam)[1:6,]
lasso.coef

Looks like the lasso places high importance on Education,
Examination, and Infant.Mortality. From this we also gain some
evidence that Catholic and Agriculture are not useful predictors for
this model. It is likely that Catholic and Agriculture do have some
effect on Fertility, though, since shrinking those coefficients toward
zero hurt the model's test MSE relative to ridge.
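
One caveat before moving on: bestlam was tuned by cross-validating the
ridge fit. A fairer comparison would let the lasso pick its own lambda
with alpha = 1. A quick sketch (cv.lasso and lasso.pred2 are just
illustrative names):

#tune lambda for the lasso itself rather than reusing the ridge value
cv.lasso <- cv.glmnet(x[train,], y[train], alpha = 1)
lasso.pred2 <- predict(lasso.mod, s = cv.lasso$lambda.min, newx = x[test,])
mean((lasso.pred2 - ytest)^2)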

There is plenty more to delve into here, but I'll leave the details to
the experts. I am always happy to have your take on the topics I write
about, so please feel free to leave a comment or contact me. Oftentimes
I learn just as much from you all as I do in researching the topic I
write about.

I think the next post will be about more GIS stuff – maybe on rasters or
point pattern analysis.

Thank you for reading!

Kiefer

Which linear model is best?


Recently I have been working on a Kaggle competition where participants are tasked with predicting Russian housing prices. In developing a model for the challenge, I came across a few methods for selecting the best regression
model for a given dataset. Let’s load up some data and take a look.

library(ISLR)
set.seed(123)
swiss <- data.frame(datasets::swiss)
dim(swiss)
## [1] 47  6
head(swiss)
##              Fertility Agriculture Examination Education Catholic
## Courtelary        80.2        17.0          15        12     9.96
## Delemont          83.1        45.1           6         9    84.84
## Franches-Mnt      92.5        39.7           5         5    93.40
## Moutier           85.8        36.5          12         7    33.77
## Neuveville        76.9        43.5          17        15     5.16
## Porrentruy        76.1        35.3           9         7    90.57
##              Infant.Mortality
## Courtelary               22.2
## Delemont                 22.2
## Franches-Mnt             20.2
## Moutier                  20.3
## Neuveville               20.6
## Porrentruy               26.6

This dataset contains Swiss fertility rates along with several
socioeconomic indicators; each variable is expressed as a percentage.
Suppose we want to predict the fertility rate. In order to assess our
prediction ability, create test and training data sets.

index <- sample(nrow(swiss), 5)
train <- swiss[-index,]
test <- swiss[index,]

Next, have a quick look at the data.

par(mfrow=c(2,2))
plot(train$Education, train$Fertility)
plot(train$Catholic, train$Fertility)
plot(train$Infant.Mortality, train$Fertility)
hist(train$Fertility)

[Plots: Fertility against Education, Catholic, and Infant.Mortality, plus a histogram of Fertility]

Looks like there could be some interesting relationships here. The
following block of code will take a model formula (or model matrix) and
search for the best combination of predictors.

library(leaps)
leap <- regsubsets(Fertility~., data = train, nbest = 10)
leapsum <- summary(leap)
plot(leap, scale = 'adjr2')

[Plot: regsubsets models ranked by adjusted R-squared]

According to the adjusted R-squared value (larger is better), the two
best models are the one with all predictors and the one with all
predictors except Examination. Both have adjusted R-squared values around 0.69 – a decent fit. Fit these models so we can evaluate them further.

swisslm <- lm(Fertility~., data = train)
swisslm2 <- lm(Fertility~.-Examination, data = train)
#use summary() for a more detailed look.
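
As a side note, the regsubsets summary stores the adjusted R-squared
values, so you can also pull the winner out programmatically instead of
reading it off the plot (best is just a throwaway name here):

#index of the best model by adjusted R-squared, and its coefficients
best <- which.max(leapsum$adjr2)
leapsum$adjr2[best]
coef(leap, best)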

First we'll compute the mean squared error (MSE).

mean((train$Fertility-predict(swisslm, train))[index]^2)
## [1] 44.21879
mean((train$Fertility-predict(swisslm2, train))[index]^2)
## [1] 36.4982
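
As an aside, the held-out test set built earlier can also be scored
directly; something like this (output not shown) gives a test-set MSE
for each fit:

#test-set MSE for the full model and the model without Examination
mean((test$Fertility - predict(swisslm, test))^2)
mean((test$Fertility - predict(swisslm2, test))^2)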

We're looking for a low value of MSE here, so the model without
Examination actually does marginally better. We can also look at the
Akaike information criterion (AIC) for more information. Lower is better
here as well.

library(MASS)
step1 <- stepAIC(swisslm, direction = "both")
step1$anova
## Stepwise Model Path 
## Analysis of Deviance Table
## 
## Initial Model:
## Fertility ~ Agriculture + Examination + Education + Catholic + 
##     Infant.Mortality
## 
## Final Model:
## Fertility ~ Agriculture + Education + Catholic + Infant.Mortality
## 
## 
##            Step Df Deviance Resid. Df Resid. Dev      AIC
## 1                                  36   1746.780 168.5701
## 2 - Examination  1 53.77608        37   1800.556 167.8436

Here, the model without Examination scores lower than the full
model. It seems that both models are evenly matched, though I might be
inclined to use the model without Examination.
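
Base R's AIC() will also line the two models up directly. Its values are
on a slightly different scale than stepAIC's (they differ by an additive
constant), so only the gap between the models matters:

#lower AIC is preferred; only the difference between the models is meaningful
AIC(swisslm, swisslm2)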

Since we sampled our original data to make the train and test datasets,
the difference in these tests is subject to change based on the training
data used. I encourage anyone who wants to test themselves to change the
set.seed at the beginning and evaluate the above results again. Even
better: use your own data!

If you learned something or have questions feel free to leave a comment
or shoot me an email.

Happy modeling,

Kiefer

Using R as a GIS


In real estate, spatial data is the name of the game. Countless programs
in other domains utilize the power of this data, which is becoming more
prevalent by the day.

In this post I will go over a few simple but powerful tools to get you
started using geographic information in R.

#First, some libraries
#install.packages('GISTools', dependencies = T)
library(GISTools)

GISTools provides an easy-to-use method for creating shading schemes
and choropleth maps. Some of you may have heard of the sp package,
which adds numerous spatial classes to the mix. There are also functions
for analysis and making things look nice.

Let’s get rolling: load the vulgaris dataset, which contains location
information for Syringa vulgaris (the lilac) observation stations and US
states. This code plots the states and the vulgaris points.

data("vulgaris") #load data
par(mar = c(2,0,0,0)) #set margins of plot area
plot(us_states)
plot(vulgaris, add = T, pch = 20)

[Plot: US state boundaries with vulgaris observation points]

One thing to note here is the structure of these objects. us_states is
a SpatialPolygonsDataFrame, which stores the information for plotting
shapes (like a shapefile) within its attributes. vulgaris, by contrast,
is a SpatialPointsDataFrame, which contains data for plotting individual
points. Much as you access a data.frame's columns with $, you access the
information these objects harbor through their slots with @.
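
If you're curious what else one of these objects carries, slotNames()
lists its slots; the attribute table used below lives in @data:

#list the S4 slots of the points object
slotNames(vulgaris)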

kable(head(vulgaris@data))

     Station Year     Type Leaf Bloom Station.Name State.Prov  Lat   Long Elev
3695   61689 1965 Vulgaris  114   136     COVENTRY         CT 41.8 -72.35  146
3696   61689 1966 Vulgaris  122   146     COVENTRY         CT 41.8 -72.35  146
3697   61689 1967 Vulgaris  104   156     COVENTRY         CT 41.8 -72.35  146
3698   61689 1968 Vulgaris   97   134     COVENTRY         CT 41.8 -72.35  146
3699   61689 1969 Vulgaris  114   138     COVENTRY         CT 41.8 -72.35  146
3700   61689 1970 Vulgaris  111   135     COVENTRY         CT 41.8 -72.35  146

Let’s take a look at some functions that use this data.

newVulgaris <- gIntersection(us_states, vulgaris, byid = TRUE)
kable(head(data.frame(newVulgaris)))

                x     y
3 4896     -67.65 44.65
3 4897     -67.65 44.65
3 4898     -67.65 44.65
3 4899     -67.65 44.65
3 4900     -67.65 44.65
3 4901     -67.65 44.65

gIntersection, as you may have guessed from the name, returns the
intersection of two spatial objects. In this case, we are given the
points from vulgaris that are within us_states. However, the rest of
the vulgaris data has been stripped from the resulting object. We’ve
got to jump through a couple of hoops to get that information back.

newVulgaris <- data.frame(newVulgaris)
tmp <- rownames(newVulgaris)
tmp <- strsplit(tmp, " ")
tmp <- (sapply(tmp, "[[", 2))
tmp <- as.numeric(tmp)
vdf <- data.frame(vulgaris)
newVulgaris <- subset(vdf, row.names(vdf) %in% tmp)

     Station Year     Type Leaf Bloom Station.Name State.Prov  Lat   Long Elev Long.1 Lat.1 optional
3695   61689 1965 Vulgaris  114   136     COVENTRY         CT 41.8 -72.35  146 -72.35  41.8     TRUE
3696   61689 1966 Vulgaris  122   146     COVENTRY         CT 41.8 -72.35  146 -72.35  41.8     TRUE
3697   61689 1967 Vulgaris  104   156     COVENTRY         CT 41.8 -72.35  146 -72.35  41.8     TRUE
3698   61689 1968 Vulgaris   97   134     COVENTRY         CT 41.8 -72.35  146 -72.35  41.8     TRUE
3699   61689 1969 Vulgaris  114   138     COVENTRY         CT 41.8 -72.35  146 -72.35  41.8     TRUE
3700   61689 1970 Vulgaris  111   135     COVENTRY         CT 41.8 -72.35  146 -72.35  41.8     TRUE

Look familiar? Now we’ve got a data frame with the clipped vulgaris
values and original data preserved.
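
As an aside, sp can also do this kind of clip in one step: indexing a
points layer by a polygons layer performs a spatial selection and keeps
the attribute data, provided the two layers share a projection. A quick
sketch (newVulgaris2 is just an illustrative name):

#spatial subsetting: vulgaris points that fall inside us_states, attributes intact
newVulgaris2 <- vulgaris[us_states, ]
head(newVulgaris2@data)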

#rebuild a SpatialPointsDataFrame from the clipped data (coordinates taken from the Long/Lat columns)
vulgarisSpatial <- SpatialPointsDataFrame(cbind(newVulgaris$Long, newVulgaris$Lat), newVulgaris, proj4string = CRS(proj4string(us_states)))

After storing our clipped data frame as a SpatialPointsDataFrame, we can
again make use of it - in this case we add a shading scheme to the
`vulgaris` points.

#create a shading scheme for elevation (number of classes and palette assumed here)
shades <- auto.shading(vulgarisSpatial$Elev, n = 5)
shades$cols <- brewer.pal(5, "Blues")
plot(us_states)
choropleth(vulgarisSpatial, vulgarisSpatial$Elev, shading = shades, add = T, pch = 20)

[Plot: vulgaris points shaded by elevation]

Colors are pretty, but what do they mean? Let’s add a legend.

us_states@bbox #Get us_states bounding box coordinates.
##           min       max
## r1 -124.73142 -66.96985
## r2   24.95597  49.37173
plot(us_states)
choropleth(vulgarisSpatial, vulgarisSpatial$Elev, shading = shades, add = T, pch = 20)
par(xpd = TRUE) #Allow plotting outside of plot area.
choro.legend(-124, 30, shades, cex = .75, title = "Elevation in Meters") #Plot legend in bottom left. Takes standard legend() params.

[Plot: shaded vulgaris points with elevation legend]

It looks like there’s a lot going on in the Northeastern states. For a
closer look, create another clipping (like the one above) and plot it.
Using the structure below, we can create a selection vector. I have
hidden most of the code since it is repetitive (the full version is on
GitHub).

index <- '...' #selection vector for the Northeastern states (full code hidden; vulgarisNE comes from the hidden clipping step)
plot(us_states[index,])
choropleth(vulgarisNE, vulgarisNE$Elev, shading = shades, add = T, pch = 20)
par(xpd = T)
choro.legend(-73, 39.75, shades, cex = .75, title = "Elevation in Meters")

[Plot: Northeastern states with shaded vulgaris points]

Hopefully this has been a useful introduction (or refresher) on spatial
data. I always learn a lot in the process of writing these posts. If you
have any ideas or suggestions please leave a comment or feel free to
contact me!

Happy mapping,

Kiefer

Take your data frames to the next level.

 


In R-rockstar Hadley Wickham’s book R for Data Science (free to read online), the section on model building elaborates on something pretty cool that I had no idea about – list columns.

Most of us have probably seen the following data frame column format:

df <- data.frame("col_uno" = c(1,2,3),"col_dos" = c('a','b','c'), "col_tres" = factor(c("google", "apple", "amazon")))

And the output:

df
##   col_uno col_dos col_tres
## 1       1       a   google
## 2       2       b    apple
## 3       3       c   amazon

This is an awesome way to organize data and one of R’s strong points. However, we can use list functionality to go deeper. Check this out:

library(tidyverse)
library(datasets)
head(iris)
##   Sepal.Length Sepal.Width Petal.Length Petal.Width Species
## 1          5.1         3.5          1.4         0.2  setosa
## 2          4.9         3.0          1.4         0.2  setosa
## 3          4.7         3.2          1.3         0.2  setosa
## 4          4.6         3.1          1.5         0.2  setosa
## 5          5.0         3.6          1.4         0.2  setosa
## 6          5.4         3.9          1.7         0.4  setosa
nested <- iris %>%
 group_by(Species) %>%
 nest()
nested
## # A tibble: 3 × 2
##      Species              data
##       <fctr>            <list>
## 1     setosa <tibble [50 × 4]>
## 2 versicolor <tibble [50 × 4]>
## 3  virginica <tibble [50 × 4]>

Using nest we can compartmentalize our data frame for readability and more efficient iteration.  As a simple example, we can use map from the purrr package to compute the mean of each column in our nested data.

means <- map(nested$data, colMeans)
means
## [[1]]
## Sepal.Length  Sepal.Width Petal.Length  Petal.Width 
##        5.006        3.428        1.462        0.246 
## 
## [[2]]
## Sepal.Length  Sepal.Width Petal.Length  Petal.Width 
##        5.936        2.770        4.260        1.326 
## 
## [[3]]
## Sepal.Length  Sepal.Width Petal.Length  Petal.Width 
##        6.588        2.974        5.552        2.026
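
This is also where the model-building chapter takes list columns: you
can map a modeling function over the nested data and keep each fit next
to its group. A rough sketch (nested_models and the model column name
are just illustrative):

#fit one linear model per species and store the fits in a list column
nested_models <- nested %>%
  mutate(model = map(data, ~ lm(Sepal.Length ~ Petal.Length, data = .x)))
#extract the R-squared of each per-species fit
map_dbl(nested_models$model, ~ summary(.x)$r.squared)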

Once you’re done messing around with data-ception, use unnest to revert your data back to its original state.

head(unnest(nested))
## # A tibble: 6 × 5
##   Species Sepal.Length Sepal.Width Petal.Length Petal.Width
##    <fctr>        <dbl>       <dbl>        <dbl>       <dbl>
## 1  setosa          5.1         3.5          1.4         0.2
## 2  setosa          4.9         3.0          1.4         0.2
## 3  setosa          4.7         3.2          1.3         0.2
## 4  setosa          4.6         3.1          1.5         0.2
## 5  setosa          5.0         3.6          1.4         0.2
## 6  setosa          5.4         3.9          1.7         0.4

I was pretty excited to learn about this property of data.frames and will definitely make use of it in the future. If you have any neat examples of nested dataset usage, please feel free to share in the comments.  As always, I’m happy to answer questions or talk data!

Kiefer Smith

Mapping Housing Data with R

What is my home worth?  Many homeowners in America ask themselves this question, and many have an answer.  What does the market think, though?  The best way to estimate a property’s value is by looking at other, similar properties that have sold recently in the same area – the comparable sales approach.  In an effort to allow homeowners to do some exploring (and because I needed a new project), I developed a small Shiny app with R.

My day job allows me access to the local multiple listing service, which provides a wealth of historic data.  The following project makes use of that data to map real estate that has sold near Raleigh, NC in the past six months.  Without getting too lost in the weeds I’ll go over a few key parts of the process.  Feel free to jump over to my GitHub page to check out the full source code.  Click here to view the app!

  1. Geocode everything.  The data did not come with latitude and longitude coordinates, so we’ll have to do some geocoding.  I haven’t found an efficient way to do this in R, so, like in the mailing list example, I’ll use QGIS to process my data and return a .csv for each town I’m interested in.  (For a rough R-based alternative, see the ggmap sketch after this list.)
  2. Set up your data.  To make sure that everything runs smoothly later on, we’ve got to import our data using readr and make sure each attribute is typed properly.
    library(readr)
    apex <- read_csv("apex2.csv")
    df <- apex #the cleaning below is written against a generic df so it can be reused for each city
    
    #Remove non-character elements from these columns.
    df$`Sold Price` <- as.numeric(gsub("[^0-9]","",df$`Sold Price`))
    df$`List Price` <- as.numeric(gsub("[^0-9]","",df$`List Price`))
    
    #Some re-typing for later.
    df$Fireplace <- as.numeric(df$Fireplace)
    df$`New Constr` <- as.factor(df$`New Constr`)
    
    #Assign some placeholders.
    assign("latitude", NA, envir = .GlobalEnv)
    assign("longitude", NA, envir = .GlobalEnv)
    
  3. Get info from the user.  The first thing the app wants you to do is give it some characteristics about the subject property (the property you are interested in valuing).  A function further down uses these inputs to produce a map.
     #What city's dataset are we using?
     selectInput("city", label = "City", c("Apex", "Cary", "Raleigh"))
    
     #Get some info.
     textInput("address",label = "Insert Subject Property Address", value = "2219 Walden Creek Drive"),
     numericInput("dist", label = "Miles from Subject", value = 5, max = 20),
     numericInput("footage",label = "Square Footage", value = 2000),
     selectInput("acres",label = "How Many Acres?", acresf)
    
     #Changes datasets based on what city you choose on the frontend.
     #This expression is followed by two more else if statements.
    observeEvent(input$city, {
     if(input$city == "Apex") {
     framework_retype(apex)
     cityschools <-schoolsdf$features.properties %>%
     filter(ADDRCITY_1 == "Apex")
     assign("cityschools", cityschools, envir = .GlobalEnv)
    
     #Draw the map on click.
     observeEvent(input$submit, {
     output$map01 <- renderLeaflet({distanceFrom(input$address, input$footage, input$acres,tol = .15, input$dist)
     })
     })
    
    
  4. Filter the data.  The distanceFrom function above uses dplyr to filter the properties in the selected city by square footage, acreage, and distance from the subject property.  The tol argument is used to give a padding around square footage – few houses match exactly in that respect.
     #Filter once.
     houses_filtered <- houses %>%
      filter(Acres == acres)%>%
      filter(LvngAreaSF >= ((1-tol)*sqft)) %>%
      filter(LvngAreaSF <= ((1+tol)*sqft))
    
     #This grabs lat & long from Google.
     getGeoInfo(subj_address)
     longitude_subj <- as.numeric(longitude)
     latitude_subj <- as.numeric(latitude)
    
     #Use the comparable house locations.
     xy <- houses_filtered[,1:2]
     xy <- as.matrix(xy)
    
     #Calculate distance.
     d <- spDistsN1(xy, c(longitude_subj, latitude_subj), longlat = TRUE)
     d <- d/1.60934
     d <- substr(d, 0,4)
    
     #Filter again.
     distance <- houses_filtered %>%
      filter(distanceMi <= dist)
    
  5. Draw the map. The most important piece, the map, is drawn using Leaflet.  I have the Schools layer hidden initially because it detracts from the main focus – the houses.
    map <- leaflet() %>%
     addTiles(group = "Detailed") %>%
     addProviderTiles("CartoDB.Positron", group = "Simple") %>%
     addAwesomeMarkers(lng = longitude, lat = latitude, popup = subj_address, icon = awesomeIcons(icon='home', markerColor = 'red'), group = "Subject Property") %>%
     addAwesomeMarkers(lng = distance$X, lat = distance$Y, popup = paste(distance$Address,distance$`Sold Price`, distance$distanceMi, sep = ""), icon = awesomeIcons(icon = 'home', markerColor = 'blue'), group = "Comps")%>%
     addAwesomeMarkers(lng = schoolsdf$long, lat = schoolsdf$lat, icon = awesomeIcons(icon = 'graduation-cap',library = 'fa', markerColor = 'green', iconColor = '#FFFFFF'), popup = schoolsdf$features.properties$NAMELONG, group = "Schools")%>%
      addLayersControl(
       baseGroups = c("Simple", "Detailed"),
       overlayGroups = c("Subject Property", "Comps", "Schools"),
       options = layersControlOptions(collapsed = FALSE))
    
    map <- map %>% hideGroup(group = "Schools") 
  6. Regression model.  The second tab at the top of the page leads to more information input that is used in creating a predictive model for the subject property.  The implementation is somewhat messy, so if you’d like to check it out, the code is at the bottom of app.R in the GitHub repo.

That’s it!  It took a while to get all the pieces together, but I think the final product is useful and I learned a lot along the way.  There are a few places I want to improve: simplify the re-typing sections, make elements refresh without clicking submit, among others.  If you have any questions about the code please leave a comment or feel free to send me an email.

Happy coding,

Kiefer Smith

 

 

 

 

Mapping Happiness and Isoline Functions


Most of the emails I get are either work-related or spam.  Sometimes the spam turns out to be interesting.  About once a month I’ll get a digest of articles from Teleport.  This month there was an article from Forbes about mapping global happiness using news headlines.  I’m assuming the author used natural language processing of some sort, as he mentions evaluating the context in which each location is written about (sentiment analysis).

Not entirely sure how accurate the methodology is (and the final product is somewhat hard to draw conclusions from), but it’s a super cool concept nonetheless.  Unfortunately, the author did not leave us with a GitHub repo to pore through, but did mention making use of Google’s BigQuery platform and Carto’s mapping system.

Being the fantastic procrastinator that I am, I took a look at Carto’s services.  Turns out they have a pretty cool feature (with an API) that creates time and distance isolines.  Might try using something like that in an upcoming project.  Stay tuned!  Or check out my GitHub for a sneak peek.

R Weekly


During my Monday morning ritual of avoiding work, I found a publication that is written in R, for people who use R – R Weekly.  The authors do a pretty awesome job of aggregating useful, entertaining, and informative content about what’s happening around our favorite programming language.  Check it out, give the authors some love on GitHub, and leave a like if you find something useful there.

Have a good week,

Kiefer Smith

Creating a Mailing List in QGIS and R

My day job as a real estate agent requires a myriad of skills, ranging from accounting to negotiation to business analysis.  Frequently (about every three months) I whip out my marketing skills to advertise my business.  This time I decided to send out postcards to an entire neighborhood in which I had sold homes recently.  Typically, agents will buy a mail route from the post office and hand over their postcards.  In the spirit of frugality and proving a point, I cracked my knuckles and went hunting for data.

Get the shapefiles.  Wake County Open Data (or your local open data hub) has a wealth of county-level data including subdivision boundaries and individual address points.  Download both shapefiles and  load them into your favorite GIS program.  This step can probably be done in R, but I find using QGIS fairly intuitive and much faster at plotting large shapefiles.


Filter the addresses.  After loading the address and subdivision shapefiles into QGIS, clip the address shapefile using the subdivision shapefile to save the addresses of interest in a new layer.  Save that puppy as a .csv and we can load it up in R.


Manipulate in R.  Now we’ve got the info we want.  A few lines of code will give us something the post office (or Excel) will understand.


walden_creek <- read_csv("~/Desktop/walden creek.csv")
attach(walden_creek)
adds <- paste(FULLADDR, POSTAL_CIT, "NC", "27523", sep = ",")
detach(walden_creek)
write.table(adds, "adds.csv", sep = ",")
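
One small tweak if the recipient is picky about formatting: write.table's defaults include row names and quotes, which you may want to drop (readr::write_csv is another option):

#drop row names and quotes for a cleaner address file
write.table(adds, "adds.csv", sep = ",", row.names = FALSE, col.names = FALSE, quote = FALSE)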

Short and sweet, but I thought this was an interesting way to use data for a practical purpose.  People seem to be using R in exciting ways these days – if you see any creative, different projects please share.

– Kiefer Smith

Raleigh Permit Trends

Click here for an interactive version.

I’ve been looking into development trends in Raleigh lately using open data.  Here’s a historical look at building permits over the past seven years using Plotly.  What trends do you see?

In my development environment the graph was stacked bars (far easier on the eyes), but when I uploaded it to the hosting site the bars ended up side-by-side.  Also, I could probably have incorporated some sort of sorting algorithm to make the bars look nicer.
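
If the stacking gets lost on upload, one thing that may help is setting the bar mode in the figure definition itself rather than relying on defaults, so the layout travels with the chart.  A sketch using plotly's R interface (permits, year, count, and permit_type are placeholder names, not my actual data):

library(plotly)
#permits, year, count, and permit_type are placeholder names for the permit data
p <- plot_ly(permits, x = ~year, y = ~count, color = ~permit_type, type = "bar") %>%
  layout(barmode = "stack") #force stacked bars in the figure definition
p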

Have suggestions for a visualization?  Leave a comment!

-Kiefer