How we built Vox’s audience-driven election projects

Two developers discuss how the sausage gets made

Election coverage is often not very participatory. But this election season, Vox published several projects that relied on audience engagement. This conversation between Soo Oh (News App Developer, Vox) and Kavya Sukumar (Senior Full Stack Engineer, Storytelling Studio) focuses on two of those projects — Make your election result predictions and the Election Day emotion tracker — and the thinking and challenges that went into making them.

Making a shareable electoral college map

Soo: OK, so first off — I want to thank you for all your help and camaraderie on the election projects. I know it was a struggle with the time difference and extreme last minute-ness of some of them. At least, it felt like it could have been a struggle. Was it?

Kavya: It was a lot of fun — though a little stressful at times — working on these projects on a tight deadline. I am not a big fan of election maps. I know you have some strong opinions in favor of them. But I enjoyed helping you make the electoral map more than I expected.

How did you come up with the idea for this project that allows users to make their own predictions?

A screenshot from "Make your election result predictions."
Make your election result predictions on Vox

Soo: I was approached by Lauren Katz (Social Media Manager, Vox) and Agnes Mazur (Audience Engagement Manager, Vox) about two weeks before Election Day. The thinking was, readers might appreciate something to… to do, if that makes sense. Like, there’s a period closer to the election when people are kind of waiting around. (Although, of course, we ended up having some late-breaking surprises in the last days).

Kavya: The first time I heard about the project was when we — Ryan Mark, Katie O'Dowd, you, and me — met to explore options for supporting this project's data needs. If I remember correctly, you were thinking of making it similar to the Midwest and South map projects you had made.

Soo: Right! I wondered if we should save the data somehow so that it could be shared. I hadn't seen a news site offer a "make your own electoral map" that you could also share. I think Vox's social team was okay if readers couldn't share a permalink to their own maps — the assumption being that people would take screenshots anyway. But, for me, then the interactive would have a different call to action: not "make your own electoral map" but something more like… "use this map to calculate the odds / figure out the Electoral College." I was surprised when I did some searches and could not find a news organization that offered a way to make your own map. Or maybe they had a map or other way to parcel out the Electoral College, but you couldn't share it with others without a manual screenshot.

I think the encoded permalink was what made the map incredibly successful.

Animated gif of a Twitter search for people who shared their election result prediction.
Tweets from people who shared their election result prediction, with user names and avatars blurred out.

Kavya: So for you, the project was always about users sharing their predictions with their network.

Soo: For me, that was the most interesting part of the project. Like I mentioned, there are already other tools that let you make a map but don't let you share it. I definitely want to call out one site in particular. It's not a news organization, but it's the only site I found that provided permalinks to server-side rendered maps and inserted the predicted map into the page's social share image. We didn't have the resources to provide the social share image.

And, in retrospect, I’m not sure if the social share image makes the fact that it’s a user-generated map very clear, anyway.

At left, the other site's Twitter share card; at right, Vox's Twitter share card.

Kavya: That server-side aspect is often the challenge with a lot of our projects. We try to keep things as static as possible, with just HTML, CSS, and JavaScript. We rarely have active servers powering our apps and graphics, and when they exist, they are usually shared infrastructure like our data service, Kinto. But this map project was completely client-side. It did not even use our data store.

Soo: In the end, it seemed overly complicated to save people’s data in a database somewhere since we didn’t have to try to calculate some kind of reader aggregate, like we did with the Midwest and South maps. I think you were the one to give guidance on doing a base-64 encoding of people’s selections, and encoding that into a shared URL, then re-interpreting the page URL to load the results. I had seen that before, but I’m not sure where.

Kavya: With brainstorming meetings, it is hard to say who came up with the original idea. I want to say it was Ryan who asked why we needed the data on a server at all, and you came up with keeping it in the URL. It was a productive back-and-forth where we built on each other's ideas. One of you came up with the URL parameter. I suggested making the order of states static. You raised concerns about keeping the data completely open in the URL, and then we all agreed on base-64 encoding the URL parameter to make it just a little harder to fiddle with. I love it when meetings are productive.

A demonstration of the URL query and base-64 encoding: the Twitter share text carried an encoded URL, and the data query string in that URL decoded to a string listing the winning party for each state, alphabetized by abbreviation.
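The scheme can be sketched roughly like this. The state order, party codes, and function names below are illustrative, not the project's actual code:

```javascript
// Sketch of the shareable-URL scheme: one character per state, in a
// fixed alphabetized order, base-64 encoded into a query parameter.
const STATES = ['AK', 'AL', 'AR', 'AZ', 'CA']; // ...plus the rest, alphabetized
// Illustrative party codes: 'd' = Democratic, 'r' = Republican, 'u' = undecided

// Encode a { state: code } map into a base-64 query-string value.
function encodePicks(picks) {
  const raw = STATES.map(st => picks[st] || 'u').join('');
  return btoa(raw); // base-64 makes the string a little harder to fiddle with
}

// Decode the query-string value back into a { state: code } map.
function decodePicks(encoded) {
  const raw = atob(encoded);
  const picks = {};
  STATES.forEach((st, i) => { picks[st] = raw[i] || 'u'; });
  return picks;
}
```

Because the state order is static, the payload stays tiny (one character per state) and survives any copy-pasted URL.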

Soo: I didn’t get around to encoding and decoding the data string in the URL until the night before the deadline. You ended up writing the bulk of that code. Do you remember what the big issues were? I think it was Twitter’s tweet intent and translating encoded URIs. Pym was also really essential to this whole apparatus. I mean, we’re using Pym for all our iframed graphics, but I had no idea you could send messages from the parent page to the iframe. I want to use that feature all the time now.

Kavya: Yes. We use Pym for all our iframe embeds. A huge shoutout to the NPR team for building and maintaining it. It has a very neat eventing and messaging capability. I first discovered Pym’s `sendMessage` feature when I was working on an Autotune blueprint for quizzes. We had to get the URL of the story in which the quiz was embedded for social media shares.

While working on that blueprint I did a lot of Twitter share URL building. So I had encountered most of the issues that came up. Except it was the first time we were trying to send a base64 string as a parameter. It took a couple tries with encoding before I got it right.

I will be honest. It took more than a couple tries. I code lazy. So I ran all permutations of encodings in the browser console and picked the one that worked. I ended up with more than 20 Twitter share windows open in the end.
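Most of those encoding headaches come down to base-64's `+`, `/`, and `=` characters, which are not safe in a query string and must be percent-encoded at every layer. A rough sketch of the kind of share-URL construction involved (the URLs and prediction string here are made up):

```javascript
// Building a Twitter web-intent URL that carries a base-64 parameter.
function buildTweetUrl(text, shareUrl) {
  // encodeURIComponent escapes base-64's '+', '/', and '=' characters
  // (to %2B, %2F, %3D), so the parameter survives nesting one URL
  // inside another.
  return 'https://twitter.com/intent/tweet' +
    '?text=' + encodeURIComponent(text) +
    '&url=' + encodeURIComponent(shareUrl);
}

// Hypothetical encoded prediction; the permalink itself percent-encodes
// the base-64 value once, and the tweet intent encodes it a second time.
const encoded = btoa('ruuud');
const shareUrl = 'https://example.com/map?picks=' + encodeURIComponent(encoded);
const tweetUrl = buildTweetUrl('My election prediction', shareUrl);
```

Missing either layer of encoding is exactly the kind of bug that only shows up when a prediction string happens to end in `=` padding, which is why trying every permutation in the console was not a bad strategy.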

Soo: HAH. I love that. It was really great when you stepped in. It was past 8 p.m. ET the night before it was supposed to go out. My energy was flagging real hard by that point.

Kavya: That is an advantage of being on the Pacific coast. It was only 5 p.m. for me. I still had some more lines of code left in me.

Soo: You saved me from having to stay up until 3 a.m. to figure that out. Probably longer! Plus, it was good to have some QA. Even though a lot of the code is stuff I reused from previous projects and have QA’d before, there was a bunch of stuff that had never been QA’d, like pre-filling out the states with the decoded URL.

Kavya: I loved that part where we were ad-hoc jumping on and off calls and using Slack. Half of our Slack conversation around QA was one of us saying "pull now" after we made changes to the GitHub repo.

Soo: It was fun! I discovered you were a masochist who secretly loved election projects.

Kavya: Lol. Shhh!

Soo: Which later saved me from coworker guilt about the "election feels" project.

Kavya: I was excited about the election feels project from the first time I heard about it.

The Election Day emotion tracker and its custom storage database

Soo: I’m so glad you were because I was constantly second-guessing whether it was an interesting idea. So having your support was helpful.

A screenshot from the first hour of the Emotion Tracker.
The Election Day Emotion Tracker asked users to submit their feelings every hour.

In the end, we called it the Election Day Emotion Tracker. It was a way for readers to submit how they were feeling and see it mapped into a space showing how all the other submitters felt. You were limited to submitting one emotion per hour so we could see what the prevailing mood was for each hourly interval (e.g., the 7 p.m. interval) instead of constantly trying to recalculate over the last 60 minutes (or whatever arbitrary window).

The browser cached the reader's own emotions in localStorage, and then submitted the emotion to be collected into the whole user database of emotions.
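That flow can be sketched roughly as follows. The storage object is injected here so the guard is easy to test; in the browser it would be `window.localStorage`. The key scheme and function name are invented for illustration:

```javascript
// Sketch of the one-submission-per-hour guard. `storage` is anything
// with getItem/setItem, e.g. window.localStorage in the browser.
function submitOncePerHour(storage, emotion, now = new Date()) {
  const key = 'emotion-hour-' + now.getUTCHours();
  if (storage.getItem(key) !== null) {
    return false; // already submitted during this hour's interval
  }
  // Cache the reader's own pick locally. (The real app would also POST
  // the emotion to the shared datastore at this point.)
  storage.setItem(key, emotion);
  return true;
}
```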

Kavya: Apart from the concept itself, it was going to be a test of our datastore Kinto. When Storytelling Studio was called the Editorial Tools team, our focus was largely on "tooling up." We built Autotune, built and continue to maintain the apps rig, and then we added Kinto when we saw a gap in our data storage capabilities.

We wanted to standardize our data infrastructure and integrate it closely with the rig. Kinto is an open-source JSON storage service from Mozilla. It comes with all the dynamic goodness of JSON: you don't have to stick to a data schema, and it is easy to work with from JavaScript.

But a big disadvantage we saw was that it lacked basic arithmetic operations, like count and sum, that most databases provide. We are mostly a Ruby shop, and since Kinto does not have an official Ruby client, we ended up writing one, which came in quite handy on this project.

In some ways, for me the election emotion tracker was about realizing the gap between building tools and using them. We needed the total count of emotions by candidate on the hour. And we also wanted to show the running total for the current hour.

To work around the lack of built-in arithmetic operations on the data, I set up a scheduled job on Heroku to count the submissions. Having to find totals really fast made me rewrite large parts of the client. We ended up running the job every two minutes for the current hour and on the hour for the previous hour.

Soo: We had 24 user submission buckets, one for every hour the emotion tracker was live. Why did we end up doing this?

A notebook where Soo kept track of submission buckets.
Keeping track of hours and submission buckets.

Kavya: One of the biggest questions we had was the expected traffic. We made our best guesses, but Election Day is unlike any other day, so it was hard to nail down one number. To be safe, we decided to keep it lean and reduce operations on large datasets as much as possible. That is why we ended up partitioning the data into 24 tables, or buckets as they are called in Kinto. With the partitioning, we could be sure that even if we saw a lot of traffic during peak morning hours, it wouldn't adversely affect performance throughout the day.
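A sketch of what that hour-based partitioning can look like. The bucket-naming scheme here is invented, not the project's actual one:

```javascript
// Route each submission to one of 24 hourly buckets so that no single
// collection has to absorb the whole day's traffic.
function bucketForSubmission(date) {
  // Bucket on the hour of submission, 0-23, assuming timestamps are
  // already normalized to a single time zone.
  const hour = date.getUTCHours();
  return 'emotions-hour-' + String(hour).padStart(2, '0');
}
```

Queries for a given interval then only ever touch one small bucket, which keeps both writes and the counting jobs cheap.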

Soo: We based the numbers on how many submissions we got for the Midwest/South maps — which were a lot. Like… the first hour was maybe 2,000 to 4,000 submissions or something. We didn’t end up getting enough submissions for this particular project to justify that, but it was good infrastructure practice for the future.

Kavya: Yes. The traffic was much lower than for the map projects you had done. We could have handled all the data for this project in one bucket. Does that sound like I am boasting? But it is always good to have a plan for high traffic.

There were some other takeaways from this project that can be applied elsewhere. For instance, we may have solved the problem of getting counts from datasets. Though Kinto doesn't have an endpoint that returns a record count, it does return that number in the response headers of a HEAD request.

So I wrote a function that makes only a HEAD request filtered down to each candidate and emotion, a lightweight counter. Even then, we had to run nearly 800 requests every two minutes to get the count for each emotion and candidate. The request overhead was too much even when running asynchronously. So I added batch request support to the Ruby client, something the official clients already support but that I had not added to ours. We could now batch 100 operations into one request. So some very reusable work went into this.
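The chunking itself is simple. A sketch, with made-up candidate and emotion lists and an illustrative collection path; Kinto's batch endpoint accepts a body with a `requests` array of sub-requests:

```javascript
// Build one filtered count request per candidate/emotion pair.
// Names and paths here are illustrative, not the project's actual data.
const CANDIDATES = ['clinton', 'trump', 'johnson', 'stein'];
const EMOTIONS = ['anxious', 'afraid', 'hopeful']; // ...the real list was much longer

function countRequests(bucket) {
  const reqs = [];
  for (const c of CANDIDATES) {
    for (const e of EMOTIONS) {
      reqs.push({
        method: 'GET',
        path: `/buckets/${bucket}/collections/emotions/records?candidate=${c}&emotion=${e}&_limit=1`,
      });
    }
  }
  return reqs;
}

// Chunk the sub-requests into batch payloads of up to 100 operations,
// each of which becomes a single POST to the batch endpoint.
function toBatches(requests, size = 100) {
  const batches = [];
  for (let i = 0; i < requests.length; i += size) {
    batches.push({ requests: requests.slice(i, i + size) });
  }
  return batches;
}
```

With ~800 pairs, this turns ~800 round trips into eight, which is the difference that made the two-minute schedule feasible.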

Soo: Oh, let’s talk about our other Kinto failure.

At some point we realized that we had not checked support for Internet Explorer at all. I sort of think it's shocking that even IE 10+ can't handle a standard object method (Object.assign) or Promises. But in any case, this wasn't something the Kinto docs flagged.

Kavya: Yeah. I was a bit in denial too when we learned that Kinto's JavaScript client wouldn't work on IE. It happens often; IE is significantly different from other browsers, with its own JS engine (Chakra) and layout engine (Trident). But I was still surprised when it didn't work out of the box. After we added those polyfills to the project, I went ahead and updated some documentation in the repository. That is the great part of using open source: you take and you give back.
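For reference, a minimal Object.assign shim of the kind IE needs might look like this. It is written in modern syntax here for readability (a real IE shim would need to be ES5), and it is a sketch rather than the specific polyfill the team used:

```javascript
// Minimal Object.assign shim: copy own enumerable properties from each
// source object onto the target, left to right, and return the target.
function assignShim(target, ...sources) {
  if (target == null) {
    throw new TypeError('Cannot convert undefined or null to object');
  }
  const to = Object(target);
  for (const source of sources) {
    if (source != null) {
      for (const key in source) {
        if (Object.prototype.hasOwnProperty.call(source, key)) {
          to[key] = source[key];
        }
      }
    }
  }
  return to;
}

// Install only where the native method is missing (e.g. IE 11).
if (typeof Object.assign !== 'function') {
  Object.assign = assignShim;
}
```

Promises need a full library polyfill rather than a few lines, which is why published polyfills are the practical route for both.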

How did you come up with the idea for the election feels project?

Soo: My major point of inspiration was an interactive word cloud (hey, it was 2008!) that the New York Times ran when Obama was elected. I really loved it at the time, but it was hard to see a pattern. I did a lot of research on how to visualize and grid emotions, and ended up with the two-dimensional system you see in the final product. It was a little similar to the NYT's Osama bin Laden response piece, but I swear it wasn't intended to be! Four-quadrant graphs seem like a perfectly fine way to capture 2D space.

Animating the emotion tracker’s display

Kavya: How did you end up handling so many data points? Was it just good old d3 force layout?

Soo: I used d3 v4's new forceSimulation! It's excellent and super intuitive. I also used web workers to pre-calculate points and cached final positions in localStorage so your browser wouldn't have to recalculate them every time you returned to the page. In retrospect, I probably didn't need to do this, but it was intended to stay performant for up to 10 times more submissions.

Kavya: It was fascinating watching the most common emotion go from "anxious" to "afraid."

An animated image of the emotion tracker switching from "anxious" to "afraid."
Our emotion tracker switched from "anxious" to "afraid" between 9 and 10 pm ET.

Soo: Right between 9 and 10 p.m. on the East Coast. (Interestingly enough, traffic to the custom electoral map spiked again around that hour. It's hard to know why, but I think people were using the map and electoral vote calculators to figure out whether Clinton still had a shot.)

Kavya: I wonder if people picked the first emotion that kind of fit. Both the trending emotions were A words.

Soo: People told me they scrolled through the entire list and really thought about it, and "anxious" was the most accurate. I also wonder if the English language front-loads a lot of important words in the alphabet. I think it is a valid criticism that people chose A words because they were at the top of the list. But I also wonder, culturally, if we prefer the A words. Around 5 a.m., when we were only getting a few dozen submissions, the most popular emotion changed again, to "agony," another A word.

One thing I do want to add is why we made this. There’s a certain element of presidential elections where people have a lot of feelings and want to share them with others and be able to be present with others, too. There’s nothing really left to do anymore after you vote. So you wait. I wanted to provide a space for that on the web, while acknowledging that this was a very long and intense election cycle.