Before I talk about anything, you should check out the actual application by visiting weather.kyle.in.

A Better Introduction

I worry a little too much about the weather report. I am always asking myself “is it going to snow today? Should I bring an umbrella?” And for that reason, I almost always have a weather report open in Chrome. I thought it would be a cool idea to feed my weather report addiction by making a nice little website and possibly a console-style appliance so I could just look over my shoulder and see what the weather is like.

This idea is simple. Make a little web app and put it on a wall.

Failed Iterations

I started out with a 3-square design that had a few extra features.

I used a simple NodeJS backend with a jQuery-based (ewww) frontend. It was horrible. I did not like the design or any of the code that went into it, so it was scrapped. That version was made so long ago that I no longer even have the source code, so sadly I can't post a picture of it.

Take 2

Everything got a complete overhaul: a TypeScript NodeJS backend with a React + TypeScript frontend. This version was actually worthy of a picture. Ignore the redness:

This version too was not really “it” for me. It was too busy. So yet another redesign took place.

Take 3, The Final Take

This time my friend joshbpls took to designing the frontend. He has a much better artistic sense than I do and it shows:

The Details

This is a pretty simple application. Or at least it is today. Services like Netlify and AWS Lambda make this an almost laughably trivial application.

All of the backend code is a single Lambda endpoint (one file) that gets deployed alongside the application.
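The post doesn't show the endpoint itself, so here is a hedged sketch of what a single-file, Netlify-style function for this app might look like. The Dark Sky URL shape is the real documented one; the handler name, event shape, and `DARKSKY_KEY` environment variable are assumptions for illustration.

```typescript
// Minimal sketch of a one-file forecast endpoint (names are hypothetical).
declare const process: { env: Record<string, string | undefined> };
declare function fetch(url: string): Promise<{ text(): Promise<string> }>;

const DARKSKY_BASE = "https://api.darksky.net/forecast";

// Build the Dark Sky request URL for a pair of geographic coordinates.
function buildForecastUrl(apiKey: string, lat: number, lon: number): string {
  return `${DARKSKY_BASE}/${apiKey}/${lat},${lon}`;
}

// Simplified serverless event: only the query parameters we care about.
interface HandlerEvent {
  queryStringParameters: { lat?: string; lon?: string };
}

// The single endpoint: validate coordinates, proxy Dark Sky's response.
async function handler(event: HandlerEvent) {
  const { lat, lon } = event.queryStringParameters;
  if (!lat || !lon) {
    return { statusCode: 400, body: "lat and lon are required" };
  }
  const url = buildForecastUrl(process.env.DARKSKY_KEY ?? "", Number(lat), Number(lon));
  const res = await fetch(url);
  return { statusCode: 200, body: await res.text() };
}
```

Keeping the key server-side like this is the whole point of the endpoint: the frontend only ever sees coordinates and forecasts, never the API key.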

The frontend is a simple React TS application. Nothing too special about that.

The more interesting parts of this app are the smaller details. For instance, all of the icons in the top right respond to the current weather conditions: the wind blows if it's sufficiently windy outside (or turns into a tornado if it's really windy), the compass points in the direction of the wind, the thermometer level rises and falls and changes color with the current temperature, and the cloud will either rain or snow depending on the precipitation level and type.
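The icon logic boils down to mapping a few numeric conditions onto icon states. The post doesn't give exact cutoffs, so the thresholds and names below are purely illustrative:

```typescript
// Hypothetical icon-state mappings; the wind-speed cutoffs are made up
// for illustration, not taken from the actual app.
type WindIcon = "still" | "blowing" | "tornado";

function windIcon(speedMph: number): WindIcon {
  if (speedMph >= 40) return "tornado"; // really windy
  if (speedMph >= 10) return "blowing"; // sufficiently windy
  return "still";
}

type PrecipIcon = "cloud" | "rain" | "snow";

// Show a plain cloud unless there is measurable precipitation of a known type.
function precipIcon(intensity: number, type?: "rain" | "snow"): PrecipIcon {
  if (intensity <= 0 || !type) return "cloud";
  return type;
}
```

In the React frontend, functions like these would feed straight into which SVG (or which animation state) each icon component renders.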

Protecting From Attackers

This application has a single endpoint that queries Dark Sky for a weather report given geographic coordinates. If someone had strong feelings against weather reports, they might try to spam that endpoint with requests, which could end up costing me $1 per 10,000 requests. This is highly unlikely, but I still added some simple protections just in case.

Firstly, the Dark Sky account/API key is linked to a disposable debit card, which automatically declines if a bill exceeds a specified limit. I personally use Privacy, but there are many other services just like it. This is a poor way to protect against attacks, as it does nothing to stop them; it only saves me if a bill gets out of hand.

A more useful protection is a cold-start rate limiter. This exploits the way the Lambda function is executed in order to add persistence to the task. Traditionally this would be some sort of Redis key-value store holding the source IP and the last request time, but that adds a whole new service into the mix and greatly diminishes the simplicity of the backend.
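The trick is that module-scope state in a Lambda survives across warm invocations of the same instance and only resets on a cold start. A minimal sketch of the idea (not the actual lambda-rate-limiter API; all names and numbers here are illustrative):

```typescript
// In-memory, per-instance rate limiter. The Map lives at module scope,
// so it persists across warm invocations of one Lambda instance and is
// wiped whenever a new instance cold-starts.
const WINDOW_MS = 60_000; // rolling window length (assumed)
const MAX_HITS = 10;      // allowed requests per IP per window (assumed)

const hits = new Map<string, number[]>(); // source IP -> request timestamps

function allowRequest(ip: string, now: number = Date.now()): boolean {
  // Keep only the timestamps still inside the rolling window.
  const recent = (hits.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_HITS) {
    hits.set(ip, recent);
    return false; // over the limit: reject without ever calling Dark Sky
  }
  recent.push(now);
  hits.set(ip, recent);
  return true;
}
```

The limits are approximate by design: a new instance forgets everything. But as the lambda-rate-limiter authors note below, providers tend to route a client back to the same warm instance, so in practice the counts hold up well enough to blunt a spam attack.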

I used lambda-rate-limiter for this. I think their description of how it works and the philosophy behind it is a bit better than my own:

Using serverless computing is usually cheap, especially for low volume. You only pay for usage. Adding a centralized rate limiter storage option like Redis adds a significant amount of cost and overhead. This cost increases drastically and the centralized storage eventually becomes the bottleneck when DOS attacks need to be prevented.

This module keeps all limits in-memory, which is much better for DOS prevention. The only downside: Limits are not shared between function instantiations. This means limits can reset arbitrarily when new instances get spawned (e.g. after a deploy) or different instances are used to serve requests. However, cloud providers will usually serve clients with the same instance if possible, since cached data is most likely to reside on these instances. This is great since we can assume that in most cases the instance for a client does not change and hence the rate limit information is not lost.

Since the cost per request is relatively low, this solution was the obvious choice.


Here’s a link to the GitHub repo for this project!