Urban Morphology Meets Big Data

For a more complete report on this project, please go here


The availability of massive urban data collections such as OpenStreetMap on the one hand, and recent advances in machine learning methods on the other, open up new approaches to investigating urban patterns at a global scale. In this work, by collecting a large data set of street networks and building patterns from more than 65 thousand cities around the world, and using recently developed deep neural networks, we implemented a model that automatically learns a hierarchical representation of city patterns. As an initial result, we show how the model, trained on fine-grained street networks, reveals a visual landscape of city patterns with varying characteristics, such as density, size and spatial layout, corresponding to many latent aspects such as historical, geological, political and economic factors. Since the model indexes a huge number of real cities based on their similarities, the final outcome of this work can conceptually be developed into a search engine of cities: depending on a specific goal (e.g. a certain question about a specific city), one can find similar developments elsewhere in the world, which can open up further discussions and guide the investigation process without imposing a particular theoretical framework. This explorative approach could potentially invert the investigation process of classical urban studies, which are usually based on a priori frames of analysis.

Techniques, Methods and Results

Using a styled map from Mapbox Studio that shows only the road network, together with the Mapbox Static API, we collected images of the road networks of more than 65K cities across the world.
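A minimal sketch of this collection step, building Static API request URLs for a custom style. The username, style ID and token below are placeholders, not the project's actual values:

```python
# Mapbox Static API URL template for a custom Mapbox Studio style.
# user / style_id / token are hypothetical placeholders.
STYLE_URL = ("https://api.mapbox.com/styles/v1/{user}/{style_id}/static/"
             "{lon},{lat},{zoom}/{size}x{size}?access_token={token}")

def static_map_url(lon, lat, zoom=12, size=512,
                   user="YOUR_USERNAME", style_id="YOUR_STYLE_ID",
                   token="YOUR_TOKEN"):
    """Build a Static API request URL for one city center."""
    return STYLE_URL.format(user=user, style_id=style_id,
                            lon=lon, lat=lat, zoom=zoom,
                            size=size, token=token)

# Example: a request URL for Berlin (13.40 E, 52.52 N)
url = static_map_url(13.40, 52.52)
```

Fetching the actual image is then one HTTP GET per city (e.g. with `requests`), saved to disk for training.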

Using the collected images of the city networks, we trained a Convolutional Autoencoder (CAE). As in other autoencoders, the middle layer can be seen as a dense representation of the input data; these dense vectors can later be used to map similar data points (here, cities) next to each other.
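A minimal sketch of such a CAE in PyTorch, assuming 64×64 grayscale road-network images and the 640-dimensional bottleneck mentioned below (the actual architecture and input size used in the project are not specified here):

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    """Convolutional autoencoder: the 640-d bottleneck is the city encoding."""
    def __init__(self, code_dim=640):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)          # dense city representation
        return code, self.decoder(code)  # reconstruction for the training loss
```

Such a model would typically be trained with a pixel-wise reconstruction loss (e.g. MSE) between the input map image and the decoder output; after training, only the encoder output is kept.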

We then use the learned vectors (here, 640 dimensions) in a k-NN framework to find cities similar to the one selected by the user.
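The k-NN lookup itself can be sketched with scikit-learn; the random 640-d vectors here stand in for the real CAE encodings of the 65K cities:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Stand-in data: one 640-d CAE encoding per city (random for illustration).
rng = np.random.default_rng(0)
codes = rng.normal(size=(1000, 640))           # 65K rows in the real project
city_names = [f"city_{i}" for i in range(1000)]  # hypothetical labels

knn = NearestNeighbors(n_neighbors=6).fit(codes)

def similar_cities(city_idx, k=5):
    """Return the k cities whose encodings are closest to the query city."""
    _, idx = knn.kneighbors(codes[city_idx:city_idx + 1], n_neighbors=k + 1)
    # drop the query city itself, which is always its own nearest neighbor
    return [city_names[j] for j in idx[0] if j != city_idx][:k]
```

Because the index is built once over all encodings, each user query reduces to a single nearest-neighbor lookup.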


More images can be found here

Below is the link to an interactive demo: selecting a city maps other, similar cities next to it.


Further, in order to create a two-dimensional visualization of all 65K cities, we trained a Self-Organizing Map (SOM) on the encoding vectors of the trained CAE. The SOM assigns a two-dimensional index to each data point in such a way that similar data points get similar indexes; as a result, we obtain a spectrum of city maps. Below is a visualization of around 25K cities using the SOM algorithm.
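A compact NumPy sketch of such a SOM (grid size, learning-rate and radius schedules are illustrative choices, not the project's settings):

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr=0.5, seed=0):
    """Minimal SOM: returns unit weights and their 2-D grid coordinates."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, data.shape[1]))
    # grid coordinates of each unit, used by the neighborhood function
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for epoch in range(epochs):
        sigma = max(1.0, max(h, w) / 2 * (1 - epoch / epochs))  # shrinking radius
        alpha = lr * (1 - epoch / epochs)                       # decaying rate
        for x in rng.permutation(data):
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            g = np.exp(-d2 / (2 * sigma ** 2))                  # Gaussian neighborhood
            weights += alpha * g[:, None] * (x - weights)
    return weights, coords

def som_index(x, weights, coords):
    """The 2-D grid index assigned to one encoding vector."""
    return tuple(coords[np.argmin(((weights - x) ** 2).sum(axis=1))].astype(int))
```

For the visualization, each city's map image is then placed at its `som_index` cell, so neighboring cells hold visually similar cities.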

Download the high-resolution version, 78010 × 21850 pixels (attention: 586 MB)

Download the lower-resolution version (40 MB)

GitHub Repository

