Introduction

This field guide introduces remote sensing as an ethnographic companion rather than a technical discipline. It is written for anthropologists, artists, and social researchers interested in studying the Anthropocene but unfamiliar with satellite imagery or geospatial analysis. Rather than offering step-by-step instructions to become remote-sensing specialists, it proposes ways of thinking with pixels, clouds, and spectral indices—treating them as materials for observation and interpretation. In this sense, satellites become ethnographic interlocutors: they register residues of extraction, circulation, and care, yet demand contextual grounding to become meaningful. The guide invites ethnographers to approach remote sensing as a mode of attention—one that extends fieldwork beyond the visible, tracing how human and more-than-human relations leave marks on the surface of the Earth.

At its core, this manual reframes remote sensing as a specimenographic practice—a way of curating fragments of planetary change rather than extracting data from above. By combining multispectral imagery with situated accounts, the field guide demonstrates how distant observation can be folded into grounded ethnography. Each workflow, tool, and visual index described here is meant to be interpreted, annotated, and narrated—never taken as self-evident truth. The aim is not mastery but translation: to turn satellite traces into ethnographic fragments that reveal the uneven textures of the Anthropocene.

Framing: Why Pixels Matter to Specimenography

Remote sensing is more than remote measurement: it is a way of making worlds visible, and thus of making them contestable. The notion of Critical Remote Sensing (CRS) emphasises that sensors, platforms, algorithms, and derived indices embed power, politics, and particular world-views (Bennett et al. 2022). This framing aligns with your specimenographic project: you are not simply mapping change; you are investigating how fragments become commodities, waste, health products, or ritual artefacts in the Anthropocene.

Environmental-justice scholarship argues that satellites can both reproduce extractive abstractions and document environmental harm when grounded in community knowledge (Segarra 2024). By pairing satellite change-metrics with ethnographic vignettes and specimen labels, you can build a counter-cartography: co-framed phenomena, co-interpreted imagery, and carefully released layers.

For praxis, look to the methods of Forensic Architecture: stacking multiple sensors, aligning them with witness testimony and archival data, and publishing “thick” spatial narratives rather than bare classifications (Forensic Architecture n.d.).

In short: you are mapping fragility, afterlives, and residues—and you are doing so in ways that honour multiple ontologies.

Sensor Stack You Can Actually Run

Core, no-cost satellite mix (global coverage):

  • Sentinel-2 (optical, 10 m): the workhorse for NDVI, NBR, and MNDWI in the starter script.
  • Sentinel-1 (SAR, VV/VH): all-weather radar; coherence changes flag pits and structures.
  • Landsat 8/9 (optical + thermal, 30 m): the usual source for Land Surface Temperature (LST).
  • VIIRS (night-time lights): energetic signatures of urban and industrial metabolism.

These can all be processed via a cloud platform such as Google Earth Engine (GEE). Recent bibliometric studies show GEE’s accelerated uptake and key role in geospatial analysis (Adami et al. 2023; Velastegui-Montoya et al. 2025).

Optional high-resolution (paid/restricted): for example, PlanetScope (3–5 m) or Vantor, formerly Maxar (sub-meter). Suitable for fine feature mapping, but budget- and logistics-intensive.

Indices and Patterns to Trace Anthropocene Residues

Indices are simple mathematical combinations of satellite image bands—small equations that highlight specific elements of the Earth’s surface, such as vegetation, water, soil, or heat. They translate invisible spectral information into intuitive visual contrasts. For instance, the Normalized Difference Vegetation Index (NDVI) shows where plants are thriving or stressed; the Modified Normalized Difference Water Index (MNDWI) reveals moisture and flooded areas; and the Normalized Burn Ratio (NBR) detects disturbances like fires, mining, or construction. For ethnographers, these indices can be read as traces rather than measurements—marks of how human and more-than-human activities transform the land over time. They allow you to see how residues accumulate, scars expand, or ecosystems attempt to recover. You don’t need to become a data scientist to use them; each index acts as a visual narrative, a kind of satellite fieldnote that can be juxtaposed with stories, testimonies, or on-the-ground observations.
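Each index is the same arithmetic template, a normalized difference of two bands. The sketch below (plain JavaScript, runnable outside Earth Engine) applies that template to illustrative Sentinel-2 reflectance values; the numbers are invented for demonstration, not real data:

```javascript
// The normalized-difference template shared by NDVI, NBR, and MNDWI:
// (a - b) / (a + b), always falling between -1 and +1.
function normalizedDifference(a, b) {
  return (a - b) / (a + b);
}

// Illustrative Sentinel-2 band reflectances for a healthy vegetated pixel
// (invented values for demonstration only).
var pixel = { B3: 0.10, B4: 0.05, B8: 0.40, B11: 0.20, B12: 0.15 };

var ndvi  = normalizedDifference(pixel.B8, pixel.B4);  // vegetation vigour
var nbr   = normalizedDifference(pixel.B8, pixel.B12); // disturbance/burn signal
var mndwi = normalizedDifference(pixel.B3, pixel.B11); // open water / moisture

// High NDVI, moderate NBR, negative MNDWI: green, unburned, dry ground.
console.log(ndvi.toFixed(2), nbr.toFixed(2), mndwi.toFixed(2)); // prints "0.78 0.45 -0.33"
```

Reading these three numbers side by side is already a small interpretive act: the same pixel can be simultaneously "vegetated" and "dry", and the story only settles once you know the place.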

  • Disturbance & scarring: e.g., ΔNBR (difference in NBR before/after) to detect industrial scars, tailings, burned soils.
  • Water / tailings / draw-down: e.g., MNDWI or ΔMNDWI to detect emergence, retreat or contamination ponds.
  • Heat & exposure: Land Surface Temperature (LST; e.g., from Landsat) + NDVI drop = “green deserts” in plantations or mono-cultures.
  • Urban/industrial metabolism: VIIRS night-lights temporal trends to capture energetic signatures; SAR coherence drops around pits and structures.
  • Vegetation seasonality & fragmentation: NDVI/EVI time-series to reveal plantation rhythms, labour/migration cycles.
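The disturbance bullet can be made concrete with a small classifier. The severity bins below follow the commonly cited USGS ΔNBR burn-severity ranges, assumed here as a starting point; recalibrate them against ground truth before using them in a plate. Note the sign convention: USGS defines ΔNBR as pre-event minus post-event NBR, whereas the starter script in the next section computes the later date minus the earlier one, so flip the sign before classifying.

```javascript
// Classify ΔNBR (pre-event NBR minus post-event NBR) into severity bins.
// Thresholds are the commonly cited USGS ranges; treat them as provisional.
function classifyDnbr(dnbr) {
  if (dnbr < -0.10) return 'regrowth';
  if (dnbr <  0.10) return 'unburned/stable';
  if (dnbr <  0.27) return 'low severity';
  if (dnbr <  0.44) return 'moderate-low severity';
  if (dnbr <  0.66) return 'moderate-high severity';
  return 'high severity';
}

console.log(classifyDnbr(0.05)); // prints "unburned/stable"
console.log(classifyDnbr(0.70)); // prints "high severity"
```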

Minimal Workflow (GEE Starter)

The “minimal workflow” section offers a gentle entry point for working with remote sensing data using Google Earth Engine (GEE), a free, browser-based platform. You don’t have to install heavy software or write complex code—the example script included in this guide is designed to show how a few lines can produce meaningful images of change. Think of it as a digital notebook: you define your area of interest, choose the time span, and let the platform reveal how landscapes have shifted. Once these images appear, the real ethnographic work begins—interpreting what the color changes might mean, comparing them with local accounts, and situating them within broader ecological and political processes. The goal is not technical precision but exploratory seeing: to experiment with a new visual scale of fieldwork and to understand how satellites can complement rather than replace embodied, situated observation.

// Define the area of interest (AOI), e.g., Cerrejón; supply your own coordinates
var aoi = ee.Geometry.Polygon([/* [lon, lat] coordinate pairs */]);

// Sentinel-2, add key indices
var s2 = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')
  .filterBounds(aoi)
  .filterDate('2016-01-01','2025-11-01')
  .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE',20))
  .map(function(img){
    var ndvi   = img.normalizedDifference(['B8','B4']).rename('NDVI');
    var nbr    = img.normalizedDifference(['B8','B12']).rename('NBR');
    var mndwi  = img.normalizedDifference(['B3','B11']).rename('MNDWI');
    return img.addBands([ndvi,nbr,mndwi]);
  });

var median2016 = s2.filterDate('2016-01-01','2016-12-31').median().select(['NDVI','NBR','MNDWI']);
var median2025 = s2.filterDate('2025-01-01','2025-11-01').median().select(['NDVI','NBR','MNDWI']);
var delta = median2025.subtract(median2016).rename(['dNDVI','dNBR','dMNDWI']);

Map.centerObject(aoi, 11);
// Diverging palettes: losses/scars at the negative end, gains/recovery at the positive end
Map.addLayer(delta.select('dNBR'),
             {min:-0.5, max:0.5, palette:['red','white','green']}, 'ΔNBR');
Map.addLayer(delta.select('dMNDWI'),
             {min:-0.5, max:0.5, palette:['brown','white','blue']}, 'ΔMNDWI');

“Thick Map” Plate Template

The “thick map” is not a technical diagram but an ethnographic composition—a way of bringing together satellite imagery, field observations, and narrative reflection into a single, layered artifact. Inspired by Clifford Geertz’s notion of “thick description,” the thick map transforms geospatial data into a situated story. Rather than isolating numbers or pixels, it pairs them with sensory, historical, and social detail: how a smell lingers near a tailings pond, how a community remembers the expansion of a plantation, how a color shift on the map corresponds to a lived experience of drought or flood. Each layer—satellite index, timeline, vignette, testimony—adds depth and relational context. The aim is to produce a map that feels alive, textured, and uncertain, showing both what is visible from above and what is known from below.

For ethnographers new to visual or digital methods, the thick map offers an approachable structure. Think of it as a hybrid between a fieldnote, a diagram, and a small exhibition. The left side (the map) provides the visual evidence; the right side (the vignette) situates that evidence in lived reality; the bottom (the evidence stack) records events, local data, or quotes that support or complicate what the satellite sees. You don’t need to master cartography to make one—only to compose thoughtfully, combining fragments of different kinds of knowledge. Each map becomes a specimen in your broader cabinet of Anthropocene traces: a crafted intersection of scale, story, and care.

For each site to explore:

A. Cartographic panel (left): composite of ΔNBR or ΔMNDWI + latest SAR VV/VH layer. Include a specimen label: sensor name, dates, index used, threshold notes.

B. Narrative panel (right): ~250-word vignette capturing scene, labour, smell, sound, residue. Then include 2-3 phantom footnotes (in smaller type) to register ambiguity/disagreement (e.g., “vendor insists smell peaked in Sep but ΔMNDWI rose in Dec”).

C. Evidence stack (bottom):
– Timeline bar 2010–2025 marking events (spill, protest, court decision).
– Local metrics (e.g., fish kills, clinic visits, crop failure).
– Confidence card: model assumptions, accuracy (%), limitations, community review date.
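The accuracy figure on the confidence card can be derived from a confusion matrix that compares satellite-derived classes against field observations. A minimal sketch in plain JavaScript, with invented counts for illustration:

```javascript
// Overall accuracy from a confusion matrix:
// rows = ground truth, columns = predicted class; the diagonal counts agreement.
function overallAccuracy(matrix) {
  var correct = 0, total = 0;
  for (var i = 0; i < matrix.length; i++) {
    for (var j = 0; j < matrix[i].length; j++) {
      total += matrix[i][j];
      if (i === j) correct += matrix[i][j];
    }
  }
  return (100 * correct) / total;
}

// Invented example: 50 field points checked against a 'disturbed' map layer.
var confusion = [
  [20, 5], // truly disturbed: 20 detected, 5 missed
  [3, 22]  // truly undisturbed: 3 false alarms, 22 correct
];
console.log(overallAccuracy(confusion).toFixed(0) + '%'); // prints "84%"
```

A single percentage hides which errors matter to whom; the confidence card should also say, in words, whether the misses fall on the community's side of the ledger.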

Ethics, Politics & Care

  • Aerial/spaceborne panopticism: The “satellite gaze” raises issues of invisibilization and exposure – who is visible and who is hidden (Bennett et al. 2022).
  • Community consent & data sovereignty: Before mapping begins, co-decide what you will show/hide, aggregate sensitive points, and review layers with the community (Fisher et al. 2021).
  • Accuracy vs meaning: Higher spatial/temporal resolution does not guarantee deeper ethnographic meaning; always embed in context (Segarra 2024).
  • Automated detection risks: Using ML/DL introduces interpretability and accountability issues—especially when remote sensing can map “hidden infrastructure” not known to communities.

Minimal release policy:
– Public: the plates, downsampled rasters, narrative, methods.
– Restricted: AOI exact polygons, raw data exports, raw classifier weights/code.

State-of-the-Art Tools, Platforms & Methods (2020 onward)

  • TerrSet 2020 (Clark Labs): Desktop GIS + remote sensing modelling suite for land-change and scenario simulation; released June 2020. Useful for advanced modelling of speculative landscapes.
  • ILWIS 3.8.6 (Open Source): Free raster/vector/time-series environment; version released January 2020; accessible for community-collaborative workflows.
  • Google Earth Engine (GEE) + AI/ML integrations: Cloud-native multi-sensor analysis—recent reviews highlight its centrality in geo-big data and remote sensing (Adami et al. 2023; Velastegui-Montoya et al. 2025; Kumar & Mutanga 2022).
  • Deep-Learning / AI Platforms for EO (2020-25): Convolutional neural networks (CNN), U-Net, Random Forests (RF) are now common for classification, segmentation, anomaly detection in RS (Kumar & Mutanga 2022).
  • Multi-Sensor Fusion Tools (2022-24+): Workflows increasingly combine optical + SAR + LiDAR + UAS (Unmanned Aircraft Systems) to map buried/erased features and industrial-landscape palimpsests (Zhang et al. 2024). These tools free you to move beyond simple index-detection to layered modelling of “specimen-objects” and speculative ontologies.

Optional Add-Ons

  • Urban biodiversity layer (for city plates): Use NDVI variance + fragmentation metrics to highlight Anthropocene ecologies in cities (Finizio et al. 2024).
  • Public engagement: Create short timelapse animations (via GEE) of your sites to use as web-header visuals; pair with caution about techno-sublime.

References

Adami, Marcos, D. Velastegui-Montoya, et al. 2023. “Google Earth Engine: A Global Analysis and Future Trends.” Remote Sensing 15 (14): 3675. https://doi.org/10.3390/rs15143675.

Bennett, Mia M., Janice Kai Chen, Luis F. Álvarez León, and Colin J. Gleason. 2022. “The Politics of Pixels: A Review and Agenda for Critical Remote Sensing.” Progress in Human Geography 46 (3): 729–52. https://doi.org/10.1177/03091325221074691.

Finizio, Max, Andrea Nascetti, Davide Paparella, Gianmarco Parente, and Fabio Del Frate. 2024. “Remote Sensing for Urban Biodiversity: A Review and Meta-Analysis.” Remote Sensing 16 (23): 4483. https://doi.org/10.3390/rs16234483.

Fisher, Michael, Michael G. Fradley, Philippa Flohr, Bijan Rouhani, and Francesca Simi. 2021. “Ethical Considerations for Remote Sensing and Open Data in Relation to the Endangered Archaeology in the Middle East and North Africa Project.” Archaeological Prospection 28 (3): 279–92. https://doi.org/10.1002/arp.1816.

Kumar, Laxmi, and Onisimo Mutanga. 2022. “Google Earth Engine and Artificial Intelligence (AI).” Remote Sensing 14 (14): 3253. https://doi.org/10.3390/rs14143253.

Segarra, Joel. 2024. “The Role of Critical Remote Sensing in Environmental Justice Struggles.” Progress in Environmental Geography 3 (3): 185–211.

Velastegui-Montoya, Diego, Marcos Adami, Alzira T. de Oliveira, and Luiz E. O. C. Aragão. 2025. “Remote Sensing Trends and Challenges in the Anthropocene: Integrating Cloud-Based Platforms and Multisensor Data.” ISPRS Journal of Photogrammetry and Remote Sensing 210: 150–68. https://doi.org/10.1016/j.isprsjprs.2025.02.003.

Wikipedia contributors. 2024. “TerrSet.” Wikipedia, The Free Encyclopedia. June 2024. https://en.wikipedia.org/wiki/TerrSet.

Wikipedia contributors. 2024. “ILWIS.” Wikipedia, The Free Encyclopedia. May 2024. https://en.wikipedia.org/wiki/ILWIS.

Zhang, Yufei, Hao Chen, and Wen Zhao. 2024. “Multi-Sensor Data Fusion for Environmental Monitoring: Recent Advances and Future Directions.” ISPRS Journal of Photogrammetry and Remote Sensing 204: 180–203. https://doi.org/10.1016/j.isprsjprs.2024.01.007.

