Human Geo Annotations

Human-reviewed geospatial labels for difficult imagery

When generic labelers flatten ambiguity, we keep the human review loop visible: edge-case notes, correction decisions, QA flags, and spatial reasoning your model team can audit.

Geospatial Solutions LLC · Washington, DC · Operating since 2018 · 35+ clients
Ambiguity handled explicitly · Correction loops · Taxonomy discipline

Reviewer decisions connected to map, source image, and QA status

Proof from the feature extraction review workspace: map, photo, record, and QA decision stay synchronized.
Buyer fit · Search intent · review workspace
How we keep the first step easy

Three commitments that come standard

01

See the work before you contract

Send 25-50 representative frames. We label them at our cost and return the output with a per-class QA scorecard. You decide whether to scope a pilot after you have seen the labels, not before.

02

Per object or per hour, your call

Bill per labeled object when scope and volume are predictable. Bill per labeling hour when the workflow is exploratory or the schema is still firming up. Both models are on the table from the first scoping call.

03

Your labeling platform, our labor

We operate in CVAT, Labelbox, Roboflow, V7, Scale AI workflows, and most in-house labeling stacks. No platform migration on your end. If you have a custom tool, we learn it on the pilot.

The status quo

Where generic annotation services fall short

What we deliver


98% F1 target

On infrastructure asset classes, validated per delivery

01

Road Infrastructure Labeling

Pavement, striping, lanes, boundaries, and surface condition labels — tied to real geography with QA trails.

02

Asset Geolocation

Signs, signals, poles, utilities, streetlights — bounding boxes, segmentation masks, and point labels with coordinate accuracy.

03

Imagery Workflows

Roadway, street-level, and LiDAR imagery converted into QA-reviewed features your mapping/AI/asset teams can use immediately.

04

GIS-Aware QA

Spatial validation, coordinate-accuracy checks, and asset classification QA against authoritative GIS databases (see the coordinate-check sketch after this list).

05

Schema-Ready Exports

Deliveries in QGIS, ArcGIS, GeoJSON, COCO, KITTI, Mapillary — whatever your pipeline ingests.
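
For the coordinate-accuracy checks named under GIS-Aware QA, here is a minimal sketch of the idea, assuming WGS84 point labels and an illustrative 0.5 m tolerance; the record fields and helper name are hypothetical, not our production QA code.

python
import math

def haversine_m(lon1, lat1, lon2, lat2):
    """Great-circle distance in meters between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical pair: a labeled point and the authoritative GIS record it should match.
labeled = {"class": "fire_hydrant", "lon": -77.03641, "lat": 38.89512}
gis_record = {"asset_id": "HYD-2211", "lon": -77.03644, "lat": 38.89510}

TOLERANCE_M = 0.5  # illustrative acceptance threshold, set per contract

error_m = haversine_m(labeled["lon"], labeled["lat"], gis_record["lon"], gis_record["lat"])
qa_flag = "approved" if error_m <= TOLERANCE_M else "needs_review"
print(f"{labeled['class']} vs {gis_record['asset_id']}: {error_m:.2f} m -> {qa_flag}")

In practice the tolerance, the matching rule, and the reference layer come from the customer's acceptance criteria; the point is that every check leaves a number and a QA flag behind.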

Proof-led positioning

What this page needs to make obvious

Human-in-the-loop geospatial annotation, manual GIS annotation, expert map labeling, and QA-reviewed spatial labels.

01

Ambiguity handled explicitly

Reviewer notes document what was visible, what was uncertain, and why the final label was accepted or corrected.

02

Correction loops

Flagged records move through review, correction, and re-verification before delivery.

03

Taxonomy discipline

Class definitions, geometry rules, and edge-case examples prevent silent drift across labeling batches.
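
As an illustration of what that discipline can look like in machine-checkable form, here is a sketch of a class definition with geometry rules; the class entries, field names, and thresholds are invented for the example, not excerpts from our actual taxonomy.

python
# Hypothetical taxonomy excerpt: every class carries a geometry rule and an edge-case note.
TAXONOMY = {
    "crosswalk": {"geometry": "Polygon", "min_vertices": 4,
                  "edge_case": "Faded continental bars count if at least two bars are visible."},
    "lane_line": {"geometry": "LineString", "min_vertices": 2,
                  "edge_case": "Break lines at intersections; do not bridge through them."},
}

def check_label(label):
    """Return rule violations for one label so drift is caught per batch, not per release."""
    rules = TAXONOMY.get(label["class"])
    if rules is None:
        return [f"unknown class: {label['class']}"]
    issues = []
    if label["geometry_type"] != rules["geometry"]:
        issues.append(f"expected {rules['geometry']}, got {label['geometry_type']}")
    if label["vertex_count"] < rules["min_vertices"]:
        issues.append(f"needs at least {rules['min_vertices']} vertices")
    return issues

# Example: a crosswalk mistakenly drawn as a line is flagged before it ships.
print(check_label({"class": "crosswalk", "geometry_type": "LineString", "vertex_count": 2}))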

Proof workflow

Input, review, evidence, output.

Modeled on the live Geospatial Solutions demos: the page should show what the buyer sends, what they review, what evidence stays visible, and what they receive.

01

Input

Ambiguous imagery, prior label errors, target classes, and examples of disputed edge cases.

02

Review surface

Human reviewers compare source imagery, map context, taxonomy rules, and customer acceptance criteria.

03

Evidence

Each disputed label keeps reviewer notes, confidence, reason codes, and correction status visible.

04

Output

Reviewed labels, edge-case log, QA scorecard, and updated labeling guidance.

Source and limits

Technical trust should stay visible.

Confidence

Human review focuses on the records most likely to degrade model or GIS quality (see the routing sketch after this block).

Caveat

Ambiguous imagery still needs explicit acceptance rules, not silent guessing.

Source

Street-level imagery, aerial scenes, LiDAR context, GIS records, and customer review notes.

QA boundary

Reviewer notes, correction logs, class consistency checks, and re-verification.

Export path

QA-reviewed labels, issue logs, correction reports, and GIS-ready exports.
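
A minimal sketch of the routing idea behind the Confidence note above: pre-labels under an illustrative confidence cutoff, or sitting on an ambiguous class boundary, go to a human reviewer first. All field names and values here are hypothetical.

python
# Hypothetical pre-labels from an automated first pass.
prelabels = [
    {"id": "f_101", "class": "crosswalk", "confidence": 0.98, "near_class_boundary": False},
    {"id": "f_102", "class": "stop_bar",  "confidence": 0.61, "near_class_boundary": True},
    {"id": "f_103", "class": "lane_line", "confidence": 0.88, "near_class_boundary": False},
    {"id": "f_104", "class": "crosswalk", "confidence": 0.55, "near_class_boundary": False},
]

REVIEW_CONFIDENCE = 0.90  # illustrative cutoff: anything below goes to a human

# Route low-confidence or boundary cases to review; weakest records first.
queue = [p for p in prelabels if p["confidence"] < REVIEW_CONFIDENCE or p["near_class_boundary"]]
queue.sort(key=lambda p: p["confidence"])

for p in queue:
    print(f"{p['id']} ({p['class']}): confidence {p['confidence']:.2f} -> human review")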

Before the first call

What you send · What you get

No vague discovery phase. You bring five things; we return a specific plan you can evaluate.

What you send
  1. A representative sample (50-500 frames) from your imagery source
  2. Target feature classes and geometry types (point, line, polygon, mask)
  3. Required output format (GeoJSON, COCO, KITTI, Mapillary, custom)
  4. Approximate volume, deadline, and accuracy requirement
  5. Security or NDA constraints (we sign a mutual NDA up front)
What you get back
  1. Calibration set with QA scores returned in 2-4 business days
  2. Documented edge-case log with our interpretation of every ambiguous class
  3. Schema-locked production scope with per-frame pricing
  4. Inter-annotator agreement report (kappa, F1 by class; see the sketch after this list)
  5. Sample report with feature layer, QA notes, and exports
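
A sketch of how the agreement numbers in that report can be computed with scikit-learn; the annotator labels below are invented, and the production report comes from our QA pipeline rather than this snippet.

python
from sklearn.metrics import cohen_kappa_score, f1_score

# Hypothetical class labels assigned by two annotators to the same ten objects.
annotator_a = ["crosswalk", "stop_bar", "crosswalk", "lane_line", "crosswalk",
               "stop_bar", "lane_line", "lane_line", "crosswalk", "stop_bar"]
annotator_b = ["crosswalk", "stop_bar", "lane_line", "lane_line", "crosswalk",
               "stop_bar", "lane_line", "crosswalk", "crosswalk", "stop_bar"]

# Cohen's kappa: chance-corrected agreement between the two annotators.
kappa = cohen_kappa_score(annotator_a, annotator_b)

# Per-class F1, treating annotator A as the reference and B as the prediction.
classes = ["crosswalk", "lane_line", "stop_bar"]
f1_by_class = f1_score(annotator_a, annotator_b, labels=classes, average=None)

print(f"kappa: {kappa:.2f}")
for cls, score in zip(classes, f1_by_class):
    print(f"F1[{cls}]: {score:.2f}")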
Class library

83 documented asset classes across 4 categories

Every class has a documented definition, edge-case examples, and QA rules calibrated against authoritative GIS databases. Add custom classes during the pilot and we extend the taxonomy.

Road infrastructure
28 classes
  • Pavement markings
  • Striping (single, double, dashed)
  • Crosswalks (all types)
  • Lane lines (direction-aware)
  • Stop bars + yield triangles
  • Road boundaries + shoulders
  • Surface condition cues (cracking, raveling, rutting)
Asset geolocation
34 classes
  • Traffic signs (R-series, W-series, MUTCD-compliant)
  • Traffic signals + pedestrian heads
  • Utility poles (wood, concrete, steel)
  • Streetlights + cobra heads
  • Guardrails + crash cushions
  • Barriers (Jersey, K-rail, temporary)
  • Manholes + catch basins
  • Fire hydrants + valves
Training data extraction
12 classes
  • Object detection bounding boxes
  • Semantic segmentation masks
  • Instance segmentation
  • Polygon classification
  • False-positive cleanup pass
  • False-negative recovery (hard-negative mining)
GIS delivery formats
9 classes
  • GeoJSON (QGIS / ArcGIS native)
  • COCO (training-ready)
  • KITTI (AV-research convention)
  • Mapillary (street-level standard)
  • OpenStreetMap-ready attributes
  • Custom JSON schemas
  • PostGIS direct write
  • Shapefile (legacy support)
Sample deliverable

A single feature, as you would receive it

Every label is a complete GeoJSON feature with geometry, class, confidence, QA trail, and source provenance. Loads directly into your map, your trainer, or your validator — no conversion script.

json
{
  "type": "Feature",
  "geometry": {
    "type": "Polygon",
    "coordinates": [[[ -77.0364, 38.8951 ], ...]]
  },
  "properties": {
    "class": "crosswalk",
    "class_id": "CW_001",
    "mutcd_type": "continental",
    "confidence": 0.97,
    "qa_status": "approved",
    "qa_reviewer": "annotator_03",
    "qa_timestamp": "2024-08-15T14:23:17Z",
    "source_frame": "frame_847.jpg",
    "capture_timestamp": "2024-08-12T11:18:04-04:00",
    "schema_version": "gss-roads-v2.4"
  }
}
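
A minimal sketch of ingesting a delivery like this with plain Python; the file name and confidence floor are illustrative, and the same GeoJSON also opens directly in QGIS or a geopandas GeoDataFrame.

python
import json

# Hypothetical delivery file: a FeatureCollection of labels like the sample above.
with open("gss_delivery_roads_v2_4.geojson") as fh:
    collection = json.load(fh)

# Keep only QA-approved features above a confidence floor before training or mapping.
approved = [
    f for f in collection["features"]
    if f["properties"]["qa_status"] == "approved"
    and f["properties"]["confidence"] >= 0.90
]

# Tally approved labels per class as a quick cross-check against the QA scorecard.
counts = {}
for f in approved:
    cls = f["properties"]["class"]
    counts[cls] = counts.get(cls, 0) + 1
print(counts)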
Deliverables

What you walk away with

How we work

A scoped path from sample data to running system

No open-ended retainers. No "discovery phases" that bill for months without producing anything you can evaluate.

01

Sample

50-100 frames, your schema, your edge cases. We return a calibration set so you can see how we interpret your taxonomy before scale.

02

Pilot

500 samples in 2-4 business days. Inter-annotator agreement scores, QA dashboard, format in your pipeline (GeoJSON, COCO, KITTI, Mapillary).

03

Scale

Production volume with SLA. 24/7 follow-the-sun capacity, 98%+ QA target, weekly delivery cadence.

04

Integrate

Wire into your training pipeline, deploy custom validation rules, build out edge-case mining. Optional embedded team.

Live on geospatialsolutions.co

Click into the actual work

These open the real, interactive demos on our main site — not screenshots, not videos. Click around before you decide to talk to us.

Why teams trust us
Questions teams ask before they engage us

Common questions, answered honestly

Why 'human geo' annotations vs automated?

Automated extraction works for the easy 70% of features. The 30% with edge cases — occlusion, ambiguous geometry, novel asset types — needs human judgment validated against geographic context. We focus on that gap.

How does this differ from geospatialannotations.com?

Same underlying team and standards, but humangeoannotations.com is positioned for projects where human QA is the explicit differentiator: regulatory deliverables, training data for safety-critical AI, and high-stakes mapping where automated-only output won't hold up to scrutiny.

How do you maintain human consistency at scale?

Per-annotator consensus scoring, weekly calibration sessions against ground truth, and a senior QA pass on a sampled percentage of every delivery. Annotator drift is detected and corrected before it propagates.
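
As a sketch of the consensus-scoring idea (not our actual tooling): majority vote per object, then per-annotator agreement against that consensus, with an invented 0.85 floor for triggering recalibration.

python
from collections import Counter

# Hypothetical batch: each object labeled independently by three annotators.
batch = {
    "obj_01": {"a01": "utility_pole", "a02": "utility_pole", "a03": "utility_pole"},
    "obj_02": {"a01": "streetlight",  "a02": "utility_pole", "a03": "streetlight"},
    "obj_03": {"a01": "streetlight",  "a02": "streetlight",  "a03": "streetlight"},
}

AGREEMENT_FLOOR = 0.85  # illustrative threshold for flagging drift

# Consensus label per object = majority vote across annotators.
consensus = {obj: Counter(votes.values()).most_common(1)[0][0] for obj, votes in batch.items()}

# Per-annotator agreement with consensus; low scores trigger recalibration before the next batch.
annotators = sorted({a for votes in batch.values() for a in votes})
for a in annotators:
    hits = sum(1 for obj, votes in batch.items() if votes[a] == consensus[obj])
    rate = hits / len(batch)
    status = "ok" if rate >= AGREEMENT_FLOOR else "recalibrate"
    print(f"{a}: {rate:.2f} agreement -> {status}")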

Are your annotators full-time employees or contractors?

Mix. The QA team is full-time. Production annotators are contracted but vetted, trained, and retained — we don't use anonymous task-platform crowds. Annotator continuity is what enables our QA discipline.

More from Geospatial Solutions

Adjacent services your team may need

Start a free annotation pilot

Send us 500 frames. Get a labeled pilot in 2 days.

No purchase order, no master service agreement. Send a representative slice and a target schema; we return the labels in the format your pipeline already ingests.

Send edge cases for human review