The only hotel review scraper that pulls TripAdvisor, Booking.com, and Google Maps in one run — then runs AI sentiment across each platform separately and combined, flagging when a hotel's scores diverge across channels (a real signal worth investigating).
Hotels show up on TripAdvisor, Booking.com, and Google Maps with different scores because each platform attracts a different audience. TripAdvisor skews to enthusiast leisure travelers. Booking.com skews to OTA business and family travelers. Google Maps catches walk-by locals and last-minute searchers. Same hotel, three different rooms full of different expectations.
When those scores diverge sharply, it tells you something specific about your property — usually a mismatch between who you're attracting on each channel and what you're actually delivering. Existing hotel reputation tools (Revinate, TrustYou, Olery) either focus on one platform at a time or require a $300-$1,500+/month subscription, often on an annual contract, to compare them.
This scraper does it in one run, on any hotel URL — yours or any competitor — at $0.06 per review. Pay once, no subscription, run quarterly or per-deal.
star ratings get normalized across platforms so cross-platform comparison is apples to apples. traveler type, date, hotel response, and multi-language text all preserved.
point the scraper at any hotel — by name or by URL on any of the 3 platforms. it resolves the property across platforms automatically and returns one combined report.
dateRange filter (30days / 90days / 365days / all) narrows the window. the platformConsistency score comes with a divergence summary that names why the scores differ (audience mismatch, recent service issue, language demographics, etc). re-run quarterly to track trendDirection and platformConsistency drift over time. run on 5-20 competitor hotels in your market for benchmark studies. for chains, pass a list of property URLs and rank by reputationRisk to find which properties need investment first. push output to Google Sheets, your PMS as a custom field, a BI tool (Looker / Tableau / Snowflake), or Slack alerts via the Apify API or n8n / Zapier. every run returns per-platform scores + combined scores + a platformConsistency flag + a divergence summary, plus every individual review across all 3 platforms. all downloadable as JSON, CSV, or Excel.
combined.sentimentScore (1-10) · weighted across all 3 platforms by review volume.
combined.reputationRisk (1-10) · combined risk score. 7+ = act now.
combined.trendDirection · improving / declining / stable across the last 90 days.
perPlatform[*].sentimentScore · separate scores for TripAdvisor, Booking.com, Google Maps so you can see who feels what.
platformConsistency · consistent / inconsistent flag based on score divergence across the 3 platforms.
divergenceSummary · plain-English explanation of why scores diverge across platforms (audience demographics, recent operational issue, language mix, etc).
topComplaints / topPraises · top 5 themes each, clustered across all reviews on all platforms.
executiveSummary · 2-3 sentences in plain English. paste it into the GM's weekly review or a board update.
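the fields above land as one dataset item per hotel. a minimal sketch of what that item might look like — illustrative values only, and any key names or nesting beyond the documented fields (combined.*, perPlatform, platformConsistency, divergenceSummary, topComplaints, topPraises, executiveSummary) are assumptions, not the published schema:

```python
# Illustrative dataset item for one hotel. Values are made up;
# the exact nesting is an assumption based on the field list above.
item = {
    "combined": {
        "sentimentScore": 7.4,         # 1-10, weighted by review volume
        "reputationRisk": 6,           # 1-10; 7+ = act now
        "trendDirection": "declining", # improving / declining / stable, last 90 days
    },
    "perPlatform": {
        "tripadvisor": {"sentimentScore": 8.1},
        "booking": {"sentimentScore": 6.9},
        "googleMaps": {"sentimentScore": 7.2},
    },
    "platformConsistency": "inconsistent",  # consistent / inconsistent
    "divergenceSummary": "Booking.com business travelers cite WiFi issues "
                         "that TripAdvisor leisure guests do not surface.",
    "topComplaints": ["wifi", "parking", "breakfast", "noise", "check-in wait"],
    "topPraises": ["location", "staff", "cleanliness", "pool", "views"],
    "executiveSummary": "Scores are drifting apart across channels; "
                        "WiFi is the main driver on the business side.",
}
```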
a regional hotel group has 24 properties across Florida and Georgia and wants to know which 3 need the most investment. running the scraper on all 24 URLs with dateRange: "90days" takes about 90 minutes and costs ~$520 for ~8,700 reviews across the 3 platforms. the output ranks properties by combined.reputationRisk and surfaces that 3 properties have platformConsistency: inconsistent with Booking.com scores 1.5+ points below TripAdvisor — driven specifically by WiFi complaints from business travelers that leisure travelers don't surface. the group routes capex to WiFi upgrades on those 3 properties before next quarter.
run frequency: quarterly · cost: ~$400-700 per chain depending on review volume
a buyer is evaluating a $9M boutique hotel acquisition. the seller's deck cites Booking.com 8.4 as proof of brand strength. running the scraper with dateRange: "all" reveals Booking.com 8.4 is real but TripAdvisor is 4.0/5 (≈ 8.0/10) and Google Maps is 3.6/5 (≈ 7.2/10) — platformConsistency: inconsistent, divergence summary cites recent service-quality drop visible on Google Maps but not yet in Booking.com (lag effect). the buyer renegotiates 14% off and writes a 12-month service-recovery clause. for off-hotel cross-checks, the catalog also includes a Google Maps Review Sentiment scraper.
run frequency: per-deal · cost: $15-100 per hotel depending on volume
a hospitality brand is opening a new property in Miami Beach and wants to know exactly what guests complain about across the 12 nearest competitors. one run on 12 competitor URLs costs ~$210 for ~3,500 reviews total. topComplaints clustered across all 12 surface 3 recurring themes: parking confusion, breakfast quality, pool noise at night. the brand designs the operational SOPs around exactly those three before opening. a Trustpilot Sentiment Scraper handles the SaaS/ecommerce version of the same playbook for non-hospitality use cases.
run frequency: pre-launch · cost: $150-300 per market study
honest comparison against major hotel reputation management platforms and other Apify scrapers. checked their public listings as of 2026.
| | data-runner.dev | Revinate / TrustYou | Single-platform Apify |
|---|---|---|---|
| Multi-platform in one run | ✓ TripAdvisor + Booking + Google Maps | ✓ but heavy subscription | single platform only |
| Cross-platform consistency flag | ✓ with divergence summary | limited to dashboards | not supported |
| AI sentiment included | ✓ 8 combined fields + per-platform | ✓ proprietary | raw reviews only, no AI |
| Competitor analysis | ✓ any hotel URL | add-on, often higher tier | ✓ if scraping works |
| Price | $0.06 / review, no subscription | $300-$1,500+/mo + setup | $0.005-0.02 / review (no AI) |
| Star rating normalization | ✓ Booking 1-10 → kept, TA + Google 1-5 → 1-10 | ✓ proprietary | raw rating only |
| Multi-language reviews | ✓ scored in original | partial translation | raw text |
| Export | JSON / CSV / Excel / API / webhooks | dashboard + CSV | varies |
no subscription. no minimum. pricing scales linearly with total review volume across all platforms.
includes every review from TripAdvisor, Booking.com, and Google Maps (text, normalized star rating, traveler type, date, hotel response) + the full per-platform + combined AI sentiment report per hotel. example: a hotel with 300 reviews across the 3 platforms ≈ $18. empty platforms cost nothing.
new to Apify? you get $5 in free credits on signup — that's ~83 reviews analyzed before you spend a cent.
run on Apify →

how it works, what it costs, what's legal, and how it handles edge cases.
It is the only Apify scraper that pulls reviews from TripAdvisor, Booking.com, and Google Maps for the same hotel in one run, then runs AI sentiment across all three corpora together — surfacing platform-level consistency flags you cannot get from any single-platform tool. A hotel averaging 4.6/5 on TripAdvisor but the equivalent of 3.9/5 on Booking.com (once Booking's 1-10 scale is normalized) is broadcasting a real signal about who is staying there and what they expected. This scraper catches that gap automatically and tells you what's driving it.
The scraper pulls every review on each of the three platforms for the hotel — text, star rating, traveler type (business / family / couple / solo), date, and the hotel's response — then runs LLM sentiment across each platform's corpus separately AND across the combined corpus. You get back per-platform scores plus a platformConsistency flag (consistent / inconsistent) and a divergence summary that explains why scores differ. Star ratings get normalized across platforms (Booking.com uses 1-10, TripAdvisor 1-5, Google 1-5) so the comparison is apples to apples.
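The normalization step can be sketched as a simple linear mapping. This is a minimal illustration, not the actor's published formula, though the doubling assumption matches the examples elsewhere on this page (TripAdvisor 4.0/5 ≈ 8.0/10, Google Maps 3.6/5 ≈ 7.2/10):

```python
def normalize_rating(rating: float, platform: str) -> float:
    """Map a raw star rating onto a common 10-point scale.

    Booking.com already scores on 1-10 and is kept as-is;
    TripAdvisor and Google Maps use 1-5 and are doubled here,
    so e.g. 4.0/5 becomes 8.0 and 3.6/5 becomes 7.2.
    """
    if platform == "booking":
        return rating          # already on the 10-point scale
    if platform in ("tripadvisor", "google_maps"):
        return rating * 2      # 1-5 scale doubled onto the 10-point scale
    raise ValueError(f"unknown platform: {platform}")
```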
Hotel reviews are heavily multilingual — a Miami Beach property might have 40% English, 25% Spanish, 15% Portuguese, 10% French, 10% other. The scraper analyzes reviews in their original language without translation, preserving nuance. The batch-level scoring picks up patterns that single-review analysis misses, especially when the same complaint surfaces across languages (e.g. 'shower water pressure' in 4 languages = real maintenance issue, not language quirk).
Yes — pass a list of hotel URLs (one per location) on the input. Each location is analyzed and pushed as a separate dataset item. Pricing scales linearly at $0.06 per review across all 3 platforms. A 12-property chain averaging ~300 reviews per platform per property (~900 each, ~10,800 total) ≈ ~$650 per quarterly run. The output includes per-property scores so you can rank properties by reputationRisk and route investment to the underperformers.
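A multi-property run can be sketched with the official apify-client Python package. The actor ID and the input field names (hotelUrls, dateRange) below are assumptions for illustration; check the actor's input schema on Apify for the real names:

```python
PRICE_PER_REVIEW = 0.06  # from the pricing section


def build_run_input(property_urls, date_range="90days"):
    """Assemble the actor input. Field names are assumptions, not the published schema."""
    return {"hotelUrls": list(property_urls), "dateRange": date_range}


def estimate_cost(total_reviews):
    """Linear pricing: total reviews across all platforms x $0.06."""
    return round(total_reviews * PRICE_PER_REVIEW, 2)


if __name__ == "__main__":
    # requires: pip install apify-client, plus an Apify API token
    from apify_client import ApifyClient

    client = ApifyClient("<YOUR_APIFY_TOKEN>")
    run_input = build_run_input([
        "https://www.booking.com/hotel/us/example-property-1.html",
        "https://www.booking.com/hotel/us/example-property-2.html",
    ])
    # the actor ID here is hypothetical
    run = client.actor("data-runner/hotel-review-sentiment").call(run_input=run_input)

    # each property arrives as its own dataset item; rank by combined.reputationRisk
    items = client.dataset(run["defaultDatasetId"]).iterate_items()
    ranked = sorted(items, key=lambda i: i["combined"]["reputationRisk"], reverse=True)
    for item in ranked:
        print(item["combined"]["reputationRisk"], item.get("hotelName"))
```

The quarterly-run figure above falls out of the same arithmetic: 12 properties × ~900 reviews each ≈ 10,800 reviews ≈ $648.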
Booking.com reviewers skew toward business travelers and value-focused families who booked through OTAs and care about price/cleanliness/location. TripAdvisor reviewers skew toward leisure travelers and enthusiasts who care more about service nuance, amenities, and 'wow factor.' Same hotel, two different audiences with different expectations. The scraper's platformConsistency flag tells you when this gap is normal (small divergence) vs structural (large divergence = a real signal worth investigating). The divergence summary names the specific reasons.
About 3-5 minutes per hotel because the scraper has to pull from three platforms with proper pagination and rate-limit handling. A 12-property chain analysis takes 35-60 minutes start to finish and runs entirely on Apify's infrastructure — you do not need to keep your machine open. Results stream into the dataset as each property finishes, so you can start reviewing the first results while the rest are still running.
The platform with zero reviews is skipped automatically — you do not pay for it. The output marks that platform as 'no data' and the cross-platform consistency analysis runs only on platforms with actual reviews. This is common for newer boutique hotels (often heavy on Booking.com, light on TripAdvisor) or US-domestic properties (heavy on Google Maps, lighter on Booking.com). The report is honest about what it could and could not see.
$0.06 per review across all 3 platforms. cross-platform consistency flag + divergence summary included. pay only for what you analyze.