Historical Figure Ranking Sites Compared: Who Does It Best?
How do you decide who the most important person in history is? It depends entirely on where you look.
Type "greatest historical figures" into a search engine and you will get a dozen different answers from a dozen platforms, each with its own methodology and its own results. Wikipedia will give you an encyclopedia entry with no ranking at all. Ranker will show you a poll where Keanu Reeves somehow outranks Isaac Newton. Academic indexes will bury you in citation metrics. And IMDb will tell you that historical importance is measured by how many movies someone has appeared in.
The problem is not a lack of data. The problem is that no single platform has figured out how to aggregate collective human judgment into a meaningful, continuously updated score — until now.
Let us walk through the major approaches and see how they stack up.
Wikipedia: The Encyclopedia That Refuses to Rank
Wikipedia is the default starting point for anyone researching a historical figure. It is comprehensive, well-sourced, and available in hundreds of languages. If you want to know when Napoleon Bonaparte was born or how many battles Alexander the Great won, Wikipedia delivers.
But Wikipedia explicitly refuses to rank people. Its neutral point of view policy means editors cannot say one person is "more important" than another. The result is an information resource, not an evaluation tool.
Some researchers have tried to extract rankings from Wikipedia metadata — article length, number of page views, number of language editions. These proxies are interesting but deeply flawed. Article length reflects how much controversy someone generated, not their significance. Page views spike when a celebrity dies or a biopic drops and then crash back down. And language editions favor figures from countries with large Wikipedia editor communities.
Strengths: Comprehensive factual information, multilingual, well-cited.
Weaknesses: No evaluation mechanism, no ranking, metadata proxies are misleading, editorial bias toward Western English-language figures.
Ranker: Democracy Without Stakes
Ranker lets anyone vote on ranked lists. "Greatest Military Leaders of All Time." "Most Influential Scientists." "Best Presidents." The concept is appealing — let the crowd decide.
The execution has problems. Ranker votes are free, anonymous, and unlimited. This means results are dominated by recency bias (living figures rank absurdly high), popularity bias (pop culture figures dominate), and low-effort engagement (most voters spend seconds on each list). There is no cost to voting, so there is no incentive to vote thoughtfully.
On Ranker's "Most Important People in History" list, you will find Albert Einstein competing with figures whose primary qualification is being famous on the internet in the last decade. The wisdom of crowds only works when the crowd has skin in the game.
Strengths: Large voter base, covers many categories, accessible.
Weaknesses: No stakes mean no thoughtful engagement, extreme recency and popularity bias, easily gamed, static lists that rarely update.
IMDb: Stars, Not Significance
IMDb does not set out to rank historical figures, but its "STARmeter" and biographical film databases create an implicit ranking system. The more movies, TV shows, and documentaries feature a person, the higher their cultural footprint appears.
This creates an entertainment-driven distortion. Cleopatra ranks highly because Hollywood loves her. Figures like Nikola Tesla surged after a wave of documentaries in the 2010s and 2020s. Meanwhile, transformative figures who lack cinematic appeal — great mathematicians, bureaucratic reformers, agricultural innovators — are invisible.
IMDb measures cultural penetration through entertainment, not significance. These are related but very different things.
Strengths: Massive database, tracks cultural impact through media, well-maintained.
Weaknesses: Measures entertainment value not historical importance, biased toward visually dramatic figures, Western media dominance.
Academic Rankings: Rigorous but Inaccessible
Several academic projects have attempted to create data-driven historical importance rankings. MIT's Pantheon project scores figures by the reach of their Wikipedia biographies across language editions, culturomics studies mine mentions of names in millions of digitized books, and various bibliometric approaches count scholarly citations.
These efforts are methodologically rigorous but suffer from three critical problems. First, they are static — published once and rarely updated. History's evaluation of figures changes constantly, but academic datasets update on the timescale of years, not days. Second, they are inaccessible — buried in academic papers and databases that the public never sees. Third, they measure what scholars write about, not what people think. A figure can be extensively studied without being widely admired or condemned.
The gap between academic ranking and public opinion is often enormous. Scholars rank Genghis Khan as one of history's most consequential figures. The general public is more divided — is he a great empire builder or a mass murderer? Academic metrics cannot capture this tension.
Strengths: Methodologically sound, data-driven, peer-reviewed.
Weaknesses: Static, inaccessible to public, measures scholarly attention not public judgment, slow to update.
The Comparison Table
| Feature | Wikipedia | Ranker | IMDb | Academic | JudgeMarket |
|---|---|---|---|---|---|
| Ranks figures | No | Yes | Indirectly | Yes | Yes |
| Continuous updates | Daily edits | Periodic | Daily | Yearly | Real-time |
| Skin in the game | No | No | No | Reputation | Yes (OPS) |
| Public accessible | Yes | Yes | Yes | No | Yes |
| Captures controversy | Somewhat | No | No | Somewhat | Yes (volatility) |
| Resists gaming | Somewhat | No | Somewhat | Yes | Yes |
| Cross-cultural | Partial | No | No | Partial | Yes |
| Historical + living | Yes | Yes | Yes | Historical only | Yes |
| Quantitative score | No | Vote count | Rating | Index score | Price (0-100) |
Curious how market-based scoring works in practice? See live prices for thousands of figures right now.
Browse all figures on JudgeMarket →
Why Market-Based Scoring Is Superior
Every approach above fails at the same fundamental challenge: how do you aggregate millions of individual opinions into a single meaningful number without that number being gamed, biased, or stale?
Market mechanisms solve this. Here is why.
Stakes force honesty. On JudgeMarket, you trade OPS to express your opinion. If you think Leonardo da Vinci is undervalued at 72, you buy. If the market proves you right, you profit. If you are wrong, you lose. This is fundamentally different from clicking a vote button. When your resources are on the line, you think harder.
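The trade mechanics above can be sketched with hypothetical numbers. The OPS amounts, share counts, and the `trade_pnl` helper are illustrative assumptions, not JudgeMarket's actual API:

```python
# Sketch of long-position P&L on a 0-100 price scale.
# All figures here are hypothetical and for illustration only.

def trade_pnl(entry_price: float, exit_price: float, shares: int) -> float:
    """Profit (positive) or loss (negative), in OPS, for a long position."""
    return (exit_price - entry_price) * shares

# You think Leonardo da Vinci is undervalued at 72, so you buy 10 shares.
profit = trade_pnl(72.0, 78.0, 10)  # market agrees, price rises to 78
loss = trade_pnl(72.0, 65.0, 10)    # market disagrees, price falls to 65
```

The asymmetry is the point: unlike a free vote, a wrong opinion has a measurable cost, which filters out low-effort participation.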
Prices update continuously. A figure's JudgeMarket price reflects the latest collective judgment at every moment. When a new documentary drops, when a scandal surfaces, when new historical evidence emerges — the price moves in real time. Compare this to an academic ranking that updates on a timescale of years.
Markets resist manipulation. Trying to artificially inflate a figure's price is expensive. You have to keep buying against sellers who disagree with you. On a voting platform, one person with multiple accounts can swing results. On a market, manipulators bleed resources to informed traders who arbitrage away the distortion.
Controversy becomes visible. High trading volume and price volatility on Karl Marx tells you something that no ranking can: this figure is actively debated. The price of 55 says "contested." A ranking that places Marx at number 47 says nothing about the intensity of disagreement.
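Reading volatility as a controversy signal can be shown in a short sketch. The price series and the `controversy_score` helper are hypothetical; a real implementation would pull actual price history:

```python
from statistics import pstdev

# Hypothetical daily closing prices for two figures (illustrative data only).
contested = [52, 61, 48, 58, 63, 50, 57]  # wide swings: active disagreement
settled = [71, 72, 71, 70, 72, 71, 71]    # stable: broad consensus

def controversy_score(prices: list[float]) -> float:
    """Use price volatility (population standard deviation) as a
    rough proxy for how contested a figure's reputation is."""
    return pstdev(prices)

# The contested figure shows far higher volatility than the settled one,
# even though both have mid-to-high average prices.
```

Two figures can share the same average price while telling opposite stories: one is genuinely agreed upon, the other is a live argument.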
Everyone participates equally. You do not need a PhD in history to trade on JudgeMarket. You do not need to be a Wikipedia editor or an academic. You need an opinion and the willingness to back it up. This is the most democratic form of evaluation ever created for historical figures.
What the Market Reveals That Rankings Cannot
Consider Mother Teresa. On most ranking sites, she appears near the top of "greatest humanitarians" lists. But her legacy is more complex than that — journalist Christopher Hitchens, medical ethicists, and historians have raised serious criticisms about conditions in her missions and her political associations.
A simple ranking cannot capture this. A market price can. If Mother Teresa's JudgeMarket price is volatile, trading between 58 and 74 over a quarter, that tells you the public is genuinely divided. The price itself encodes uncertainty in a way that a static rank of "#12 Greatest Humanitarian" never could.
Or consider the Einstein vs. Newton debate. Who was the greater physicist? This is a question people have argued about for a century. A ranking forces you to put one above the other. A market lets both prices coexist, and the relative prices — and their movements over time — tell a richer story than any ordered list.
The Case for Market-Based Reputation
Every other approach to ranking historical figures treats evaluation as a one-time event — a vote cast, a paper published, an article written. JudgeMarket treats evaluation as a continuous process, because that is what it actually is.
Our collective judgment of history is always changing. Thomas Jefferson was once an unambiguous hero; today his legacy is fiercely debated. Alan Turing was once a forgotten figure; today he is recognized as one of the most important minds of the twentieth century. These shifts deserve a scoring system that can keep up.
Markets are that system. They have aggregated human opinion more efficiently than any other mechanism for centuries — in finance, in prediction, in commodity pricing. JudgeMarket applies that same engine to the most fundamental question of all: who matters?
Ready to see what the market thinks? Browse live prices for thousands of historical figures and place your first trade.