There’s always something to howl about.

Why doesn’t Zillow.com act like Trulia.com? Because life is short but art is long . . .

Cathy had lunch today with a friend and ex-colleague. Cathy was talking about Zillow.com as a Web 2.0 phenomenon, and her friend was having trouble wrapping her mind around the idea of Web 2.0.

I sent her mail when I heard about this, summarizing and quoting from the seminal Tim O’Reilly article:


In an elevator speech: Web 2.0 creates an ongoing community of active users by integrating a user-modifiable database through an interactive, as opposed to static, web-based interface.
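
To make that a little less abstract, here is a minimal sketch of what a "user-modifiable database through a web-based interface" can look like in code. It is my own toy illustration in TypeScript with Express; the endpoints, field names, and in-memory store are invented for the example, not anything Zillow actually runs.

```typescript
// Toy sketch of a user-modifiable shared database behind a web interface.
// Endpoints, field names, and the in-memory Map are all invented for illustration.
import express from "express";

interface HomeFact {
  user: string;  // who contributed the fact
  field: string; // e.g. "roof replaced"
  value: string; // e.g. "2004"
}

const app = express();
app.use(express.json());

// The shared database: home id -> facts contributed by users over time.
const homeFacts = new Map<string, HomeFact[]>();

// Any user can add to the record for a home (the "user-modifiable" part).
app.post("/homes/:id/facts", (req, res) => {
  const facts = homeFacts.get(req.params.id) ?? [];
  facts.push(req.body as HomeFact);
  homeFacts.set(req.params.id, facts);
  res.status(201).json({ count: facts.length });
});

// Every visitor sees the accumulated contributions (the "community" part).
app.get("/homes/:id/facts", (req, res) => {
  res.json(homeFacts.get(req.params.id) ?? []);
});

app.listen(3000);
```

The point is only that the visitors, not the publisher, do the writing; the site's job is to accumulate what they contribute and serve it back to everyone.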

This is O’Reilly’s summary:

Web 2.0 Design Patterns

In his book, A Pattern Language, Christopher Alexander prescribes a format for the concise description of the solution to architectural problems. He writes: “Each pattern describes a problem that occurs over and over again in our environment, and then describes the core of the solution to that problem, in such a way that you can use this solution a million times over, without ever doing it the same way twice.”

  1. The Long Tail
    Small sites make up the bulk of the internet’s content; narrow niches make up the bulk of the internet’s possible applications. Therefore: Leverage customer self-service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.
  2. Data is the Next Intel Inside
    Applications are increasingly data-driven. Therefore: For competitive advantage, seek to own a unique, hard-to-recreate source of data.
  3. Users Add Value
    The key to competitive advantage in internet applications is the extent to which users add their own data to that which you provide. Therefore: Don’t restrict your “architecture of participation” to software development. Involve your users both implicitly and explicitly in adding value to your application.
  4. Network Effects by Default
    Only a small percentage of users will go to the trouble of adding value to your application. Therefore: Set inclusive defaults for aggregating user data as a side-effect of their use of the application.
  5. Some Rights Reserved
    Intellectual property protection limits re-use and prevents experimentation. Therefore: When benefits come from collective adoption, not private restriction, make sure that barriers to adoption are low. Follow existing standards, and use licenses with as few restrictions as possible. Design for “hackability” and “remixability.”
  6. The Perpetual Beta
    When devices and programs are connected to the internet, applications are no longer software artifacts, they are ongoing services. Therefore: Don’t package up new features into monolithic releases, but instead add them on a regular basis as part of the normal user experience. Engage your users as real-time testers, and instrument the service so that you know how people use the new features.
  7. Cooperate, Don’t Control
    Web 2.0 applications are built of a network of cooperating data services. Therefore: Offer web services interfaces and content syndication, and re-use the data services of others. Support lightweight programming models that allow for loosely-coupled systems.
  8. Software Above the Level of a Single Device
    The PC is no longer the only access device for internet applications, and applications that are limited to a single device are less valuable than those that are connected. Therefore: Design your application from the get-go to integrate services across handheld devices, PCs, and internet servers.

There are things that are missing here, e.g., the ubiquitous Ajax programming that makes the web behave more like a stand-alone application on your desktop machine. But the essence of Web 2.0 is community creation, maintenance, and control of a shared database through the web. eBay and Wikipedia are perfect examples, as is the user rating system on Amazon.
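
For the Ajax piece, the browser half of that same toy example looks something like this; again, the endpoint and element id are made up for illustration, and none of it is Zillow's actual code.

```typescript
// Toy sketch of the Ajax pattern: fetch data asynchronously and redraw part of
// the page in place, with no full page reload. Endpoint and element id are invented.
interface HomeFact {
  user: string;
  field: string;
  value: string;
}

async function refreshHomeFacts(homeId: string): Promise<void> {
  const response = await fetch(`/homes/${homeId}/facts`);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  const facts: HomeFact[] = await response.json();

  // Redraw just the facts panel; the rest of the page never reloads,
  // which is what makes the site feel like a desktop application.
  const panel = document.getElementById("home-facts");
  if (panel) {
    panel.innerHTML = facts
      .map((f) => `<li>${f.field}: ${f.value} (per ${f.user})</li>`)
      .join("");
  }
}

// Re-check for new community contributions every minute.
setInterval(() => void refreshHomeFacts("123-main-st"), 60_000);
```

The page asks the server for fresh data in the background and updates one panel in place, rather than walking the user through a chain of static pages.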

More:

Core Competencies of Web 2.0 Companies

In exploring the seven principles above, we’ve highlighted some of the principal features of Web 2.0. Each of the examples we’ve explored demonstrates one or more of those key principles, but may miss others. Let’s close, therefore, by summarizing what we believe to be the core competencies of Web 2.0 companies:

  • Services, not packaged software, with cost-effective scalability
  • Control over unique, hard-to-recreate data sources that get richer as more people use them
  • Trusting users as co-developers
  • Harnessing collective intelligence
  • Leveraging the long tail through customer self-service
  • Software above the level of a single device
  • Lightweight user interfaces, development models, AND business models

The next time a company claims that it’s “Web 2.0,” test their features against the list above. The more points they score, the more they are worthy of the name. Remember, though, that excellence in one area may be more telling than some small steps in all seven.

There’s way more. Amazon is the archetypical long tail site, and they aggregate data on your past searches and purchases to predict what kind of long tail stuff you will be interested in, so they can promote it to you. Zillow is acrawl with statisticians, and my expectation is that they are building statistical models of every interaction point in the system. Want to know what makes the frog jump? Study frogs…
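
I have no window into Zillow's systems, so take this as a guess at the shape of the thing rather than a description of it. But the Amazon-style aggregation is easy to sketch: log what each user looks at as a side-effect of normal use, then surface the homes most often viewed alongside the ones that user has already seen. Everything below is invented for illustration.

```typescript
// Toy sketch of "network effects by default": every page view is logged as a
// side-effect of normal use, and the aggregate is used to predict what a given
// user might want to see next. All names and data here are invented.
type UserId = string;
type HomeId = string;

const viewsByUser = new Map<UserId, Set<HomeId>>();

// Called from the page-view handler; the user adds value just by browsing.
function recordView(user: UserId, home: HomeId): void {
  const seen = viewsByUser.get(user) ?? new Set<HomeId>();
  seen.add(home);
  viewsByUser.set(user, seen);
}

// "People who looked at what you looked at also looked at..." Count overlap
// with every other user and rank the homes this user has not seen yet.
function suggestFor(user: UserId, limit = 3): HomeId[] {
  const mine = viewsByUser.get(user) ?? new Set<HomeId>();
  const scores = new Map<HomeId, number>();

  for (const [other, theirs] of viewsByUser) {
    if (other === user) continue;
    const overlap = [...theirs].filter((h) => mine.has(h)).length;
    if (overlap === 0) continue;
    for (const home of theirs) {
      if (!mine.has(home)) {
        scores.set(home, (scores.get(home) ?? 0) + overlap);
      }
    }
  }

  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, limit)
    .map(([home]) => home);
}

// Example: two users overlap on one home, so the second user's other views
// become suggestions for the first.
recordView("alice", "123-main-st");
recordView("bob", "123-main-st");
recordView("bob", "456-oak-ave");
console.log(suggestFor("alice")); // ["456-oak-ave"]
```

That is "network effects by default" in miniature: nobody fills out a survey, but every visit makes the predictions a little better for everyone else.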


This is all a very long way of answering the question posed in the headline: Why doesn’t Zillow.com go off and beg, borrow or steal a whole bunch of residential real estate listings in order to populate its listings database overnight?

There are a lot of answers to this question, all of them, I think, essentially the same answer:

  • A listings database has a temporary appeal to users, whereas a very robust, permanent database of information about homes and neighborhoods has an enduring and ever-increasing appeal to end-users
  • Home shopping as a temporary activity undertaken at intervals throughout life is only one of the needs Zillow is building its database to satisfy
  • Zillow’s objective is not to accumulate short-term listings data but to acquire and archive long-term records about homes
  • All of this turns critically on three of O’Reilly’s criteria: Data Is the Next Intel Inside, Users Add Value, and Network Effects by Default
  • Ergo, the ever-improving real estate and user databases are a secondary consequence, a side-effect of the creation of a community

In the world of Trulia.com — and other listing.bots focused on evanescent listings — users come and go. On the idealized Planet Zillow, users come and stay.

Home buying is at most an 18-month effort undertaken every seven to ten years, on average. Home ownership is continuous. Zillow attracts a lot of sellers, and it seems certain that it hopes to attract a countervailing cadre of buyers. But what Zillow is really doing, I think, is aiming at the 100 million-plus Americans who own their own homes. Some may come every day — to see new listings, to see new home photos, to ask or answer questions. Some may come only once in a while, when they have a particular need.

But its databases are permanent and accretive, constantly improving. I think Zillow’s goal is not to compete with Trulia or Google Base for home shoppers in the short run. I think its goal is to suck every bit of oxygen out of the residential real estate space as a vertical market. I’m not implying malice. But where others see this opportunity or that opportunity, I think Zillow.com sees the information marketplace for homeowners as a single unified whole, and I think the company’s goal is to dominate it in its entirety.

Vita brevis, ars longa. Life is short, art is long. If Zillow’s moves seem not to make sense with respect to the recent changes made by Trulia.com or Google Base, it could mean one of two things:

Either the home-shopping listing.bots are right, and Zillow is going to burn through $57 million in venture capital without making a profit.

Or: Zillow understands better than anyone else in web-based residential real estate what makes the frog jump…