GE users vs NWW users

Sorry Frank, nothing personal; this just really amused me.

The GE blog ran a story yesterday on how Google's (really DG's) new WorldView I satellite would only provide black-and-white (actually panchromatic) imagery, wondering why anyone would want non-colour imagery. There were some typical GE fanboy comments too –

I can’t believe they would have launched a satellite without color capabilities. Maybe they just left the color sensor specs off by mistake.

This is just a typical reaction from the GE types who just want to see their houses; they don't understand any of the technology behind the software they use. I'm no GIS expert, but even I know that pan images are generally higher resolution and show more detail, and they can always be fused with colour imagery to produce high-res colour.
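That fusion step is usually called pan-sharpening. As a rough illustration only (this is a minimal Brovey-transform sketch of my own, not anything DG or Google actually runs; the function name, band layout, and [0, 1] value range are all assumptions), the idea is to rescale upsampled colour bands by the ratio of the sharp pan band to their sum:

```python
import numpy as np

def brovey_pansharpen(pan, red, green, blue):
    """Brovey-transform pan-sharpening sketch.

    pan: high-resolution panchromatic array.
    red, green, blue: multispectral bands already resampled onto the
    pan grid. All arrays are floats in [0, 1] and share one shape.
    """
    total = red + green + blue + 1e-9   # small epsilon avoids divide-by-zero
    ratio = pan / total                 # how much brighter pan is than the band sum
    return red * ratio, green * ratio, blue * ratio
```

In practice real workflows do this with properly co-registered, radiometrically matched imagery; as the comments below note, merging bands captured at different dates or look angles can produce artifacts.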

And here is a response from a World Wind blog, The confused life, on the same topic, along with a quote from our channel:

nhv – westside fyi http://www.gearthblog.com/blog/archives/2007/09/digitalglobe_new_satellite_worldvie.html
westside – nhv: thanks yes WV01 is Pan. NGA funded NextView and they decided which sensor of the two we built they wanted
westside – we launched that and that capacity will go to NGA. QB02 will now have more commercial capacity available.
westside – QB02 has 4 band multispectral plus the pan band
westside – WV02 which currently is scheduled to launch late 08 will have 8 bands of multispectral

Can you spot the difference? In my opinion, GE users are far more like IE users, and NWW users are like FF users.


8 thoughts on “GE users vs NWW users”

  • Yes, I’m fully aware that panchromatic imagery is higher-resolution than color – my comment references and quotes the sensor specs for the previous Quickbird satellite, which shows the panchromatic resolution at 0.6m, and the multispectral at 2.4m.

    And yes, you can do pan-sharpening by combining higher-res black-and-white imagery with lower-res color imagery. But if the two datasets aren’t taken at the same time, then a pan-sharpened image may not be accurate. If there’s a major change in the imagery between the time you take the lower-res color image and the higher-res panchromatic (flood, fire, earthquake, new buildings, seasonal changes, etc.), pan-sharpening may not be valid. That’s why I speculated (incorrectly as it turns out) that the multispectral sensor specs had been left off, since not having them reduces the utility of the satellite. Especially so if you want to use the individual bands of the multispectral output to analyze information like vegetation cover (NDVI) or fire (NBR); can’t do that with panchromatic alone.

  • I disagree. If you have the data out of time synch, you’ll create features that don’t reflect reality: phantom and miscolored cars and buildings, fields of snow in winter panchromatic images colored green by summer multispectral images, etc.

    Moot point anyway, as the reference you quote says that this imagery isn’t intended for Google Earth, but exclusively for the National Geospatial-Intelligence Agency. Upcoming satellites for commercial purposes will have multispectral sensors in addition to the panchromatic.

  • Merging with different dates also brings havoc when the look angles are different, and most of the new high-res sensors can look all around causing terrible parallax problems – especially for merges. Imagine a merge of the Washington monument with a pan image captured at 20 degrees and an XS image captured at -20 degrees. Not pretty.

    All that being said, B&W imagery is fine for so many applications out there – not the ‘pretty picture’ applications, but useful GIS apps.

    Plus, images have to be ortho-co-registered *really* well for merging to work.

    I’d be surprised if these types of merges are seen by the public very much.
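The first comment above mentions NDVI, which is worth unpacking: it is just band arithmetic on the individual multispectral channels, which is exactly why a pan-only sensor can't produce it. A minimal NumPy sketch (the band names and reflectance ranges here are my assumptions; NBR is computed the same way with a shortwave-infrared band in place of red):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    nir, red: float reflectance arrays for the near-infrared and red
    bands. Healthy vegetation reflects strongly in NIR, so values
    near +1 indicate dense vegetation; bare ground sits near 0.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon guards against 0/0
```

A panchromatic sensor collapses all of this into one band, so the ratio is undefined; hence the commenter's point about pan imagery alone being useless for vegetation or burn analysis.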
