
Augmented Reality becomes Real

Will augmented reality devices prove themselves useful? So many big ideas in technology show initial promise and then either die or are adopted years later in some unexpected form. Consider the graphical interface, mouse, and one-handed keyboard invented by Douglas Engelbart in 1967.

First mouse (1967) – Douglas Engelbart

In his 1968 “Mother of All Demos,” he showed pieces of a future (graphical interface, mouse, hypertext, cut and paste) that became accessible to most people starting on January 24, 1984, with the Macintosh. It took 17 years for the ideas to be released as a commercial product.

Advancements in human-computer interaction continue to be invented today; perhaps they too will take 17 years to perfect. We happen to call today’s devices augmented reality or virtual reality devices, but they are much like Engelbart’s mouse was in his day: a crude wooden box with wheels.

I still hold out hope for the five-finger keyboard: imagine one hand on the keyset and one hand on the mouse. Yet even in Doug’s demo he kept returning to a standard keyboard, and the keyset, as he called it, is the one item that did not catch on.

Doug using keyset with 3-button mouse (1968)

http://www.dougengelbart.org

Augmented reality (AR) and virtual reality (VR) are closely related, and a few devices on the market or under development are capable of both. Augmented reality places computer graphics in front of what the eye sees, whereas virtual reality replaces everything the eye sees with computer graphics. With a camera, though, a VR device can display reality with computer graphics interleaved.

Perhaps Doug Engelbart demonstrated one of the first augmented reality devices in his 1968 demo, when his projected image on a large screen was overlaid with his actions editing text with the first mouse.

There are many devices in the works and on the market. Each has its own method of displaying information and providing the wearer with input. Some track head or eye movements, while others rely on voice or touch commands.

Today we are interested in investigating augmented reality devices.

Options in AR

Google Glass

Google Glass

Atheer Air

Air Glasses

  • 50° field of view
  • Processor via cable to Android device
  • WiFi, LTE, GPS
  • $3,950
  • Release Q1 2016
  • Allows for wide coverage of vision area. Holographic overlays.
  • Two eye display
  • Gesture interaction

Epson Moverio BT-200 smart glasses

Moverio BT-200

  • 23° field of view
  • $699
  • dual screen 3D
  • WiFi, bluetooth, GPS
  • Processor via cable (Android Device)
  • Touch input on attached controller

GlassUP

  • $499
  • Informational display only
  • Bluetooth to smartphone
  • Uses GPS on smartphone

Innovega iOptik

iOptik magnifying lens

  • Licensing to vendors, no consumer device
  • Still in development
  • Uses a contact lens and glasses. The contact lens magnifies a small image on the glasses while still allowing normal distance viewing.
  • 60° field of view

LaForge Optical Acis

LaForge Drive Mode

  • $590
  • Onboard processor. Connects to smartphone via bluetooth
  • Heads up display device to augment smartphone
  • Touchpad input
  • Camera, microphone, speaker.
  • Single eye display

Meta

Meta

  • $667 for developer version, $3,000+ for pro version
  • 35° field of view
  • 3D holographic display / 2 eye display
  • gestural control
  • Relies on external processing.

Recon Instruments

Recon Jet

  • Sports heads up display
  • Onboard processor
  • WiFi, Bluetooth, GPS
  • One eye display
  • $499

 

CastAR Prototype

  • $200 Kickstarter (campaign canceled due to investor interest)
  • 1280 x 720 resolution per eye at 120 hertz
  • two eye display, 3D
  • 65° field of view
  • Connects to computer via usb/hdmi
  • Uses a reflective background surface that bounces the images from the projectors on the glasses back to the eyes, making it more of a virtual reality system.

Microsoft Hololens

Microsoft Hololens

HTC Vive

HTC’s former executive director of marketing Jeff Gattis wearing a Vive

  • A virtual reality system with one key feature that adds in augmented reality: a front camera.
  • The front camera can be programmed to display the real world mixed with virtual elements, or simply to show when there is a wall in the way.
  • Wand controllers
  • 120° field of view
  • 90 frames per second
  • 70 sensors, laser positioning
  • External laser “lighthouse” can map an entire room

Interest

I am most interested in the devices that support holographic 3D display with gesture control, since I want to work with analytical data visualization. For our purposes, the HoloLens, Meta, and Atheer Air show the most promise. The Epson’s narrow field of view limits its potential. The CastAR may turn out to be a good option despite the need for the reflective surface; the cost is reasonable, and for our application the tool would be used in an office setting anyway.

Adoption

I have seen technologies for analytical data display come and go. I remember large table-top touch screens that were just encumbering devices for viewing data: so much time went into preparing the data to be visualized that a data analyst had already figured out the answer before putting it on the table. Perhaps it will be the same with these devices. Much of their success will depend on perfecting the human-interaction lag time and finding the problems for which they are genuinely the right tool.

Concern

After looking at the possibilities and usages of these systems, I would only use them for narrow applications. I am not sure having something like this beaming information into my eyes all day would be a benefit. The human brain needs time to absorb things slowly and meditate on what is happening. I value either being present and aware of my surroundings or being inside my head, daydreaming or concentrating on a difficult problem. I fear more information could lead to less meaning. There will need to be a way to filter the information presented, based on personal choices and not driven by outside influences.


Transforming Sound Into Sight

The simple idea that sound can be transformed into shapes fascinates me.

How Does It Work?

An oscilloscope provides a visual representation of sound or electronic waves. Typically, you see a wave pattern running from left to right.

Heartbeat Sound

However, when you put the oscilloscope into x-y mode, one channel of sound produces a line on the x-axis, and the other channel of sound makes a line on the y-axis. By creating a stereo signal, one can represent the sound in two visual dimensions.

Oscilloscope
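
To make the x-y idea concrete, here is a minimal sketch in plain Node.js (no libraries) that writes a stereo WAV file with a sine wave on the left channel and a cosine wave on the right; the 440 Hz tone and file name are arbitrary choices of mine, not from any of the tools mentioned here. Fed into a scope in x-y mode, with left on the x input and right on the y input, the two channels trace a circle.

// circle.js – writes a 16-bit stereo PCM WAV: left = sine (x), right = cosine (y)
const fs = require("fs");

const sampleRate = 44100;
const seconds = 5;
const freq = 440; // the scope retraces the circle 440 times per second
const numSamples = sampleRate * seconds;
const data = Buffer.alloc(numSamples * 4); // 2 channels x 2 bytes per sample

for (let i = 0; i < numSamples; i++) {
  const t = (2 * Math.PI * freq * i) / sampleRate;
  data.writeInt16LE(Math.round(32000 * Math.sin(t)), i * 4);     // left channel -> x
  data.writeInt16LE(Math.round(32000 * Math.cos(t)), i * 4 + 2); // right channel -> y
}

// Standard 44-byte RIFF/WAVE header for 16-bit stereo PCM
const header = Buffer.alloc(44);
header.write("RIFF", 0);
header.writeUInt32LE(36 + data.length, 4);
header.write("WAVE", 8);
header.write("fmt ", 12);
header.writeUInt32LE(16, 16);             // fmt chunk size
header.writeUInt16LE(1, 20);              // PCM format
header.writeUInt16LE(2, 22);              // two channels
header.writeUInt32LE(sampleRate, 24);
header.writeUInt32LE(sampleRate * 4, 28); // byte rate
header.writeUInt16LE(4, 32);              // block align
header.writeUInt16LE(16, 34);             // bits per sample
header.write("data", 36);
header.writeUInt32LE(data.length, 40);

fs.writeFileSync("circle.wav", Buffer.concat([header, data]));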

Oscilloscopes Through Time

This visual representation is the basis of the vector graphics monitors used on the first computer displays in the late 1950s. In fact, these displays were oscilloscopes. Before pixel-based displays, it was easier for a computer to draw with vectors, since they take up far less memory than bitmap images.

Some of the earliest real-time computers were built to defend against attacks during the Cold War. One of the first, called Whirlwind, was built by MIT for the Navy and went into service in 1951. Later, the Air Force assumed ownership and used it to intercept incoming bombers.

Some of Whirlwind’s output interfaces included oscilloscopes, typewriters, speakers, and lights. The Whirlwind was featured on See It Now with Edward R. Murrow with oscilloscope output.

Following Whirlwind, a system called SAGE (Semi-Automatic Ground Environment) was built around IBM AN/FSQ-7 computers, with oscilloscope-style consoles. It was used to display maps and incoming aircraft so that they could be targeted with a light gun. Wikipedia says, “The AN/FSQ-7 had 100 system consoles, including the OA-1008 Situation Display (SD) with a light gun, cigarette lighter, and ash tray.” This is one of the earliest computers I have seen with a map drawn on the screen.

The following 1956 clip, showing the Display Scope in action, is from On Guard!, which tells the story of SAGE:

You can see even more of SAGE in this commercial from IBM and in an Air Force film.

SAGE Vector Graphics Display

This technique of making images with circuits and an oscilloscope was also used in the title sequence of Alfred Hitchcock’s Vertigo.

My Oscilloscope Adventure

When I first found a program called Rabiscoscopio, I immediately began shopping for an oscilloscope on eBay.

Hardware Shopping

My first purchase of a Hitachi oscilloscope for $17 was a bust, since it arrived with only one channel working. The key is to get an oscilloscope that has at least two channels and supports x-y mode display. The Hitachi 20 MHz would have worked great if only it had two working channels.

My next purchase, a Leader 20 MHz dual-channel oscilloscope for $39.99, was more successful. I tried out the umbrella image provided by Alex, and it worked!

Using Rabiscoscopio

Since my goal was to use the oscilloscope to generate the Volume Integration logo, I proceeded to the next step: attempting to draw my own pictures with Rabiscoscopio. I found that BNC male plug to RCA female jack adapters were the best way to connect my computer to the oscilloscope, running a headphone-plug-to-RCA cable from the computer’s headphone jack.

Mini plug to RCA cable

BNC to RCA Adaptor

At first, I tried to take a standard Scalable Vector Graphics (SVG) file and convert it to sound with Rabiscoscopio. It threw an error. Apparently, I had not read the instructions, which say to use only straight lines and a single continuous line.
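
For illustration, here is the kind of minimal SVG I believe Rabiscoscopio expects: a single open path made only of straight line segments (the coordinates here are arbitrary examples of mine).

<!-- One continuous open path of straight segments -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <path d="M 10 90 L 50 10 L 90 90" fill="none" stroke="black"/>
</svg>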

I found Inkscape to be the easiest free tool for creating graphics. My experiments also led me to discover that drawing lines that cross each other also causes problems. Here is a gallery of my early experiments:

Stonehenge Wave Form by Bradley L. Johnson

Face by Bradley L. Johnson

Rabbit by Bradley L. Johnson

Square Spiral by Bradley L. Johnson

US Flag by Bradley L. Johnson

A by Bradley L. Johnson

This is my refined process:

  1. Find image to trace.
  2. Open Inkscape.
  3. Import original image to trace over it.
  4. Use the pencil tool to trace a single line around the image in straight segments without crossing. Finish without joining the beginning and end of the line.
  5. Delete the original traced image.
  6. Save as SVG file.
  7. Open Rabiscoscopio and SVG file.
  8. Rabiscoscopio will generate the WAV sound file automatically.
  9. Plug in sound output from headphone jack to oscilloscope.
  10. Turn on oscilloscope and play sound file.
  11. Watch and enjoy, or go back and refine drawing.

My next feat was to draw the Volume Integration logo. It turned out to be more difficult than I expected because of all the intersecting lines and 3D-like shapes. The SVG file ends up looking like this:

new_volume.svg

The most difficult part was drawing a continuous line without crossing previous lines. After multiple attempts, a picture of the final product emerged.

Oscilloscope with Volume Logo

Volume Logo On Oscilloscope by Bradley L. Johnson

I have also included the WAV file for your listening enjoyment. If you have an oscilloscope, you can watch it appear! If you don’t, you can load a software-based oscilloscope called Oscilloscope! on your computer.

I hope you’ve enjoyed following my geeky artistic endeavor. If you would like to see more of my oscilloscope art, take a look at Temple of Tech, Tumblr, and SoundCloud.

 

Check out more of our work at Volume Integration and follow us on Twitter.


War, What is it Good For?

Do you remember the Edwin Starr song “War” from 1970? The chorus repeats:

War, huh yeah

What is it good for?

Absolutely nothing, oh hoh, oh

Well, war is good for at least one thing…maps!

Wartime Maps

Mapping data before computers was difficult, and it seems to have been a primary concern in wartime. In fact, wars have advanced the state of the art in mapping data for situational awareness throughout history. The speed at which we can now determine events and plot them on a map shows amazing technical advancement.

The basic idea is to visualize the placement of the enemy and friendly forces on a paper map with pins, which we still do today. But instead of physical pins, we use images of pins on an electronic map.

Churchill’s War Rooms

The Map room – Churchill War Rooms

I want to take you to where Winston Churchill pored over maps during World War II. His war rooms were contained in an underground bunker beneath five feet of concrete in London. According to the Imperial War Museums, there was a concern that Londoners would feel abandoned and that evacuation would be slow, so the government built a bunker right in London for use during the next war.

These rooms were left exactly the way they were found on August 16, 1945, at the end of the war. You can still see the pin holes in the maps for past troop movements and ships as they crossed the ocean.

Large Wall Map – Churchill War Rooms

There are also walls full of graphs and charts. It’s the 1940s version of today’s management dashboard. These charts outlined the number of troops and were kept up to date by an army of people moving pins and updating charts.

Informational Bar Charts – Churchill War Rooms

It is obvious how these maps and charts were used to enhance decision-making. They provided accurate knowledge of the location, type, and count of equipment and the health of the troops, for both the Axis and the Allies.

Graphs – Churchill War Rooms

There is even a map of Germany with an acetate covering to allow them to write on it. The last thing they wrote were the outlines of which countries would administer the division of Germany.

Germany Divided

Men of Maps

Churchill enjoyed studying maps so much that he had his sleeping/office quarters in the bunker papered with maps from floor to ceiling. His love for maps was well known.

In fact, his peer and collaborator in America, Franklin Roosevelt, was also a big fan of maps and had a steady stream of updated maps provided to him by the National Geographic Society. In the FDR White House, there was a cloakroom converted into a map room modeled after Churchill’s map room. The FDR Library says, “Maps posted in the room were used to track the locations of land, sea and air forces.”

Secret Room

There was another more secretive part of the Churchill War Rooms. Down a back hallway there was a restroom, or as it is called in England, the WC.

It was reserved for Winston Churchill’s use alone. Very few people really knew what was on the other side of the door.

Churchill’s “Water Closet” in the War Rooms

Typical Restroom Lock Indicator for Restrooms in England

The space was actually a secret telephone room with a direct line to FDR in the White House. The two leaders would coordinate war operations over the encrypted line. It was encrypted by a system called SIGSALY, with one terminal beneath the Selfridges department store and the other at the Pentagon.

Innovations Continue Today

The use of great human effort, paper maps, and telecommunications aided the war effort and led to innovations in managing logistics and monitoring world events geospatially. We have come a long way, but we still put pins in a map; they just happen to be electronic. The militaries of the world continue to upgrade their map rooms into walls of video screens and server rooms of computers to make visualization updates in near real-time. Onward!

 

To learn more about Volume Labs and Volume Integration, please follow us on Twitter @volumeint and check out our website.


Experimenting with Google+ Photos

One reason I enjoy working for Volume Integration is the people and the way they care for each other. A recent addition to our benefits package is an allowance for public transit with SmartBenefits loaded onto my SmarTrip card.

I have been using this incentive and enjoy having someone else drive me to work while I work on my company-provided laptop. Blogging and taking pictures on my way to the office is preferable to the stress of navigating traffic. Plus, I’m doing my part to reduce my carbon footprint.

When I enrolled in SmartBenefits, I expected all of the perks above, but I have found unexpected benefits too. Now I am able to watch the sunrise every morning as I transfer from the bus to the Metro, take pictures with my iPhone, and experiment with Google+ Photos.

I am running the Google+ app on my phone, which I have set to back up my photos automatically. Once Google+ gets the photos, it changes them in interesting ways. They call it AutoAwesome.

Into the Sunrise

 

Running…Running…Running…Running…

 

Smoke

 

Teamwork?

Tysons Corner Metro Station – Brad Johnson / http://tot.wowak.com

 

Tysons Corner Moving Train

Google+ Photos analyzes the photos and performs various enhancements. I’ve been surprised by what its automated algorithms can produce: animated GIFs from a series of shots, a panorama stitched from adjacent pictures (with a humorous result), and an enhanced image made by applying a series of filters.

But sometimes the “enhancements” do not turn out so well:

Not Like the Other

 

Shifting Perspective

Vertigo on the Metro

Warped Sunrise

So I keep taking pictures to see what Google is going to do with them. The photos have become a great resource to enhance these blog posts.


Mapping an Epidemic

This map changed the way we see the world and the way we study science, nature, and disease.

In August of 1854, cholera was ravaging the Soho neighborhood of London, where John Snow was a doctor. People were fleeing the area, as they thought cholera was spread by gasses in the air or, as they called it, “bad air.”

Just as there is disinformation today about Ebola being airborne, the experts of that time thought most disease was spread in the air. There was no concept that disease might be in the water. They had no idea that bacteria even existed.

John had worked as a doctor during a major cholera outbreak at a mine. Despite working in close quarters with the miners, he never contracted the disease, and he wondered why the air did not affect him.

This inspired him to write a paper on why he believed cholera was spread through water and bodily fluids. The experts at the time did not accept his theory; they continued to believe cholera was caused by the odors emitted by rotting waste.

In the Soho outbreak in August 1854, John Snow saw a chance to further prove his theory. He went door to door keeping a tally of deaths at each home. This was only part of his quest to find evidence to prove the source of the plagues of the day.

He had been collecting statistical information, personal interviews, and other research for many years. He added this information to his paper, “On the Mode of Communication of Cholera.” The paper and his work in researching and collecting evidence founded the science of epidemiology.

One of the most innovative features was plotting data using a map; it was the first published use of dots on a map supporting a scientific conclusion. Each of the bars on John Snow’s map represents one death. Using this visual technique, he could illustrate that the deaths were centered around a point and further investigate and interview people in the area. He could also find anomalies and outliers such as deaths far from the concentration and areas with no deaths.

Epicenter Pump and Brewery

He found through personal interviews and mapping the data that the workers in the brewery (in the epicenter of the epidemic) were not dying. The owner of the brewery said that the workers were given free beer, and he thought that they never drank water at all. In fact, there was a deep well in the brewery used in the beer. In other cases, John Snow found that addresses with low deaths had their own personal well.

He also investigated the outlying incidents through interviews: some worked in the area of the pump or walked by it on the way to school. One woman who got sick had the water brought to her by a wagon each day because she liked the taste of that particular well water. One person he talked to even said the water smelled like sewage and did not drink it, but his servant did and came down with a case of cholera.

The incidents highlighted the area around a public pump on Broad Street. Using his data, he convinced the local authorities to have the pump handle removed.

Above all, the map changed the way we use maps. The idea that data could be visualized to prove a fact was very new.

John Snow’s map of the service areas of two water companies

John Snow also produced another map showing which water companies supplied water in London. This map showed that the water company which stopped using water from the Thames had a lower death rate due to cholera. The map allowed John Snow to provide further evidence of disease spread through water and what could be done to fix the issue.

This is similar to the Ebola outbreak of today where tracking the disease is important. John Snow’s idea of collecting data in the field and mapping it lives on in maps like those from HealthMap, which show the spread of the Ebola virus.

Data Exploration via Map

Today, we use data driven maps as a powerful tool for all sorts of reasons. But it all started with John Snow.

(For an interesting take on this event and other historical technology that changed the way we live today, watch the “Clean” episode of the How We Got to Now series on PBS.)

To learn more about Volume Labs and Volume Integration, please follow us on Twitter @volumeint and check out our website.


Technology Inspiring Art

Technologists routinely get pegged in the geek category, but our roles also require us to come up with creative solutions to technical challenges. This creativity can help us extend into the realm of art. Recently, I used technology as a medium for artistic expression while creating a sculpture entitled The Technologist.

Inspiration

Inspiration is so often fused from many memories, emotions, ideas, and events. My inspiration for The Technologist occurred while on vacation at a resort called Twin Farms in Vermont.

I was dancing with my young daughter, to the sound of salsa music on the jukebox, in a recreation room full of witty art. The room contained a strange, playful set of old televisions playing old-school MTV graphics. It happened to be one of twelve installations called Internet Dweller by Nam June Paik from his exhibition Electronic Super Highway: Nam June Paik in the ‘90s.

Internet Dweller Nam June Paik

Background

Nam June Paik’s exhibitions were meant to be participatory: many of his pieces allow the audience to manipulate the sound or video to make their own art. As a one-time member of the Fluxus movement and a performance artist, Paik passionately encouraged everyone to participate in events that create art.

One of Paik’s pieces called Random Access allowed the viewer to create sounds with a wand that read magnetic tape attached to a wall. Another, entitled TV Crown, enabled the viewer to change the artistic patterns of lines on a TV screen. Other installations just put the viewer’s face right into the piece with a closed loop camera, like the Electronic Superhighway exhibit.

Creating The Technologist

Based on my encounter with Internet Dweller in Vermont and exposure to other work by Nam June Paik, I created The Technologist. My sculpture is composed of a simple male wig head with CPU and memory chips, some carefully placed and others smashed into the surface. It also includes parts from a Flip camera and wireless routers. The Technologist‘s eye plays a five-minute video on an embedded iPod nano (4th generation), with sound from attached speakers or headphones.

Two angles of the Technologist

The video playing in The Technologist‘s eye is the first thing that pulls you into the sculpture. It includes footage from Paik’s piece with sights and sounds from the day I encountered it in Vermont. I was so busy recording the artwork that my daughter was begging me to dance more, “Daddy…dance after this picture. Dance!” So my daughter’s voice is also preserved for posterity in the piece.

I have been working on The Technologist for over a year, refining the video and its installation into the head. At first, I attempted to use a Microsoft Zune since it has WiFi capability. The original vision was to produce a series of pieces that could be connected into a network in a future exhibition titled Temple of Technology. Unfortunately, the small Zune does not have a way to loop videos, and I did not want to invest the time in programming a video player for it.

Most iPod nanos have video playback ability and allow looping in a playlist. The iPod nano also places the power cord and headphone jack in a convenient position that allows the wires to run through the middle of the head. So it seemed like a much better solution.

Views of The Technologist — top row: Wild Side, Third Eye, Antenna & AMD CPU; bottom row: Processing Steps, Video Eye, Heads Up Controls

Layers of Meaning

The video in The Technologist‘s eye also contains QR codes, generated with QR Code Generator, that invite the viewer to explore further layers. This leads us to ask: what is the boundary of this art now that it has jumped into your smart phone?

This piece is partly an expression of the relationship between human individuality, spirituality, and natural rhythm and their conflict with the drive of technology. It contains multilayered ideas on this theme, including building loving relationships in the midst of ever increasing demands for efficiency. There is an emotional paradox expressed in the piece on the role of technique versus creativity and love.

The piece is filled with layers of meaning and emotion. So I leave it to you to discover what you see and feel in The Technologist. Please let me know what you find.

 

To learn more about Volume Labs and Volume Integration, please follow us on Twitter @volumeint and check out our website.


10+ Surprising Geospatial Technologies

Data Organized on Map

I’ve spent years in the geospatial arena, so I’m a bit of a geospatial technology geek. But now it seems like the rest of the world is increasingly interested in this technology too.

You may remember the latitude and longitude numbers you learned about in school. Perhaps they didn’t seem very useful or relevant to life at the time, but these coordinates are now tracked constantly by our various GPS-enabled gadgets. It’s becoming increasingly common to use coordinates to define the location of collected data, a person, a landmark, and more. We can add even more precision by recording elevation and a point in time.

I would like to describe some of the components that fall under the umbrella of geospatial technology. You might find some surprises!

Equipment

First, let’s discuss some of the tools used to collect geospatial data.

1. GPS

Global Positioning System (GPS) technology is the software and equipment needed to provide the location of things on the planet. This is most often done using special satellites, augmented by other methods like WiFi signals. There are even technologies in use that determine location by looking at the stars.

2. Field Sensors

Field sensors are electronic devices that are placed to collect information about weather, soil, or other environmental conditions. These data collecting devices could be anything from a camera to a cell phone. During collection, the data is tagged with geospatial information, so the location of the event is known and can be mapped.

Overhead Imagery

My next geospatial category is overhead imagery. This includes all the imagery collected from aircraft and satellites.

3. Visual Overhead Imagery

Visual overhead imagery includes what you see in Google Maps and Google Earth when you use the satellite function. This imagery could be collected via satellite or aircraft, and the technology used involves cameras, aircraft, satellites, global positioning systems, altimeters, and microwave transmission equipment. Today, even video is collected overhead by Planet Labs.

If you don’t own an airplane or satellite, can you collect visual overhead imagery? Yes! It doesn’t have to be expensive. Some hobbyists and students are cutting their teeth on low-cost imagery collection using kites and balloons.

Balloon mapping of Lake Borgne, Louisiana (Cartographer: Stewart Long/publiclab.org)

4. Hyperspectral Overhead Imagery

Hyperspectral imaging captures many narrow bands of light, including wavelengths beyond human sight. Engineers have developed sensors that can gather these waves from space, though it can also be done from aircraft. The data is then transformed into a visual representation through analysis and processing to create hyperspectral overhead imagery.

This type of geospatial technology has some surprising uses. Over at the US Geological Survey (USGS), researchers have used hyperspectral overhead imagery collected via satellite to detect the presence of arsenic in the leaves of ferns. Further analysis helped them locate arsine gas canisters buried in Washington, DC. For more information, check out the full dissertation entitled Remote Sensing Investigations of Fugitive Soil Arsenic and its Effects on Vegetation Reflectance.

5. LIDAR

Light Detection and Ranging (LIDAR) is a technology that uses an airborne system to measure distance by shining a laser at the ground and measuring the reflected light. This yields a very accurate contour of the earth’s surface, as shown in the image of the Three Sisters below.

LIDAR image of the Three Sisters volcanic peaks in Oregon (DOGAMI)

LIDAR can also measure objects on the ground such as trees and houses. This type of data is used to determine elevation and is often used when processing other imagery to improve accuracy.

How do autonomous vehicles “see” where they are going and what is in the way? LIDAR, of course! Plus, it’s even used in various industries to make 3D models of buildings and topography.
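
The ranging arithmetic itself is simple time-of-flight: the distance is half the round-trip travel time of the laser pulse multiplied by the speed of light. A minimal sketch, with an illustrative echo time of my own choosing:

// Time-of-flight: the pulse travels to the target and back,
// so range = (speed of light * round-trip time) / 2
var SPEED_OF_LIGHT = 299792458; // meters per second

function rangeMeters(roundTripSeconds) {
  return (SPEED_OF_LIGHT * roundTripSeconds) / 2;
}

console.log(rangeMeters(6.67e-6)); // a ~6.67 microsecond echo: roughly 1000 meters away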

Processing

So now that we collected all this imagery, how do we use it?

6. Imagery Processing Systems

The overhead imagery produced from satellites and aircraft is not perfect for human viewing in raw form. So we use imagery processing systems to help automate the manipulation of images and data collected. This collection of computer systems makes the images and data useful to us.

Most images are taken from an angle and must be adjusted or warped. Imagery processing systems assign each pixel a geographic coordinate and an elevation. This is done by combining GPS data that was collected with each click of the camera.

Often this process is called orthorectification. To see a simplified illustration, take a look at this orthorectification animation from Satellite Imaging Corporation.
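
As a rough sketch of the pixel-to-coordinate bookkeeping, here is a GDAL-style affine geotransform that maps a pixel (column, row) to a map coordinate. Full orthorectification also corrects for terrain and sensor angle; the six coefficients below are illustrative values of mine, not from any real dataset.

// gt = [originX, pixelWidth, rowRotation, originY, colRotation, pixelHeight]
function pixelToGeo(gt, col, row) {
  var x = gt[0] + col * gt[1] + row * gt[2];
  var y = gt[3] + col * gt[4] + row * gt[5];
  return { lon: x, lat: y };
}

// Example: a north-up image with its top-left corner at (-77.5, 39.0)
// and 0.001-degree pixels (hypothetical values).
var gt = [-77.5, 0.001, 0, 39.0, 0, -0.001];
console.log(pixelToGeo(gt, 100, 200)); // { lon: -77.4, lat: 38.8 }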

7. Geospatial Mapping

Geospatial mapping is the process and technology involved in placing information on a map. It is often the final stage of geospatial processing.

Mapping combines data from many sources and layers it onto a map, so conclusions can be drawn about the data. There are different degrees of accuracy required in this process. For some applications, showing data in an approximate relation to each other is sufficient. But other applications, like construction and military exercises, require specialized software and equipment to be as precise as possible.

In an earlier post, I wrote about creating maps with D3. The goal was to build a heat map to display the count of documents for each place name as shown in the image below.

Data Organized on Map

Applications

Let’s explore some of the applications of all this geospatial technology.

8. Geospatial Marketing

Geospatial marketing is the concept of using geospatial tools and collected location information to improve marketing to customers. It is often a subset of geospatial mapping, but here the map is combined with data about customers’ locations. This can help determine where to place a store or how many customers purchase from a particular location. For example, companies can use data about where people typically go after a ballgame to decide where advertisements should be placed.

Another widespread application of geospatial data in marketing is using the IP addresses gained from customers browsing websites and viewing advertisements. These IP addresses can be geographically located, sometimes as specifically as a person’s house, and then used to target advertisements or redesign a website.

9. Location-Aware Applications

Location-aware applications are a category of technologies that are cognizant of their location and provide feedback based on that location. In fact, if an IP address can be tied to a location, almost any application can be location-aware.

With the advent of smart phones, location-aware applications have become even more common. Of course, your phone’s mapping application can display your location on a map.

There are also smartphone apps that will trigger events or actions on a phone when you cross into a geospatial area. Some examples are Geofencer and PhoneWeaver.
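
Under the hood, the simplest form of such a trigger is a distance check. Here is a minimal sketch using the haversine great-circle formula; the fence coordinates and radius are hypothetical examples, not tied to either app.

// Great-circle (haversine) distance between two lat/lon points, in meters
function haversineMeters(lat1, lon1, lat2, lon2) {
  var R = 6371000; // mean Earth radius in meters
  var toRad = Math.PI / 180;
  var dLat = (lat2 - lat1) * toRad;
  var dLon = (lon2 - lon1) * toRad;
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(lat1 * toRad) * Math.cos(lat2 * toRad) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// A hypothetical 500-meter fence around the Washington Monument
var fence = { lat: 38.8895, lon: -77.0352, radiusMeters: 500 };

function insideFence(lat, lon) {
  return haversineMeters(lat, lon, fence.lat, fence.lon) <= fence.radiusMeters;
}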

Additionally, the cameras on smart phones can collect the location of the phone when taking a picture. This is embedded within the picture and can be used by Facebook, Picasa, Photoshop, and other photo software to display locale information on a map. (You may want to disable this feature if you would rather not have people know where you live.)

10. Internet of Things

The Internet of Things (IoT) is the category of technology that includes electronic objects that connect to the internet and transmit their location. This is a broad and emerging area of geospatial technology that will add even more location data to the world.

IoT could contain objects like cars, fire alarms, energy savings devices like Nest and Neurio, fitness tracking bands like the ones from Jawbone or Nike, and more. For these IoT applications and devices to work optimally, they need to know your location and combine it with other information sensed around them.

Nike+ FuelBand (Peter Parkes/flickr.com)

11. Geospatial Virtual Reality

Virtual reality that makes use of geospatial data is another emerging category. This technology will allow for an immersive experience in realistic geospatial models.

Geospatial virtual reality incorporates all of the technologies listed above to put people into the middle of simulated real-word environments. It’s already been implemented with new hardware like the Oculus Rift, which is a virtual reality headset that enables players to step inside their favorite games and virtual worlds.

Oculus Rift (Sebastian Stabinger/commons.wikimedia.org)

Show Me the Data!

At the base of all of this technology is data. Increasingly, we have to invent more ways to store geospatial data in order for it to be processed and analyzed. The next steps of geospatial technologies involve attaching geospatial information to all data collection and then processing and filtering the massive amounts of data, which is known as big data.

This is my list of surprising geospatial technologies that matter today. It started out as a top 10 list, but evolved to 11 because I just couldn’t leave out geospatial virtual reality. It’s so cool! Feel free to add your suggestions of geospatial technologies in the comments below or as a pingback.


Making maps with D3

I used D3 to build a data-driven map. The goal was to render a map in the browser, driven by data from a web service.

The service provides a JSON file, which consists of place names and a count of documents. The place names are country names or US states, as in the following sample:

[{"id":121,"value":"iran","count":2508},{"id":88,"value":"washington","count":1778}]

Overview

I started with Mike Bostock’s Let’s Make a Map since it was most helpful in getting me to a US map.

The general steps are as follows:

  1. Get shape files
  2. Filter out what you need
  3. Merge and convert to TopoJSON
  4. Build D3 Javascript to join data to the TopoJSON and display the map

1.  Find Shape Files

After much experimentation I found the best shape files to use were from Natural Earth Data. They have three sizes – large 1:10m, medium 1:50m, and small 1:110m.

I found that the large size produced a JSON file that was around 2.4 megabytes, much too large for use in a web browser. The lines drawn for the large map were very smooth. The small shape would produce a JSON file that was 96k, but it was missing a good number of small countries and used more jagged lines. The medium size came out to 618k and contained all of the countries I needed.

2.  Filter Shape Files

For this project, I used Admin 0 Countries and Admin 1 States & Provinces without large lakes. To begin, we need to extract just the US states from the states & provinces shape file.

To do this, we use some SQL. First, find the column name that indicates what data is from the USA using ogrinfo.

ogrinfo -sql 'select * from ne_50m_admin_1_states_provinces_lakes' ne_50m_admin_1_states_provinces_lakes.shp -fid 0

This will print out the data in the first row of the shape file, which should be all the data for the first state in the file. Find the column name that indicates the country name. In this case it is sr_adm0_a3. To see if it works with the USA, use this:

ogrinfo -sql "select * from ne_50m_admin_1_states_provinces_lakes where sr_adm0_a3 = 'USA'" ne_50m_admin_1_states_provinces_lakes.shp -fid 0

So now we want to convert it to a GeoJSON file using ogr2ogr.

ogr2ogr -f GeoJSON -where "sr_adm0_a3 = 'USA'" states.json ne_50m_admin_1_states_provinces_lakes.shp

Now let’s move on to countries. The first time I tried this, I got all the way to producing the map in the browser, only to find that the coloring representing the data would not be applied to the countries.

It turns out that this shape file defines the country name in a column named uppercase NAME, while the states file defines it as lowercase name. This name is the key for matching up the data in the JSON file. Running ogrinfo showed me that the column names were different.

ogr2ogr countries.shp ne_50m_admin_0_countries.shp -sql "select NAME as name, POSTAL, ISO_A2, ISO_A3, scalerank, LABELRANK from ne_50m_admin_0_countries"

To change the name of the column, use the SQL AS keyword, as shown in the command above. The shape file also contains lots of data I did not need, like population and GDP. I eliminated it by selecting only the columns I wanted to use. This produces an interim shape file called countries.shp.

Next, convert the countries shape files into a GeoJSON file using ogr2ogr.

ogr2ogr -f GeoJSON countries.json countries.shp

3. Convert to TopoJSON

The goal is to get a TopoJSON file, since it stores the data most efficiently. This next command converts the two GeoJSON files (states and countries) into one TopoJSON file by merging the data together. (Remember that the countries of the world are named countries and the US states are called states.)

topojson --id-property name --allow-empty -o world.json countries.json states.json

The --id-property setting makes the name field the id; this is used to join the data from the document count JSON file. The --allow-empty setting forces it to keep the polygons for all the countries even if they are very small. Without that setting, I found that topojson would remove some of the small geographical entities that are countries or territories, like Aruba. See the TopoJSON documentation for more info.

4. Build D3 Javascript

The next step is to build the HTML page and JavaScript that will draw the map using our data. If you prefer, skip ahead to the completed code linked at the end of this post.

First, of course, we must have the D3 and TopoJSON JavaScript libraries loaded.
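
Something like the following includes should work. This walkthrough uses the D3 v3-era APIs (d3.geo, d3.behavior), and the queue() call used below comes from the small queue.js helper library, so I am assuming the v3-era builds here.

<script src="http://d3js.org/d3.v3.min.js"></script>
<script src="http://d3js.org/topojson.v1.min.js"></script>
<script src="http://d3js.org/queue.v1.min.js"></script>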

Next, set up the map projection and size of the map.

<script>
var width = 960,
    height = 960;

var projection = d3.geo.mercator().scale(200);
var path = d3.geo.path().projection(projection);

var svg = d3.select("body").append("svg")
    .attr("width", width)
    .attr("height", height);

var g = svg.append("g");

The queue command will set up the loading of the two JSON files that will be merged.
queue()
.defer(d3.json, "world.json")
.defer(d3.json, "toplocations.json")
.await(ready);

Now, here’s the main function that is used to draw everything.
function ready(error, world, locations) {
console.log(world)

In the style section, we need to add the following styles in order to draw the lines of the map.

.subunit-boundary {
fill: none;
stroke: #777;
stroke-linejoin: round;
}

The console.log command outputs the contents to the browser console, which you can view with Firebug or the JavaScript console built into many browsers. Inside this function, put the following code to draw the boundaries of the continents. Note that I am referencing world.objects.countries; this is where each country is stored in the world.json file.

Listing of the countries in the world.json

g.append("path")
.datum(topojson.mesh(world, world.objects.countries, function(a, b) { return a == b }))
.attr("d", path)
.attr("class", "subunit-boundary");

 

Ocean borders sans countries

This next one draws the lines between each country.

g.append("path")
.datum(topojson.mesh(world, world.objects.countries, function(a, b) { return a !== b }))
.attr("d", path)
.attr("class", "subunit-boundary");

Country Borders

Now add the US states and the little bit around the Great Lakes.

g.append("path")
.datum(topojson.mesh(world, world.objects.states, function(a, b) { return a !== b }))
.attr("d", path)
.attr("class", "subunit-boundary");
g.append("path")
.datum(topojson.mesh(world, world.objects.states, function(a, b) { return a == b }))
.attr("d", path)
.attr("class", "subunit-boundary");
};

US States

The next step involves converting the count on each country into a color, finding the matching state or country, and filling in the color. This is done using D3 and CSS.

Add the following code to the style section; it controls which color is used for each range of values for each country. Again, thanks to Mike Bostock for this piece of code. The .subunit class fills in the regions that do not have a count value in toplocations.json.

.subunit { fill: #aaa; }
.q0-9 { fill:rgb(247,251,255); }
.q1-9 { fill:rgb(222,235,247); }
.q2-9 { fill:rgb(198,219,239); }
.q3-9 { fill:rgb(158,202,225); }
.q4-9 { fill:rgb(107,174,214); }
.q5-9 { fill:rgb(66,146,198); }
.q6-9 { fill:rgb(33,113,181); }
.q7-9 { fill:rgb(8,81,156); }
.q8-9 { fill:rgb(8,48,107); }

Next is the JavaScript to run through the toplocations.json data and build a lookup from every country name to its count. This loop iterates through each location and puts its name and count in a d3.map, which is used later to match the country or state name (id) in the world.json file and find the count.

var countByName = d3.map(); // the name-to-count lookup; declared here since it is not shown elsewhere

locations.forEach(function(data) {
  countByName.set(data.value, data.count);
});

We also need this quantization code outside of the main function block; see the completed code for where it is placed.

var quantize = d3.scale.quantize()
.domain([0, 2000])
.range(d3.range(9).map(function(i) { return "q" + i + "-9"; }));

This takes the count from each country and turns it into one of the nine classes above. Because of the domain setting, it only differentiates values up to 2000; anything larger falls into the darkest class.

Next, we need code similar to what we used to draw the borders: one block for the countries and one for the states, since they ended up in two different arrays in world.json.

g.selectAll(".countries")
.data(topojson.feature(world, world.objects.countries).features)
.enter().append("path")
.attr("class", function(d) { return "subunit " + quantize(countByName.get(d.id.toLowerCase())); })
.attr("d", path);
g.selectAll(".states")
.data(topojson.feature(world, world.objects.states).features)
.enter().append("path")
.attr("class", function(d) { return "subunit " + quantize(countByName.get(d.id.toLowerCase())); })
.attr("d", path);

Note the countByName lookup, which takes the id from the map JSON: the country or state name. It must be changed to lowercase so it matches the data in our toplocations.json file. The lookup returns the count for that country, and quantize converts the count into the CSS class that corresponds to a color. That class is attached to the path so that when the browser draws it, the region is filled with the correct color.

Now for the cream on top. It is always nice to allow for zooming and movement of the map. The following code will allow your map users to control their point of view.

var zoom = d3.behavior.zoom()
    .on("zoom", function() {
      g.attr("transform", "translate(" +
          d3.event.translate.join(",") + ")scale(" + d3.event.scale + ")");
    });

svg.call(zoom);

Your Turn

Hopefully, this example will help you in building your own data driven map. All the code and a working sample can be found at http://bl.ocks.org/bradllj/8326068.
