These are the notes for the Making with Google Glass presentation at World Maker Faire New York on September 22, 2013. You can watch a video of the presentation at FORA.tv. All the information below is subject to change at any moment since Google is still actively developing Glass. I'll do my best to update this page as things change, but feel free to contact me if you notice an error or out-of-date information.
Using the Google Glass Mirror API is one way of creating applications for Glass (also known as "Glassware") by using web technology. Right now, it's the Google-approved method for creating third party applications for Glass and the capabilities are somewhat limited compared to developing a native application for a mobile device.
Google's Mirror API Quick Start walks you through how to use their Google App Engine servers to create Glassware with the languages Go, Java, .NET, PHP, Python, and Ruby. It may be helpful to also review how to use Google App Engine for the language you're most comfortable with. You're not required to use App Engine or even the languages above to use the Mirror API, but it will make getting started much easier.
By following the examples in the Python Quick Start tutorial, I created my own Glassware called Do you sleep with that thing on?, which tracks the reactions that Glass Explorers hear while wearing Google Glass. The source code for this application can be found on GitHub.
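To give a flavor of how Mirror API Glassware pushes content to a device, the sketch below builds the JSON body for a simple text timeline card. The helper function and variable names are my own, and the commented-out insert call assumes an authorized `google-api-python-client` service object like the one the Quick Start projects construct for you after OAuth.

```python
# Sketch: building a Mirror API timeline item. Assumes you've completed
# OAuth and built an authorized Mirror service object per the Quick Start.
def make_timeline_item(text, notify=True):
    """Return the JSON body for a simple text timeline card."""
    item = {"text": text}
    if notify:
        # Ask Glass to chime when the card arrives.
        item["notification"] = {"level": "DEFAULT"}
    return item

card = make_timeline_item("Do you sleep with that thing on?")

# With an authorized service object from the Quick Start, you'd insert it:
# service.timeline().insert(body=card).execute()
```

The card then shows up in the wearer's timeline; everything beyond building that JSON body is handled server-side by Google, which is what makes the Mirror API approachable for web developers.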
Unfortunately, if you don't have Google Glass, you won't have access to the Mirror API. In the meantime, if you want to develop for Glass using the Mirror API, I would suggest learning (or brushing up on) Google App Engine. I also recommend watching this talk by Timothy Jordan for an overview on how the Mirror API works.
Since Glass is an Android device, you can develop for it much like you would develop for traditional Android devices, using the Android Software Development Kit (SDK). Of course, there are some differences with Glass: it has no touch screen and its screen resolution is much lower than a typical Android device's. The benefit of building apps with the Android SDK is that you'll have direct access to the sensors, display, and networking. The Glass API, on the other hand, doesn't have "low-level" access to this hardware.
Unfortunately, developing native Glassware using the Android SDK is not currently supported by Google and there's no guarantee that you'd be able to deploy an application you've made to many people. Think of it as a way to explore what's possible with the technology, but not yet a way to make a product. Time will tell how the official Glass SDK for native applications will compare to the Android SDK.
If you've never developed for Android before, you'll want to start by downloading the ADT Bundle for your desktop operating system and working through the tutorial for building your first app. Android is developed in Java, so if you're looking to learn the language, you can start with Processing, a simplified version of Java, or jump right into Java and Android development resources.
This Google I/O presentation by Glass Software engineers Hyunyoung Song and P.Y. Laligand will give you an overview of making the transition from developing for a traditional Android Device to developing for Glass. A few important notes from that presentation:
Even though the presentation is called "Voiding your warranty," they explain that loading custom Android apps onto Glass won't actually void your device's warranty. However, there are some limits you'll have to work within:
To do any of the above, you'll have to "root" your device, which will void your warranty. At about seventeen minutes into the presentation video, Hyunyoung shows you how to do that.
The Glass developer site now has a resource page on how to develop and sideload APK files made with the Android Software Development Kit. That's the page to watch for more developments in this realm. Check out their sample Android projects for a great place to start.
"We are actively working on a Glass Development Kit (GDK) that allows you to build Glassware in the form of APKs," according to the site. "We’re still hard at work on the GDK, but in the meantime, you can start designing and developing your awesome ideas with the existing Android SDK."
I was eager to develop a project for Google Glass that integrated it with Raspberry Pi, one of my favorite development boards. And, well, it's also a fantastic opportunity for me to plug Getting Started with Raspberry Pi, which I co-authored with Shawn Wallace.
A Raspberry Pi with a USB camera acts as a server, which transmits a Motion JPEG video stream to Google Glass. At the same time, Glass transmits its orientation to the server, which uses hobby servo motors to adjust the position of the camera. The effect is that you can use Glass to "look around" a space from a remote location.
There are two main parts to the software loaded onto Google Glass. The first downloads and displays the video stream from the server; the second reads the orientation sensors and transmits the data to the server.
On Stack Overflow, I found a set of Android classes that practically worked "out-of-the-box" for displaying MJPG video streams on screen. For reading the sensors in Google Glass, this basic example code by Lars Vogel was the clearest example I found. It lays out how to read the accelerometers to tell if the device is being shaken. Modifying it so that it reads the orientation sensors was fairly simple. The UDP client code on this Android forum made it easy to transmit the sensor data over the network to the Raspberry Pi.
The full Android project is a hybrid of the code examples above and the basic "Hello World" example from the official Android developer site. If you'd like to take a look at the code, it's available on GitHub.
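The transmit side of that UDP link is written in Android/Java, but the logic is simple enough to sketch in Python: format the orientation angles into a small text message and send it as a single datagram. The host, port, and "x,y" message format below are my own assumptions for illustration, not necessarily what the project uses.

```python
import socket

# Sketch of the UDP transmit logic (the real client is Android/Java).
# The host, port, and "x,y" message format are assumptions.
PI_HOST = "192.168.1.50"   # hypothetical Raspberry Pi address
PI_PORT = 5000             # hypothetical listening port

def format_orientation(azimuth, pitch):
    """Pack two orientation angles into a compact comma-separated string."""
    return "{:.1f},{:.1f}".format(azimuth, pitch)

def send_orientation(sock, azimuth, pitch):
    """Send one orientation reading as a single UDP datagram."""
    message = format_orientation(azimuth, pitch)
    sock.sendto(message.encode("ascii"), (PI_HOST, PI_PORT))
    return message

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# In the sensor callback you'd call, e.g., send_orientation(sock, 12.3, -4.5)
```

UDP is a good fit here: each datagram is an independent, latest-is-best reading, so a dropped packet costs nothing and there's no connection state to manage on either end.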
Much like the Google Glass client software, the Raspberry Pi server software comes in two parts: one part handles streaming the video from the USB camera and the other part receives the sensor data from Google Glass and adjusts the camera's servos.
To transmit the video, I used a free open source software project called mjpg-streamer. This guide by Dipto Pratyaksa does a great job explaining how to get it working on Raspberry Pi. I had a little trouble installing it, but this forum post came in handy.
I made a few adjustments to Dipto Pratyaksa's suggested command to start the streaming server:
mjpg_streamer -i "/usr/lib/input_uvc.so -d /dev/video0 -r 320x180 -f 5 -y" -o "/usr/lib/output_http.so -p 8090 -w /var/www"
This sets the resolution to exactly half of Google Glass's 640x360 display in each dimension. I also bumped the frame rate up to 5 frames per second. This combination of resolution and frame rate seemed to strike a good balance between reducing lag and maintaining quality. Your mileage may vary.
On the Python Wiki, there's a simple example of listening for UDP messages. Whenever Python receives a UDP message from Glass with the orientation data, it maps the X and Y values to the camera's two servos. The Adafruit 16-Channel 12-bit PWM/Servo Driver and its accompanying Python library made it easy to set the position of the camera servos.
Here's the code for translating the UDP messages from Glass into servo positions. I hope to add code to smooth out changes in servo position, so that the camera eases into each new position instead of jerking around when there's a fluke in the sensor readings. Along with the Python script, there's also a shell script, which executes mjpg_streamer (along with its many parameters) before launching the UDP Python script.