Oculus Sensor Hacking

Posted on 2018-10-31 08:40 in misc

What else to do with a Rift.

As already documented by a few people, the CV1's Sensors are actually UVC webcams with an IR filter.
On Linux, the UVC driver is loaded automatically for the device, and it can be used with regular V4L tools.
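Once the UVC driver has claimed the device, the Sensor shows up as a regular `/dev/video*` node. A minimal sketch for enumerating those nodes and their advertised names from sysfs (standard Linux V4L2 paths; on a machine with no cameras the list is simply empty, and your device numbering may differ):

```python
#!/usr/bin/env python3
"""List V4L2 capture nodes and their names, using only the stdlib."""
import glob
import os

def list_video_devices():
    """Return (device_path, card_name) pairs for /dev/video* nodes."""
    devices = []
    for dev in sorted(glob.glob("/dev/video*")):
        # The human-readable device name lives in sysfs next to the node.
        name_file = f"/sys/class/video4linux/{os.path.basename(dev)}/name"
        try:
            with open(name_file) as f:
                name = f.read().strip()
        except OSError:
            name = "(unknown)"
        devices.append((dev, name))
    return devices

if __name__ == "__main__":
    for dev, name in list_video_devices():
        print(f"{dev}: {name}")
```

On a host with the Sensor passed through, the Rift camera should appear in this list alongside any other webcams.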

For this experiment I used a Rift CV1 and a Kali VM in VirtualBox on a Windows host, passing it the USB device. (Kali was chosen for convenience: I had it ready for use, and it came with the relevant packages and an up-to-date kernel.)

Tests

  • a lighter, which I expected to generate a wide spectrum of IR light:

  • me with messy hair lit by a 20W halogen light bulb, quickly turning around and smiling (rare)

  • Oculus Touch

  • Oculus Rift

  • a LED flashlight against a surface: the output image is black.

  • a LED flashlight pointing directly at the Sensor: the output image is black, though I noticed a dim point on a previous try.

  • an IR LED from my RGB LED light bulb remote control: the LED is visible on the image but very dim.

Images are cropped outputs of the script below (the second image, where present, comes from guvcview).
From these tests, it seems fair to say that the Sensor mostly captures IR, and that it can act as a functional monochrome webcam provided you have an IR light source.

Another thing I was interested in testing was the light on top of the Sensors; it lit up briefly during capture and may be a reliable hardware capture indicator.

Code

#!/usr/bin/env python3
"""
basic usage: 
$ uvccapture -d/dev/video0 -x2560 -y720 -osnap.jpg
$ ./this.py snap.jpg output.jpg
"""

import argparse
from PIL import Image, ImageFilter
import numpy as np

parser = argparse.ArgumentParser(
    description=__doc__,
    formatter_class=argparse.RawDescriptionHelpFormatter,
)
parser.add_argument('input', help="input image path")
parser.add_argument('output', help="output image path")
parser.add_argument('-d', '--darken', type=int, default=65, metavar='N',
                    help="darken image (0-254, default: 65)")
args = parser.parse_args()

im = Image.open(args.input)

width, height = im.size

# Lower half is always black
# Upper half contains two similar images side by side
left_im = im.crop((0, 0, width // 2, height // 2))
right_im = im.crop((width // 2, 0, width, height // 2))

# Extract a single channel as a numpy array
# let's ignore red and blue, they look empty and noisy.
def get_array(i):
    (r, g, b) = i.split()
    return np.array(g)
left_arr = get_array(left_im)
right_arr = get_array(right_im)

# Merge sides (widen the dtype first so the uint8 addition can't overflow)
arr = (left_arr.astype(np.uint16) + right_arr) / 2

# Normalization/leveling
arr = np.interp(arr, (arr.min() + args.darken, arr.max()), (0, 255))
arr = np.rint(arr).astype('uint8')

# Back to an Image, then some smoothing and horizontal scaling
g = Image.fromarray(arr, 'L')

scaled_w = int(round(width / 4))
scaled_h = int(round(height / 2))

g = g.filter(ImageFilter.GaussianBlur(radius=1))
g = g.resize((scaled_w, scaled_h), Image.LANCZOS)

g.save(args.output)
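
The leveling step relies on how np.interp clamps out-of-range inputs: everything at or below arr.min() + darken maps to 0, and arr.max() maps to 255. A tiny worked example with made-up pixel values:

```python
import numpy as np

arr = np.array([10, 60, 110, 160], dtype=float)
darken = 50

# Breakpoints are (min + darken, max) = (60, 160).
# np.interp clamps inputs below the first breakpoint to the first output,
# so 10 and 60 both become 0; 110 lands halfway at 127.5; 160 maps to 255.
out = np.interp(arr, (arr.min() + darken, arr.max()), (0, 255))
```

This is why raising --darken crushes more of the dim background to pure black.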

Notes

I assumed the red and blue channels were noise only, but the red channel had a very dim signal. It may be more than noise.
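To check whether the red channel carries more than noise, per-channel statistics are a quick way to compare it against the green channel. A sketch (channel_stats is my own name, not part of the script above; the synthetic image just sanity-checks the helper):

```python
import numpy as np
from PIL import Image

def channel_stats(img):
    """Return (mean, std) per RGB channel of a PIL image."""
    arr = np.asarray(img.convert("RGB"), dtype=np.float64)
    return {ch: (float(arr[..., i].mean()), float(arr[..., i].std()))
            for i, ch in enumerate("RGB")}

# Synthetic sanity check: a mostly-green image with a faint red component,
# mimicking a strong green signal plus a dim red one.
data = np.zeros((4, 4, 3), dtype=np.uint8)
data[..., 0] = 5    # faint red "signal"
data[..., 1] = 200  # strong green
stats = channel_stats(Image.fromarray(data))
```

Run against a real capture (channel_stats(Image.open("snap.jpg"))), a red mean noticeably above the blue one would suggest the red channel is picking up a genuine signal rather than sensor noise.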

I'm pretty anxious about posting a public selfie so uhh please don't use it against me in any way. thank's't